diff --git a/how-tos/deploy-AGIC-with-Workload-Identity-using-helm/index.html b/how-tos/deploy-AGIC-with-Workload-Identity-using-helm/index.html
index d17dad198..b7f333a8e 100644
--- a/how-tos/deploy-AGIC-with-Workload-Identity-using-helm/index.html
+++ b/how-tos/deploy-AGIC-with-Workload-Identity-using-helm/index.html
@@ -102,27 +102,25 @@
  • How to deploy AGIC via Helm using Workload Identity
@@ -224,46 +222,42 @@

    How to deploy AGIC

This assumes you have an existing Application Gateway. If not, you can create one with the following command:

    bash az network application-gateway create -g myResourceGroup -n myApplicationGateway --sku Standard_v2 --public-ip-address myPublicIP --vnet-name myVnet --subnet mySubnet --priority 100
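The command above assumes the public IP, virtual network, and subnet already exist. If they do not, a minimal sketch to create them first (the resource names match the placeholders above; the address prefixes are illustrative assumptions):

    bash
    # Application Gateway v2 requires a Standard-SKU static public IP
    az network public-ip create -g myResourceGroup -n myPublicIP --sku Standard --allocation-method Static
    # Create the virtual network with a dedicated subnet for the Application Gateway
    az network vnet create -g myResourceGroup -n myVnet --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24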

    -

    1. Add the AGIC Helm repository

    -

    bash
    -helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
    -helm repo update

    -

    2. Set environment variables

    +

    1. Set environment variables

    bash
    export RESOURCE_GROUP="myResourceGroup"
    export APPLICATION_GATEWAY_NAME="myApplicationGateway"
    export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
    export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"

    -

    3. Create resource group, AKS cluster and identity

    +

    2. Create resource group, AKS cluster and identity

    bash
    az group create --name "${RESOURCE_GROUP}" --location eastus
    az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity
    az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}"
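As an optional sanity check, you can confirm that both features were enabled on the cluster (the JMESPath queries below assume the current az aks show output shape):

    bash
    az aks show -g "${RESOURCE_GROUP}" -n myAKSCluster --query "oidcIssuerProfile.enabled" -otsv
    az aks show -g "${RESOURCE_GROUP}" -n myAKSCluster --query "securityProfile.workloadIdentity.enabled" -otsv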

    -

    4. Export the oidcIssuerProfile.issuerUrl

    +

    3. Export the oidcIssuerProfile.issuerUrl

    bash export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)"

    -

    5. Create federated identity credential

    +

    4. Create federated identity credential

Note: the name of the service account created by the Helm installation is “ingress-azure”, and the following command assumes it will be deployed in the “default” namespace. Change the namespace in the next command if you deploy the AGIC-related Kubernetes resources in another namespace.

    bash az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:default:ingress-azure
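To verify the credential was created with the expected subject, you can list the federated credentials on the identity (an optional check, not part of the numbered steps):

    bash
    az identity federated-credential list --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --query "[].subject" -otsv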

    -

    6. Obtain the ClientID of the identity created before that is needed for the next step

    +

5. Obtain the ClientID of the identity created earlier; it is needed for the next step

    bash az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv
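Later steps reference this value as <identityClientID>; capturing it in a variable makes it easy to reuse (a convenience sketch; the variable name is my own, not part of the original steps):

    bash
    export IDENTITY_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"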

    -

    7. Export the Application Gateway resource ID

    +

    6. Export the Application Gateway resource ID

    bash export APP_GW_ID="$(az network application-gateway show --name "${APPLICATION_GATEWAY_NAME}" --resource-group "${RESOURCE_GROUP}" --query 'id' --output tsv)"

    -

    8. Add Contributor role for the identity over the Application Gateway

    +

    7. Add Contributor role for the identity over the Application Gateway

    bash az role assignment create --assignee <identityClientID> --scope "${APP_GW_ID}" --role Contributor

    -

    9. In helm-config.yaml specify

    +

    8. In helm-config.yaml specify

    yaml
    armAuth:
      type: workloadIdentity
      identityClientID: <identityClientID>
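armAuth is only one section of the Helm configuration. A minimal helm-config.yaml sketch, assuming the standard AGIC chart values (appgw.subscriptionId, appgw.resourceGroup, appgw.name, rbac.enabled) and written as a shell heredoc so the variables from step 1 are expanded:

    bash
    cat > helm-config.yaml <<EOF
    # Minimal sketch; see the chart's values for the full set of options
    appgw:
      subscriptionId: <subscriptionId>
      resourceGroup: ${RESOURCE_GROUP}
      name: ${APPLICATION_GATEWAY_NAME}
    armAuth:
      type: workloadIdentity
      identityClientID: <identityClientID>
    rbac:
      enabled: true
    EOF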

    -

10. Get the AKS cluster credentials

    +

    9. Get the AKS cluster credentials

    bash az aks get-credentials -g "${RESOURCE_GROUP}" -n myAKSCluster
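To confirm the credentials work before installing the chart (an optional sanity check):

    bash
    kubectl get nodes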

    -

    11. Install the helm chart

    +

    10. Install the helm chart

    bash
    helm install ingress-azure \
      -f helm-config.yaml \

diff --git a/how-tos/helm-upgrade/index.html b/how-tos/helm-upgrade/index.html
index 5f7be24c1..069eee2fa 100644
--- a/how-tos/helm-upgrade/index.html
+++ b/how-tos/helm-upgrade/index.html
@@ -204,42 +204,10 @@

    Upgrading AGIC using Helm

    NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded
-using a Helm repository hosted on Azure Storage.
-
-Before we begin the upgrade procedure, ensure that you have added the required repository:
-
+using a Helm repository hosted on MCR.

    Upgrade

-1. Refresh the AGIC Helm repository to get the latest release:
-
-   bash
-   helm repo update
-
-2. View available versions of the application-gateway-kubernetes-ingress chart:
-
-   bash
-   helm search repo -l application-gateway-kubernetes-ingress
-
-   Sample response:
-
-   bash
-   NAME                                                   CHART VERSION   APP VERSION   DESCRIPTION
-   application-gateway-kubernetes-ingress/ingress-azure   1.0.0           1.0.0         Use Azure Application Gateway as the ingress for an Azure...
-   application-gateway-kubernetes-ingress/ingress-azure   0.7.0-rc1       0.7.0-rc1     Use Azure Application Gateway as the ingress for an Azure...
-   application-gateway-kubernetes-ingress/ingress-azure   0.6.0           0.6.0         Use Azure Application Gateway as the ingress for an Azure...
-
-   Latest available version from the list above is: 0.7.0-rc1
-
+1. View the Helm charts currently installed:

   bash helm list
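With the chart now published to MCR, the upgrade itself is typically a helm upgrade against the OCI registry. A sketch, assuming the MCR chart location used by recent AGIC releases (verify the path and substitute a concrete chart version):

    bash
    helm upgrade ingress-azure oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
      --reuse-values \
      --version <version>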

diff --git a/index.html b/index.html
index c7724bf8a..f3aa8c8cb 100644
--- a/index.html
+++ b/index.html
@@ -263,5 +263,5 @@

      Reporting Issues

      diff --git a/search/search_index.json b/search/search_index.json index d855ed473..9372040fd 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Introduction NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service aka AKS cluster. As shown in the figure below, the ingress controller runs as a pod within the AKS cluster. It consumes Kubernetes Ingress Resources and converts them to an Azure Application Gateway configuration which allows the gateway to load-balance traffic to Kubernetes pods. Reporting Issues The best way to report an issue is to create a Github Issue for the project. Please include the following information when creating the issue: Subscription ID for AKS cluster. Subscription ID for Application Gateway. AKS cluster name/ARM Resource ID. Application Gateway name/ARM Resource ID. Ingress resource definition that might causing the problem. The Helm configuration used to install the ingress controller.","title":"Introduction"},{"location":"#introduction","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service aka AKS cluster. As shown in the figure below, the ingress controller runs as a pod within the AKS cluster. It consumes Kubernetes Ingress Resources and converts them to an Azure Application Gateway configuration which allows the gateway to load-balance traffic to Kubernetes pods.","title":"Introduction"},{"location":"#reporting-issues","text":"The best way to report an issue is to create a Github Issue for the project. Please include the following information when creating the issue: Subscription ID for AKS cluster. Subscription ID for Application Gateway. AKS cluster name/ARM Resource ID. Application Gateway name/ARM Resource ID. Ingress resource definition that might causing the problem. The Helm configuration used to install the ingress controller.","title":"Reporting Issues"},{"location":"annotations/","text":"Annotations NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. A list of corresponding translations from AGIC to Application Gateway for Containers may be found here . Introductions The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP setting, backend pools and listeners derived from an ingress resource. List of supported annotations For an Ingress resource to be observed by AGIC it must be annotated with kubernetes.io/ingress.class: azure/application-gateway . Only then AGIC will work with the Ingress resource in question. 
Annotation Key Value Type Default Value Allowed Values Supported since appgw.ingress.kubernetes.io/backend-path-prefix string nil 1.3.0 appgw.ingress.kubernetes.io/backend-hostname string nil 1.2.0 appgw.ingress.kubernetes.io/backend-protocol string http http , https 1.0.0 appgw.ingress.kubernetes.io/ssl-redirect bool false 1.0.0 appgw.ingress.kubernetes.io/appgw-ssl-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-trusted-root-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-ssl-profile string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/connection-draining bool false 1.0.0 appgw.ingress.kubernetes.io/connection-draining-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/cookie-based-affinity bool false 1.0.0 appgw.ingress.kubernetes.io/request-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/override-frontend-port string 1.3.0 appgw.ingress.kubernetes.io/use-private-ip bool false 1.0.0 appgw.ingress.kubernetes.io/waf-policy-for-path string 1.3.0 appgw.ingress.kubernetes.io/health-probe-hostname string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-port int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-path string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-status-codes []string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-interval int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-timeout int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set string nil 1.5.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/hostname-extension string nil 1.4.0 Override Frontend Port The annotation allows to configure frontend listener to use different ports other than 80/443 for http/https. If the port is within the App Gw authorized range (1 - 64999), this listener will be created on this specific port. If an invalid port or no port is set in the annotation, the configuration will fallback on default 80 or 443. Usage yaml appgw.ingress.kubernetes.io/override-frontend-port: \"port\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-overridefrontendport namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/override-frontend-port: \"8080\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact External request will need to target http://somehost:8080 instead of http://somehost . Backend Path Prefix This annotation allows the backend path specified in an ingress resource to be re-written with prefix specified in this annotation. This allows users to expose services whose endpoints are different than endpoint names used to expose a service in an ingress resource. Usage yaml appgw.ingress.kubernetes.io/backend-path-prefix: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact In the example above we have defined an ingress resource named go-server-ingress-bkprefix with an annotation appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" . 
The annotation tells application gateway to create an HTTP setting which will have a path prefix override for the path /hello to /test/ . NOTE: In the above example we have only one rule defined. However, the annotations is applicable to the entire ingress resource so if a user had defined multiple rules the backend path prefix would be setup for each of the paths specified. Thus, if a user wants different rules with different path prefixes (even for the same service) they would need to define different ingress resources. If your incoming path is /hello/test/health but your backend requires /health you will want to ensure you have /* on your path yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/\" spec: rules: - http: paths: - path: /hello/test/* pathType: Prefix backend: service: name: store-service Backend Hostname This annotations allows us to specify the host name that Application Gateway should use while talking to the Pods. Usage yaml appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Backend Protocol This annotation allows us to specify the protocol that Application Gateway should use while talking to the Pods. Supported Protocols: http , https Note 1) Make sure to not use port 80 with HTTPS and port 443 with HTTP on the Pods. Usage yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 443 pathType: Exact SSL Redirect Application Gateway can be configured to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, Kubernetes Ingress controller will create a routing rule with a redirection configuration and apply the changes to your App Gateway. The redirect created will be HTTP 301 Moved Permanently . Usage yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-redirect namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/ssl-redirect: \"true\" spec: tls: - hosts: - www.contoso.com secretName: testsecret-tls rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGw SSL Certificate The SSL certificate can be configured to Application Gateway either from a local PFX certificate file or a reference to a Azure Key Vault unversioned secret Id. When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. 
appgw-ssl-certificate annotation can also be used together with ssl-redirect annotation in case of SSL redirect. Please refer to appgw-ssl-certificate feature for more details. Note * Annotation \"appgw-ssl-certificate\" will be ignored when TLS Spec is defined in ingress at the same time. * If a user wants different certs with different hosts(multi tls certificate termination), they would need to define different ingress resources. Use Azure CLI to install certificate to Application Gateway Configure from a local PFX certificate file bash az network application-gateway ssl-cert create -g $resgp --gateway-name $appgwName -n mysslcert --cert-file \\path\\to\\cert\\file --cert-password Abc123 Configure from a reference to a Key Vault unversioned secret id bash az keyvault certificate create --vault-name $vaultName -n cert1 -p \"$(az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n cert --vault-name $vaultName --query \"sid\" -o tsv) unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url az network application-gateway ssl-cert create -n mysslcert --gateway-name $appgwName --resource-group $resgp --key-vault-secret-id $unversionedSecretId To use PowerShell, please refer to Configure Key Vault - PowerShell . Usage yaml appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGW Trusted Root Certificate Users now can configure their own root certificates to Application Gateway to be trusted via AGIC. The annotaton appgw-trusted-root-certificate shall be used together with annotation backend-protocol to indicate end-to-end ssl encryption, multiple root certificates, separated by comma, if specified, e.g. \"name-of-my-root-cert1,name-of-my-root-certificate2\". 
Use Azure CLI to install your root certificate to Application Gateway Create your public root certificate for testing bash openssl ecparam -out test.key -name prime256v1 -genkey openssl req -new -sha256 -key test.key -out test.csr openssl x509 -req -sha256 -days 365 -in test.csr -signkey test.key -out test.crt Configure your root certificate to Application Gateway ```bash Rename test.crt to test.cer mv test.crt test.cer Configure the root certificate to your Application Gateway az network application-gateway root-cert create --cert-file test.cer --gateway-name $appgwName --name name-of-my-root-cert1 --resource-group $resgp ``` Repeat the steps above if you want to configure multiple trusted root certificates Usage yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGw Ssl Profile Note: This annotation is supported since 1.6.0-rc1. Users can configure a ssl profile on the Application Gateway per listener . When the annotation is present with a profile name and the profile is pre-installed in the Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. Connection Draining connection-draining : This annotation allows to specify whether to enable connection draining. connection-draining-timeout : This annotation allows to specify a timeout after which Application Gateway will terminate the requests to the draining backend endpoint. Usage yaml appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-drain namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Cookie Based Affinity This annotation allows to specify whether to enable cookie based affinity. Usage yaml appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Distinct cookie name In addition to cookie-based-affinity, you can set cookie-based-affinity-distinct-name: \"true\" to ensure a different affinity cookie is set per backend. 
Usage yaml appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\" spec: rules: - http: paths: - path: /affinity1/ pathType: Exact backend: service: name: affinity-service port: number: 80 - path: /affinity2/ pathType: Exact backend: service: name: affinity-service port: number: 80 Request Timeout This annotation allows to specify the request timeout in seconds after which Application Gateway will fail the request if response is not received. Usage yaml appgw.ingress.kubernetes.io/request-timeout: \"20\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/request-timeout: \"20\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Use Private IP This annotation allows us to specify whether to expose this endpoint on Private IP of Application Gateway. Note 1) App Gateway doesn't support multiple IPs on the same port (example: 80/443). Ingress with annotation appgw.ingress.kubernetes.io/use-private-ip: \"false\" and another with appgw.ingress.kubernetes.io/use-private-ip: \"true\" on HTTP will cause AGIC to fail in updating the App Gateway. 2) For App Gateway that doesn't have a private IP, Ingresses with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will reflected in the controller logs and ingress events for those ingresses with NoPrivateIP warning. Usage yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/use-private-ip: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Azure Waf Policy For Path This annotation allows you to attach an already created WAF policy to the list paths for a host within a Kubernetes Ingress resource being annotated. The WAF policy must be created in advance. 
Example of using Azure Portal to create a policy: Once the policy is created, copy the URI of the policy from the address bar of Azure Portal: The URI would have the following format: bash /subscriptions//resourceGroups//providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/ Note 1) Waf policy will only be applied to a listener if ingress rule path is not set or set to \"/\" or \"/*\" Usage yaml appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\" Example The example below will apply the WAF policy yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ad-server-ingress namespace: commerce annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\" spec: rules: - http: paths: - path: /ad-server backend: service: name: ad-server port: number: 80 pathType: Exact - path: /auth backend: service: name: auth-server port: number: 80 pathType: Exact Note that the WAF policy will be applied to both /ad-server and /auth URLs. Health Probe Hostname This annotation allows specifically define a target host to be used for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined, host used in liveliness probe definition is also used as a target host for health probe. However if annotation appgw.ingress.kubernetes.io/health-probe-hostname is defined it overrides it with its own value. Usage yaml appgw.ingress.kubernetes.io/health-probe-hostname: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Health Probe Port Health probe port annotation allows specifically define target TCP port to be used for AGW health probe. By default, if backend container running service has liveliness probe of type HTTP GET defined, port used in liveliness probe definition is also used as a port for health probe. Annotation appgw.ingress.kubernetes.io/health-probe-port has precedence over such default value. Usage yaml appgw.ingress.kubernetes.io/health-probe-port: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"443\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" appgw.ingress.kubernetes.io/backend-protocol: https spec: tls: - secretName: \"my-backend-host.custom.app-ssl-certificate\" rules: - http: paths: - path: / backend: service: name: store-service port: number: 443 pathType: Exact Health Probe Path This annotation allows specifically define target URI path to be used for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined , path defined in liveliness probe definition is also used as a path for health probe. 
However annotation appgw.ingress.kubernetes.io/health-probe-path overrides it with its own value. Usage yaml appgw.ingress.kubernetes.io/health-probe-path: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"8080\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 Health Probe Status Codes This annotation defines healthy status codes returned by the health probe. The values are comma separated list of individual status codes or ranges defined as - . Usage yaml appgw.ingress.kubernetes.io/health-probe-status-codes: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-status-codes: \"200-399, 401\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Interval This annotation sets AGW health probe interval. By default, if backend container running service with liveliness probe of type HTTP GET defined, interval in liveliness probe definition is also used as a interval for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-interval overrides it with its value. Usage yaml appgw.ingress.kubernetes.io/health-probe-interval: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-interval: \"20\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Timeout This annotation allows specifically define timeout for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined, timeout defined in liveliness probe definition is also used for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-timeout overrides it with its value. Usage yaml appgw.ingress.kubernetes.io/health-probe-timeout: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-timeout: \"15\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Unhealthy Threshold This annotation allows specifically define target unhealthy thresold for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined , threshold defined in liveliness probe definition is also used for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold overrides it with its value. 
Usage yaml appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: \"5\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Rewrite Rule Set This annotation allows to assign an existing rewrite rule set to the corresponding request routing rule(s). Rewrite rule set is managed via Azure Portal / CLI / PS. Usage yaml appgw.ingress.kubernetes.io/rewrite-rule-set: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set: add-custom-response-header spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 Rewrite Rule Set Custom Resource Note: This annotation is supported since 1.6.0-rc1. This annotation allows to assign a header/URL rewrite rule set created via the AzureApplicationGatewayRewrite CR to be associated to all rules in an ingress resource. AzureApplicationGatewayRewrite CR should be present in the same namespace as the ingress. Usage yaml appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: Example ```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ``` Hostname Extension This annotation allows to append additional hostnames to the host specified in the ingress resource. This applies to all the rules in the ingress resource. Usage yaml appgw.ingress.kubernetes.io/hostname-extension: \"hostname1, hostname2\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: store-app-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/hostname-extension: \"prod-store.app.com\" spec: rules: - host: \"store.app.com\" http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080","title":"Annotations"},{"location":"annotations/#annotations","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. 
A list of corresponding translations from AGIC to Application Gateway for Containers may be found here .","title":"Annotations"},{"location":"annotations/#introductions","text":"The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP setting, backend pools and listeners derived from an ingress resource.","title":"Introductions"},{"location":"annotations/#list-of-supported-annotations","text":"For an Ingress resource to be observed by AGIC it must be annotated with kubernetes.io/ingress.class: azure/application-gateway . Only then AGIC will work with the Ingress resource in question. Annotation Key Value Type Default Value Allowed Values Supported since appgw.ingress.kubernetes.io/backend-path-prefix string nil 1.3.0 appgw.ingress.kubernetes.io/backend-hostname string nil 1.2.0 appgw.ingress.kubernetes.io/backend-protocol string http http , https 1.0.0 appgw.ingress.kubernetes.io/ssl-redirect bool false 1.0.0 appgw.ingress.kubernetes.io/appgw-ssl-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-trusted-root-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-ssl-profile string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/connection-draining bool false 1.0.0 appgw.ingress.kubernetes.io/connection-draining-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/cookie-based-affinity bool false 1.0.0 appgw.ingress.kubernetes.io/request-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/override-frontend-port string 1.3.0 appgw.ingress.kubernetes.io/use-private-ip bool false 1.0.0 appgw.ingress.kubernetes.io/waf-policy-for-path string 1.3.0 appgw.ingress.kubernetes.io/health-probe-hostname string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-port int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-path string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-status-codes []string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-interval int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-timeout int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set string nil 1.5.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/hostname-extension string nil 1.4.0","title":"List of supported annotations"},{"location":"annotations/#override-frontend-port","text":"The annotation allows to configure frontend listener to use different ports other than 80/443 for http/https. If the port is within the App Gw authorized range (1 - 64999), this listener will be created on this specific port. 
If an invalid port or no port is set in the annotation, the configuration will fallback on default 80 or 443.","title":"Override Frontend Port"},{"location":"annotations/#usage","text":"yaml appgw.ingress.kubernetes.io/override-frontend-port: \"port\"","title":"Usage"},{"location":"annotations/#example","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-overridefrontendport namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/override-frontend-port: \"8080\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact External request will need to target http://somehost:8080 instead of http://somehost .","title":"Example"},{"location":"annotations/#backend-path-prefix","text":"This annotation allows the backend path specified in an ingress resource to be re-written with prefix specified in this annotation. This allows users to expose services whose endpoints are different than endpoint names used to expose a service in an ingress resource.","title":"Backend Path Prefix"},{"location":"annotations/#usage_1","text":"yaml appgw.ingress.kubernetes.io/backend-path-prefix: ","title":"Usage"},{"location":"annotations/#example_1","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact In the example above we have defined an ingress resource named go-server-ingress-bkprefix with an annotation appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" . The annotation tells application gateway to create an HTTP setting which will have a path prefix override for the path /hello to /test/ . NOTE: In the above example we have only one rule defined. However, the annotations is applicable to the entire ingress resource so if a user had defined multiple rules the backend path prefix would be setup for each of the paths specified. Thus, if a user wants different rules with different path prefixes (even for the same service) they would need to define different ingress resources. 
If your incoming path is /hello/test/health but your backend requires /health you will want to ensure you have /* on your path yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/\" spec: rules: - http: paths: - path: /hello/test/* pathType: Prefix backend: service: name: store-service","title":"Example"},{"location":"annotations/#backend-hostname","text":"This annotations allows us to specify the host name that Application Gateway should use while talking to the Pods.","title":"Backend Hostname"},{"location":"annotations/#usage_2","text":"yaml appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\"","title":"Usage"},{"location":"annotations/#example_2","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#backend-protocol","text":"This annotation allows us to specify the protocol that Application Gateway should use while talking to the Pods. Supported Protocols: http , https Note 1) Make sure to not use port 80 with HTTPS and port 443 with HTTP on the Pods.","title":"Backend Protocol"},{"location":"annotations/#usage_3","text":"yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\"","title":"Usage"},{"location":"annotations/#example_3","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 443 pathType: Exact","title":"Example"},{"location":"annotations/#ssl-redirect","text":"Application Gateway can be configured to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, Kubernetes Ingress controller will create a routing rule with a redirection configuration and apply the changes to your App Gateway. The redirect created will be HTTP 301 Moved Permanently .","title":"SSL Redirect"},{"location":"annotations/#usage_4","text":"yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\"","title":"Usage"},{"location":"annotations/#example_4","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-redirect namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/ssl-redirect: \"true\" spec: tls: - hosts: - www.contoso.com secretName: testsecret-tls rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80","title":"Example"},{"location":"annotations/#appgw-ssl-certificate","text":"The SSL certificate can be configured to Application Gateway either from a local PFX certificate file or a reference to a Azure Key Vault unversioned secret Id. 
When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. appgw-ssl-certificate annotation can also be used together with ssl-redirect annotation in case of SSL redirect. Please refer to appgw-ssl-certificate feature for more details. Note * Annotation \"appgw-ssl-certificate\" will be ignored when TLS Spec is defined in ingress at the same time. * If a user wants different certs with different hosts(multi tls certificate termination), they would need to define different ingress resources.","title":"AppGw SSL Certificate"},{"location":"annotations/#use-azure-cli-to-install-certificate-to-application-gateway","text":"Configure from a local PFX certificate file bash az network application-gateway ssl-cert create -g $resgp --gateway-name $appgwName -n mysslcert --cert-file \\path\\to\\cert\\file --cert-password Abc123 Configure from a reference to a Key Vault unversioned secret id bash az keyvault certificate create --vault-name $vaultName -n cert1 -p \"$(az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n cert --vault-name $vaultName --query \"sid\" -o tsv) unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url az network application-gateway ssl-cert create -n mysslcert --gateway-name $appgwName --resource-group $resgp --key-vault-secret-id $unversionedSecretId To use PowerShell, please refer to Configure Key Vault - PowerShell .","title":"Use Azure CLI to install certificate to Application Gateway"},{"location":"annotations/#usage_5","text":"yaml appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\"","title":"Usage"},{"location":"annotations/#example_5","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80","title":"Example"},{"location":"annotations/#appgw-trusted-root-certificate","text":"Users now can configure their own root certificates to Application Gateway to be trusted via AGIC. The annotaton appgw-trusted-root-certificate shall be used together with annotation backend-protocol to indicate end-to-end ssl encryption, multiple root certificates, separated by comma, if specified, e.g. 
\"name-of-my-root-cert1,name-of-my-root-certificate2\".","title":"AppGW Trusted Root Certificate"},{"location":"annotations/#use-azure-cli-to-install-your-root-certificate-to-application-gateway","text":"Create your public root certificate for testing bash openssl ecparam -out test.key -name prime256v1 -genkey openssl req -new -sha256 -key test.key -out test.csr openssl x509 -req -sha256 -days 365 -in test.csr -signkey test.key -out test.crt Configure your root certificate to Application Gateway ```bash","title":"Use Azure CLI to install your root certificate to Application Gateway"},{"location":"annotations/#rename-testcrt-to-testcer","text":"mv test.crt test.cer","title":"Rename test.crt to test.cer"},{"location":"annotations/#configure-the-root-certificate-to-your-application-gateway","text":"az network application-gateway root-cert create --cert-file test.cer --gateway-name $appgwName --name name-of-my-root-cert1 --resource-group $resgp ``` Repeat the steps above if you want to configure multiple trusted root certificates","title":"Configure the root certificate to your Application Gateway"},{"location":"annotations/#usage_6","text":"yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\"","title":"Usage"},{"location":"annotations/#example_6","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80","title":"Example"},{"location":"annotations/#appgw-ssl-profile","text":"Note: This annotation is supported since 1.6.0-rc1. Users can configure a ssl profile on the Application Gateway per listener . When the annotation is present with a profile name and the profile is pre-installed in the Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway.","title":"AppGw Ssl Profile"},{"location":"annotations/#connection-draining","text":"connection-draining : This annotation allows to specify whether to enable connection draining. 
connection-draining-timeout : This annotation allows to specify a timeout after which Application Gateway will terminate the requests to the draining backend endpoint.","title":"Connection Draining"},{"location":"annotations/#usage_7","text":"yaml appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\"","title":"Usage"},{"location":"annotations/#example_7","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-drain namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#cookie-based-affinity","text":"This annotation allows to specify whether to enable cookie based affinity.","title":"Cookie Based Affinity"},{"location":"annotations/#usage_8","text":"yaml appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\"","title":"Usage"},{"location":"annotations/#example_8","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#distinct-cookie-name","text":"In addition to cookie-based-affinity, you can set cookie-based-affinity-distinct-name: \"true\" to ensure a different affinity cookie is set per backend.","title":"Distinct cookie name"},{"location":"annotations/#usage_9","text":"yaml appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\"","title":"Usage"},{"location":"annotations/#example_9","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\" spec: rules: - http: paths: - path: /affinity1/ pathType: Exact backend: service: name: affinity-service port: number: 80 - path: /affinity2/ pathType: Exact backend: service: name: affinity-service port: number: 80","title":"Example"},{"location":"annotations/#request-timeout","text":"This annotation allows to specify the request timeout in seconds after which Application Gateway will fail the request if response is not received.","title":"Request Timeout"},{"location":"annotations/#usage_10","text":"yaml appgw.ingress.kubernetes.io/request-timeout: \"20\"","title":"Usage"},{"location":"annotations/#example_10","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/request-timeout: \"20\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#use-private-ip","text":"This annotation allows us to specify whether to expose this endpoint on Private IP of Application Gateway. 
Note 1) App Gateway doesn't support multiple IPs on the same port (example: 80/443). Ingress with annotation appgw.ingress.kubernetes.io/use-private-ip: \"false\" and another with appgw.ingress.kubernetes.io/use-private-ip: \"true\" on HTTP will cause AGIC to fail in updating the App Gateway. 2) For App Gateway that doesn't have a private IP, Ingresses with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will reflected in the controller logs and ingress events for those ingresses with NoPrivateIP warning.","title":"Use Private IP"},{"location":"annotations/#usage_11","text":"yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\"","title":"Usage"},{"location":"annotations/#example_11","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/use-private-ip: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#azure-waf-policy-for-path","text":"This annotation allows you to attach an already created WAF policy to the list paths for a host within a Kubernetes Ingress resource being annotated. The WAF policy must be created in advance. Example of using Azure Portal to create a policy: Once the policy is created, copy the URI of the policy from the address bar of Azure Portal: The URI would have the following format: bash /subscriptions//resourceGroups//providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/ Note 1) Waf policy will only be applied to a listener if ingress rule path is not set or set to \"/\" or \"/*\"","title":"Azure Waf Policy For Path"},{"location":"annotations/#usage_12","text":"yaml appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\"","title":"Usage"},{"location":"annotations/#example_12","text":"The example below will apply the WAF policy yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ad-server-ingress namespace: commerce annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\" spec: rules: - http: paths: - path: /ad-server backend: service: name: ad-server port: number: 80 pathType: Exact - path: /auth backend: service: name: auth-server port: number: 80 pathType: Exact Note that the WAF policy will be applied to both /ad-server and /auth URLs.","title":"Example"},{"location":"annotations/#health-probe-hostname","text":"This annotation allows specifically define a target host to be used for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined, host used in liveliness probe definition is also used as a target host for health probe. 
","title":"Health Probe Hostname"},{"location":"annotations/#usage_13","text":"yaml appgw.ingress.kubernetes.io/health-probe-hostname: ","title":"Usage"},{"location":"annotations/#example_13","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact","title":"Example"},{"location":"annotations/#health-probe-port","text":"The health probe port annotation allows you to specifically define the target TCP port to be used for the AGW health probe. By default, if the backend container runs a service with a liveness probe of type HTTP GET defined, the port used in the liveness probe definition is also used as the port for the health probe. The annotation appgw.ingress.kubernetes.io/health-probe-port takes precedence over this default value.","title":"Health Probe Port"},{"location":"annotations/#usage_14","text":"yaml appgw.ingress.kubernetes.io/health-probe-port: ","title":"Usage"},{"location":"annotations/#example_14","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"443\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" appgw.ingress.kubernetes.io/backend-protocol: https spec: tls: - secretName: \"my-backend-host.custom.app-ssl-certificate\" rules: - http: paths: - path: / backend: service: name: store-service port: number: 443 pathType: Exact","title":"Example"},{"location":"annotations/#health-probe-path","text":"This annotation allows you to specifically define the target URI path to be used for the AGW health probe. By default, if the backend container runs a service with a liveness probe of type HTTP GET defined, the path defined in the liveness probe definition is also used as the path for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-path overrides it with its own value.","title":"Health Probe Path"},{"location":"annotations/#usage_15","text":"yaml appgw.ingress.kubernetes.io/health-probe-path: ","title":"Usage"},{"location":"annotations/#example_15","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"8080\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080","title":"Example"},{"location":"annotations/#health-probe-status-codes","text":"This annotation defines the healthy status codes returned by the health probe. The value is a comma-separated list of individual status codes or ranges written as a lower and upper bound separated by - (for example, 200-399).","title":"Health Probe Status Codes"},{"location":"annotations/#usage_16","text":"yaml appgw.ingress.kubernetes.io/health-probe-status-codes: ","title":"Usage"},{"location":"annotations/#example_16","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-status-codes: \"200-399, 401\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact","title":"Example"},{"location":"annotations/#health-probe-interval","text":"This annotation sets the AGW health probe interval. By default, if the backend container runs a service with a liveness probe of type HTTP GET defined, the interval in the liveness probe definition is also used as the interval for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-interval overrides it with its value.","title":"Health Probe Interval"},{"location":"annotations/#usage_17","text":"yaml appgw.ingress.kubernetes.io/health-probe-interval: ","title":"Usage"},{"location":"annotations/#example_17","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-interval: \"20\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact","title":"Example"},{"location":"annotations/#health-probe-timeout","text":"This annotation allows you to specifically define the timeout for the AGW health probe. By default, if the backend container runs a service with a liveness probe of type HTTP GET defined, the timeout defined in the liveness probe definition is also used for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-timeout overrides it with its value.","title":"Health Probe Timeout"},{"location":"annotations/#usage_18","text":"yaml appgw.ingress.kubernetes.io/health-probe-timeout: ","title":"Usage"},{"location":"annotations/#example_18","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-timeout: \"15\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact","title":"Example"},{"location":"annotations/#health-probe-unhealthy-threshold","text":"This annotation allows you to specifically define the target unhealthy threshold for the AGW health probe. By default, if the backend container runs a service with a liveness probe of type HTTP GET defined, the threshold defined in the liveness probe definition is also used for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold overrides it with its value.
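Taken together, the probe-related defaults described in these sections can be visualised in one hedged sketch; the mapping comments restate the defaults documented above, and the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: store-service-pod                        # hypothetical pod backing store-service
spec:
  containers:
  - name: store-app
    image: example.azurecr.io/store-app:latest   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz             # -> default AGW probe path
        port: 8080                 # -> default AGW probe port
      periodSeconds: 20            # -> default AGW probe interval
      timeoutSeconds: 15           # -> default AGW probe timeout
      failureThreshold: 5          # -> default AGW probe unhealthy threshold
```

Each of these defaults can be overridden individually with the corresponding health-probe-* annotation.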
","title":"Health Probe Unhealthy Threshold"},{"location":"annotations/#usage_19","text":"yaml appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: ","title":"Usage"},{"location":"annotations/#example_19","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: \"5\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact","title":"Example"},{"location":"annotations/#rewrite-rule-set","text":"This annotation allows you to assign an existing rewrite rule set to the corresponding request routing rule(s). The rewrite rule set is managed via Azure Portal / CLI / PS.","title":"Rewrite Rule Set"},{"location":"annotations/#usage_20","text":"yaml appgw.ingress.kubernetes.io/rewrite-rule-set: ","title":"Usage"},{"location":"annotations/#example_20","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set: add-custom-response-header spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080","title":"Example"},{"location":"annotations/#rewrite-rule-set-custom-resource","text":"Note: This annotation is supported since 1.6.0-rc1. This annotation allows you to assign a header/URL rewrite rule set, created via the AzureApplicationGatewayRewrite CR, to all rules in an ingress resource. The AzureApplicationGatewayRewrite CR should be present in the same namespace as the ingress.","title":"Rewrite Rule Set Custom Resource"},{"location":"annotations/#usage_21","text":"yaml appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: ","title":"Usage"},{"location":"annotations/#example_21","text":"```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Example"},{"location":"annotations/#hostname-extension","text":"This annotation allows you to append additional hostnames to the host specified in the ingress resource. This applies to all the rules in the ingress resource.","title":"Hostname Extension"},{"location":"annotations/#usage_22","text":"yaml appgw.ingress.kubernetes.io/hostname-extension: \"hostname1, hostname2\"","title":"Usage"},{"location":"annotations/#example_22","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: store-app-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/hostname-extension: \"prod-store.app.com\" spec: rules: - host: \"store.app.com\" http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080","title":"Example"},{"location":"faq/","text":"Frequently Asked Questions: [WIP] NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. What is an Ingress Controller Can a single ingress controller instance manage multiple Application Gateways What is an Ingress Controller Kubernetes allows creation of Deployment and Service resources to expose a group of pods internally in the cluster. To expose the same service externally, an Ingress resource is defined, which provides load balancing, SSL termination and name-based virtual hosting. To satisfy this Ingress resource, an Ingress Controller is required, which listens for any changes to Ingress resources and configures the load balancer policies. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster. Can a single ingress controller instance manage multiple Application Gateways Currently, one instance of the Ingress Controller can only be associated with one Application Gateway.","title":"Frequently Asked Questions: [WIP]"},{"location":"faq/#frequrently-asked-questions-wip","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. What is an Ingress Controller Can a single ingress controller instance manage multiple Application Gateways","title":"Frequently Asked Questions: [WIP]"},{"location":"faq/#what-is-an-ingress-controller","text":"Kubernetes allows creation of Deployment and Service resources to expose a group of pods internally in the cluster. To expose the same service externally, an Ingress resource is defined, which provides load balancing, SSL termination and name-based virtual hosting. To satisfy this Ingress resource, an Ingress Controller is required, which listens for any changes to Ingress resources and configures the load balancer policies. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster.","title":"What is an Ingress Controller"},{"location":"faq/#can-single-ingress-controller-instance-manage-multiple-application-gateway","text":"Currently, one instance of the Ingress Controller can only be associated with one Application Gateway.","title":"Can a single ingress controller instance manage multiple Application Gateways"},{"location":"helm-values-documenation/","text":"Helm Values Configuration Options NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes.
Please consider leveraging Application Gateway for Containers for your next deployment. Available options Field Default Description verbosityLevel 3 Sets the verbosity level of the AGIC logging infrastructure. See Logging Levels for possible values. reconcilePeriodSeconds Enables periodic reconciliation to check whether the latest gateway configuration is different from what it has cached. Range: 30 - 300 seconds. Disabled by default. appgw.applicationGatewayID Resource Id of the Application Gateway. Example: applicationgatewayd0f0 appgw.subscriptionId Default is agent node pool's subscriptionId derived from CloudProvider config The Azure Subscription ID in which App Gateway resides. Example: a123b234-a3b4-557d-b2df-a0bc12de1234 appgw.resourceGroup Default is agent node pool's resource group derived from CloudProvider config Name of the Azure Resource Group in which App Gateway was created. Example: app-gw-resource-group appgw.name Name of the Application Gateway. Example: applicationgatewayd0f0 appgw.environment AZUREPUBLICCLOUD Specify which cloud environment. Possible values: AZURECHINACLOUD , AZUREGERMANCLOUD , AZUREPUBLICCLOUD , AZUREUSGOVERNMENTCLOUD appgw.shared false This boolean flag should be defaulted to false . Set to true should you need a Shared App Gateway . appgw.subResourceNamePrefix No prefix if empty Prefix that should be used in the naming of the Application Gateway's sub-resources kubernetes.watchNamespace Watches all if empty Specify the namespace which AGIC should watch. This could be a single string value, or a comma-separated list of namespaces. kubernetes.securityContext runAsUser: 0 Specify the pod security context to use with AGIC deployment. By default, AGIC will assume root permission. Jump to Run without root for more information. kubernetes.containerSecurityContext {} Specify the container security context to use with AGIC deployment. kubernetes.podAnnotations {} Specify custom annotations for the AGIC pod kubernetes.resources {} Specify resource quota for the AGIC pod kubernetes.nodeSelector {} Scheduling node selector kubernetes.tolerations [] Scheduling tolerations kubernetes.affinity {} Scheduling affinity kubernetes.volumes.extraVolumes {} Specify additional volumes for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem , as AGIC requires a writeable /tmp directory. kubernetes.volumes.extraVolumeMounts {} Specify additional volume mounts for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem , as AGIC requires a writeable /tmp directory. kubernetes.ingressClass azure/application-gateway Specify a custom ingress class which will be used to match kubernetes.io/ingress.class in the ingress manifest rbac.enabled false Specify true if the kubernetes cluster is rbac enabled armAuth.type could be aadPodIdentity or servicePrincipal armAuth.identityResourceID Resource ID of the Azure Managed Identity armAuth.identityClientId The Client ID of the Identity. See below for more information on Identity armAuth.secretJSON Only needed when Service Principal Secret type is chosen (when armAuth.type has been set to servicePrincipal ) nodeSelector {} (Legacy: use kubernetes.nodeSelector instead) Scheduling node selector Example ```yaml appgw: applicationGatewayID: environment: \"AZUREUSGOVERNMENTCLOUD\" # default: AZUREPUBLICCLOUD armAuth: type: aadPodIdentity identityResourceID: identityClientID: kubernetes: nodeSelector: {} tolerations: [] affinity: {} rbac: enabled: false ```
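Beyond the example above, a hedged sketch of a helm-config.yaml combining a few of the optional fields from the table may help; the namespace list and the placeholder resource ID are assumptions for illustration:

```yaml
verbosityLevel: 3
reconcilePeriodSeconds: 30                          # optional; 30-300 seconds, disabled when omitted
appgw:
  applicationGatewayID: <app-gateway-resource-id>   # placeholder, not a real ID
kubernetes:
  watchNamespace: "shop,payments"                   # assumed namespaces; omit to watch all
  ingressClass: azure/application-gateway
rbac:
  enabled: false
```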
Run without root By default, AGIC will assume root permission, which allows it to read the cloud-provider config and get metadata information about the cluster. If you want AGIC to run without root access, then make sure that AGIC is installed with at least the following information to run successfully: ```yaml appgw: applicationGatewayID: # OR subscriptionId: resourceGroup: name: kubernetes: securityContext: runAsUser: 1000 # appgw-ingress-user ``` Note: AGIC also uses the cloud-provider config to get the Node's Virtual Network Name / Subscription and Route table name. If AGIC is not able to reach this information, it will skip assigning the Node's route table to the Application Gateway's subnet, which is required when using the kubenet network plugin. To work around this, the assignment can be performed manually. Run with read-only root filesystem To run AGIC with readOnlyRootFilesystem , the following additional configuration items are required: yaml kubernetes: containerSecurityContext: readOnlyRootFilesystem: true volumes: extraVolumes: - name: tmp emptyDir: {} extraVolumeMounts: - name: tmp mountPath: /tmp Note: AGIC needs to be able to write to the /tmp directory.","title":"Helm Values Configuration Options"},{"location":"helm-values-documenation/#helm-values-configuration-options","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Helm Values Configuration Options"},{"location":"helm-values-documenation/#available-options","text":"Field Default Description verbosityLevel 3 Sets the verbosity level of the AGIC logging infrastructure. See Logging Levels for possible values. reconcilePeriodSeconds Enables periodic reconciliation to check whether the latest gateway configuration is different from what it has cached. Range: 30 - 300 seconds. Disabled by default. appgw.applicationGatewayID Resource Id of the Application Gateway. Example: applicationgatewayd0f0 appgw.subscriptionId Default is agent node pool's subscriptionId derived from CloudProvider config The Azure Subscription ID in which App Gateway resides. Example: a123b234-a3b4-557d-b2df-a0bc12de1234 appgw.resourceGroup Default is agent node pool's resource group derived from CloudProvider config Name of the Azure Resource Group in which App Gateway was created. Example: app-gw-resource-group appgw.name Name of the Application Gateway. Example: applicationgatewayd0f0 appgw.environment AZUREPUBLICCLOUD Specify which cloud environment. Possible values: AZURECHINACLOUD , AZUREGERMANCLOUD , AZUREPUBLICCLOUD , AZUREUSGOVERNMENTCLOUD appgw.shared false This boolean flag should be defaulted to false . Set to true should you need a Shared App Gateway . appgw.subResourceNamePrefix No prefix if empty Prefix that should be used in the naming of the Application Gateway's sub-resources kubernetes.watchNamespace Watches all if empty Specify the namespace which AGIC should watch. This could be a single string value, or a comma-separated list of namespaces. kubernetes.securityContext runAsUser: 0 Specify the pod security context to use with AGIC deployment. By default, AGIC will assume root permission. Jump to Run without root for more information. kubernetes.containerSecurityContext {} Specify the container security context to use with AGIC deployment. kubernetes.podAnnotations {} Specify custom annotations for the AGIC pod kubernetes.resources {} Specify resource quota for the AGIC pod kubernetes.nodeSelector {} Scheduling node selector kubernetes.tolerations [] Scheduling tolerations kubernetes.affinity {} Scheduling affinity kubernetes.volumes.extraVolumes {} Specify additional volumes for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem , as AGIC requires a writeable /tmp directory. kubernetes.volumes.extraVolumeMounts {} Specify additional volume mounts for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem , as AGIC requires a writeable /tmp directory. kubernetes.ingressClass azure/application-gateway Specify a custom ingress class which will be used to match kubernetes.io/ingress.class in the ingress manifest rbac.enabled false Specify true if the kubernetes cluster is rbac enabled armAuth.type could be aadPodIdentity or servicePrincipal armAuth.identityResourceID Resource ID of the Azure Managed Identity armAuth.identityClientId The Client ID of the Identity. See below for more information on Identity armAuth.secretJSON Only needed when Service Principal Secret type is chosen (when armAuth.type has been set to servicePrincipal ) nodeSelector {} (Legacy: use kubernetes.nodeSelector instead) Scheduling node selector","title":"Available options"},{"location":"helm-values-documenation/#example","text":"```yaml appgw: applicationGatewayID: environment: \"AZUREUSGOVERNMENTCLOUD\" # default: AZUREPUBLICCLOUD armAuth: type: aadPodIdentity identityResourceID: identityClientID: kubernetes: nodeSelector: {} tolerations: [] affinity: {} rbac: enabled: false ```","title":"Example"},{"location":"helm-values-documenation/#run-without-root","text":"By default, AGIC will assume root permission, which allows it to read the cloud-provider config and get metadata information about the cluster. If you want AGIC to run without root access, then make sure that AGIC is installed with at least the following information to run successfully: ```yaml appgw: applicationGatewayID: # OR subscriptionId: resourceGroup: name: kubernetes: securityContext: runAsUser: 1000 # appgw-ingress-user ``` Note: AGIC also uses the cloud-provider config to get the Node's Virtual Network Name / Subscription and Route table name. If AGIC is not able to reach this information, it will skip assigning the Node's route table to the Application Gateway's subnet, which is required when using the kubenet network plugin. To work around this, the assignment can be performed manually.","title":"Run without root"},{"location":"helm-values-documenation/#run-with-read-only-root-filesystem","text":"To run AGIC with readOnlyRootFilesystem , the following additional configuration items are required: yaml kubernetes: containerSecurityContext: readOnlyRootFilesystem: true volumes: extraVolumes: - name: tmp emptyDir: {} extraVolumeMounts: - name: tmp mountPath: /tmp Note: AGIC needs to be able to write to the /tmp directory.","title":"Run with read-only root filesystem"},{"location":"ingress-v1/","text":"Ingress V1 Support NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document describes AGIC's implementation of specific Ingress resource fields and features. As the Ingress specification has evolved between v1beta1 and v1, any differences between versions are highlighted to ensure clarity for AGIC users. Note: Ingress/V1 is fully supported with AGIC >= 1.5.1 Kubernetes Versions For Kubernetes version 1.19+, the API server translates any Ingress v1beta1 resources to Ingress v1 and AGIC watches Ingress v1 resources. IngressClass and IngressClass Name AGIC now supports using the ingressClassName property along with kubernetes.io/ingress.class: azure/application-gateway to indicate that a specific ingress should be processed by AGIC. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: ingressClassName: azure-application-gateway ... Ingress Rules Wildcard Hostnames AGIC supports wildcard hostnames as documented by the upstream API as well as precise hostnames. Wildcard hostnames are limited to the whole first DNS label of the hostname, e.g. *.foo.com is valid but *foo.com , foo*.com , foo.*.com are not. * is also not a valid hostname. PathType property is now mandatory AGIC now supports PathType in Ingress V1. Exact path matches will now result in matching requests to the given path exactly. Prefix path match type will now result in matching requests with a \"segment prefix\" rather than a \"string prefix\" according to the spec (e.g. the prefix /foo/bar will match requests with paths /foo/bar , /foo/bar/ , and /foo/bar/baz , but not /foo/barbaz ). ImplementationSpecific path match type preserves the old path behaviour of AGIC < 1.5.1 and allows for backwards compatibility. Example Ingress YAML with different pathTypes defined: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /foo # this would stay /foo since pathType is Exact pathType: Exact - path: /bar # this would be converted to /bar* since pathType is Prefix pathType: Prefix - path: /baz # this would stay /baz since pathType is ImplementationSpecific pathType: ImplementationSpecific - path: /buzz* # this would stay /buzz* since pathType is ImplementationSpecific pathType: ImplementationSpecific Behavioural Change Notice Starting with AGIC 1.5.1, AGIC will now strip * from the path if PathType: Exact AGIC will now append * to the path if PathType: Prefix Before AGIC 1.5.1, the PathType property was ignored and path matching was performed using Application Gateway wildcard path patterns . Paths prefixed with * were treated as Prefix match and without were treated as Exact match. To continue using the old behaviour, use the PathType: ImplementationSpecific match type in AGIC 1.5.1+ to ensure backwards compatibility.
Here is a table illustrating the corner cases where the behaviour has changed: AGIC Version < 1.5.1 < 1.5.1 >= 1.5.1 >= 1.5.1 PathType Exact Prefix Exact Prefix Path /foo* /foo /foo* /foo Applied Path /foo* /foo /foo (* is stripped) /foo* (* is appended) Example YAML illustrating the corner cases above: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /foo* # this would be converted to /foo since pathType is Exact pathType: Exact - path: /bar # this would be converted to /bar* since pathType is Prefix pathType: Prefix - path: /baz* # this would stay /baz* since pathType is Prefix pathType: Prefix Mitigation In case you are affected by this behaviour change in mapping the paths, you can modify your ingress rules to use PathType: ImplementationSpecific so as to retain the old behaviour. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /path* # this would stay /path* since pathType is ImplementationSpecific pathType: ImplementationSpecific","title":"Ingress V1 Support"},{"location":"ingress-v1/#ingress-v1-support","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document describes AGIC's implementation of specific Ingress resource fields and features. As the Ingress specification has evolved between v1beta1 and v1, any differences between versions are highlighted to ensure clarity for AGIC users. Note: Ingress/V1 is fully supported with AGIC >= 1.5.1","title":"Ingress V1 Support"},{"location":"ingress-v1/#kubernetes-versions","text":"For Kubernetes version 1.19+, the API server translates any Ingress v1beta1 resources to Ingress v1 and AGIC watches Ingress v1 resources.","title":"Kubernetes Versions"},{"location":"ingress-v1/#ingressclass-and-ingressclass-name","text":"AGIC now supports using the ingressClassName property along with kubernetes.io/ingress.class: azure/application-gateway to indicate that a specific ingress should be processed by AGIC. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: ingressClassName: azure-application-gateway ...","title":"IngressClass and IngressClass Name"},{"location":"ingress-v1/#ingress-rules","text":"","title":"Ingress Rules"},{"location":"ingress-v1/#wildcard-hostnames","text":"AGIC supports wildcard hostnames as documented by the upstream API as well as precise hostnames. Wildcard hostnames are limited to the whole first DNS label of the hostname, e.g. *.foo.com is valid but *foo.com , foo*.com , foo.*.com are not. * is also not a valid hostname.
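For instance, a hedged sketch of an ingress using a valid wildcard host (the names and backend service are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-host-ingress          # hypothetical
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: "*.foo.com"                  # wildcard covers the whole first DNS label, e.g. app.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: store-service        # assumed backend service
            port:
              number: 80
```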
","title":"Wildcard Hostnames"},{"location":"ingress-v1/#pathtype-property-is-now-mandatory","text":"AGIC now supports PathType in Ingress V1. Exact path matches will now result in matching requests to the given path exactly. Prefix path match type will now result in matching requests with a \"segment prefix\" rather than a \"string prefix\" according to the spec (e.g. the prefix /foo/bar will match requests with paths /foo/bar , /foo/bar/ , and /foo/bar/baz , but not /foo/barbaz ). ImplementationSpecific path match type preserves the old path behaviour of AGIC < 1.5.1 and allows for backwards compatibility. Example Ingress YAML with different pathTypes defined: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /foo # this would stay /foo since pathType is Exact pathType: Exact - path: /bar # this would be converted to /bar* since pathType is Prefix pathType: Prefix - path: /baz # this would stay /baz since pathType is ImplementationSpecific pathType: ImplementationSpecific - path: /buzz* # this would stay /buzz* since pathType is ImplementationSpecific pathType: ImplementationSpecific","title":"PathType property is now mandatory"},{"location":"ingress-v1/#behavioural-change-notice","text":"Starting with AGIC 1.5.1, AGIC will now strip * from the path if PathType: Exact AGIC will now append * to the path if PathType: Prefix Before AGIC 1.5.1, the PathType property was ignored and path matching was performed using Application Gateway wildcard path patterns . Paths prefixed with * were treated as Prefix match and without were treated as Exact match. To continue using the old behaviour, use the PathType: ImplementationSpecific match type in AGIC 1.5.1+ to ensure backwards compatibility. Here is a table illustrating the corner cases where the behaviour has changed: AGIC Version < 1.5.1 < 1.5.1 >= 1.5.1 >= 1.5.1 PathType Exact Prefix Exact Prefix Path /foo* /foo /foo* /foo Applied Path /foo* /foo /foo (* is stripped) /foo* (* is appended) Example YAML illustrating the corner cases above: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /foo* # this would be converted to /foo since pathType is Exact pathType: Exact - path: /bar # this would be converted to /bar* since pathType is Prefix pathType: Prefix - path: /baz* # this would stay /baz* since pathType is Prefix pathType: Prefix","title":"Behavioural Change Notice"},{"location":"ingress-v1/#mitigation","text":"In case you are affected by this behaviour change in mapping the paths, you can modify your ingress rules to use PathType: ImplementationSpecific so as to retain the old behaviour. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: shopping-app spec: rules: - http: paths: - path: /path* # this would stay /path* since pathType is ImplementationSpecific pathType: ImplementationSpecific","title":"Mitigation"},{"location":"logging-levels/","text":"Logging Levels NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. AGIC has 3 logging levels. Level 1 is the default and shows a minimal number of log lines. Level 5, on the other hand, would display all logs, including sanitized contents of the config applied to ARM. The Kubernetes community has established 9 levels of logging for the kubectl tool. In this repository we are utilizing 3 of these, with similar semantics: Verbosity Description 1 Default log level; shows startup details, warnings and errors 3 Extended information about events and changes; lists of created objects 5 Logs marshaled objects; shows sanitized JSON config applied to ARM The verbosity levels are adjustable via the verbosityLevel variable in the helm-config.yaml file, as shown in the sketch below.
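As a minimal sketch (only the relevant line is shown; the rest of your helm-config.yaml stays as it is):

```yaml
# helm-config.yaml fragment
verbosityLevel: 5   # log marshaled objects and the sanitized JSON config applied to ARM
```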
Increase the verbosity level to 5 to get the JSON config dispatched to ARM : add verbosityLevel: 5 on a line by itself in helm-config.yaml and re-install get logs with kubectl logs -n ","title":"Logging Levels"},{"location":"logging-levels/#logging-levels","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. AGIC has 3 logging levels. Level 1 is the default and shows a minimal number of log lines. Level 5, on the other hand, would display all logs, including sanitized contents of the config applied to ARM. The Kubernetes community has established 9 levels of logging for the kubectl tool. In this repository we are utilizing 3 of these, with similar semantics: Verbosity Description 1 Default log level; shows startup details, warnings and errors 3 Extended information about events and changes; lists of created objects 5 Logs marshaled objects; shows sanitized JSON config applied to ARM The verbosity levels are adjustable via the verbosityLevel variable in the helm-config.yaml file. Increase the verbosity level to 5 to get the JSON config dispatched to ARM : add verbosityLevel: 5 on a line by itself in helm-config.yaml and re-install get logs with kubectl logs -n ","title":"Logging Levels"},{"location":"developers/build/","text":"Building the controller Running it locally Pre-requisite Obtain Azure Credentials Deploy Application Gateway and AKS Using startup script Visual Studio Code (F5 debugging) Run on a cluster using a Dev Release CMake options Running it locally This section outlines the environment variables and files necessary to successfully compile and run the Go binary, then connect it to an Azure Kubernetes Service . Pre-requisite go >= 1.13 OpenSSL Obtain Azure Credentials In order to run the Go binary locally and control a remote AKS server, you need Azure credentials. These will be stored in a JSON file in your home directory. Follow these instructions to create the $HOME/.azure/azureAuth.json file. The file is generated via: bash az ad sp create-for-rbac --sdk-auth > $HOME/.azure/azureAuth.json The file will contain a JSON blob with the following shape: json { \"clientId\": \"...\", \"clientSecret\": \"...\", \"subscriptionId\": \"\", \"tenantId\": \"...\", \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\", \"resourceManagerEndpointUrl\": \"https://management.azure.com/\", \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\", \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\", \"galleryEndpointUrl\": \"https://gallery.azure.com/\", \"managementEndpointUrl\": \"https://management.core.windows.net/\" } Deploy Application Gateway and AKS To deploy a fresh setup, please follow the steps for template deployment in the greenfield documentation. Using startup script In the scripts directory you will find start.sh . This script builds and runs the ingress controller on your local machine and connects to a remote AKS cluster. A .env file in the root of the repository is required. Steps to run the ingress controller: Get your cluster's credentials az aks get-credentials --name --resource-group Configure: cp .env.example .env and modify the environment variables in .env to match your config. Here is an example: ``` #!/bin/bash export AZURE_AUTH_LOCATION=\"$HOME/.azure/azureAuth.json\" export APPGW_RESOURCE_ID=\" \" export KUBE_CONFIG_FILE=\"$HOME/.kube/config\" export APPGW_VERBOSITY_LEVEL=\"9\" ``` Run: ./scripts/start.sh Cleanup: delete /home/vsonline/go/src/github.com/Azure/application-gateway-kubernetes-ingress/bin Compiling... Build SUCCEEDED ERROR: logging before flag.Parse: I0723 18:37:31.980903 6757 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL Version: 1.2.0; Commit: ef716c14; Date: 2020-07-23-18:37T+0000 ERROR: logging before flag.Parse: I0723 18:37:31.989656 6766 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL ERROR: logging before flag.Parse: I0723 18:37:31.989720 6766 main.go:78] Unable to load cloud provider config ''. Error: Reading Az Context file \"\" failed: open : no such file or directory E0723 18:37:31.999445 6766 context.go:210] Error fetching AGIC Pod (This may happen if AGIC is running in a test environment). Error: resource name may not be empty I0723 18:37:31.999466 6766 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. ... Visual Studio Code (F5 debugging) You can also set up VS Code to run the project with F5 and use breakpoint debugging. For this, you need to set up your launch.json file within the .vscode folder. json { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Debug\", \"type\": \"go\", \"request\": \"launch\", \"mode\": \"debug\", \"program\": \"${workspaceFolder}/cmd/appgw-ingress\", \"env\": { \"APPGW_VERBOSITY_LEVEL\": \"9\", \"AZURE_AUTH_LOCATION\": \"/home//.azure/azureAuth.json\", \"APPGW_RESOURCE_ID\": \"\" }, \"args\": [ \"--kubeconfig=/home//.kube/config\", \"--in-cluster=false\" ] } ] } Create a Dev Release To test your changes on a cluster, you can use the Dev Release pipeline. Just select the build version from the drop-down list which matches the build in your PR or against your commit in the main branch. Dev Release generates a new docker image and helm package for your changes. Once the pipeline completes, use helm to install the release on your AKS cluster. ```bash add the staging helm repository helm repo add staging https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update list the available versions and pick the latest version helm search repo staging -l --devel NAME CHART VERSION APP VERSION DESCRIPTION staging/ingress-azure 10486 10486 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10465 10465 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10256 10256 Use Azure Application Gateway as the ingress fo... install/upgrade helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 10486 ``` You can also find the version by opening your build in the Merge Builds pipeline and looking for the buildid . Use this version when installing on the cluster after the Dev Release completes. CMake options This is a CMake-based project.
Build targets include: ALL_BUILD (default target) builds the appgw-ingress and dockerize targets devenv builds a docker image with a configured development environment vendor installs dependencies using go mod in a docker container with the image from the devenv target appgw-ingress builds the binary for this controller in a docker container with the image from the devenv target dockerize builds a docker image with the binary from the appgw-ingress target dockerpush pushes the docker image to a container registry with the prefix defined in a CMake variable To run the CMake targets: mkdir build && cd build creates and enters a build directory cmake .. generates project configuration in the build directory cmake --build . to build the default target, or cmake --build . --target to specify a target to run from above","title":"Building the controller"},{"location":"developers/build/#building-the-controller","text":"Running it locally Pre-requisite Obtain Azure Credentials Deploy Application Gateway and AKS Using startup script Visual Studio Code (F5 debugging) Run on a cluster using a Dev Release CMake options","title":"Building the controller"},{"location":"developers/build/#running-it-locally","text":"This section outlines the environment variables and files necessary to successfully compile and run the Go binary, then connect it to an Azure Kubernetes Service .","title":"Running it locally"},{"location":"developers/build/#pre-requisite","text":"go >= 1.13 OpenSSL","title":"Pre-requisite"},{"location":"developers/build/#obtain-azure-credentials","text":"In order to run the Go binary locally and control a remote AKS server, you need Azure credentials. These will be stored in a JSON file in your home directory. Follow these instructions to create the $HOME/.azure/azureAuth.json file. The file is generated via: bash az ad sp create-for-rbac --sdk-auth > $HOME/.azure/azureAuth.json The file will contain a JSON blob with the following shape: json { \"clientId\": \"...\", \"clientSecret\": \"...\", \"subscriptionId\": \"\", \"tenantId\": \"...\", \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\", \"resourceManagerEndpointUrl\": \"https://management.azure.com/\", \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\", \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\", \"galleryEndpointUrl\": \"https://gallery.azure.com/\", \"managementEndpointUrl\": \"https://management.core.windows.net/\" }","title":"Obtain Azure Credentials"},{"location":"developers/build/#deploy-application-gateway-and-aks","text":"To deploy a fresh setup, please follow the steps for template deployment in the greenfield documentation.","title":"Deploy Application Gateway and AKS"},{"location":"developers/build/#using-startup-script","text":"In the scripts directory you will find start.sh . This script builds and runs the ingress controller on your local machine and connects to a remote AKS cluster. A .env file in the root of the repository is required. Steps to run the ingress controller: Get your cluster's credentials az aks get-credentials --name --resource-group Configure: cp .env.example .env and modify the environment variables in .env to match your config. Here is an example: ```","title":"Using startup script"},{"location":"developers/build/#binbash","text":"export AZURE_AUTH_LOCATION=\"$HOME/.azure/azureAuth.json\" export APPGW_RESOURCE_ID=\" \" export KUBE_CONFIG_FILE=\"$HOME/.kube/config\" export APPGW_VERBOSITY_LEVEL=\"9\" ``` Run: ./scripts/start.sh Cleanup: delete /home/vsonline/go/src/github.com/Azure/application-gateway-kubernetes-ingress/bin Compiling... Build SUCCEEDED ERROR: logging before flag.Parse: I0723 18:37:31.980903 6757 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL Version: 1.2.0; Commit: ef716c14; Date: 2020-07-23-18:37T+0000 ERROR: logging before flag.Parse: I0723 18:37:31.989656 6766 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL ERROR: logging before flag.Parse: I0723 18:37:31.989720 6766 main.go:78] Unable to load cloud provider config ''. Error: Reading Az Context file \"\" failed: open : no such file or directory E0723 18:37:31.999445 6766 context.go:210] Error fetching AGIC Pod (This may happen if AGIC is running in a test environment). Error: resource name may not be empty I0723 18:37:31.999466 6766 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. ...","title":"!/bin/bash"},{"location":"developers/build/#visual-studio-code-f5-debugging","text":"You can also set up VS Code to run the project with F5 and use breakpoint debugging. For this, you need to set up your launch.json file within the .vscode folder. json { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Debug\", \"type\": \"go\", \"request\": \"launch\", \"mode\": \"debug\", \"program\": \"${workspaceFolder}/cmd/appgw-ingress\", \"env\": { \"APPGW_VERBOSITY_LEVEL\": \"9\", \"AZURE_AUTH_LOCATION\": \"/home//.azure/azureAuth.json\", \"APPGW_RESOURCE_ID\": \"\" }, \"args\": [ \"--kubeconfig=/home//.kube/config\", \"--in-cluster=false\" ] } ] }","title":"Visual Studio Code (F5 debugging)"},{"location":"developers/build/#create-a-dev-release","text":"To test your changes on a cluster, you can use the Dev Release pipeline. Just select the build version from the drop-down list which matches the build in your PR or against your commit in the main branch. Dev Release generates a new docker image and helm package for your changes. Once the pipeline completes, use helm to install the release on your AKS cluster. ```bash","title":"Create a Dev Release"},{"location":"developers/build/#add-the-staging-helm-repository","text":"helm repo add staging https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update","title":"add the staging helm repository"},{"location":"developers/build/#list-the-available-versions-and-pick-the-latest-version","text":"helm search repo staging -l --devel NAME CHART VERSION APP VERSION DESCRIPTION staging/ingress-azure 10486 10486 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10465 10465 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10256 10256 Use Azure Application Gateway as the ingress fo...","title":"list the available versions and pick the latest version"},{"location":"developers/build/#installupgrade","text":"helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 10486 ``` You can also find the version by opening your build in the Merge Builds pipeline and looking for the buildid . Use this version when installing on the cluster after the Dev Release completes.","title":"install/upgrade"},{"location":"developers/build/#cmake-options","text":"This is a CMake-based project. Build targets include: ALL_BUILD (default target) builds the appgw-ingress and dockerize targets devenv builds a docker image with a configured development environment vendor installs dependencies using go mod in a docker container with the image from the devenv target appgw-ingress builds the binary for this controller in a docker container with the image from the devenv target dockerize builds a docker image with the binary from the appgw-ingress target dockerpush pushes the docker image to a container registry with the prefix defined in a CMake variable To run the CMake targets: mkdir build && cd build creates and enters a build directory cmake .. generates project configuration in the build directory cmake --build . to build the default target, or cmake --build . --target to specify a target to run from above","title":"CMake options"},{"location":"developers/contribute/","text":"Contribution Guidelines This is a Golang project. You can find the build instructions of the project in the Developer Guide . This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com . When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.","title":"Contribution Guidelines"},{"location":"developers/contribute/#contribution-guidelines","text":"This is a Golang project. You can find the build instructions of the project in the Developer Guide . This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com . When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.","title":"Contribution Guidelines"},{"location":"developers/design/","text":"Application Gateway Ingress Controller Design (WIP) Document Purpose This document is the detailed design and architecture of the Application Gateway Ingress Controller (AGIC) being built in this repository. Overview Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet.
AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an App Gateway, so that selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer\u2019s AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to App Gateway specific configuration and applied to the Azure Resource Manager (ARM) . High-level architecture The AGIC is composed of the following three sub-components: K8S Context and Informers - handles events from the cluster and alerts the worker Worker - handles events coming from the informer and performs relevant actions Application Gateway Config Builder - generates the new gateway configuration Components Let's take a look at each component: 1. K8s Context and Informers When any change is applied to the k8s cluster by the user, AGIC needs to listen to these changes in order to update the corresponding configuration on the Application Gateway. We use kubernetes informers for this purpose, which are the standard way of watching resources on the K8S API server. When AGIC starts, it sets up informers for watching the following resources: Ingress : This is the top-level resource that AGIC monitors. It provides information about the layer-7 routing rules that need to be configured on the App Gateway. Service : Service provides an abstraction over the pods to expose as a network service. AGIC uses the service as a logical grouping of pods to extract the IP addresses through the endpoints object created automatically along with the Service. Endpoints : Endpoints provides information about Pod IP Addresses behind a service and is used to populate AppGW's backend pool. Pod : Pod provides information about liveness and readiness probes, which are translated to health probes in App Gateway. AGIC only supports HTTP-based liveness and readiness probes. Secret : This resource is for extracting SSL certificates when referenced in an ingress. This also triggers a change when the secret is updated. CRDs : AGIC has some custom resources for supporting specific features like prohibited target for sharing a gateway. When starting the informers, AGIC also provides event handlers for each of the create/update/delete operations on the resource. This handler is responsible for enqueuing an event . 2. Worker Worker is responsible for processing the events and performing updates. When Worker's Run function is called, it starts as a separate thread and waits on the Work channel. When an informer adds an event to the channel, the worker dequeues the event and checks whether the event is noise or is relevant. Events that are coming from unwatched namespaces and unreferenced pods/endpoints are skipped to reduce the churn. If the last worker loop was run less than 1 second ago, it sleeps for the remainder and wakes up to space out the updates. After this, the worker starts draining the rest of the events and calling the ProcessEvent function to process the event. The ProcessEvent function does the following: Checks if the Application Gateway is in the Running or Starting operational state. Updates all ingress resources with the public/private IP address of the App Gateway. Generates a new config and updates the Application Gateway. 3. Application Gateway Config Builder This component is responsible for using the information in the local kubernetes cache and generating the corresponding Application Gateway configuration as an output. The worker invokes Build on this component, which then generates the various gateway sub-resources, starting from leaf sub-resources like probes , http settings up to the request routing rules . go func (c *appGwConfigBuilder) Build(cbCtx *ConfigBuilderContext) (*n.ApplicationGateway, error) { ... err := c.HealthProbesCollection(cbCtx) ... err = c.BackendHTTPSettingsCollection(cbCtx) ... err = c.BackendAddressPools(cbCtx) ... // generates SSL certificate, frontend ports and http listeners err = c.Listeners(cbCtx) ... // generates URL path maps and request routing rules err = c.RequestRoutingRules(cbCtx) ... return &c.appGw, nil }","title":"Application Gateway Ingress Controller Design (WIP)"},{"location":"developers/design/#application-gateway-ingress-controller-design-wip","text":"","title":"Application Gateway Ingress Controller Design (WIP)"},{"location":"developers/design/#document-purpose","text":"This document is the detailed design and architecture of the Application Gateway Ingress Controller (AGIC) being built in this repository.","title":"Document Purpose"},{"location":"developers/design/#overview","text":"Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an App Gateway, so that selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer\u2019s AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to App Gateway specific configuration and applied to the Azure Resource Manager (ARM) .","title":"Overview"},{"location":"developers/design/#high-level-architecture","text":"The AGIC is composed of the following three sub-components: K8S Context and Informers - handles events from the cluster and alerts the worker Worker - handles events coming from the informer and performs relevant actions Application Gateway Config Builder - generates the new gateway configuration","title":"High-level architecture"},{"location":"developers/design/#components","text":"Let's take a look at each component:","title":"Components"},{"location":"developers/design/#1-k8s-context-and-informers","text":"When any change is applied to the k8s cluster by the user, AGIC needs to listen to these changes in order to update the corresponding configuration on the Application Gateway. We use kubernetes informers for this purpose, which are the standard way of watching resources on the K8S API server. When AGIC starts, it sets up informers for watching the following resources: Ingress : This is the top-level resource that AGIC monitors. It provides information about the layer-7 routing rules that need to be configured on the App Gateway. Service : Service provides an abstraction over the pods to expose as a network service. AGIC uses the service as a logical grouping of pods to extract the IP addresses through the endpoints object created automatically along with the Service. Endpoints : Endpoints provides information about Pod IP Addresses behind a service and is used to populate AppGW's backend pool. Pod : Pod provides information about liveness and readiness probes, which are translated to health probes in App Gateway. AGIC only supports HTTP-based liveness and readiness probes. Secret : This resource is for extracting SSL certificates when referenced in an ingress. This also triggers a change when the secret is updated. CRDs : AGIC has some custom resources for supporting specific features like prohibited target for sharing a gateway. When starting the informers, AGIC also provides event handlers for each of the create/update/delete operations on the resource. This handler is responsible for enqueuing an event .","title":"1. K8s Context and Informers"},{"location":"developers/design/#2-worker","text":"Worker is responsible for processing the events and performing updates. When Worker's Run function is called, it starts as a separate thread and waits on the Work channel. When an informer adds an event to the channel, the worker dequeues the event and checks whether the event is noise or is relevant. Events that are coming from unwatched namespaces and unreferenced pods/endpoints are skipped to reduce the churn. If the last worker loop was run less than 1 second ago, it sleeps for the remainder and wakes up to space out the updates. After this, the worker starts draining the rest of the events and calling the ProcessEvent function to process the event. The ProcessEvent function does the following: Checks if the Application Gateway is in the Running or Starting operational state. Updates all ingress resources with the public/private IP address of the App Gateway. Generates a new config and updates the Application Gateway.","title":"2. Worker"},{"location":"developers/design/#3-application-gateway-config-builder","text":"This component is responsible for using the information in the local kubernetes cache and generating the corresponding Application Gateway configuration as an output. The worker invokes Build on this component, which then generates the various gateway sub-resources, starting from leaf sub-resources like probes , http settings up to the request routing rules . go func (c *appGwConfigBuilder) Build(cbCtx *ConfigBuilderContext) (*n.ApplicationGateway, error) { ... err := c.HealthProbesCollection(cbCtx) ... err = c.BackendHTTPSettingsCollection(cbCtx) ... err = c.BackendAddressPools(cbCtx) ... // generates SSL certificate, frontend ports and http listeners err = c.Listeners(cbCtx) ... // generates URL path maps and request routing rules err = c.RequestRoutingRules(cbCtx) ... return &c.appGw, nil }","title":"3. Application Gateway Config Builder"},{"location":"developers/developer-guideline/","text":"Application Gateway Ingress Controller Development Guide Welcome to the Application Gateway Ingress Controller development guide!
Table of contents Understanding the architecture Building and running the controller Installing the latest nightly build Running tests Contribution Guidelines","title":"Application Gateway Ingress Controller Development Guide"},{"location":"developers/developer-guideline/#application-gateway-ingress-controller-development-guide","text":"Welcome to the Application Gateway Ingress Controller development guide!","title":"Application Gateway Ingress Controller Development Guide"},{"location":"developers/developer-guideline/#table-of-contents","text":"Understanding the architecture Building and running the controller Installing the latest nightly build Running tests Contribution Guidelines","title":"Table of contents"},{"location":"developers/nightly/","text":"Install the latest nightly build To install the latest nightly release, Add the nightly helm repository bash helm repo add agic-nightly https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update Check the available version Latest version : or You can look up the version in the repo using helm. bash helm search repo agic-nightly Install using the same helm command by using the staging repository. bash helm install ingress-azure \\ -f helm-config.yaml \\ agic-nightly/ingress-azure \\ --version ","title":"Install the latest nightly build"},{"location":"developers/nightly/#install-the-latest-nightly-build","text":"To install the latest nightly release, Add the nightly helm repository bash helm repo add agic-nightly https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update Check the available version Latest version : or You can look up the version in the repo using helm. bash helm search repo agic-nightly Install using the same helm command by using the staging repository. bash helm install ingress-azure \\ -f helm-config.yaml \\ agic-nightly/ingress-azure \\ --version ","title":"Install the latest nightly build"},{"location":"developers/test/","text":"Testing the controller Unit Tests E2E Tests Testing Tips Unit Tests As is the convention in go, unit tests for the .go file you want to test live in the same folder and end with _test.go . We use the ginkgo / gomega testing framework for writing the tests. To execute the tests, use bash go test -v -tags unittest ./... E2E Tests E2E tests are going to test the specific scenarios with a real AKS and App Gateway setup with AGIC installed on it. E2E tests are automatically run every day 3 AM in the morning using an E2E pipeline . If you have cluster with AGIC installed, you can run e2e tests simply by: bash go test -v -tags e2e ./... You can also execute the run-e2e.sh which is used in the E2E pipeline to invoke the tests. This script will install AGIC with the version provided. ```bash export version=\" \" export applicationGatewayId=\" \" export identityResourceId=\" \" export identityClientId=\" \" ./scripts/e2e/run-e2e.sh ``` Testing Tips If you just want to run a specific set of tests, then an easy way is add F (Focus) to the It , Context , Describe directive in the test. 
For example: ```go FContext(\"Test obtaining a single certificate for an existing host\", func() { cb := newConfigBuilderFixture(nil) ingress := tests.NewIngressFixture() hostnameSecretIDMap := cb.newHostToSecretMap(ingress) actualSecret, actualSecretID := cb.getCertificate(ingress, host1, hostnameSecretIDMap) It(\"should have generated the expected secret\", func() { Expect(*actualSecret).To(Equal(\"eHl6\")) }) It(\"should have generated the correct secretID struct\", func() { Expect(*actualSecretID).To(Equal(expectedSecret)) }) }) ```","title":"Testing the controller"},{"location":"developers/test/#testing-the-controller","text":"Unit Tests E2E Tests Testing Tips","title":"Testing the controller"},{"location":"developers/test/#unit-tests","text":"As is the convention in go, unit tests for the .go file you want to test live in the same folder and end with _test.go . We use the ginkgo / gomega testing framework for writing the tests. To execute the tests, use bash go test -v -tags unittest ./...","title":"Unit Tests"},{"location":"developers/test/#e2e-tests","text":"E2E tests exercise specific scenarios against a real AKS and App Gateway setup with AGIC installed on it. E2E tests are automatically run every day at 3 AM using an E2E pipeline . If you have a cluster with AGIC installed, you can run e2e tests simply by: bash go test -v -tags e2e ./... You can also execute the run-e2e.sh script, which is used in the E2E pipeline to invoke the tests. This script will install AGIC with the version provided. ```bash export version=\"<version>\" export applicationGatewayId=\"<applicationGatewayId>\" export identityResourceId=\"<identityResourceId>\" export identityClientId=\"<identityClientId>\" ./scripts/e2e/run-e2e.sh ```","title":"E2E Tests"},{"location":"developers/test/#testing-tips","text":"If you just want to run a specific set of tests, then an easy way is to add F (Focus) to the It , Context , or Describe directive in the test. For example: ```go FContext(\"Test obtaining a single certificate for an existing host\", func() { cb := newConfigBuilderFixture(nil) ingress := tests.NewIngressFixture() hostnameSecretIDMap := cb.newHostToSecretMap(ingress) actualSecret, actualSecretID := cb.getCertificate(ingress, host1, hostnameSecretIDMap) It(\"should have generated the expected secret\", func() { Expect(*actualSecret).To(Equal(\"eHl6\")) }) It(\"should have generated the correct secretID struct\", func() { Expect(*actualSecretID).To(Equal(expectedSecret)) }) }) ```","title":"Testing Tips"},{"location":"features/agic-reconcile/","text":"Reconcile scenario (BETA) NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When an Application Gateway is deployed through an ARM template, a requirement is that the gateway configuration should contain a probe, listener, rule, backend pool and backend http setting. When such a template is re-deployed with minor changes (for example to WAF rules) on a Gateway that is being controlled by AGIC, all the AGIC-written rules are removed. Since such a change on Application Gateway doesn\u2019t trigger any events on AGIC, AGIC doesn\u2019t reconcile the gateway back to the expected state. Solution To address the problem above, AGIC periodically checks if the latest gateway configuration is different from what it cached, and reconciles if needed so that the gateway configuration is eventually correct.
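To see reconciliation in action, you can make an out-of-band change (for example, delete an AGIC-created routing rule in the portal) and watch the controller restore it. A minimal sketch, assuming a release named ingress-azure in the default namespace; the exact log wording varies by version:

```bash
# follow the controller logs and watch for reconcile activity
kubectl logs -f deployment/ingress-azure -n default | grep -i reconcile
```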
How to configure reconcile There are two ways to configure AGIC reconcile via helm; to use the new feature, make sure the AGIC version is at least 1.2.0-rc1 Configure inside helm values.yaml Setting reconcilePeriodSeconds: 30 means AGIC checks every 30 seconds whether the gateway needs to be reconciled. Acceptable values are between 30 and 300. Configure from helm command line Configure from the helm install command (first-time install) or the helm upgrade command; helm v3 is required ```bash helm fresh install helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0-rc3 --set reconcilePeriodSeconds=30 helm upgrade --reuse-values, when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. helm upgrade oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --reuse-values --version 1.2.0-rc3 --set reconcilePeriodSeconds=30 ```","title":"Agic reconcile"},{"location":"features/agic-reconcile/#reconcile-scenario-beta","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When an Application Gateway is deployed through an ARM template, a requirement is that the gateway configuration should contain a probe, listener, rule, backend pool and backend http setting. When such a template is re-deployed with minor changes (for example to WAF rules) on a Gateway that is being controlled by AGIC, all the AGIC-written rules are removed. Since such a change on Application Gateway doesn\u2019t trigger any events on AGIC, AGIC doesn\u2019t reconcile the gateway back to the expected state.","title":"Reconcile scenario (BETA)"},{"location":"features/agic-reconcile/#solution","text":"To address the problem above, AGIC periodically checks if the latest gateway configuration is different from what it cached, and reconciles if needed so that the gateway configuration is eventually correct.","title":"Solution"},{"location":"features/agic-reconcile/#how-to-configure-reconcile","text":"There are two ways to configure AGIC reconcile via helm; to use the new feature, make sure the AGIC version is at least 1.2.0-rc1","title":"How to configure reconcile"},{"location":"features/agic-reconcile/#configure-inside-helm-valuesyaml","text":"Setting reconcilePeriodSeconds: 30 means AGIC checks every 30 seconds whether the gateway needs to be reconciled.
Acceptable values are between 30 and 300.","title":"Configure inside helm values.yaml"},{"location":"features/agic-reconcile/#configure-from-helm-command-line","text":"Configure from the helm install command (first-time install) or the helm upgrade command; helm v3 is required ```bash","title":"Configure from helm command line"},{"location":"features/agic-reconcile/#helm-fresh-install","text":"helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0-rc3 --set reconcilePeriodSeconds=30","title":"helm fresh install"},{"location":"features/agic-reconcile/#help-upgrade","text":"","title":"helm upgrade"},{"location":"features/agic-reconcile/#-reuse-values-when-upgrading-reuse-the-last-releases-values-and-merge-in-any-overrides-from-the-command-line-via-set-and-f","text":"helm upgrade oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --reuse-values --version 1.2.0-rc3 --set reconcilePeriodSeconds=30 ```","title":"--reuse-values, when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f."},{"location":"features/appgw-ssl-certificate/","text":"Prerequisites NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document assumes you already have the following Azure tools and resources installed: AKS with Advanced Networking enabled App Gateway v2 in the same virtual network as AKS AAD Pod Identity installed on your AKS cluster Cloud Shell is the Azure shell environment, which has az CLI, kubectl , and helm installed. These tools are required for the commands below. Please use Greenfield Deployment to create any of these resources that don't exist yet. To use the new feature, make sure the AGIC version is at least 1.2.0-rc3 bash helm install oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure -f helm-config.yaml --version 1.2.0-rc3 --generate-name Create a certificate and configure the certificate to AppGw The certificate below should only be used for testing purposes. ```bash appgwName=\"\" resgp=\"\" generate certificate for testing openssl req -x509 -nodes -days 365 -newkey rsa:2048 \\ -out test-cert.crt \\ -keyout test-cert.key \\ -subj \"/CN=test\" openssl pkcs12 -export \\ -in test-cert.crt \\ -inkey test-cert.key \\ -passout pass:test \\ -out test-cert.pfx configure certificate to app gateway az network application-gateway ssl-cert create \\ --resource-group $resgp \\ --gateway-name $appgwName \\ -n mysslcert \\ --cert-file test-cert.pfx \\ --cert-password \"test\" ``` Configure certificate from Key Vault to AppGw To configure a certificate from Key Vault on Application Gateway, a user-assigned managed identity needs to be created and assigned to AppGw, and that identity needs GET access to secrets in the Key Vault.
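Before running the walkthrough below, it can help to confirm that the gateway and resource group are reachable from your shell. A quick sanity check, using the same variables the walkthrough defines:

```bash
# both commands should return names rather than errors
az network application-gateway show -n $appgwName -g $resgp --query name -o tsv
az identity list -g $resgp --query \"[].name\" -o tsv
```

The full sequence of commands follows: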
```bash Configure your resources appgwName=\"\" resgp=\"\" vaultName=\"\" location=\"\" aksClusterName=\"\" aksResourceGroupName=\"\" IMPORTANT: the following way to retrieve the object id of the AGIC managed identity only applies when AGIC is deployed via the AGIC addon for AKS get the resource group name of the AKS cluster nrg=$(az aks show --name $aksClusterName --resource-group $aksResourceGroupName --query nodeResourceGroup --output tsv) get principalId of the AGIC managed identity identityName=\"ingressapplicationgateway-$aksClusterName\" agicIdentityPrincipalId=$(az identity show --name $identityName --resource-group $nrg --query principalId --output tsv) One time operation, create Azure key vault and certificate (can be done through the portal as well) az keyvault create -n $vaultName -g $resgp --enable-soft-delete -l $location One time operation, create user-assigned managed identity az identity create -n appgw-id -g $resgp -l $location identityID=$(az identity show -n appgw-id -g $resgp -o tsv --query \"id\") identityPrincipal=$(az identity show -n appgw-id -g $resgp -o tsv --query \"principalId\") One time operation, assign AGIC identity to have operator access over AppGw identity az role assignment create --role \"Managed Identity Operator\" --assignee $agicIdentityPrincipalId --scope $identityID One time operation, assign the identity to Application Gateway az network application-gateway identity assign \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --identity $identityID One time operation, assign the identity GET secret access to Azure Key Vault az keyvault set-policy \\ -n $vaultName \\ -g $resgp \\ --object-id $identityPrincipal \\ --secret-permissions get For each new certificate, create a cert on keyvault and add unversioned secret id to Application Gateway az keyvault certificate create \\ --vault-name $vaultName \\ -n mycert \\ -p \"$(az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n mycert --vault-name $vaultName --query \"sid\" -o tsv) unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url For each new certificate, add the certificate to AppGw az network application-gateway ssl-cert create \\ -n mykvsslcert \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --key-vault-secret-id $unversionedSecretId # ssl certificate with name \"mykvsslcert\" will be configured on AppGw ``` Testing the key vault certificate on Ingress Since we now have a certificate from Key Vault configured on Application Gateway, we can add the new annotation appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert to the Kubernetes ingress to enable the feature.
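Before wiring the certificate into an ingress, you can verify it actually landed on the gateway (a quick check using the variables above):

```bash
# should print mykvsslcert if the certificate was attached successfully
az network application-gateway ssl-cert show -n mykvsslcert --gateway-name $appgwName -g $resgp --query name -o tsv
```

With that in place, deploy a sample app whose ingress references the certificate: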
```bash install an app cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: aspnetapp labels: app: aspnetapp spec: containers: - image: \"mcr.microsoft.com/dotnet/samples:aspnetapp\" name: aspnetapp-image ports: - containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: aspnetapp spec: selector: app: aspnetapp ports: - protocol: TCP port: 80 targetPort: 80 apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: aspnetapp annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert spec: rules: - http: paths: - path: / backend: service: name: aspnetapp port: number: 80 pathType: Exact EOF ```","title":"Appgw ssl certificate"},{"location":"features/appgw-ssl-certificate/#prerequisites","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document assumes you already have the following Azure tools and resources installed: AKS with Advanced Networking enabled App Gateway v2 in the same virtual network as AKS AAD Pod Identity installed on your AKS cluster Cloud Shell is the Azure shell environment, which has az CLI, kubectl , and helm installed. These tools are required for the commands below. Please use Greenfield Deployment to create any of these resources that don't exist yet. To use the new feature, make sure the AGIC version is at least 1.2.0-rc3 bash helm install oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure -f helm-config.yaml --version 1.2.0-rc3 --generate-name","title":"Prerequisites"},{"location":"features/appgw-ssl-certificate/#create-a-certificate-and-configure-the-certificate-to-appgw","text":"The certificate below should only be used for testing purposes. ```bash appgwName=\"\" resgp=\"\"","title":"Create a certificate and configure the certificate to AppGw"},{"location":"features/appgw-ssl-certificate/#generate-certificate-for-testing","text":"openssl req -x509 -nodes -days 365 -newkey rsa:2048 \\ -out test-cert.crt \\ -keyout test-cert.key \\ -subj \"/CN=test\" openssl pkcs12 -export \\ -in test-cert.crt \\ -inkey test-cert.key \\ -passout pass:test \\ -out test-cert.pfx","title":"generate certificate for testing"},{"location":"features/appgw-ssl-certificate/#configure-certificate-to-app-gateway","text":"az network application-gateway ssl-cert create \\ --resource-group $resgp \\ --gateway-name $appgwName \\ -n mysslcert \\ --cert-file test-cert.pfx \\ --cert-password \"test\" ```","title":"configure certificate to app gateway"},{"location":"features/appgw-ssl-certificate/#configure-certificate-from-key-vault-to-appgw","text":"To configure a certificate from Key Vault on Application Gateway, a user-assigned managed identity needs to be created and assigned to AppGw, and that identity needs GET access to secrets in the Key Vault.
```bash","title":"Configure certificate from Key Vault to AppGw"},{"location":"features/appgw-ssl-certificate/#configure-your-resources","text":"appgwName=\"\" resgp=\"\" vaultName=\"\" location=\"\" aksClusterName=\"\" aksResourceGroupName=\"\" appgwName=\"\"","title":"Configure your resources"},{"location":"features/appgw-ssl-certificate/#important-the-following-way-to-retrieve-the-object-id-of-the-agic-managed-identity","text":"","title":"IMPORTANT: the following way to retrieve the object id of the AGIC managed identity"},{"location":"features/appgw-ssl-certificate/#only-applies-when-agic-is-deployed-via-the-agic-addon-for-aks","text":"","title":"only applies when AGIC is deployed via the AGIC addon for AKS"},{"location":"features/appgw-ssl-certificate/#get-the-resource-group-name-of-the-aks-cluster","text":"nrg=$(az aks show --name $aksClusterName --resource-group $aksResourceGroupName --query nodeResourceGroup --output tsv)","title":"get the resource group name of the AKS cluster"},{"location":"features/appgw-ssl-certificate/#get-principalid-of-the-agic-managed-identity","text":"identityName=\"ingressapplicationgateway- aksClusterName\" agicIdentityPrincipalId= aksClusterName\" agicIdentityPrincipalId= (az identity show --name $identityName --resource-group $nrg --query principalId --output tsv)","title":"get principalId of the AGIC managed identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-create-azure-key-vault-and-certificate-can-done-through-portal-as-well","text":"az keyvault create -n $vaultName -g $resgp --enable-soft-delete -l $location","title":"One time operation, create Azure key vault and certificate (can done through portal as well)"},{"location":"features/appgw-ssl-certificate/#one-time-operation-create-user-assigned-managed-identity","text":"az identity create -n appgw-id -g $resgp -l location identityID= location identityID= (az identity show -n appgw-id -g resgp -o tsv --query \"id\") identityPrincipal= resgp -o tsv --query \"id\") identityPrincipal= (az identity show -n appgw-id -g $resgp -o tsv --query \"principalId\")","title":"One time operation, create user-assigned managed identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-agic-identity-to-have-operator-access-over-appgw-identity","text":"az role assignment create --role \"Managed Identity Operator\" --assignee $agicIdentityPrincipalId --scope $identityID","title":"One time operation, assign AGIC identity to have operator access over AppGw identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-the-identity-to-application-gateway","text":"az network application-gateway identity assign \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --identity $identityID","title":"One time operation, assign the identity to Application Gateway"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-the-identity-get-secret-access-to-azure-key-vault","text":"az keyvault set-policy \\ -n $vaultName \\ -g $resgp \\ --object-id $identityPrincipal \\ --secret-permissions get","title":"One time operation, assign the identity GET secret access to Azure Key Vault"},{"location":"features/appgw-ssl-certificate/#for-each-new-certificate-create-a-cert-on-keyvault-and-add-unversioned-secret-id-to-application-gateway","text":"az keyvault certificate create \\ --vault-name vaultName \\ -n mycert \\ -p \" vaultName \\ -n mycert \\ -p \" (az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n 
mycert --vault-name $vaultName --query \"sid\" -o tsv) unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url","title":"For each new certificate, create a cert on keyvault and add unversioned secret id to Application Gateway"},{"location":"features/appgw-ssl-certificate/#for-each-new-certificate-add-the-certificate-to-appgw","text":"az network application-gateway ssl-cert create \\ -n mykvsslcert \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --key-vault-secret-id $unversionedSecretId # ssl certificate with name \"mykvsslcert\" will be configured on AppGw ```","title":"For each new certificate, add the certificate to AppGw"},{"location":"features/appgw-ssl-certificate/#testing-the-key-vault-certificate-on-ingress","text":"Since we now have a certificate from Key Vault configured on Application Gateway, we can add the new annotation appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert to the Kubernetes ingress to enable the feature. ```bash","title":"Testing the key vault certificate on Ingress"},{"location":"features/appgw-ssl-certificate/#install-an-app","text":"cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: aspnetapp labels: app: aspnetapp spec: containers: - image: \"mcr.microsoft.com/dotnet/samples:aspnetapp\" name: aspnetapp-image ports: - containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: aspnetapp spec: selector: app: aspnetapp ports: - protocol: TCP port: 80 targetPort: 80 apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: aspnetapp annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert spec: rules: - http: paths: - path: / backend: service: name: aspnetapp port: number: 80 pathType: Exact EOF ```","title":"install an app"},{"location":"features/cookie-affinity/","text":"Enable Cookie based Affinity NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on cookie based affinity for Application Gateway for Containers may be found here .
As outlined in the Azure Application Gateway Documentation , Application Gateway supports cookie-based affinity, with which it can direct subsequent traffic from a user session to the same server for processing. Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - backend: service: name: frontend port: number: 80","title":"Cookie affinity"},{"location":"features/cookie-affinity/#enable-cookie-based-affinity","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on cookie based affinity for Application Gateway for Containers may be found here . As outlined in the Azure Application Gateway Documentation , Application Gateway supports cookie-based affinity, with which it can direct subsequent traffic from a user session to the same server for processing.","title":"Enable Cookie based Affinity"},{"location":"features/cookie-affinity/#example","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - backend: service: name: frontend port: number: 80","title":"Example"},{"location":"features/custom-ingress-class/","text":"Custom Ingress Class NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Minimum version: 1.3.0 Custom ingress class allows you to customize the ingress class selector that AGIC will use when filtering the ingress manifests. AGIC uses azure/application-gateway as its default ingress class. This will allow you to target multiple AGICs on a single namespace as each AGIC can now use its own ingress class. For instance, AGIC with ingress class agic-public can serve public traffic, and AGIC with agic-private can serve \"internal\" traffic. To use a custom ingress class, install AGIC by providing a value for kubernetes.ingressClass in helm config. bash helm install ./helm/ingress-azure \\ --name ingress-azure \\ -f helm-config.yaml --set kubernetes.ingressClass=arbitrary-class Then, within the spec object, specify ingressClassName with the same value provided to AGIC.
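You can confirm which class a running controller is watching by inspecting the release values (a quick check, assuming a release named ingress-azure):

```bash
helm get values ingress-azure -o yaml | grep -i ingressClass
```

The ingress manifest then selects that class via ingressClassName: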
yaml kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag spec: ingressClassName: arbitrary-class rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 Reference Proposal Document","title":"Custom Ingress Class"},{"location":"features/custom-ingress-class/#custom-ingress-class","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Minimum version: 1.3.0 Custom ingress class allows you to customize the ingress class selector that AGIC will use when filtering the ingress manifests. AGIC uses azure/application-gateway as its default ingress class. This will allow you to target multiple AGICs on a single namespace as each AGIC can now use its own ingress class. For instance, AGIC with ingress class agic-public can serve public traffic, and AGIC with agic-private can serve \"internal\" traffic. To use a custom ingress class, install AGIC by providing a value for kubernetes.ingressClass in helm config. bash helm install ./helm/ingress-azure \\ --name ingress-azure \\ -f helm-config.yaml --set kubernetes.ingressClass=arbitrary-class Then, within the spec object, specify ingressClassName with the same value provided to AGIC. yaml kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag spec: ingressClassName: arbitrary-class rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80","title":"Custom Ingress Class"},{"location":"features/custom-ingress-class/#reference","text":"Proposal Document","title":"Reference"},{"location":"features/multiple-namespaces/","text":"Multiple Namespace Support NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Motivation Kubernetes Namespaces make it possible for a Kubernetes cluster to be partitioned and allocated to sub-groups of a larger team. These sub-teams can then deploy and manage infrastructure with finer control of resources, security, configuration etc. Kubernetes allows for one or more ingress resources to be defined independently within each namespace. As of version 0.7 Azure Application Gateway Kubernetes IngressController (AGIC) can ingest events from and observe multiple namespaces.
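As a quick preview of the watchNamespace configuration covered below, pointing a single controller at several namespaces at install time might look like this (illustrative; note the escaped comma required by helm --set):

```bash
helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\
  --set kubernetes.watchNamespace=\"default\\,secondNamespace\" --generate-name
```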
For example, consider the following duplicate ingress resources defined namespaces staging and production for www.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: staging annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: production annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 Despite the two ingress resources demanding traffic for www.contoso.com to be routed to the respective Kubernetes namespaces, only one backend can service the traffic. AGIC would create a configuration on \"first come, first served\" basis for one of the resources. If two ingresses resources are created at the same time, the one earlier in the alphabet will take precedence. From the example above we will only be able to create settings for the production ingress. App Gateway will be configured with the following resources: Listener: fl-www.contoso.com-80 Routing Rule: rr-www.contoso.com-80 Backend Pool: pool-production-contoso-web-service-80-bp-80 HTTP Settings: bp-production-contoso-web-service-80-80-websocket-ingress Health Probe: pb-production-contoso-web-service-80-websocket-ingress Note that except for listener and routing rule , the App Gateway resources created include the name of the namespace ( production ) for which they were created. If the two ingress resources are introduced into the AKS cluster at different points in time, it is likely for AGIC to end up in a scenario where it reconfigures App Gateway and re-routes traffic from namespace-B to namespace-A . For example if you added staging first, AGIC will configure App Gwy to route traffic to the staging backend pool. At a later stage, introducing production ingress, will cause AGIC to reprogram App Gwy, which will start routing traffic to the production backend pool. Restricting Access to Namespaces By default AGIC will configure App Gateway based on annotated Ingress within any namespace. Should you want to limit this behaviour you have the following options: limit the namespaces, by explicitly defining namespaces AGIC should observe via the watchNamespace YAML key in helm-config.yaml use Role/RoleBinding to limit AGIC to specific namespaces","title":"Multiple Namespace Support"},{"location":"features/multiple-namespaces/#multiple-namespace-support","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Multiple Namespace Support"},{"location":"features/multiple-namespaces/#motivation","text":"Kubernetes Namespaces make it possible for a Kubernetes cluster to be partitioned and allocated to sub-groups of a larger team. These sub-teams can then deploy and manage infrastructure with finer controls of resources, security, configuration etc. Kubernetes allows for one or more ingress resources to be defined independently within each namespace. As of version 0.7 Azure Application Gateway Kubernetes IngressController (AGIC) can ingest events from and observe multiple namespaces. 
Should the AKS administrator decide to use App Gateway as an ingress, all namespaces will use the same instance of App Gateway. A single installation of Ingress Controller will monitor accessible namespaces and will configure the App Gateway it is associated with. Version 0.7 of AGIC will continue to exclusively observe the default namespace, unless this is explicitly changed to one or more different namespaces in the Helm configuration (see section below).","title":"Motivation"},{"location":"features/multiple-namespaces/#enable-multiple-namespace-support","text":"To enable multiple namespace support: modify the helm-config.yaml file in one of the following ways: delete the watchNamespace key entirely from helm-config.yaml - AGIC will observe all namespaces set watchNamespace to an empty string - AGIC will observe all namespaces add multiple namespaces separated by a comma ( watchNamespace: default,secondNamespace ) - AGIC will observe these namespaces exclusively apply Helm template changes with: helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure Once deployed with the ability to observe multiple namespaces, AGIC will: list ingress resources from all accessible namespaces filter to ingress resources annotated with kubernetes.io/ingress.class: azure/application-gateway compose combined App Gateway config apply the config to the associated App Gateway via ARM","title":"Enable multiple namespace support"},{"location":"features/multiple-namespaces/#conflicting-configurations","text":"Multiple namespaced ingress resources could instruct AGIC to create conflicting configurations for a single App Gateway. (Two ingresses claiming the same domain for instance.) At the top of the hierarchy - listeners (IP address, port, and host) and routing rules (binding listener, backend pool and HTTP settings) could be created and shared by multiple namespaces/ingresses. On the other hand - paths, backend pools, HTTP settings, and TLS certificates could be created by one namespace only and duplicates will removed.. For example, consider the following duplicate ingress resources defined namespaces staging and production for www.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: staging annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: production annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 Despite the two ingress resources demanding traffic for www.contoso.com to be routed to the respective Kubernetes namespaces, only one backend can service the traffic. AGIC would create a configuration on \"first come, first served\" basis for one of the resources. If two ingresses resources are created at the same time, the one earlier in the alphabet will take precedence. From the example above we will only be able to create settings for the production ingress. 
App Gateway will be configured with the following resources: Listener: fl-www.contoso.com-80 Routing Rule: rr-www.contoso.com-80 Backend Pool: pool-production-contoso-web-service-80-bp-80 HTTP Settings: bp-production-contoso-web-service-80-80-websocket-ingress Health Probe: pb-production-contoso-web-service-80-websocket-ingress Note that except for listener and routing rule , the App Gateway resources created include the name of the namespace ( production ) for which they were created. If the two ingress resources are introduced into the AKS cluster at different points in time, it is likely for AGIC to end up in a scenario where it reconfigures App Gateway and re-routes traffic from namespace-B to namespace-A . For example if you added staging first, AGIC will configure App Gateway to route traffic to the staging backend pool. At a later stage, introducing the production ingress will cause AGIC to reprogram App Gateway, which will start routing traffic to the production backend pool.","title":"Conflicting Configurations"},{"location":"features/multiple-namespaces/#restricting-access-to-namespaces","text":"By default AGIC will configure App Gateway based on annotated Ingress within any namespace. Should you want to limit this behaviour you have the following options: limit the namespaces, by explicitly defining namespaces AGIC should observe via the watchNamespace YAML key in helm-config.yaml use Role/RoleBinding to limit AGIC to specific namespaces","title":"Restricting Access to Namespaces"},{"location":"features/private-ip/","text":"Using Private IP for internal routing This feature allows you to expose the ingress endpoint within the Virtual Network using a private IP. Pre-requisites Application Gateway with a Private IP configuration There are two ways to configure the controller to use Private IP for ingress, Assign to a particular ingress To expose a particular ingress over Private IP, use annotation appgw.ingress.kubernetes.io/use-private-ip in Ingress. Usage yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" For App Gateways without a Private IP, Ingresses annotated with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will be indicated in the ingress event and AGIC pod log. Error as indicated in the Ingress Event bash Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Error as indicated in AGIC Logs bash E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Assign Globally If the requirement is to restrict all Ingresses to be exposed over Private IP, use appgw.usePrivateIP: true in helm config. Usage yaml appgw: subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName> usePrivateIP: true This will make the ingress controller filter the ipconfigurations for a Private IP when configuring the frontend listeners on the Application Gateway. AGIC will panic and crash if usePrivateIP: true and no Private IP is assigned. Notes: Application Gateway v2 SKU requires a Public IP. Should you require Application Gateway to be private, attach a Network Security Group to the Application Gateway's subnet to restrict traffic.
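As an alternative to setting appgw.usePrivateIP: true in values.yaml (see Assign Globally above), the flag can be toggled from the helm command line (illustrative, helm v3, assuming a release named ingress-azure):

```bash
helm upgrade ingress-azure oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\
  --reuse-values --set appgw.usePrivateIP=true
```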
","title":"Using Private IP for internal routing"},{"location":"features/private-ip/#using-private-ip-for-internal-routing","text":"This feature allows you to expose the ingress endpoint within the Virtual Network using a private IP. Pre-requisites Application Gateway with a Private IP configuration There are two ways to configure the controller to use Private IP for ingress,","title":"Using Private IP for internal routing"},{"location":"features/private-ip/#assign-to-a-particular-ingress","text":"To expose a particular ingress over Private IP, use annotation appgw.ingress.kubernetes.io/use-private-ip in Ingress.","title":"Assign to a particular ingress"},{"location":"features/private-ip/#usage","text":"yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" For App Gateways without a Private IP, Ingresses annotated with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will be indicated in the ingress event and AGIC pod log. Error as indicated in the Ingress Event bash Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Error as indicated in AGIC Logs bash E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address","title":"Usage"},{"location":"features/private-ip/#assign-globally","text":"If the requirement is to restrict all Ingresses to be exposed over Private IP, use appgw.usePrivateIP: true in helm config.","title":"Assign Globally"},{"location":"features/private-ip/#usage_1","text":"yaml appgw: subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName> usePrivateIP: true This will make the ingress controller filter the ipconfigurations for a Private IP when configuring the frontend listeners on the Application Gateway. AGIC will panic and crash if usePrivateIP: true and no Private IP is assigned. Notes: Application Gateway v2 SKU requires a Public IP. Should you require Application Gateway to be private, attach a Network Security Group to the Application Gateway's subnet to restrict traffic.","title":"Usage"},{"location":"features/probes/","text":"Adding Health Probes to your service NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on custom health probes in Application Gateway for Containers [may be found here](https://learn.microsoft.com/azure/application-gateway/for-containers/custom-health-probe). By default, the Ingress controller will provision an HTTP GET probe for the exposed pods. The probe properties can be customized by adding a Readiness or Liveness Probe to your deployment / pod spec.
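Once AGIC has translated a probe (configured as shown in the sections below), you can inspect what actually landed on the gateway (illustrative):

```bash
az network application-gateway probe list --gateway-name myApplicationGateway -g myResourceGroup -o table
```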
With readinessProbe or livenessProbe yaml apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp spec: replicas: 3 template: metadata: labels: service: site spec: containers: - name: aspnetapp image: mcr.microsoft.com/dotnet/samples:aspnetapp imagePullPolicy: IfNotPresent ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 periodSeconds: 3 timeoutSeconds: 1 Kubernetes API Reference: Container Probes HttpGet Action Note: readinessProbe and livenessProbe are supported when configured with httpGet . Probing on a port other than the one exposed on the pod is currently not supported. HttpHeaders , InitialDelaySeconds , SuccessThreshold are not supported. Without readinessProbe or livenessProbe If the above probes are not provided, then the Ingress Controller assumes that the service is reachable on the path specified by the backend-path-prefix annotation or the path specified in the ingress definition for the service. Default Values for Health Probe For any property that cannot be inferred from the readiness/liveness probe, default values are set. Application Gateway probe property defaults: Path: / , Host: localhost , Protocol: HTTP , Timeout: 30 , Interval: 30 , UnhealthyThreshold: 3","title":"Probes"},{"location":"features/probes/#adding-health-probes-to-your-service","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on custom health probes in Application Gateway for Containers [may be found here](https://learn.microsoft.com/azure/application-gateway/for-containers/custom-health-probe). By default, the Ingress controller will provision an HTTP GET probe for the exposed pods. The probe properties can be customized by adding a Readiness or Liveness Probe to your deployment / pod spec.","title":"Adding Health Probes to your service"},{"location":"features/probes/#with-readinessprobe-or-livenessprobe","text":"yaml apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp spec: replicas: 3 template: metadata: labels: service: site spec: containers: - name: aspnetapp image: mcr.microsoft.com/dotnet/samples:aspnetapp imagePullPolicy: IfNotPresent ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 periodSeconds: 3 timeoutSeconds: 1 Kubernetes API Reference: Container Probes HttpGet Action Note: readinessProbe and livenessProbe are supported when configured with httpGet . Probing on a port other than the one exposed on the pod is currently not supported. HttpHeaders , InitialDelaySeconds , SuccessThreshold are not supported.","title":"With readinessProbe or livenessProbe"},{"location":"features/probes/#without-readinessprobe-or-livenessprobe","text":"If the above probes are not provided, then the Ingress Controller assumes that the service is reachable on the path specified by the backend-path-prefix annotation or the path specified in the ingress definition for the service.","title":"Without readinessProbe or livenessProbe"},{"location":"features/probes/#default-values-for-health-probe","text":"For any property that cannot be inferred from the readiness/liveness probe, default values are set.
Application Gateway probe property defaults: Path: / , Host: localhost , Protocol: HTTP , Timeout: 30 , Interval: 30 , UnhealthyThreshold: 3","title":"Default Values for Health Probe"},{"location":"features/rewrite-rule-set-custom-resource/","text":"Rewrite Rule Set Custom Resource (supported since 1.6.0-rc1) NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. URL Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Header Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Note: This feature is supported since 1.6.0-rc1. On earlier versions, please use appgw.ingress.kubernetes.io/rewrite-rule-set , which allows using an existing rewrite rule set on Application Gateway. Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs and query string parameters, as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information. Rewrite Rule Set Custom Resource brings this feature to AGIC. HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers. With URL rewrite capability, you can: Rewrite the host name, path and query string of the request URL Choose to rewrite the URL of all requests or only those requests which match one or more of the conditions you set. These conditions are based on the request and response properties (request header, response header and server variables). Choose to route the request based on either the original URL or the rewritten URL Usage To use the feature, the customer must define a Custom Resource of the type AzureApplicationGatewayRewrite , which must have a name in the metadata section. The ingress manifest must reference this Custom Resource via the appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource annotation. Important points to note metadata & name In the metadata section, the name of the AzureApplicationGatewayRewrite custom resource should match the name referenced in the annotation. RuleSequence The rule sequence must be unique for every rewrite rule. Conditions You can use rewrite conditions, an optional configuration, to evaluate the content of HTTP(S) requests and responses and perform a rewrite only when one or more conditions are met. The following types of variables can be used to define a condition: HTTP headers in the request HTTP headers in the response Application Gateway server variables Note: While defining conditions, request headers must be prefixed with http_req_ , response headers must be prefixed with http_res_ , and the list of server variables can be found here Actions You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite, and the new value to which you intend to rewrite them.
The value of a URL or a new or existing header can be set to these types of values: Text Request header Response header Server Variable Combination of any of the above Note: To specify a request header, you need to use the syntax http_req_headerName To specify a response header, you need to use the syntax http_resp_headerName To specify a server variable, you need to use the syntax var_serverVariable . See the list of supported server variables here URL Rewrite Configuration URL path: The value to which the path is to be rewritten. URL Query String: The value to which the query string is to be rewritten. Re-evaluate path map: Used to determine whether the URL path map is to be re-evaluated or not. If set to false , the original URL path will be used to match the path-pattern in the URL path map. If set to true , the URL path map will be re-evaluated to check the match with the rewritten path. Recommended: More information about Application Gateway's Rewrite feature can be found here Example ```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set-custom-resource spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Rewrite rule set custom resource"},{"location":"features/rewrite-rule-set-custom-resource/#rewrite-rule-set-custom-resource-supported-since-160-rc1","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. URL Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Header Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Note: This feature is supported since 1.6.0-rc1. On earlier versions, please use appgw.ingress.kubernetes.io/rewrite-rule-set , which allows using an existing rewrite rule set on Application Gateway. Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs and query string parameters, as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information. Rewrite Rule Set Custom Resource brings this feature to AGIC. HTTP headers allow a client and server to pass additional information with a request or response.
By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers. With URL rewrite capability, you can: Rewrite the host name, path and query string of the request URL Choose to rewrite the URL of all requests or only those requests which match one or more of the conditions you set. These conditions are based on the request and response properties (request header, response header and server variables). Choose to route the request based on either the original URL or the rewritten URL","title":"Rewrite Rule Set Custom Resource (supported since 1.6.0-rc1)"},{"location":"features/rewrite-rule-set-custom-resource/#usage","text":"To use the feature, the customer must define a Custom Resource of the type AzureApplicationGatewayRewrite , which must have a name in the metadata section. The ingress manifest must reference this Custom Resource via the appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource annotation.","title":"Usage"},{"location":"features/rewrite-rule-set-custom-resource/#important-points-to-note","text":"","title":"Important points to note"},{"location":"features/rewrite-rule-set-custom-resource/#metadata-name","text":"In the metadata section, the name of the AzureApplicationGatewayRewrite custom resource should match the name referenced in the annotation.","title":"metadata & name"},{"location":"features/rewrite-rule-set-custom-resource/#rulesequence","text":"The rule sequence must be unique for every rewrite rule.","title":"RuleSequence"},{"location":"features/rewrite-rule-set-custom-resource/#conditions","text":"You can use rewrite conditions, an optional configuration, to evaluate the content of HTTP(S) requests and responses and perform a rewrite only when one or more conditions are met. The following types of variables can be used to define a condition: HTTP headers in the request HTTP headers in the response Application Gateway server variables Note: While defining conditions, request headers must be prefixed with http_req_ , response headers must be prefixed with http_res_ , and the list of server variables can be found here","title":"Conditions"},{"location":"features/rewrite-rule-set-custom-resource/#actions","text":"You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite, and the new value to which you intend to rewrite them. The value of a URL or a new or existing header can be set to these types of values: Text Request header Response header Server Variable Combination of any of the above Note: To specify a request header, you need to use the syntax http_req_headerName To specify a response header, you need to use the syntax http_resp_headerName To specify a server variable, you need to use the syntax var_serverVariable . See the list of supported server variables here","title":"Actions"},{"location":"features/rewrite-rule-set-custom-resource/#url-rewrite-configuration","text":"URL path: The value to which the path is to be rewritten. URL Query String: The value to which the query string is to be rewritten. Re-evaluate path map: Used to determine whether the URL path map is to be re-evaluated or not. If set to false , the original URL path will be used to match the path-pattern in the URL path map. If set to true , the URL path map will be re-evaluated to check the match with the rewritten path.
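A quick way to check that the rewrite custom resource is registered and picked up (illustrative; the resource name is assumed from the kind and may differ):

```bash
kubectl get crd | grep -i azureapplicationgatewayrewrite   # is the CRD installed?
kubectl get azureapplicationgatewayrewrite -A              # list instances in all namespaces
```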
Recommended: More information about Application Gateway's Rewrite feature can be found here","title":"URL Rewrite Configuration"},{"location":"features/rewrite-rule-set-custom-resource/#example","text":"```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set-custom-resource spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Example"},{"location":"how-tos/continuous-deployment/","text":"Continuous Deployment with AKS and AGIC using Azure Pipelines NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. To achieve an efficiently deployed and managed global infrastructure, it is important to set up workflows for continuous integration and deployment. Azure DevOps is one of the options to achieve this goal. In the following example, we set up an Azure DevOps release pipeline to deploy an AKS cluster along with AGIC as ingress. This example is merely a scaffolding. You need to separately set up a build pipeline to install your application and ingress on the AKS cluster deployed as part of the release. Set up new service connection with service principal Note : Skip if you already have a service connection with owner access for role assignment Create a service principal to use with Azure Pipelines. This service principal will have owner access to the current subscription. This access will be used to perform role assignment for the AGIC identity in the pipeline. ```bash az ad sp create-for-rbac -n azure-pipeline-cd --role owner Copy the AppId and Password. We will use these in the next step. ``` Now, create a new service connection in Azure DevOps. Select the \" use the full version of the service connection dialog \" option so that you can provide the newly created service principal. Create a new Azure release pipeline We have prepared an example release pipeline . This pipeline has the following tasks: Deploy AKS Cluster Create a user assigned identity used by AGIC Pod Install Helm Install AAD Pod identity Install AGIC Install a sample application (with ingress) To use the example release pipeline, Download the template and import it to your project's release pipeline. Now provide the required settings for all tasks: Select the correct Agent Pool and Agent Specification (ubuntu-18.04) Select the newly created service connection for the Create Kubernetes Cluster and Create AGIC Identity tasks. Provide the values for clientId and clientSecret that will be configured as cluster credentials for the AKS cluster. You should create a separate service principal for the AKS cluster for security reasons.
```bash # create a new one and copy the appId and password to the variable section in the pipeline az ad sp create-for-rbac -n aks-cluster ``` Click Save . Now your pipeline is all set up. Hit Create release and provide a location (Azure region) where you want the cluster to be deployed. Snapshot of how the AKS node resource group will look: If this is your first deployment, AGIC will create a new Application Gateway. You should be able to visit the Application Gateway's IP address to see the sample application.","title":"Continuous Deployment with AKS and AGIC using Azure Pipelines"},
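A small helper sketch for the step above (it assumes the `jq` CLI is available); it creates the AKS service principal once and prints the two values you paste into the pipeline's variable section:

```bash
# Create the service principal and capture both credentials from the JSON
# output, instead of copying them by hand from the terminal.
sp=$(az ad sp create-for-rbac -n aks-cluster -o json)
clientId=$(echo "$sp" | jq -r '.appId')
clientSecret=$(echo "$sp" | jq -r '.password')

echo "clientId:     $clientId"      # paste into the pipeline variable 'clientId'
echo "clientSecret: $clientSecret"  # paste into the pipeline variable 'clientSecret'
```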
{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/","text":"How to deploy AGIC via Helm using Workload Identity NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This assumes you have an existing Application Gateway. If not, you can create it with command: bash az network application-gateway create -g myResourceGroup -n myApplicationGateway --sku Standard_v2 --public-ip-address myPublicIP --vnet-name myVnet --subnet mySubnet --priority 100 1. Set environment variables bash export RESOURCE_GROUP=\"myResourceGroup\" export APPLICATION_GATEWAY_NAME=\"myApplicationGateway\" export USER_ASSIGNED_IDENTITY_NAME=\"myIdentity\" export FEDERATED_IDENTITY_CREDENTIAL_NAME=\"myFedIdentity\" 2. Create resource group, AKS cluster and identity bash az group create --name \"${RESOURCE_GROUP}\" --location eastus az aks create -g \"${RESOURCE_GROUP}\" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity az identity create --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --resource-group \"${RESOURCE_GROUP}\" 3. Export the oidcIssuerProfile.issuerUrl bash export AKS_OIDC_ISSUER=\"$(az aks show -n myAKSCluster -g \"${RESOURCE_GROUP}\" --query \"oidcIssuerProfile.issuerUrl\" -otsv)\" 4. Create federated identity credential Note : the name of the service account that gets created after the helm installation is \u201cingress-azure\u201d and the following command assumes it will be deployed in the \u201cdefault\u201d namespace. Please change the namespace name in the next command if you deploy the AGIC related Kubernetes resources in another namespace. bash az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:default:ingress-azure 5. Obtain the ClientID of the identity created before that is needed for the next step bash az identity show --resource-group \"${RESOURCE_GROUP}\" --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --query 'clientId' -otsv 6. Export the Application Gateway resource ID bash export APP_GW_ID=\"$(az network application-gateway show --name \"${APPLICATION_GATEWAY_NAME}\" --resource-group \"${RESOURCE_GROUP}\" --query 'id' --output tsv)\" 7. Add Contributor role for the identity over the Application Gateway bash az role assignment create --assignee <identityClientID> --scope \"${APP_GW_ID}\" --role Contributor 8. In helm-config.yaml specify yaml armAuth: type: workloadIdentity identityClientID: <identityClientID> 9. Get the AKS cluster credentials bash az aks get-credentials -g \"${RESOURCE_GROUP}\" -n myAKSCluster 10. Install the helm chart bash helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 1.7.1","title":"How to deploy AGIC via Helm using Workload Identity"},
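After the install in the final step above, a quick verification sketch; the `app=ingress-azure` label and the `ingress-azure` deployment name assume the default chart values and are illustrative:

```bash
# Confirm the role assignment landed on the Application Gateway scope.
az role assignment list --scope "${APP_GW_ID}" -o table

# Confirm the AGIC pod came up and is authenticating via workload identity.
kubectl get pods -l app=ingress-azure
kubectl logs deployment/ingress-azure | head -n 40
```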
{"location":"how-tos/dns/","text":"Automate DNS updates NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When a hostname is specified in the Kubernetes Ingress resource's rules, it can be used to automatically create DNS records for the given domain and App Gateway's IP address. To achieve this the ExternalDNS Kubernetes app is required. ExternalDNS is installable via a Helm chart . The following document provides a tutorial on setting up ExternalDNS with Azure DNS. Below is a sample Ingress resource, annotated with kubernetes.io/ingress.class: azure/application-gateway , which configures alpha.contoso.com . yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: alpha annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: alpha.contoso.com http: paths: - path: / backend: service: name: contoso-service port: number: 80 pathType: Exact Application Gateway Ingress Controller (AGIC) automatically recognizes the public IP address assigned to the Application Gateway it is associated with, and sets this IP ( 1.2.3.4 ) on the Ingress resource as shown below: bash $ kubectl get ingress -A NAMESPACE NAME HOSTS ADDRESS PORTS AGE alpha alpha-ingress alpha.contoso.com 1.2.3.4 80 8m55s beta beta-ingress beta.contoso.com 1.2.3.4 80 8m54s Once the Ingresses contain both host and address, ExternalDNS will provision these to the DNS system it has been associated with and authorized for.","title":"Automate DNS updates"},
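As a rough sketch of the ExternalDNS installation mentioned above (the kubernetes-sigs chart location and the value keys are assumptions that vary by chart version; credentials are normally supplied via a mounted azure.json secret, per the ExternalDNS Azure tutorial):

```bash
# Add the upstream ExternalDNS chart and install it with the Azure provider.
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

# Older chart versions take --set provider=azure instead of provider.name.
# contoso.com is a placeholder domain filter.
helm install external-dns external-dns/external-dns \
  --set provider.name=azure \
  --set "domainFilters={contoso.com}"
```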
{"location":"how-tos/helm-upgrade/","text":"Upgrading AGIC using Helm NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on Azure Storage. Before we begin the upgrade procedure, ensure that you have added the required repository: View your currently added Helm repositories with: bash helm repo list Add the AGIC repo with: bash helm repo add \\ application-gateway-kubernetes-ingress \\ https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ Upgrade Refresh the AGIC Helm repository to get the latest release: bash helm repo update View available versions of the application-gateway-kubernetes-ingress chart: bash helm search repo -l application-gateway-kubernetes-ingress Sample response: bash NAME CHART VERSION APP VERSION DESCRIPTION application-gateway-kubernetes-ingress/ingress-azure 1.0.0 1.0.0 Use Azure Application Gateway as the ingress for an Azure... application-gateway-kubernetes-ingress/ingress-azure 0.7.0-rc1 0.7.0-rc1 Use Azure Application Gateway as the ingress for an Azure... application-gateway-kubernetes-ingress/ingress-azure 0.6.0 0.6.0 Use Azure Application Gateway as the ingress for an Azure... The latest available version from the list above is 1.0.0 . View the Helm charts currently installed: bash helm list Sample response: bash NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE odd-billygoat 22 Fri Nov 08 15:56:06 2019 FAILED ingress-azure-1.0.0 1.0.0 default The Helm chart installation from the sample response above is named odd-billygoat . We will use this name for the rest of the commands. Your actual deployment name will most likely differ. Upgrade the Helm deployment to a new version: bash helm upgrade \\ odd-billygoat \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 1.0.0 Rollback Should the Helm deployment fail, you can roll back to a previous release. Get the last known healthy release number: bash helm history odd-billygoat Sample output: bash REVISION UPDATED STATUS CHART DESCRIPTION 1 Mon Jun 17 13:49:42 2019 DEPLOYED ingress-azure-0.6.0 Install complete 2 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-xx xxxx From the sample output of the helm history command, it looks like the last successful deployment of our odd-billygoat was revision 1 . Roll back to the last successful revision: bash helm rollback odd-billygoat 1","title":"Upgrading AGIC using Helm"},
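Before running the real upgrade, a cautious dry run can surface template or values problems without touching the cluster; a sketch reusing the `odd-billygoat` release name from the sample above:

```bash
# Render the upgrade without applying it and inspect the output first.
helm upgrade odd-billygoat \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --version 1.0.0 \
  --dry-run --debug
```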
{"location":"how-tos/lets-encrypt/","text":"Certificate issuance with LetsEncrypt.org NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This section configures your AKS to leverage LetsEncrypt.org and automatically obtain a TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform SSL/TLS termination for your AKS cluster. The setup described here uses the cert-manager Kubernetes add-on, which automates the creation and management of certificates. Follow the steps below to install cert-manager on your existing AKS cluster. Helm Chart Run the following script to install the cert-manager helm chart. This will: create a new cert-manager namespace on your AKS create the following CRDs: Certificate, Challenge, ClusterIssuer, Issuer, Order install the cert-manager chart (from docs.cert-manager.io) ```bash # Install the CustomResourceDefinition resources separately # Note: --validate=false is required per https://github.com/jetstack/cert-manager/issues/2208#issuecomment-541311021 kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13/deploy/manifests/00-crds.yaml --validate=false # Create the namespace for cert-manager kubectl create namespace cert-manager # Label the cert-manager namespace to disable resource validation kubectl label namespace cert-manager cert-manager.io/disable-validation=true # Add the Jetstack Helm repository helm repo add jetstack https://charts.jetstack.io # Update your local Helm chart repository cache helm repo update # Install v0.13 of cert-manager Helm chart helm install cert-manager \\ --namespace cert-manager \\ --version v0.13.0 \\ jetstack/cert-manager ``` ClusterIssuer Resource Create a ClusterIssuer resource. It is required by cert-manager to represent the Let\u2019s Encrypt certificate authority where the signed certificates will be obtained. By using the non-namespaced ClusterIssuer resource, cert-manager will issue certificates that can be consumed from multiple namespaces. Let\u2019s Encrypt uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. More details on configuring ClusterIssuer properties here . ClusterIssuer will instruct cert-manager to issue certificates using the Let\u2019s Encrypt staging environment used for testing (the root certificate is not present in browser/client trust stores). The default challenge type in the YAML below is http01 . Other challenges are documented on letsencrypt.org - Challenge Types IMPORTANT: Update <YOUR.EMAIL@ADDRESS> in the YAML below ```bash kubectl apply -f - <<EOF apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: # You must replace this email address with your own. # Let's Encrypt will use this to contact you about expiring # certificates, and issues related to your account. email: <YOUR.EMAIL@ADDRESS> # ACME server URL for Let\u2019s Encrypt\u2019s staging environment. # The staging environment will not issue trusted certificates but is # used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key. name: letsencrypt-secret # Enable the HTTP-01 challenge provider # you prove ownership of a domain by ensuring that a particular # file is present at the domain solvers: - http01: ingress: class: azure/application-gateway EOF ``` Deploy App Create an Ingress resource to expose the guestbook application using the Application Gateway with the Let\u2019s Encrypt certificate. Ensure your Application Gateway has a public Frontend IP configuration with a DNS name (either using the default azure.com domain, or provision an Azure DNS Zone service, and assign your own custom domain). Note the annotation cert-manager.io/cluster-issuer: letsencrypt-staging , which tells cert-manager to process the tagged Ingress resource. IMPORTANT: Update <PLACEHOLDERS.COM> in the YAML below with your own domain (or the Application Gateway one, for example 'kh-aks-ingress.westeurope.cloudapp.azure.com') bash kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway cert-manager.io/cluster-issuer: letsencrypt-staging spec: tls: - hosts: - <PLACEHOLDERS.COM> secretName: guestbook-secret-name rules: - host: <PLACEHOLDERS.COM> http: paths: - backend: service: name: frontend port: number: 80 EOF Use kubectl describe clusterissuer letsencrypt-staging to view the status of the ACME account registration. Use kubectl get secret guestbook-secret-name -o yaml to view the certificate issued. After a few seconds, you can access the guestbook service through the Application Gateway HTTPS URL using the automatically issued staging Let\u2019s Encrypt certificate. Your browser may warn you of an invalid cert authority. The staging certificate is issued by CN=Fake LE Intermediate X1 . This is an indication that the system worked as expected and you are ready for your production certificate. Production Certificate Once your staging certificate is set up successfully, you can switch to a production ACME server: Replace the staging annotation on your Ingress resource with: cert-manager.io/cluster-issuer: letsencrypt-prod Delete the existing staging ClusterIssuer you created in the previous step and create a new one by replacing the ACME server from the ClusterIssuer YAML above with https://acme-v02.api.letsencrypt.org/directory Certificate Expiration and Renewal Before the Let\u2019s Encrypt certificate expires, cert-manager will automatically update the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller will apply the updated secret referenced in the ingress resources it is using to configure the Application Gateway.","title":"Certificate issuance with LetsEncrypt.org"},
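A quick way to confirm which issuer signed the served certificate, once DNS resolves to the gateway (the host name below is a placeholder to replace with your own):

```bash
# Inspect the certificate served by Application Gateway; while on the
# staging ClusterIssuer the issuer line should mention the fake/staging CA.
host=example.contoso.com
echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
```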
{"location":"how-tos/minimize-downtime-during-deployments/","text":"Minimizing Downtime During Deployments NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Purpose This document outlines a Kubernetes and Ingress controller configuration which, when incorporated with proper Kubernetes rolling update deployments, could achieve near-zero-downtime deployments. Overview It is not uncommon for Kubernetes operators to observe Application Gateway 502 errors while performing a Kubernetes rolling update on an AKS cluster fronted by Application Gateway and AGIC. This document offers a method to alleviate this problem. Since the method described in this document relies on correctly aligning the timing of deployment events, it is not possible to guarantee 100% elimination of the probability of running into a 502 error. Even with this method there will be a non-zero chance for a period of time where Application Gateway backends could lag behind the most recent updates applied by a rolling update to the Kubernetes pods. Understanding 502 Errors At a high level there are 3 scenarios in which one could observe 502 errors on an AKS cluster fronted with App Gateway and AGIC. In all of these the root cause is the delay one could observe in applying IP address changes to the Application Gateway's backend pools.
Scaling down a Kubernetes cluster: Kubernetes is instructed to lower the number of pod replicas (perhaps manually, or via Horizontal Pod Autoscaler, or some other mechanism) Pods are put in Terminating state, while simultaneously removed from the list of Endpoints. AGIC observes the fact that Pods + Endpoints changed and begins a config update on App Gateway It takes somewhere between a second and a few minutes for a pod, or a list of pods, to be removed from App Gateway's backend -- meanwhile App Gateway still attempts to deliver traffic to terminated pods Result is occasional 502 errors Rolling Updates: Customer updates the version of the software (perhaps using kubectl set image ) Kubernetes upgrades a percentage of the pods at a time. The size of the bucket is defined in the strategy section of the Deployment spec Kubernetes adds a new pod with a new image - the pod goes through the states from ContainerCreating to Running When the new pod is in Running state - Kubernetes terminates the old pod The process described above is repeated until all pods are upgraded Kubernetes terminates resource-starved pods (CPU, RAM etc.) Solution The solution below lowers the probability of running into a scenario where App Gateway's backend pool points to terminated pods, resulting in a 502 error. The solution below does not completely remove this chance. Required configuration changes prior to performing a rolling update: Change the Pod and/or Deployment specs by adding preStop container life-cycle hooks , with a delay (sleep) of at least 90 seconds. Example: yaml kind: Deployment metadata: name: x labels: app: y spec: ... template: ... spec: containers: - name: ctr ... lifecycle: preStop: exec: command: [\"sleep\",\"90\"] Note: The \"sleep\" command assumes the container is based on Linux. For Windows containers the equivalent command is [\"powershell.exe\",\"-c\",\"sleep\",\"90\"] . The addition of the preStop container life cycle hook will: delay Kubernetes sending SIGTERM to the container by 90 seconds, but put the pod immediately in Terminating state simultaneously this will also immediately remove the pod from the Kubernetes Endpoints list this will cause AGIC to remove the pod from App Gateway's backend pool the pod will continue to run for the next 90 seconds - giving App Gateway 90 seconds to execute the \"remove from backend pools\" command Add a connection draining annotation to the Ingress read by AGIC to allow for in-flight connections to complete. Example: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"30\" What this achieves - when a pod is pulled from an App Gateway backend it will disappear from the UI, but existing in-flight connections will not be immediately terminated -- they will be given 30 seconds to complete. We believe that the addition of the preStop hook and the connection draining annotation will drastically reduce the probability of App Gateway attempting to connect to a terminated pod. Add terminationGracePeriodSeconds to the Pod resource YAML. This must be set to a value that is greater than the preStop hook wait time. yaml kind: Deployment metadata: name: x labels: app: y spec: ... template: ... spec: containers: - name: ctr ... terminationGracePeriodSeconds: 101 Decrease the interval between App Gateway health probes to backend pools. The goal is to increase the number of probes per unit of time. This will ensure that a terminated pod, which has not yet been removed from App Gateway's backend pool, will be marked as unhealthy sooner, thus lowering the probability of a request landing on a terminated pod and resulting in a 502 error. For example, the following Kubernetes Deployment liveness probe will result in the respective pods being marked as unhealthy after 3 failed probes at 4-second intervals. This config will be directly applied to Application Gateway (by AGIC), as well as Kubernetes. yaml ... livenessProbe: httpGet: path: / port: 80 periodSeconds: 4 timeoutSeconds: 5 failureThreshold: 3 Summary To achieve near-zero-downtime deployments, we need to add a: preStop hook waiting for 90 seconds termination grace period of at least 90 seconds connection draining timeout of about 30 seconds aggressive health probes Note: All proposed parameter values above should be adjusted for the specifics of the system being deployed; a consolidated sketch combining them follows below. Long term solutions to zero-downtime updates: Faster backend pool updates: The AGIC team is already working on the next iteration of the Ingress Controller, which will shorten the time to update App Gateway drastically. Faster backend pool updates will lower the probability of running into 502s. Rolling updates with App Gateway feedback: The AGIC team is looking into a deeper integration between AGIC and Kubernetes' rolling updates feature.","title":"Minimizing Downtime During Deployments"},
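The following is a minimal consolidated sketch of the four settings from the summary above in one Deployment plus Ingress; the names x / y / ctr follow the fragments above, while the image, port, and the x-service backend are placeholders:

```bash
# One manifest combining preStop, terminationGracePeriodSeconds,
# the liveness probe, and the connection-draining Ingress annotations.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x
  labels:
    app: y
spec:
  selector:
    matchLabels:
      app: y
  template:
    metadata:
      labels:
        app: y
    spec:
      terminationGracePeriodSeconds: 101   # must exceed the 90s preStop sleep
      containers:
      - name: ctr
        image: nginx                       # placeholder image
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "90"]
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 4
          timeoutSeconds: 5
          failureThreshold: 3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: x-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/connection-draining: "true"
    appgw.ingress.kubernetes.io/connection-draining-timeout: "30"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: x-service                # placeholder Service for the Deployment
            port:
              number: 80
EOF
```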
{"location":"how-tos/networking/","text":"How to setup networking between Application Gateway and AKS NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When you are using Application Gateway with AKS for L7, you need to make sure that you have set up network connectivity correctly between the gateway and the cluster. Otherwise, you might receive 502s when reaching your site. There are two major things to consider when setting up network connectivity between Application Gateway and AKS: Virtual Network Configuration When AKS and Application Gateway are in the same virtual network When AKS and Application Gateway are in different virtual networks Network Plugin used with AKS Kubenet Azure (advanced) CNI Virtual Network Configuration Deployed in same virtual network If you have deployed AKS and Application Gateway in the same virtual network with Azure CNI as the network plugin, then you don't have to make any changes and you are good to go. Application Gateway instances should be able to reach the pods. If you are using the kubenet network plugin, then jump to Kubenet to set up the route table. Deployed in different vnets AKS can be deployed in a different virtual network from Application Gateway's virtual network; however, the two virtual networks must be peered together. When you create a virtual network peering between two virtual networks, a route is added by Azure for each address range within the address space of each virtual network a peering is created for.
```bash aksClusterName=\"<aksClusterName>\" aksResourceGroup=\"<aksResourceGroup>\" appGatewayName=\"<appGatewayName>\" appGatewayResourceGroup=\"<appGatewayResourceGroup>\" # get aks vnet information nodeResourceGroup=$(az aks show -n $aksClusterName -g $aksResourceGroup -o tsv --query \"nodeResourceGroup\") aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query \"[0].name\") aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query \"id\") # get gateway vnet information appGatewaySubnetId=$(az network application-gateway show -n $appGatewayName -g $appGatewayResourceGroup -o tsv --query \"gatewayIpConfigurations[0].subnet.id\") appGatewayVnetName=$(az network vnet show --ids $appGatewaySubnetId -o tsv --query \"name\") appGatewayVnetId=$(az network vnet show --ids $appGatewaySubnetId -o tsv --query \"id\") # set up bi-directional peering between aks and gateway vnet az network vnet peering create -n gateway2aks \\ -g $appGatewayResourceGroup --vnet-name $appGatewayVnetName \\ --remote-vnet $aksVnetId \\ --allow-vnet-access az network vnet peering create -n aks2gateway \\ -g $nodeResourceGroup --vnet-name $aksVnetName \\ --remote-vnet $appGatewayVnetId \\ --allow-vnet-access ``` If you are using Azure CNI as the network plugin with AKS, then you are good to go. If you are using the kubenet network plugin, then jump to Kubenet to set up the route table. Network Plugin used with AKS With Azure CNI When using Azure CNI, every pod is assigned a VNET-routable private IP from the subnet. So, the Gateway should be able to reach the pods directly. With Kubenet When using kubenet mode, only nodes receive an IP address from the subnet. Pods are assigned IP addresses from the PodIPCidr, and a route table is created by AKS. This route table helps the packets destined for a pod IP reach the node which is hosting the pod. When packets leave Application Gateway instances, Application Gateway's subnet needs to be aware of these routes set up by AKS in the route table. A simple way to achieve this is by associating the same route table created by AKS to the Application Gateway's subnet. When AGIC starts up, it checks the AKS node resource group for the existence of the route table. If it exists, AGIC will try to assign the route table to the Application Gateway's subnet, given it doesn't already have a route table. If AGIC doesn't have permissions to any of the above resources, the operation will fail and an error will be logged in the AGIC pod logs. This association can also be performed manually: ```bash aksClusterName=\"<aksClusterName>\" aksResourceGroup=\"<aksResourceGroup>\" appGatewayName=\"<appGatewayName>\" appGatewayResourceGroup=\"<appGatewayResourceGroup>\" # find route table used by aks cluster nodeResourceGroup=$(az aks show -n $aksClusterName -g $aksResourceGroup -o tsv --query \"nodeResourceGroup\") routeTableId=$(az network route-table list -g $nodeResourceGroup --query \"[].id | [0]\" -o tsv) # get the application gateway's subnet appGatewaySubnetId=$(az network application-gateway show -n $appGatewayName -g $appGatewayResourceGroup -o tsv --query \"gatewayIpConfigurations[0].subnet.id\") # associate the route table to Application Gateway's subnet az network vnet subnet update \\ --ids $appGatewaySubnetId --route-table $routeTableId ``` Further Readings Peer the two virtual networks together Virtual network peering How to peer your networks from different subscription Use kubenet to configure networking Use CNI to configure networking Network concept for AKS and Kubernetes When to decide to use kubenet or CNI","title":"How to setup networking between Application Gateway and AKS"},
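A small sketch to verify the peering and route-table association created above, reusing the variables from the snippets:

```bash
# Peering state on both sides should be "Connected".
az network vnet peering list -g $appGatewayResourceGroup \
  --vnet-name $appGatewayVnetName -o table
az network vnet peering list -g $nodeResourceGroup \
  --vnet-name $aksVnetName -o table

# The gateway subnet should now reference the AKS route table.
az network vnet subnet show --ids $appGatewaySubnetId \
  --query "routeTable.id" -o tsv
```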
{"location":"how-tos/prevent-agic-from-overwriting/","text":"Preventing AGIC from removing certain rules NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Note: This feature is EXPERIMENTAL with limited support . Use with caution. By default AGIC assumes full ownership of the App Gateway it is linked to. AGIC version 0.8.0 and later allows retaining rules, which makes it possible to add a VMSS as a backend along with the AKS cluster. Please back up your App Gateway's configuration before enabling this setting: using the Azure Portal, navigate to your App Gateway instance from Export template click Download The zip file you downloaded will have JSON templates, bash, and PowerShell scripts you could use to restore App Gateway Example Scenario Let's look at an imaginary App Gateway, which manages traffic for 2 web sites: dev.contoso.com - hosted on a new AKS, using App Gateway and AGIC prod.contoso.com - hosted on an Azure VMSS With default settings, AGIC assumes 100% ownership of the App Gateway it is pointed to. AGIC overwrites all of App Gateway's configuration. If we were to manually create a listener for prod.contoso.com (on App Gateway), without defining it in the Kubernetes Ingress, AGIC will delete the prod.contoso.com config within seconds. To install AGIC and also serve prod.contoso.com from our VMSS machines, we must constrain AGIC to configuring dev.contoso.com only. This is facilitated by instantiating the following CRD: bash cat <<EOF | kubectl apply -f - apiVersion: \"appgw.ingress.k8s.io/v1\" kind: AzureIngressProhibitedTarget metadata: name: prod-contoso-com spec: hostname: prod.contoso.com EOF
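In addition to whole hostnames, a prohibited target can also be scoped to URL paths; here is a hedged sketch (the paths field and its values are illustrative, so verify them against the AzureIngressProhibitedTarget CRD schema for your AGIC version):

```bash
# Sketch: keep AGIC away from only the /payments/ subtree of prod.contoso.com.
cat <<EOF | kubectl apply -f -
apiVersion: "appgw.ingress.k8s.io/v1"
kind: AzureIngressProhibitedTarget
metadata:
  name: prod-contoso-com-payments
spec:
  hostname: prod.contoso.com
  paths:
  - /payments/*
EOF
```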
Constraining AGIC this way is facilitated by instantiating the following CRD : bash cat < # existing field resourceGroup: # existing field name: # existing field shared: true # <<<<< Add this field to enable shared App Gateway >>>>> Apply the Helm changes: Ensure the AzureIngressProhibitedTarget CRD is installed with: bash kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/ae695ef9bd05c8b708cedf6ff545595d0b7022dc/crds/AzureIngressProhibitedTarget.yaml Update Helm: bash helm upgrade \\ --recreate-pods \\ -f helm-config.yaml \\ ingress-azure oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure As a result, your AKS cluster will have a new instance of AzureIngressProhibitedTarget called prohibit-all-targets : bash kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml The object prohibit-all-targets , as the name implies, prohibits AGIC from changing config for any host and path. A Helm install with appgw.shared=true will deploy AGIC, but will not make any changes to App Gateway. Broaden permissions Since Helm with appgw.shared=true and the default prohibit-all-targets blocks AGIC from applying any config, broaden AGIC's permissions as follows: Create a new AzureIngressProhibitedTarget with your specific setup: bash cat < # existing field resourceGroup: # existing field name: # existing field shared: true # <<<<< Add this field to enable shared App Gateway >>>>> Apply the Helm changes: Ensure the AzureIngressProhibitedTarget CRD is installed with: bash kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/ae695ef9bd05c8b708cedf6ff545595d0b7022dc/crds/AzureIngressProhibitedTarget.yaml Update Helm: bash helm upgrade \\ --recreate-pods \\ -f helm-config.yaml \\ ingress-azure oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure As a result, your AKS cluster will have a new instance of AzureIngressProhibitedTarget called prohibit-all-targets : bash kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml The object prohibit-all-targets , as the name implies, prohibits AGIC from changing config for any host and path. A Helm install with appgw.shared=true will deploy AGIC, but will not make any changes to App Gateway.","title":"Enable with new AGIC installation"},{"location":"how-tos/prevent-agic-from-overwriting/#broaden-permissions","text":"Since Helm with appgw.shared=true and the default prohibit-all-targets blocks AGIC from applying any config, broaden AGIC's permissions as follows: Create a new AzureIngressProhibitedTarget with your specific setup: bash cat < applicationGatewayGroupName=\"\" applicationGatewayGroupId=$(az group show -g $applicationGatewayGroupName -o tsv --query \"id\") az ad sp create-for-rbac -n \"azure-k8s-metric-adapter-sp\" --role \"Monitoring Reader\" --scopes $applicationGatewayGroupId Now, we will deploy the Azure K8S Metric Adapter using the AAD service principal created above. ```bash kubectl create namespace custom-metrics # use values from the service principal created above to create the secret kubectl create secret generic azure-k8s-metrics-adapter -n custom-metrics \\ --from-literal=azure-tenant-id= \\ --from-literal=azure-client-id= \\ --from-literal=azure-client-secret= kubectl apply -f https://raw.githubusercontent.com/Azure/azure-k8s-metrics-adapter/master/deploy/adapter.yaml -n custom-metrics ``` We will create an ExternalMetric resource with name appgw-request-count-metric .
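Before creating it, you can confirm that the adapter deployed above is up and serving the external metrics API. A quick check, assuming the custom-metrics namespace used earlier:

```bash
# The adapter pod should reach the Running state before metrics can be served.
kubectl get pods -n custom-metrics
# The external metrics API group should also be registered with the API server.
kubectl api-versions | grep external.metrics.k8s.io
```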
The ExternalMetric resource will instruct the metric adapter to expose the AvgRequestCountPerHealthyHost metric for the myApplicationGateway resource in the myResourceGroup resource group. You can use the filter field to target a specific backend pool and backend HTTP setting in the Application Gateway. Copy and paste this YAML content into external-metric.yaml and apply it with kubectl apply -f external-metric.yaml . yaml apiVersion: azure.com/v1alpha2 kind: ExternalMetric metadata: name: appgw-request-count-metric spec: type: azuremonitor azure: resourceGroup: myResourceGroup # replace with your application gateway's resource group name resourceName: myApplicationGateway # replace with your application gateway's name resourceProviderNamespace: Microsoft.Network resourceType: applicationGateways metric: metricName: AvgRequestCountPerHealthyHost aggregation: Average filter: BackendSettingsPool eq '~' # optional You can now make a request to the metric server to see if our new metric is getting exposed: ```bash kubectl get --raw \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\" # Sample Output { \"kind\": \"ExternalMetricValueList\", \"apiVersion\": \"external.metrics.k8s.io/v1beta1\", \"metadata\": { \"selfLink\": \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\", }, \"items\": [ { \"metricName\": \"appgw-request-count-metric\", \"metricLabels\": null, \"timestamp\": \"2019-11-05T00:18:51Z\", \"value\": \"30\", }, ], } ``` Using the new metric to scale up our deployment Once we are able to expose appgw-request-count-metric through the metric server, we are ready to use the Horizontal Pod Autoscaler to scale up our target deployment. In the following example, we will target a sample deployment aspnet . We will scale up Pods when appgw-request-count-metric > 200 per Pod, up to a maximum of 10 Pods. Replace your target deployment name and apply the following autoscale configuration. Copy and paste this YAML content into autoscale-config.yaml and apply it with kubectl apply -f autoscale-config.yaml . yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: deployment-scaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: aspnet # replace with your deployment's name minReplicas: 1 maxReplicas: 10 metrics: - type: External external: metricName: appgw-request-count-metric targetAverageValue: 200 Test your configuration by using a load testing tool like Apache Bench: bash ab -n10000 http:///","title":"Scale your Applications using Application Gateway Metrics (Beta)"},{"location":"how-tos/scale-applications-using-appgw-metrics/#scale-your-applications-using-application-gateway-metrics-beta","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. As incoming traffic increases, it becomes crucial to scale up your applications based on demand. In the following tutorial, we explain how you can use Application Gateway's AvgRequestCountPerHealthyHost metric to scale up your application. AvgRequestCountPerHealthyHost is a measure of the average requests sent to a specific backend pool and backend HTTP setting combination. We are going to use the following two components: Azure K8S Metric Adapter - We will use the metric adapter to expose Application Gateway metrics through the metric server.
Horizontal Pod Autoscaler - We will use the HPA to consume Application Gateway metrics and target a deployment for scaling.","title":"Scale your Applications using Application Gateway Metrics (Beta)"},{"location":"how-tos/scale-applications-using-appgw-metrics/#setting-up-azure-k8s-metric-adapter","text":"We will first create an Azure AAD service principal and assign it Monitoring Reader access over the Application Gateway's resource group. Paste the following lines in your Azure Cloud Shell : bash applicationGatewayGroupName=\"\" applicationGatewayGroupId=$(az group show -g $applicationGatewayGroupName -o tsv --query \"id\") az ad sp create-for-rbac -n \"azure-k8s-metric-adapter-sp\" --role \"Monitoring Reader\" --scopes $applicationGatewayGroupId Now, we will deploy the Azure K8S Metric Adapter using the AAD service principal created above. ```bash kubectl create namespace custom-metrics","title":"Setting up Azure K8S Metric Adapter"},{"location":"how-tos/scale-applications-using-appgw-metrics/#use-values-from-service-principle-created-above-to-create-secret","text":"kubectl create secret generic azure-k8s-metrics-adapter -n custom-metrics \\ --from-literal=azure-tenant-id= \\ --from-literal=azure-client-id= \\ --from-literal=azure-client-secret= kubectl apply -f https://raw.githubusercontent.com/Azure/azure-k8s-metrics-adapter/master/deploy/adapter.yaml -n custom-metrics ``` We will create an ExternalMetric resource with name appgw-request-count-metric . The ExternalMetric resource will instruct the metric adapter to expose the AvgRequestCountPerHealthyHost metric for the myApplicationGateway resource in the myResourceGroup resource group. You can use the filter field to target a specific backend pool and backend HTTP setting in the Application Gateway. Copy and paste this YAML content into external-metric.yaml and apply it with kubectl apply -f external-metric.yaml .
yaml apiVersion: azure.com/v1alpha2 kind: ExternalMetric metadata: name: appgw-request-count-metric spec: type: azuremonitor azure: resourceGroup: myResourceGroup # replace with your application gateway's resource group name resourceName: myApplicationGateway # replace with your application gateway's name resourceProviderNamespace: Microsoft.Network resourceType: applicationGateways metric: metricName: AvgRequestCountPerHealthyHost aggregation: Average filter: BackendSettingsPool eq '~' # optional You can now make a request to the metric server to see if our new metric is getting exposed: ```bash kubectl get --raw \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\"","title":"use values from service principal created above to create secret"},{"location":"how-tos/scale-applications-using-appgw-metrics/#sample-output","text":"{ \"kind\": \"ExternalMetricValueList\", \"apiVersion\": \"external.metrics.k8s.io/v1beta1\", \"metadata\": { \"selfLink\": \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\" }, \"items\": [ { \"metricName\": \"appgw-request-count-metric\", \"metricLabels\": null, \"timestamp\": \"2019-11-05T00:18:51Z\", \"value\": \"30\" } ] } ```","title":"Sample Output"},{"location":"how-tos/scale-applications-using-appgw-metrics/#using-the-new-metric-to-scale-up-our-deployment","text":"Once we are able to expose appgw-request-count-metric through the metric server, we are ready to use the Horizontal Pod Autoscaler to scale up our target deployment. In the following example, we will target a sample deployment aspnet . We will scale up Pods when appgw-request-count-metric > 200 per Pod, up to a maximum of 10 Pods. Replace your target deployment name and apply the following autoscale configuration.
Copy and paste this YAML content into autoscale-config.yaml and apply it with kubectl apply -f autoscale-config.yaml . yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: deployment-scaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: aspnet # replace with your deployment's name minReplicas: 1 maxReplicas: 10 metrics: - type: External external: metricName: appgw-request-count-metric targetAverageValue: 200 Test your configuration by using a load testing tool like Apache Bench: bash ab -n10000 http:///","title":"Using the new metric to scale up our deployment"},{"location":"how-tos/websockets/","text":"Expose a WebSocket server As outlined in the Application Gateway v2 documentation - it provides native support for the WebSocket and HTTP/2 protocols .
Please note that for both Application Gateway and the Kubernetes Ingress there is no user-configurable setting to selectively enable or disable WebSocket support. The Kubernetes deployment YAML below shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: websocket-server spec: selector: matchLabels: app: ws-app replicas: 2 template: metadata: labels: app: ws-app spec: containers: - name: websocket-app imagePullPolicy: Always image: your-container-repo.azurecr.io/websockets-app ports: - containerPort: 8888 imagePullSecrets: - name: azure-container-registry-credentials apiVersion: v1 kind: Service metadata: name: websocket-app-service spec: selector: app: ws-app ports: - protocol: TCP port: 80 targetPort: 8888 apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-repeater annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: ws.contoso.com http: paths: - backend: service: name: websocket-app-service port: number: 80 ``` Given that all the prerequisites are fulfilled, and you have an App Gateway controlled by a Kubernetes Ingress in your AKS cluster, the deployment above would result in a WebSocket server exposed on port 80 of your App Gateway's public IP and the ws.contoso.com domain. The following cURL command would test the WebSocket server deployment: sh curl -i -N -H \"Connection: Upgrade\" \\ -H \"Upgrade: websocket\" \\ -H \"Origin: http://localhost\" \\ -H \"Host: ws.contoso.com\" \\ -H \"Sec-Websocket-Version: 13\" \\ -H \"Sec-WebSocket-Key: 123\" \\ http://1.2.3.4:80/ws WebSocket Health Probes If your deployment does not explicitly define health probes, App Gateway will attempt an HTTP GET on your WebSocket server endpoint. Depending on the server implementation ( here is one we love ), WebSocket-specific headers may be required ( Sec-Websocket-Version for instance). Since App Gateway does not add WebSocket headers, your WebSocket server's response to the App Gateway health probe will most likely be 400 Bad Request . As a result, App Gateway will mark your pods as unhealthy, which will eventually result in a 502 Bad Gateway for the consumers of the WebSocket server. To avoid this, you may need to add an HTTP GET handler for a health check to your server ( /health for instance, which returns 200 OK ).","title":"Websockets"},{"location":"how-tos/websockets/#expose-a-websocket-server","text":"As outlined in the Application Gateway v2 documentation - it provides native support for the WebSocket and HTTP/2 protocols . Please note that for both Application Gateway and the Kubernetes Ingress there is no user-configurable setting to selectively enable or disable WebSocket support. The Kubernetes deployment YAML below shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: websocket-server spec: selector: matchLabels: app: ws-app replicas: 2 template: metadata: labels: app: ws-app spec: containers: - name: websocket-app imagePullPolicy: Always image: your-container-repo.azurecr.io/websockets-app ports: - containerPort: 8888 imagePullSecrets: - name: azure-container-registry-credentials apiVersion: v1 kind: Service metadata: name: websocket-app-service spec: selector: app: ws-app ports: - protocol: TCP port: 80 targetPort: 8888 apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-repeater annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: ws.contoso.com http: paths: - backend: service: name: websocket-app-service port: number: 80 ``` Given that all the prerequisites are fulfilled, and you have an App Gateway controlled by a Kubernetes Ingress in your AKS cluster, the deployment above would result in a WebSocket server exposed on port 80 of your App Gateway's public IP and the ws.contoso.com domain. The following cURL command would test the WebSocket server deployment: sh curl -i -N -H \"Connection: Upgrade\" \\ -H \"Upgrade: websocket\" \\ -H \"Origin: http://localhost\" \\ -H \"Host: ws.contoso.com\" \\ -H \"Sec-Websocket-Version: 13\" \\ -H \"Sec-WebSocket-Key: 123\" \\ http://1.2.3.4:80/ws","title":"Expose a WebSocket server"},{"location":"how-tos/websockets/#websocket-health-probes","text":"If your deployment does not explicitly define health probes, App Gateway will attempt an HTTP GET on your WebSocket server endpoint. Depending on the server implementation ( here is one we love ), WebSocket-specific headers may be required ( Sec-Websocket-Version for instance). Since App Gateway does not add WebSocket headers, your WebSocket server's response to the App Gateway health probe will most likely be 400 Bad Request . As a result, App Gateway will mark your pods as unhealthy, which will eventually result in a 502 Bad Gateway for the consumers of the WebSocket server. To avoid this, you may need to add an HTTP GET handler for a health check to your server ( /health for instance, which returns 200 OK ).","title":"WebSocket Health Probes"},{"location":"setup/install/","text":"Prerequisites Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. AGIC charts have been moved to MCR. Use oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure as the target repository. You need to complete the following tasks prior to deploying AGIC on your cluster: Prepare your Azure subscription and your az-cli client. ```bash # Sign in to your Azure subscription. SUBSCRIPTION_ID=' ' az login az account set --subscription $SUBSCRIPTION_ID # Register required resource providers on Azure. az provider register --namespace Microsoft.ContainerService az provider register --namespace Microsoft.Network ``` Set up an AKS cluster for your workload. The AKS cluster should have the workload identity feature enabled. Learn how to enable workload identity on an existing AKS cluster.
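To check whether an existing cluster already has the OIDC issuer and workload identity features turned on, a quick sketch (the JMESPath query paths below are an assumption based on the current az aks output shape; adjust if your CLI version reports these fields differently):

```bash
AKS_NAME=' '
RESOURCE_GROUP=' '
# Both columns should read "True" on a cluster that is ready for AGIC with Workload Identity.
az aks show -g $RESOURCE_GROUP -n $AKS_NAME \
    --query "{oidcIssuerEnabled: oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}" \
    -o table
```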
If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identity can be enabled via the following: ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait ``` If you don't have an existing cluster, use the following commands to create a new AKS cluster with the OIDC issuer and workload identity enabled. ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' LOCATION='northeurope' VM_SIZE=' ' # The size needs to be available in your location az group create --name $RESOURCE_GROUP --location $LOCATION az aks create \\ --resource-group $RESOURCE_GROUP \\ --name $AKS_NAME \\ --location $LOCATION \\ --node-vm-size $VM_SIZE \\ --network-plugin azure \\ --enable-oidc-issuer \\ --enable-workload-identity \\ --generate-ssh-keys ``` Install Helm Helm is an open-source packaging tool that is used to install AGIC. Helm is already available in Azure Cloud Shell, so no additional Helm installation is necessary there. bash curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash Deploy or Use existing Application Gateway If using an existing Application Gateway, make sure of the following: Set the environment variable. bash APPGW_ID=\"\" Follow the steps here to make sure the AppGW VNet is correctly set up, i.e. it is either using the same VNet as AKS or a peered one. If you don't have an existing Application Gateway, use the following commands to create a new one. Set up environment variables ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' LOCATION=\" \" APPGW_NAME=\"application-gateway\" APPGW_SUBNET_NAME=\"appgw-subnet\" ``` Deploy subnet for Application Gateway ```bash nodeResourceGroup=$(az aks show -n $AKS_NAME -g $RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query \"[0].name\") aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query \"id\") az network vnet subnet create \\ --resource-group $nodeResourceGroup \\ --vnet-name $aksVnetName \\ --name $APPGW_SUBNET_NAME \\ --address-prefixes \"10.226.0.0/23\" APPGW_SUBNET_ID=$(az network vnet subnet list --resource-group $nodeResourceGroup --vnet-name $aksVnetName --query \"[?name=='$APPGW_SUBNET_NAME'].id\" --output tsv) ``` Deploy Application Gateway ```bash az network application-gateway create \\ --name $APPGW_NAME \\ --location $LOCATION \\ --resource-group $RESOURCE_GROUP \\ --subnet $APPGW_SUBNET_ID \\ --capacity 2 \\ --sku Standard_v2 \\ --http-settings-cookie-based-affinity Disabled \\ --frontend-port 80 \\ --http-settings-port 80 \\ --http-settings-protocol Http \\ --public-ip-address appgw-ip \\ --priority 10 APPGW_ID=$(az network application-gateway show --name $APPGW_NAME --resource-group $RESOURCE_GROUP --query \"id\" --output tsv) ``` Install Application Gateway Ingress Controller Set up environment variables ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' LOCATION=\" \" IDENTITY_RESOURCE_NAME='agic-identity' ``` Create a user-assigned managed identity for the AGIC controller and federate the identity as a Workload Identity for use in the AKS cluster.
```bash echo \"Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP\" az identity create --resource-group $RESOURCE_GROUP --name IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" (az identity show -g $RESOURCE_GROUP -n IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" (az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query clientId -otsv)\" echo \"Waiting 60 seconds to allow for replication of the identity...\" sleep 60 echo \"Set up federation with AKS OIDC issuer\" AKS_OIDC_ISSUER=\" (az aks show -n \" (az aks show -n \" AKS_NAME\" -g \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"azure-alb-identity\" \\ --identity-name \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"azure-alb-identity\" \\ --identity-name \" IDENTITY_RESOURCE_NAME\" \\ --resource-group RESOURCE_GROUP \\ --issuer \" RESOURCE_GROUP \\ --issuer \" AKS_OIDC_ISSUER\" \\ --subject \"system:serviceaccount:default:ingress-azure\" resourceGroupId=$(az group show --name RESOURCE_GROUP --query id -otsv) nodeResourceGroup= RESOURCE_GROUP --query id -otsv) nodeResourceGroup= (az aks show -n $AKS_NAME -g RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= (az group show --name $nodeResourceGroup --query id -otsv) echo \"Apply role assignments to AGIC identity\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role \"Reader\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $nodeResourceGroupId --role \"Contributor\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $APPGW_ID --role \"Contributor\" ``` Assignment of the managed identity immediately after creation may result in an error that the principalId does not exist. Allow about a minute of time to elapse for the identity to replicate in Microsoft Entra ID prior to delegating the identity. 
Add the AGIC Helm repository: bash helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update Install the AGIC controller using Helm For new deployments AGIC can be installed by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ``` For existing deployments AGIC can be upgraded by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ``` Install a Sample App Now that we have App Gateway, AKS, and AGIC installed, we can install a sample app via Azure Cloud Shell : ```yaml cat < ``` If using an existing Application Gateway, make sure of the following: Set the environment variable. bash APPGW_ID=\"\" Follow the steps here to make sure the AppGW VNet is correctly set up, i.e. it is either using the same VNet as AKS or a peered one. If you don't have an existing Application Gateway, use the following commands to create a new one.
Set up environment variables ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' LOCATION=\" \" APPGW_NAME=\"application-gateway\" APPGW_SUBNET_NAME=\"appgw-subnet\" ``` Deploy subnet for Application Gateway ```bash nodeResourceGroup=$(az aks show -n $AKS_NAME -g $RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query \"[0].name\") aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query \"id\") az network vnet subnet create \\ --resource-group $nodeResourceGroup \\ --vnet-name $aksVnetName \\ --name $APPGW_SUBNET_NAME \\ --address-prefixes \"10.226.0.0/23\" APPGW_SUBNET_ID=$(az network vnet subnet list --resource-group $nodeResourceGroup --vnet-name $aksVnetName --query \"[?name=='$APPGW_SUBNET_NAME'].id\" --output tsv) ``` Deploy Application Gateway ```bash az network application-gateway create \\ --name $APPGW_NAME \\ --location $LOCATION \\ --resource-group $RESOURCE_GROUP \\ --subnet $APPGW_SUBNET_ID \\ --capacity 2 \\ --sku Standard_v2 \\ --http-settings-cookie-based-affinity Disabled \\ --frontend-port 80 \\ --http-settings-port 80 \\ --http-settings-protocol Http \\ --public-ip-address appgw-ip \\ --priority 10 APPGW_ID=$(az network application-gateway show --name $APPGW_NAME --resource-group $RESOURCE_GROUP --query \"id\" --output tsv) ```","title":"Deploy or Use existing Application Gateway"},{"location":"setup/install/#install-application-gateway-ingress-controller","text":"Set up environment variables ```bash AKS_NAME=' ' RESOURCE_GROUP=' ' LOCATION=\" \" IDENTITY_RESOURCE_NAME='agic-identity' ``` Create a user-assigned managed identity for the AGIC controller and federate the identity as a Workload Identity for use in the AKS cluster.
```bash echo \"Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP\" az identity create --resource-group $RESOURCE_GROUP --name IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" (az identity show -g $RESOURCE_GROUP -n IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" (az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query clientId -otsv)\" echo \"Waiting 60 seconds to allow for replication of the identity...\" sleep 60 echo \"Set up federation with AKS OIDC issuer\" AKS_OIDC_ISSUER=\" (az aks show -n \" (az aks show -n \" AKS_NAME\" -g \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"azure-alb-identity\" \\ --identity-name \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"azure-alb-identity\" \\ --identity-name \" IDENTITY_RESOURCE_NAME\" \\ --resource-group RESOURCE_GROUP \\ --issuer \" RESOURCE_GROUP \\ --issuer \" AKS_OIDC_ISSUER\" \\ --subject \"system:serviceaccount:default:ingress-azure\" resourceGroupId=$(az group show --name RESOURCE_GROUP --query id -otsv) nodeResourceGroup= RESOURCE_GROUP --query id -otsv) nodeResourceGroup= (az aks show -n $AKS_NAME -g RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= (az group show --name $nodeResourceGroup --query id -otsv) echo \"Apply role assignments to AGIC identity\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role \"Reader\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $nodeResourceGroupId --role \"Contributor\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $APPGW_ID --role \"Contributor\" ``` Assignment of the managed identity immediately after creation may result in an error that the principalId does not exist. Allow about a minute of time to elapse for the identity to replicate in Microsoft Entra ID prior to delegating the identity. 
Add the AGIC Helm repository: bash helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update Install the AGIC controller using Helm","title":"Install Application Gateway Ingress Controller"},{"location":"setup/install/#for-new-deployments","text":"AGIC can be installed by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ```","title":"For new deployments"},{"location":"setup/install/#for-existing-deployments","text":"AGIC can be upgraded by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID=$APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ```","title":"For existing deployments"},{"location":"setup/install/#install-a-sample-app","text":"Now that we have App Gateway, AKS, and AGIC installed, we can install a sample app via Azure Cloud Shell : ```yaml cat < ``` If you're using AGIC with version < v1.2.0-rc2 and AAD Pod Identity with version >= v1.6.0, an error as shown below will be raised due to a breaking change. AAD Pod Identity introduced a breaking change after v1.5.5 due to CRD fields being case sensitive. The error is caused by AAD Pod Identity fields not matching what AGIC uses; more details of the mismatch under analysis of the issue . AAD Pod Identity v1.5 and lower have known issues with AKS' most recent base images, and therefore AKS has asked customers to upgrade to AAD Pod Identity v1.6 or higher.
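To see which AAD Pod Identity version is actually running in your cluster, you can inspect the deployed container images. A sketch; it simply greps pod images, so it works regardless of how AAD Pod Identity was installed:

```bash
# Print the aad-pod-identity images (MIC/NMI) deployed in the cluster;
# the image tag indicates the version, e.g. ...mic:v1.6.0.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
  | grep aad-pod-identity | sort -u
```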
AGIC Pod Logs bash E0428 16:57:55.669130 1 client.go:132] Possible reasons: AKS Service Principal requires 'Managed Identity Operator' access on Controller Identity; 'identityResourceID' and/or 'identityClientID' are incorrect in the Helm config; AGIC Identity requires 'Contributor' access on Application Gateway and 'Reader' access on Application Gateway's Resource Group; E0428 16:57:55.669160 1 client.go:145] Unexpected ARM status code on GET existing App Gateway config: 403 E0428 16:57:55.669167 1 client.go:148] Failed fetching config for App Gateway instance. Will retry in 10s. Error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/4c4aee1a-cfd4-4e7a-abe3-*******/resourceGroups/RG-NAME-DEV/providers/Microsoft.Network/applicationGateways/AG-NAME-DEV?api-version=2019-09-01: StatusCode=403 -- Original Error: adal: Refresh request failed. Status Code = '403'. Response body: getting assigned identities for pod default/agile-opossum-ingress-azure-579cbb6b89-sldr5 in CREATED state failed after 16 attempts, retry duration [5]s. Error: MIC Pod Logs bash E0427 00:13:26.222815 1 mic.go:899] Ignoring azure identity default/agic-azid-ingress-azure, error: Invalid resource id: \"\", must match /subscriptions//resourcegroups//providers/Microsoft.ManagedIdentity/userAssignedIdentities/ Analysis of the issue AAD breaking change details For AzureIdentity and AzureIdentityBinding created using AAD Pod Identity v1.6.0+, the following fields changed. AzureIdentity (pre-1.6.0 -> v1.6.0+): ClientID -> clientID, ClientPassword -> clientPassword, ResourceID -> resourceID, TenantID -> tenantID. AzureIdentityBinding (pre-1.6.0 -> v1.6.0+): AzureIdentity -> azureIdentity, Selector -> selector. NOTE: AKS recommends using AAD Pod Identity version >= 1.6.0 AGIC fix to adapt to the breaking change Updated AGIC Helm templates to use the right fields regarding AAD Pod Identity, PR for reference. Resolving the issue It's recommended you upgrade your AGIC to release 1.2.0 and then apply AAD Pod Identity version >= 1.6.0 Upgrade AGIC to 1.2.0 AGIC version v1.2.0 will be required. ```bash # https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/how-tos/helm-upgrade.md # --reuse-values: when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' is specified, this is ignored helm repo update # check the latest release version of AGIC helm search repo -l application-gateway-kubernetes-ingress # install release 1.2.0 helm upgrade \\ \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0 --reuse-values ``` Note: If you're upgrading from v1.0.0 or below, you'll have to delete AGIC and then reinstall with v1.2.0. Install the right version of AAD Pod Identity AKS recommends upgrading the Azure Active Directory Pod Identity version on your Azure Kubernetes Service clusters to v1.6. AAD Pod Identity v1.5 or lower has a known issue with AKS' most recent base images.
To install AAD Pod Identity with version v1.6.0: RBAC enabled AKS cluster bash kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment-rbac.yaml RBAC disabled AKS cluster bash kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment.yaml","title":"Troubleshooting agic fails with aad pod identity breakingchange"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#troubleshooting-agic-v120-rc1-and-below-fails-with-a-breaking-change-introduced-in-aad-pod-identity-v16","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Troubleshooting: AGIC v1.2.0-rc1 and below fails with a breaking change introduced in AAD Pod Identity v1.6"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#overview","text":"If you're using AGIC with version < v1.2.0-rc2 and AAD Pod Identity with version >= v1.6.0, an error as shown below will be raised due to a breaking change. AAD Pod Identity introduced a breaking change after v1.5.5 due to CRD fields being case sensitive. The error is caused by AAD Pod Identity fields not matching what AGIC uses; more details of the mismatch under analysis of the issue . AAD Pod Identity v1.5 and lower have known issues with AKS' most recent base images, and therefore AKS has asked customers to upgrade to AAD Pod Identity v1.6 or higher. AGIC Pod Logs bash E0428 16:57:55.669130 1 client.go:132] Possible reasons: AKS Service Principal requires 'Managed Identity Operator' access on Controller Identity; 'identityResourceID' and/or 'identityClientID' are incorrect in the Helm config; AGIC Identity requires 'Contributor' access on Application Gateway and 'Reader' access on Application Gateway's Resource Group; E0428 16:57:55.669160 1 client.go:145] Unexpected ARM status code on GET existing App Gateway config: 403 E0428 16:57:55.669167 1 client.go:148] Failed fetching config for App Gateway instance. Will retry in 10s. Error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/4c4aee1a-cfd4-4e7a-abe3-*******/resourceGroups/RG-NAME-DEV/providers/Microsoft.Network/applicationGateways/AG-NAME-DEV?api-version=2019-09-01: StatusCode=403 -- Original Error: adal: Refresh request failed. Status Code = '403'. Response body: getting assigned identities for pod default/agile-opossum-ingress-azure-579cbb6b89-sldr5 in CREATED state failed after 16 attempts, retry duration [5]s. 
Error: MIC Pod Logs bash E0427 00:13:26.222815 1 mic.go:899] Ignoring azure identity default/agic-azid-ingress-azure, error: Invalid resource id: \"\", must match /subscriptions//resourcegroups//providers/Microsoft.ManagedIdentity/userAssignedIdentities/","title":"Overview"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#analysis-of-the-issue","text":"","title":"Analysis of the issue"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#aad-breaking-change-details","text":"For AzureIdentity and AzureIdentityBinding created using AAD Pod Identity v1.6.0+, the following fields changed. AzureIdentity (pre-1.6.0 -> v1.6.0+): ClientID -> clientID, ClientPassword -> clientPassword, ResourceID -> resourceID, TenantID -> tenantID. AzureIdentityBinding (pre-1.6.0 -> v1.6.0+): AzureIdentity -> azureIdentity, Selector -> selector. NOTE: AKS recommends using AAD Pod Identity version >= 1.6.0","title":"AAD breaking change details"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#agic-fix-to-adapt-to-the-breaking-change","text":"Updated AGIC Helm templates to use the right fields regarding AAD Pod Identity, PR for reference.","title":"AGIC fix to adapt to the breaking change"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#resolving-the-issue","text":"It's recommended you upgrade your AGIC to release 1.2.0 and then apply AAD Pod Identity version >= 1.6.0","title":"Resolving the issue"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#upgrade-agic-to-120","text":"AGIC version v1.2.0 will be required. ```bash # https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/how-tos/helm-upgrade.md # --reuse-values: when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' is specified, this is ignored helm repo update # check the latest release version of AGIC helm search repo -l application-gateway-kubernetes-ingress # install release 1.2.0 helm upgrade \\ \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0 --reuse-values ``` Note: If you're upgrading from v1.0.0 or below, you'll have to delete AGIC and then reinstall with v1.2.0.","title":"Upgrade AGIC to 1.2.0"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#install-the-right-version-of-aad-pod-identity","text":"AKS recommends upgrading the Azure Active Directory Pod Identity version on your Azure Kubernetes Service clusters to v1.6.
AAD Pod Identity v1.5 or lower has a known issue with AKS' most recent base images. To install AAD Pod Identity with version v1.6.0: RBAC enabled AKS cluster bash kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment-rbac.yaml RBAC disabled AKS cluster bash kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment.yaml","title":"Install the right version of AAD Pod Identity"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/","text":"Troubleshooting: AGIC pod stuck in not ready state NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Illustration If the AGIC pod is stuck in a not-ready state, you will be seeing something like the following: ```bash $ kubectl get pods NAME READY STATUS RESTARTS AGE 0/1 Running 0 19s mic-774b9c5d7b-z4z8p 1/1 Running 1 15m mic-774b9c5d7b-zrdsm 1/1 Running 1 15m nmi-pv8ch ``` Common causes Stuck at creating authorizer Stuck getting Application Gateway AGIC is stuck at creating authorizer When the AGIC pod starts, in one of the steps, AGIC tries to get an AAD (Azure Active Directory) token for the identity assigned to it. This token is then used to perform updates on the Application Gateway. This identity can be of two types: User Assigned Identity Service Principal When using a User Assigned identity with AGIC, AGIC has a dependency on AAD Pod Identity . When you see your AGIC pod stuck at the Creating Authorizer step, the issue could be related to the setup of the user assigned identity and AAD Pod Identity. bash $ kubectl logs ERROR: logging before flag.Parse: I0628 18:09:49.947221 1 utils.go:115] Using verbosity level 3 from environment variable APPGW_VERBOSITY_LEVEL I0628 18:09:49.987776 1 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. I0628 18:09:49.987861 1 main.go:128] Application Gateway Details: Subscription=\"xxxx\" Resource Group=\"resgp\" Name=\"gateway\" I0628 18:09:49.987873 1 auth.go:46] Creating authorizer from Azure Managed Service Identity I0628 18:09:49.987945 1 httpserver.go:57] Starting API Server on :8123 AAD Pod Identity is responsible for assigning the user assigned identity provided by the user for AGIC as AGIC's identity to the underlying AKS nodes and setting up the iptables rules to allow AGIC to get an AAD token from the Instance Metadata Service on the VM. When you install AAD Pod Identity on your AKS cluster, it will deploy two components: Managed Identity Controller (MIC): It runs with multiple replicas and one Pod is elected leader . It is responsible for assigning the identity to the AKS nodes. Node Managed Identity (NMI): It runs as a daemon on every node . It is responsible for enforcing the iptables rules that allow AGIC to GET the access token. For further reading on how these components work, you can go through this readme . Here is a concept diagram on the project page. Now, in order to debug the authorizer issue further, we need to get the logs for the mic and nmi pods. These pods usually start with mic and nmi as the prefix. We should first investigate the logs of mic and then nmi . ```bash $ kubectl get pods NAME READY STATUS RESTARTS AGE mic-774b9c5d7b-z4z8p 1/1 Running 1 15m mic-774b9c5d7b-zrdsm 1/1 Running 1 15m nmi-pv8ch 1/1 Running 1 15m ``` Issue in MIC Pod For the mic pods, we will need to find the leader.
An easy way to find the leader is by looking at the log size; the leader pod is the one actively doing work. The MIC pod communicates with Azure Resource Manager (ARM) to assign the identity to the AKS nodes. If there are any issues in outbound connectivity, MIC can report TCP timeouts. Check your NSGs, UDRs and Firewall to make sure that you allow outbound traffic to Azure. bash Updating msis on node aks-agentpool-41724381-vmss, add [1], del [1], update[0] failed with error azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/xxxx/resourceGroups/resgp/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-41724381-vmss?api-version=2019-07-01: StatusCode=0 -- Original Error: adal: Failed to execute the refresh request. Error = 'Post \"https://login.microsoftonline.com//oauth2/token?api-version=1.0\": dial tcp: i/o timeout' You will see the following error if the AKS cluster's Service Principal is missing Managed Identity Operator access over the User Assigned identity. You can follow the role assignment related step in the brownfield document . bash Updating msis on node aks-agentpool-32587779-vmss, add [1], del [0] failed with error compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code=\"LinkedAuthorizationFailed\" Message=\"The client '' with object id '' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/xxxx/resourceGroups//providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-32587779-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxx/resourcegroups/resgp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/' or the linked scope(s) are invalid.\" Issue in NMI Pod For the nmi pods, we will need to find the pod running on the same node as the AGIC pod. If you see a 403 response for a token request, make sure you have correctly assigned the needed permissions to AGIC's identity : Reader access to Application Gateway's resource group. This is needed to list the resources in this resource group. Contributor access to Application Gateway. This is needed to perform updates on the Application Gateway. AGIC is stuck getting Application Gateway AGIC can be stuck in getting the gateway due to: AGIC gets NotFound when getting Application Gateway When you see this error: Verify that the gateway actually exists in the subscription and resource group printed in the AGIC logs. If you are deploying in a National Cloud or US Gov Cloud, this issue could be related to an incorrect environment endpoint setting. To configure it correctly, set the appgw.environment property in the Helm config. AGIC gets Unauthorized when getting Application Gateway Verify that you have given the needed permissions to AGIC's identity: Reader access to Application Gateway's resource group. This is needed to list the resources in this resource group. Contributor access to Application Gateway. This is needed to perform updates on the Application Gateway.","title":"Troubleshooting agic pod stuck in not ready state"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#troubleshooting-agic-pod-stuck-in-not-ready-state","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes.
Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Troubleshooting: AGIC pod stuck in not ready state"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#illustration","text":"If the AGIC pod is stuck in a not-ready state, you will be seeing something like the following: ```bash $ kubectl get pods NAME READY STATUS RESTARTS AGE 0/1 Running 0 19s mic-774b9c5d7b-z4z8p 1/1 Running 1 15m mic-774b9c5d7b-zrdsm 1/1 Running 1 15m nmi-pv8ch ```","title":"Illustration"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#common-causes","text":"Stuck at creating authorizer Stuck getting Application Gateway","title":"Common causes"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#agic-is-stuck-at-creating-authorizer","text":"When the AGIC pod starts, in one of the steps, AGIC tries to get an AAD (Azure Active Directory) token for the identity assigned to it. This token is then used to perform updates on the Application Gateway. This identity can be of two types: User Assigned Identity Service Principal When using a User Assigned identity with AGIC, AGIC has a dependency on AAD Pod Identity . When you see your AGIC pod stuck at the Creating Authorizer step, the issue could be related to the setup of the user assigned identity and AAD Pod Identity. bash $ kubectl logs ERROR: logging before flag.Parse: I0628 18:09:49.947221 1 utils.go:115] Using verbosity level 3 from environment variable APPGW_VERBOSITY_LEVEL I0628 18:09:49.987776 1 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. I0628 18:09:49.987861 1 main.go:128] Application Gateway Details: Subscription=\"xxxx\" Resource Group=\"resgp\" Name=\"gateway\" I0628 18:09:49.987873 1 auth.go:46] Creating authorizer from Azure Managed Service Identity I0628 18:09:49.987945 1 httpserver.go:57] Starting API Server on :8123 AAD Pod Identity is responsible for assigning the user assigned identity provided by the user for AGIC as AGIC's identity to the underlying AKS nodes and setting up the iptables rules to allow AGIC to get an AAD token from the Instance Metadata Service on the VM. When you install AAD Pod Identity on your AKS cluster, it will deploy two components: Managed Identity Controller (MIC): It runs with multiple replicas and one Pod is elected leader . It is responsible for assigning the identity to the AKS nodes. Node Managed Identity (NMI): It runs as a daemon on every node . It is responsible for enforcing the iptables rules that allow AGIC to GET the access token. For further reading on how these components work, you can go through this readme . Here is a concept diagram on the project page. Now, in order to debug the authorizer issue further, we need to get the logs for the mic and nmi pods. These pods usually start with mic and nmi as the prefix. We should first investigate the logs of mic and then nmi . ```bash $ kubectl get pods NAME READY STATUS RESTARTS AGE mic-774b9c5d7b-z4z8p 1/1 Running 1 15m mic-774b9c5d7b-zrdsm 1/1 Running 1 15m nmi-pv8ch 1/1 Running 1 15m ```","title":"AGIC is stuck at creating authorizer"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#issue-in-mic-pod","text":"For the mic pods, we will need to find the leader. An easy way to find the leader is by looking at the log size; the leader pod is the one actively doing work (see the sketch below). The MIC pod communicates with Azure Resource Manager (ARM) to assign the identity to the AKS nodes.
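A minimal sketch of that log-size heuristic; it assumes the component=mic label used by the AAD Pod Identity deployment manifests (adjust the selector if your install labels pods differently):

```bash
# Print each MIC pod together with its log line count;
# the pod with the noticeably larger log is typically the active leader.
for p in $(kubectl get pods -l component=mic -o name); do
  echo "$p: $(kubectl logs "$p" | wc -l) log lines"
done
```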
If there are any issues in outbound connectivity, MIC can report TCP timeouts. Check your NSGs, UDRs and Firewall to make sure that you allow outbound traffic to Azure. bash Updating msis on node aks-agentpool-41724381-vmss, add [1], del [1], update[0] failed with error azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/xxxx/resourceGroups/resgp/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-41724381-vmss?api-version=2019-07-01: StatusCode=0 -- Original Error: adal: Failed to execute the refresh request. Error = 'Post \"https://login.microsoftonline.com//oauth2/token?api-version=1.0\": dial tcp: i/o timeout' You will see the following error if the AKS cluster's Service Principal is missing Managed Identity Operator access over the User Assigned identity. You can follow the role assignment related step in the brownfield document . bash Updating msis on node aks-agentpool-32587779-vmss, add [1], del [0] failed with error compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code=\"LinkedAuthorizationFailed\" Message=\"The client '' with object id '' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/xxxx/resourceGroups//providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-32587779-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxx/resourcegroups/resgp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/' or the linked scope(s) are invalid.\"","title":"Issue in MIC Pod"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#issue-in-nmi-pod","text":"For the nmi pods, we will need to find the pod running on the same node as the AGIC pod. If you see a 403 response for a token request, make sure you have correctly assigned the needed permissions to AGIC's identity : Reader access to Application Gateway's resource group. This is needed to list the resources in this resource group. Contributor access to Application Gateway. This is needed to perform updates on the Application Gateway.","title":"Issue in NMI Pod"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#agic-is-stuck-getting-application-gateway","text":"AGIC can be stuck in getting the gateway due to: AGIC gets NotFound when getting Application Gateway When you see this error: Verify that the gateway actually exists in the subscription and resource group printed in the AGIC logs. If you are deploying in a National Cloud or US Gov Cloud, this issue could be related to an incorrect environment endpoint setting. To configure it correctly, set the appgw.environment property in the Helm config. AGIC gets Unauthorized when getting Application Gateway Verify that you have given the needed permissions to AGIC's identity: Reader access to Application Gateway's resource group. This is needed to list the resources in this resource group. Contributor access to Application Gateway. This is needed to perform updates on the Application Gateway.","title":"AGIC is stuck getting Application Gateway"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/","text":"Troubleshooting: Installing a simple application NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes.
Please consider leveraging Application Gateway for Containers for your next deployment. Azure Cloud Shell is the most convenient way to troubleshoot any problems with your AKS and AGIC installation. Launch your shell from shell.azure.com or by clicking the link: In the troubleshooting document, we will debug issues in the AGIC installation by installing a simple application step by step and check the output as we go along. The steps below assume: You have an AKS cluster, with Advanced Networking enabled AGIC has been installed on the AKS cluster You already hav an App Gateway on a VNET shared with your AKS cluster To verify that the App Gateway + AKS + AGIC installation is setup correctly, deploy the simplest possible app: ```bash cat < to verify that we have had a successful deployment. A successful deployment would have added the following lines to the log: I0927 22:34:51.281437 1 process.go:156] Applied App Gateway config in 20.461335266s I0927 22:34:51.281585 1 process.go:165] cache: Updated with latest applied config. I0927 22:34:51.282342 1 process.go:171] END AppGateway deployment Alternatively, from Cloud Shell we can retrieve only the lines indicating successful App Gateway configuration with kubectl logs | grep 'Applied App Gateway config in' , where should be the exact name of the AGIC pod. App Gateway will have the following configuration applied: Listener: Routing Rule: Backend Pool: There will be one IP address in the backend address pool and it will match the IP address of the Pod we observed earlier with kubectl get pods -o wide Finally we can use the cURL command from within Cloud Shell to establish an HTTP connection to the newly deployed app: Use kubectl get ingress to get the Public IP address of App Gateway Use curl -I -H 'Host: test.agic.contoso.com' A result of HTTP/1.1 200 OK indicates that the App Gateway + AKS + AGIC system is working as expected. Inspect Kubernetes Installation Pods, Services, Ingress Application Gateway Ingress Controller (AGIC) continuously monitors the following Kubernetes resources: Deployment or Pod , Service , Ingress The following must be in place for AGIC to function as expected: AKS must have one or more healthy pods . Verify this from Cloud Shell with kubectl get pods -o wide --show-labels If you have a Pod with an aspnetapp , your output may look like this: ```bash $> kubectl get pods -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS aspnetapp 1/1 Running 0 17h 10.0.0.6 aks-agentpool-35064155-1 app=aspnetapp ``` One or more services , referencing the pods above via matching selector labels. 
Verify this from Cloud Shell with kubectl get services -o wide ```bash $> kubectl get services -o wide --show-labels NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS aspnetapp ClusterIP 10.2.63.254 80/TCP 17h app=aspnetapp ``` Ingress , annotated with kubernetes.io/ingress.class: azure/application-gateway , referencing the service above Verify this from Cloud Shell with kubectl get ingress -o wide --show-labels ```bash $> kubectl get ingress -o wide --show-labels NAME HOSTS ADDRESS PORTS AGE LABELS aspnetapp * 80 17h ``` View annotations of the ingress above: kubectl get ingress aspnetapp -o yaml (substitute aspnetapp with the name of your ingress) ```bash $> kubectl get ingress aspnetapp -o yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: azure/application-gateway name: aspnetapp spec: defaultBackend: service: name: aspnetapp port: number: 80 ``` The ingress resource must be annotated with kubernetes.io/ingress.class: azure/application-gateway . Verify Observed Nampespace Get the existing namespaces in Kubernetes cluster. What namespace is your app running in? Is AGIC watching that namespace? Refer to the Multiple Namespace Support documentation on how to properly configure observed namespaces. ```bash What namespaces exist on your cluster kubectl get namespaces What pods are currently running kubectl get pods --all-namespaces -o wide ``` The AGIC pod should be in the default namespace (see column NAMESPACE ). A healthy pod would have Running in the STATUS column. There should be at least one AGIC pod. ```bash Get a list of the Application Gateway Ingress Controller pods kubectl get pods --all-namespaces --selector app=ingress-azure ``` If the AGIC pod is not healthy ( STATUS column from the command above is not Running ): get logs to understand why: kubectl logs for the previous instance of the pod: kubectl logs --previous describe the pod to get more context: kubectl describe pod Do you have a Kubernetes Service and Ingress resources? ```bash Get all services across all namespaces kubectl get service --all-namespaces -o wide Get all ingress resources across all namespaces kubectl get ingress --all-namespaces -o wide ``` Is your Ingress annotated with: kubernetes.io/ingress.class: azure/application-gateway ? AGIC will only watch for Kubernetes Ingress resources that have this annotation. ```bash Get the YAML definition of a particular ingress resource kubectl get ingress --namespace -o yaml ``` AGIC emits Kubernetes events for certain critical errors. You can view these: in your terminal via kubectl get events --sort-by=.metadata.creationTimestamp in your browser using the Kubernetes Web UI (Dashboard)","title":"Troubleshooting installing a simple application"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#troubleshooting-installing-a-simple-application","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Azure Cloud Shell is the most convenient way to troubleshoot any problems with your AKS and AGIC installation. Launch your shell from shell.azure.com or by clicking the link: In the troubleshooting document, we will debug issues in the AGIC installation by installing a simple application step by step and check the output as we go along. 
The steps below assume: You have an AKS cluster, with Advanced Networking enabled AGIC has been installed on the AKS cluster You already hav an App Gateway on a VNET shared with your AKS cluster To verify that the App Gateway + AKS + AGIC installation is setup correctly, deploy the simplest possible app: ```bash cat < to verify that we have had a successful deployment. A successful deployment would have added the following lines to the log: I0927 22:34:51.281437 1 process.go:156] Applied App Gateway config in 20.461335266s I0927 22:34:51.281585 1 process.go:165] cache: Updated with latest applied config. I0927 22:34:51.282342 1 process.go:171] END AppGateway deployment Alternatively, from Cloud Shell we can retrieve only the lines indicating successful App Gateway configuration with kubectl logs | grep 'Applied App Gateway config in' , where should be the exact name of the AGIC pod. App Gateway will have the following configuration applied: Listener: Routing Rule: Backend Pool: There will be one IP address in the backend address pool and it will match the IP address of the Pod we observed earlier with kubectl get pods -o wide Finally we can use the cURL command from within Cloud Shell to establish an HTTP connection to the newly deployed app: Use kubectl get ingress to get the Public IP address of App Gateway Use curl -I -H 'Host: test.agic.contoso.com' A result of HTTP/1.1 200 OK indicates that the App Gateway + AKS + AGIC system is working as expected.","title":"Troubleshooting: Installing a simple application"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#inspect-kubernetes-installation","text":"","title":"Inspect Kubernetes Installation"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#pods-services-ingress","text":"Application Gateway Ingress Controller (AGIC) continuously monitors the following Kubernetes resources: Deployment or Pod , Service , Ingress The following must be in place for AGIC to function as expected: AKS must have one or more healthy pods . Verify this from Cloud Shell with kubectl get pods -o wide --show-labels If you have a Pod with an aspnetapp , your output may look like this: ```bash $> kubectl get pods -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS aspnetapp 1/1 Running 0 17h 10.0.0.6 aks-agentpool-35064155-1 app=aspnetapp ``` One or more services , referencing the pods above via matching selector labels. 
Verify this from Cloud Shell with kubectl get services -o wide ```bash $> kubectl get services -o wide --show-labels NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS aspnetapp ClusterIP 10.2.63.254 80/TCP 17h app=aspnetapp ``` Ingress , annotated with kubernetes.io/ingress.class: azure/application-gateway , referencing the service above Verify this from Cloud Shell with kubectl get ingress -o wide --show-labels ```bash $> kubectl get ingress -o wide --show-labels NAME HOSTS ADDRESS PORTS AGE LABELS aspnetapp * 80 17h ``` View annotations of the ingress above: kubectl get ingress aspnetapp -o yaml (substitute aspnetapp with the name of your ingress) ```bash $> kubectl get ingress aspnetapp -o yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: azure/application-gateway name: aspnetapp spec: defaultBackend: service: name: aspnetapp port: number: 80 ``` The ingress resource must be annotated with kubernetes.io/ingress.class: azure/application-gateway .","title":"Pods, Services, Ingress"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#verify-observed-nampespace","text":"Get the existing namespaces in Kubernetes cluster. What namespace is your app running in? Is AGIC watching that namespace? Refer to the Multiple Namespace Support documentation on how to properly configure observed namespaces. ```bash","title":"Verify Observed Nampespace"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#what-namespaces-exist-on-your-cluster","text":"kubectl get namespaces","title":"What namespaces exist on your cluster"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#what-pods-are-currently-running","text":"kubectl get pods --all-namespaces -o wide ``` The AGIC pod should be in the default namespace (see column NAMESPACE ). A healthy pod would have Running in the STATUS column. There should be at least one AGIC pod. ```bash","title":"What pods are currently running"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#get-a-list-of-the-application-gateway-ingress-controller-pods","text":"kubectl get pods --all-namespaces --selector app=ingress-azure ``` If the AGIC pod is not healthy ( STATUS column from the command above is not Running ): get logs to understand why: kubectl logs for the previous instance of the pod: kubectl logs --previous describe the pod to get more context: kubectl describe pod Do you have a Kubernetes Service and Ingress resources? ```bash","title":"Get a list of the Application Gateway Ingress Controller pods"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#get-all-services-across-all-namespaces","text":"kubectl get service --all-namespaces -o wide","title":"Get all services across all namespaces"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#get-all-ingress-resources-across-all-namespaces","text":"kubectl get ingress --all-namespaces -o wide ``` Is your Ingress annotated with: kubernetes.io/ingress.class: azure/application-gateway ? AGIC will only watch for Kubernetes Ingress resources that have this annotation. ```bash","title":"Get all ingress resources across all namespaces"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#get-the-yaml-definition-of-a-particular-ingress-resource","text":"kubectl get ingress --namespace -o yaml ``` AGIC emits Kubernetes events for certain critical errors. 
You can view these: in your terminal via kubectl get events --sort-by=.metadata.creationTimestamp in your browser using the Kubernetes Web UI (Dashboard)","title":"Get the YAML definition of a particular ingress resource"},{"location":"tutorials/tutorial.e2e-ssl/","text":"Tutorial: Setting up E2E SSL NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. In this tutorial, we will learn how to set up E2E SSL with AGIC on Application Gateway. We will Generate the frontend and the backend certificates Deploy a simple application with HTTPS Upload the backend certificate's root certificate to Application Gateway Setup ingress for E2E Note: The following tutorial makes use of test certificates generated with OpenSSL. These certificates are for illustration only and should be used solely in testing. Generate the frontend and the backend certificates Let's start by generating the certificates that we will be using for the frontend and backend SSL. First, we will generate the frontend certificate that will be presented to the clients connecting to the Application Gateway. This will have subject name CN=frontend . bash openssl ecparam -out frontend.key -name prime256v1 -genkey openssl req -new -sha256 -key frontend.key -out frontend.csr -subj \"/CN=frontend\" openssl x509 -req -sha256 -days 365 -in frontend.csr -signkey frontend.key -out frontend.crt Note: You can also use a certificate from Key Vault on Application Gateway for frontend SSL. Now, we will generate the backend certificate that will be presented by the backends to the Application Gateway. This will have subject name CN=backend . bash openssl ecparam -out backend.key -name prime256v1 -genkey openssl req -new -sha256 -key backend.key -out backend.csr -subj \"/CN=backend\" openssl x509 -req -sha256 -days 365 -in backend.csr -signkey backend.key -out backend.crt Finally, we will install the above certificates onto our Kubernetes cluster: bash kubectl create secret tls frontend-tls --key=\"frontend.key\" --cert=\"frontend.crt\" kubectl create secret tls backend-tls --key=\"backend.key\" --cert=\"backend.crt\" Here is the output after listing the secrets. ```bash kubectl get secrets NAME TYPE DATA AGE backend-tls kubernetes.io/tls 2 3m18s frontend-tls kubernetes.io/tls 2 3m18s ``` Deploy a simple application with HTTPS In this section, we will deploy a simple application exposing an HTTPS endpoint on port 8443.
```yaml apiVersion: v1 kind: Service metadata: name: website-service spec: selector: app: website ports: - protocol: TCP port: 8443 targetPort: 8443 apiVersion: apps/v1 kind: Deployment metadata: name: website-deployment spec: selector: matchLabels: app: website replicas: 2 template: metadata: labels: app: website spec: containers: - name: website imagePullPolicy: Always image: nginx:latest ports: - containerPort: 8443 volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume - mountPath: /etc/nginx/conf.d name: configmap-volume volumes: - name: secret-volume secret: secretName: backend-tls - name: configmap-volume configMap: name: website-nginx-cm apiVersion: v1 kind: ConfigMap metadata: name: website-nginx-cm data: default.conf: |- server { listen 8080 default_server; listen 8443 ssl; root /usr/share/nginx/html; index index.html; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; location / { return 200 \"Hello World!\"; } } ``` You can also install the above yamls using: bash kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-https-backend.yaml Verify that you can curl the application ```bash kubectl get pods NAME READY STATUS RESTARTS AGE website-deployment-9c8c6df7f-5bqwh 1/1 Running 0 24s website-deployment-9c8c6df7f-wxtnp 1/1 Running 0 24s kubectl exec -it website-deployment-9c8c6df7f-5bqwh -- curl -k https://localhost:8443 Hello World! ``` Upload the backend certificate's root certificate to Application Gateway When you are setting up SSL between Application Gateway and Backend, if you are using a self-signed certificate or a certificate signed by a custom root CA on the backend, then you need to upload self-signed or the Custom root CA of the backend certificate on the Application Gateway. bash applicationGatewayName=\"\" resourceGroup=\"\" az network application-gateway root-cert create \\ --gateway-name $applicationGatewayName \\ --resource-group $resourceGroup \\ --name backend-tls \\ --cert-file backend.crt Setup ingress for E2E Now, we will configure our ingress to use the frontend certificate for frontend SSL and backend certificate as root certificate so that Application Gateway can authenticate the backend. bash cat << EOF | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: website-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/ssl-redirect: \"true\" appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/backend-hostname: \"backend\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"backend-tls\" spec: tls: - secretName: frontend-tls hosts: - website.com rules: - host: website.com http: paths: - path: / backend: service: name: website-service port: number: 8443 pathType: Exact EOF For frontend SSL, we have added tls section in our ingress resource. yaml tls: - secretName: frontend-tls hosts: - website.com For backend SSL, we have added the following annotations: yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/backend-hostname: \"backend\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"backend-tls\" Here, it is important to note that backend-hostname should be the hostname that the backend will accept and it should also match with the Subject/Subject Alternate Name of the certificate used on the backend. 
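Before applying the ingress, you can sanity-check that the backend certificate generated earlier actually carries that subject (an optional verification step, not part of the original walkthrough):

```bash
# Print the subject of the backend certificate; it should show CN=backend
openssl x509 -in backend.crt -noout -subject
```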
After you have successfully completed all the above steps, you should be able to see the ingress's IP address and visit the website. ```bash kubectl get ingress NAME HOSTS ADDRESS PORTS AGE website-ingress website.com 80, 443 36m curl -k -H \"Host: website.com\" https:// Hello World! ```","title":"Tutorial: Setting up E2E SSL"},{"location":"tutorials/tutorial.e2e-ssl/#tutorial-setting-up-e2e-ssl","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. In this this tutorial, we will learn how to setup E2E SSL with AGIC on Application Gateway. We will Generate the frontend and the backend certificates Deploy a simple application with HTTPS Upload the backend certificate's root certificate to Application Gateway Setup ingress for E2E Note: Following tutorial makes use of test certificate generated using OpenSSL. These certificates are only for illustration and should be used in testing only.","title":"Tutorial: Setting up E2E SSL"},{"location":"tutorials/tutorial.e2e-ssl/#generate-the-frontend-and-the-backend-certificates","text":"Let's start by first generating the certificates that we will be using for the frontend and backend SSL. First, we will generate the frontend certificate that will be presented to the clients connecting to the Application Gateway. This will have subject name CN=frontend . bash openssl ecparam -out frontend.key -name prime256v1 -genkey openssl req -new -sha256 -key frontend.key -out frontend.csr -subj \"/CN=frontend\" openssl x509 -req -sha256 -days 365 -in frontend.csr -signkey frontend.key -out frontend.crt Note: You can also use a certificate present on the Key Vault on Application Gateway for frontend SSL. Now, we will generate the backend certificate that will be presented by the backends to the Application Gateway. This will have subject name CN=backend bash openssl ecparam -out backend.key -name prime256v1 -genkey openssl req -new -sha256 -key backend.key -out backend.csr -subj \"/CN=backend\" openssl x509 -req -sha256 -days 365 -in backend.csr -signkey backend.key -out backend.crt Finally, we will install the above certificates on to our kubernetes cluster bash kubectl create secret tls frontend-tls --key=\"frontend.key\" --cert=\"frontend.crt\" kubectl create secret tls backend-tls --key=\"backend.key\" --cert=\"backend.crt\" Here is output after listing the secrets. ```bash kubectl get secrets NAME TYPE DATA AGE backend-tls kubernetes.io/tls 2 3m18s frontend-tls kubernetes.io/tls 2 3m18s ```","title":"Generate the frontend and the backend certificates"},{"location":"tutorials/tutorial.e2e-ssl/#deploy-a-simple-application-with-https","text":"In this section, we will deploy a simple application exposing an HTTPS endpoint on port 8443. 
```yaml apiVersion: v1 kind: Service metadata: name: website-service spec: selector: app: website ports: - protocol: TCP port: 8443 targetPort: 8443 apiVersion: apps/v1 kind: Deployment metadata: name: website-deployment spec: selector: matchLabels: app: website replicas: 2 template: metadata: labels: app: website spec: containers: - name: website imagePullPolicy: Always image: nginx:latest ports: - containerPort: 8443 volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume - mountPath: /etc/nginx/conf.d name: configmap-volume volumes: - name: secret-volume secret: secretName: backend-tls - name: configmap-volume configMap: name: website-nginx-cm apiVersion: v1 kind: ConfigMap metadata: name: website-nginx-cm data: default.conf: |- server { listen 8080 default_server; listen 8443 ssl; root /usr/share/nginx/html; index index.html; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; location / { return 200 \"Hello World!\"; } } ``` You can also install the above yamls using: bash kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-https-backend.yaml Verify that you can curl the application ```bash kubectl get pods NAME READY STATUS RESTARTS AGE website-deployment-9c8c6df7f-5bqwh 1/1 Running 0 24s website-deployment-9c8c6df7f-wxtnp 1/1 Running 0 24s kubectl exec -it website-deployment-9c8c6df7f-5bqwh -- curl -k https://localhost:8443 Hello World! ```","title":"Deploy a simple application with HTTPS"},{"location":"tutorials/tutorial.e2e-ssl/#upload-the-backend-certificates-root-certificate-to-application-gateway","text":"When you are setting up SSL between Application Gateway and Backend, if you are using a self-signed certificate or a certificate signed by a custom root CA on the backend, then you need to upload self-signed or the Custom root CA of the backend certificate on the Application Gateway. bash applicationGatewayName=\"\" resourceGroup=\"\" az network application-gateway root-cert create \\ --gateway-name $applicationGatewayName \\ --resource-group $resourceGroup \\ --name backend-tls \\ --cert-file backend.crt","title":"Upload the backend certificate's root certificate to Application Gateway"},{"location":"tutorials/tutorial.e2e-ssl/#setup-ingress-for-e2e","text":"Now, we will configure our ingress to use the frontend certificate for frontend SSL and backend certificate as root certificate so that Application Gateway can authenticate the backend. bash cat << EOF | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: website-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/ssl-redirect: \"true\" appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/backend-hostname: \"backend\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"backend-tls\" spec: tls: - secretName: frontend-tls hosts: - website.com rules: - host: website.com http: paths: - path: / backend: service: name: website-service port: number: 8443 pathType: Exact EOF For frontend SSL, we have added tls section in our ingress resource. 
yaml tls: - secretName: frontend-tls hosts: - website.com For backend SSL, we have added the following annotations: yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/backend-hostname: \"backend\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"backend-tls\" Here, it is important to note that backend-hostname should be the hostname that the backend will accept and it should also match with the Subject/Subject Alternate Name of the certificate used on the backend. After you have successfully completed all the above steps, you should be able to see the ingress's IP address and visit the website. ```bash kubectl get ingress NAME HOSTS ADDRESS PORTS AGE website-ingress website.com 80, 443 36m curl -k -H \"Host: website.com\" https:// Hello World! ```","title":"Setup ingress for E2E"},{"location":"tutorials/tutorial.general/","text":"Tutorial: Basic NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. These tutorials help illustrate the usage of Kubernetes Ingress Resources to expose an example Kubernetes service through the Azure Application Gateway over HTTP or HTTPS. Table of Contents Prerequisites Deploy guestbook application Expose services over HTTP Expose services over HTTPS Without specified hostname With specified hostname Integrate with other services Prerequisites Installed ingress-azure helm chart. Greenfield Deployment : If you are starting from scratch, refer to these installation instructions which outlines steps to deploy an AKS cluster with Application Gateway and install application gateway ingress controller on the AKS cluster. If you want to use HTTPS on this application, you will need a x509 certificate and its private key. Deploy guestbook application The guestbook application is a canonical Kubernetes application that composes of a Web UI frontend, a backend and a Redis database. By default, guestbook exposes its application through a service with name frontend on port 80 . Without a Kubernetes Ingress Resource the service is not accessible from outside the AKS cluster. We will use the application and setup Ingress Resources to access the application through HTTP and HTTPS. Follow the instructions below to deploy the guestbook application. Download guestbook-all-in-one.yaml from here Deploy guestbook-all-in-one.yaml into your AKS cluster by running bash kubectl apply -f guestbook-all-in-one.yaml Now, the guestbook application has been deployed. Expose services over HTTP In order to expose the guestbook application we will using the following ingress resource: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 This ingress will expose the frontend service of the guestbook-all-in-one deployment as a default backend of the Application Gateway. Save the above ingress resource as ing-guestbook.yaml . Deploy ing-guestbook.yaml by running: bash kubectl apply -f ing-guestbook.yaml Check the log of the ingress controller for deployment status. Now the guestbook application should be available. You can check this by visiting the public address of the Application Gateway. 
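For example, from the command line (a sketch; myResourceGroup and myPublicIP are placeholders for the resource group and public IP used by your Application Gateway):

```bash
# Look up the gateway's public IP and send a test request to the default backend
APPGW_IP=$(az network public-ip show -g myResourceGroup -n myPublicIP --query ipAddress -o tsv)
curl -I http://$APPGW_IP/
```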
Expose services over HTTPS Without specified hostname Without specifying hostname, the guestbook service will be available on all the host-names pointing to the application gateway. Before deploying ingress, you need to create a kubernetes secret to host the certificate and private key. You can create a kubernetes secret by running bash kubectl create secret tls --key --cert Define the following ingress. In the ingress, specify the name of the secret in the secretName section. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - secretName: rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 NOTE: Replace in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file name ing-guestbook-tls.yaml . Deploy ing-guestbook-tls.yaml by running bash kubectl apply -f ing-guestbook-tls.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on HTTPS. In order to make the guestbook application available on HTTP, annotate the Ingress with yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Only in this case a HTTP Listener is created in Azure which redirects the visitor to the HTTPS version. With specified hostname You can also specify the hostname on the ingress in order to multiplex TLS configurations and services. By specifying hostname, the guestbook service will only be available on the specified host. Define the following ingress. In the ingress, specify the name of the secret in the secretName section and replace the hostname in the hosts section accordingly. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - hosts: - secretName: rules: - host: http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 Deploy ing-guestbook-tls-sni.yaml by running bash kubectl apply -f ing-guestbook-tls-sni.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on both HTTP and HTTPS only on the specified host ( in this example).","title":"Tutorial: Basic"},{"location":"tutorials/tutorial.general/#tutorial-basic","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. These tutorials help illustrate the usage of Kubernetes Ingress Resources to expose an example Kubernetes service through the Azure Application Gateway over HTTP or HTTPS.","title":"Tutorial: Basic"},{"location":"tutorials/tutorial.general/#table-of-contents","text":"Prerequisites Deploy guestbook application Expose services over HTTP Expose services over HTTPS Without specified hostname With specified hostname Integrate with other services","title":"Table of Contents"},{"location":"tutorials/tutorial.general/#prerequisites","text":"Installed ingress-azure helm chart. Greenfield Deployment : If you are starting from scratch, refer to these installation instructions which outlines steps to deploy an AKS cluster with Application Gateway and install application gateway ingress controller on the AKS cluster. 
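You can confirm the chart is installed before continuing (a quick check; the release name may differ in your setup):

```bash
# List Helm releases across namespaces and look for the AGIC release
helm list --all-namespaces | grep ingress-azure
```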
If you want to use HTTPS on this application, you will need an x509 certificate and its private key.","title":"Prerequisites"},{"location":"tutorials/tutorial.general/#deploy-guestbook-application","text":"The guestbook application is a canonical Kubernetes application that is composed of a Web UI frontend, a backend, and a Redis database. By default, guestbook exposes its application through a service named frontend on port 80 . Without a Kubernetes Ingress Resource the service is not accessible from outside the AKS cluster. We will use the application and set up Ingress Resources to access the application through HTTP and HTTPS. Follow the instructions below to deploy the guestbook application. Download guestbook-all-in-one.yaml from here Deploy guestbook-all-in-one.yaml into your AKS cluster by running bash kubectl apply -f guestbook-all-in-one.yaml Now, the guestbook application has been deployed.","title":"Deploy guestbook application"},{"location":"tutorials/tutorial.general/#expose-services-over-http","text":"In order to expose the guestbook application we will use the following ingress resource: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 This ingress will expose the frontend service of the guestbook-all-in-one deployment as the default backend of the Application Gateway. Save the above ingress resource as ing-guestbook.yaml . Deploy ing-guestbook.yaml by running: bash kubectl apply -f ing-guestbook.yaml Check the log of the ingress controller for deployment status. Now the guestbook application should be available. You can check this by visiting the public address of the Application Gateway.","title":"Expose services over HTTP"},{"location":"tutorials/tutorial.general/#expose-services-over-https","text":"","title":"Expose services over HTTPS"},{"location":"tutorials/tutorial.general/#without-specified-hostname","text":"Without specifying a hostname, the guestbook service will be available on all the hostnames pointing to the Application Gateway. Before deploying the ingress, you need to create a Kubernetes secret to host the certificate and private key. You can create a Kubernetes secret by running bash kubectl create secret tls --key --cert Define the following ingress. In the ingress, specify the name of the secret in the secretName section. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - secretName: rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 NOTE: Replace in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file named ing-guestbook-tls.yaml . Deploy ing-guestbook-tls.yaml by running bash kubectl apply -f ing-guestbook-tls.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on HTTPS.
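A quick way to verify from the command line (a sketch; $APPGW_IP is a placeholder for the gateway's public address):

```bash
# -k skips certificate validation, which would otherwise fail for a self-signed test certificate
curl -k -I https://$APPGW_IP/
```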
In order to make the guestbook application available on HTTP, annotate the Ingress with yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Only in this case a HTTP Listener is created in Azure which redirects the visitor to the HTTPS version.","title":"Without specified hostname"},{"location":"tutorials/tutorial.general/#with-specified-hostname","text":"You can also specify the hostname on the ingress in order to multiplex TLS configurations and services. By specifying hostname, the guestbook service will only be available on the specified host. Define the following ingress. In the ingress, specify the name of the secret in the secretName section and replace the hostname in the hosts section accordingly. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - hosts: - secretName: rules: - host: http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 Deploy ing-guestbook-tls-sni.yaml by running bash kubectl apply -f ing-guestbook-tls-sni.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on both HTTP and HTTPS only on the specified host ( in this example).","title":"With specified hostname"}]} \ No newline at end of file +{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Introduction NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service aka AKS cluster. As shown in the figure below, the ingress controller runs as a pod within the AKS cluster. It consumes Kubernetes Ingress Resources and converts them to an Azure Application Gateway configuration which allows the gateway to load-balance traffic to Kubernetes pods. Reporting Issues The best way to report an issue is to create a Github Issue for the project. Please include the following information when creating the issue: Subscription ID for AKS cluster. Subscription ID for Application Gateway. AKS cluster name/ARM Resource ID. Application Gateway name/ARM Resource ID. Ingress resource definition that might causing the problem. The Helm configuration used to install the ingress controller.","title":"Introduction"},{"location":"#introduction","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service aka AKS cluster. As shown in the figure below, the ingress controller runs as a pod within the AKS cluster. It consumes Kubernetes Ingress Resources and converts them to an Azure Application Gateway configuration which allows the gateway to load-balance traffic to Kubernetes pods.","title":"Introduction"},{"location":"#reporting-issues","text":"The best way to report an issue is to create a Github Issue for the project. Please include the following information when creating the issue: Subscription ID for AKS cluster. Subscription ID for Application Gateway. 
AKS cluster name/ARM Resource ID. Application Gateway name/ARM Resource ID. Ingress resource definition that might causing the problem. The Helm configuration used to install the ingress controller.","title":"Reporting Issues"},{"location":"annotations/","text":"Annotations NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. A list of corresponding translations from AGIC to Application Gateway for Containers may be found here . Introductions The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP setting, backend pools and listeners derived from an ingress resource. List of supported annotations For an Ingress resource to be observed by AGIC it must be annotated with kubernetes.io/ingress.class: azure/application-gateway . Only then AGIC will work with the Ingress resource in question. Annotation Key Value Type Default Value Allowed Values Supported since appgw.ingress.kubernetes.io/backend-path-prefix string nil 1.3.0 appgw.ingress.kubernetes.io/backend-hostname string nil 1.2.0 appgw.ingress.kubernetes.io/backend-protocol string http http , https 1.0.0 appgw.ingress.kubernetes.io/ssl-redirect bool false 1.0.0 appgw.ingress.kubernetes.io/appgw-ssl-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-trusted-root-certificate string nil 1.2.0 appgw.ingress.kubernetes.io/appgw-ssl-profile string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/connection-draining bool false 1.0.0 appgw.ingress.kubernetes.io/connection-draining-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/cookie-based-affinity bool false 1.0.0 appgw.ingress.kubernetes.io/request-timeout int32 (seconds) 30 1.0.0 appgw.ingress.kubernetes.io/override-frontend-port string 1.3.0 appgw.ingress.kubernetes.io/use-private-ip bool false 1.0.0 appgw.ingress.kubernetes.io/waf-policy-for-path string 1.3.0 appgw.ingress.kubernetes.io/health-probe-hostname string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-port int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-path string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-status-codes []string nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-interval int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-timeout int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold int32 nil 1.4.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set string nil 1.5.0-rc1 appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource string nil 1.6.0-rc1 appgw.ingress.kubernetes.io/hostname-extension string nil 1.4.0 Override Frontend Port The annotation allows to configure frontend listener to use different ports other than 80/443 for http/https. If the port is within the App Gw authorized range (1 - 64999), this listener will be created on this specific port. If an invalid port or no port is set in the annotation, the configuration will fallback on default 80 or 443. 
Usage yaml appgw.ingress.kubernetes.io/override-frontend-port: \"port\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-overridefrontendport namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/override-frontend-port: \"8080\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact External request will need to target http://somehost:8080 instead of http://somehost . Backend Path Prefix This annotation allows the backend path specified in an ingress resource to be re-written with prefix specified in this annotation. This allows users to expose services whose endpoints are different than endpoint names used to expose a service in an ingress resource. Usage yaml appgw.ingress.kubernetes.io/backend-path-prefix: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact In the example above we have defined an ingress resource named go-server-ingress-bkprefix with an annotation appgw.ingress.kubernetes.io/backend-path-prefix: \"/test/\" . The annotation tells application gateway to create an HTTP setting which will have a path prefix override for the path /hello to /test/ . NOTE: In the above example we have only one rule defined. However, the annotations is applicable to the entire ingress resource so if a user had defined multiple rules the backend path prefix would be setup for each of the paths specified. Thus, if a user wants different rules with different path prefixes (even for the same service) they would need to define different ingress resources. If your incoming path is /hello/test/health but your backend requires /health you will want to ensure you have /* on your path yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-path-prefix: \"/\" spec: rules: - http: paths: - path: /hello/test/* pathType: Prefix backend: service: name: store-service Backend Hostname This annotations allows us to specify the host name that Application Gateway should use while talking to the Pods. Usage yaml appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-hostname: \"internal.example.com\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Backend Protocol This annotation allows us to specify the protocol that Application Gateway should use while talking to the Pods. Supported Protocols: http , https Note 1) Make sure to not use port 80 with HTTPS and port 443 with HTTP on the Pods. 
Usage yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 443 pathType: Exact SSL Redirect Application Gateway can be configured to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, Kubernetes Ingress controller will create a routing rule with a redirection configuration and apply the changes to your App Gateway. The redirect created will be HTTP 301 Moved Permanently . Usage yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-redirect namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/ssl-redirect: \"true\" spec: tls: - hosts: - www.contoso.com secretName: testsecret-tls rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGw SSL Certificate The SSL certificate can be configured to Application Gateway either from a local PFX certificate file or a reference to a Azure Key Vault unversioned secret Id. When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. appgw-ssl-certificate annotation can also be used together with ssl-redirect annotation in case of SSL redirect. Please refer to appgw-ssl-certificate feature for more details. Note * Annotation \"appgw-ssl-certificate\" will be ignored when TLS Spec is defined in ingress at the same time. * If a user wants different certs with different hosts(multi tls certificate termination), they would need to define different ingress resources. Use Azure CLI to install certificate to Application Gateway Configure from a local PFX certificate file bash az network application-gateway ssl-cert create -g $resgp --gateway-name $appgwName -n mysslcert --cert-file \\path\\to\\cert\\file --cert-password Abc123 Configure from a reference to a Key Vault unversioned secret id bash az keyvault certificate create --vault-name $vaultName -n cert1 -p \"$(az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n cert --vault-name $vaultName --query \"sid\" -o tsv) unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url az network application-gateway ssl-cert create -n mysslcert --gateway-name $appgwName --resource-group $resgp --key-vault-secret-id $unversionedSecretId To use PowerShell, please refer to Configure Key Vault - PowerShell . 
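Either way, you can confirm the certificate is now present on the gateway (a sketch reusing the $appgwName and $resgp variables from the commands above):

```bash
# List the SSL certificates installed on the Application Gateway
az network application-gateway ssl-cert list --gateway-name $appgwName --resource-group $resgp -o table
```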
Usage yaml appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: \"name-of-appgw-installed-certificate\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGW Trusted Root Certificate Users now can configure their own root certificates to Application Gateway to be trusted via AGIC. The annotaton appgw-trusted-root-certificate shall be used together with annotation backend-protocol to indicate end-to-end ssl encryption, multiple root certificates, separated by comma, if specified, e.g. \"name-of-my-root-cert1,name-of-my-root-certificate2\". Use Azure CLI to install your root certificate to Application Gateway Create your public root certificate for testing bash openssl ecparam -out test.key -name prime256v1 -genkey openssl req -new -sha256 -key test.key -out test.csr openssl x509 -req -sha256 -days 365 -in test.csr -signkey test.key -out test.crt Configure your root certificate to Application Gateway ```bash Rename test.crt to test.cer mv test.crt test.cer Configure the root certificate to your Application Gateway az network application-gateway root-cert create --cert-file test.cer --gateway-name $appgwName --name name-of-my-root-cert1 --resource-group $resgp ``` Repeat the steps above if you want to configure multiple trusted root certificates Usage yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80 AppGw Ssl Profile Note: This annotation is supported since 1.6.0-rc1. Users can configure a ssl profile on the Application Gateway per listener . When the annotation is present with a profile name and the profile is pre-installed in the Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. Connection Draining connection-draining : This annotation allows to specify whether to enable connection draining. connection-draining-timeout : This annotation allows to specify a timeout after which Application Gateway will terminate the requests to the draining backend endpoint. Usage yaml appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-drain namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/connection-draining: \"true\" appgw.ingress.kubernetes.io/connection-draining-timeout: \"60\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Cookie Based Affinity This annotation allows to specify whether to enable cookie based affinity. 
Usage yaml appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Distinct cookie name In addition to cookie-based-affinity, you can set cookie-based-affinity-distinct-name: \"true\" to ensure a different affinity cookie is set per backend. Usage yaml appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: \"true\" spec: rules: - http: paths: - path: /affinity1/ pathType: Exact backend: service: name: affinity-service port: number: 80 - path: /affinity2/ pathType: Exact backend: service: name: affinity-service port: number: 80 Request Timeout This annotation allows to specify the request timeout in seconds after which Application Gateway will fail the request if response is not received. Usage yaml appgw.ingress.kubernetes.io/request-timeout: \"20\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/request-timeout: \"20\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Use Private IP This annotation allows us to specify whether to expose this endpoint on Private IP of Application Gateway. Note 1) App Gateway doesn't support multiple IPs on the same port (example: 80/443). Ingress with annotation appgw.ingress.kubernetes.io/use-private-ip: \"false\" and another with appgw.ingress.kubernetes.io/use-private-ip: \"true\" on HTTP will cause AGIC to fail in updating the App Gateway. 2) For App Gateway that doesn't have a private IP, Ingresses with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will reflected in the controller logs and ingress events for those ingresses with NoPrivateIP warning. Usage yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-timeout namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/use-private-ip: \"true\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Azure Waf Policy For Path This annotation allows you to attach an already created WAF policy to the list paths for a host within a Kubernetes Ingress resource being annotated. The WAF policy must be created in advance. 
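If you prefer the CLI over the Portal flow shown next, an empty policy can be created like so (a sketch; rg and adserver are placeholder names matching the example URI below):

```bash
# Create a WAF policy whose resource ID can then be referenced by the annotation
az network application-gateway waf-policy create --resource-group rg --name adserver
```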
Example of using Azure Portal to create a policy: Once the policy is created, copy the URI of the policy from the address bar of Azure Portal: The URI would have the following format: bash /subscriptions//resourceGroups//providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/ Note 1) The WAF policy will only be applied to a listener if the ingress rule path is not set, or is set to \"/\" or \"/*\" Usage yaml appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\" Example The example below will apply the WAF policy: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ad-server-ingress namespace: commerce annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/waf-policy-for-path: \"/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver\" spec: rules: - http: paths: - path: /ad-server backend: service: name: ad-server port: number: 80 pathType: Exact - path: /auth backend: service: name: auth-server port: number: 80 pathType: Exact Note that the WAF policy will be applied to both /ad-server and /auth URLs. Health Probe Hostname This annotation allows you to specifically define a target host to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the host used in the liveness probe definition is also used as the target host for the health probe. However, if the annotation appgw.ingress.kubernetes.io/health-probe-hostname is defined, it overrides the default with its own value. Usage yaml appgw.ingress.kubernetes.io/health-probe-hostname: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" spec: rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 pathType: Exact Health Probe Port The health probe port annotation allows you to specifically define the target TCP port to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the port used in the liveness probe definition is also used as the port for the health probe. The annotation appgw.ingress.kubernetes.io/health-probe-port takes precedence over this default value. Usage yaml appgw.ingress.kubernetes.io/health-probe-port: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"443\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" appgw.ingress.kubernetes.io/backend-protocol: https spec: tls: - secretName: \"my-backend-host.custom.app-ssl-certificate\" rules: - http: paths: - path: / backend: service: name: store-service port: number: 443 pathType: Exact Health Probe Path This annotation allows you to specifically define the target URI path to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the path defined in the liveness probe definition is also used as the path for the health probe.
However annotation appgw.ingress.kubernetes.io/health-probe-path overrides it with its own value. Usage yaml appgw.ingress.kubernetes.io/health-probe-path: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-hostname: \"my-backend-host.custom.app\" appgw.ingress.kubernetes.io/health-probe-port: \"8080\" appgw.ingress.kubernetes.io/health-probe-path: \"/healthz\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 Health Probe Status Codes This annotation defines healthy status codes returned by the health probe. The values are comma separated list of individual status codes or ranges defined as - . Usage yaml appgw.ingress.kubernetes.io/health-probe-status-codes: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-status-codes: \"200-399, 401\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Interval This annotation sets AGW health probe interval. By default, if backend container running service with liveliness probe of type HTTP GET defined, interval in liveliness probe definition is also used as a interval for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-interval overrides it with its value. Usage yaml appgw.ingress.kubernetes.io/health-probe-interval: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-interval: \"20\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Timeout This annotation allows specifically define timeout for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined, timeout defined in liveliness probe definition is also used for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-timeout overrides it with its value. Usage yaml appgw.ingress.kubernetes.io/health-probe-timeout: Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-timeout: \"15\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact Health Probe Unhealthy Threshold This annotation allows specifically define target unhealthy thresold for AGW health probe. By default, if backend container running service with liveliness probe of type HTTP GET defined , threshold defined in liveliness probe definition is also used for health probe. However annotation appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold overrides it with its value. 
# Annotations

NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.
A list of corresponding translations from AGIC to Application Gateway for Containers may be found here.

## Introduction

The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features that are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP settings, backend pools, and listeners derived from an ingress resource.

## List of supported annotations

For an Ingress resource to be observed by AGIC, it must be annotated with `kubernetes.io/ingress.class: azure/application-gateway`. Only then will AGIC work with the Ingress resource in question.

| Annotation Key | Value Type | Default Value | Allowed Values | Supported since |
| -- | -- | -- | -- | -- |
| appgw.ingress.kubernetes.io/backend-path-prefix | string | nil | | 1.3.0 |
| appgw.ingress.kubernetes.io/backend-hostname | string | nil | | 1.2.0 |
| appgw.ingress.kubernetes.io/backend-protocol | string | http | http, https | 1.0.0 |
| appgw.ingress.kubernetes.io/ssl-redirect | bool | false | | 1.0.0 |
| appgw.ingress.kubernetes.io/appgw-ssl-certificate | string | nil | | 1.2.0 |
| appgw.ingress.kubernetes.io/appgw-trusted-root-certificate | string | nil | | 1.2.0 |
| appgw.ingress.kubernetes.io/appgw-ssl-profile | string | nil | | 1.6.0-rc1 |
| appgw.ingress.kubernetes.io/connection-draining | bool | false | | 1.0.0 |
| appgw.ingress.kubernetes.io/connection-draining-timeout | int32 (seconds) | 30 | | 1.0.0 |
| appgw.ingress.kubernetes.io/cookie-based-affinity | bool | false | | 1.0.0 |
| appgw.ingress.kubernetes.io/request-timeout | int32 (seconds) | 30 | | 1.0.0 |
| appgw.ingress.kubernetes.io/override-frontend-port | string | | | 1.3.0 |
| appgw.ingress.kubernetes.io/use-private-ip | bool | false | | 1.0.0 |
| appgw.ingress.kubernetes.io/waf-policy-for-path | string | | | 1.3.0 |
| appgw.ingress.kubernetes.io/health-probe-hostname | string | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-port | int32 | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-path | string | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-status-codes | []string | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-interval | int32 | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-timeout | int32 | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold | int32 | nil | | 1.4.0-rc1 |
| appgw.ingress.kubernetes.io/rewrite-rule-set | string | nil | | 1.5.0-rc1 |
| appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource | string | nil | | 1.6.0-rc1 |
| appgw.ingress.kubernetes.io/hostname-extension | string | nil | | 1.4.0 |

## Override Frontend Port

This annotation configures the frontend listener to use a port other than 80/443 for HTTP/HTTPS. If the port is within the Application Gateway authorized range (1 - 64999), the listener will be created on that specific port.
If an invalid port or no port is set in the annotation, the configuration falls back to the default of 80 or 443.

### Usage

```yaml
appgw.ingress.kubernetes.io/override-frontend-port: "port"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-overridefrontendport
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/override-frontend-port: "8080"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

External requests will need to target http://somehost:8080 instead of http://somehost.

## Backend Path Prefix

This annotation allows the backend path specified in an ingress resource to be rewritten with the prefix specified in this annotation. This allows users to expose services whose endpoints differ from the endpoint names used to expose the service in the ingress resource.

### Usage

```yaml
appgw.ingress.kubernetes.io/backend-path-prefix: <path prefix>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

In the example above we have defined an ingress resource named go-server-ingress-bkprefix with the annotation appgw.ingress.kubernetes.io/backend-path-prefix: "/test/". The annotation tells Application Gateway to create an HTTP setting with a path prefix override, rewriting the path /hello/ to /test/.

NOTE: In the above example only one rule is defined. However, the annotation applies to the entire ingress resource, so if multiple rules were defined, the backend path prefix would be set up for each of the specified paths. If a user wants different rules with different path prefixes (even for the same service), they would need to define different ingress resources.
If your incoming path is /hello/test/health but your backend requires /health, make sure you have /* on your path:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /hello/test/*
        pathType: Prefix
        backend:
          service:
            name: store-service
```

## Backend Hostname

This annotation specifies the host name that Application Gateway should use while talking to the Pods.

### Usage

```yaml
appgw.ingress.kubernetes.io/backend-hostname: "internal.example.com"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-timeout
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-hostname: "internal.example.com"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Backend Protocol

This annotation specifies the protocol that Application Gateway should use while talking to the Pods. Supported protocols: http, https.

NOTE: Make sure not to use port 80 with HTTPS and port 443 with HTTP on the Pods.

### Usage

```yaml
appgw.ingress.kubernetes.io/backend-protocol: "https"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-timeout
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-protocol: "https"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 443
        pathType: Exact
```

## SSL Redirect

Application Gateway can be configured to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, the Kubernetes Ingress controller will create a routing rule with a redirection configuration and apply the changes to your Application Gateway. The redirect created will be HTTP 301 Moved Permanently.

### Usage

```yaml
appgw.ingress.kubernetes.io/ssl-redirect: "true"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-redirect
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - www.contoso.com
    secretName: testsecret-tls
  rules:
  - host: www.contoso.com
    http:
      paths:
      - backend:
          service:
            name: websocket-repeater
            port:
              number: 80
```

## AppGw SSL Certificate

The SSL certificate can be configured on Application Gateway either from a local PFX certificate file or from a reference to an Azure Key Vault unversioned secret ID.
When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, the Kubernetes Ingress controller will create a routing rule with an HTTPS listener and apply the changes to your Application Gateway. The appgw-ssl-certificate annotation can also be used together with the ssl-redirect annotation for SSL redirects. Please refer to the appgw-ssl-certificate feature for more details.

NOTE:

- The "appgw-ssl-certificate" annotation will be ignored when a TLS spec is defined in the ingress at the same time.
- If a user wants different certificates for different hosts (multi TLS certificate termination), they would need to define different ingress resources.

## Use Azure CLI to install a certificate on Application Gateway

Configure from a local PFX certificate file:

```bash
az network application-gateway ssl-cert create -g $resgp --gateway-name $appgwName -n mysslcert --cert-file \path\to\cert\file --cert-password Abc123
```

Configure from a reference to a Key Vault unversioned secret ID:

```bash
az keyvault certificate create --vault-name $vaultName -n cert1 -p "$(az keyvault certificate get-default-policy)"
versionedSecretId=$(az keyvault certificate show -n cert --vault-name $vaultName --query "sid" -o tsv)
unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url
az network application-gateway ssl-cert create -n mysslcert --gateway-name $appgwName --resource-group $resgp --key-vault-secret-id $unversionedSecretId
```

To use PowerShell, please refer to Configure Key Vault - PowerShell.

### Usage

```yaml
appgw.ingress.kubernetes.io/appgw-ssl-certificate: "name-of-appgw-installed-certificate"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-certificate
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: "name-of-appgw-installed-certificate"
spec:
  rules:
  - host: www.contoso.com
    http:
      paths:
      - backend:
          service:
            name: websocket-repeater
            port:
              number: 80
```

## AppGW Trusted Root Certificate

Users can now configure their own root certificates on Application Gateway to be trusted via AGIC. The appgw-trusted-root-certificate annotation should be used together with the backend-protocol annotation to indicate end-to-end SSL encryption. If multiple root certificates are specified, separate them with commas, e.g.
\"name-of-my-root-cert1,name-of-my-root-certificate2\".","title":"AppGW Trusted Root Certificate"},{"location":"annotations/#use-azure-cli-to-install-your-root-certificate-to-application-gateway","text":"Create your public root certificate for testing bash openssl ecparam -out test.key -name prime256v1 -genkey openssl req -new -sha256 -key test.key -out test.csr openssl x509 -req -sha256 -days 365 -in test.csr -signkey test.key -out test.crt Configure your root certificate to Application Gateway ```bash","title":"Use Azure CLI to install your root certificate to Application Gateway"},{"location":"annotations/#rename-testcrt-to-testcer","text":"mv test.crt test.cer","title":"Rename test.crt to test.cer"},{"location":"annotations/#configure-the-root-certificate-to-your-application-gateway","text":"az network application-gateway root-cert create --cert-file test.cer --gateway-name $appgwName --name name-of-my-root-cert1 --resource-group $resgp ``` Repeat the steps above if you want to configure multiple trusted root certificates","title":"Configure the root certificate to your Application Gateway"},{"location":"annotations/#usage_6","text":"yaml appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\"","title":"Usage"},{"location":"annotations/#example_6","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-certificate namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: \"https\" appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: \"name-of-my-root-cert1\" spec: rules: - host: www.contoso.com http: paths: - backend: service: name: websocket-repeater port: number: 80","title":"Example"},{"location":"annotations/#appgw-ssl-profile","text":"Note: This annotation is supported since 1.6.0-rc1. Users can configure a ssl profile on the Application Gateway per listener . When the annotation is present with a profile name and the profile is pre-installed in the Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway.","title":"AppGw Ssl Profile"},{"location":"annotations/#connection-draining","text":"connection-draining : This annotation allows to specify whether to enable connection draining. 
## Connection Draining

- `connection-draining`: specifies whether to enable connection draining.
- `connection-draining-timeout`: specifies a timeout, after which Application Gateway terminates requests to the draining backend endpoint.

### Usage

```yaml
appgw.ingress.kubernetes.io/connection-draining: "true"
appgw.ingress.kubernetes.io/connection-draining-timeout: "60"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-drain
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/connection-draining: "true"
    appgw.ingress.kubernetes.io/connection-draining-timeout: "60"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Cookie Based Affinity

This annotation specifies whether to enable cookie-based affinity.

### Usage

```yaml
appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-affinity
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Distinct Cookie Name

In addition to cookie-based-affinity, you can set cookie-based-affinity-distinct-name: "true" to ensure a different affinity cookie is set per backend.

### Usage

```yaml
appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: "true"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-affinity
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
    appgw.ingress.kubernetes.io/cookie-based-affinity-distinct-name: "true"
spec:
  rules:
  - http:
      paths:
      - path: /affinity1/
        pathType: Exact
        backend:
          service:
            name: affinity-service
            port:
              number: 80
      - path: /affinity2/
        pathType: Exact
        backend:
          service:
            name: affinity-service
            port:
              number: 80
```

## Request Timeout

This annotation specifies the request timeout in seconds, after which Application Gateway will fail the request if a response is not received.

### Usage

```yaml
appgw.ingress.kubernetes.io/request-timeout: "20"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-timeout
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/request-timeout: "20"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Use Private IP

This annotation specifies whether to expose this endpoint on the private IP of Application Gateway.
NOTE:

1) Application Gateway doesn't support multiple IPs on the same port (example: 80/443). An Ingress with the annotation appgw.ingress.kubernetes.io/use-private-ip: "false" and another with appgw.ingress.kubernetes.io/use-private-ip: "true" on HTTP will cause AGIC to fail while updating the Application Gateway.
2) For an Application Gateway that doesn't have a private IP, Ingresses with appgw.ingress.kubernetes.io/use-private-ip: "true" will be ignored. This will be reflected in the controller logs and in ingress events for those ingresses with a NoPrivateIP warning.

### Usage

```yaml
appgw.ingress.kubernetes.io/use-private-ip: "true"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-timeout
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/use-private-ip: "true"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Azure WAF Policy For Path

This annotation allows you to attach an already created WAF policy to the paths listed for a host within the annotated Kubernetes Ingress resource. The WAF policy must be created in advance. Once the policy is created in the Azure Portal, copy its URI from the address bar. The URI has the following format:

```bash
/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/<policy name>
```

NOTE: The WAF policy will only be applied to a listener if the ingress rule path is not set, or is set to "/" or "/*".

### Usage

```yaml
appgw.ingress.kubernetes.io/waf-policy-for-path: "/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver"
```

### Example

The example below will apply the WAF policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ad-server-ingress
  namespace: commerce
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/waf-policy-for-path: "/subscriptions/abcd/resourceGroups/rg/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/adserver"
spec:
  rules:
  - http:
      paths:
      - path: /ad-server
        backend:
          service:
            name: ad-server
            port:
              number: 80
        pathType: Exact
      - path: /auth
        backend:
          service:
            name: auth-server
            port:
              number: 80
        pathType: Exact
```

Note that the WAF policy will be applied to both the /ad-server and /auth URLs.

## Health Probe Hostname

This annotation defines a target host to be used for the Application Gateway health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the host used in the liveness probe definition is also used as the target host for the health probe.
However, if the annotation appgw.ingress.kubernetes.io/health-probe-hostname is defined, it overrides the default with its own value.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-hostname: <hostname>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-hostname: "my-backend-host.custom.app"
spec:
  rules:
  - http:
      paths:
      - path: /hello/
        backend:
          service:
            name: store-service
            port:
              number: 80
        pathType: Exact
```

## Health Probe Port

This annotation defines the target TCP port to be used for the Application Gateway health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the port used in the liveness probe definition is also used as the port for the health probe. The annotation appgw.ingress.kubernetes.io/health-probe-port takes precedence over that default value.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-port: <port number>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-hostname: "my-backend-host.custom.app"
    appgw.ingress.kubernetes.io/health-probe-port: "443"
    appgw.ingress.kubernetes.io/health-probe-path: "/healthz"
    appgw.ingress.kubernetes.io/backend-protocol: https
spec:
  tls:
  - secretName: "my-backend-host.custom.app-ssl-certificate"
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: store-service
            port:
              number: 443
        pathType: Exact
```

## Health Probe Path

This annotation defines the target URI path to be used for the Application Gateway health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the path defined in the liveness probe definition is also used as the path for the health probe.
However, the annotation appgw.ingress.kubernetes.io/health-probe-path overrides the default with its own value.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-path: <URI path>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-hostname: "my-backend-host.custom.app"
    appgw.ingress.kubernetes.io/health-probe-port: "8080"
    appgw.ingress.kubernetes.io/health-probe-path: "/healthz"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: store-service
            port:
              number: 8080
```

## Health Probe Status Codes

This annotation defines the healthy status codes returned by the health probe. The value is a comma-separated list of individual status codes, or of ranges defined as <start>-<end>.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-status-codes: <comma-separated list of status codes>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399, 401"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: store-service
            port:
              number: 8080
        pathType: Exact
```

## Health Probe Interval

This annotation sets the Application Gateway health probe interval. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the interval in the liveness probe definition is also used as the interval for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-interval overrides the default with its value.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-interval: <interval seconds>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-interval: "20"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: store-service
            port:
              number: 8080
        pathType: Exact
```

## Health Probe Timeout

This annotation defines the timeout for the Application Gateway health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the timeout defined in the liveness probe definition is also used for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-timeout overrides the default with its value.

### Usage

```yaml
appgw.ingress.kubernetes.io/health-probe-timeout: <timeout seconds>
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-server-ingress-bkprefix
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-timeout: "15"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: store-service
            port:
              number: 8080
        pathType: Exact
```

## Health Probe Unhealthy Threshold

This annotation defines the target unhealthy threshold for the Application Gateway health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the threshold defined in the liveness probe definition is also used for the health probe.
However annotation appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold overrides it with its value.","title":"Health Probe Unhealthy Threshold"},{"location":"annotations/#usage_19","text":"yaml appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: ","title":"Usage"},{"location":"annotations/#example_19","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: \"5\" spec: rules: - http: paths: - path: / backend: service: name: store-service port: number: 8080 pathType: Exact","title":"Example"},{"location":"annotations/#rewrite-rule-set","text":"This annotation allows to assign an existing rewrite rule set to the corresponding request routing rule(s). Rewrite rule set is managed via Azure Portal / CLI / PS.","title":"Rewrite Rule Set"},{"location":"annotations/#usage_20","text":"yaml appgw.ingress.kubernetes.io/rewrite-rule-set: ","title":"Usage"},{"location":"annotations/#example_20","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set: add-custom-response-header spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080","title":"Example"},{"location":"annotations/#rewrite-rule-set-custom-resource","text":"Note: This annotation is supported since 1.6.0-rc1. This annotation allows to assign a header/URL rewrite rule set created via the AzureApplicationGatewayRewrite CR to be associated to all rules in an ingress resource. AzureApplicationGatewayRewrite CR should be present in the same namespace as the ingress.","title":"Rewrite Rule Set Custom Resource"},{"location":"annotations/#usage_21","text":"yaml appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: ","title":"Usage"},{"location":"annotations/#example_21","text":"```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: go-server-ingress-bkprefix namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Example"},{"location":"annotations/#hostname-extension","text":"This annotation allows to append additional hostnames to the host specified in the ingress resource. 
This applies to all the rules in the ingress resource.

### Usage

```yaml
appgw.ingress.kubernetes.io/hostname-extension: "hostname1, hostname2"
```

### Example

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-app-ingress
  namespace: test-ag
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/hostname-extension: "prod-store.app.com"
spec:
  rules:
  - host: "store.app.com"
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: store-service
            port:
              number: 8080
```

# Frequently Asked Questions: [WIP]

NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

- What is an Ingress Controller?
- Can a single ingress controller instance manage multiple Application Gateways?

## What is an Ingress Controller?

Kubernetes allows the creation of Deployment and Service resources to expose a group of pods internally in the cluster. To expose the same service externally, an Ingress resource is defined, which provides load balancing, SSL termination, and name-based virtual hosting. To satisfy this Ingress resource, an Ingress Controller is required, which listens for changes to Ingress resources and configures the load balancer policies accordingly. The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster.

## Can a single ingress controller instance manage multiple Application Gateways?

Currently, one instance of the Ingress Controller can only be associated with one Application Gateway.
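To make the answer above concrete, here is a minimal sketch of the pattern it describes: a Service fronting a group of pods, plus an Ingress that AGIC would translate into Application Gateway configuration. All names (demo-service, demo-ingress, app: demo) are illustrative:

```yaml
# Minimal sketch with hypothetical names: the Service exposes pods
# inside the cluster; the Ingress (carrying the azure/application-gateway
# class) is what AGIC programs into Application Gateway.
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```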
# Helm Values Configuration Options

NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

## Available options

| Field | Default | Description |
| -- | -- | -- |
| verbosityLevel | 3 | Sets the verbosity level of the AGIC logging infrastructure. See Logging Levels for possible values. |
| reconcilePeriodSeconds | | Enables periodic reconciliation to check whether the latest gateway configuration differs from what AGIC has cached. Range: 30 - 300 seconds. Disabled by default. |
| appgw.applicationGatewayID | | Resource ID of the Application Gateway. Example: applicationgatewayd0f0 |
| appgw.subscriptionId | Agent node pool's subscriptionId derived from CloudProvider config | The Azure subscription ID in which the Application Gateway resides. Example: a123b234-a3b4-557d-b2df-a0bc12de1234 |
| appgw.resourceGroup | Agent node pool's resource group derived from CloudProvider config | Name of the Azure resource group in which the Application Gateway was created. Example: app-gw-resource-group |
| appgw.name | | Name of the Application Gateway. Example: applicationgatewayd0f0 |
| appgw.environment | AZUREPUBLICCLOUD | Specify the cloud environment. Possible values: AZURECHINACLOUD, AZUREGERMANCLOUD, AZUREPUBLICCLOUD, AZUREUSGOVERNMENTCLOUD |
| appgw.shared | false | This boolean flag should be defaulted to false. Set to true should you need a Shared App Gateway. |
| appgw.subResourceNamePrefix | No prefix if empty | Prefix to be used in the naming of the Application Gateway's sub-resources |
| kubernetes.watchNamespace | Watches all if empty | Specify the namespace which AGIC should watch. This can be a single string value or a comma-separated list of namespaces. |
| kubernetes.securityContext | runAsUser: 0 | Specify the pod security context to use with the AGIC deployment. By default, AGIC will assume root permission. See Run without root for more information. |
| kubernetes.containerSecurityContext | {} | Specify the container security context to use with the AGIC deployment. |
| kubernetes.podAnnotations | {} | Specify custom annotations for the AGIC pod |
| kubernetes.resources | {} | Specify the resource quota for the AGIC pod |
| kubernetes.nodeSelector | {} | Scheduling node selector |
| kubernetes.tolerations | [] | Scheduling tolerations |
| kubernetes.affinity | {} | Scheduling affinity |
| kubernetes.volumes.extraVolumes | {} | Specify additional volumes for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem, as AGIC requires a writeable /tmp directory. |
| kubernetes.volumes.extraVolumeMounts | {} | Specify additional volume mounts for the AGIC pod. This can be useful when running on a readOnlyRootFilesystem, as AGIC requires a writeable /tmp directory. |
| kubernetes.ingressClass | azure/application-gateway | Specify a custom ingress class which will be used to match kubernetes.io/ingress.class in the ingress manifest |
| rbac.enabled | false | Specify true if the Kubernetes cluster is RBAC-enabled |
| armAuth.type | | Either aadPodIdentity or servicePrincipal |
| armAuth.identityResourceID | | Resource ID of the Azure managed identity |
| armAuth.identityClientID | | The client ID of the identity. See below for more information on identity |
| armAuth.secretJSON | | Only needed when the Service Principal Secret type is chosen (when armAuth.type has been set to servicePrincipal) |
| nodeSelector | {} | (Legacy: use kubernetes.nodeSelector instead) Scheduling node selector |
## Example

```yaml
appgw:
  applicationGatewayID: <application-gateway-resource-id>
  environment: "AZUREUSGOVERNMENTCLOUD" # default: AZUREPUBLICCLOUD

armAuth:
  type: aadPodIdentity
  identityResourceID: <identity-resource-id>
  identityClientID: <identity-client-id>

kubernetes:
  nodeSelector: {}
  tolerations: []
  affinity: {}

rbac:
  enabled: false
```

## Run without root

By default, AGIC assumes root permission, which allows it to read the cloud-provider config and get metadata information about the cluster. If you want AGIC to run without root access, make sure that AGIC is installed with at least the following information to run successfully:

```yaml
appgw:
  applicationGatewayID: <application-gateway-resource-id>
  # OR
  subscriptionId: <subscription-id>
  resourceGroup: <resource-group-name>
  name: <application-gateway-name>

kubernetes:
  securityContext:
    runAsUser: 1000 # appgw-ingress-user
```

NOTE: AGIC also uses the cloud-provider config to get the node's virtual network name/subscription and route table name. If AGIC is not able to reach this information, it will skip assigning the node's route table to the Application Gateway's subnet, which is required when using the kubenet network plugin. To work around this, the assignment can be performed manually.

## Run with read-only root filesystem

To run AGIC with readOnlyRootFilesystem, the following additional configuration items are required:

```yaml
kubernetes:
  containerSecurityContext:
    readOnlyRootFilesystem: true
  volumes:
    extraVolumes:
    - name: tmp
      emptyDir: {}
    extraVolumeMounts:
    - name: tmp
      mountPath: /tmp
```

NOTE: AGIC needs to be able to write to the /tmp directory.
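For reference, here is a sketch of a helm-config.yaml combining several options from the table above, including reconcilePeriodSeconds and a multi-namespace watch, which the example earlier does not show. All IDs, names, and namespaces are placeholders, and the values chosen are purely illustrative:

```yaml
# Illustrative helm-config.yaml sketch; all IDs and namespaces are placeholders.
verbosityLevel: 3          # see Logging Levels
reconcilePeriodSeconds: 30 # periodic reconciliation; allowed range is 30 - 300

appgw:
  subscriptionId: <subscription-id>
  resourceGroup: <resource-group-name>
  name: <application-gateway-name>
  shared: false

kubernetes:
  watchNamespace: "default,staging" # comma-separated list; empty watches all namespaces

rbac:
  enabled: true
```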
# Ingress V1 Support

NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

This document describes AGIC's implementation of specific Ingress resource fields and features. As the Ingress specification has evolved between v1beta1 and v1, any differences between versions are highlighted to ensure clarity for AGIC users.

NOTE: Ingress/V1 is fully supported with AGIC >= 1.5.1.

## Kubernetes Versions

For Kubernetes version 1.19+, the API server translates any Ingress v1beta1 resources to Ingress v1, and AGIC watches Ingress v1 resources.

## IngressClass and IngressClass Name

AGIC now supports using the ingressClassName property, along with kubernetes.io/ingress.class: azure/application-gateway, to indicate that a specific ingress should be processed by AGIC.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shopping-app
spec:
  ingressClassName: azure-application-gateway
  ...
```

## Ingress Rules

### Wildcard Hostnames

AGIC supports wildcard hostnames, as documented by the upstream API, as well as precise hostnames. Wildcard hostnames are limited to the whole first DNS label of the hostname, e.g. *.foo.com is valid, but *foo.com, foo*.com, and foo.*.com are not. * by itself is also not a valid hostname.
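As an illustration of the rule above, here is a sketch of an ingress using a wildcard host; the hostname and service name are hypothetical:

```yaml
# Sketch: a wildcard host covering exactly the first DNS label.
# "*.foo.com" matches app.foo.com, but not foo.com or a.b.foo.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-host-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: "*.foo.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: store-service
            port:
              number: 80
```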
### PathType property is now mandatory

AGIC now supports PathType in Ingress v1:

- Exact path matches will now result in matching requests to the given path exactly.
- The Prefix path match type will now result in matching requests with a "segment prefix" rather than a "string prefix", according to the spec (e.g. the prefix /foo/bar will match requests with paths /foo/bar, /foo/bar/, and /foo/bar/baz, but not /foo/barbaz).
- The ImplementationSpecific path match type preserves the old path behaviour of AGIC < 1.5.1 and allows for backwards compatibility.

Example Ingress YAML with different pathTypes defined:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shopping-app
spec:
  rules:
  - http:
      paths:
      - path: /foo    # this would stay /foo since pathType is Exact
        pathType: Exact
      - path: /bar    # this would be converted to /bar* since pathType is Prefix
        pathType: Prefix
      - path: /baz    # this would stay /baz since pathType is ImplementationSpecific
        pathType: ImplementationSpecific
      - path: /buzz*  # this would stay /buzz* since pathType is ImplementationSpecific
        pathType: ImplementationSpecific
```

### Behavioural Change Notice

Starting with AGIC 1.5.1:

- AGIC will strip * from the path if PathType: Exact
- AGIC will append * to the path if PathType: Prefix

Before AGIC 1.5.1, the PathType property was ignored and path matching was performed using Application Gateway wildcard path patterns: paths prefixed with * were treated as Prefix matches, and paths without it were treated as Exact matches. To continue using the old behaviour, use the PathType: ImplementationSpecific match type in AGIC 1.5.1+ to ensure backwards compatibility.

Here is a table illustrating the corner cases where the behaviour has changed:

| AGIC Version | < 1.5.1 | < 1.5.1 | >= 1.5.1 | >= 1.5.1 |
| -- | -- | -- | -- | -- |
| PathType | Exact | Prefix | Exact | Prefix |
| Path | /foo* | /foo | /foo* | /foo |
| Applied Path | /foo* | /foo | /foo (* is stripped) | /foo* (* is appended) |

Example YAML illustrating the corner cases above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shopping-app
spec:
  rules:
  - http:
      paths:
      - path: /foo*  # this would be converted to /foo since pathType is Exact
        pathType: Exact
      - path: /bar   # this would be converted to /bar* since pathType is Prefix
        pathType: Prefix
      - path: /baz*  # this would stay /baz* since pathType is Prefix
        pathType: Prefix
```

### Mitigation

If you are affected by this change in path mapping, modify your ingress rules to use PathType: ImplementationSpecific to retain the old behaviour:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shopping-app
spec:
  rules:
  - http:
      paths:
      - path: /path*  # this would stay /path* since pathType is ImplementationSpecific
        pathType: ImplementationSpecific
```
# Logging Levels

NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

AGIC has three logging levels. Level 1 is the default and shows a minimal number of log lines. Level 5, on the other hand, displays all logs, including sanitized contents of the config applied to ARM.

The Kubernetes community has established 9 levels of logging for the kubectl tool. In this repository we are utilizing 3 of these, with similar semantics:

| Verbosity | Description |
| -- | -- |
| 1 | Default log level; shows startup details, warnings, and errors |
| 3 | Extended information about events and changes; lists of created objects |
| 5 | Logs marshaled objects; shows sanitized JSON config applied to ARM |

The verbosity levels are adjustable via the verbosityLevel variable in the helm-config.yaml file. To get the JSON config dispatched to ARM, increase the verbosity level to 5:

- add verbosityLevel: 5 on a line by itself in helm-config.yaml and re-install
- get logs with kubectl logs <pod name> -n <namespace>
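Putting those two steps together, here is a minimal sketch; the release name ingress-azure and the default namespace are assumptions, so substitute your own:

```bash
# Sketch: bump AGIC verbosity to 5 and read the resulting logs.
# "ingress-azure" and "default" are assumed release/namespace names.
echo "verbosityLevel: 5" >> helm-config.yaml
helm upgrade ingress-azure -f helm-config.yaml \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --namespace default

# Find the AGIC pod, then tail its logs.
kubectl get pods -n default
kubectl logs <agic-pod-name> -n default
```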
Increase the verbosity level to 5 to get the JSON config dispatched to ARM: add verbosityLevel: 5 on a line by itself in helm-config.yaml and re-install; then get logs with kubectl logs -n ","title":"Logging Levels"},{"location":"logging-levels/#logging-levels","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. AGIC has 3 logging levels. Level 1 is the default and shows a minimal number of log lines. Level 5, on the other hand, displays all logs, including the sanitized contents of the config applied to ARM. The Kubernetes community has established 9 levels of logging for the kubectl tool. In this repository we are utilizing 3 of these, with similar semantics: Verbosity Description 1 Default log level; shows startup details, warnings and errors 3 Extended information about events and changes; lists of created objects 5 Logs marshaled objects; shows sanitized JSON config applied to ARM The verbosity levels are adjustable via the verbosityLevel variable in the helm-config.yaml file. Increase the verbosity level to 5 to get the JSON config dispatched to ARM: add verbosityLevel: 5 on a line by itself in helm-config.yaml and re-install; then get logs with kubectl logs -n ","title":"Logging Levels"},{"location":"developers/build/","text":"Building the controller Running it locally Pre-requisite Obtain Azure Credentials Deploy Application Gateway and AKS Using startup script Visual Studio Code (F5 debugging) Run on a cluster using a Dev Release CMake options Running it locally This section outlines the environment variables and files necessary to successfully compile and run the Go binary, then connect it to an Azure Kubernetes Service . Pre-requisite go >= 1.13 OpenSSL Obtain Azure Credentials In order to run the Go binary locally and control a remote AKS cluster, you need Azure credentials. These will be stored in a JSON file in your home directory. Follow these instructions to create the $HOME/.azure/azureAuth.json file. The file is generated via: bash az ad sp create-for-rbac --sdk-auth > $HOME/.azure/azureAuth.json The file will contain a JSON blob with the following shape: json { \"clientId\": \"...\", \"clientSecret\": \"...\", \"subscriptionId\": \"\", \"tenantId\": \"...\", \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\", \"resourceManagerEndpointUrl\": \"https://management.azure.com/\", \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\", \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\", \"galleryEndpointUrl\": \"https://gallery.azure.com/\", \"managementEndpointUrl\": \"https://management.core.windows.net/\" } Deploy Application Gateway and AKS To deploy a fresh setup, please follow the steps for template deployment in the greenfield documentation. Using startup script In the scripts directory you will find start.sh . This script builds and runs the ingress controller on your local machine and connects to a remote AKS cluster. A .env file in the root of the repository is required. Steps to run the ingress controller: Get your cluster's credentials az aks get-credentials --name --resource-group Configure: cp .env.example .env and modify the environment variables in .env to match your config. 
Here is an example:

```bash
#!/bin/bash
export AZURE_AUTH_LOCATION=\"$HOME/.azure/azureAuth.json\"
export APPGW_RESOURCE_ID=\" \"
export KUBE_CONFIG_FILE=\"$HOME/.kube/config\"
export APPGW_VERBOSITY_LEVEL=\"9\"
```

Run: ./scripts/start.sh Cleanup: delete /home/vsonline/go/src/github.com/Azure/application-gateway-kubernetes-ingress/bin Compiling... Build SUCCEEDED ERROR: logging before flag.Parse: I0723 18:37:31.980903 6757 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL Version: 1.2.0; Commit: ef716c14; Date: 2020-07-23-18:37T+0000 ERROR: logging before flag.Parse: I0723 18:37:31.989656 6766 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL ERROR: logging before flag.Parse: I0723 18:37:31.989720 6766 main.go:78] Unable to load cloud provider config ''. Error: Reading Az Context file \"\" failed: open : no such file or directory E0723 18:37:31.999445 6766 context.go:210] Error fetching AGIC Pod (This may happen if AGIC is running in a test environment). Error: resource name may not be empty I0723 18:37:31.999466 6766 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. ... Visual Studio Code (F5 debugging) You can also set up vscode to run the project with F5 and use breakpoint debugging. For this, you need to set up your launch.json file within the .vscode folder. json { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Debug\", \"type\": \"go\", \"request\": \"launch\", \"mode\": \"debug\", \"program\": \"${workspaceFolder}/cmd/appgw-ingress\", \"env\": { \"APPGW_VERBOSITY_LEVEL\": \"9\", \"AZURE_AUTH_LOCATION\": \"/home//.azure/azureAuth.json\", \"APPGW_RESOURCE_ID\": \"\" }, \"args\": [ \"--kubeconfig=/home//.kube/config\", \"--in-cluster=false\" ] } ] } Create a Dev Release To test your changes on a cluster, you can use the Dev Release pipeline. Just select the build version from the drop-down list which matches the build in your PR or against your commit in the main branch. Dev Release generates a new docker image and helm package for your changes. Once the pipeline completes, use helm to install the release on your AKS cluster.

```bash
# add the staging helm repository
helm repo add staging https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/
helm repo update

# list the available versions and pick the latest version
helm search repo staging -l --devel
NAME CHART VERSION APP VERSION DESCRIPTION
staging/ingress-azure 10486 10486 Use Azure Application Gateway as the ingress fo...
staging/ingress-azure 10465 10465 Use Azure Application Gateway as the ingress fo...
staging/ingress-azure 10256 10256 Use Azure Application Gateway as the ingress fo...

# install/upgrade
helm install ingress-azure \
  -f helm-config.yaml \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --version 10486
```

You can also find the version by opening your build in the Merge Builds pipeline and looking for the buildid . Use this version when installing on the cluster after the Dev Release completes. CMake options This is a CMake-based project. 
Build targets include: ALL_BUILD (default target) builds appgw-ingress and dockerize target devenv builds a docker image with configured development environment vendor installs dependencies using go mod in a docker container with image from devenv target appgw-ingress builds the binary for this controller in a docker container with image from devenv target dockerize builds a docker image with the binary from appgw-ingress target dockerpush pushes the docker image to a container registry with prefix defined in CMake variable To run the CMake targets: mkdir build && cd build creates and enters a build directory cmake .. generates project configuration in the build directory cmake --build . to build the default target, or cmake --build . --target to specify a target to run from above","title":"Building the controller"},{"location":"developers/build/#building-the-controller","text":"Running it locally Pre-requisite Obtain Azure Credentials Deploy Application Gateway and AKS Using startup script Visual Studio Code (F5 debugging) Run on a cluster using a Dev Release CMake options","title":"Building the controller"},{"location":"developers/build/#running-it-locally","text":"This section outlines the environment variables and files necessary to successfully compile and run the Go binary, then connect it to an Azure Kubernetes Service .","title":"Running it locally"},{"location":"developers/build/#pre-requisite","text":"go >= 1.13 OpenSSL","title":"Pre-requisite"},{"location":"developers/build/#obtain-azure-credentials","text":"In order to run the Go binary locally and control a remote AKS cluster, you need Azure credentials. These will be stored in a JSON file in your home directory. Follow these instructions to create the $HOME/.azure/azureAuth.json file. The file is generated via: bash az ad sp create-for-rbac --sdk-auth > $HOME/.azure/azureAuth.json The file will contain a JSON blob with the following shape: json { \"clientId\": \"...\", \"clientSecret\": \"...\", \"subscriptionId\": \"\", \"tenantId\": \"...\", \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\", \"resourceManagerEndpointUrl\": \"https://management.azure.com/\", \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\", \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\", \"galleryEndpointUrl\": \"https://gallery.azure.com/\", \"managementEndpointUrl\": \"https://management.core.windows.net/\" }","title":"Obtain Azure Credentials"},{"location":"developers/build/#deploy-application-gateway-and-aks","text":"To deploy a fresh setup, please follow the steps for template deployment in the greenfield documentation.","title":"Deploy Application Gateway and AKS"},{"location":"developers/build/#using-startup-script","text":"In the scripts directory you will find start.sh . This script builds and runs the ingress controller on your local machine and connects to a remote AKS cluster. A .env file in the root of the repository is required. Steps to run the ingress controller: Get your cluster's credentials az aks get-credentials --name --resource-group Configure: cp .env.example .env and modify the environment variables in .env to match your config. 
Here is an example: ```","title":"Using startup script"},{"location":"developers/build/#binbash","text":"export AZURE_AUTH_LOCATION=\"$HOME/.azure/azureAuth.json\" export APPGW_RESOURCE_ID=\" \" export KUBE_CONFIG_FILE=\"$HOME/.kube/config\" export APPGW_VERBOSITY_LEVEL=\"9\" ``` Run: ./scripts/start.sh Cleanup: delete /home/vsonline/go/src/github.com/Azure/application-gateway-kubernetes-ingress/bin Compiling... Build SUCCEEDED ERROR: logging before flag.Parse: I0723 18:37:31.980903 6757 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL Version: 1.2.0; Commit: ef716c14; Date: 2020-07-23-18:37T+0000 ERROR: logging before flag.Parse: I0723 18:37:31.989656 6766 utils.go:115] Using verbosity level 9 from environment variable APPGW_VERBOSITY_LEVEL ERROR: logging before flag.Parse: I0723 18:37:31.989720 6766 main.go:78] Unable to load cloud provider config ''. Error: Reading Az Context file \"\" failed: open : no such file or directory E0723 18:37:31.999445 6766 context.go:210] Error fetching AGIC Pod (This may happen if AGIC is running in a test environment). Error: resource name may not be empty I0723 18:37:31.999466 6766 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. ...","title":"!/bin/bash"},{"location":"developers/build/#visual-studio-code-f5-debugging","text":"You can also set up vscode to run the project with F5 and use breakpoint debugging. For this, you need to set up your launch.json file within the .vscode folder. json { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Debug\", \"type\": \"go\", \"request\": \"launch\", \"mode\": \"debug\", \"program\": \"${workspaceFolder}/cmd/appgw-ingress\", \"env\": { \"APPGW_VERBOSITY_LEVEL\": \"9\", \"AZURE_AUTH_LOCATION\": \"/home//.azure/azureAuth.json\", \"APPGW_RESOURCE_ID\": \"\" }, \"args\": [ \"--kubeconfig=/home//.kube/config\", \"--in-cluster=false\" ] } ] }","title":"Visual Studio Code (F5 debugging)"},{"location":"developers/build/#create-a-dev-release","text":"To test your changes on a cluster, you can use the Dev Release pipeline. Just select the build version from the drop-down list which matches the build in your PR or against your commit in the main branch. Dev Release generates a new docker image and helm package for your changes. Once the pipeline completes, use helm to install the release on your AKS cluster. ```bash","title":"Create a Dev Release"},{"location":"developers/build/#add-the-staging-helm-repository","text":"helm repo add staging https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update","title":"add the staging helm repository"},{"location":"developers/build/#list-the-available-versions-and-pick-the-latest-version","text":"helm search repo staging -l --devel NAME CHART VERSION APP VERSION DESCRIPTION staging/ingress-azure 10486 10486 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10465 10465 Use Azure Application Gateway as the ingress fo... staging/ingress-azure 10256 10256 Use Azure Application Gateway as the ingress fo...","title":"list the available versions and pick the latest version"},{"location":"developers/build/#installupgrade","text":"helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 10486 ``` You can also find the version by opening your build in the Merge Builds pipeline and looking for the buildid . 
Use this version when installing on the cluster after the Dev Release completes.","title":"install/upgrade"},{"location":"developers/build/#cmake-options","text":"This is a CMake-based project. Build targets include: ALL_BUILD (default target) builds appgw-ingress and dockerize target devenv builds a docker image with configured development environment vendor installs dependencies using go mod in a docker container with image from devenv target appgw-ingress builds the binary for this controller in a docker container with image from devenv target dockerize builds a docker image with the binary from appgw-ingress target dockerpush pushes the docker image to a container registry with prefix defined in CMake variable To run the CMake targets: mkdir build && cd build creates and enters a build directory cmake .. generates project configuration in the build directory cmake --build . to build the default target, or cmake --build . --target to specify a target to run from above","title":"CMake options"},{"location":"developers/contribute/","text":"Contribution Guidelines This is a Golang project. You can find the build instructions of the project in the Developer Guide . This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com . When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.","title":"Contribution Guidelines"},{"location":"developers/contribute/#contribution-guidelines","text":"This is a Golang project. You can find the build instructions of the project in the Developer Guide . This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com . When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.","title":"Contribution Guidelines"},{"location":"developers/design/","text":"Application Gateway Ingress Controller Design (WIP) Document Purpose This document is the detailed design and architecture of the Application Gateway Ingress Controller (AGIC) being built in this repository. Overview Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. 
AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an App Gateway, so that selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer\u2019s AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to App Gateway specific configuration and applied to the Azure Resource Manager (ARM) . High-level architecture The AGIC is composed of the following three sub components: K8S Context and Informers - handles events from the cluster and alerts the worker Worker - handles events coming from the informer and performs relevant actions Application Gateway Config Builder - generates the new gateway configuration Components Let's take a look at each component: 1. K8s Context and Informers When any change is applied on the k8s cluster by the user, AGIC needs to listen to these changes in order to update the corresponding configuration on the Application Gateway. We use kubernetes informers for this purpose, which are the standard way of watching resources on the K8S API server. When AGIC starts, it sets up informers for watching the following resources: Ingress : This is the top-level resource that AGIC monitors. It provides information about the layer-7 routing rules that need to be configured on the App Gateway. Service : Service provides an abstraction over the pods to expose as a network service. AGIC uses the service as a logical grouping of pods to extract the IP addresses through the endpoints object created automatically along with the Service. Endpoints : Endpoints provides information about Pod IP Addresses behind a service and is used to populate AppGW's backend pool. Pod : Pod provides information about liveness and readiness probes, which are translated to health probes in App Gateway. AGIC only supports HTTP based liveness and readiness probes. Secret : This resource is for extracting SSL certificates when referenced in an ingress. This also triggers a change when the secret is updated. CRDs : AGIC has some custom resources for supporting specific features like prohibited target for sharing a gateway. When starting the informers, AGIC also provides event handlers for the create/update/delete operations on each resource. This handler is responsible for enqueuing an event . 2. Worker Worker is responsible for processing the events and performing updates. When Worker's Run function is called, it starts as a separate thread and waits on the Work channel. When an informer adds an event to the channel, the worker dequeues the event and checks whether the event is noise or is relevant. Events that are coming from unwatched namespaces and unreferenced pods/endpoints are skipped to reduce the churn. If the last worker loop ran less than 1 second ago, it sleeps for the remainder and wakes up to space out the updates. After this, the worker starts draining the rest of the events, calling the ProcessEvent function to process each event. The ProcessEvent function does the following: Checks if the Application Gateway is in Running or Starting operational state. Updates all ingress resources with the public/private IP address of the App Gateway. Generates a new config and updates the Application Gateway. 3. Application Gateway Config Builder This component is responsible for using the information in the local kubernetes cache and generating the corresponding Application Gateway configuration as an output. 
Worker invokes Build on this component, which then generates the various gateway sub-resources, starting from leaf sub-resources like probes and http settings up to the request routing rules . go func (c *appGwConfigBuilder) Build(cbCtx *ConfigBuilderContext) (*n.ApplicationGateway, error) { ... err := c.HealthProbesCollection(cbCtx) ... err = c.BackendHTTPSettingsCollection(cbCtx) ... err = c.BackendAddressPools(cbCtx) ... // generates SSL certificate, frontend ports and http listeners err = c.Listeners(cbCtx) ... // generates URL path maps and request routing rules err = c.RequestRoutingRules(cbCtx) ... return &c.appGw, nil }","title":"Application Gateway Ingress Controller Design (WIP)"},{"location":"developers/design/#application-gateway-ingress-controller-design-wip","text":"","title":"Application Gateway Ingress Controller Design (WIP)"},{"location":"developers/design/#document-purpose","text":"This document is the detailed design and architecture of the Application Gateway Ingress Controller (AGIC) being built in this repository.","title":"Document Purpose"},{"location":"developers/design/#overview","text":"Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an App Gateway, so that selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer\u2019s AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to App Gateway specific configuration and applied to the Azure Resource Manager (ARM) .","title":"Overview"},{"location":"developers/design/#high-level-architecture","text":"The AGIC is composed of the following three sub components: K8S Context and Informers - handles events from the cluster and alerts the worker Worker - handles events coming from the informer and performs relevant actions Application Gateway Config Builder - generates the new gateway configuration","title":"High-level architecture"},{"location":"developers/design/#components","text":"Let's take a look at each component:","title":"Components"},{"location":"developers/design/#1-k8s-context-and-informers","text":"When any change is applied on the k8s cluster by the user, AGIC needs to listen to these changes in order to update the corresponding configuration on the Application Gateway. We use kubernetes informers for this purpose, which are the standard way of watching resources on the K8S API server. When AGIC starts, it sets up informers for watching the following resources: Ingress : This is the top-level resource that AGIC monitors. It provides information about the layer-7 routing rules that need to be configured on the App Gateway. Service : Service provides an abstraction over the pods to expose as a network service. AGIC uses the service as a logical grouping of pods to extract the IP addresses through the endpoints object created automatically along with the Service. Endpoints : Endpoints provides information about Pod IP Addresses behind a service and is used to populate AppGW's backend pool. Pod : Pod provides information about liveness and readiness probes, which are translated to health probes in App Gateway. AGIC only supports HTTP based liveness and readiness probes. Secret : This resource is for extracting SSL certificates when referenced in an ingress. 
This also triggers a change when the secret is updated. CRDs : AGIC has some custom resources for supporting specific features like prohibited target for sharing a gateway. When starting the informers, AGIC also provides event handlers for the create/update/delete operations on each resource. This handler is responsible for enqueuing an event .","title":"1. K8s Context and Informers"},{"location":"developers/design/#2-worker","text":"Worker is responsible for processing the events and performing updates. When Worker's Run function is called, it starts as a separate thread and waits on the Work channel. When an informer adds an event to the channel, the worker dequeues the event and checks whether the event is noise or is relevant. Events that are coming from unwatched namespaces and unreferenced pods/endpoints are skipped to reduce the churn. If the last worker loop ran less than 1 second ago, it sleeps for the remainder and wakes up to space out the updates. After this, the worker starts draining the rest of the events, calling the ProcessEvent function to process each event. The ProcessEvent function does the following: Checks if the Application Gateway is in Running or Starting operational state. Updates all ingress resources with the public/private IP address of the App Gateway. Generates a new config and updates the Application Gateway.","title":"2. Worker"},{"location":"developers/design/#3-application-gateway-config-builder","text":"This component is responsible for using the information in the local kubernetes cache and generating the corresponding Application Gateway configuration as an output. Worker invokes Build on this component, which then generates the various gateway sub-resources, starting from leaf sub-resources like probes and http settings up to the request routing rules . go func (c *appGwConfigBuilder) Build(cbCtx *ConfigBuilderContext) (*n.ApplicationGateway, error) { ... err := c.HealthProbesCollection(cbCtx) ... err = c.BackendHTTPSettingsCollection(cbCtx) ... err = c.BackendAddressPools(cbCtx) ... // generates SSL certificate, frontend ports and http listeners err = c.Listeners(cbCtx) ... // generates URL path maps and request routing rules err = c.RequestRoutingRules(cbCtx) ... return &c.appGw, nil }","title":"3. Application Gateway Config Builder"},{"location":"developers/developer-guideline/","text":"Application Gateway Ingress Controller Development Guide Welcome to the Application Gateway Ingress Controller development guide! 
Table of contents Understanding the architecture Building and running the controller Installing the latest nightly build Running tests Contribution Guidelines","title":"Application Gateway Ingress Controller Development Guide"},{"location":"developers/developer-guideline/#application-gateway-ingress-controller-development-guide","text":"Welcome to the Application Gateway Ingress Controller development guide!","title":"Application Gateway Ingress Controller Development Guide"},{"location":"developers/developer-guideline/#table-of-contents","text":"Understanding the architecture Building and running the controller Installing the latest nightly build Running tests Contribution Guidelines","title":"Table of contents"},{"location":"developers/nightly/","text":"Install the latest nightly build To install the latest nightly release: Add the nightly helm repository bash helm repo add agic-nightly https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update Check the available version. You can look up the version in the repo using helm. bash helm search repo agic-nightly Install using the same helm command by using the staging repository. bash helm install ingress-azure \\ -f helm-config.yaml \\ agic-nightly/ingress-azure \\ --version ","title":"Install the latest nightly build"},{"location":"developers/nightly/#install-the-latest-nightly-build","text":"To install the latest nightly release: Add the nightly helm repository bash helm repo add agic-nightly https://appgwingress.blob.core.windows.net/ingress-azure-helm-package-staging/ helm repo update Check the available version. You can look up the version in the repo using helm. bash helm search repo agic-nightly Install using the same helm command by using the staging repository. bash helm install ingress-azure \\ -f helm-config.yaml \\ agic-nightly/ingress-azure \\ --version ","title":"Install the latest nightly build"},{"location":"developers/test/","text":"Testing the controller Unit Tests E2E Tests Testing Tips Unit Tests As is the convention in go, unit tests for the .go file you want to test live in the same folder and end with _test.go . We use the ginkgo / gomega testing framework for writing the tests. To execute the tests, use bash go test -v -tags unittest ./... E2E Tests E2E tests test specific scenarios against a real AKS and App Gateway setup with AGIC installed on it. E2E tests are automatically run every day at 3 AM using an E2E pipeline . If you have a cluster with AGIC installed, you can run the e2e tests simply by: bash go test -v -tags e2e ./... You can also execute run-e2e.sh , which is used in the E2E pipeline to invoke the tests. This script will install AGIC with the version provided. ```bash export version=\" \" export applicationGatewayId=\" \" export identityResourceId=\" \" export identityClientId=\" \" ./scripts/e2e/run-e2e.sh ``` Testing Tips If you just want to run a specific set of tests, then an easy way is to add F (Focus) to the It , Context , Describe directive in the test. 
For example: ```go FContext(\"Test obtaining a single certificate for an existing host\", func() { cb := newConfigBuilderFixture(nil) ingress := tests.NewIngressFixture() hostnameSecretIDMap := cb.newHostToSecretMap(ingress) actualSecret, actualSecretID := cb.getCertificate(ingress, host1, hostnameSecretIDMap) It(\"should have generated the expected secret\", func() { Expect(*actualSecret).To(Equal(\"eHl6\")) }) It(\"should have generated the correct secretID struct\", func() { Expect(*actualSecretID).To(Equal(expectedSecret)) }) }) ```","title":"Testing the controller"},{"location":"developers/test/#testing-the-controller","text":"Unit Tests E2E Tests Testing Tips","title":"Testing the controller"},{"location":"developers/test/#unit-tests","text":"As is the convention in go, unit tests for the .go file you want to test live in the same folder and end with _test.go . We use the ginkgo / gomega testing framework for writing the tests. To execute the tests, use bash go test -v -tags unittest ./...","title":"Unit Tests"},{"location":"developers/test/#e2e-tests","text":"E2E tests test specific scenarios against a real AKS and App Gateway setup with AGIC installed on it. E2E tests are automatically run every day at 3 AM using an E2E pipeline . If you have a cluster with AGIC installed, you can run the e2e tests simply by: bash go test -v -tags e2e ./... You can also execute run-e2e.sh , which is used in the E2E pipeline to invoke the tests. This script will install AGIC with the version provided. ```bash export version=\" \" export applicationGatewayId=\" \" export identityResourceId=\" \" export identityClientId=\" \" ./scripts/e2e/run-e2e.sh ```","title":"E2E Tests"},{"location":"developers/test/#testing-tips","text":"If you just want to run a specific set of tests, then an easy way is to add F (Focus) to the It , Context , Describe directive in the test. For example: ```go FContext(\"Test obtaining a single certificate for an existing host\", func() { cb := newConfigBuilderFixture(nil) ingress := tests.NewIngressFixture() hostnameSecretIDMap := cb.newHostToSecretMap(ingress) actualSecret, actualSecretID := cb.getCertificate(ingress, host1, hostnameSecretIDMap) It(\"should have generated the expected secret\", func() { Expect(*actualSecret).To(Equal(\"eHl6\")) }) It(\"should have generated the correct secretID struct\", func() { Expect(*actualSecretID).To(Equal(expectedSecret)) }) }) ```","title":"Testing Tips"},{"location":"features/agic-reconcile/","text":"Reconcile scenario (BETA) NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When an Application Gateway is deployed through an ARM template, a requirement is that the gateway configuration should contain a probe, listener, rule, backend pool and backend http setting. When such a template is re-deployed with minor changes (for example to WAF rules) on a Gateway that is being controlled by AGIC, all the AGIC-written rules are removed. Since such a change on the Application Gateway doesn\u2019t trigger any events on AGIC, AGIC doesn\u2019t reconcile the gateway back to the expected state. Solution To address the problem above, AGIC periodically checks if the latest gateway configuration is different from what it cached, and reconciles if needed to make the gateway configuration eventually correct. 
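For instance, a rough way to observe a reconcile from the outside (a sketch only; the gateway and resource group names are placeholders, and the rule count is just a convenient proxy for AGIC-written config):

```bash
# Count the request routing rules before an out-of-band ARM re-deployment.
az network application-gateway rule list \
  --gateway-name <appgw-name> \
  --resource-group <resource-group> \
  --query "length(@)" -o tsv

# Re-deploy the ARM template, wait at least reconcilePeriodSeconds,
# then run the same command again: AGIC should have restored its rules.
```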
How to configure reconcile There are two ways to configure AGIC reconcile via helm; to use the new feature, make sure the AGIC version is at least 1.2.0-rc1. Configure inside helm values.yaml reconcilePeriodSeconds: 30 means AGIC checks for reconciliation every 30 seconds. Acceptable values are between 30 and 300. Configure from helm command line Configure from the helm install command (first-time install) or the helm upgrade command; the helm version is v3.

```bash
# helm fresh install
helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0-rc3 --set reconcilePeriodSeconds=30

# helm upgrade
# --reuse-values, when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f.
helm upgrade oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --reuse-values --version 1.2.0-rc3 --set reconcilePeriodSeconds=30
```
","title":"Agic reconcile"},{"location":"features/agic-reconcile/#reconcile-scenario-beta","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When an Application Gateway is deployed through an ARM template, a requirement is that the gateway configuration should contain a probe, listener, rule, backend pool and backend http setting. When such a template is re-deployed with minor changes (for example to WAF rules) on a Gateway that is being controlled by AGIC, all the AGIC-written rules are removed. Since such a change on the Application Gateway doesn\u2019t trigger any events on AGIC, AGIC doesn\u2019t reconcile the gateway back to the expected state.","title":"Reconcile scenario (BETA)"},{"location":"features/agic-reconcile/#solution","text":"To address the problem above, AGIC periodically checks if the latest gateway configuration is different from what it cached, and reconciles if needed to make the gateway configuration eventually correct.","title":"Solution"},{"location":"features/agic-reconcile/#how-to-configure-reconcile","text":"There are two ways to configure AGIC reconcile via helm; to use the new feature, make sure the AGIC version is at least 1.2.0-rc1.","title":"How to configure reconcile"},{"location":"features/agic-reconcile/#configure-inside-helm-valuesyaml","text":"reconcilePeriodSeconds: 30 means AGIC checks for reconciliation every 30 seconds. 
Acceptable values are between 30 and 300.","title":"Configure inside helm values.yaml"},{"location":"features/agic-reconcile/#configure-from-helm-command-line","text":"Configure from the helm install command (first-time install) or the helm upgrade command; the helm version is v3 ```bash","title":"Configure from helm command line"},{"location":"features/agic-reconcile/#helm-fresh-install","text":"helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.2.0-rc3 --set reconcilePeriodSeconds=30","title":"helm fresh install"},{"location":"features/agic-reconcile/#help-upgrade","text":"","title":"helm upgrade"},{"location":"features/agic-reconcile/#-reuse-values-when-upgrading-reuse-the-last-releases-values-and-merge-in-any-overrides-from-the-command-line-via-set-and-f","text":"helm upgrade oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --reuse-values --version 1.2.0-rc3 --set reconcilePeriodSeconds=30 ```","title":"--reuse-values, when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f."},{"location":"features/appgw-ssl-certificate/","text":"Prerequisites NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document assumes you already have the following Azure tools and resources installed: AKS with Advanced Networking enabled App Gateway v2 in the same virtual network as AKS AAD Pod Identity installed on your AKS cluster Cloud Shell is the Azure shell environment, which has az CLI, kubectl , and helm installed. These tools are required for the commands below. Please use the Greenfield Deployment to install any of these that are missing. To use the new feature, make sure the AGIC version is at least 1.2.0-rc3 bash helm install oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure -f helm-config.yaml --version 1.2.0-rc3 --generate-name Create a certificate and configure the certificate to AppGw The certificate below should only be used for testing purposes.

```bash
appgwName=\"\"
resgp=\"\"

# generate certificate for testing
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -out test-cert.crt \
  -keyout test-cert.key \
  -subj \"/CN=test\"

openssl pkcs12 -export \
  -in test-cert.crt \
  -inkey test-cert.key \
  -passout pass:test \
  -out test-cert.pfx

# configure certificate to app gateway
az network application-gateway ssl-cert create \
  --resource-group $resgp \
  --gateway-name $appgwName \
  -n mysslcert \
  --cert-file test-cert.pfx \
  --cert-password \"test\"
```

Configure certificate from Key Vault to AppGw To configure a certificate from Key Vault to Application Gateway, a user-assigned managed identity will need to be created and assigned to AppGw; the managed identity will need to have GET secret access to KeyVault. 
```bash
# Configure your resources
appgwName=\"\"
resgp=\"\"
vaultName=\"\"
location=\"\"
aksClusterName=\"\"
aksResourceGroupName=\"\"

# IMPORTANT: the following way to retrieve the object id of the AGIC managed identity
# only applies when AGIC is deployed via the AGIC addon for AKS

# get the resource group name of the AKS cluster
nrg=$(az aks show --name $aksClusterName --resource-group $aksResourceGroupName --query nodeResourceGroup --output tsv)

# get principalId of the AGIC managed identity
identityName=\"ingressapplicationgateway-$aksClusterName\"
agicIdentityPrincipalId=$(az identity show --name $identityName --resource-group $nrg --query principalId --output tsv)

# One time operation, create Azure key vault and certificate (can be done through the portal as well)
az keyvault create -n $vaultName -g $resgp --enable-soft-delete -l $location

# One time operation, create user-assigned managed identity
az identity create -n appgw-id -g $resgp -l $location
identityID=$(az identity show -n appgw-id -g $resgp -o tsv --query \"id\")
identityPrincipal=$(az identity show -n appgw-id -g $resgp -o tsv --query \"principalId\")

# One time operation, assign AGIC identity to have operator access over AppGw identity
az role assignment create --role \"Managed Identity Operator\" --assignee $agicIdentityPrincipalId --scope $identityID

# One time operation, assign the identity to Application Gateway
az network application-gateway identity assign \
  --gateway-name $appgwName \
  --resource-group $resgp \
  --identity $identityID

# One time operation, assign the identity GET secret access to Azure Key Vault
az keyvault set-policy \
  -n $vaultName \
  -g $resgp \
  --object-id $identityPrincipal \
  --secret-permissions get

# For each new certificate, create a cert on keyvault and add unversioned secret id to Application Gateway
az keyvault certificate create \
  --vault-name $vaultName \
  -n mycert \
  -p \"$(az keyvault certificate get-default-policy)\"
versionedSecretId=$(az keyvault certificate show -n mycert --vault-name $vaultName --query \"sid\" -o tsv)
unversionedSecretId=$(echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url

# For each new certificate, add the certificate to AppGw
az network application-gateway ssl-cert create \
  -n mykvsslcert \
  --gateway-name $appgwName \
  --resource-group $resgp \
  --key-vault-secret-id $unversionedSecretId # ssl certificate with name \"mykvsslcert\" will be configured on AppGw
```

Testing the key vault certificate on Ingress Since we have a certificate from Key Vault configured in Application Gateway, we can then add the new annotation appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert in the Kubernetes ingress to enable the feature. 
```bash
# install an app
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: aspnetapp
  labels:
    app: aspnetapp
spec:
  containers:
  - image: \"mcr.microsoft.com/dotnet/samples:aspnetapp\"
    name: aspnetapp-image
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp
spec:
  selector:
    app: aspnetapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: aspnetapp
            port:
              number: 80
        pathType: Exact
EOF
```
","title":"Appgw ssl certificate"},{"location":"features/appgw-ssl-certificate/#prerequisites","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This document assumes you already have the following Azure tools and resources installed: AKS with Advanced Networking enabled App Gateway v2 in the same virtual network as AKS AAD Pod Identity installed on your AKS cluster Cloud Shell is the Azure shell environment, which has az CLI, kubectl , and helm installed. These tools are required for the commands below. Please use the Greenfield Deployment to install any of these that are missing. To use the new feature, make sure the AGIC version is at least 1.2.0-rc3 bash helm install oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure -f helm-config.yaml --version 1.2.0-rc3 --generate-name","title":"Prerequisites"},{"location":"features/appgw-ssl-certificate/#create-a-certificate-and-configure-the-certificate-to-appgw","text":"The certificate below should only be used for testing purposes. ```bash appgwName=\"\" resgp=\"\"","title":"Create a certificate and configure the certificate to AppGw"},{"location":"features/appgw-ssl-certificate/#generate-certificate-for-testing","text":"openssl req -x509 -nodes -days 365 -newkey rsa:2048 \\ -out test-cert.crt \\ -keyout test-cert.key \\ -subj \"/CN=test\" openssl pkcs12 -export \\ -in test-cert.crt \\ -inkey test-cert.key \\ -passout pass:test \\ -out test-cert.pfx","title":"generate certificate for testing"},{"location":"features/appgw-ssl-certificate/#configure-certificate-to-app-gateway","text":"az network application-gateway ssl-cert create \\ --resource-group $resgp \\ --gateway-name $appgwName \\ -n mysslcert \\ --cert-file test-cert.pfx \\ --cert-password \"test\" ```","title":"configure certificate to app gateway"},{"location":"features/appgw-ssl-certificate/#configure-certificate-from-key-vault-to-appgw","text":"To configure a certificate from Key Vault to Application Gateway, a user-assigned managed identity will need to be created and assigned to AppGw; the managed identity will need to have GET secret access to KeyVault. 
```bash","title":"Configure certificate from Key Vault to AppGw"},{"location":"features/appgw-ssl-certificate/#configure-your-resources","text":"appgwName=\"\" resgp=\"\" vaultName=\"\" location=\"\" aksClusterName=\"\" aksResourceGroupName=\"\" appgwName=\"\"","title":"Configure your resources"},{"location":"features/appgw-ssl-certificate/#important-the-following-way-to-retrieve-the-object-id-of-the-agic-managed-identity","text":"","title":"IMPORTANT: the following way to retrieve the object id of the AGIC managed identity"},{"location":"features/appgw-ssl-certificate/#only-applies-when-agic-is-deployed-via-the-agic-addon-for-aks","text":"","title":"only applies when AGIC is deployed via the AGIC addon for AKS"},{"location":"features/appgw-ssl-certificate/#get-the-resource-group-name-of-the-aks-cluster","text":"nrg=$(az aks show --name $aksClusterName --resource-group $aksResourceGroupName --query nodeResourceGroup --output tsv)","title":"get the resource group name of the AKS cluster"},{"location":"features/appgw-ssl-certificate/#get-principalid-of-the-agic-managed-identity","text":"identityName=\"ingressapplicationgateway- aksClusterName\" agicIdentityPrincipalId= aksClusterName\" agicIdentityPrincipalId= (az identity show --name $identityName --resource-group $nrg --query principalId --output tsv)","title":"get principalId of the AGIC managed identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-create-azure-key-vault-and-certificate-can-done-through-portal-as-well","text":"az keyvault create -n $vaultName -g $resgp --enable-soft-delete -l $location","title":"One time operation, create Azure key vault and certificate (can done through portal as well)"},{"location":"features/appgw-ssl-certificate/#one-time-operation-create-user-assigned-managed-identity","text":"az identity create -n appgw-id -g $resgp -l location identityID= location identityID= (az identity show -n appgw-id -g resgp -o tsv --query \"id\") identityPrincipal= resgp -o tsv --query \"id\") identityPrincipal= (az identity show -n appgw-id -g $resgp -o tsv --query \"principalId\")","title":"One time operation, create user-assigned managed identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-agic-identity-to-have-operator-access-over-appgw-identity","text":"az role assignment create --role \"Managed Identity Operator\" --assignee $agicIdentityPrincipalId --scope $identityID","title":"One time operation, assign AGIC identity to have operator access over AppGw identity"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-the-identity-to-application-gateway","text":"az network application-gateway identity assign \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --identity $identityID","title":"One time operation, assign the identity to Application Gateway"},{"location":"features/appgw-ssl-certificate/#one-time-operation-assign-the-identity-get-secret-access-to-azure-key-vault","text":"az keyvault set-policy \\ -n $vaultName \\ -g $resgp \\ --object-id $identityPrincipal \\ --secret-permissions get","title":"One time operation, assign the identity GET secret access to Azure Key Vault"},{"location":"features/appgw-ssl-certificate/#for-each-new-certificate-create-a-cert-on-keyvault-and-add-unversioned-secret-id-to-application-gateway","text":"az keyvault certificate create \\ --vault-name vaultName \\ -n mycert \\ -p \" vaultName \\ -n mycert \\ -p \" (az keyvault certificate get-default-policy)\" versionedSecretId=$(az keyvault certificate show -n 
mycert --vault-name vaultName --query \"sid\" -o tsv) unversionedSecretId= vaultName --query \"sid\" -o tsv) unversionedSecretId= (echo $versionedSecretId | cut -d'/' -f-5) # remove the version from the url","title":"For each new certificate, create a cert on keyvault and add unversioned secret id to Application Gateway"},{"location":"features/appgw-ssl-certificate/#for-each-new-certificate-add-the-certificate-to-appgw","text":"az network application-gateway ssl-cert create \\ -n mykvsslcert \\ --gateway-name $appgwName \\ --resource-group $resgp \\ --key-vault-secret-id $unversionedSecretId # ssl certificate with name \"mykvsslcert\" will be configured on AppGw ```","title":"For each new certificate, Add the certificate to AppGw"},{"location":"features/appgw-ssl-certificate/#testing-the-key-vault-certificate-on-ingress","text":"Since we have certificate from Key Vault configured in Application Gateway, we can then add the new annotation appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert in Kubernetes ingress to enable the feature. ```bash","title":"Testing the key vault certificate on Ingress"},{"location":"features/appgw-ssl-certificate/#install-an-app","text":"cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: aspnetapp labels: app: aspnetapp spec: containers: - image: \"mcr.microsoft.com/dotnet/samples:aspnetapp\" name: aspnetapp-image ports: - containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: aspnetapp spec: selector: app: aspnetapp ports: - protocol: TCP port: 80 targetPort: 80 apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: aspnetapp annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: mykvsslcert spec: rules: - http: paths: - path: / backend: service: name: aspnetapp port: number: 80 pathType: Exact EOF ```","title":"install an app"},{"location":"features/cookie-affinity/","text":"Enable Cookie based Affinity NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on cookie based affinity for Application Gateway for Containers may be found here . As outlined in the Azure Application Gateway Documentation , Application Gateway supports cookie based affinity enabling which it can direct subsequent traffic from a user session to the same server for processing. Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - backend: service: name: frontend port: number: 80","title":"Cookie affinity"},{"location":"features/cookie-affinity/#enable-cookie-based-affinity","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on cookie based affinity for Application Gateway for Containers may be found here . 
As outlined in the Azure Application Gateway Documentation , Application Gateway supports cookie based affinity; with it enabled, the gateway can direct subsequent traffic from a user session to the same server for processing. Example yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - backend: service: name: frontend port: number: 80","title":"Cookie affinity"},{"location":"features/cookie-affinity/#enable-cookie-based-affinity","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on cookie based affinity for Application Gateway for Containers may be found here . As outlined in the Azure Application Gateway Documentation , Application Gateway supports cookie based affinity; with it enabled, the gateway can direct subsequent traffic from a user session to the same server for processing.","title":"Enable Cookie based Affinity"},{"location":"features/cookie-affinity/#example","text":"yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/cookie-based-affinity: \"true\" spec: rules: - http: paths: - backend: service: name: frontend port: number: 80","title":"Example"},{"location":"features/custom-ingress-class/","text":"Custom Ingress Class NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Minimum version: 1.3.0 Custom ingress class allows you to customize the ingress class selector that AGIC will use when filtering the ingress manifests. AGIC uses azure/application-gateway as the default ingress class. This will allow you to target multiple AGICs on a single namespace as each AGIC can now use its own ingress class. For instance, AGIC with ingress class agic-public can serve public traffic, and AGIC with agic-private can serve \"internal\" traffic. To use a custom ingress class, install AGIC by providing a value for kubernetes.ingressClass in the helm config. bash helm install ./helm/ingress-azure \\ --name ingress-azure \\ -f helm-config.yaml --set kubernetes.ingressClass=arbitrary-class Then, within the spec object, specify ingressClassName with the same value provided to AGIC. 
yaml kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag spec: ingressClassName: arbitrary-class rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80 Reference Proposal Document","title":"Custom Ingress Class"},{"location":"features/custom-ingress-class/#custom-ingress-class","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Minimum version: 1.3.0 Custom ingress class allows you to customize the ingress class selector that AGIC will use when filtering the ingress manifests. AGIC uses azure/application-gateway as the default ingress class. This will allow you to target multiple AGICs on a single namespace as each AGIC can now use its own ingress class. For instance, AGIC with ingress class agic-public can serve public traffic, and AGIC with agic-private can serve \"internal\" traffic. To use a custom ingress class, install AGIC by providing a value for kubernetes.ingressClass in the helm config. bash helm install ./helm/ingress-azure \\ --name ingress-azure \\ -f helm-config.yaml --set kubernetes.ingressClass=arbitrary-class Then, within the spec object, specify ingressClassName with the same value provided to AGIC. yaml kind: Ingress metadata: name: go-server-ingress-affinity namespace: test-ag spec: ingressClassName: arbitrary-class rules: - http: paths: - path: /hello/ backend: service: name: store-service port: number: 80","title":"Custom Ingress Class"},{"location":"features/custom-ingress-class/#reference","text":"Proposal Document","title":"Reference"},{"location":"features/multiple-namespaces/","text":"Multiple Namespace Support NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Motivation Kubernetes Namespaces make it possible for a Kubernetes cluster to be partitioned and allocated to sub-groups of a larger team. These sub-teams can then deploy and manage infrastructure with finer controls of resources, security, configuration etc. Kubernetes allows for one or more ingress resources to be defined independently within each namespace. As of version 0.7 Azure Application Gateway Kubernetes IngressController (AGIC) can ingest events from and observe multiple namespaces. 
Conflicting Configurations Multiple namespaced ingress resources could instruct AGIC to create conflicting configurations for a single App Gateway. (Two ingresses claiming the same domain, for instance.) At the top of the hierarchy - listeners (IP address, port, and host) and routing rules (binding listener, backend pool and HTTP settings) could be created and shared by multiple namespaces/ingresses. On the other hand - paths, backend pools, HTTP settings, and TLS certificates could be created by one namespace only, and duplicates will be removed. For example, consider the following duplicate ingress resources defined in namespaces staging and production for www.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: staging annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: production annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 Despite the two ingress resources demanding traffic for www.contoso.com to be routed to the respective Kubernetes namespaces, only one backend can service the traffic. AGIC would create a configuration on a \"first come, first served\" basis for one of the resources. If two ingress resources are created at the same time, the one earlier in the alphabet will take precedence. From the example above, we will only be able to create settings for the production ingress. App Gateway will be configured with the following resources: Listener: fl-www.contoso.com-80 Routing Rule: rr-www.contoso.com-80 Backend Pool: pool-production-contoso-web-service-80-bp-80 HTTP Settings: bp-production-contoso-web-service-80-80-websocket-ingress Health Probe: pb-production-contoso-web-service-80-websocket-ingress Note that except for listener and routing rule , the App Gateway resources created include the name of the namespace ( production ) for which they were created. If the two ingress resources are introduced into the AKS cluster at different points in time, it is likely for AGIC to end up in a scenario where it reconfigures App Gateway and re-routes traffic from namespace-B to namespace-A . For example, if you added staging first, AGIC will configure App Gateway to route traffic to the staging backend pool. At a later stage, introducing the production ingress will cause AGIC to reprogram App Gateway, which will start routing traffic to the production backend pool. Restricting Access to Namespaces By default, AGIC will configure App Gateway based on annotated Ingress within any namespace. Should you want to limit this behaviour, you have the following options: limit the namespaces, by explicitly defining namespaces AGIC should observe via the watchNamespace YAML key in helm-config.yaml use Role/RoleBinding to limit AGIC to specific namespaces","title":"Multiple Namespace Support"},{"location":"features/multiple-namespaces/#multiple-namespace-support","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Multiple Namespace Support"},{"location":"features/multiple-namespaces/#motivation","text":"Kubernetes Namespaces make it possible for a Kubernetes cluster to be partitioned and allocated to sub-groups of a larger team. These sub-teams can then deploy and manage infrastructure with finer controls of resources, security, configuration, etc. Kubernetes allows for one or more ingress resources to be defined independently within each namespace. As of version 0.7, Azure Application Gateway Kubernetes IngressController (AGIC) can ingest events from and observe multiple namespaces.
Should the AKS administrator decide to use App Gateway as an ingress, all namespaces will use the same instance of App Gateway. A single installation of Ingress Controller will monitor accessible namespaces and will configure the App Gateway it is associated with. Version 0.7 of AGIC will continue to exclusively observe the default namespace, unless this is explicitly changed to one or more different namespaces in the Helm configuration (see section below).","title":"Motivation"},{"location":"features/multiple-namespaces/#enable-multiple-namespace-support","text":"To enable multiple namespace support: modify the helm-config.yaml file in one of the following ways: delete the watchNamespace key entirely from helm-config.yaml - AGIC will observe all namespaces set watchNamespace to an empty string - AGIC will observe all namespaces add multiple namespaces separated by a comma ( watchNamespace: default,secondNamespace ) - AGIC will observe these namespaces exclusively apply Helm template changes with: helm install -f helm-config.yaml oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure Once deployed with the ability to observe multiple namespaces, AGIC will: list ingress resources from all accessible namespaces filter to ingress resources annotated with kubernetes.io/ingress.class: azure/application-gateway compose combined App Gateway config apply the config to the associated App Gateway via ARM","title":"Enable multiple namespace support"},{"location":"features/multiple-namespaces/#conflicting-configurations","text":"Multiple namespaced ingress resources could instruct AGIC to create conflicting configurations for a single App Gateway. (Two ingresses claiming the same domain, for instance.) At the top of the hierarchy - listeners (IP address, port, and host) and routing rules (binding listener, backend pool and HTTP settings) could be created and shared by multiple namespaces/ingresses. On the other hand - paths, backend pools, HTTP settings, and TLS certificates could be created by one namespace only, and duplicates will be removed. For example, consider the following duplicate ingress resources defined in namespaces staging and production for www.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: staging annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: production annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: www.contoso.com http: paths: - backend: service: name: web-service port: number: 80 Despite the two ingress resources demanding traffic for www.contoso.com to be routed to the respective Kubernetes namespaces, only one backend can service the traffic. AGIC would create a configuration on a \"first come, first served\" basis for one of the resources. If two ingress resources are created at the same time, the one earlier in the alphabet will take precedence. From the example above, we will only be able to create settings for the production ingress.
App Gateway will be configured with the following resources: Listener: fl-www.contoso.com-80 Routing Rule: rr-www.contoso.com-80 Backend Pool: pool-production-contoso-web-service-80-bp-80 HTTP Settings: bp-production-contoso-web-service-80-80-websocket-ingress Health Probe: pb-production-contoso-web-service-80-websocket-ingress Note that except for listener and routing rule , the App Gateway resources created include the name of the namespace ( production ) for which they were created. If the two ingress resources are introduced into the AKS cluster at different points in time, it is likely for AGIC to end up in a scenario where it reconfigures App Gateway and re-routes traffic from namespace-B to namespace-A . For example, if you added staging first, AGIC will configure App Gateway to route traffic to the staging backend pool. At a later stage, introducing the production ingress will cause AGIC to reprogram App Gateway, which will start routing traffic to the production backend pool.","title":"Conflicting Configurations"},{"location":"features/multiple-namespaces/#restricting-access-to-namespaces","text":"By default, AGIC will configure App Gateway based on annotated Ingress within any namespace. Should you want to limit this behaviour, you have the following options: limit the namespaces, by explicitly defining namespaces AGIC should observe via the watchNamespace YAML key in helm-config.yaml use Role/RoleBinding to limit AGIC to specific namespaces","title":"Restricting Access to Namespaces"},{"location":"features/private-ip/","text":"Using Private IP for internal routing This feature allows you to expose the ingress endpoint within the Virtual Network using a private IP. Pre-requisites Application Gateway with a Private IP configuration There are two ways to configure the controller to use Private IP for ingress: Assign to a particular ingress To expose a particular ingress over Private IP, use the annotation appgw.ingress.kubernetes.io/use-private-ip in the Ingress. Usage yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" For App Gateways without a Private IP, Ingresses annotated with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will be indicated in the ingress event and AGIC pod log. Error as indicated in the Ingress Event bash Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Error as indicated in AGIC Logs bash E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Assign Globally If the requirement is to restrict all Ingresses to be exposed over Private IP, use appgw.usePrivateIP: true in the helm config. Usage yaml appgw: subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName> usePrivateIP: true This will make the ingress controller filter the ipconfigurations for a Private IP when configuring the frontend listeners on the Application Gateway. AGIC will panic and crash if usePrivateIP: true and no Private IP is assigned. Notes: Application Gateway v2 SKU requires a Public IP.
Should you require Application Gateway to be private, attach a Network Security Group to the Application Gateway's subnet to restrict traffic.","title":"Using Private IP for internal routing"},{"location":"features/private-ip/#using-private-ip-for-internal-routing","text":"This feature allows you to expose the ingress endpoint within the Virtual Network using a private IP. Pre-requisites Application Gateway with a Private IP configuration There are two ways to configure the controller to use Private IP for ingress:","title":"Using Private IP for internal routing"},{"location":"features/private-ip/#assign-to-a-particular-ingress","text":"To expose a particular ingress over Private IP, use the annotation appgw.ingress.kubernetes.io/use-private-ip in the Ingress.","title":"Assign to a particular ingress"},{"location":"features/private-ip/#usage","text":"yaml appgw.ingress.kubernetes.io/use-private-ip: \"true\" For App Gateways without a Private IP, Ingresses annotated with appgw.ingress.kubernetes.io/use-private-ip: \"true\" will be ignored. This will be indicated in the ingress event and AGIC pod log. Error as indicated in the Ingress Event bash Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address Error as indicated in AGIC Logs bash E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address","title":"Usage"},{"location":"features/private-ip/#assign-globally","text":"If the requirement is to restrict all Ingresses to be exposed over Private IP, use appgw.usePrivateIP: true in the helm config.","title":"Assign Globally"},{"location":"features/private-ip/#usage_1","text":"yaml appgw: subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName> usePrivateIP: true This will make the ingress controller filter the ipconfigurations for a Private IP when configuring the frontend listeners on the Application Gateway. AGIC will panic and crash if usePrivateIP: true and no Private IP is assigned. Notes: Application Gateway v2 SKU requires a Public IP. Should you require Application Gateway to be private, attach a Network Security Group to the Application Gateway's subnet to restrict traffic.","title":"Usage"},{"location":"features/probes/","text":"Adding Health Probes to your service NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on custom health probes in Application Gateway for Containers may be found here: https://learn.microsoft.com/azure/application-gateway/for-containers/custom-health-probe By default, the Ingress controller will provision an HTTP GET probe for the exposed pods. The probe properties can be customized by adding a Readiness or Liveness Probe to your deployment / pod spec.
With readinessProbe or livenessProbe yaml apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp spec: replicas: 3 template: metadata: labels: service: site spec: containers: - name: aspnetapp image: mcr.microsoft.com/dotnet/samples:aspnetapp imagePullPolicy: IfNotPresent ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 periodSeconds: 3 timeoutSeconds: 1 Kubernetes API Reference: Container Probes HttpGet Action Note: readinessProbe and livenessProbe are supported when configured with httpGet . Probing on a port other than the one exposed on the pod is currently not supported. HttpHeaders , InitialDelaySeconds , SuccessThreshold are not supported. Without readinessProbe or livenessProbe If the above probes are not provided, then the Ingress Controller makes an assumption that the service is reachable on the Path specified for the backend-path-prefix annotation or on the path specified in the ingress definition for the service. Default Values for Health Probe For any property that cannot be inferred from the readiness/liveness probe, default values are set. Application Gateway probe property defaults: Path - / ; Host - localhost ; Protocol - HTTP ; Timeout - 30 ; Interval - 30 ; UnhealthyThreshold - 3","title":"Probes"},{"location":"features/probes/#adding-health-probes-to-your-service","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Details on custom health probes in Application Gateway for Containers may be found here: https://learn.microsoft.com/azure/application-gateway/for-containers/custom-health-probe By default, the Ingress controller will provision an HTTP GET probe for the exposed pods. The probe properties can be customized by adding a Readiness or Liveness Probe to your deployment / pod spec.","title":"Adding Health Probes to your service"},{"location":"features/probes/#with-readinessprobe-or-livenessprobe","text":"yaml apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp spec: replicas: 3 template: metadata: labels: service: site spec: containers: - name: aspnetapp image: mcr.microsoft.com/dotnet/samples:aspnetapp imagePullPolicy: IfNotPresent ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 periodSeconds: 3 timeoutSeconds: 1 Kubernetes API Reference: Container Probes HttpGet Action Note: readinessProbe and livenessProbe are supported when configured with httpGet . Probing on a port other than the one exposed on the pod is currently not supported. HttpHeaders , InitialDelaySeconds , SuccessThreshold are not supported.","title":"With readinessProbe or livenessProbe"},{"location":"features/probes/#without-readinessprobe-or-livenessprobe","text":"If the above probes are not provided, then the Ingress Controller makes an assumption that the service is reachable on the Path specified for the backend-path-prefix annotation or on the path specified in the ingress definition for the service.","title":"Without readinessProbe or livenessProbe"},{"location":"features/probes/#default-values-for-health-probe","text":"For any property that cannot be inferred from the readiness/liveness probe, default values are set.
Application Gateway probe property defaults: Path - / ; Host - localhost ; Protocol - HTTP ; Timeout - 30 ; Interval - 30 ; UnhealthyThreshold - 3","title":"Default Values for Health Probe"},{"location":"features/rewrite-rule-set-custom-resource/","text":"Rewrite Rule Set Custom Resource (supported since 1.6.0-rc1) NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. URL Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Header Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Note: This feature is supported since 1.6.0-rc1. Please use appgw.ingress.kubernetes.io/rewrite-rule-set , which allows using an existing rewrite rule set on Application Gateway. Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs and query string parameters, as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information. Rewrite Rule Set Custom Resource brings this feature to AGIC. HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS / X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers. With URL rewrite capability, you can: Rewrite the host name, path and query string of the request URL Choose to rewrite the URL of all requests or only those requests which match one or more of the conditions you set. These conditions are based on the request and response properties (request header, response header and server variables). Choose to route the request based on either the original URL or the rewritten URL Usage To use the feature, the customer must define a Custom Resource of the type AzureApplicationGatewayRewrite , which must have a name in the metadata section. The ingress manifest must reference this Custom Resource via the appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource annotation. Important points to note metadata & name In the metadata section, the name of the AzureApplicationGatewayRewrite custom resource should match the name referenced in the annotation. RuleSequence The rule sequence must be unique for every rewrite rule. Conditions You can use rewrite conditions, an optional configuration, to evaluate the content of HTTP(S) requests and responses and perform a rewrite only when one or more conditions are met. The following types of variables can be used to define a condition: HTTP headers in the request HTTP headers in the response Application Gateway server variables Note: While defining conditions, request headers must be prefixed with http_req_ , response headers must be prefixed with http_res_ , and the list of server variables can be found here Actions You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite, and the new value to which you intend to rewrite them.
The value of a URL or a new or existing header can be set to these types of values: Text Request header Response header Server Variable Combination of any of the above Note: To specify a request header, you need to use the syntax http_req_headerName To specify a response header, you need to use the syntax http_resp_headerName To specify a server variable, you need to use the syntax var_serverVariable . See the list of supported server variables here URL Rewrite Configuration URL path: The value to which the path is to be rewritten. URL Query String: The value to which the query string is to be rewritten. Re-evaluate path map: Used to determine whether the URL path map is to be re-evaluated or not. If set to false , the original URL path will be used to match the path-pattern in the URL path map. If set to true , the URL path map will be re-evaluated to check the match with the rewritten path. Recommended: More information about Application Gateway's Rewrite feature can be found here Example ```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set-custom-resource spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set-custom-resource spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Rewrite rule set custom resource"},{"location":"features/rewrite-rule-set-custom-resource/#rewrite-rule-set-custom-resource-supported-since-160-rc1","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. URL Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Header Rewrite rules for Application Gateway for Containers may be found here for Gateway API and here for Ingress API . Note: This feature is supported since 1.6.0-rc1. Please use appgw.ingress.kubernetes.io/rewrite-rule-set , which allows using an existing rewrite rule set on Application Gateway. Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs and query string parameters, as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information. Rewrite Rule Set Custom Resource brings this feature to AGIC. HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS / X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
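As an illustration of a header rewrite, the X-Forwarded-For cleanup mentioned above could be expressed as a rule fragment like the sketch below. This is an assumption-laden sketch: it presumes the custom resource passes Application Gateway's {var_*} server-variable syntax through in headerValue , and the rule name and sequence are arbitrary.

```yaml
# Illustrative AzureApplicationGatewayRewrite fragment (assumption: {var_*}
# syntax is forwarded verbatim to Application Gateway's rewrite engine)
rewriteRules:
- name: strip-xff-port
  ruleSequence: 50
  actions:
    requestHeaderConfigurations:
    - actionType: set
      headerName: X-Forwarded-For
      # add_x_forwarded_for_proxy is App Gateway's X-Forwarded-For value without ports
      headerValue: "{var_add_x_forwarded_for_proxy}"
```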
With URL rewrite capability, you can: Rewrite the host name, path and query string of the request URL Choose to rewrite the URL of all requests or only those requests which match one or more of the conditions you set. These conditions are based on the request and response properties (request header, response header and server variables). Choose to route the request based on either the original URL or the rewritten URL","title":"Rewrite Rule Set Custom Resource (supported since 1.6.0-rc1)"},{"location":"features/rewrite-rule-set-custom-resource/#usage","text":"To use the feature, the customer must define a Custom Resource of the type AzureApplicationGatewayRewrite , which must have a name in the metadata section. The ingress manifest must reference this Custom Resource via the appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource annotation.","title":"Usage"},{"location":"features/rewrite-rule-set-custom-resource/#important-points-to-note","text":"","title":"Important points to note"},{"location":"features/rewrite-rule-set-custom-resource/#metadata-name","text":"In the metadata section, the name of the AzureApplicationGatewayRewrite custom resource should match the name referenced in the annotation.","title":"metadata & name"},{"location":"features/rewrite-rule-set-custom-resource/#rulesequence","text":"The rule sequence must be unique for every rewrite rule.","title":"RuleSequence"},{"location":"features/rewrite-rule-set-custom-resource/#conditions","text":"You can use rewrite conditions, an optional configuration, to evaluate the content of HTTP(S) requests and responses and perform a rewrite only when one or more conditions are met. The following types of variables can be used to define a condition: HTTP headers in the request HTTP headers in the response Application Gateway server variables Note: While defining conditions, request headers must be prefixed with http_req_ , response headers must be prefixed with http_res_ , and the list of server variables can be found here","title":"Conditions"},{"location":"features/rewrite-rule-set-custom-resource/#actions","text":"You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite, and the new value to which you intend to rewrite them. The value of a URL or a new or existing header can be set to these types of values: Text Request header Response header Server Variable Combination of any of the above Note: To specify a request header, you need to use the syntax http_req_headerName To specify a response header, you need to use the syntax http_resp_headerName To specify a server variable, you need to use the syntax var_serverVariable . See the list of supported server variables here","title":"Actions"},{"location":"features/rewrite-rule-set-custom-resource/#url-rewrite-configuration","text":"URL path: The value to which the path is to be rewritten. URL Query String: The value to which the query string is to be rewritten. Re-evaluate path map: Used to determine whether the URL path map is to be re-evaluated or not. If set to false , the original URL path will be used to match the path-pattern in the URL path map. If set to true , the URL path map will be re-evaluated to check the match with the rewritten path.
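To make the re-evaluation flag concrete, a minimal sketch (rule name, sequence, and paths are illustrative) that rewrites /shop/ to /store/ while still matching the path map against the original path:

```yaml
# Illustrative AzureApplicationGatewayRewrite fragment
rewriteRules:
- name: shop-to-store
  ruleSequence: 100
  actions:
    urlConfiguration:
      modifiedPath: "/store/"
      reroute: false   # false: the path map keeps matching the original /shop/ path
```

Setting reroute: true instead would re-run path-map matching against /store/ , which can route the request to a different backend.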
Recommended: More information about Application Gateway's Rewrite feature can be found here","title":"URL Rewrite Configuration"},{"location":"features/rewrite-rule-set-custom-resource/#example","text":"```yaml apiVersion: appgw.ingress.azure.io/v1beta1 kind: AzureApplicationGatewayRewrite metadata: name: my-rewrite-rule-set-custom-resource spec: rewriteRules: - name: rule1 ruleSequence: 21 conditions: - ignoreCase: false negate: false variable: http_req_Host pattern: example.com actions: requestHeaderConfigurations: - actionType: set headerName: incoming-test-header headerValue: incoming-test-value responseHeaderConfigurations: - actionType: set headerName: outgoing-test-header headerValue: outgoing-test-value urlConfiguration: modifiedPath: \"/api/\" modifiedQueryString: \"query=test-value\" reroute: false --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress namespace: test-ag annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/rewrite-rule-set-custom-resource: my-rewrite-rule-set-custom-resource spec: rules: - http: paths: - path: / pathType: Exact backend: service: name: store-service port: number: 8080 ```","title":"Example"},{"location":"how-tos/continuous-deployment/","text":"Continuous Deployment with AKS and AGIC using Azure Pipelines NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. To achieve an efficiently deployed and managed global infrastructure, it is important to set up workflows for continuous integration and deployment. Azure DevOps is one of the options to achieve this goal. In the following example, we set up an Azure DevOps release pipeline to deploy an AKS cluster along with AGIC as ingress. This example is merely a scaffolding. You need to separately set up a build pipeline to install your application and ingress on the AKS cluster deployed as part of the release. Set up a new service connection with service principal Note: skip if you already have a service connection with owner access for role assignment Create a service principal to use with Azure Pipelines. This service principal will have owner access to the current subscription. This access will be used to perform role assignment for the AGIC identity in the pipeline. ```bash az ad sp create-for-rbac -n azure-pipeline-cd --role owner # Copy the AppId and Password. We will use these in the next step. ``` Now, create a new service connection in Azure DevOps. Select the \"use the full version of the service connection dialog\" option so that you can provide the newly created service principal. Create a new Azure release pipeline We have prepared an example release pipeline . This pipeline has the following tasks: Deploy AKS Cluster Create a user assigned identity used by AGIC Pod Install Helm Install AAD Pod identity Install AGIC Install a sample application (with ingress) To use the example release pipeline, download the template and import it to your project's release pipeline. Now provide the required settings for all tasks: Select the correct Agent Pool and Agent Specification (ubuntu-18.04) Select the newly created service connection for the Create Kubernetes Cluster and Create AGIC Identity tasks. Provide the values for clientId and clientSecret that will be configured as cluster credentials for the AKS cluster. You should create a separate service principal for the AKS cluster for security reasons.
```bash # create a new one and copy the appId and password to the variable section in the pipeline az ad sp create-for-rbac -n aks-cluster ``` Click Save. Now your pipeline is all set up. Hit Create release and provide a location (Azure region) where you want the cluster to be deployed. Snapshot of how the AKS node resource group will look: If this is your first deployment, AGIC will create a new application gateway. You should be able to browse to the Application Gateway's IP address to view the sample application.","title":"Continuous Deployment with AKS and AGIC using Azure Pipelines"},{"location":"how-tos/continuous-deployment/#continuous-deployment-with-aks-and-agic-using-azure-pipelines","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. To achieve an efficiently deployed and managed global infrastructure, it is important to set up workflows for continuous integration and deployment. Azure DevOps is one of the options to achieve this goal. In the following example, we set up an Azure DevOps release pipeline to deploy an AKS cluster along with AGIC as ingress. This example is merely a scaffolding. You need to separately set up a build pipeline to install your application and ingress on the AKS cluster deployed as part of the release.","title":"Continuous Deployment with AKS and AGIC using Azure Pipelines"},{"location":"how-tos/continuous-deployment/#setup-up-new-service-connection-with-service-principal","text":"Note: skip if you already have a service connection with owner access for role assignment Create a service principal to use with Azure Pipelines. This service principal will have owner access to the current subscription. This access will be used to perform role assignment for the AGIC identity in the pipeline. ```bash az ad sp create-for-rbac -n azure-pipeline-cd --role owner","title":"Set up a new service connection with service principal"},{"location":"how-tos/continuous-deployment/#copy-the-appid-and-password-we-will-use-these-in-the-next-step","text":"``` Now, create a new service connection in Azure DevOps. Select the \"use the full version of the service connection dialog\" option so that you can provide the newly created service principal.","title":"Copy the AppId and Password. We will use these in the next step."},{"location":"how-tos/continuous-deployment/#create-a-new-azure-release-pipeline","text":"We have prepared an example release pipeline . This pipeline has the following tasks: Deploy AKS Cluster Create a user assigned identity used by AGIC Pod Install Helm Install AAD Pod identity Install AGIC Install a sample application (with ingress) To use the example release pipeline, download the template and import it to your project's release pipeline. Now provide the required settings for all tasks: Select the correct Agent Pool and Agent Specification (ubuntu-18.04) Select the newly created service connection for the Create Kubernetes Cluster and Create AGIC Identity tasks. Provide the values for clientId and clientSecret that will be configured as cluster credentials for the AKS cluster. You should create a separate service principal for the AKS cluster for security reasons. ```bash","title":"Create a new Azure release pipeline"},{"location":"how-tos/continuous-deployment/#create-a-new-one-and-copy-the-appid-and-password-to-the-variable-section-in-the-pipeline","text":"az ad sp create-for-rbac -n aks-cluster ``` Click Save.
Now your pipeline is all set up. Hit Create release and provide a location (Azure region) where you want the cluster to be deployed. Snapshot of how the AKS node resource group will look: If this is your first deployment, AGIC will create a new application gateway. You should be able to browse to the Application Gateway's IP address to view the sample application.","title":"create a new one and copy the appId and password to the variable section in the pipeline"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/","text":"How to deploy AGIC via Helm using Workload Identity NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This assumes you have an existing Application Gateway. If not, you can create it with the command: bash az network application-gateway create -g myResourceGroup -n myApplicationGateway --sku Standard_v2 --public-ip-address myPublicIP --vnet-name myVnet --subnet mySubnet --priority 100 1. Set environment variables bash export RESOURCE_GROUP=\"myResourceGroup\" export APPLICATION_GATEWAY_NAME=\"myApplicationGateway\" export USER_ASSIGNED_IDENTITY_NAME=\"myIdentity\" export FEDERATED_IDENTITY_CREDENTIAL_NAME=\"myFedIdentity\" 2. Create resource group, AKS cluster and identity bash az group create --name \"${RESOURCE_GROUP}\" --location eastus az aks create -g \"${RESOURCE_GROUP}\" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity az identity create --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --resource-group \"${RESOURCE_GROUP}\" 3. Export the oidcIssuerProfile.issuerUrl bash export AKS_OIDC_ISSUER=\"$(az aks show -n myAKSCluster -g \"${RESOURCE_GROUP}\" --query \"oidcIssuerProfile.issuerUrl\" -otsv)\" 4. Create federated identity credential Note: the name of the service account that gets created after the helm installation is \u201cingress-azure\u201d and the following command assumes it will be deployed in the \u201cdefault\u201d namespace. Please change the namespace name in the next command if you deploy the AGIC related Kubernetes resources in another namespace. bash az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:default:ingress-azure 5. Obtain the ClientID of the identity created before that is needed for the next step bash az identity show --resource-group \"${RESOURCE_GROUP}\" --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --query 'clientId' -otsv 6. Export the Application Gateway resource ID bash export APP_GW_ID=\"$(az network application-gateway show --name \"${APPLICATION_GATEWAY_NAME}\" --resource-group \"${RESOURCE_GROUP}\" --query 'id' --output tsv)\" 7. Add Contributor role for the identity over the Application Gateway bash az role assignment create --assignee <identityClientID> --scope \"${APP_GW_ID}\" --role Contributor 8. In helm-config.yaml specify yaml armAuth: type: workloadIdentity identityClientID: <identityClientID> 9. Get the AKS cluster credentials bash az aks get-credentials -g \"${RESOURCE_GROUP}\" -n myAKSCluster 10.
Install the helm chart bash helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 1.7.1","title":"How to deploy AGIC via Helm using Workload Identity"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#how-to-deploy-agic-via-helm-using-workload-identity","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This assumes you have an existing Application Gateway. If not, you can create it with the command: bash az network application-gateway create -g myResourceGroup -n myApplicationGateway --sku Standard_v2 --public-ip-address myPublicIP --vnet-name myVnet --subnet mySubnet --priority 100","title":"How to deploy AGIC via Helm using Workload Identity"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#1-set-environment-variables","text":"bash export RESOURCE_GROUP=\"myResourceGroup\" export APPLICATION_GATEWAY_NAME=\"myApplicationGateway\" export USER_ASSIGNED_IDENTITY_NAME=\"myIdentity\" export FEDERATED_IDENTITY_CREDENTIAL_NAME=\"myFedIdentity\"","title":"1. Set environment variables"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#2-create-resource-group-aks-cluster-and-identity","text":"bash az group create --name \"${RESOURCE_GROUP}\" --location eastus az aks create -g \"${RESOURCE_GROUP}\" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity az identity create --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --resource-group \"${RESOURCE_GROUP}\"","title":"2. Create resource group, AKS cluster and identity"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#3-export-the-oidcissuerprofileissuerurl","text":"bash export AKS_OIDC_ISSUER=\"$(az aks show -n myAKSCluster -g \"${RESOURCE_GROUP}\" --query \"oidcIssuerProfile.issuerUrl\" -otsv)\"","title":"3. Export the oidcIssuerProfile.issuerUrl"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#4-create-federated-identity-credential","text":"Note: the name of the service account that gets created after the helm installation is \u201cingress-azure\u201d and the following command assumes it will be deployed in the \u201cdefault\u201d namespace. Please change the namespace name in the next command if you deploy the AGIC related Kubernetes resources in another namespace. bash az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:default:ingress-azure","title":"4. Create federated identity credential"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#5-obtain-the-clientid-of-the-identity-created-before-that-is-needed-for-the-next-step","text":"bash az identity show --resource-group \"${RESOURCE_GROUP}\" --name \"${USER_ASSIGNED_IDENTITY_NAME}\" --query 'clientId' -otsv","title":"5. Obtain the ClientID of the identity created before that is needed for the next step"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#6-export-the-application-gateway-resource-id","text":"bash export APP_GW_ID=\"$(az network application-gateway show --name \"${APPLICATION_GATEWAY_NAME}\" --resource-group \"${RESOURCE_GROUP}\" --query 'id' --output tsv)\"","title":"6.
Export the Application Gateway resource ID"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#7-add-contributor-role-for-the-identity-over-the-application-gateway","text":"bash az role assignment create --assignee <identityClientID> --scope \"${APP_GW_ID}\" --role Contributor","title":"7. Add Contributor role for the identity over the Application Gateway"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#8-in-helm-configyaml-specify","text":"yaml armAuth: type: workloadIdentity identityClientID: <identityClientID>","title":"8. In helm-config.yaml specify"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#9-get-the-aks-cluster-credentials","text":"bash az aks get-credentials -g \"${RESOURCE_GROUP}\" -n myAKSCluster","title":"9. Get the AKS cluster credentials"},{"location":"how-tos/deploy-AGIC-with-Workload-Identity-using-helm/#10-install-the-helm-chart","text":"bash helm install ingress-azure \\ -f helm-config.yaml \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 1.7.1","title":"10. Install the helm chart"},{"location":"how-tos/dns/","text":"Automate DNS updates NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When a hostname is specified in the Kubernetes Ingress resource's rules, it can be used to automatically create DNS records for the given domain and App Gateway's IP address. To achieve this, the ExternalDNS Kubernetes app is required. ExternalDNS is installable via a Helm chart. The following document provides a tutorial on setting up ExternalDNS with Azure DNS.
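For orientation, an ExternalDNS install could look like the sketch below; the chart repository URL is the upstream kubernetes-sigs one, and the values file (name is illustrative) is assumed to carry the Azure provider and credential settings described in the linked tutorial:

```bash
# Illustrative only; see the ExternalDNS tutorial for the full Azure configuration
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
# externaldns-values.yaml would select the azure provider and reference your DNS zone credentials
helm install external-dns external-dns/external-dns -f externaldns-values.yaml
```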
Below is a sample Ingress resource, annotated with kubernetes.io/ingress.class: azure/application-gateway , which configures alpha.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: alpha annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: alpha.contoso.com http: paths: - path: / backend: service: name: contoso-service port: number: 80 pathType: Exact Application Gateway Ingress Controller (AGIC) automatically recognizes the public IP address assigned to the Application Gateway it is associated with, and sets this IP ( 1.2.3.4 ) on the Ingress resource as shown below: bash $ kubectl get ingress -A NAMESPACE NAME HOSTS ADDRESS PORTS AGE alpha alpha-ingress alpha.contoso.com 1.2.3.4 80 8m55s beta beta-ingress beta.contoso.com 1.2.3.4 80 8m54s Once the Ingresses contain both host and address, ExternalDNS will provision these to the DNS system it has been associated with and authorized for.","title":"Automate DNS updates"},{"location":"how-tos/dns/#automate-dns-updates","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When a hostname is specified in the Kubernetes Ingress resource's rules, it can be used to automatically create DNS records for the given domain and App Gateway's IP address. To achieve this, the ExternalDNS Kubernetes app is required. ExternalDNS is installable via a Helm chart. The following document provides a tutorial on setting up ExternalDNS with Azure DNS. Below is a sample Ingress resource, annotated with kubernetes.io/ingress.class: azure/application-gateway , which configures alpha.contoso.com : yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: websocket-ingress namespace: alpha annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: alpha.contoso.com http: paths: - path: / backend: service: name: contoso-service port: number: 80 pathType: Exact Application Gateway Ingress Controller (AGIC) automatically recognizes the public IP address assigned to the Application Gateway it is associated with, and sets this IP ( 1.2.3.4 ) on the Ingress resource as shown below: bash $ kubectl get ingress -A NAMESPACE NAME HOSTS ADDRESS PORTS AGE alpha alpha-ingress alpha.contoso.com 1.2.3.4 80 8m55s beta beta-ingress beta.contoso.com 1.2.3.4 80 8m54s Once the Ingresses contain both host and address, ExternalDNS will provision these to the DNS system it has been associated with and authorized for.","title":"Automate DNS updates"},{"location":"how-tos/helm-upgrade/","text":"Upgrading AGIC using Helm NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on MCR. Upgrade View the Helm charts currently installed: bash helm list Sample response: bash NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE odd-billygoat 22 Fri Nov 08 15:56:06 2019 FAILED ingress-azure-1.0.0 1.0.0 default The Helm chart installation from the sample response above is named odd-billygoat . We will use this name for the rest of the commands. Your actual deployment name will most likely differ.
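If you script these steps, you could capture the release name rather than hard-coding it; a minimal sketch (assumes a single AGIC release in the current namespace):

```bash
# Illustrative: capture the first release name reported by helm
RELEASE_NAME="$(helm list --short | head -n 1)"
helm history "${RELEASE_NAME}"
```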
Upgrade the Helm deployment to a new version: bash helm upgrade \\ odd-billygoat \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --version 1.0.0","title":"Upgrade"},{"location":"how-tos/helm-upgrade/#rollback","text":"Should the Helm deployment fail, you can roll back to a previous release. Get the last known healthy release number: bash helm history odd-billygoat Sample output: bash REVISION UPDATED STATUS CHART DESCRIPTION 1 Mon Jun 17 13:49:42 2019 DEPLOYED ingress-azure-0.6.0 Install complete 2 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-xx xxxx From the sample output of the helm history command, it looks like the last successful deployment of our odd-billygoat was revision 1 . Rollback to the last successful revision: bash helm rollback odd-billygoat 1","title":"Rollback"},{"location":"how-tos/lets-encrypt/","text":"Certificate issuance with LetsEncrypt.org NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This section configures your AKS to leverage LetsEncrypt.org and automatically obtain a TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform SSL/TLS termination for your AKS cluster. The setup described here uses the cert-manager Kubernetes add-on, which automates the creation and management of certificates. Follow the steps below to install cert-manager on your existing AKS cluster. Helm Chart Run the following script to install the cert-manager helm chart. This will: create a new cert-manager namespace on your AKS create the following CRDs: Certificate, Challenge, ClusterIssuer, Issuer, Order install cert-manager chart (from docs.cert-manager.io) ```bash # Install the CustomResourceDefinition resources separately # Note: --validate=false is required per https://github.com/jetstack/cert-manager/issues/2208#issuecomment-541311021 kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13/deploy/manifests/00-crds.yaml --validate=false # Create the namespace for cert-manager kubectl create namespace cert-manager # Label the cert-manager namespace to disable resource validation kubectl label namespace cert-manager cert-manager.io/disable-validation=true # Add the Jetstack Helm repository helm repo add jetstack https://charts.jetstack.io # Update your local Helm chart repository cache helm repo update # Install v0.11 of cert-manager Helm chart helm install cert-manager \\ --namespace cert-manager \\ --version v0.13.0 \\ jetstack/cert-manager ``` ClusterIssuer Resource Create a ClusterIssuer resource. It is required by cert-manager to represent the Lets Encrypt certificate authority where the signed certificates will be obtained. By using the non-namespaced ClusterIssuer resource, cert-manager will issue certificates that can be consumed from multiple namespaces. Let\u2019s Encrypt uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. More details on configuring ClusterIssuer properties here . ClusterIssuer will instruct cert-manager to issue certificates using the Lets Encrypt staging environment used for testing (the root certificate not present in browser/client trust stores). The default challenge type in the YAML below is http01 .
Other challenges are documented on letsencrypt.org - Challenge Types IMPORTANT: Update the email placeholder in the YAML below ```bash kubectl apply -f - <<EOF apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: # Email address used for ACME registration email: <YOUR.EMAIL@ADDRESS> # ACME server URL for Let\u2019s Encrypt\u2019s staging environment. # The staging environment will not issue trusted certificates but is # used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key. name: letsencrypt-secret # Enable the HTTP-01 challenge provider # you prove ownership of a domain by ensuring that a particular # file is present at the domain solvers: - http01: ingress: class: azure/application-gateway EOF ``` Deploy App Create an Ingress resource to expose the guestbook application using the Application Gateway with the Lets Encrypt Certificate. Ensure your Application Gateway has a public Frontend IP configuration with a DNS name (either using the default azure.com domain, or provision an Azure DNS Zone service and assign your own custom domain). Note the annotation cert-manager.io/cluster-issuer: letsencrypt-staging , which tells cert-manager to process the tagged Ingress resource. IMPORTANT: Update the host placeholders in the YAML below with your own domain (or the Application Gateway one, for example 'kh-aks-ingress.westeurope.cloudapp.azure.com') bash kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway cert-manager.io/cluster-issuer: letsencrypt-staging spec: tls: - hosts: - <YOUR.CUSTOM.DOMAIN> secretName: guestbook-secret-name rules: - host: <YOUR.CUSTOM.DOMAIN> http: paths: - backend: service: name: frontend port: number: 80 EOF Use kubectl describe clusterissuer letsencrypt-staging to view the status of the ACME account registration. Use kubectl get secret guestbook-secret-name -o yaml to view the certificate issued. After a few seconds, you can access the guestbook service through the Application Gateway HTTPS URL using the automatically issued staging Lets Encrypt certificate. Your browser may warn you of an invalid cert authority. The staging certificate is issued by CN=Fake LE Intermediate X1 . This is an indication that the system worked as expected and you are ready for your production certificate. Production Certificate Once your staging certificate is set up successfully, you can switch to a production ACME server: Replace the staging annotation on your Ingress resource with: cert-manager.io/cluster-issuer: letsencrypt-prod Delete the existing staging ClusterIssuer you created in the previous step and create a new one by replacing the ACME server from the ClusterIssuer YAML above with https://acme-v02.api.letsencrypt.org/directory Certificate Expiration and Renewal Before the Lets Encrypt certificate expires, cert-manager will automatically update the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller will apply the updated secret referenced in the ingress resources it is using to configure the Application Gateway.","title":"Certificate issuance with LetsEncrypt.org"},{"location":"how-tos/lets-encrypt/#certificate-issuance-with-letsencryptorg","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. This section configures your AKS to leverage LetsEncrypt.org and automatically obtain a TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform SSL/TLS termination for your AKS cluster.
The setup described here uses the cert-manager Kubernetes add-on, which automates the creation and management of certificates. Follow the steps below to install cert-manager on your existing AKS cluster. Helm Chart Run the following script to install the cert-manager helm chart. This will: create a new cert-manager namespace on your AKS create the following CRDs: Certificate, Challenge, ClusterIssuer, Issuer, Order install cert-manager chart (from docs.cert-manager.io) ```bash","title":"Certificate issuance with LetsEncrypt.org"},{"location":"how-tos/lets-encrypt/#install-the-customresourcedefinition-resources-separately","text":"","title":"Install the CustomResourceDefinition resources separately"},{"location":"how-tos/lets-encrypt/#note-validatefalse-is-required-per-httpsgithubcomjetstackcert-managerissues2208issuecomment-541311021","text":"kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13/deploy/manifests/00-crds.yaml --validate=false","title":"Note: --validate=false is required per https://github.com/jetstack/cert-manager/issues/2208#issuecomment-541311021"},{"location":"how-tos/lets-encrypt/#create-the-namespace-for-cert-manager","text":"kubectl create namespace cert-manager","title":"Create the namespace for cert-manager"},{"location":"how-tos/lets-encrypt/#label-the-cert-manager-namespace-to-disable-resource-validation","text":"kubectl label namespace cert-manager cert-manager.io/disable-validation=true","title":"Label the cert-manager namespace to disable resource validation"},{"location":"how-tos/lets-encrypt/#add-the-jetstack-helm-repository","text":"helm repo add jetstack https://charts.jetstack.io","title":"Add the Jetstack Helm repository"},{"location":"how-tos/lets-encrypt/#update-your-local-helm-chart-repository-cache","text":"helm repo update","title":"Update your local Helm chart repository cache"},{"location":"how-tos/lets-encrypt/#install-v011-of-cert-manager-helm-chart","text":"helm install cert-manager \\ --namespace cert-manager \\ --version v0.13.0 \\ jetstack/cert-manager ``` ClusterIssuer Resource Create a ClusterIssuer resource. It is required by cert-manager to represent the Lets Encrypt certificate authority where the signed certificates will be obtained. By using the non-namespaced ClusterIssuer resource, cert-manager will issue certificates that can be consumed from multiple namespaces. Let\u2019s Encrypt uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. More details on configuring ClusterIssuer properties here . ClusterIssuer will instruct cert-manager to issue certificates using the Lets Encrypt staging environment used for testing (the root certificate not present in browser/client trust stores). The default challenge type in the YAML below is http01 . Other challenges are documented on letsencrypt.org - Challenge Types IMPORTANT: Update the email placeholder in the YAML below ```bash kubectl apply -f - <<EOF apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: # Email address used for ACME registration email: <YOUR.EMAIL@ADDRESS> # ACME server URL for Let\u2019s Encrypt\u2019s staging environment. # The staging environment will not issue trusted certificates but is # used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key.
","title":"Certificate issuance with LetsEncrypt.org"},{"location":"how-tos/minimize-downtime-during-deployments/","text":"Minimizing Downtime During Deployments NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Purpose This document outlines a Kubernetes and Ingress controller configuration which, when combined with properly executed Kubernetes rolling updates, can achieve near-zero-downtime deployments. Overview It is not uncommon for Kubernetes operators to observe Application Gateway 502 errors while performing a Kubernetes rolling update on an AKS cluster fronted by Application Gateway and AGIC. This document offers a method to alleviate this problem. Since the method described in this document relies on correctly aligning the timing of deployment events, it is not possible to guarantee 100% elimination of the probability of running into a 502 error.
Even with this method there will be a non-zero chance for a period of time where Application Gateway backends could lag behind the most recent updates applied by a rolling update to the Kubernetes pods. Understanding 502 Errors At a high level there are 3 scenarios in which one could observe 502 errors on an AKS cluster fronted with App Gateway and AGIC. In all of these the root cause is the delay one could observe in applying IP address changes to the Application Gateway's backend pools. Scaling down a Kubernetes cluster: Kubernetes is instructed to lower the number of pod replicas (perhaps manually, via Horizontal Pod Autoscaler, or some other mechanism). Pods are put in Terminating state while simultaneously being removed from the list of Endpoints. AGIC observes that Pods and Endpoints changed and begins a config update on App Gateway. It takes somewhere between a second and a few minutes for a pod, or a list of pods, to be removed from App Gateway's backend -- meanwhile App Gateway still attempts to deliver traffic to terminated pods. The result is occasional 502 errors. Rolling Updates: Customer updates the version of the software (perhaps using kubectl set image ). Kubernetes upgrades a percentage of the pods at a time; the size of the bucket is defined in the strategy section of the Deployment spec. Kubernetes adds a new pod with a new image - the pod goes through the states from ContainerCreating to Running . When the new pod is in Running state, Kubernetes terminates the old pod. The process described above is repeated until all pods are upgraded. Resource starvation: Kubernetes terminates resource-starved pods (CPU, RAM etc). Solution The solution below lowers the probability of running into a scenario where App Gateway's backend pool points to terminated pods, resulting in a 502 error. It does not completely remove this chance. Required configuration changes prior to performing a rolling update : Change the Pod and/or Deployment specs by adding preStop container life-cycle hooks , with a delay (sleep) of at least 90 seconds. Example: ```yaml
kind: Deployment
metadata:
  name: x
  labels:
    app: y
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: ctr
        ...
        lifecycle:
          preStop:
            exec:
              command: [\"sleep\",\"90\"]
``` Note: The \"sleep\" command assumes the container is based on Linux. For Windows containers the equivalent command is [\"powershell.exe\",\"-c\",\"sleep\",\"90\"] . The addition of the preStop container life cycle hook will: delay Kubernetes sending SIGTERM to the container by 90 seconds, but put the pod immediately in Terminating state; simultaneously, immediately remove the pod from the Kubernetes Endpoints list, which causes AGIC to remove the pod from App Gateway's backend pool; the pod will continue to run for the next 90 seconds, giving App Gateway 90 seconds to execute the \"remove from backend pools\" command. Add a connection draining annotation to the Ingress read by AGIC to allow for in-flight connections to complete. Example: ```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/connection-draining: \"true\"
    appgw.ingress.kubernetes.io/connection-draining-timeout: \"30\"
``` What this achieves: when a pod is pulled from an App Gateway backend it will disappear from the UI, but existing in-flight connections will not be immediately terminated -- they will be given 30 seconds to complete.
We believe that the addition of the preStop hook and the connection draining annotation will drastically reduce the probability of App Gateway attempting to connect to a terminated pod. Add terminationGracePeriodSeconds to the Pod resource YAML. This must be set to a value that is greater than the preStop hook wait time. ```yaml
kind: Deployment
metadata:
  name: x
  labels:
    app: y
spec:
  ...
  template:
    ...
    spec:
      terminationGracePeriodSeconds: 101
      containers:
      - name: ctr
        ...
``` Decrease the interval between App Gateway health probes to backend pools. The goal is to increase the number of probes per unit of time. This ensures that a terminated pod which has not yet been removed from App Gateway's backend pool will be marked as unhealthy sooner, reducing the probability of a request landing on a terminated pod and resulting in a 502 error. For example, the following Kubernetes Deployment liveness probe will result in the respective pods being marked as unhealthy after 15 seconds and 3 failed probes. This config will be directly applied to Application Gateway (by AGIC), as well as Kubernetes. ```yaml
...
livenessProbe:
  httpGet:
    path: /
    port: 80
  periodSeconds: 4
  timeoutSeconds: 5
  failureThreshold: 3
``` Summary To achieve near-zero-downtime deployments, we need to add: a preStop hook waiting for 90 seconds; a termination grace period of at least 90 seconds; a connection draining timeout of about 30 seconds; and aggressive health probes. Note: All proposed parameter values above should be adjusted for the specifics of the system being deployed. Long term solutions to zero-downtime updates: Faster backend pool updates: The AGIC team is already working on the next iteration of the Ingress Controller, which will shorten the time to update App Gateway drastically. Faster backend pool updates will lower the probability of running into 502s. Rolling updates with App Gateway feedback: The AGIC team is looking into a deeper integration between AGIC and Kubernetes' rolling updates feature.","title":"Minimizing Downtime During Deployments"}
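Putting the pieces above together, here is a minimal sketch of a single Deployment that combines the preStop hook, the termination grace period, and the aggressive liveness probe from this page; the name x , container ctr , and <your-image> are placeholders: ```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x
  labels:
    app: y
spec:
  replicas: 2
  selector:
    matchLabels:
      app: y
  template:
    metadata:
      labels:
        app: y
    spec:
      # must exceed the preStop sleep below
      terminationGracePeriodSeconds: 101
      containers:
      - name: ctr
        image: <your-image>
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # keep the pod alive while App Gateway removes it from its backend pool
              command: [\"sleep\",\"90\"]
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 4
          timeoutSeconds: 5
          failureThreshold: 3
EOF
```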
,{"location":"how-tos/networking/","text":"How to setup networking between Application Gateway and AKS NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. When you are using Application Gateway with AKS for L7, you need to make sure that you have set up network connectivity correctly between the gateway and the cluster. Otherwise, you might receive 502s when reaching your site.
There are two major things to consider when setting up network connectivity between Application Gateway and AKS: the Virtual Network Configuration (AKS and Application Gateway in the same virtual network, or in different virtual networks) and the Network Plugin used with AKS (Kubenet or Azure (advanced) CNI). Virtual Network Configuration Deployed in same virtual network If you have deployed AKS and Application Gateway in the same virtual network with Azure CNI as the network plugin, then you don't have to make any changes and you are good to go: Application Gateway instances should be able to reach the pods. If you are using the kubenet network plugin, jump to Kubenet to set up the route table. Deployed in different vnets AKS can be deployed in a different virtual network from Application Gateway's virtual network; however, the two virtual networks must be peered together. When you create a virtual network peering between two virtual networks, a route is added by Azure for each address range within the address space of each virtual network a peering is created for. ```bash
aksClusterName=\"<aksClusterName>\"
aksResourceGroup=\"<aksResourceGroup>\"
appGatewayName=\"<appGatewayName>\"
appGatewayResourceGroup=\"<appGatewayResourceGroup>\"

# get aks vnet information
nodeResourceGroup=$(az aks show -n $aksClusterName -g $aksResourceGroup -o tsv --query \"nodeResourceGroup\")
aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query \"[0].name\")
aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query \"id\")

# get gateway vnet information
appGatewaySubnetId=$(az network application-gateway show -n $appGatewayName -g $appGatewayResourceGroup -o tsv --query \"gatewayIpConfigurations[0].subnet.id\")
appGatewayVnetName=$(az network vnet show --ids $appGatewaySubnetId -o tsv --query \"name\")
appGatewayVnetId=$(az network vnet show --ids $appGatewaySubnetId -o tsv --query \"id\")

# set up bi-directional peering between aks and gateway vnet
az network vnet peering create -n gateway2aks \
  -g $appGatewayResourceGroup --vnet-name $appGatewayVnetName \
  --remote-vnet $aksVnetId \
  --allow-vnet-access
az network vnet peering create -n aks2gateway \
  -g $nodeResourceGroup --vnet-name $aksVnetName \
  --remote-vnet $appGatewayVnetId \
  --allow-vnet-access
``` If you are using Azure CNI as the network plugin with AKS, then you are good to go. If you are using the Kubenet network plugin, jump to Kubenet to set up the route table. Network Plugin used with AKS With Azure CNI When using Azure CNI, every pod is assigned a VNET-routable private IP from the subnet, so the gateway can reach the pods directly. With Kubenet When using Kubenet mode, only nodes receive an IP address from the subnet. Pods are assigned IP addresses from the PodIPCidr, and a route table is created by AKS. This route table helps packets destined for a pod IP reach the node which is hosting the pod. When packets leave Application Gateway instances, Application Gateway's subnet needs to be aware of these routes set up by AKS in the route table. A simple way to achieve this is by associating the same route table created by AKS to the Application Gateway's subnet. When AGIC starts up, it checks the AKS node resource group for the existence of the route table.
If it exists, AGIC will try to assign the route table to the Application Gateway's subnet, provided it doesn't already have a route table. If AGIC doesn't have permissions to any of the above resources, the operation will fail and an error will be logged in the AGIC pod logs. This association can also be performed manually: ```bash
aksClusterName=\"<aksClusterName>\"
aksResourceGroup=\"<aksResourceGroup>\"
appGatewayName=\"<appGatewayName>\"
appGatewayResourceGroup=\"<appGatewayResourceGroup>\"

# find route table used by aks cluster
nodeResourceGroup=$(az aks show -n $aksClusterName -g $aksResourceGroup -o tsv --query \"nodeResourceGroup\")
routeTableId=$(az network route-table list -g $nodeResourceGroup --query \"[].id | [0]\" -o tsv)

# get the application gateway's subnet
appGatewaySubnetId=$(az network application-gateway show -n $appGatewayName -g $appGatewayResourceGroup -o tsv --query \"gatewayIpConfigurations[0].subnet.id\")

# associate the route table to Application Gateway's subnet
az network vnet subnet update \
  --ids $appGatewaySubnetId --route-table $routeTableId
``` Further Readings Peer the two virtual networks together Virtual network peering How to peer your networks from different subscription Use kubenet to configure networking Use CNI to configure networking Network concept for AKS and Kubernetes When to decide to use kubenet or CNI","title":"How to setup networking between Application Gateway and AKS"}
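To confirm the association took effect, you can read the route table back off the gateway's subnet; a quick sketch reusing the variables above (the ID returned should match $routeTableId ): ```bash
az network vnet subnet show --ids $appGatewaySubnetId --query \"routeTable.id\" -o tsv
```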
,{"location":"how-tos/prevent-agic-from-overwriting/","text":"Preventing AGIC from removing certain rules NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Note: This feature is EXPERIMENTAL with limited support . Use with caution. By default AGIC assumes full ownership of the App Gateway it is linked to. AGIC version 0.8.0 and later can retain certain rules, which makes it possible to add a VMSS as a backend alongside the AKS cluster. Please back up your App Gateway's configuration before enabling this setting: using the Azure Portal, navigate to your App Gateway instance; from Export template, click Download. The zip file you downloaded will have JSON templates, bash, and PowerShell scripts you could use to restore App Gateway.
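If you prefer the CLI to the portal for this backup step, one way to snapshot the current configuration is shown below; the output file name is arbitrary: ```bash
az network application-gateway show -n <appGatewayName> -g <appGatewayResourceGroup> > appgw-backup.json
```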
Example Scenario Let's look at an imaginary App Gateway, which manages traffic for 2 web sites: dev.contoso.com - hosted on a new AKS, using App Gateway and AGIC; prod.contoso.com - hosted on an Azure VMSS. With default settings, AGIC assumes 100% ownership of the App Gateway it is pointed to and overwrites all of App Gateway's configuration. If we were to manually create a listener for prod.contoso.com (on App Gateway) without defining it in the Kubernetes Ingress, AGIC would delete the prod.contoso.com config within seconds. To install AGIC and also serve prod.contoso.com from our VMSS machines, we must constrain AGIC to configuring dev.contoso.com only. This is facilitated by instantiating the following CRD: ```bash
cat <<EOF | kubectl apply -f -
apiVersion: \"appgw.ingress.k8s.io/v1\"
kind: AzureIngressProhibitedTarget
metadata:
  name: prod-contoso-com
spec:
  hostname: prod.contoso.com
EOF
``` Enable with new AGIC installation: in helm-config.yaml, under the appgw section, add the shared field: ```yaml
appgw:
    subscriptionId: <subscription-id>    # existing field
    resourceGroup: <resource-group>      # existing field
    name: <application-gateway-name>     # existing field
    shared: true                         # <<<<< Add this field to enable shared App Gateway >>>>>
``` Apply the Helm changes: Ensure the AzureIngressProhibitedTarget CRD is installed with: ```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/ae695ef9bd05c8b708cedf6ff545595d0b7022dc/crds/AzureIngressProhibitedTarget.yaml
``` Update Helm: ```bash
helm upgrade \
  --recreate-pods \
  -f helm-config.yaml \
  ingress-azure oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure
``` As a result your AKS will have a new instance of AzureIngressProhibitedTarget called prohibit-all-targets : ```bash
kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml
``` The object prohibit-all-targets , as the name implies, prohibits AGIC from changing config for any host and path. A Helm install with appgw.shared=true will deploy AGIC, but will not make any changes to App Gateway. Broaden permissions Since Helm with appgw.shared=true and the default prohibit-all-targets blocks AGIC from applying any config, broaden AGIC's permissions by creating a new AzureIngressProhibitedTarget with your specific setup: ```bash
cat <<EOF | kubectl apply -f -
apiVersion: \"appgw.ingress.k8s.io/v1\"
kind: AzureIngressProhibitedTarget
metadata:
  name: your-custom-prohibitions
spec:
  hostname: <your-other-hostname.com>
EOF
``` Only after you have created your own custom prohibition can you delete the default one, which is too broad: ```bash
kubectl delete AzureIngressProhibitedTarget prohibit-all-targets
```","title":"Preventing AGIC from removing certain rules"},{"location":"how-tos/scale-applications-using-appgw-metrics/","text":"Scale your Applications using Application Gateway Metrics (Beta) NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. As incoming traffic increases, it becomes crucial to scale up your applications based on the demand. In the following tutorial, we explain how you can use Application Gateway's AvgRequestCountPerHealthyHost metric to scale up your application. AvgRequestCountPerHealthyHost is a measure of the average number of requests sent to a specific backend pool and backend HTTP setting combination. We are going to use the following two components: Azure K8S Metric Adapter - We will use the metric adapter to expose Application Gateway metrics through the metric server. Horizontal Pod Autoscaler - We will use HPA to consume Application Gateway metrics and target a deployment for scaling. Setting up Azure K8S Metric Adapter We will first create an Azure AAD service principal and assign it Monitoring Reader access over Application Gateway's resource group. Paste the following lines in your Azure Cloud Shell : ```bash
applicationGatewayGroupName=\"<application-gateway-resource-group>\"
applicationGatewayGroupId=$(az group show -g $applicationGatewayGroupName -o tsv --query \"id\")
az ad sp create-for-rbac -n \"azure-k8s-metric-adapter-sp\" --role \"Monitoring Reader\" --scopes $applicationGatewayGroupId
``` Now, we will deploy the Azure K8S Metric Adapter using the AAD service principal created above. ```bash
kubectl create namespace custom-metrics

# use values from the service principal created above to create the secret
kubectl create secret generic azure-k8s-metrics-adapter -n custom-metrics \
  --from-literal=azure-tenant-id=<tenant-id> \
  --from-literal=azure-client-id=<client-id> \
  --from-literal=azure-client-secret=<client-secret>

kubectl apply -f https://raw.githubusercontent.com/Azure/azure-k8s-metrics-adapter/master/deploy/adapter.yaml -n custom-metrics
``` We will create an ExternalMetric resource with name appgw-request-count-metric .
This will instruct the metric adapter to expose the AvgRequestCountPerHealthyHost metric for the myApplicationGateway resource in the myResourceGroup resource group. You can use the filter field to target a specific backend pool and backend HTTP setting in the Application Gateway. Copy this YAML content into external-metric.yaml and apply it with kubectl apply -f external-metric.yaml . ```yaml
apiVersion: azure.com/v1alpha2
kind: ExternalMetric
metadata:
  name: appgw-request-count-metric
spec:
  type: azuremonitor
  azure:
    resourceGroup: myResourceGroup # replace with your application gateway's resource group name
    resourceName: myApplicationGateway # replace with your application gateway's name
    resourceProviderNamespace: Microsoft.Network
    resourceType: applicationGateways
  metric:
    metricName: AvgRequestCountPerHealthyHost
    aggregation: Average
    filter: BackendSettingsPool eq '~' # optional
``` You can now make a request to the metric server to see if our new metric is getting exposed: ```bash
kubectl get --raw \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\"
# Sample Output
# {
#   \"kind\": \"ExternalMetricValueList\",
#   \"apiVersion\": \"external.metrics.k8s.io/v1beta1\",
#   \"metadata\": {
#     \"selfLink\": \"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appgw-request-count-metric\"
#   },
#   \"items\": [
#     {
#       \"metricName\": \"appgw-request-count-metric\",
#       \"metricLabels\": null,
#       \"timestamp\": \"2019-11-05T00:18:51Z\",
#       \"value\": \"30\"
#     }
#   ]
# }
``` Using the new metric to scale up our deployment Once we are able to expose appgw-request-count-metric through the metric server, we are ready to use the Horizontal Pod Autoscaler to scale up our target deployment. In the following example, we will target a sample deployment aspnet . We will scale up Pods when appgw-request-count-metric > 200 per Pod, up to a max of 10 Pods. Replace your target deployment name and apply the following autoscale configuration. Copy this YAML content into autoscale-config.yaml and apply it with kubectl apply -f autoscale-config.yaml . ```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: deployment-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: aspnet # replace with your deployment's name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: appgw-request-count-metric
      targetAverageValue: 200
``` Test your configuration by using a load testing tool like Apache Bench: ```bash
ab -n10000 http://<your-app-gateway-address>/
```","title":"Scale your Applications using Application Gateway Metrics (Beta)"}
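While the load test runs, you can watch the autoscaler react; a quick sketch assuming the deployment-scaler HPA created above: ```bash
kubectl get hpa deployment-scaler --watch
# inspect the current metric value and scaling events
kubectl describe hpa deployment-scaler
```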
,{"location":"how-tos/websockets/","text":"Expose a WebSocket server As outlined in the Application Gateway v2 documentation, it provides native support for the WebSocket and HTTP/2 protocols . Please note that for both Application Gateway and the Kubernetes Ingress there is no user-configurable setting to selectively enable or disable WebSocket support. The Kubernetes deployment YAML below shows the minimum configuration used to deploy a WebSocket server, which is the same as deploying a regular web server: ```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: websocket-server
spec:
  selector:
    matchLabels:
      app: ws-app
  replicas: 2
  template:
    metadata:
      labels:
        app: ws-app
    spec:
      containers:
      - name: websocket-app
        imagePullPolicy: Always
        image: your-container-repo.azurecr.io/websockets-app
        ports:
        - containerPort: 8888
      imagePullSecrets:
      - name: azure-container-registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: websocket-app-service
spec:
  selector:
    app: ws-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8888
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-repeater
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: ws.contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: websocket-app-service
            port:
              number: 80
``` Given that all the prerequisites are fulfilled, and you have an App Gateway controlled by a K8s Ingress in your AKS, the deployment above would result in a WebSocket server exposed on port 80 of your App Gateway's public IP and the ws.contoso.com domain. The following cURL command would test the WebSocket server deployment: ```sh
curl -i -N -H \"Connection: Upgrade\" \
  -H \"Upgrade: websocket\" \
  -H \"Origin: http://localhost\" \
  -H \"Host: ws.contoso.com\" \
  -H \"Sec-Websocket-Version: 13\" \
  -H \"Sec-WebSocket-Key: 123\" \
  http://1.2.3.4:80/ws
``` WebSocket Health Probes If your deployment does not explicitly define health probes, App Gateway will attempt an HTTP GET on your WebSocket server endpoint. Depending on the server implementation ( here is one we love ), WebSocket-specific headers may be required ( Sec-Websocket-Version for instance). Since App Gateway does not add WebSocket headers, the App Gateway's health probe response from your WebSocket server will most likely be 400 Bad Request . As a result App Gateway will mark your pods as unhealthy, which will eventually result in a 502 Bad Gateway for the consumers of the WebSocket server. To avoid this you may need to add an HTTP GET handler for a health check to your server ( /health for instance, which returns 200 OK ).","title":"Websockets"}
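One way to avoid the failing default probe is to define probes explicitly against such a health endpoint. The sketch below patches the websocket-server Deployment from above, assuming your server exposes /health on port 8888: ```bash
kubectl patch deployment websocket-server --patch '
spec:
  template:
    spec:
      containers:
      - name: websocket-app
        readinessProbe:
          httpGet:
            path: /health
            port: 8888
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8888
          periodSeconds: 10
'
``` AGIC derives App Gateway's health probes from these definitions, so the gateway should then probe /health instead of / .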
,{"location":"setup/install/","text":"Prerequisites Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. AGIC charts have been moved to MCR. Use oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure as the target repository. You need to complete the following tasks prior to deploying AGIC on your cluster: Prepare your Azure subscription and your az-cli client. ```bash
# Sign in to your Azure subscription.
SUBSCRIPTION_ID='<subscription-id>'
az login
az account set --subscription $SUBSCRIPTION_ID

# Register required resource providers on Azure.
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Network
``` Set up an AKS cluster for your workload. The AKS cluster should have the workload identity feature enabled. Learn how to enable workload identity on an existing AKS cluster.
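If you are not sure whether an existing cluster already has these features turned on, one way to check (property names as they appear in az aks show output): ```bash
az aks show -n <aks-cluster-name> -g <resource-group> \
  --query \"{oidcIssuerEnabled: oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}\" -o table
```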
If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following: ```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group>'
az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
``` If you don't have an existing cluster, use the following commands to create a new AKS cluster with workload identity enabled. ```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group>'
LOCATION='northeurope'
VM_SIZE='<vm-size>' # The size needs to be available in your location

az group create --name $RESOURCE_GROUP --location $LOCATION
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_NAME \
  --location $LOCATION \
  --node-vm-size $VM_SIZE \
  --network-plugin azure \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --generate-ssh-keys
``` Install Helm Helm is an open-source packaging tool that is used to install AGIC. Helm is already available in Azure Cloud Shell; if you are using Azure Cloud Shell, no additional Helm installation is necessary. ```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
``` Deploy or Use existing Application Gateway If using an existing Application Gateway, make sure the following: Set the environment variable. ```bash
APPGW_ID=\"<application-gateway-resource-id>\"
``` Follow the steps here to make sure the AppGW VNET is correctly set up, i.e. it is either using the same VNET as AKS or is peered. If you don't have an existing Application Gateway, use the following commands to create a new one. Setup environment variables: ```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group>'
LOCATION='<location>'
APPGW_NAME='application-gateway'
APPGW_SUBNET_NAME='appgw-subnet'
``` Deploy a subnet for the Application Gateway: ```bash
nodeResourceGroup=$(az aks show -n $AKS_NAME -g $RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\")
aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query \"[0].name\")
aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query \"id\")

az network vnet subnet create \
  --resource-group $nodeResourceGroup \
  --vnet-name $aksVnetName \
  --name $APPGW_SUBNET_NAME \
  --address-prefixes \"10.226.0.0/23\"

APPGW_SUBNET_ID=$(az network vnet subnet list --resource-group $nodeResourceGroup --vnet-name $aksVnetName --query \"[?name=='$APPGW_SUBNET_NAME'].id\" --output tsv)
``` Deploy the Application Gateway: ```bash
az network application-gateway create \
  --name $APPGW_NAME \
  --location $LOCATION \
  --resource-group $RESOURCE_GROUP \
  --subnet $APPGW_SUBNET_ID \
  --capacity 2 \
  --sku Standard_v2 \
  --http-settings-cookie-based-affinity Disabled \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http \
  --public-ip-address appgw-ip \
  --priority 10

APPGW_ID=$(az network application-gateway show --name $APPGW_NAME --resource-group $RESOURCE_GROUP --query \"id\" --output tsv)
``` Install Application Gateway Ingress Controller Setup environment variables: ```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group>'
LOCATION='<location>'
IDENTITY_RESOURCE_NAME='agic-identity'
``` Create a user managed identity for the AGIC controller and federate the identity as a Workload Identity to use in the AKS cluster.
```bash echo \"Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP\" az identity create --resource-group $RESOURCE_GROUP --name IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" (az identity show -g $RESOURCE_GROUP -n IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" (az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query clientId -otsv)\" echo \"Waiting 60 seconds to allow for replication of the identity...\" sleep 60 echo \"Set up federation with AKS OIDC issuer\" AKS_OIDC_ISSUER=\" (az aks show -n \" (az aks show -n \" AKS_NAME\" -g \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"agic\" \\ --identity-name \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"agic\" \\ --identity-name \" IDENTITY_RESOURCE_NAME\" \\ --resource-group RESOURCE_GROUP \\ --issuer \" RESOURCE_GROUP \\ --issuer \" AKS_OIDC_ISSUER\" \\ --subject \"system:serviceaccount:default:ingress-azure\" resourceGroupId=$(az group show --name RESOURCE_GROUP --query id -otsv) nodeResourceGroup= RESOURCE_GROUP --query id -otsv) nodeResourceGroup= (az aks show -n $AKS_NAME -g RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= (az group show --name $nodeResourceGroup --query id -otsv) echo \"Apply role assignments to AGIC identity\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role \"Reader\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $nodeResourceGroupId --role \"Contributor\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $APPGW_ID --role \"Contributor\" ``` Assignment of the managed identity immediately after creation may result in an error that the principalId does not exist. Allow about a minute of time to elapse for the identity to replicate in Microsoft Entra ID prior to delegating the identity. 
Install AGIC using Helm For new deployments AGIC can be installed by running the following commands: ```bash
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME

# on aks cluster with only linux node pools
helm install ingress-azure \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --set appgw.applicationGatewayID=$APPGW_ID \
  --set armAuth.type=workloadIdentity \
  --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \
  --set rbac.enabled=true \
  --version 1.7.3

# on aks cluster with windows node pools
helm install ingress-azure \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --set appgw.applicationGatewayID=$APPGW_ID \
  --set armAuth.type=workloadIdentity \
  --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \
  --set rbac.enabled=true \
  --set nodeSelector.\"beta\\.kubernetes\\.io/os\"=linux \
  --version 1.7.3
``` For existing deployments AGIC can be upgraded by running the following commands: ```bash
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME

# on aks cluster with only linux node pools
helm upgrade ingress-azure \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --set appgw.applicationGatewayID=$APPGW_ID \
  --set armAuth.type=workloadIdentity \
  --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \
  --set rbac.enabled=true \
  --version 1.7.3

# on aks cluster with windows node pools
helm upgrade ingress-azure \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --set appgw.applicationGatewayID=$APPGW_ID \
  --set armAuth.type=workloadIdentity \
  --set armAuth.identityClientID=$IDENTITY_CLIENT_ID \
  --set rbac.enabled=true \
  --set nodeSelector.\"beta\\.kubernetes\\.io/os\"=linux \
  --version 1.7.3
``` Install a Sample App Now that we have App Gateway, AKS, and AGIC installed, we can install a sample app via Azure Cloud Shell (a minimal pod, service, and ingress; the aspnetapp names and image below are illustrative): ```yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: aspnetapp
  labels:
    app: aspnetapp
spec:
  containers:
  - image: \"mcr.microsoft.com/dotnet/samples:aspnetapp\"
    name: aspnetapp-image
    ports:
    - containerPort: 8080
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp
spec:
  selector:
    app: aspnetapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aspnetapp
            port:
              number: 80
EOF
```
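Once Helm finishes, a quick way to confirm AGIC is healthy and the sample Ingress was programmed; this assumes the ingress-azure release name used above and the default namespace: ```bash
kubectl get pods -l app=ingress-azure
kubectl logs deployment/ingress-azure --tail=20
# the ADDRESS column should eventually show the Application Gateway's frontend IP
kubectl get ingress aspnetapp
```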
If you don't have an existing Application Gateway, use the following commands to create a new one.

Setup environment variables

```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group-name>'
LOCATION="<location>"
APPGW_NAME="application-gateway"
APPGW_SUBNET_NAME="appgw-subnet"
```

Deploy subnet for Application Gateway

```bash
nodeResourceGroup=$(az aks show -n $AKS_NAME -g $RESOURCE_GROUP -o tsv --query "nodeResourceGroup")
aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")

az network vnet subnet create \
  --resource-group $nodeResourceGroup \
  --vnet-name $aksVnetName \
  --name $APPGW_SUBNET_NAME \
  --address-prefixes "10.226.0.0/23"

APPGW_SUBNET_ID=$(az network vnet subnet list --resource-group $nodeResourceGroup --vnet-name $aksVnetName --query "[?name=='$APPGW_SUBNET_NAME'].id" --output tsv)
```

Deploy Application Gateway

```bash
az network application-gateway create \
  --name $APPGW_NAME \
  --location $LOCATION \
  --resource-group $RESOURCE_GROUP \
  --subnet $APPGW_SUBNET_ID \
  --capacity 2 \
  --sku Standard_v2 \
  --http-settings-cookie-based-affinity Disabled \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http \
  --public-ip-address appgw-ip \
  --priority 10

APPGW_ID=$(az network application-gateway show --name $APPGW_NAME --resource-group $RESOURCE_GROUP --query "id" --output tsv)
```
","title":"Deploy or Use existing Application Gateway"},{"location":"setup/install/#install-application-gateway-ingress-controller","text":"Setup environment variables

```bash
AKS_NAME='<aks-cluster-name>'
RESOURCE_GROUP='<resource-group-name>'
LOCATION="<location>"
IDENTITY_RESOURCE_NAME='agic-identity'
```
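Before creating the identity, it can help to confirm that the cluster really has the OIDC issuer and workload identity features enabled. A minimal sketch using the variables above; the query paths reflect the current az aks output shape and may vary by CLI version:

```bash
# Sketch: confirm OIDC issuer and workload identity are enabled on the cluster
az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" \
  --query "{oidcIssuerEnabled: oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}" \
  -o table
```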
```bash echo \"Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP\" az identity create --resource-group $RESOURCE_GROUP --name IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" IDENTITY_RESOURCE_NAME IDENTITY_PRINCIPAL_ID=\" (az identity show -g $RESOURCE_GROUP -n IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" IDENTITY_RESOURCE_NAME --query principalId -otsv)\" IDENTITY_CLIENT_ID=\" (az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query clientId -otsv)\" echo \"Waiting 60 seconds to allow for replication of the identity...\" sleep 60 echo \"Set up federation with AKS OIDC issuer\" AKS_OIDC_ISSUER=\" (az aks show -n \" (az aks show -n \" AKS_NAME\" -g \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"agic\" \\ --identity-name \" RESOURCE_GROUP\" --query \"oidcIssuerProfile.issuerUrl\" -o tsv)\" az identity federated-credential create --name \"agic\" \\ --identity-name \" IDENTITY_RESOURCE_NAME\" \\ --resource-group RESOURCE_GROUP \\ --issuer \" RESOURCE_GROUP \\ --issuer \" AKS_OIDC_ISSUER\" \\ --subject \"system:serviceaccount:default:ingress-azure\" resourceGroupId=$(az group show --name RESOURCE_GROUP --query id -otsv) nodeResourceGroup= RESOURCE_GROUP --query id -otsv) nodeResourceGroup= (az aks show -n $AKS_NAME -g RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= RESOURCE_GROUP -o tsv --query \"nodeResourceGroup\") nodeResourceGroupId= (az group show --name $nodeResourceGroup --query id -otsv) echo \"Apply role assignments to AGIC identity\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role \"Reader\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $nodeResourceGroupId --role \"Contributor\" az role assignment create --assignee-object-id $IDENTITY_PRINCIPAL_ID --assignee-principal-type ServicePrincipal --scope $APPGW_ID --role \"Contributor\" ``` Assignment of the managed identity immediately after creation may result in an error that the principalId does not exist. Allow about a minute of time to elapse for the identity to replicate in Microsoft Entra ID prior to delegating the identity. 
Install AGIC using Helm","title":"Install Application Gateway Ingress Controller"},{"location":"setup/install/#for-new-deployments","text":"AGIC can be installed by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm install ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ```","title":"For new deployments"},{"location":"setup/install/#for-existing-deployments","text":"AGIC can be upgraded by running the following commands: ```bash az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME # on aks cluster with only linux node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --version 1.7.3 # on aks cluster with windows node pools helm upgrade ingress-azure \\ oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \\ --set appgw.applicationGatewayID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= APPGW_ID \\ --set armAuth.type=workloadIdentity \\ --set armAuth.identityClientID= IDENTITY_CLIENT_ID \\ --set rbac.enabled=true \\ --set nodeSelector.\"beta.kubernetes.io/os\"=linux \\ --version 1.7.3 ```","title":"For existing deployments"},{"location":"setup/install/#install-a-sample-app","text":"Now that we have App Gateway, AKS, and AGIC installed we can install a sample app via Azure Cloud Shell : ```yaml cat <= v1.6.0, an error as shown below will be raised due to a breaking change. AAD Pod Identity introduced a breaking change after v1.5.5 due to CRD fields being case sensitive. The error is caused by AAD Pod Identity fields not matching what AGIC uses; more details of the mismatch under analysis of the issue . AAD Pod Identity v1.5 and lower have known issues with AKS' most recent base images, and therefore AKS has asked customers to upgrade to AAD Pod Identity v1.6 or higher. AGIC Pod Logs bash E0428 16:57:55.669130 1 client.go:132] Possible reasons: AKS Service Principal requires 'Managed Identity Operator' access on Controller Identity; 'identityResourceID' and/or 'identityClientID' are incorrect in the Helm config; AGIC Identity requires 'Contributor' access on Application Gateway and 'Reader' access on Application Gateway's Resource Group; E0428 16:57:55.669160 1 client.go:145] Unexpected ARM status code on GET existing App Gateway config: 403 E0428 16:57:55.669167 1 client.go:148] Failed fetching config for App Gateway instance. Will retry in 10s. 
MIC Pod Logs

```bash
E0427 00:13:26.222815 1 mic.go:899] Ignoring azure identity default/agic-azid-ingress-azure, error: Invalid resource id: "", must match /subscriptions//resourcegroups//providers/Microsoft.ManagedIdentity/userAssignedIdentities/
```

Analysis of the issue

AAD breaking change details

For AzureIdentity and AzureIdentityBinding created using AAD Pod Identity v1.6.0+, the following fields changed:

AzureIdentity

| < 1.6.0 | >= 1.6.0 |
| --- | --- |
| ClientID | clientID |
| ClientPassword | clientPassword |
| ResourceID | resourceID |
| TenantID | tenantID |

AzureIdentityBinding

| < 1.6.0 | >= 1.6.0 |
| --- | --- |
| AzureIdentity | azureIdentity |
| Selector | selector |

NOTE: AKS recommends using AAD Pod Identity with version >= 1.6.0.

AGIC fix to adapt to the breaking change

Updated AGIC Helm templates to use the right fields regarding AAD Pod Identity, PR for reference.

Resolving the issue

It's recommended you upgrade your AGIC to release 1.2.0 and then apply AAD Pod Identity version >= 1.6.0.

Upgrade AGIC to 1.2.0

AGIC version v1.2.0 will be required.

```bash
# https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/how-tos/helm-upgrade.md
# --reuse-values: when upgrading, reuse the last release's values and merge in any overrides
# from the command line via --set and -f. If '--reset-values' is specified, this is ignored.

helm repo update

# check the latest release version of AGIC
helm search repo -l application-gateway-kubernetes-ingress

# install release 1.2.0
helm upgrade \
  <release-name> \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --version 1.2.0 \
  --reuse-values
```

Note: If you're upgrading from v1.0.0 or below, you'll have to delete AGIC and then reinstall with v1.2.0.

Install the right version of AAD Pod Identity

AKS recommends upgrading the Azure Active Directory Pod Identity version on your Azure Kubernetes Service clusters to v1.6. AAD Pod Identity v1.5 or lower has known issues with AKS' most recent base images. To install AAD Pod Identity with version v1.6.0:

RBAC enabled AKS cluster

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment-rbac.yaml
```

RBAC disabled AKS cluster

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment.yaml
```
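To see which field casing your existing AzureIdentity resources use before and after the upgrade, you can dump them and look for the client ID field. A sketch; it assumes the aadpodidentity CRDs are installed in the cluster:

```bash
# Sketch: lowercase clientID indicates the v1.6.0+ schema, ClientID the older one
kubectl get azureidentity -o yaml | grep -E 'ClientID|clientID'
```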
","title":"Troubleshooting agic fails with aad pod identity breakingchange"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#troubleshooting-agic-v120-rc1-and-below-fails-with-a-breaking-change-introduced-in-aad-pod-identity-v16","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Troubleshooting: AGIC v1.2.0-rc1 and below fails with a breaking change introduced in AAD Pod Identity v1.6"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#overview","text":"If you're using AGIC with version < v1.2.0-rc2 and AAD Pod Identity with version >= v1.6.0, an error as shown below will be raised due to a breaking change. AAD Pod Identity introduced a breaking change after v1.5.5 due to CRD fields being case sensitive. The error is caused by AAD Pod Identity fields not matching what AGIC uses; more details of the mismatch are under analysis of the issue. AAD Pod Identity v1.5 and lower have known issues with AKS' most recent base images, and therefore AKS has asked customers to upgrade to AAD Pod Identity v1.6 or higher.

AGIC Pod Logs

```bash
E0428 16:57:55.669130 1 client.go:132] Possible reasons: AKS Service Principal requires 'Managed Identity Operator' access on Controller Identity; 'identityResourceID' and/or 'identityClientID' are incorrect in the Helm config; AGIC Identity requires 'Contributor' access on Application Gateway and 'Reader' access on Application Gateway's Resource Group;
E0428 16:57:55.669160 1 client.go:145] Unexpected ARM status code on GET existing App Gateway config: 403
E0428 16:57:55.669167 1 client.go:148] Failed fetching config for App Gateway instance. Will retry in 10s.
Error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/4c4aee1a-cfd4-4e7a-abe3-*******/resourceGroups/RG-NAME-DEV/providers/Microsoft.Network/applicationGateways/AG-NAME-DEV?api-version=2019-09-01: StatusCode=403 -- Original Error: adal: Refresh request failed. Status Code = '403'. Response body: getting assigned identities for pod default/agile-opossum-ingress-azure-579cbb6b89-sldr5 in CREATED state failed after 16 attempts, retry duration [5]s. Error:
```
MIC Pod Logs

```bash
E0427 00:13:26.222815 1 mic.go:899] Ignoring azure identity default/agic-azid-ingress-azure, error: Invalid resource id: "", must match /subscriptions//resourcegroups//providers/Microsoft.ManagedIdentity/userAssignedIdentities/
```
","title":"Overview"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#analysis-of-the-issue","text":"","title":"Analysis of the issue"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#aad-breaking-change-details","text":"For AzureIdentity and AzureIdentityBinding created using AAD Pod Identity v1.6.0+, the following fields changed:

AzureIdentity

| < 1.6.0 | >= 1.6.0 |
| --- | --- |
| ClientID | clientID |
| ClientPassword | clientPassword |
| ResourceID | resourceID |
| TenantID | tenantID |

AzureIdentityBinding

| < 1.6.0 | >= 1.6.0 |
| --- | --- |
| AzureIdentity | azureIdentity |
| Selector | selector |

NOTE: AKS recommends using AAD Pod Identity with version >= 1.6.0.","title":"AAD breaking change details"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#agic-fix-to-adapt-to-the-breaking-change","text":"Updated AGIC Helm templates to use the right fields regarding AAD Pod Identity, PR for reference.","title":"AGIC fix to adapt to the breaking change"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#resolving-the-issue","text":"It's recommended you upgrade your AGIC to release 1.2.0 and then apply AAD Pod Identity version >= 1.6.0.","title":"Resolving the issue"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#upgrade-agic-to-120","text":"AGIC version v1.2.0 will be required.

```bash
# https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/how-tos/helm-upgrade.md
# --reuse-values: when upgrading, reuse the last release's values and merge in any overrides
# from the command line via --set and -f. If '--reset-values' is specified, this is ignored.

helm repo update

# check the latest release version of AGIC
helm search repo -l application-gateway-kubernetes-ingress

# install release 1.2.0
helm upgrade \
  <release-name> \
  oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \
  --version 1.2.0 \
  --reuse-values
```

Note: If you're upgrading from v1.0.0 or below, you'll have to delete AGIC and then reinstall with v1.2.0.","title":"Upgrade AGIC to 1.2.0"},{"location":"troubleshootings/troubleshooting-agic-fails-with-aad-pod-identity-breakingchange/#install-the-right-version-of-aad-pod-identity","text":"AKS recommends upgrading the Azure Active Directory Pod Identity version on your Azure Kubernetes Service clusters to v1.6.
AAD Pod Identity v1.5 or lower has known issues with AKS' most recent base images. To install AAD Pod Identity with version v1.6.0:

RBAC enabled AKS cluster

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment-rbac.yaml
```

RBAC disabled AKS cluster

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.6.0/deploy/infra/deployment.yaml
```
","title":"Install the right version of AAD Pod Identity"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/","text":"Troubleshooting: AGIC pod stuck in not ready state NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.

Illustration

If the AGIC pod is stuck in the not-ready state, you will be seeing something like the following:

```bash
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
<agic-pod-name>        0/1     Running   0          19s
mic-774b9c5d7b-z4z8p   1/1     Running   1          15m
mic-774b9c5d7b-zrdsm   1/1     Running   1          15m
nmi-pv8ch              1/1     Running   1          15m
```

Common causes: Stuck at creating authorizer; Stuck getting Application Gateway.

AGIC is stuck at creating authorizer

When the AGIC pod starts, in one of the steps, AGIC tries to get an AAD (Azure Active Directory) token for the identity assigned to it. This token is then used to perform updates on the Application Gateway. This identity can be of two types: User Assigned Identity or Service Principal. When using a User Assigned Identity with AGIC, AGIC has a dependency on AAD Pod Identity. When you see your AGIC pod stuck at the Creating Authorizer step, the issue could be related to the setup of the user assigned identity and AAD Pod Identity.

```bash
$ kubectl logs <agic-pod-name>
ERROR: logging before flag.Parse: I0628 18:09:49.947221 1 utils.go:115] Using verbosity level 3 from environment variable APPGW_VERBOSITY_LEVEL
I0628 18:09:49.987776 1 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces.
I0628 18:09:49.987861 1 main.go:128] Application Gateway Details: Subscription="xxxx" Resource Group="resgp" Name="gateway"
I0628 18:09:49.987873 1 auth.go:46] Creating authorizer from Azure Managed Service Identity
I0628 18:09:49.987945 1 httpserver.go:57] Starting API Server on :8123
```

AAD Pod Identity is responsible for assigning the user assigned identity provided by the user for AGIC as AGIC's identity to the underlying AKS nodes, and for setting up the IP table rules that allow AGIC to get an AAD token from the Instance Metadata Service on the VM. When you install AAD Pod Identity on your AKS cluster, it deploys two components: Managed Identity Controller (MIC), which runs with multiple replicas and one Pod elected leader; it is responsible for assigning the identity to the AKS nodes. Node Managed Identity (NMI), which runs as a daemon on every node; it is responsible for enforcing the IP table rules that allow AGIC to GET the access token. For further reading on how these components work, you can go through this readme. Here is a concept diagram on the project page. Now, in order to debug the authorizer issue further, we need to get the logs for the mic and nmi pods. These pods usually start with mic and nmi as the prefix. We should first investigate the logs of mic and then nmi.

```bash
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
mic-774b9c5d7b-z4z8p   1/1     Running   1          15m
mic-774b9c5d7b-zrdsm   1/1     Running   1          15m
nmi-pv8ch              1/1     Running   1          15m
```

Issue in MIC Pod

For the mic pod, we will need to find the leader.
An easy way to find the leader is by looking at the log size; the leader pod is the one that is actively working. The MIC pod communicates with Azure Resource Manager (ARM) to assign the identity to the AKS nodes. If there are any issues in outbound connectivity, MIC can report TCP timeouts. Check your NSGs, UDRs and Firewall to make sure that you allow outbound traffic to Azure.

```bash
Updating msis on node aks-agentpool-41724381-vmss, add [1], del [1], update[0] failed with error azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/xxxx/resourceGroups/resgp/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-41724381-vmss?api-version=2019-07-01: StatusCode=0 -- Original Error: adal: Failed to execute the refresh request. Error = 'Post "https://login.microsoftonline.com//oauth2/token?api-version=1.0": dial tcp: i/o timeout'
```

You will see the following error if the AKS cluster's Service Principal is missing Managed Identity Operator access over the User Assigned Identity. You can follow the role assignment related step in the brownfield document.

```bash
Updating msis on node aks-agentpool-32587779-vmss, add [1], del [0] failed with error compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client '' with object id '' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/xxxx/resourceGroups//providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-32587779-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxx/resourcegroups/resgp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/' or the linked scope(s) are invalid."
```

Issue in NMI Pod

For the nmi pod, we will need to find the pod running on the same node as the AGIC pod. If you see a 403 response for a token request, make sure you have correctly assigned the needed permissions to AGIC's identity: Reader access to the Application Gateway's resource group (needed to list the resources in this resource group); Contributor access to the Application Gateway (needed to perform updates on the Application Gateway).

AGIC is stuck getting Application Gateway

AGIC can be stuck in getting the gateway due to: AGIC gets NotFound when getting Application Gateway. When you see this error, verify that the gateway actually exists in the subscription and resource group printed in the AGIC logs. If you are deploying in a National Cloud or US Gov Cloud, this issue could be related to an incorrect environment endpoint setting; to configure it correctly, set the appgw.environment property in the Helm config. AGIC gets Unauthorized when getting Application Gateway. Verify that you have given the needed permissions to AGIC's identity: Reader access to the Application Gateway's resource group (needed to list the resources in this resource group); Contributor access to the Application Gateway (needed to perform updates on the Application Gateway).","title":"Troubleshooting agic pod stuck in not ready state"},
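To locate the NMI pod that serves the AGIC pod, as needed in the NMI debugging step above, you can match on the node name. A sketch; <agic-pod-name> is a placeholder for your actual pod name:

```bash
# Sketch: find the NMI pod running on the same node as the AGIC pod
NODE=$(kubectl get pod <agic-pod-name> -o jsonpath='{.spec.nodeName}')
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName="$NODE" | grep nmi
```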
{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#troubleshooting-agic-pod-stuck-in-not-ready-state","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment.","title":"Troubleshooting: AGIC pod stuck in not ready state"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#illustration","text":"If the AGIC pod is stuck in the not-ready state, you will be seeing something like the following:

```bash
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
<agic-pod-name>        0/1     Running   0          19s
mic-774b9c5d7b-z4z8p   1/1     Running   1          15m
mic-774b9c5d7b-zrdsm   1/1     Running   1          15m
nmi-pv8ch              1/1     Running   1          15m
```
","title":"Illustration"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#common-causes","text":"Stuck at creating authorizer Stuck getting Application Gateway","title":"Common causes"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#agic-is-stuck-at-creating-authorizer","text":"When the AGIC pod starts, in one of the steps, AGIC tries to get an AAD (Azure Active Directory) token for the identity assigned to it. This token is then used to perform updates on the Application Gateway. This identity can be of two types: User Assigned Identity or Service Principal. When using a User Assigned Identity with AGIC, AGIC has a dependency on AAD Pod Identity. When you see your AGIC pod stuck at the Creating Authorizer step, the issue could be related to the setup of the user assigned identity and AAD Pod Identity.

```bash
$ kubectl logs <agic-pod-name>
ERROR: logging before flag.Parse: I0628 18:09:49.947221 1 utils.go:115] Using verbosity level 3 from environment variable APPGW_VERBOSITY_LEVEL
I0628 18:09:49.987776 1 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces.
I0628 18:09:49.987861 1 main.go:128] Application Gateway Details: Subscription="xxxx" Resource Group="resgp" Name="gateway"
I0628 18:09:49.987873 1 auth.go:46] Creating authorizer from Azure Managed Service Identity
I0628 18:09:49.987945 1 httpserver.go:57] Starting API Server on :8123
```

AAD Pod Identity is responsible for assigning the user assigned identity provided by the user for AGIC as AGIC's identity to the underlying AKS nodes, and for setting up the IP table rules that allow AGIC to get an AAD token from the Instance Metadata Service on the VM. When you install AAD Pod Identity on your AKS cluster, it deploys two components: Managed Identity Controller (MIC), which runs with multiple replicas and one Pod elected leader; it is responsible for assigning the identity to the AKS nodes. Node Managed Identity (NMI), which runs as a daemon on every node; it is responsible for enforcing the IP table rules that allow AGIC to GET the access token. For further reading on how these components work, you can go through this readme. Here is a concept diagram on the project page. Now, in order to debug the authorizer issue further, we need to get the logs for the mic and nmi pods. These pods usually start with mic and nmi as the prefix. We should first investigate the logs of mic and then nmi.

```bash
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
mic-774b9c5d7b-z4z8p   1/1     Running   1          15m
mic-774b9c5d7b-zrdsm   1/1     Running   1          15m
nmi-pv8ch              1/1     Running   1          15m
```
","title":"AGIC is stuck at creating authorizer"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#issue-in-mic-pod","text":"For the mic pod, we will need to find the leader. An easy way to find the leader is by looking at the log size; the leader pod is the one that is actively working. The MIC pod communicates with Azure Resource Manager (ARM) to assign the identity to the AKS nodes.
If there are any issues in outbound connectivity, MIC can report TCP timeouts. Check your NSGs, UDRs and Firewall to make sure that you allow outbound traffic to Azure.

```bash
Updating msis on node aks-agentpool-41724381-vmss, add [1], del [1], update[0] failed with error azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/xxxx/resourceGroups/resgp/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-41724381-vmss?api-version=2019-07-01: StatusCode=0 -- Original Error: adal: Failed to execute the refresh request. Error = 'Post "https://login.microsoftonline.com//oauth2/token?api-version=1.0": dial tcp: i/o timeout'
```

You will see the following error if the AKS cluster's Service Principal is missing Managed Identity Operator access over the User Assigned Identity. You can follow the role assignment related step in the brownfield document.

```bash
Updating msis on node aks-agentpool-32587779-vmss, add [1], del [0] failed with error compute.VirtualMachineScaleSetsClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client '' with object id '' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/xxxx/resourceGroups//providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-32587779-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxx/resourcegroups/resgp/providers/Microsoft.ManagedIdentity/userAssignedIdentities/' or the linked scope(s) are invalid."
```
","title":"Issue in MIC Pod"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#issue-in-nmi-pod","text":"For the nmi pod, we will need to find the pod running on the same node as the AGIC pod. If you see a 403 response for a token request, make sure you have correctly assigned the needed permissions to AGIC's identity: Reader access to the Application Gateway's resource group (needed to list the resources in this resource group); Contributor access to the Application Gateway (needed to perform updates on the Application Gateway).","title":"Issue in NMI Pod"},{"location":"troubleshootings/troubleshooting-agic-pod-stuck-in-not-ready-state/#agic-is-stuck-getting-application-gateway","text":"AGIC can be stuck in getting the gateway due to: AGIC gets NotFound when getting Application Gateway. When you see this error, verify that the gateway actually exists in the subscription and resource group printed in the AGIC logs. If you are deploying in a National Cloud or US Gov Cloud, this issue could be related to an incorrect environment endpoint setting; to configure it correctly, set the appgw.environment property in the Helm config. AGIC gets Unauthorized when getting Application Gateway. Verify that you have given the needed permissions to AGIC's identity: Reader access to the Application Gateway's resource group (needed to list the resources in this resource group); Contributor access to the Application Gateway (needed to perform updates on the Application Gateway).","title":"AGIC is stuck getting Application Gateway"},
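When checking these permissions, the identity's current role assignments on the gateway can be listed directly. A sketch; <identity-client-id> is a placeholder, and $APPGW_ID is assumed to hold the gateway's resource ID as in the setup steps:

```bash
# Sketch: list the role assignments the AGIC identity holds on the Application Gateway
az role assignment list --assignee <identity-client-id> --scope "$APPGW_ID" -o table
```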
{"location":"troubleshootings/troubleshooting-installing-a-simple-application/","text":"Troubleshooting: Installing a simple application NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Azure Cloud Shell is the most convenient way to troubleshoot any problems with your AKS and AGIC installation. Launch your shell from shell.azure.com. In this troubleshooting document, we will debug issues in the AGIC installation by installing a simple application step by step and checking the output as we go along. The steps below assume: You have an AKS cluster, with Advanced Networking enabled; AGIC has been installed on the AKS cluster; You already have an App Gateway on a VNET shared with your AKS cluster. To verify that the App Gateway + AKS + AGIC installation is set up correctly, deploy the simplest possible app:

```bash
cat <
```

Use kubectl logs <agic-pod-name> to verify that we have had a successful deployment. A successful deployment would have added the following lines to the log:

```
I0927 22:34:51.281437 1 process.go:156] Applied App Gateway config in 20.461335266s
I0927 22:34:51.281585 1 process.go:165] cache: Updated with latest applied config.
I0927 22:34:51.282342 1 process.go:171] END AppGateway deployment
```

Alternatively, from Cloud Shell we can retrieve only the lines indicating successful App Gateway configuration with kubectl logs <agic-pod-name> | grep 'Applied App Gateway config in', where <agic-pod-name> should be the exact name of the AGIC pod. App Gateway will have the following configuration applied: Listener:, Routing Rule:, Backend Pool:. There will be one IP address in the backend address pool and it will match the IP address of the Pod we observed earlier with kubectl get pods -o wide. Finally we can use the cURL command from within Cloud Shell to establish an HTTP connection to the newly deployed app: Use kubectl get ingress to get the Public IP address of App Gateway. Use curl -I -H 'Host: test.agic.contoso.com' <app-gateway-ip-address>. A result of HTTP/1.1 200 OK indicates that the App Gateway + AKS + AGIC system is working as expected.

Inspect Kubernetes Installation

Pods, Services, Ingress

Application Gateway Ingress Controller (AGIC) continuously monitors the following Kubernetes resources: Deployment or Pod, Service, Ingress. The following must be in place for AGIC to function as expected: AKS must have one or more healthy pods. Verify this from Cloud Shell with kubectl get pods -o wide --show-labels. If you have a Pod running aspnetapp, your output may look like this:

```bash
$> kubectl get pods -o wide --show-labels
NAME        READY   STATUS    RESTARTS   AGE   IP         NODE                       NOMINATED NODE   READINESS GATES   LABELS
aspnetapp   1/1     Running   0          17h   10.0.0.6   aks-agentpool-35064155-1   <none>           <none>            app=aspnetapp
```

One or more services, referencing the pods above via matching selector labels.
Verify this from Cloud Shell with kubectl get services -o wide

```bash
$> kubectl get services -o wide --show-labels
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR        LABELS
aspnetapp   ClusterIP   10.2.63.254   <none>        80/TCP    17h   app=aspnetapp
```

Ingress, annotated with kubernetes.io/ingress.class: azure/application-gateway, referencing the service above. Verify this from Cloud Shell with kubectl get ingress -o wide --show-labels

```bash
$> kubectl get ingress -o wide --show-labels
NAME        HOSTS   ADDRESS   PORTS   AGE   LABELS
aspnetapp   *                 80      17h
```

View annotations of the ingress above: kubectl get ingress aspnetapp -o yaml (substitute aspnetapp with the name of your ingress)

```bash
$> kubectl get ingress aspnetapp -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
  name: aspnetapp
spec:
  defaultBackend:
    service:
      name: aspnetapp
      port:
        number: 80
```

The ingress resource must be annotated with kubernetes.io/ingress.class: azure/application-gateway.

Verify Observed Namespace

Get the existing namespaces in the Kubernetes cluster. What namespace is your app running in? Is AGIC watching that namespace? Refer to the Multiple Namespace Support documentation on how to properly configure observed namespaces.

```bash
# What namespaces exist on your cluster
kubectl get namespaces

# What pods are currently running
kubectl get pods --all-namespaces -o wide
```

The AGIC pod should be in the default namespace (see column NAMESPACE). A healthy pod would have Running in the STATUS column. There should be at least one AGIC pod.

```bash
# Get a list of the Application Gateway Ingress Controller pods
kubectl get pods --all-namespaces --selector app=ingress-azure
```

If the AGIC pod is not healthy (the STATUS column from the command above is not Running): get logs to understand why: kubectl logs <pod-name>; for the previous instance of the pod: kubectl logs <pod-name> --previous; describe the pod to get more context: kubectl describe pod <pod-name>. Do you have Kubernetes Service and Ingress resources?

```bash
# Get all services across all namespaces
kubectl get service --all-namespaces -o wide

# Get all ingress resources across all namespaces
kubectl get ingress --all-namespaces -o wide
```

Is your Ingress annotated with kubernetes.io/ingress.class: azure/application-gateway? AGIC will only watch for Kubernetes Ingress resources that have this annotation.

```bash
# Get the YAML definition of a particular ingress resource
kubectl get ingress <ingress-name> --namespace <namespace> -o yaml
```

AGIC emits Kubernetes events for certain critical errors. You can view these: in your terminal via kubectl get events --sort-by=.metadata.creationTimestamp; in your browser using the Kubernetes Web UI (Dashboard).","title":"Troubleshooting installing a simple application"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#troubleshooting-installing-a-simple-application","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. Azure Cloud Shell is the most convenient way to troubleshoot any problems with your AKS and AGIC installation. Launch your shell from shell.azure.com. In this troubleshooting document, we will debug issues in the AGIC installation by installing a simple application step by step and checking the output as we go along.
The steps below assume: You have an AKS cluster, with Advanced Networking enabled; AGIC has been installed on the AKS cluster; You already have an App Gateway on a VNET shared with your AKS cluster. To verify that the App Gateway + AKS + AGIC installation is set up correctly, deploy the simplest possible app:

```bash
cat <
```

Use kubectl logs <agic-pod-name> to verify that we have had a successful deployment. A successful deployment would have added the following lines to the log:

```
I0927 22:34:51.281437 1 process.go:156] Applied App Gateway config in 20.461335266s
I0927 22:34:51.281585 1 process.go:165] cache: Updated with latest applied config.
I0927 22:34:51.282342 1 process.go:171] END AppGateway deployment
```

Alternatively, from Cloud Shell we can retrieve only the lines indicating successful App Gateway configuration with kubectl logs <agic-pod-name> | grep 'Applied App Gateway config in', where <agic-pod-name> should be the exact name of the AGIC pod. App Gateway will have the following configuration applied: Listener:, Routing Rule:, Backend Pool:. There will be one IP address in the backend address pool and it will match the IP address of the Pod we observed earlier with kubectl get pods -o wide. Finally we can use the cURL command from within Cloud Shell to establish an HTTP connection to the newly deployed app: Use kubectl get ingress to get the Public IP address of App Gateway. Use curl -I -H 'Host: test.agic.contoso.com' <app-gateway-ip-address>. A result of HTTP/1.1 200 OK indicates that the App Gateway + AKS + AGIC system is working as expected.","title":"Troubleshooting: Installing a simple application"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#inspect-kubernetes-installation","text":"","title":"Inspect Kubernetes Installation"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#pods-services-ingress","text":"Application Gateway Ingress Controller (AGIC) continuously monitors the following Kubernetes resources: Deployment or Pod, Service, Ingress. The following must be in place for AGIC to function as expected: AKS must have one or more healthy pods. Verify this from Cloud Shell with kubectl get pods -o wide --show-labels. If you have a Pod running aspnetapp, your output may look like this:

```bash
$> kubectl get pods -o wide --show-labels
NAME        READY   STATUS    RESTARTS   AGE   IP         NODE                       NOMINATED NODE   READINESS GATES   LABELS
aspnetapp   1/1     Running   0          17h   10.0.0.6   aks-agentpool-35064155-1   <none>           <none>            app=aspnetapp
```

One or more services, referencing the pods above via matching selector labels.
Verify this from Cloud Shell with kubectl get services -o wide

```bash
$> kubectl get services -o wide --show-labels
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR        LABELS
aspnetapp   ClusterIP   10.2.63.254   <none>        80/TCP    17h   app=aspnetapp
```

Ingress, annotated with kubernetes.io/ingress.class: azure/application-gateway, referencing the service above. Verify this from Cloud Shell with kubectl get ingress -o wide --show-labels

```bash
$> kubectl get ingress -o wide --show-labels
NAME        HOSTS   ADDRESS   PORTS   AGE   LABELS
aspnetapp   *                 80      17h
```

View annotations of the ingress above: kubectl get ingress aspnetapp -o yaml (substitute aspnetapp with the name of your ingress)

```bash
$> kubectl get ingress aspnetapp -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
  name: aspnetapp
spec:
  defaultBackend:
    service:
      name: aspnetapp
      port:
        number: 80
```

The ingress resource must be annotated with kubernetes.io/ingress.class: azure/application-gateway.","title":"Pods, Services, Ingress"},{"location":"troubleshootings/troubleshooting-installing-a-simple-application/#verify-observed-nampespace","text":"Get the existing namespaces in the Kubernetes cluster. What namespace is your app running in? Is AGIC watching that namespace? Refer to the Multiple Namespace Support documentation on how to properly configure observed namespaces.

```bash
# What namespaces exist on your cluster
kubectl get namespaces

# What pods are currently running
kubectl get pods --all-namespaces -o wide
```

The AGIC pod should be in the default namespace (see column NAMESPACE). A healthy pod would have Running in the STATUS column. There should be at least one AGIC pod.

```bash
# Get a list of the Application Gateway Ingress Controller pods
kubectl get pods --all-namespaces --selector app=ingress-azure
```

If the AGIC pod is not healthy (the STATUS column from the command above is not Running): get logs to understand why: kubectl logs <pod-name>; for the previous instance of the pod: kubectl logs <pod-name> --previous; describe the pod to get more context: kubectl describe pod <pod-name>. Do you have Kubernetes Service and Ingress resources?

```bash
# Get all services across all namespaces
kubectl get service --all-namespaces -o wide

# Get all ingress resources across all namespaces
kubectl get ingress --all-namespaces -o wide
```

Is your Ingress annotated with kubernetes.io/ingress.class: azure/application-gateway? AGIC will only watch for Kubernetes Ingress resources that have this annotation.

```bash
# Get the YAML definition of a particular ingress resource
kubectl get ingress <ingress-name> --namespace <namespace> -o yaml
```
","title":"Verify Observed Namespace"},AGIC emits Kubernetes events for certain critical errors. You can view these: in your terminal via kubectl get events --sort-by=.metadata.creationTimestamp; in your browser using the Kubernetes Web UI (Dashboard).
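To look at just the most recent events, the sorted listing can be tailed. A small sketch:

```bash
# Sketch: show the 20 most recent cluster events, newest last
kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 20
```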
","title":"Get the YAML definition of a particular ingress resource"},{"location":"tutorials/tutorial.e2e-ssl/","text":"Tutorial: Setting up E2E SSL NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. In this tutorial, we will learn how to set up E2E SSL with AGIC on Application Gateway. We will: Generate the frontend and the backend certificates; Deploy a simple application with HTTPS; Upload the backend certificate's root certificate to Application Gateway; Set up ingress for E2E. Note: The following tutorial makes use of test certificates generated using OpenSSL. These certificates are only for illustration and should be used in testing only.

Generate the frontend and the backend certificates

Let's start by first generating the certificates that we will be using for the frontend and backend SSL. First, we will generate the frontend certificate that will be presented to the clients connecting to the Application Gateway. This will have subject name CN=frontend.

```bash
openssl ecparam -out frontend.key -name prime256v1 -genkey
openssl req -new -sha256 -key frontend.key -out frontend.csr -subj "/CN=frontend"
openssl x509 -req -sha256 -days 365 -in frontend.csr -signkey frontend.key -out frontend.crt
```

Note: You can also use a certificate present in the Key Vault on Application Gateway for frontend SSL. Now, we will generate the backend certificate that will be presented by the backends to the Application Gateway. This will have subject name CN=backend.

```bash
openssl ecparam -out backend.key -name prime256v1 -genkey
openssl req -new -sha256 -key backend.key -out backend.csr -subj "/CN=backend"
openssl x509 -req -sha256 -days 365 -in backend.csr -signkey backend.key -out backend.crt
```

Finally, we will install the above certificates on to our Kubernetes cluster:

```bash
kubectl create secret tls frontend-tls --key="frontend.key" --cert="frontend.crt"
kubectl create secret tls backend-tls --key="backend.key" --cert="backend.crt"
```

Here is the output after listing the secrets:

```bash
kubectl get secrets
NAME           TYPE                DATA   AGE
backend-tls    kubernetes.io/tls   2      3m18s
frontend-tls   kubernetes.io/tls   2      3m18s
```

Deploy a simple application with HTTPS

In this section, we will deploy a simple application exposing an HTTPS endpoint on port 8443.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  selector:
    app: website
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  selector:
    matchLabels:
      app: website
  replicas: 2
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          imagePullPolicy: Always
          image: nginx:latest
          ports:
            - containerPort: 8443
          volumeMounts:
            - mountPath: /etc/nginx/ssl
              name: secret-volume
            - mountPath: /etc/nginx/conf.d
              name: configmap-volume
      volumes:
        - name: secret-volume
          secret:
            secretName: backend-tls
        - name: configmap-volume
          configMap:
            name: website-nginx-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: website-nginx-cm
data:
  default.conf: |-
    server {
        listen 8080 default_server;
        listen 8443 ssl;
        root /usr/share/nginx/html;
        index index.html;
        ssl_certificate /etc/nginx/ssl/tls.crt;
        ssl_certificate_key /etc/nginx/ssl/tls.key;
        location / {
            return 200 "Hello World!";
        }
    }
```

You can also install the above YAMLs using:

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-https-backend.yaml
```

Verify that you can curl the application:

```bash
kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
website-deployment-9c8c6df7f-5bqwh   1/1     Running   0          24s
website-deployment-9c8c6df7f-wxtnp   1/1     Running   0          24s

kubectl exec -it website-deployment-9c8c6df7f-5bqwh -- curl -k https://localhost:8443
Hello World!
```

Upload the backend certificate's root certificate to Application Gateway

When you are setting up SSL between Application Gateway and the backend, if you are using a self-signed certificate or a certificate signed by a custom root CA on the backend, then you need to upload the self-signed certificate or the custom root CA of the backend certificate to the Application Gateway.

```bash
applicationGatewayName="<application-gateway-name>"
resourceGroup="<resource-group-name>"
az network application-gateway root-cert create \
  --gateway-name $applicationGatewayName \
  --resource-group $resourceGroup \
  --name backend-tls \
  --cert-file backend.crt
```

Setup ingress for E2E

Now, we will configure our ingress to use the frontend certificate for frontend SSL and the backend certificate as a root certificate so that Application Gateway can authenticate the backend.

```bash
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/backend-protocol: "https"
    appgw.ingress.kubernetes.io/backend-hostname: "backend"
    appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: "backend-tls"
spec:
  tls:
    - secretName: frontend-tls
      hosts:
        - website.com
  rules:
    - host: website.com
      http:
        paths:
          - path: /
            backend:
              service:
                name: website-service
                port:
                  number: 8443
            pathType: Exact
EOF
```

For frontend SSL, we have added a tls section in our ingress resource:

```yaml
tls:
  - secretName: frontend-tls
    hosts:
      - website.com
```

For backend SSL, we have added the following annotations:

```yaml
appgw.ingress.kubernetes.io/backend-protocol: "https"
appgw.ingress.kubernetes.io/backend-hostname: "backend"
appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: "backend-tls"
```

Here, it is important to note that backend-hostname should be the hostname that the backend will accept, and it should also match the Subject/Subject Alternative Name of the certificate used on the backend.
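A quick way to double-check that hostname against the certificate is to print the certificate's subject. A small sketch using the backend.crt generated earlier in this tutorial:

```bash
# Sketch: print the subject of the backend certificate
# expected output for this tutorial: subject=CN = backend
openssl x509 -in backend.crt -noout -subject
```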
After you have successfully completed all the above steps, you should be able to see the ingress's IP address and visit the website.

```bash
kubectl get ingress
NAME              HOSTS         ADDRESS        PORTS     AGE
website-ingress   website.com   <ingress-ip>   80, 443   36m

curl -k -H "Host: website.com" https://<ingress-ip>
Hello World!
```
","title":"Tutorial: Setting up E2E SSL"},{"location":"tutorials/tutorial.e2e-ssl/#tutorial-setting-up-e2e-ssl","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. In this tutorial, we will learn how to set up E2E SSL with AGIC on Application Gateway. We will: Generate the frontend and the backend certificates; Deploy a simple application with HTTPS; Upload the backend certificate's root certificate to Application Gateway; Set up ingress for E2E. Note: The following tutorial makes use of test certificates generated using OpenSSL. These certificates are only for illustration and should be used in testing only.","title":"Tutorial: Setting up E2E SSL"},{"location":"tutorials/tutorial.e2e-ssl/#generate-the-frontend-and-the-backend-certificates","text":"Let's start by first generating the certificates that we will be using for the frontend and backend SSL. First, we will generate the frontend certificate that will be presented to the clients connecting to the Application Gateway. This will have subject name CN=frontend.

```bash
openssl ecparam -out frontend.key -name prime256v1 -genkey
openssl req -new -sha256 -key frontend.key -out frontend.csr -subj "/CN=frontend"
openssl x509 -req -sha256 -days 365 -in frontend.csr -signkey frontend.key -out frontend.crt
```

Note: You can also use a certificate present in the Key Vault on Application Gateway for frontend SSL. Now, we will generate the backend certificate that will be presented by the backends to the Application Gateway. This will have subject name CN=backend.

```bash
openssl ecparam -out backend.key -name prime256v1 -genkey
openssl req -new -sha256 -key backend.key -out backend.csr -subj "/CN=backend"
openssl x509 -req -sha256 -days 365 -in backend.csr -signkey backend.key -out backend.crt
```

Finally, we will install the above certificates on to our Kubernetes cluster:

```bash
kubectl create secret tls frontend-tls --key="frontend.key" --cert="frontend.crt"
kubectl create secret tls backend-tls --key="backend.key" --cert="backend.crt"
```

Here is the output after listing the secrets:

```bash
kubectl get secrets
NAME           TYPE                DATA   AGE
backend-tls    kubernetes.io/tls   2      3m18s
frontend-tls   kubernetes.io/tls   2      3m18s
```
","title":"Generate the frontend and the backend certificates"},{"location":"tutorials/tutorial.e2e-ssl/#deploy-a-simple-application-with-https","text":"In this section, we will deploy a simple application exposing an HTTPS endpoint on port 8443.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  selector:
    app: website
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  selector:
    matchLabels:
      app: website
  replicas: 2
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          imagePullPolicy: Always
          image: nginx:latest
          ports:
            - containerPort: 8443
          volumeMounts:
            - mountPath: /etc/nginx/ssl
              name: secret-volume
            - mountPath: /etc/nginx/conf.d
              name: configmap-volume
      volumes:
        - name: secret-volume
          secret:
            secretName: backend-tls
        - name: configmap-volume
          configMap:
            name: website-nginx-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: website-nginx-cm
data:
  default.conf: |-
    server {
        listen 8080 default_server;
        listen 8443 ssl;
        root /usr/share/nginx/html;
        index index.html;
        ssl_certificate /etc/nginx/ssl/tls.crt;
        ssl_certificate_key /etc/nginx/ssl/tls.key;
        location / {
            return 200 "Hello World!";
        }
    }
```

You can also install the above YAMLs using:

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-https-backend.yaml
```

Verify that you can curl the application:

```bash
kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
website-deployment-9c8c6df7f-5bqwh   1/1     Running   0          24s
website-deployment-9c8c6df7f-wxtnp   1/1     Running   0          24s

kubectl exec -it website-deployment-9c8c6df7f-5bqwh -- curl -k https://localhost:8443
Hello World!
```
","title":"Deploy a simple application with HTTPS"},{"location":"tutorials/tutorial.e2e-ssl/#upload-the-backend-certificates-root-certificate-to-application-gateway","text":"When you are setting up SSL between Application Gateway and the backend, if you are using a self-signed certificate or a certificate signed by a custom root CA on the backend, then you need to upload the self-signed certificate or the custom root CA of the backend certificate to the Application Gateway.

```bash
applicationGatewayName="<application-gateway-name>"
resourceGroup="<resource-group-name>"
az network application-gateway root-cert create \
  --gateway-name $applicationGatewayName \
  --resource-group $resourceGroup \
  --name backend-tls \
  --cert-file backend.crt
```
","title":"Upload the backend certificate's root certificate to Application Gateway"},{"location":"tutorials/tutorial.e2e-ssl/#setup-ingress-for-e2e","text":"Now, we will configure our ingress to use the frontend certificate for frontend SSL and the backend certificate as a root certificate so that Application Gateway can authenticate the backend.

```bash
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/backend-protocol: "https"
    appgw.ingress.kubernetes.io/backend-hostname: "backend"
    appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: "backend-tls"
spec:
  tls:
    - secretName: frontend-tls
      hosts:
        - website.com
  rules:
    - host: website.com
      http:
        paths:
          - path: /
            backend:
              service:
                name: website-service
                port:
                  number: 8443
            pathType: Exact
EOF
```

For frontend SSL, we have added a tls section in our ingress resource:
```yaml
tls:
  - secretName: frontend-tls
    hosts:
      - website.com
```

For backend SSL, we have added the following annotations:

```yaml
appgw.ingress.kubernetes.io/backend-protocol: "https"
appgw.ingress.kubernetes.io/backend-hostname: "backend"
appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: "backend-tls"
```

Here, it is important to note that backend-hostname should be the hostname that the backend will accept, and it should also match the Subject/Subject Alternative Name of the certificate used on the backend. After you have successfully completed all the above steps, you should be able to see the ingress's IP address and visit the website.

```bash
kubectl get ingress
NAME              HOSTS         ADDRESS        PORTS     AGE
website-ingress   website.com   <ingress-ip>   80, 443   36m

curl -k -H "Host: website.com" https://<ingress-ip>
Hello World!
```
","title":"Setup ingress for E2E"},{"location":"tutorials/tutorial.general/","text":"Tutorial: Basic NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. These tutorials help illustrate the usage of Kubernetes Ingress Resources to expose an example Kubernetes service through the Azure Application Gateway over HTTP or HTTPS.

Table of Contents: Prerequisites; Deploy guestbook application; Expose services over HTTP; Expose services over HTTPS (Without specified hostname, With specified hostname); Integrate with other services.

Prerequisites

Installed ingress-azure helm chart. Greenfield Deployment: If you are starting from scratch, refer to these installation instructions, which outline steps to deploy an AKS cluster with Application Gateway and install the application gateway ingress controller on the AKS cluster. If you want to use HTTPS on this application, you will need an x509 certificate and its private key.

Deploy guestbook application

The guestbook application is a canonical Kubernetes application composed of a Web UI frontend, a backend, and a Redis database. By default, guestbook exposes its application through a service named frontend on port 80. Without a Kubernetes Ingress Resource, the service is not accessible from outside the AKS cluster. We will use the application and set up Ingress Resources to access the application through HTTP and HTTPS. Follow the instructions below to deploy the guestbook application. Download guestbook-all-in-one.yaml from here. Deploy guestbook-all-in-one.yaml into your AKS cluster by running:

```bash
kubectl apply -f guestbook-all-in-one.yaml
```

Now, the guestbook application has been deployed.

Expose services over HTTP

In order to expose the guestbook application, we will use the following ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

This ingress will expose the frontend service of the guestbook-all-in-one deployment as a default backend of the Application Gateway. Save the above ingress resource as ing-guestbook.yaml. Deploy ing-guestbook.yaml by running:

```bash
kubectl apply -f ing-guestbook.yaml
```

Check the log of the ingress controller for deployment status. Now the guestbook application should be available. You can check this by visiting the public address of the Application Gateway.
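One way to find that public address is from the ingress status. A sketch, assuming the ingress is named guestbook as above:

```bash
# Sketch: read the Application Gateway's public IP from the ingress status
kubectl get ingress guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```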
Expose services over HTTPS Without specified hostname Without specifying a hostname, the guestbook service will be available on all the hostnames pointing to the application gateway. Before deploying the ingress, you need to create a Kubernetes secret to hold the certificate and private key. You can create a Kubernetes secret by running bash kubectl create secret tls <guestbook-secret-name> --key <path-to-key> --cert <path-to-cert> Define the following ingress. In the ingress, specify the name of the secret in the secretName section. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - secretName: <guestbook-secret-name> rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 NOTE: Replace <guestbook-secret-name> in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file named ing-guestbook-tls.yaml . Deploy ing-guestbook-tls.yaml by running bash kubectl apply -f ing-guestbook-tls.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on HTTPS. In order to make the guestbook application available on HTTP, annotate the Ingress with yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Only in this case an HTTP listener is created in Azure that redirects visitors to the HTTPS version. With specified hostname You can also specify the hostname on the ingress in order to multiplex TLS configurations and services. By specifying a hostname, the guestbook service will only be available on the specified host. Define the following ingress. In the ingress, specify the name of the secret in the secretName section and replace the hostname in the hosts section accordingly. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - hosts: - <hostname> secretName: <guestbook-secret-name> rules: - host: <hostname> http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 Store the above Ingress Resource in a file named ing-guestbook-tls-sni.yaml . Deploy ing-guestbook-tls-sni.yaml by running bash kubectl apply -f ing-guestbook-tls-sni.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on both HTTP and HTTPS, but only on the specified host (<hostname> in this example).","title":"Tutorial: Basic"},{"location":"tutorials/tutorial.general/#tutorial-basic","text":"NOTE: Application Gateway for Containers has been released, which introduces numerous performance, resilience, and feature changes. Please consider leveraging Application Gateway for Containers for your next deployment. These tutorials help illustrate the usage of Kubernetes Ingress Resources to expose an example Kubernetes service through the Azure Application Gateway over HTTP or HTTPS.","title":"Tutorial: Basic"},{"location":"tutorials/tutorial.general/#table-of-contents","text":"Prerequisites Deploy guestbook application Expose services over HTTP Expose services over HTTPS Without specified hostname With specified hostname Integrate with other services","title":"Table of Contents"},{"location":"tutorials/tutorial.general/#prerequisites","text":"Installed ingress-azure helm chart. Greenfield Deployment : If you are starting from scratch, refer to these installation instructions, which outline the steps to deploy an AKS cluster with Application Gateway and install the application gateway ingress controller on the AKS cluster. 
If you want to use HTTPS on this application, you will need an x509 certificate and its private key.","title":"Prerequisites"},{"location":"tutorials/tutorial.general/#deploy-guestbook-application","text":"The guestbook application is a canonical Kubernetes application that is composed of a web UI frontend, a backend, and a Redis database. By default, guestbook exposes its application through a service named frontend on port 80. Without a Kubernetes Ingress Resource, the service is not accessible from outside the AKS cluster. We will use the application and set up Ingress Resources to access the application through HTTP and HTTPS. Follow the instructions below to deploy the guestbook application. Download guestbook-all-in-one.yaml from here Deploy guestbook-all-in-one.yaml into your AKS cluster by running bash kubectl apply -f guestbook-all-in-one.yaml Now, the guestbook application has been deployed.","title":"Deploy guestbook application"},{"location":"tutorials/tutorial.general/#expose-services-over-http","text":"In order to expose the guestbook application, we will use the following ingress resource: yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 This ingress will expose the frontend service of the guestbook-all-in-one deployment as the default backend of the Application Gateway. Save the above ingress resource as ing-guestbook.yaml . Deploy ing-guestbook.yaml by running: bash kubectl apply -f ing-guestbook.yaml Check the log of the ingress controller for deployment status. Now the guestbook application should be available. You can check this by visiting the public address of the Application Gateway.","title":"Expose services over HTTP"},{"location":"tutorials/tutorial.general/#expose-services-over-https","text":"","title":"Expose services over HTTPS"},{"location":"tutorials/tutorial.general/#without-specified-hostname","text":"Without specifying a hostname, the guestbook service will be available on all the hostnames pointing to the application gateway. Before deploying the ingress, you need to create a Kubernetes secret to hold the certificate and private key. You can create a Kubernetes secret by running bash kubectl create secret tls <guestbook-secret-name> --key <path-to-key> --cert <path-to-cert> Define the following ingress. In the ingress, specify the name of the secret in the secretName section. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - secretName: <guestbook-secret-name> rules: - http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 NOTE: Replace <guestbook-secret-name> in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file named ing-guestbook-tls.yaml . Deploy ing-guestbook-tls.yaml by running bash kubectl apply -f ing-guestbook-tls.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on HTTPS. 
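In case you did not already have a certificate and key for the secret above, a self-signed pair is enough for testing (a hedged sketch; the file names, subject, and guestbook-secret name are placeholders, and browsers will warn about the untrusted issuer):

```bash
# Generate a throwaway self-signed certificate and key (placeholder names).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout guestbook.key -out guestbook.crt \
  -subj '/CN=guestbook.contoso.com'
# Create the Kubernetes TLS secret referenced by secretName in the ingress.
kubectl create secret tls guestbook-secret --key guestbook.key --cert guestbook.crt
```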
In order to make the guestbook application available on HTTP, annotate the Ingress with yaml appgw.ingress.kubernetes.io/ssl-redirect: \"true\" Only in this case an HTTP listener is created in Azure that redirects visitors to the HTTPS version.","title":"Without specified hostname"},{"location":"tutorials/tutorial.general/#with-specified-hostname","text":"You can also specify the hostname on the ingress in order to multiplex TLS configurations and services. By specifying a hostname, the guestbook service will only be available on the specified host. Define the following ingress. In the ingress, specify the name of the secret in the secretName section and replace the hostname in the hosts section accordingly. yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: guestbook annotations: kubernetes.io/ingress.class: azure/application-gateway spec: tls: - hosts: - <hostname> secretName: <guestbook-secret-name> rules: - host: <hostname> http: paths: - pathType: Prefix path: / backend: service: name: frontend port: number: 80 Store the above Ingress Resource in a file named ing-guestbook-tls-sni.yaml . Deploy ing-guestbook-tls-sni.yaml by running bash kubectl apply -f ing-guestbook-tls-sni.yaml Check the log of the ingress controller for deployment status. Now the guestbook application will be available on both HTTP and HTTPS, but only on the specified host (<hostname> in this example).","title":"With specified hostname"}]} \ No newline at end of file diff --git a/setup/install/index.html b/setup/install/index.html index d7cf5c9b6..522d45c54 100644 --- a/setup/install/index.html +++ b/setup/install/index.html @@ -245,7 +245,7 @@

      Register required resource providers

    6. Install Helm

      -Helm is an open-source packaging tool that is used to install ALB controller.

      +Helm is an open-source packaging tool that is used to install AGIC.

      Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
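
      If you are not using Azure Cloud Shell, Helm can be installed with the project's official install script (a sketch of one common method; see https://helm.sh for the currently recommended instructions):

      ```bash
      # Download and run the official Helm 3 install script, then verify the install.
      curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
      helm version
      ```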

      @@ -335,9 +335,9 @@

      Install Application Gateway Ingress Controller

      sleep 60

      echo "Set up federation with AKS OIDC issuer" AKS_OIDC_ISSUER="(az aks show -n "AKS_NAME" -g "RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)" -az identity federated-credential create --name "azure-alb-identity" \ +az identity federated-credential create --name "agic" \ --identity-name "IDENTITY_RESOURCE_NAME" \ --resource-group RESOURCE_GROUP \ --issuer "