
4. Deployment


1. Configure the Helm Chart

Update the values.yaml file with the modifications to the configuration (see /examples/values.ntua.yml for an example). This guide assumes that you have followed the instructions in the Requirements section. Please refer to the official TSG GitLab page for further information regarding the configuration. The minimal configuration required to get your first deployment running, without data apps and ingresses, is as follows:

a. Host

Modify host to the domain name you configured with the ingress controller:

host: ${domain-name}
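
For example, assuming the (hypothetical) domain connector.example.com is the one served by your ingress controller, this becomes:

host: connector.example.com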

b. Connector IDs

Modify ids.info.idsid, ids.info.curator and ids.info.maintainer in the values.yaml file to the corresponding identifiers that you filled in during creation of the certificates. ids.info.idsid should be the Connector ID, and ids.info.curator and ids.info.maintainer should be the Participant ID. Optionally, change titles and descriptions to the connector's name and a more detailed description of your service:

ids:
  info:
    idsid: ${IDS_COMPONENT_ID}
    curator: ${IDS_PARTICIPANT_ID}
    maintainer: ${IDS_PARTICIPANT_ID}
    titles:
      - ${CONNECTOR TITLE@en}
    descriptions:
      - ${CONNECTOR DESCRIPTION@en}
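
For illustration only, a filled-in block might look like the following (the identifiers and texts below are hypothetical placeholders; use the exact values from your certificate creation):

ids:
  info:
    idsid: urn:ids:example:connectors:my-connector
    curator: urn:ids:example:participants:my-organisation
    maintainer: urn:ids:example:participants:my-organisation
    titles:
      - My Connector@en
    descriptions:
      - Connector exposing an example internal service@en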

c. Data-app agents

Modify the fields in the agents section. Keep in mind that api-version is the version number you used for your API when you uploaded it to SwaggerHub (e.g. 0.5). It is important to note that, in order to retrieve the API spec for the data app, the URL used in the config should be the /apiproxy/registry/ variant instead of the /apis/ link from SwaggerHub.

Important Note: As of version 2.3.1 of the OpenAPI data app (image docker.nexus.dataspac.es/data-apps/openapi-data-app:2.3.1), it is no longer necessary to add your OpenAPI description to SwaggerHub for the connector to find your app. In the values.yaml file, at both places where openApiBaseUrl is allowed (in the root config of the data app and per agent), openApiMapping is now also supported. We encourage the use of openApiMapping, as it reduces complexity and third-party overhead. The structure is similar to backEndUrlMapping, so per version the full URL of the OpenAPI document can be provided:

agents:
    - id: ${IDS_COMPONENT_ID}:${AgentName} # custom agent defined by user
      backEndUrlMapping:
        ${api-version}: http://${service-name}:${internal-service-port}
      title: SERVICE TITLE
      # Comment/Uncomment either openApiBaseUrl or openApiMapping snippet
      # openApiBaseUrl: https://app.swaggerhub.com/apiproxy/registry/${username}/${api-name}/
      openApiMapping:
        ${api-version}: http://path_to_api_description_json
      versions: 
      - ${api-version}
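
As a filled-in sketch (the agent name, service name, port and API version below are hypothetical), an agent that serves its own OpenAPI document from the backend service could look like:

agents:
    - id: ${IDS_COMPONENT_ID}:demo-agent              # hypothetical agent name
      backEndUrlMapping:
        "0.5": http://demo-backend:8080               # in-cluster service name and port
      title: Demo backend service
      openApiMapping:
        "0.5": http://demo-backend:8080/openapi.json  # full URL of the OpenAPI document
      versions:
        - "0.5"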

d. Multiple connectors (optional)

When using multiple connectors in the same cluster, deploy the connectors in different namespaces to avoid confusion between their certificates. Each connector namespace must contain the connector Helm chart as well as its respective identity secret. Additionally, to avoid overlap between connectors in the same namespace and/or domain, you should also modify the following in values.yaml (a combined sketch follows after this list):

  • the data-app path at containers.services.ingress.path

     services:
       - port: 8080
         name: http
         ingress:
           path: /${data-app}/(.*)
  • the name of the identity secret at coreContainer.secrets.idsIdentity.name

     secrets:
       idsIdentity:
         name: ${ids-identity-secret}
  • the ingress path at coreContainer.ingress.path and adminUI.ingress.path

     ingress:
       path: /${deployment-name}/ui/(.*)
       rewriteTarget: /$1
       clusterIssuer: letsencrypt
       ingressClass: public
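
For illustration, a sketch of running two connectors side by side (the namespace names, deployment names and values files below are hypothetical; the install command itself is explained in step 4):

# hypothetical example: two connectors isolated in their own namespaces
microk8s kubectl create namespace connector-a
microk8s kubectl create namespace connector-b

# each namespace needs its own ids-identity-secret (see step 2) and its own values file
microk8s helm upgrade --install -n connector-a \
        --repo https://nexus.dataspac.es/repository/tsg-helm \
        --version 3.2.8 -f values-a.yaml connector-a tsg-connector
microk8s helm upgrade --install -n connector-b \
        --repo https://nexus.dataspac.es/repository/tsg-helm \
        --version 3.2.8 -f values-b.yaml connector-b tsg-connector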

e. Security (Optional)

  • Modify the ids.security.apiKey.key and containers.key fields: change the part after APIKEY- to a random API key used for interaction between the core container and the data app (one way to generate such a key is sketched after this list).

    key: APIKEY-sgqgCPJWgQjmMWrKLAmkETDE
    ...
    apiKey: APIKEY-sgqgCPJWgQjmMWrKLAmkETDE
  • Modify the ids.security.users.password field: create your own BCrypt encoded password for the admin user of the connector (also used in the default configuration to secure the ingress of the data app). One way to generate such a password is shown in the sketch after this list.

    users:
        - id: admin
            # -- BCrypt encoded password
            password: ${admin-password}
            roles:
                - ADMIN
  • The connector's UI is secured using ingress authentication, with the credentials defined in ids.security.users in the connector's values.yaml. To secure the connector's data app as well, uncomment the containers.services.ingress.annotations fields in values.yaml. Since the authentication is implemented via the ingress, the path prefix must match the one in coreContainer.ingress.path:

    coreContainer.ingress.path: /${deployment-name}/(.*)
    ...
    containers.services.ingress.annotations:
      nginx.ingress.kubernetes.io/auth-url: "https://$host/${deployment-name}/external-auth/auth"
      nginx.ingress.kubernetes.io/auth-signin: "https://$host/${deployment-name}/external-auth/signin?rd=$escaped_request_uri"
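
One possible way to generate these values locally is sketched below (assuming openssl and htpasswd, from the apache2-utils package, are available on your machine):

# generate a random API key to place after the APIKEY- prefix
echo "APIKEY-$(openssl rand -hex 16)"

# generate a BCrypt encoded password for the admin user (replace my-admin-password)
htpasswd -bnBC 10 "" 'my-admin-password' | tr -d ':\n'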

2. Create IDS Identity secret

Cert-manager stores TLS certificates as Kubernetes secrets, making them easily accessible to your applications. When certificates are renewed, the updated certificates are automatically stored in the corresponding secrets. Create a Kubernetes secret containing the certificates acquired during identity creation:

microk8s kubectl create secret generic ids-identity-secret --from-file=ids.crt=./component.crt \
                                                           --from-file=ids.key=./component.key \
                                                           --from-file=ca.crt=./cachain.crt    \
                                                           -n ${namespace} 

Please update the namespace to an appropriate name (e.g. default).
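
To verify that the secret was created correctly, you can inspect it (an optional check; kubectl describe lists the data keys without printing their contents):

microk8s kubectl describe secret ids-identity-secret -n ${namespace}
# the Data section should list ca.crt, ids.crt and ids.key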

3. Add the Helm repository

Add the Helm repository of the TSG components:

helm repo add tsg https://nexus.dataspac.es/repository/tsg-helm
helm repo update

4. Helm chart installation

To install the Helm chart, execute:

microk8s helm upgrade --install                               \
        -n ${namespace}                                       \
        --repo https://nexus.dataspac.es/repository/tsg-helm  \
        --version 3.2.8                                       \
        -f values.yaml                                        \
        ${deployment-name}                                    \
        tsg-connector

Please update the namespace (e.g. default) and deployment-name (e.g. my-connector) fields to appropriate names.
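
Once the command returns, you can confirm that the release exists (an optional check, assuming the default namespace and the deployment name my-connector):

microk8s helm list -n default
microk8s helm status my-connector -n default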

5. Wait

Wait until all connector pods are in a ready (1/1) state (this may take a minute or more). You can watch the state of the pods using this command:

watch microk8s kubectl get all --all-namespaces
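
If you prefer to watch only the connector's namespace, or to block until all pods report ready, the following commands can be used instead (assuming the namespace chosen during installation):

# watch only the connector's namespace
microk8s kubectl get pods -n ${namespace} --watch

# or block until all pods in the namespace are Ready (times out after 5 minutes)
microk8s kubectl wait --for=condition=Ready pod --all -n ${namespace} --timeout=300s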