diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/01-introduction/current-state/index.html b/01-introduction/current-state/index.html new file mode 100644 index 000000000..167341995 --- /dev/null +++ b/01-introduction/current-state/index.html @@ -0,0 +1 @@ + Current state - Cloud Pak Deployer

Current state of the Cloud Pak Deployer🔗

The picture below shows the current state of the Cloud Pak Deployer: the infrastructures on which OpenShift can be provisioned or used, the storage classes that can be controlled, and the Cloud Paks with their cartridges and components. Current state of the deployer

\ No newline at end of file diff --git a/01-introduction/images/cp-deploy-current-state.drawio b/01-introduction/images/cp-deploy-current-state.drawio new file mode 100644 index 000000000..6aae01e18 --- /dev/null +++ b/01-introduction/images/cp-deploy-current-state.drawio @@ -0,0 +1,142 @@ [drawio XML diagram source omitted] diff --git a/01-introduction/images/cp-deploy-current-state.png b/01-introduction/images/cp-deploy-current-state.png new file mode 100644 index 000000000..32ddd8df7 Binary files /dev/null and b/01-introduction/images/cp-deploy-current-state.png differ diff --git a/05-install/install/index.html b/05-install/install/index.html new file mode 100644 index 000000000..c3c11abb4 --- /dev/null +++ b/05-install/install/index.html @@ -0,0 +1,14 @@ + Installing Cloud Pak Deployer - Cloud Pak Deployer

Installing the Cloud Pak Deployer🔗

Prerequisites🔗

To install and run the Cloud Pak Deployer, ensure that either podman or docker is available on your system. Both are available for Linux distributions such as Red Hat Enterprise Linux (preferred), Fedora, CentOS and Ubuntu, as well as for macOS. Note that Docker behaves differently on Windows than on Linux platforms, potentially causing deployment issues.
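
To check which container engine is available on your system, you can run:

podman --version || docker --version
+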

Using a Windows workstation🔗

If you're working on a Windows workstation without access to a Linux server, you can use VirtualBox to create a Linux virtual machine for deployment.

Once the guest operating system is set up, log in as root. VirtualBox supports port forwarding for easy access to the Linux command line using tools like PuTTY.
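
For example, assuming a VM named cpd-host (a hypothetical name) that uses VirtualBox NAT networking, you can forward host port 2222 to the guest's SSH port 22 and then connect to localhost:2222 with PuTTY or ssh:

VBoxManage modifyvm "cpd-host" --natpf1 "guestssh,tcp,,2222,,22"
+ssh -p 2222 root@localhost
+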

Install on Linux🔗

On Red Hat Enterprise Linux or CentOS, run the following commands:

yum install -y podman git
+yum clean all
+

On macOS, run the following commands:

brew install podman git
+podman machine init
+podman machine start
+

On Ubuntu and other Debian-based distributions, run the following commands:

apt-get -y install podman git
+

For other distributions, follow the applicable instructions to install either podman or docker on your Linux system.

Clone the current repository🔗

Using the command line🔗

You can clone the repository from the command line. If the repository requires authentication, Git will prompt you for a token when you run the git clone command.

Go to a directory where you want to download the Git repo.

git clone --depth=1 https://github.com/IBM/cloud-pak-deployer.git
+

Build the image🔗

First go to the directory where you cloned the GitHub repository, for example ~/cloud-pak-deployer.

cd cloud-pak-deployer
+

Then run the following command to build the container image.

./cp-deploy.sh build
+

This process will take 5-10 minutes to complete and installs all the prerequisites needed to run the automation, including Ansible, Python and the required operating system packages. For the build to work, the system on which the image is built must be connected to the internet.

Downloading the Cloud Pak Deployer Image from Registry🔗

To download the Cloud Pak Deployer image from the Quay.io registry, you can use the Docker command-line interface (CLI) or Podman.

podman pull quay.io/cloud-pak-deployer/cloud-pak-deployer
+

This command pulls the latest version of the Cloud Pak Deployer image from the Quay.io repository. Once downloaded, you can use this image to deploy Cloud Paks.
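
If you use Docker instead of Podman, the equivalent command is:

docker pull quay.io/cloud-pak-deployer/cloud-pak-deployer
+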

Tags and Versions🔗

By default, the above command pulls the latest version of the Cloud Pak Deployer image. If you want to specify a particular version or tag, you can append it to the image name. For example:

podman pull quay.io/cloud-pak-deployer/cloud-pak-deployer:<tag_or_version>
+

Replace <tag_or_version> with the specific tag or version you want to download.

\ No newline at end of file diff --git a/10-use-deployer/1-overview/overview/index.html b/10-use-deployer/1-overview/overview/index.html new file mode 100644 index 000000000..31a4752c7 --- /dev/null +++ b/10-use-deployer/1-overview/overview/index.html @@ -0,0 +1 @@ + Overview - Cloud Pak Deployer

Using Cloud Pak Deployer🔗

Running Cloud Pak Deployer🔗

There are two main steps you need to perform to provision an OpenShift cluster with the desired Cloud Pak(s):

  1. Install the Cloud Pak Deployer
  2. Run the Cloud Pak Deployer to create the cluster and install the Cloud Pak

What will I need?🔗

To complete the deployment, you will or may need the following. Details will be provided when you need them.

  • Your Cloud Pak entitlement key to pull images from the IBM Container Registry
  • IBM Cloud VPC: An IBM Cloud API key that allows you to provision infrastructure
  • vSphere: A vSphere user and password which has infrastructure create permissions
  • AWS ROSA: AWS IAM credentials (access key and secret access key), a ROSA login token and optionally a temporary security token
  • AWS Self-managed: AWS IAM credentials (access key and secret access key) and optionally a temporary security token
  • Azure: Azure service principal with the correct permissions
  • Existing OpenShift: Cluster admin login credentials of the OpenShift cluster

Executing commands on the OpenShift cluster🔗

The server on which you run the Cloud Pak Deployer may not have the necessary clients to interact with the cloud infrastructure, OpenShift, or the installed Cloud Pak. You can run commands using the same container image that runs the deployment of OpenShift and the Cloud Paks through the command line: Open a command line
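
For example, to open an interactive command line inside the deployer container (see the linked page for details):

./cp-deploy.sh env cmd
+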

Destroying your OpenShift cluster🔗

If you want to destroy the provisioned OpenShift cluster, including the installed Cloud Pak(s), you can do this through the Cloud Pak Deployer. Steps can be found here: Destroy the assets
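
As a sketch (refer to the linked page for the exact procedure and confirmation flags):

./cp-deploy.sh env destroy
+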

\ No newline at end of file diff --git a/10-use-deployer/3-run/aws-rosa/index.html b/10-use-deployer/3-run/aws-rosa/index.html new file mode 100644 index 000000000..3291c5c75 --- /dev/null +++ b/10-use-deployer/3-run/aws-rosa/index.html @@ -0,0 +1,41 @@ + AWS ROSA - Cloud Pak Deployer

Running the Cloud Pak Deployer on AWS (ROSA)🔗

On Amazon Web Services (AWS), OpenShift can be set up in various ways, managed by Red Hat (ROSA) or self-managed. The steps below are applicable to a ROSA (Red Hat OpenShift Service on AWS) installation. More information about ROSA can be found here: https://aws.amazon.com/rosa/

There are 5 main steps to run the deployer for AWS:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the ROSA cluster is pictured below: ROSA configuration

When deploying ROSA, an external host name and domain name are automatically generated by Amazon Web Services and both the API and Ingress servers can be resolved by external clients. At this stage, one cannot configure the domain name to be used.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For ROSA installations, copy one of the ocp-aws-rosa-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-aws-rosa-elastic.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Enable ROSA on AWS🔗

Before you can use ROSA on AWS, you must enable the ROSA service for your AWS account if this has not been done already; this is done from the ROSA page in the AWS console.

Obtain the AWS IAM credentials🔗

You will need an Access Key ID and Secret Access Key for the deployer to run rosa commands.
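
Once the credentials are set in your session, you can optionally verify that they are valid with the AWS CLI, assuming the aws client is installed:

aws sts get-caller-identity
+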

Alternative: Using temporary AWS security credentials (STS)🔗

If your account uses temporary security credentials for AWS resources, you must use the Access Key ID, Secret Access Key and Session Token associated with your temporary credentials.

For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html.

The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions for ROSA cluster can be found here: https://docs.openshift.com/rosa/rosa_planning/rosa-sts-aws-prereqs.html#rosa-sts-aws-prereqs.

An example on how to retrieve the temporary credentials for a user-defined role:

printf "\nexport AWS_ACCESS_KEY_ID=%s\nexport AWS_SECRET_ACCESS_KEY=%s\nexport AWS_SESSION_TOKEN=%s\n" $(aws sts assume-role \
+--role-arn arn:aws:iam::678256850452:role/ocp-sts-role \
+--role-session-name OCPInstall \
+--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
+--output text)
+

This would return something like the below, which you can then paste into the session running the deployer.

export AWS_ACCESS_KEY_ID=ASIxxxxxxAW
+export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ
+export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ
+

You must set infrastructure.use_sts to True in the openshift configuration if you need to use the temporary security credentials. Cloud Pak Deployer will then run the rosa create cluster command with the appropriate flag.
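
As an illustration, the relevant fragment of the openshift configuration could look like the sketch below; only the infrastructure.use_sts attribute is prescribed here, the surrounding attributes are placeholders based on the sample configurations:

openshift:
+- name: "{{ env_id }}"
+  infrastructure:
+    use_sts: True
+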

Obtain your ROSA login token🔗

To run rosa commands to manage the cluster, the deployer requires the ROSA login token.

  • Go to https://cloud.redhat.com/openshift/token/rosa
  • Login with your Red Hat user ID and password. If you don't have one yet, you need to create it.
  • Copy the offline access token presented on the screen and store it in a safe place.
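
If you have the rosa CLI installed on your workstation, you can optionally verify the token before running the deployer (the deployer itself performs this login for you):

rosa login --token="your_rosa_login_token"
+rosa whoami
+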

If ROSA is already installed🔗

This scenario is supported. To enable this feature, please ensure that you take the following steps:

  1. Include the environment ID ({{ env_id }}) in the infrastructure definition so that it matches the existing cluster
  2. Create "cluster-admin " password token using the following command:

    $ ./cp-deploy.sh vault set -vs={{env_id}}-cluster-admin-password=[YOUR PASSWORD]
    +

Without these changes, the deployer will fail and you will receive the following error message: "Failed to get the cluster-admin password from the vault".

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

export AWS_ACCESS_KEY_ID=your_access_key
+export AWS_SECRET_ACCESS_KEY=your_secret_access_key
+export ROSA_LOGIN_TOKEN="your_rosa_login_token"
+export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+

Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI.

export AWS_SESSION_TOKEN=your_session_token
+

  • AWS_ACCESS_KEY_ID: This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV
  • AWS_SECRET_ACCESS_KEY: The secret associated with your AWS Access Key, also retrieved above
  • AWS_SESSION_TOKEN: The session token that will grant temporary elevated permissions
  • ROSA_LOGIN_TOKEN: The offline access token that was retrieved before. This is a very long string (200+ characters). Make sure you enclose the string in single or double quotes as it may hold special characters
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string

Warning

If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In that case, you can issue new temporary credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key, aws-secret-access-key and aws-session-token, with the new values; they are re-retrieved by the deployer on a regular basis.
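
For example, to update one of these secrets with a newly issued value:

./cp-deploy.sh vault set -vs aws-session-token=your_new_session_token
+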

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.
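
For example, assuming the -e flag for extra variables described on the advanced configuration page, overriding env_id would look like this:

./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses
+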

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd:
+https://cpd-cpd.apps.pluto-01.pmxz.p1.openshiftapps.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- aws-access-key
+- aws-secret-access-key
+- ibm_cp_entitlement_key
+- rosa-login-token
+- pluto-01-cluster-admin-password
+- cp4d_admin_zen_40_pluto_01
+- all-config
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/aws-self-managed/index.html b/10-use-deployer/3-run/aws-self-managed/index.html new file mode 100644 index 000000000..5fc20e6b1 --- /dev/null +++ b/10-use-deployer/3-run/aws-self-managed/index.html @@ -0,0 +1,46 @@ + AWS Self-managed - Cloud Pak Deployer

Running the Cloud Pak Deployer on AWS (Self-managed)🔗

On Amazon Web Services (AWS), OpenShift can be set up in various ways, self-managed or managed by Red Hat (ROSA). The steps below are applicable to a self-managed OpenShift installation. The IPI (Installer Provisioned Infrastructure) installer will be used. More information about IPI installation can be found here: https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-customizations.html.

There are six main steps to run the deployer for AWS:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer
  6. Post-install configuration (Add GPU nodes)

See the deployer in action in this video: https://ibm.box.com/v/cpd-aws-self-managed

Topology🔗

A typical setup of the self-managed OpenShift cluster is pictured below: AWS self-managed OpenShift

Single-node OpenShift (SNO) on AWS🔗

Red Hat OpenShift also supports single-node deployments in which control plane and compute are combined into a single node. Obviously, this type of configuration does not cater for any high availability requirements that are usually part of a production installation, but it does offer a more cost-efficient option for development and testing purposes.

Cloud Pak Deployer can deploy a single-node OpenShift with elastic storage and a sample configuration is provided as part of the deployer.

Warning

When deploying the IBM Cloud Paks on single-node OpenShift, there may be intermittent timeouts as pods are starting up. In those cases, just re-run the deployer with the same configuration and check the status of the pods.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For self-managed OpenShift installations, copy one of the ocp-aws-self-managed-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-aws-self-managed-elastic.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Configure Route53 service on AWS🔗

When deploying self-managed OpenShift on Amazon Web Services, a public hosted zone must be created in the same account as your OpenShift cluster. The domain name or subdomain name registered in the Route53 service must be specified in the openshift configuration of the deployer.

For more information on acquiring or specifying a domain on AWS, you can refer to https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md.
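
For example, assuming you already own the domain example.com (a placeholder), a public hosted zone can be created with the AWS CLI as follows:

aws route53 create-hosted-zone \
+  --name example.com \
+  --caller-reference "$(date +%s)"
+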

Obtain the AWS IAM credentials🔗

If you can use your permanent security credentials for the AWS account, you will need an Access Key ID and Secret Access Key for the deployer to set up an OpenShift cluster on AWS.

Alternative: Using temporary AWS security credentials (STS)🔗

If your account uses temporary security credentials for AWS resources, you must use the Access Key ID, Secret Access Key and Session Token associated with your temporary credentials.

For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html.

The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions can be found here: https://docs.openshift.com/container-platform/4.10/authentication/managing_cloud_provider_credentials/cco-mode-sts.html#sts-mode-create-aws-resources-ccoctl.

An example of how to retrieve the temporary credentials for a user-defined role:

printf "\nexport AWS_ACCESS_KEY_ID=%s\nexport AWS_SECRET_ACCESS_KEY=%s\nexport AWS_SESSION_TOKEN=%s\n" $(aws sts assume-role \
+--role-arn arn:aws:iam::678256850452:role/ocp-sts-role \
+--role-session-name OCPInstall \
+--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
+--output text)
+

This would return something like the below, which you can then paste into the session running the deployer.

export AWS_ACCESS_KEY_ID=ASIxxxxxxAW
+export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ
+export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ
+

If the openshift configuration has the infrastructure.credentials_mode set to Manual, Cloud Pak Deployer will automatically configure and run the Cloud Credential Operator utility.
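
As an illustration, the relevant fragment of the openshift configuration could look like the sketch below; only the infrastructure.credentials_mode attribute is prescribed here, the surrounding attributes are placeholders:

openshift:
+- name: "{{ env_id }}"
+  infrastructure:
+    credentials_mode: Manual
+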

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement. You can download the pull secret from your Red Hat account at https://console.redhat.com/openshift/install/pull-secret and store it in a file, for example /tmp/ocp_pullsecret.json.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key credential is not in the vault.
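
If you need to generate a key pair yourself, a typical command is:

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
+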

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry

Set the environment variables for AWS self-managed OpenShift deployment🔗

export AWS_ACCESS_KEY_ID=your_access_key
+export AWS_SECRET_ACCESS_KEY=your_secret_access_key
+

Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI.

export AWS_SESSION_TOKEN=your_session_token
+

  • AWS_ACCESS_KEY_ID: This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV
  • AWS_SECRET_ACCESS_KEY: The secret associated with your AWS Access Key, also retrieved above
  • AWS_SESSION_TOKEN: The session token that will grant temporary elevated permissions

Warning

If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In that case, you can issue new temporary credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key, aws-secret-access-key and aws-session-token, with the new values; they are re-retrieved by the deployer on a regular basis.

Create the secrets needed for self-managed OpenShift cluster🔗

You need to store the below credentials in the vault so that the deployer has access to them when installing a self-managed OpenShift cluster on AWS.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
+    --vault-secret ocp-ssh-pub-key \
+    --vault-secret-file ~/.ssh/id_rsa.pub
+

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- aws-access-key
+- aws-secret-access-key
+- ocp-pullsecret
+- ocp-ssh-pub-key
+- ibm_cp_entitlement_key
+- pluto-01-cluster-admin-password
+- cp4d_admin_zen_40_pluto_01
+- all-config
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr
+

6. Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes:

  • Update the Cloud Pak for Data administrator password
  • Add GPU node(s) to your OpenShift cluster
\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-aro/index.html b/10-use-deployer/3-run/azure-aro/index.html new file mode 100644 index 000000000..4b00d7a9b --- /dev/null +++ b/10-use-deployer/3-run/azure-aro/index.html @@ -0,0 +1,49 @@ + Azure ARO - Cloud Pak Deployer

Running the Cloud Pak Deployer on Microsoft Azure - ARO🔗

On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to ARO (Azure Red Hat OpenShift) installations.

There are 5 main steps to run the deployer for Azure:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the ARO cluster is pictured below: ARO configuration

When deploying ARO, you can configure the domain name by setting the openshift.domain_name attribute. The resulting domain name is managed by Azure, and it must be unique across all ARO instances deployed in Azure. Both the API and Ingress URLs are set to be public in the template, so they can be resolved by external clients. If you want to use a custom domain and don't have one yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities, store temporary files, scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For ARO installations, copy one of the ocp-azure-aro*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-azure-aro.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Install the Azure CLI tool🔗

Install the Azure CLI tool for your operating system, then run the commands below.

Verify your quota and permissions in Microsoft Azure🔗

  • Check the Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster (see the quota check example below).
  • The ARO cluster is provisioned using the az command. You must have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. See details here.
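
For example, you can check the vCPU usage and quota of a location with the Azure CLI (westeurope is a placeholder):

az vm list-usage --location westeurope -o table
+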

Set environment variables for Azure🔗

export AZURE_RESOURCE_GROUP=pluto-01-rg
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_RESOURCE_GROUP: The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision.
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator.

Store Service Principal credentials🔗

You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal.

Example output in credentials file:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Store this file as /tmp/${AZURE_SP}-credentials.json.

Login as Service Principal🔗

Log in as the service principal:

az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+

Register Resource Providers🔗

Make sure the following Resource Providers are registered for your subscription by running:

az provider register -n Microsoft.RedHatOpenShift --wait
+az provider register -n Microsoft.Compute --wait
+az provider register -n Microsoft.Storage --wait
+az provider register -n Microsoft.Authorization --wait
+

Create the resource group🔗

First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file.

az group create \
+  --name ${AZURE_RESOURCE_GROUP} \
+  --location ${AZURE_LOCATION}
+

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

4. Set environment variables and secrets🔗

Create the secrets needed for ARO deployment🔗

You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+
+
+./cp-deploy.sh vault set \
+    --vault-secret ${AZURE_SP}-credentials \
+    --vault-secret-file /tmp/${AZURE_SP}-credentials.json
+

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- cp4d_admin_zen_sample_sample
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_sample_sample
+
PLAY [Secrets] *****************************************************************
+included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-self-managed/index.html b/10-use-deployer/3-run/azure-self-managed/index.html new file mode 100644 index 000000000..29a745f99 --- /dev/null +++ b/10-use-deployer/3-run/azure-self-managed/index.html @@ -0,0 +1,49 @@ + Azure Self-managed - Cloud Pak Deployer

Running the Cloud Pak Deployer on Microsoft Azure - Self-managed🔗

On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to self-managed Red Hat OpenShift installations.

There are 5 main steps to run the deployer for Azure:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the OpenShift cluster on Azure is pictured below: Self-managed configuration

When deploying self-managed OpenShift on Azure, you must configure the domain name by setting openshift.domain_name; this must be a public domain registered with a registrar. OpenShift will create a public DNS zone with additional entries to reach the OpenShift API and the applications (Cloud Paks). If you don't have a domain yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain.
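
As an illustration, the domain name is set in the openshift configuration; only the domain_name attribute is prescribed here, the other attributes and the domain are placeholders:

openshift:
+- name: "{{ env_id }}"
+  domain_name: example.com
+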

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities, store temporary files, scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For Azure self-managed installations, copy one of the ocp-azure-self-managed*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-azure-self-managed.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Install the Azure CLI tool🔗

Install the Azure CLI tool for your operating system, then run the commands below.

Verify your quota and permissions in Microsoft Azure🔗

  • Check the Azure resource quota of the subscription - a minimum of 40 cores is required to create and run an OpenShift cluster.
  • The self-managed cluster is provisioned using the IPI installer command. You must have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. See details here.

Set environment variables for Azure🔗

export AZURE_RESOURCE_GROUP=pluto-01-rg
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_RESOURCE_GROUP: The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision.
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator.

Store Service Principal credentials🔗

You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal.

Example output in credentials file:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Store this file as /tmp/${AZURE_SP}-credentials.json.

Login as Service Principal🔗

Log in as the service principal:

az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+

Create the resource group🔗

First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file.

az group create \
+  --name ${AZURE_RESOURCE_GROUP} \
+  --location ${AZURE_LOCATION}
+

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key credential is not in the vault.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry

Create the secrets needed for self-managed OpenShift cluster🔗

You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+
+
+./cp-deploy.sh vault set \
+    --vault-secret ${AZURE_SP}-credentials \
+    --vault-secret-file /tmp/${AZURE_SP}-credentials.json
+

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
+    --vault-secret ocp-ssh-pub-key \
+    --vault-secret-file ~/.ssh/id_rsa.pub
+

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-service-principal/index.html b/10-use-deployer/3-run/azure-service-principal/index.html new file mode 100644 index 000000000..22a355f06 --- /dev/null +++ b/10-use-deployer/3-run/azure-service-principal/index.html @@ -0,0 +1,52 @@ + Create an Azure Service Principal - Cloud Pak Deployer

Create an Azure Service Principal🔗

Login to Azure🔗

Log in to Microsoft Azure using your subscription-level credentials.

az login
+

If you have a subscription with multiple tenants, use:

az login --tenant <TENANT_ID>
+

Example:

az login --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXWFQQ5FJ to authenticate.
+[
+  {
+    "cloudName": "AzureCloud",
+    "homeTenantId": "869930ac-17ee-4dda-bbad-7354c3e7629c8",
+    "id": "72281667-6d54-46cb-8423-792d7bcb1234",
+    "isDefault": true,
+    "managedByTenants": [],
+    "name": "Azure Account",
+    "state": "Enabled",
+    "tenantId": "869930ac-17ee-4dda-bbad-7354c3e7629c8",
+    "user": {
+      "name": "your_user@domain.com",
+      "type": "user"
+    }
+  }
+]
+

Set subscription (optional)🔗

If you have multiple Azure subscriptions, specify the relevant subscription ID:

az account set --subscription <SUBSCRIPTION_ID>
+

You can list the subscriptions with the following command:

az account subscription list
+

[
+  {
+    "authorizationSource": "RoleBased",
+    "displayName": "IBM xxx",
+    "id": "/subscriptions/dcexxx",
+    "state": "Enabled",
+    "subscriptionId": "dcexxx",
+    "subscriptionPolicies": {
+      "locationPlacementId": "Public_2014-09-01",
+      "quotaId": "EnterpriseAgreement_2014-09-01",
+      "spendingLimit": "Off"
+    }
+  }
+]
+

Create service principal🔗

Create the service principal that will perform the installation and assign it the Contributor role.

Set environment variables for Azure🔗

export AZURE_SUBSCRIPTION_ID=72281667-6d54-46cb-8423-792d7bcb1234
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_SUBSCRIPTION_ID: The ID of your Azure subscription. Once logged in, you can retrieve this using the az account show command (see the example after this list).
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure.
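
For example, a one-liner to populate AZURE_SUBSCRIPTION_ID from the current login session (a sketch using the az CLI):

export AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
+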

Create the service principal🔗

az ad sp create-for-rbac \
+  --role Contributor \
+  --name ${AZURE_SP} \
+  --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID} | tee /tmp/${AZURE_SP}-credentials.json
+

Example output:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Set permissions for service principal🔗

Finally, set the permissions of the service principal to allow creation of the OpenShift cluster.

az role assignment create \
+  --role "User Access Administrator" \
+  --assignee-object-id $(az ad sp list --display-name=${AZURE_SP} --query='[].id' -o tsv)
+
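
To verify the roles assigned to the service principal, you can run a check like the following (a sketch, reusing the AZURE_SP variable set above):

az role assignment list \
+  --all \
+  --assignee $(az ad sp list --display-name=${AZURE_SP} --query='[].id' -o tsv) \
+  -o table
+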

\ No newline at end of file diff --git a/10-use-deployer/3-run/existing-openshift-console/index.html b/10-use-deployer/3-run/existing-openshift-console/index.html new file mode 100644 index 000000000..73f7a9821 --- /dev/null +++ b/10-use-deployer/3-run/existing-openshift-console/index.html @@ -0,0 +1,500 @@ + Existing OpenShift using Console - Cloud Pak Deployer

Running deployer on OpenShift using console🔗

See the deployer in action deploying IBM watsonx.ai on an existing OpenShift cluster in this video: https://ibm.box.com/v/cpd-wxai-existing-ocp

Log in to the OpenShift cluster🔗

Log in as a cluster administrator to be able to run the deployer with the correct permissions.

Prepare the deployer Project🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    ---
    +apiVersion: v1
    +kind: Namespace
    +metadata:
    +  creationTimestamp: null
    +  name: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: ServiceAccount
    +metadata:
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: RoleBinding
    +metadata:
    +  name: system:openshift:scc:privileged
    +  namespace: cloud-pak-deployer
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: system:openshift:scc:privileged
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: ClusterRoleBinding
    +metadata:
    +  name: cloud-pak-deployer-cluster-admin
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: cluster-admin
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +
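
To confirm that the project and service account were created, you can run a quick check from a terminal (assuming the oc client is logged in to the same cluster):

oc get sa cloud-pak-deployer-sa -n cloud-pak-deployer
+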

Set the entitlement key🔗

  • Update the secret below with your container software Entitlement key from https://myibm.ibm.com/products-services/containerlibrary. Make sure the key is indented exactly as below.
  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block, replacing YOUR_ENTITLEMENT_KEY on line 10 with your entitlement key
    ---
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: cloud-pak-entitlement-key
    +  namespace: cloud-pak-deployer
    +type: Opaque
    +stringData:
    +  cp-entitlement-key: |
    +    YOUR_ENTITLEMENT_KEY
    +
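
To verify that the secret holds your entitlement key, you can decode it from a terminal (a sketch, assuming the oc client):

oc get secret cloud-pak-entitlement-key -n cloud-pak-deployer \
+  -o jsonpath='{.data.cp-entitlement-key}' | base64 -d
+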

Configure the Cloud Paks and services to be deployed🔗

  • Update the configuration below to match what you want to deploy; do not change the indentation
  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    ---
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: cloud-pak-deployer-config
    +  namespace: cloud-pak-deployer
    +data:
    +  cpd-config.yaml: |
    +    global_config:
    +      environment_name: demo
    +      cloud_platform: existing-ocp
    +      confirm_destroy: False
    +
    +    openshift:
    +    - name: cpd-demo
    +      ocp_version: "4.15"
    +      cluster_name: cpd-demo
    +      domain_name: example.com
    +      mcg:
    +        install: False
    +        storage_type: storage-class
    +        storage_class: managed-nfs-storage
    +      gpu:
    +        install: auto
    +      openshift_ai:
    +        install: auto
    +        channel: auto
    +      openshift_storage:
    +      - storage_name: auto-storage
    +        storage_type: auto
    +
    +    cp4d:
    +    - project: cpd
    +      openshift_cluster_name: cpd-demo
    +      cp4d_version: 5.0.3
    +      db2u_limited_privileges: False
    +      use_fs_iam: True
    +      accept_licenses: True
    +      cartridges:
    +      - name: cp-foundation
    +        license_service:
    +          state: disabled
    +          threads_per_core: 2
    +      
    +      - name: lite
    +
    +      - name: scheduler 
    +        state: removed
    +        
    +      - name: analyticsengine 
    +        description: Analytics Engine Powered by Apache Spark 
    +        size: small 
    +        state: removed
    +
    +      - name: bigsql
    +        description: Db2 Big SQL
    +        state: removed
    +
    +      - name: ca
    +        description: Cognos Analytics
    +        size: small
    +        instances:
    +        - name: ca-instance
    +          metastore_ref: ca-metastore
    +        state: removed
    +
    +      - name: dashboard
    +        description: Cognos Dashboards
    +        state: removed
    +
    +      - name: datagate
    +        description: Db2 Data Gate
    +        state: removed
    +
    +      - name: dataproduct
    +        description: Data Product Hub
    +        state: removed
    +        
    +      - name: datastage-ent
    +        description: DataStage Enterprise
    +        state: removed
    +
    +      - name: datastage-ent-plus
    +        description: DataStage Enterprise Plus
    +        state: removed
    +
    +        # The default instance is created automatically with the DataStage installation. If you want to create additional instances
    +        # uncomment the section below and specify the various scaling options.
    +
    +        # instances:
    +        #   - name: ds-instance
    +        #     # Optional settings
    +        #     description: "datastage ds-instance"
    +        #     size: medium
    +        #     storage_class: efs-nfs-client
    +        #     storage_size_gb: 60
    +        #     # Custom Scale options
    +        #     scale_px_runtime:
    +        #       replicas: 2
    +        #       cpu_request: 500m
    +        #       cpu_limit: 2
    +        #       memory_request: 2Gi
    +        #       memory_limit: 4Gi
    +        #     scale_px_compute:
    +        #       replicas: 2
    +        #       cpu_request: 1
    +        #       cpu_limit: 3
    +        #       memory_request: 4Gi
    +        #       memory_limit: 12Gi    
    +
    +      - name: db2
    +        description: Db2 OLTP
    +        size: small
    +        instances:
    +        - name: ca-metastore
    +          metadata_size_gb: 20
    +          data_size_gb: 20
    +          backup_size_gb: 20  
    +          transactionlog_size_gb: 20
    +        state: removed
    +
    +      - name: db2wh
    +        description: Db2 Warehouse
    +        state: removed
    +
    +      - name: dmc
    +        description: Db2 Data Management Console
    +        state: removed
    +        instances:
    +        - name: data-management-console
    +          description: Data Management Console
    +          size: medium
    +          storage_size_gb: 50
    +
    +      - name: dods
    +        description: Decision Optimization
    +        size: small
    +        state: removed
    +
    +      - name: dp
    +        description: Data Privacy
    +        size: small
    +        state: removed
    +
    +      - name: dpra
    +        description: Data Privacy Risk Assessment
    +        state: removed
    +
    +      - name: dv
    +        description: Data Virtualization
    +        size: small 
    +        instances:
    +        - name: data-virtualization
    +        state: removed
    +
    +      # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault
    +      # before deploying
    +      - name: edb_cp4d
    +        description: EDB Postgres
    +        state: removed
    +        instances:
    +        - name: instance1
    +          version: "15.4"
    +          #type: Standard
    +          #members: 1
    +          #size_gb: 50
    +          #resource_request_cpu: 1
    +          #resource_request_memory: 4Gi
    +          #resource_limit_cpu: 1
    +          #resource_limit_memory: 4Gi
    +
    +      - name: factsheet
    +        description: AI Factsheets
    +        size: small
    +        state: removed
    +
    +      - name: hee
    +        description: Execution Engine for Apache Hadoop
    +        size: small
    +        state: removed
    +
    +      - name: mantaflow
    +        description: MANTA Automated Lineage
    +        size: small
    +        state: removed
    +
    +      - name: match360
    +        description: IBM Match 360
    +        size: small
    +        wkc_enabled: true
    +        state: removed
    +
    +      - name: openpages
    +        description: OpenPages
    +        state: removed
    +
    +      # For Planning Analytics, the case version is needed due to defect in olm utils
    +      - name: planning-analytics
    +        description: Planning Analytics
    +        state: removed
    +
    +      - name: replication
    +        description: Data Replication
    +        license: IDRC
    +        size: small
    +        state: removed
    +
    +      - name: rstudio
    +        description: RStudio Server with R 3.6
    +        size: small
    +        state: removed
    +
    +      - name: spss
    +        description: SPSS Modeler
    +        state: removed
    +
    +      - name: syntheticdata
    +        description: Synthetic Data Generator
    +        state: removed
    +
    +      - name: voice-gateway
    +        description: Voice Gateway
    +        replicas: 1
    +        state: removed
    +
    +      - name: watson-assistant
    +        description: Watson Assistant
    +        size: small
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +        instances:
    +        - name: wa-instance
    +          description: "Watson Assistant instance"
    +
    +      - name: watson-discovery
    +        description: Watson Discovery
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +        instances:
    +        - name: wd-instance
    +          description: "Watson Discovery instance"
    +
    +      - name: watson-openscale
    +        description: Watson OpenScale
    +        size: small
    +        state: removed
    +
    +      - name: watson-speech
    +        description: Watson Speech (STT and TTS)
    +        stt_size: xsmall
    +        tts_size: xsmall
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +
    +      # Please note that for watsonx.ai, the following pre-requisites exist:
    +      # If you want to use foundation models, you need to install the Node Feature Discovery and NVIDIA GPU operators.
    +      #    You can do so by setting the openshift.gpu.install property to auto
    +      # OpenShift AI is a requirement for watsonx.ai. You can install this by setting the openshift.openshift_ai.install property to auto
    +      - name: watsonx_ai
    +        description: watsonx.ai
    +        state: removed
    +        installation_options:
    +          tuning_disabled: true
    +        models:
    +        - model_id: allam-1-13b-instruct
    +          state: removed
    +        - model_id: codellama-codellama-34b-instruct-hf
    +          state: removed
    +        - model_id: elyza-japanese-llama-2-7b-instruct
    +          state: removed
    +        - model_id: google-flan-ul2
    +          state: removed
    +        - model_id: google-flan-t5-xl
    +          state: removed
    +        - model_id: google-flan-t5-xxl
    +          state: removed
    +        - model_id: eleutherai-gpt-neox-20b
    +          state: removed
    +        - model_id: ibm-granite-8b-japanese
    +          state: removed
    +        - model_id: ibm-granite-13b-chat-v1
    +          state: removed
    +        - model_id: ibm-granite-13b-chat-v2
    +          state: removed
    +        - model_id: ibm-granite-13b-instruct-v1
    +          state: removed
    +        - model_id: ibm-granite-13b-instruct-v2
    +          state: removed
    +        - model_id: ibm-granite-20b-multilingual
    +          state: removed
    +        - model_id: core42-jais-13b-chat
    +          state: removed
    +        - model_id: meta-llama-llama-2-13b-chat
    +          state: removed
    +        - model_id: meta-llama-llama-3-8b-instruct
    +          state: removed
    +        - model_id: meta-llama-llama-2-70b-chat
    +          state: removed
    +        - model_id: mncai-llama-2-13b-dpo-v7
    +          state: removed
    +        - model_id: ibm-mistralai-merlinite-7b
    +          state: removed
    +        - model_id: ibm-mpt-7b-instruct2
    +          state: removed
    +        - model_id: mistralai-mixtral-8x7b-instruct-v01
    +          state: removed
    +        - model_id: ibm-mistralai-mixtral-8x7b-instruct-v01-q
    +          state: removed
    +        - model_id: bigscience-mt0-xxl
    +          state: removed
    +        - model_id: bigcode-starcoder
    +          state: removed
    +
    +      - name: watsonx_data
    +        description: watsonx.data
    +        state: removed
    +
    +      - name: watsonx_governance
    +        description: watsonx.governance
    +        state: removed
    +        installation_options:
    +          installType: all
    +          enableFactsheet: true
    +          enableOpenpages: true
    +          enableOpenscale: true
    +
    +      - name: watsonx_orchestrate
    +        description: watsonx.orchestrate
    +        app_connect:
    +          app_connect_project: ibm-app-connect
    +          app_connect_case_version: 11.5.0
    +          app_connect_channel_version: v11.5
    +        state: removed
    +
    +      - name: wca-ansible
    +        description: watsonx Code Assistant for Red Hat Ansible Lightspeed
    +        state: removed
    +
    +      - name: wca-z
    +        description: watsonx Code Assistant for Z
    +        state: removed
    +
    +      # For the IBM Knowledge Catalog, you can specify 3 editions: wkx, ikc_premium, or ikc_standard
    +      # Choose the correct IBM Knowledge Catalog edition below
    +      - name: wkc
    +        description: IBM Knowledge Catalog
    +        size: small
    +        state: removed
    +        installation_options:
    +          enableKnowledgeGraph: False
    +          enableDataQuality: False
    +
    +      - name: ikc_premium
    +        description: IBM Knowledge Catalog - Premium edition
    +        size: small
    +        state: removed
    +        installation_options:
    +          enableKnowledgeGraph: False
    +          enableDataQuality: False
    +
    +      - name: ikc_standard
    +        description: IBM Knowledge Catalog - Standard edition
    +        size: small
    +        state: removed
    +        installation_options:
    +          enableKnowledgeGraph: False
    +          enableDataQuality: False
    +
    +      - name: wml
    +        description: Watson Machine Learning
    +        size: small
    +        state: installed
    +
    +      - name: wml-accelerator
    +        description: Watson Machine Learning Accelerator
    +        replicas: 1
    +        size: small
    +        state: removed
    +
    +      - name: ws
    +        description: Watson Studio
    +        state: installed
    +
    +      - name: ws-pipelines
    +        description: Watson Studio Pipelines
    +        state: removed
    +
    +      - name: ws-runtimes
    +        description: Watson Studio Runtimes
    +        runtimes:
    +        - ibm-cpd-ws-runtime-241-py
    +        - ibm-cpd-ws-runtime-231-py
    +        - ibm-cpd-ws-runtime-241-pygpu
    +        - ibm-cpd-ws-runtime-231-pygpu
    +        - ibm-cpd-ws-runtime-241-r
    +        - ibm-cpd-ws-runtime-231-r
    +        state: removed 
    +
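
If you keep the configuration in a local file instead of pasting it into the console, a sketch of an equivalent oc command (assuming the YAML data above is saved as cpd-config.yaml) is:

oc create configmap cloud-pak-deployer-config \
+  --namespace cloud-pak-deployer \
+  --from-file=cpd-config.yaml=./cpd-config.yaml
+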

Start the deployer🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block into the window. You can update the image on line 11; the same value will be used as the image for the deployer Job (from release v3.0.2 onwards).
    apiVersion: v1
    +kind: Pod
    +metadata:
    +  labels:
    +    app: cloud-pak-deployer-start
    +  generateName: cloud-pak-deployer-start-
    +  namespace: cloud-pak-deployer
    +spec:
    +  containers:
    +  - name: cloud-pak-deployer
    +    image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest
    +    imagePullPolicy: Always
    +    terminationMessagePath: /dev/termination-log
    +    terminationMessagePolicy: File
    +    command: ["/bin/sh","-xc"]
    +    args: 
    +      - /cloud-pak-deployer/scripts/deployer/cpd-start-deployer.sh
    +  restartPolicy: Never
    +  securityContext:
    +    runAsUser: 0
    +  serviceAccountName: cloud-pak-deployer-sa
    +

Follow the logs of the deployment🔗

  • Open the OpenShift console
  • Go to Workloads → Pods
  • Select cloud-pak-deployer as the project at the top of the page
  • Click the deployer Pod
  • Click Logs tab
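
Alternatively, you can follow the logs from a terminal (a sketch, using the app label set on the starter pod above):

oc logs -f -n cloud-pak-deployer -l app=cloud-pak-deployer-start
+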

Info

When the deployer installs Cloud Pak for Data, the first run will fail. This is because the deployer applies the node configuration to OpenShift, which causes all nodes to restart one by one, including the node that runs the deployer. Because the deployer runs as a Job, a new deployer pod will automatically start and resume from where it was stopped.

Re-run deployer when failed or if you want to update the configuration🔗

If the deployer has failed or if you want to make changes to the configuration after a successful run, you can do the following:

  • Open the OpenShift console
  • Go to Workloads → Jobs
  • Check the logs of the cloud-pak-deployer job
  • If needed, make changes to the cloud-pak-deployer-config Config Map by going to Workloads → ConfigMaps
  • Re-run the deployer
\ No newline at end of file diff --git a/10-use-deployer/3-run/existing-openshift/index.html b/10-use-deployer/3-run/existing-openshift/index.html new file mode 100644 index 000000000..4308b916c --- /dev/null +++ b/10-use-deployer/3-run/existing-openshift/index.html @@ -0,0 +1,45 @@ + Existing OpenShift - Cloud Pak Deployer

Running the Cloud Pak Deployer on an existing OpenShift cluster🔗

When running the Cloud Pak Deployer on an existing OpenShift cluster, the following is assumed:

  • The OpenShift cluster is up and running with sufficient compute nodes
  • The appropriate storage class(es) have been pre-created
  • You have cluster administrator permissions to OpenShift (except when using --dry-run)

Info

If you don't want to make changes to the OpenShift cluster and only want to review the steps the deployer will run, you can use the --dry-run option with cp-deploy.sh. This will generate a log file $STATUS_DIR/log/deployer-activities.log, which lists the steps the deployer would execute when run without --dry-run. Please note that the dry-run option has only been implemented for Cloud Pak for Data (i.e., watsonx).
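
For example, a dry run that only produces the log of planned activities could look like this:

./cp-deploy.sh env apply --dry-run
+cat $STATUS_DIR/log/deployer-activities.log
+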

Info

You can also choose to run Cloud Pak Deployer as a job on the OpenShift cluster. This removes the dependency on a separate server or workstation to run the deployer. Please note that you may need unrestricted OpenShift entitlements for this. To run the deployer on OpenShift via the OpenShift console, see Run on OpenShift using console

With the Existing OpenShift type of deployment you can install and configure the Cloud Pak(s) on both connected and disconnected (air-gapped) clusters. When using the deployer for a disconnected cluster, make sure you specify --air-gapped for the cp-deploy.sh command.
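
For example, a run against a disconnected cluster would add the flag like this:

./cp-deploy.sh env apply --air-gapped --accept-all-licenses
+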

There are 5 main steps to run the deployer for existing OpenShift:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For existing OpenShift installations, copy one of the ocp-existing-ocp-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-existing-ocp-auto.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

No steps should be required to prepare the infrastructure; this type of installation expects the OpenShift cluster to be up and running with the supported storage classes.

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.

Store the OpenShift login command or configuration🔗

Because you will be deploying the Cloud Pak on an existing OpenShift cluster, the deployer needs to be able to access OpenShift. There are three methods for passing the login credentials of your OpenShift cluster(s) to the deployer process:

  1. Generic oc login command (preferred)
  2. Specific oc login command(s)
  3. kubeconfig file

Regardless of which authentication option you choose, the deployer will retrieve the secret from the vault when it requires access to OpenShift. If the secret cannot be found or if it is invalid or the OpenShift login token has expired, the deployer will fail and you will need to update the secret of your choice.

For most OpenShift installations, you can retrieve the oc login command with a temporary token from the OpenShift console. Go to the OpenShift console and click on your user at the top right of the page to get the login command. Typically this command looks something like this: oc login --server=https://api.pluto-01.coc.ibm.com:6443 --token=sha256~NQUUMroU4B6q_GTBAMS18Y3EIba1KHnJ08L2rBHvTHA

Before passing the oc login command or the kubeconfig file, make sure you can log in to your cluster using the command or the config file. If the cluster's API server has a self-signed certificate, make sure you specify the --insecure-skip-tls-verify flag for the oc login command.

Example:

oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify
+

Output:

Login successful.
+
+You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+Using project "default".
+

Option 1 - Generic oc login command🔗

This is the most straightforward option if you only have 1 OpenShift cluster in your configuration.

Set the environment variable for the oc login command

export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
+

Info

Put the oc login command between quotes (single or double) to ensure the full command is stored.

When the deployer is run, it automatically sets the oc-login vault secret to the specified oc login command. When logging in to OpenShift, the deployer first checks if there is a specific oc login secret for the cluster in question (see option 2). If there is not, it will default to the generic oc-login secret (option 1).

Option 2 - Specific oc login command(s)🔗

Use this option if you have multiple OpenShift clusters configured in the deployer configuration.

Store the login command in secret <cluster name>-oc-login

./cp-deploy.sh vault set \
+  -vs pluto-01-oc-login \
+  -vsv "oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
+

Info

Put the oc login command between quotes (single or double) to ensure the full command is stored.

Option 3 - Use a kubeconfig file🔗

If you already have a "kubeconfig" file that holds the credentials of your cluster, you can use this. Otherwise:

  • Log in to OpenShift as a cluster administrator using your method of choice
  • Locate the Kubernetes config file. If you have logged in with the OpenShift client, this is typically ~/.kube/config

If you did not just log in to the cluster, the current context of the kubeconfig file may not point to your cluster. The deployer will check that the server the current context points to matches the cluster_name and domain_name of the configured openshift object. To check the current context, run the following command:

oc config current-context
+
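
To see which API server the current context points to, you can also print it directly (a sketch, assuming the oc client):

oc config view --minify -o jsonpath='{.clusters[0].cluster.server}'
+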

Now, store the Kubernetes config file as a vault secret.

./cp-deploy.sh vault set \
+    --vault-secret kubeconfig \
+    --vault-secret-file ~/.kube/config
+

If the deployer manages multiple OpenShift clusters, you can specify a kubeconfig file for each of the clusters by prefixing the kubeconfig with the name of the openshift object, for example:

./cp-deploy.sh vault set \
+    --vault-secret pluto-01-kubeconfig \
+    --vault-secret-file /data/pluto-01/kubeconfig
+
+./cp-deploy.sh vault set \
+    --vault-secret venus-02-kubeconfig \
+    --vault-secret-file /data/venus-02/kubeconfig
+
When connecting to the OpenShift cluster, a cluster-specific kubeconfig vault secret will take precedence over the generic kubeconfig secret.

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary and state files are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment; it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or on the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process is designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- oc-login
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/fusion-hci/index.html b/10-use-deployer/3-run/fusion-hci/index.html new file mode 100644 index 000000000..79918dda8 --- /dev/null +++ b/10-use-deployer/3-run/fusion-hci/index.html @@ -0,0 +1,44 @@ + Spectrum HCI - Cloud Pak Deployer

Running the Cloud Pak Deployer on Fusion HCI (Spectrum Scale)🔗

This guide provides detailed instructions for running Cloud Pak Deployer (CPD) on Fusion HCI with Spectrum Scale as the storage backend.

Fusion HCI (Hyper-Converged Infrastructure) is an IBM offering that combines compute, storage, and networking resources into a single, pre-configured system. It simplifies IT infrastructure management and is optimized for deploying containerized applications. Notably, Fusion HCI integrates OpenShift, a leading container orchestration platform, for streamlined development and deployment. The stack is simple:

  • Bare Metal
  • Red Hat CoreOS
  • OpenShift

Fusion HCI architecture

Info

This guide details the process of installing CPD on Fusion HCI using Spectrum Scale as the underlying storage solution. Fusion HCI also offers the option to deploy ODF, but that is not covered here; follow the classic OCP+ODF guides instead. IBM Storage Fusion HCI System uses the ibm-storage-fusion-cp-sc storage class, which is created by default on this environment.
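
To confirm the storage class is present before running the deployer, you can check it from a terminal (assuming the oc client is logged in to the Fusion HCI cluster):

oc get sc ibm-storage-fusion-cp-sc
+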

There are 5 main steps to run the deployer for Fusion HCI:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find sample configuration YAML files for OpenShift and the Cloud Paks here: sample configuration. To set up Fusion HCI, copy the fusion-hci.yaml file into the $CONFIG_DIR/config directory. If you also want to install watsonx, copy one of the watsonx-*.yaml files as well.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/fusion-hci.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/watsonx-480.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

No steps should be required to prepare the infrastructure; this type of installation expects the OpenShift cluster to be up and running with the supported storage classes.

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.

Store the OpenShift login command or configuration🔗

Because you will be deploying the Cloud Pak on an existing OpenShift cluster, the deployer needs to be able to access OpenShift. There are three methods for passing the login credentials of your OpenShift cluster(s) to the deployer process:

  1. Generic oc login command (preferred)
  2. Specific oc login command(s)
  3. kubeconfig file

Regardless of which authentication option you choose, the deployer will retrieve the secret from the vault when it requires access to OpenShift. If the secret cannot be found or if it is invalid or the OpenShift login token has expired, the deployer will fail and you will need to update the secret of your choice.

For most OpenShift installations, you can retrieve the oc login command with a temporary token from the OpenShift console. Go to the OpenShift console and click on your user at the top right of the page to get the login command. Typically this command looks something like this: oc login --server=https://api.pluto-01.coc.ibm.com:6443 --token=sha256~NQUUMroU4B6q_GTBAMS18Y3EIba1KHnJ08L2rBHvTHA

Before passing the oc login command or the kubeconfig file, make sure you can log in to your cluster using the command or the config file. If the cluster's API server has a self-signed certificate, make sure you specify the --insecure-skip-tls-verify flag for the oc login command.

Example:

oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify
+

Output:

Login successful.
+
+You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+Using project "default".
+

Option 1 - Generic oc login command🔗

This is the most straightforward option if you only have 1 OpenShift cluster in your configuration.

Set the environment variable for the oc login command

export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
+

Info

Put the oc login command between quotes (single or double) to ensure the full command is stored.

When the deployer is run, it automatically sets the oc-login vault secret to the specified oc login command. When logging in to OpenShift, the deployer first checks if there is a specific oc login secret for the cluster in question (see option 2). If there is not, it will default to the generic oc-login secret (option 1).

Option 2 - Specific oc login command(s)🔗

Use this option if you have multiple OpenShift clusters configured in the deployer configuration.

Store the login command in secret <cluster name>-oc-login

./cp-deploy.sh vault set \
+  -vs pluto-01-oc-login \
+  -vsv "oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
+

Info

Put the oc login command between quotes (single or double) to ensure the full command is stored.

Option 3 - Use a kubeconfig file🔗

If you already have a "kubeconfig" file that holds the credentials of your cluster, you can use this. Otherwise:

  • Log in to OpenShift as a cluster administrator using your method of choice
  • Locate the Kubernetes config file. If you have logged in with the OpenShift client, this is typically ~/.kube/config

If you did not just log in to the cluster, the current context of the kubeconfig file may not point to your cluster. The deployer will check that the server the current context points to matches the cluster_name and domain_name of the configured openshift object. To check the current context, run the following command:

oc config current-context
+

Now, store the Kubernetes config file as a vault secret.

./cp-deploy.sh vault set \
+    --vault-secret kubeconfig \
+    --vault-secret-file ~/.kube/config
+

If the deployer manages multiple OpenShift clusters, you can specify a kubeconfig file for each of the clusters by prefixing the kubeconfig with the name of the openshift object, for example:

./cp-deploy.sh vault set \
+    --vault-secret pluto-01-kubeconfig \
+    --vault-secret-file /data/pluto-01/kubeconfig
+
+./cp-deploy.sh vault set \
+    --vault-secret venus-02-kubeconfig \
+    --vault-secret-file /data/venus-02/kubeconfig
+
When connecting to the OpenShift cluster, a cluster-specific kubeconfig vault secret will take precedence over the generic kubeconfig secret.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary and state files are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment; it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or on the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process is designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- oc-login
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/ibm-cloud/index.html b/10-use-deployer/3-run/ibm-cloud/index.html new file mode 100644 index 000000000..d13a1428d --- /dev/null +++ b/10-use-deployer/3-run/ibm-cloud/index.html @@ -0,0 +1,27 @@ + IBM Cloud - Cloud Pak Deployer

Running the Cloud Pak Deployer on IBM Cloud🔗

You can use Cloud Pak Deployer to create a ROKS (Red Hat OpenShift Kubernetes Service) cluster on IBM Cloud.

There are 5 main steps to run the deployer for IBM Cloud:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

See the deployer in action in this video: https://ibm.box.com/v/cpd-ibm-cloud-roks

Topology🔗

A typical setup of the ROKS cluster on IBM Cloud VPC is pictured below: ROKS configuration

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For IBM Cloud installations, copy one of the ocp-ibm-cloud-roks*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Create an IBM Cloud API Key🔗

In order for the Cloud Pak Deployer to create the infrastructure and deploy IBM Cloud Pak for Data, it must perform tasks on IBM Cloud. To do so, it requires an IBM Cloud API key. This can be created by following these steps:

  • Go to https://cloud.ibm.com/iam/apikeys and login with your IBMid credentials
  • Ensure you have selected the correct IBM Cloud Account for which you wish to use the Cloud Pak Deployer
  • Click Create an IBM Cloud API Key and provide a name and description
  • Copy the IBM Cloud API key using the Copy button and store it in a safe place, as you will not be able to retrieve it later
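
Alternatively, if you have the ibmcloud CLI installed, a sketch of creating the API key from the command line (the key name cpd-deployer-key is just an example):

ibmcloud login
+ibmcloud iam api-key-create cpd-deployer-key -d "API key for Cloud Pak Deployer" --file cpd-deployer-key.json
+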

Warning

You can choose to download the API key for later reference. However, when we reference the API key, we mean the IBM Cloud API key as a 40+ character string.

Set environment variables for IBM Cloud🔗

Set the environment variables specific to IBM Cloud deployments.

export IBM_CLOUD_API_KEY=your_api_key
+

  • IBM_CLOUD_API_KEY: This is the API key you generated using your IBM Cloud account; it is a 40+ character string.

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and creating a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary and state files are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment; it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or on the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process is designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- sample-terraform-tfstate
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to make here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/images/aws-rosa-ocs.png b/10-use-deployer/3-run/images/aws-rosa-ocs.png new file mode 100644 index 000000000..37f2cb217 Binary files /dev/null and b/10-use-deployer/3-run/images/aws-rosa-ocs.png differ diff --git a/10-use-deployer/3-run/images/aws-self-managed-ocs.png b/10-use-deployer/3-run/images/aws-self-managed-ocs.png new file mode 100644 index 000000000..37f2cb217 Binary files /dev/null and b/10-use-deployer/3-run/images/aws-self-managed-ocs.png differ diff --git a/10-use-deployer/3-run/images/azure-aro.png b/10-use-deployer/3-run/images/azure-aro.png new file mode 100644 index 000000000..8d4212c70 Binary files /dev/null and b/10-use-deployer/3-run/images/azure-aro.png differ diff --git a/10-use-deployer/3-run/images/ibm-roks-ocs.png b/10-use-deployer/3-run/images/ibm-roks-ocs.png new file mode 100644 index 000000000..42e568754 Binary files /dev/null and b/10-use-deployer/3-run/images/ibm-roks-ocs.png differ diff --git a/10-use-deployer/3-run/images/spectrum-hci-architecture.png b/10-use-deployer/3-run/images/spectrum-hci-architecture.png new file mode 100644 index 000000000..48ef9558a Binary files /dev/null and b/10-use-deployer/3-run/images/spectrum-hci-architecture.png differ diff --git a/10-use-deployer/3-run/images/vsphere-ocs-nfs.png b/10-use-deployer/3-run/images/vsphere-ocs-nfs.png new file mode 100644 index 000000000..b99c4bfcf Binary files /dev/null and b/10-use-deployer/3-run/images/vsphere-ocs-nfs.png differ diff --git a/10-use-deployer/3-run/run/index.html b/10-use-deployer/3-run/run/index.html new file mode 100644 index 000000000..1623e472c --- /dev/null +++ b/10-use-deployer/3-run/run/index.html @@ -0,0 +1 @@ + Running Cloud Pak Deployer - Cloud Pak Deployer
\ No newline at end of file diff --git a/10-use-deployer/3-run/vsphere/index.html b/10-use-deployer/3-run/vsphere/index.html new file mode 100644 index 000000000..e9df9629b --- /dev/null +++ b/10-use-deployer/3-run/vsphere/index.html @@ -0,0 +1,36 @@ + vSphere - Cloud Pak Deployer

Running the Cloud Pak Deployer on vSphere🔗

You can use the Cloud Pak Deployer to create an OpenShift cluster on VMware infrastructure.

There are 5 main steps to run the deployer for vSphere:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the vSphere cluster with OpenShift is pictured below: vSphere configuration

When deploying OpenShift and the Cloud Pak(s) on VMware vSphere, there is a dependency on a DHCP server for issuing IP addresses to the newly configured cluster nodes. Also, once the OpenShift cluster has been installed, valid fully qualified host names are required to connect to the OpenShift API server at port 6443 and to applications running behind the ingress server at port 443. The Cloud Pak Deployer cannot set up a DHCP server or a DNS server, so to be able to connect to OpenShift or to reach the Cloud Pak after installation, the name entries must be set up beforehand.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For vSphere installations, copy one of the ocp-vsphere-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-vsphere-ocs-nfs.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and logs files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Pre-requisites for vSphere🔗

To successfully install OpenShift on vSphere infrastructure, the following pre-requisites must be met.

Pre-requisite Description
Red Hat pull secret A pull secret is required to download and install OpenShift. See Acquire pull secret
IBM Entitlement key When installing an IBM Cloud Pak, you need an IBM entitlement key. See Acquire IBM Cloud Pak entitlement key
vSphere credentials The OpenShift IPI installer requires vSphere credentials to create VMs and storage
Firewall rules The OpenShift cluster's API server on port 6443 and application server on port 443 must be reachable.
Whitelisted URLs The OpenShift and Cloud Pak download locations and registry must be accessible from the vSphere infrastructure. See Whitelisted locations
DHCP When provisioning new VMs, IP addresses must be automatically assigned through DHCP
DNS A DNS server that will resolve the OpenShift API server and applications is required. See DNS configuration
Time server A time server to synchronize the time must be available in the network and configured through the DHCP server

There are also some optional settings, dependent on the specifics of the installation:

Pre-requisite Description
Bastion server It can be useful to have a bastion/installation server to run the deployer. This (virtual) server must reside within the vSphere network
NFS details If an NFS server is used for storage, it must be reachable (firewall) and no_root_squash must be set
Private registry If the installation must use a private registry for the Cloud Pak installation, it must be available and credentials shared
Certificates If the Cloud Pak URL must have a CA-signed certificate, the key, certificate and CA bundle must be available at installation time
Load balancer The OpenShift IPI install creates 2 VIPs and takes care of the routing to the services. In some implementations, a load balancer provided by the infrastructure team is preferred. This load balancer must be configured externally

DNS configuration🔗

During the provisioning and configuration process, the deployer needs access to the OpenShift API and the ingress server for which the IP addresses are specified in the openshift object.

Ensure that the DNS server has the following entries:

  • api.openshift_name.domain_name → Point to the api_vip address configured in the openshift object
  • *.apps.openshift_name.domain_name → Point to the ingress_vip address configured in the openshift object

If you do not configure the DNS entries upfront, the deployer will still run and it will "spoof" the required entries in the container's /etc/hosts file. However to be able to connect to OpenShift and access the Cloud Pak, the DNS entries are required.
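As an illustration, assuming openshift_name pluto-01, domain_name example.com, and hypothetical VIP addresses 10.1.2.10 (API) and 10.1.2.11 (ingress), the corresponding BIND-style records would look like this:

api.pluto-01.example.com.     IN A 10.1.2.10
*.apps.pluto-01.example.com.  IN A 10.1.2.11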

Obtain the vSphere user and password🔗

For the Cloud Pak Deployer to create the infrastructure and deploy the IBM Cloud Pak, it must have provisioning access to vSphere, which requires the vSphere user and password. The user must have permissions to create virtual machines.

Set environment variables for vSphere🔗

export VSPHERE_USER=your_vsphere_user
+export VSPHERE_PASSWORD=password_of_the_vsphere_user
+
  • VSPHERE_USER: This is the user name of the vSphere user, often this is something like admin@vsphere.local
  • VSPHERE_PASSWORD: The password of the vSphere user. Be careful with special characters such as $ and !, as they are not accepted by the IPI provisioning of OpenShift

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key credential is not in the vault.
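If you need to generate a key pair yourself, a typical invocation (RSA key in the default location, no passphrase) is:

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa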

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry

Create the secrets needed for vSphere deployment🔗

You need to store the OpenShift pull secret in the vault so that the deployer has access to it.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
+    --vault-secret ocp-ssh-pub-key \
+    --vault-secret-file ~/.ssh/id_rsa.pub
+

Optional: Set the GitHub Personal Access Token (PAT)🔗

In some cases, downloading the cloudctl and cpd-cli clients from the IBM organization on GitHub will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a Personal Access Token on github.com and storing it as a secret in the vault.

./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
+

Alternatively, you can set the secret by adding -vs github-ibm-pat=<your PAT> to the ./cp-deploy.sh env apply command.
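For example:

./cp-deploy.sh env apply --accept-all-licenses -vs github-ibm-pat=<your PAT>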

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose this directory, you will not be able to make changes to the configuration and adjust the deployment, so it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.
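For example, assuming the -e flag for extra variables (described in advanced configuration) and a hypothetical env_id of pluto-01:

./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses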

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.
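For reference, the equivalent setting in the Cloud Pak configuration is the accept_licenses property, for example:

accept_licenses: True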

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- vsphere-user
+- vsphere-password
+- ocp-pullsecret
+- ocp-ssh-pub-key
+- ibm_cp_entitlement_key
+- sample-kubeadmin-password
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to make here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/5-post-run/aws-self-managed-add-gpu/index.html b/10-use-deployer/5-post-run/aws-self-managed-add-gpu/index.html new file mode 100644 index 000000000..02f860491 --- /dev/null +++ b/10-use-deployer/5-post-run/aws-self-managed-add-gpu/index.html @@ -0,0 +1,300 @@ + Adding GPU nodes to self-managed OpenShift on AWS - Cloud Pak Deployer

Adding GPU nodes to self-managed OpenShift on AWS🔗

When deploying self-managed OpenShift on AWS, the compute nodes are represented as one or more OpenShift MachineSets. If your cluster is deployed in a single zone, there will be 1 MachineSet which defines the number and type of EC2 instances created in the AWS account. For multi-zone clusters, there will be 3 MachineSets.

Find the compute node MachineSet🔗
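To locate the MachineSet(s), you can list them with the OpenShift client and display one as yaml; a sketch (the MachineSet name is taken from the example below):

oc get machinesets -n openshift-machine-api
oc get machineset fk-aws-sts-2th7t-worker-us-east-1a -n openshift-machine-api -o yaml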

Below is an example of a compute node (worker) MachineSet created by the OpenShift installer. The instance type defines the node that is spun up, with 32 vCPUs and 128 GB of memory.

apiVersion: machine.openshift.io/v1beta1
+kind: MachineSet
+metadata:
+  annotations:
+    capacity.cluster-autoscaler.kubernetes.io/labels: kubernetes.io/arch=amd64
+    machine.openshift.io/GPU: '0'
+    machine.openshift.io/memoryMb: '131072'
+    machine.openshift.io/vCPU: '32'
+  resourceVersion: '22985'
+  name: fk-aws-sts-2th7t-worker-us-east-1a
+  uid: be5a9880-eaa0-4054-a77f-e5c4432eb51f
+  creationTimestamp: '2024-10-16T20:30:43Z'
+  generation: 1
+  managedFields:
+    - apiVersion: machine.openshift.io/v1beta1
+      fieldsType: FieldsV1
+      fieldsV1:
+        'f:metadata':
+          'f:labels':
+            .: {}
+            'f:machine.openshift.io/cluster-api-cluster': {}
+        'f:spec':
+          .: {}
+          'f:replicas': {}
+          'f:selector': {}
+          'f:template':
+            .: {}
+            'f:metadata':
+              .: {}
+              'f:labels':
+                .: {}
+                'f:machine.openshift.io/cluster-api-cluster': {}
+                'f:machine.openshift.io/cluster-api-machine-role': {}
+                'f:machine.openshift.io/cluster-api-machine-type': {}
+                'f:machine.openshift.io/cluster-api-machineset': {}
+            'f:spec':
+              .: {}
+              'f:lifecycleHooks': {}
+              'f:metadata': {}
+              'f:providerSpec':
+                .: {}
+                'f:value':
+                  'f:instanceType': {}
+                  'f:metadata':
+                    .: {}
+                    'f:creationTimestamp': {}
+                  'f:blockDevices': {}
+                  'f:kind': {}
+                  'f:securityGroups': {}
+                  'f:deviceIndex': {}
+                  'f:ami':
+                    .: {}
+                    'f:id': {}
+                  'f:metadataServiceOptions': {}
+                  'f:tags': {}
+                  .: {}
+                  'f:placement':
+                    .: {}
+                    'f:availabilityZone': {}
+                    'f:region': {}
+                  'f:subnet':
+                    .: {}
+                    'f:filters': {}
+                  'f:apiVersion': {}
+                  'f:iamInstanceProfile':
+                    .: {}
+                    'f:id': {}
+                  'f:credentialsSecret':
+                    .: {}
+                    'f:name': {}
+                  'f:userDataSecret':
+                    .: {}
+                    'f:name': {}
+      manager: cluster-bootstrap
+      operation: Update
+      time: '2024-10-16T20:30:43Z'
+    - apiVersion: machine.openshift.io/v1beta1
+      fieldsType: FieldsV1
+      fieldsV1:
+        'f:status': {}
+      manager: cluster-bootstrap
+      operation: Update
+      subresource: status
+      time: '2024-10-16T20:30:43Z'
+    - apiVersion: machine.openshift.io/v1beta1
+      fieldsType: FieldsV1
+      fieldsV1:
+        'f:metadata':
+          'f:annotations':
+            .: {}
+            'f:capacity.cluster-autoscaler.kubernetes.io/labels': {}
+            'f:machine.openshift.io/GPU': {}
+            'f:machine.openshift.io/memoryMb': {}
+            'f:machine.openshift.io/vCPU': {}
+      manager: machine-controller-manager
+      operation: Update
+      time: '2024-10-16T20:35:26Z'
+    - apiVersion: machine.openshift.io/v1beta1
+      fieldsType: FieldsV1
+      fieldsV1:
+        'f:status':
+          'f:availableReplicas': {}
+          'f:fullyLabeledReplicas': {}
+          'f:observedGeneration': {}
+          'f:readyReplicas': {}
+          'f:replicas': {}
+      manager: machineset-controller
+      operation: Update
+      subresource: status
+      time: '2024-10-16T20:41:50Z'
+  namespace: openshift-machine-api
+  labels:
+    machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+      machine.openshift.io/cluster-api-machineset: fk-aws-sts-2th7t-worker-us-east-1a
+  template:
+    metadata:
+      labels:
+        machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+        machine.openshift.io/cluster-api-machine-role: worker
+        machine.openshift.io/cluster-api-machine-type: worker
+        machine.openshift.io/cluster-api-machineset: fk-aws-sts-2th7t-worker-us-east-1a
+    spec:
+      lifecycleHooks: {}
+      metadata: {}
+      providerSpec:
+        value:
+          userDataSecret:
+            name: worker-user-data
+          placement:
+            availabilityZone: us-east-1a
+            region: us-east-1
+          credentialsSecret:
+            name: aws-cloud-credentials
+          instanceType: m5.8xlarge
+          metadata:
+            creationTimestamp: null
+          blockDevices:
+            - ebs:
+                encrypted: true
+                iops: 0
+                kmsKey:
+                  arn: ''
+                volumeSize: 120
+                volumeType: gp3
+          securityGroups:
+            - filters:
+                - name: 'tag:Name'
+                  values:
+                    - fk-aws-sts-2th7t-worker-sg
+          kind: AWSMachineProviderConfig
+          metadataServiceOptions: {}
+          tags:
+            - name: kubernetes.io/cluster/fk-aws-sts-2th7t
+              value: owned
+          deviceIndex: 0
+          ami:
+            id: ami-0d653d86d4113326a
+          subnet:
+            filters:
+              - name: 'tag:Name'
+                values:
+                  - fk-aws-sts-2th7t-private-us-east-1a
+          apiVersion: machine.openshift.io/v1beta1
+          iamInstanceProfile:
+            id: fk-aws-sts-2th7t-worker-profile
+status:
+  availableReplicas: 3
+  fullyLabeledReplicas: 3
+  observedGeneration: 1
+  readyReplicas: 3
+  replicas: 3
+

Transform to GPU MachineSet🔗

To create a GPU MachineSet in the same region, copy the yaml into your favourite text editor and remove all the unnecessary properties. Below is an example of the resulting yaml; the properties that change to define the GPU node(s) are the MachineSet name, the number of replicas, the machineset selector and template labels, and the instanceType. In this example there is only 1 GPU node of type g6e.8xlarge. Only these properties should be changed.

apiVersion: machine.openshift.io/v1beta1
+kind: MachineSet
+metadata:
+  name: fk-aws-sts-2th7t-gpu-us-east-1a
+  namespace: openshift-machine-api
+  labels:
+    machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+      machine.openshift.io/cluster-api-machineset: fk-aws-sts-2th7t-gpu-us-east-1a
+  template:
+    metadata:
+      labels:
+        machine.openshift.io/cluster-api-cluster: fk-aws-sts-2th7t
+        machine.openshift.io/cluster-api-machine-role: worker
+        machine.openshift.io/cluster-api-machine-type: worker
+        machine.openshift.io/cluster-api-machineset: fk-aws-sts-2th7t-gpu-us-east-1a
+    spec:
+      providerSpec:
+        value:
+          userDataSecret:
+            name: worker-user-data
+          placement:
+            availabilityZone: us-east-1a
+            region: us-east-1
+          credentialsSecret:
+            name: aws-cloud-credentials
+          instanceType: g6e.8xlarge
+          metadata:
+            creationTimestamp: null
+          blockDevices:
+            - ebs:
+                encrypted: true
+                iops: 0
+                kmsKey:
+                  arn: ''
+                volumeSize: 120
+                volumeType: gp3
+          securityGroups:
+            - filters:
+                - name: 'tag:Name'
+                  values:
+                    - fk-aws-sts-2th7t-worker-sg
+          kind: AWSMachineProviderConfig
+          metadataServiceOptions: {}
+          tags:
+            - name: kubernetes.io/cluster/fk-aws-sts-2th7t
+              value: owned
+          deviceIndex: 0
+          ami:
+            id: ami-0d653d86d4113326a
+          subnet:
+            filters:
+              - name: 'tag:Name'
+                values:
+                  - fk-aws-sts-2th7t-private-us-east-1a
+          apiVersion: machine.openshift.io/v1beta1
+          iamInstanceProfile:
+            id: fk-aws-sts-2th7t-worker-profile
+

Create the GPU MachineSet🔗

Once ready, do the following to create the MachineSet:

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the updated yaml into the window

The MachineSet will create Machine CRs in the openshift-machine-api project. If the instance type is available in the selected AWS region and there is enough capacity, the AWS instance(s) will be created and after 5-10 minutes they will appear as nodes in the OpenShift cluster.
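To follow the progress, you can watch the Machine resources and wait for the new node(s) to register, for example:

oc get machines -n openshift-machine-api -w
oc get nodes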

\ No newline at end of file diff --git a/10-use-deployer/5-post-run/post-run/index.html b/10-use-deployer/5-post-run/post-run/index.html new file mode 100644 index 000000000..beeb562f4 --- /dev/null +++ b/10-use-deployer/5-post-run/post-run/index.html @@ -0,0 +1,10 @@ + Post-run changes - Cloud Pak Deployer

Post-run changes🔗

If you want to change the deployed configuration, you can just update the configuration files and re-run the deployer. Make sure that you use the same input configuration and status directories, and also the same env_id if you specified one; otherwise the deployment may fail.
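A re-run sketch, using the directories from the installation instructions and assuming a hypothetical env_id of pluto-01 passed via the -e flag:

export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses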

Below are a couple of examples of post-run changes you may want to make.

Change Cloud Pak for Data admin password🔗

When initially installed, the Cloud Pak Deployer will generate a strong password for the Cloud Pak for Data admin user (or cpadmin if you have selected to use Foundational Services IAM). If you want to change the password afterwards, you can do this from the Cloud Pak for Data user interface, but this means that the deployer will no longer be able to make changes to the Cloud Pak for Data configuration.

If you have updated the admin password from the UI, please make sure you also update the secret in the vault.

First, list the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- sample-terraform-tfstate
+- cp4d_admin_zen_sample_sample
+

Then, update the password:

./cp-deploy.sh vault set -vs cp4d_admin_zen_sample_sample -vsv "my Really Sec3re Passw0rd"
+

Finally, run the deployer again. It will make the necessary changes to the OpenShift secret and check that the admin (or cpadmin) user can log in. In this case you can speed up the process via the --skip-infra flag.

./cp-deploy.sh env apply --skip-infra [--accept-all-licenses]
+

Add GPU nodes to the cluster🔗

watsonx.ai requires GPUs to run and tune the foundation models. Deployer currently does not provision these GPU nodes, but you can add them manually from the OpenShift console.

GPU nodes on AWS🔗

For adding GPU nodes on AWS infrastructure, refer to Add GPUs to self-managed OpenShift on AWS.

\ No newline at end of file diff --git a/10-use-deployer/7-command/command/index.html b/10-use-deployer/7-command/command/index.html new file mode 100644 index 000000000..9403322cb --- /dev/null +++ b/10-use-deployer/7-command/command/index.html @@ -0,0 +1,27 @@ + Running commands - Cloud Pak Deployer

Open a command line within the Cloud Pak Deployer container🔗

Sometimes you may need to access the OpenShift cluster using the OpenShift client. For convenience we have made the oc command available in the Cloud Pak Deployer and you can start exploring the current OpenShift cluster immediately without having to install the client on your own workstation.

Prepare for the command line🔗

Set environment variables🔗

Make sure you have set the CONFIG_DIR and STATUS_DIR environment variables to the same values you used when you ran the env apply command. This ensures that the oc command will access the OpenShift cluster(s) of that configuration.
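For example, matching the directories used when the environment was created:

export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status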

Optional: prepare OpenShift cluster🔗

If you have not run the deployer yet and do not intend to install any Cloud Paks, but you do want to access the OpenShift cluster from the command line to check or prepare items, run the deployer with the --skip-cp-install flag.

./cp-deploy.sh env apply --skip-cp-install
+

Deployer will check the configuration, download clients, attempt to log in to OpenShift and prepare the OpenShift cluster with the global pull secret and (for Cloud Pak for Data) node settings. After that, the deployer will finish without installing any Cloud Pak.

Run the Cloud Pak Deployer command line🔗

./cp-deploy.sh env cmd 
+

You should see something like this:

-------------------------------------------------------------------------------
+Entering Cloud Pak Deployer command line in a container.
+Use the "exit" command to leave the container and return to the hosting server.
+-------------------------------------------------------------------------------
+Installing OpenShift client
+Current OpenShift context: cpd
+

Now, you can check the OpenShift cluster version:

oc get clusterversion
+

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
+version   4.8.14    True        False         2d3h    Cluster version is 4.8.14
+

Or, display the list of OpenShift projects:

oc get projects | grep -v openshift-
+

NAME                                               DISPLAY NAME   STATUS
+calico-system                                                     Active
+default                                                           Active
+ibm-cert-store                                                    Active
+ibm-odf-validation-webhook                                        Active
+ibm-system                                                        Active
+kube-node-lease                                                   Active
+kube-public                                                       Active
+kube-system                                                       Active
+openshift                                                         Active
+services                                                          Active
+tigera-operator                                                   Active
+cpd                                                            Active
+

Exit the command line🔗

Once finished, exit out of the container.

exit
+

\ No newline at end of file diff --git a/10-use-deployer/9-destroy/destroy/index.html b/10-use-deployer/9-destroy/destroy/index.html new file mode 100644 index 000000000..4e2a9bec3 --- /dev/null +++ b/10-use-deployer/9-destroy/destroy/index.html @@ -0,0 +1,13 @@ + Destroy cluster - Cloud Pak Deployer

Destroy the created resources🔗

If you have previously used the Cloud Pak Deployer to create assets, you can destroy those assets with the same cp-deploy.sh command.

Info

Currently, destroy is only implemented for OpenShift clusters on IBM Cloud ROKS, AWS and Azure, and for Cloud Pak for Data on an existing OpenShift cluster.

Prepare for destroy🔗

Prepare for destroy on existing OpenShift🔗

Set environment variables for existing OpenShift🔗

Optional: set environment variables for deployer config and status directories. If not specified, respectively $HOME/cpd-config and $HOME/cpd-status will be used.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Prepare for destroy on IBM Cloud🔗

Set environment variables for IBM Cloud🔗

export IBM_CLOUD_API_KEY=your_api_key
+

Optional: set environment variables for deployer config and status directories. If not specified, respectively $HOME/cpd-config and $HOME/cpd-status will be used.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

  • IBM_CLOUD_API_KEY: This is the API key you generated using your IBM Cloud account, this is a 40+ character string
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Prepare for destroy on AWS🔗

Set environment variables for AWS🔗

We assume that the vault already holds the mandatory secrets for AWS Access Key, Secret Access Key and ROSA login token.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Prepare for destroy on Azure🔗

Set environment variables for Azure🔗

We assume that the vault already holds the mandatory secrets for Azure - Service principal id and its password, tenant id and ARO login token.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Run the Cloud Pak Deployer to destroy the assets🔗

./cp-deploy.sh env destroy --confirm-destroy
+

Please ensure you specify the same extra (dynamic) variables that you used when you ran the env apply command.
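For example, assuming the environment was applied with a hypothetical env_id of pluto-01 passed via the -e flag:

./cp-deploy.sh env destroy --confirm-destroy -e env_id=pluto-01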

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

If you need to interrupt the process, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

Finishing up🔗

Once the process has finished successfully, you can delete the status directory.

\ No newline at end of file diff --git a/30-reference/configuration/cloud-pak/index.html b/30-reference/configuration/cloud-pak/index.html new file mode 100644 index 000000000..42596be85 --- /dev/null +++ b/30-reference/configuration/cloud-pak/index.html @@ -0,0 +1,273 @@ + Cloud Paks - Cloud Pak Deployer

Cloud Paks🔗

Defines the Cloud Pak(s) which is/are laid out on the OpenShift cluster, typically in one or more OpenShift projects. The Cloud Pak definition represents the instance users connect to, and which is responsible for managing the functional capabilities installed within the application.

Cloud Pak configuration🔗

cp4d🔗

Defines the Cloud Pak for Data instances to be configured on the OpenShift cluster(s).

cp4d:
+- project: cpd
+  openshift_cluster_name: sample
+  cp4d_version: 4.7.3
+  use_fs_iam: False
+  change_node_settings: True
+  db2u_limited_privileges: False
+  accept_licenses: False
+  openshift_storage_name: nfs-storage
+  cp4d_entitlement: 
+  - cpd-enterprise
+  # - cpd-standard
+  # - cognos-analytics
+  - data-product-hub
+  # - datastage
+  # - ikc-premium
+  # - ikc-standard
+  # - openpages
+  # - planning-analytics
+  # - product-master
+  # - speech-to-text
+  # - text-to-speech
+  - watson-assistant
+  # - watson-discovery
+  # - watsonx-ai
+  # - watsonx-code-assistant-ansible
+  # - watsonx-code-assistant-z
+  # - watsonx-data
+  # - watsonx-gov-mm
+  # - watsonx-gov-rc
+  # - watsonx-orchestrate  
+  cp4d_production_license: True
+  state: installed
+  
+  cartridges:
+  - name: cpfs
+  - name: cpd_platform
+

Properties🔗

Property Description Mandatory Allowed values
project Name of the OpenShift project of the Cloud Pak for Data instance Yes
openshift_cluster_name Name of the OpenShift cluster Yes, inferred from openshift Existing openshift cluster
cp4d_version Cloud Pak for Data version to install, this will determine the version for all cartridges that do not specify a version Yes 4.x.x
sequential_install Deprecated property No True (default), False
use_fs_iam If set to True the deployer will enable Foundational Services IAM for authentication No False (default), True
use_cp_alt_repo When set to False, deployer will use the alternative repo specified in the cp_alt_repo resource No True (default), False
change_node_settings Controls whether the node settings using the machine configs will be applied onto the OpenShift cluster. No True, False
db2u_limited_privileges Specifies whether Db2U containers run with limited privileges. If they do (True), Deployer will create KubeletConfig and Tuned OpenShift resources as per the documentation. No False (default), True
accept_licenses Set to 'True' to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command No True, False (default)
cp4d_entitlement Set to cpd-enterprise, cpd-standard, watsonx-data, watsonx-ai, watsonx-gov-mm, watsonx-gov-rc, depending on the deployed license; multiple entitlements can be specified No For valid values, refer to product documentation
cp4d_production_license Whether the Cloud Pak for Data is a production license No True (default), False
state Indicates whether Cloud Pak for Data must be installed or removed No installed (default), removed
image_registry_name When using private registry, specify name of image_registry No
openshift_storage_name References an openshift_storage element in the OpenShift cluster that was defined for this Cloud Pak for Data instance. The name must exist under openshift.[openshift_cluster_name].openshift_storage. No, inferred from openshift->openshift_storage
cartridges List of cartridges to install for this Cloud Pak for Data instance. See Cloud Pak for Data cartridges for more details Yes

cp4i🔗

Defines the Cloud Pak for Integration installation to be configured on the OpenShift cluster(s).

cp4i:
+- project: cp4i
+  openshift_cluster_name: {{ env_id }}
+  openshift_storage_name: nfs-rook-ceph
+  cp4i_version: 2021.4.1
+  accept_licenses: False
+  use_top_level_operator: False
+  top_level_operator_channel: v1.5
+  top_level_operator_case_version: 2.5.0
+  operators_in_all_namespaces: True
+ 
+  instances:
+  - name: integration-navigator
+    type: platform-navigator
+    license: L-RJON-C7QG3S
+    channel: v5.2
+    case_version: 1.5.0
+

OpenShift projects🔗

The immediate content of the cp4i object is actually a list of OpenShift projects (namespaces). There can be more than one project and instances can be created in separate projects.

cp4i:
+- project: cp4i
+  ...
+
+- project: cp4i-ace
+  ...
+
+- project: cp4i-apic
+  ...
+

Operator channels, CASE versions, license IDs🔗

Before you run the Cloud Pak Deployer, be sure that the correct operator channels are defined for the selected instance types. Some products require a license ID; please check the documentation of each product for the correct license. If you decide to use CASE files instead of the IBM Operator Catalog (more on that below), make sure that you selected the correct CASE versions; please refer to https://github.com/IBM/cloud-pak/tree/master/repo/case

CP4I main properties🔗

The following properties are defined on the project level:

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes
openshift_cluster_name Dynamically defined from the env_id parameter during the execution. Yes, inferred from openshift Existing openshift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). The definition must include the class name of the file storage type and the class name of the block storage type. No, inferred from openshift->openshift_storage
cp4i_version The version of the Cloud Pak for Integration (e.g. 2021.4.1) Yes
use_case_files The property defines if the CASE files are used for installation. If it is True then the operator catalogs are created from the CASE files. If it is False, the IBM Operator Catalog from the entitled registry is used. No True, False (default)
accept_licenses Set to True to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command Yes True, False
use_top_level_operator If it is True then the CP4I top-level operator that installs all other operators is used. Otherwise, only the operators for the selected instance types are installed. No True, False (default)
top_level_operator_channel Needed if the use_top_level_operator is True otherwise, it is ignored. Specifies the channel of the top-level operator. No
top_level_operator_case_version Needed if the use_top_level_operator is True otherwise, it is ignored. Specifies the CASE package version of the top-level operator. No
operators_in_all_namespaces It defines whether the operators are visible in all namespaces or just in the specific namespace where they are needed. No True, False (default)
instances List of the instances that are going to be created (please see below). Yes

Warning

Although the properties use_case_files, use_top_level_operator and operators_in_all_namespaces are defined as optional, they are crucial to how the installation process is executed. If any of them is omitted, the default False value is assumed. If none of them is specified, all are False, which means that the IBM Operator Catalog is used and only the operators needed for the specified instance types are installed in the specific namespace.

Properties of the individual instances🔗

The instances property contains one or more instance definitions. Each instance must have a unique name. There can be more than one instance of the same type.

Naming convention for instance types🔗

For each instance definition, an instance type must be specified. We selected type names that are as similar as possible to the naming convention used in the Platform Navigator user interface. The following table shows all existing types:

Instance type Description/Product name
platform-navigator Platform Navigator
api-management IBM API Connect
automation-assets Automation assets a.k.a Asset repo
enterprise-gateway IBM Data Power
event-endpoint-management Event endpoint manager - managing asynchronous APIs
event-streams IBM Event Streams - Kafka
high-speed-transfer-server Aspera HSTS
integration-dashboard IBM App Connect Integration Dashboard
integration-design IBM App Connect Designer
integration-tracing Operations Dashboard
messaging IBM MQ

Platform navigator🔗

The Platform Navigator is defined as one of the instance types. There is typically only one instance of it. The exception would be an installation in two or more completely separate namespaces (see the CP4I documentation). Special attention is paid to the installation of the Navigator. The Cloud Pak Deployer will install the Navigator instance first, before any other instance, and it will wait until the instance is ready (this could take up to 45 minutes).

When the installation is completed, you will find the admin user password in the status/cloud-paks/cp4i--cp4i-PN-access.txt file. Of course, you can also obtain the password from the platform-auth-idp-credentials secret in the ibm-common-services namespace.
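A sketch of retrieving the password with the OpenShift client, assuming admin_password is the key within the secret (the usual key name for this secret):

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d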

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be platform-navigator
license License ID L-RJON-C7QG3S
channel Subscription channel v5.2
case_version CASE version 1.5.0

API management (IBM API Connect)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be api-management
license License ID L-RJON-C7BJ42
version Version of API Connect 10.0.4.0
channel Subscription channel v2.4
case_version CASE version 3.0.5

Automation assets (Asset repo)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be automation-assets
license License ID L-PNAA-C68928
version Version of Asset repo 2021.4.1-2
channel Subscription channel v1.4
case_version CASE version 1.4.2

Enterprise gateway (IBM Data Power)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be enterprise-gateway
admin_password_secret The name of the secret where admin password is stored. The default name is used if you leave it empty.
license License ID L-RJON-BYDR3Q
version Version of Data Power 10.0-cd
channel Subscription channel v1.5
case_version CASE version 1.5.0

Event endpoint management🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be event-endpoint-management
license License ID L-RJON-C7BJ42
version Version of Event endpoint manager 10.0.4.0
channel Subscription channel v2.4
case_version CASE version 3.0.5

Event streams🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be event-streams
version Version of Event streams 10.5.0
channel Subscription channel v2.5
case_version CASE version 1.5.2

High speed transfer server (Aspera HSTS)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be high-speed-transfer-server
aspera_key A license key for the Aspera software
redis_version Version of the Redis database 5.0.9
version Version of Aspera HSTS 4.0.0
channel Subscription channel v1.4
case_version CASE version 1.4.0

Integration dashboard (IBM App Connect Dashboard)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-dashboard
license License ID L-APEH-C79J9U
version Version of IBM App Connect 12.0
channel Subscription channel v3.1
case_version CASE version 3.1.0

Integration design (IBM App Connect Designer)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-design
license License ID L-KSBM-C87FU2
version Version of IBM App Connect 12.0
channel Subscription channel v3.1
case_version CASE version 3.1.0

Integration tracing (Operations Dashboard)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-tracing
version Version of Integration tracing 2021.4.1-2
channel Subscription channel v2.5
case_version CASE version 2.5.2

Messaging (IBM MQ)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be messaging
queue_manager_name The name of the initial queue. Default is QUICKSTART
license License ID L-RJON-C7QG3S
version Version of IBM MQ 9.2.4.0-r1
channel Subscription channel v1.7
case_version CASE version 1.7.0

cp4waiops🔗

Defines the Cloud Pak for Watson AIOps installation to be configured on the OpenShift cluster(s). The following instances can be installed by the deployer:

  • AI Manager
  • Event Manager
  • Turbonomic
  • Instana
  • Infrastructure management
  • ELK stack (ElasticSearch, Logstash, Kibana)

Aside from the base install, the deployer can also install ready-to-use demos for each of the instances.

cp4waiops:
+- project: cp4waiops
+  openshift_cluster_name: "{{ env_id }}"
+  openshift_storage_name: auto-storage
+  accept_licenses: False
+ 
+  instances:
+  - name: cp4waiops-aimanager
+    kind: AIManager
+    install: true
+  ...
+

AIOPS main properties🔗

The following properties are defined on the project level:

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes
openshift_cluster_name Dynamically defined from the env_id parameter during the execution. No, only if multiple OpenShift clusters are defined Existing openshift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage
accept_licenses Set to True to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command Yes True, False

Service instances🔗

The project that is specified at the cp4waiops level defines the OpenShift project into which the instances of each of the services will be installed. Below is a list of instance "kinds" that can be installed. For every "service instance" there can also be a "demo content" entry to prepare the demo content for the capability.

AI Manager🔗

  instances:
+  - name: cp4waiops-aimanager
+    kind: AIManager
+    install: true
+
+    waiops_size: small
+    custom_size_file: none
+    waiops_name: ibm-cp-watson-aiops
+    subscription_channel: v3.6
+    freeze_catalog: false
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes AIManager
install Must the service be installed? Yes true, false
waiops_size Size of the install Yes small, tall, custom
custom_size_file Name of the file holding the custom sizes if waiops_size is custom No
waiops_name Name of the CP4WAIOPS instance Yes
subscription_channel Subscription channel of the operator Yes
freeze_catalog Freeze the version of the catalog source? Yes false, true
case_install Must AI manager be installed via case files? No false, true
case_github_url GitHub URL to download case file Yes if case_install is true
case_name Name of the case file Yes if case_install is true
case_version Version of the case file to download Yes if case_install is true
case_inventory_setup Case file operation to run for this service Yes if case_install is true cpwaiopsSetup

AI Manager - Demo Content🔗

  instances:
+  - name: cp4waiops-aimanager-demo-content
+    kind: AIManagerDemoContent
+    install: true
+    ...
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes AIManagerDemoContent
install Must the content be installed? Yes true, false

See sample config for remainder of properties.

Event Manager🔗

  instances:
+  - name: cp4waiops-eventmanager
+    kind: EventManager
+    install: true
+    subscription_channel: v1.11
+    starting_csv: noi.v1.7.0
+    noi_version: 1.6.6
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes EventManager
install Must the service be installed? Yes true, false
subscription_channel Subscription channel of the operator Yes
starting_csv Starting ClusterServiceVersion (CSV) Yes
noi_version Version of noi Yes

Event Manager Demo Content🔗

  instances:
+  - name: cp4waiops-eventmanager
+    kind: EventManagerDemoContent
+    install: true
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes EventManagerDemoContent
install Must the content be installed? Yes true, false

Infrastructure Management🔗

  instances:
+  - name: cp4waiops-infrastructure-management
+    kind: InfrastructureManagement
+    install: false
+    subscription_channel: v3.5
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes InfrastructureManagement
install Must the service be installed? Yes true, false
subscription_channel Subscription channel of the operator Yes

ELK stack🔗

ElasticSearch, Logstash and Kibana stack.

  instances:
+  - name: cp4waiops-elk
+    kind: ELK
+    install: false
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes ELK
install Must the service be installed? Yes true, false

Instana🔗

  instances:
+  - name: cp4waiops-instana
+    kind: Instana
+    install: true
+    version: 241-0
+
+    sales_key: 'NONE'
+    agent_key: 'NONE'
+
+    instana_admin_user: "admin@instana.local"
+    #instana_admin_pass: 'P4ssw0rd!'
+    
+    install_agent: true
+
+    integrate_aimanager: true
+    #integrate_turbonomic: true
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes Instana
install Must the service be installed? Yes true, false
version Version of Instana to install No
sales_key License key to be configured No
agent_key License key for agent to be configured No
instana_admin_user Instana admin user to be configured Yes
instana_admin_pass Instana admin user password to be set (if different from global password) No
install_agent Must the Instana agent be installed? Yes true, false
integrate_aimanager Must Instana be integrated with AI Manager? Yes true, false
integrate_turbonomic Must Instana be integrated with Turbonomic? No true, false

Turbonomic🔗

  instances:
+  - name: cp4waiops-turbonomic
+    kind: Turbonomic
+    install: true
+    turbo_version: 8.7.0
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes Turbonomic
install Must the service be installed? Yes true, false
turbo_version Version of Turbonomic to install Yes

Turbonomic Demo Content🔗

  instances:
+  - name: cp4waiops-turbonomic-demo-content
+    kind: TurbonomicDemoContent
+    install: true
+    #turbo_admin_password: P4ssw0rd!
+    create_user: false
+    demo_user: demo
+    #turbo_demo_password: P4ssw0rd!
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes TurbonomicDemoContent
install Must the content be installed? Yes true, false
turbo_admin_pass Turbonomic admin user password to be set (if different from global password) No
create_user Must the demo user be created? No false, true
demo_user Name of the demo user No
turbo_demo_password Demo user password if different from global password No

See sample config for remainder of properties.

cp4ba🔗

Defines the Cloud Pak for Business Automation installation to be configured on the OpenShift cluster(s).
See Cloud Pak for Business Automation for additional details.

---
+cp4ba:
+- project: cp4ba
+  collateral_project: cp4ba-collateral
+  openshift_cluster_name: "{{ env_id }}"
+  openshift_storage_name: auto-storage
+  accept_licenses: false
+  state: installed
+  cpfs_profile_size: small # Profile size which affect replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services
+
+  # Section for Cloud Pak for Business Automation itself
+  cp4ba:
+    # Set to false if you don't want to install (or remove) CP4BA
+    enabled: true # Currently always true
+    profile_size: small # Profile size which affects replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements
+    patterns:
+      foundation: # Foundation pattern, always true - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation
+        optional_components:
+          bas: true # Business Automation Studio (BAS) 
+          bai: true # Business Automation Insights (BAI)
+          ae: true # Application Engine (AE)
+      decisions: # Operational Decision Manager (ODM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm
+        enabled: true
+        optional_components:
+          decision_center: true # Decision Center (ODM)
+          decision_runner: true # Decision Runner (ODM)
+          decision_server_runtime: true # Decision Server (ODM)
+        # Additional customization for Operational Decision Management
+        # Contents of the following will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten.
+        cr_custom:
+          spec:
+            odm_configuration:
+              decisionCenter:
+                # Enable support for decision models
+                disabledDecisionModel: false
+      decisions_ads: # Automation Decision Services (ADS) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads
+        enabled: true
+        optional_components:
+          ads_designer: true # Designer (ADS)
+          ads_runtime: true # Runtime (ADS)
+        gen_ai: # https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=services-configuring-generative-ai-secret
+          apiKey: <watsonx_ai_api_key>
+          authUrl: https://iam.bluemix.net/identity/token
+          mlUrl: https://us-south.ml.cloud.ibm.com
+          projectId: <project_id>          
+      content: # FileNet Content Manager (FNCM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm
+        enabled: true
+        optional_components:
+          cmis: true # Content Management Interoperability Services (FNCM - CMIS)
+          css: true # Content Search Services (FNCM - CSS)
+          es: true # External Share (FNCM - ES)
+          tm: true # Task Manager (FNCM - TM)
+          ier: true # IBM Enterprise Records (FNCM - IER)
+          icc4sap: false # IBM Content Collector for SAP (FNCM - ICC4SAP) - Currently not implemented
+      application: # Business Automation Application (BAA) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa
+        enabled: true
+        optional_components:
+          app_designer: true # App Designer (BAA)
+          ae_data_persistence: true # App Engine data persistence (BAA)
+      document_processing: # Automation Document Processing (ADP) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp
+        enabled: true
+        optional_components: 
+          document_processing_designer: true # Designer (ADP)
+        # Additional customization for Automation Document Processing
+        # Contents of the following will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten.
+        cr_custom:
+          spec:
+            ca_configuration:
+              ## NB: All config parameters for ADP are described here ==> https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=parameters-automation-document-processing
+              ocrextraction:
+                # [Tech Preview] OCR Engine 2 (IOCR) for ADP - Starts the Watson Document Understanding (WDU) pods to process documents.
+                use_iocr: auto # Allowed values: auto, all, none. Refer to doc for option details: https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=parameters-automation-document-processing#:~:text=ocrextraction.use_iocr
+                deep_learning_object_detection: # When enabled, ca_configuration.deeplearning parameters will be used (ignored otherwise), and deep-learning pods will be deployed to enhance object detection.
+                  # If disabled, all training will automatically be done in "fast-training" mode and should finish in less than 10 min.
+                  # Warning: If you enable this option and don't select the "fast training" mode in ADP before starting training, training could take hours (or more if you don't have GPUs).
+                  #          See the "Important" note here for a usage recommendation on "fast/deep learning" training: https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/23.0.2?topic=project-creating-data-extraction-model#:~:text=Training%20takes%20time
+                  enabled: true
+              deeplearning: # Only used if deep_learning_object_detection is enabled. Configure usage of GPU-enabled Nodes.
+                gpu_enabled: false # Use GPUs for deeplearning training instead of CPUs.
+                nodelabel_key: nvidia.com/gpu.present
+                nodelabel_value: "true"
+                replica_count: 1 # Controls the number of deep learning pod replicas. NB: The number of GPUs available on your cluster should be ≥ replica_count.
+      workflow: # Business Automation Workflow (BAW) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw
+        enabled: true
+        optional_components:
+          baw_authoring: true # Workflow Authoring (BAW) - always keep true if workflow pattern is chosen. BAW Runtime is not implemented.
+          kafka: true # Will install a kafka cluster and enable kafka service for workflow authoring.
+  
+  # Section for IBM Process mining
+  pm:
+    # Set to false if you don't want to install (or remove) Process Mining
+    enabled: true
+    # Additional customization for Process Mining
+    # Contents of the following will be merged into PM CR yaml file. Arrays are overwritten.
+    cr_custom:
+      spec:
+        processmining:
+          storage:
+            # Disables redis to spare resources as per https://www.ibm.com/docs/en/process-mining/latest?topic=configurations-custom-resource-definition
+            redis:
+              install: false  
+
+  # Section for IBM Robotic Process Automation
+  rpa:
+    # Set to false if you don't want to install (or remove) RPA
+    enabled: true
+    # Additional customization for Robotic Process Automation
+    # Contents of the following will be merged into RPA CR yaml file. Arrays are overwritten.
+    cr_custom:
+      spec:
+        # Configures the NLP provider component of IBM RPA. You can disable it by specifying 0. https://www.ibm.com/docs/en/rpa/latest?topic=platform-configuring-rpa-custom-resources#basic-setup
+        nlp:
+          replicas: 1
+
+  # Set to false if you don't want to install (or remove) CloudBeaver (PostgreSQL, DB2, MSSQL UI)
+  cloudbeaver_enabled: true
+
+  # Set to false if you don't want to install (or remove) Roundcube
+  roundcube_enabled: true
+
+  # Set to false if you don't want to install (or remove) Cerebro
+  cerebro_enabled: true
+
+  # Set to false if you don't want to install (or remove) AKHQ
+  akhq_enabled: true
+
+  # Set to false if you don't want to install (or remove) Mongo Express
+  mongo_express_enabled: true
+
+  # Set to false if you don't want to install (or remove) phpLDAPAdmin
+  phpldapadmin_enabled: true
+
+  # Set to false if you don't want to install (or remove) OpenSearch Dashboards
+  opensearch_dashboards_enabled: true  
+

CP4BA main properties🔗

The following properties are defined on the project level.

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes Valid OCP project name
collateral_project The name of the OpenShift project that will be created and used for the installation of all collateral (prerequisites and extras). Yes Valid OCP project name
openshift_cluster_name Dynamically defined from the env_id parameter during execution. No, only if multiple OpenShift clusters defined Existing OpenShift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage
accept_licenses Set to true to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag of the cp-deploy.sh command can be used Yes true, false
state Set to installed to install enabled capabilities, set to removed to remove enabled capabilities. Yes installed, removed
cpfs_profile_size Profile size which affects replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services Yes starterset, small, medium, large
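
For example, a minimal sketch of accepting all licenses at deploy time instead of setting accept_licenses to true in the configuration (assuming the standard cp-deploy.sh env apply invocation):

./cp-deploy.sh env apply --accept-all-licenses
+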

Cloud Pak for Business Automation properties🔗

Used to configure CP4BA.
Placed in cp4ba key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable CP4BA. Currently always true. Yes true
profile_size Profile size which affects replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements Yes small, medium, large
patterns Section where CP4BA patterns are configured. Please make sure to select everything that is needed as dependencies. Dependencies can be determined from documentation at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments Yes Object - see details below

Foundation pattern properties🔗

Always configure in CP4BA.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation
Placed in cp4ba.patterns.foundation key.

Property Description Mandatory Allowed values
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.bas Set to true to enable Business Automation Studio Yes true, false
optional_components.bai Set to true to enable Business Automation Insights Yes true, false
optional_components.ae Set to true to enable Application Engine Yes true, false

Decisions pattern properties🔗

Used to configure Operational Decision Manager.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm
Placed in cp4ba.patterns.decisions key.

Property Description Mandatory Allowed values
enabled Set to true to enable decisions pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.decision_center Set to true to enable Decision Center Yes true, false
optional_components.decision_runner Set to true to enable Decision Runner Yes true, false
optional_components.decision_server_runtime Set to true to enable Decision Server Yes true, false
cr_custom Additional customization for Operational Decision Management. Contents will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. No Object
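
To illustrate the merge behavior (a hypothetical sketch; the keys shown are illustrative, not taken from an actual generated CR): cr_custom content is deep-merged into the CR that the deployer generates, while arrays are replaced wholesale.

# Illustrative only: suppose the generated CR contains
+#   spec:
+#     odm_configuration:
+#       decisionCenter:
+#         replicas: 1
+# and the configuration specifies
+cr_custom:
+  spec:
+    odm_configuration:
+      decisionCenter:
+        disabledDecisionModel: false
+# The merged CR keeps replicas: 1 and gains disabledDecisionModel: false.
+# An array at any of these paths would be overwritten, not appended to.
+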

Decisions ADS pattern properties🔗

Used to configure Automation Decision Services.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads
Placed in cp4ba.patterns.decisions_ads key.

Property Description Mandatory Allowed values
enabled Set to true to enable decisions_ads pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.ads_designer Set to true to enable Designer Yes true, false
optional_components.ads_runtime Set to true to enable Runtime Yes true, false
gen_ai Sub object for definition of GenAI connection. More on https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/24.0.0?topic=services-configuring-generative-ai-secret No Object
gen_ai.apiKey Set to the real value for your Watsonx.AI platform No Your real value
gen_ai.authUrl Set to the real value for your Watsonx.AI platform No Your real value
gen_ai.mlUrl Set to the real value for your Watsonx.AI platform No Your real value
gen_ai.projectId Set to the real value for your Watsonx.AI platform No Your real value

Content pattern properties🔗

Used to configure FileNet Content Manager.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm
Placed in cp4ba.patterns.content key.

Property Description Mandatory Allowed values
enabled Set to true to enable content pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.cmis Set to true to enable CMIS Yes true, false
optional_components.css Set to true to enable Content Search Services Yes true, false
optional_components.es Set to true to enable External Share. Currently not functional. Yes true, false
optional_components.tm Set to true to enable Task Manager Yes true, false
optional_components.ier Set to true to enable IBM Enterprise Records Yes true, false
optional_components.icc4sap Set to true to enable IBM Content Collector for SAP. Currently not functional. Always false. Yes false

Application pattern properties🔗

Used to configure Business Automation Application.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa
Placed in cp4ba.patterns.application key.

Property Description Mandatory Allowed values
enabled Set to true to enable application pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.app_designer Set to true to enable Application Designer Yes true, false
optional_components.ae_data_persistence Set to true to enable App Engine data persistence Yes true, false

Document Processing pattern properties🔗

Used to configure Automation Document Processing.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp
Placed in cp4ba.patterns.document_processing key.

Property Description Mandatory Allowed values
enabled Set to true to enable document_processing pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.document_processing_designer Set to true to enable Designer Yes true
cr_custom Additional customization for Automation Document Processing. Contents will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. No Object

Workflow pattern properties🔗

Used to configure Business Automation Workflow.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw
Placed in cp4ba.patterns.workflow key.

Property Description Mandatory Allowed values
enabled Set to true to enable workflow pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.baw_authoring Set to true to enable Workflow Authoring. Currently always true. Yes true
optional_components.kafka Set to true to enable kafka service for workflow authoring. Yes true, false

Process Mining properties🔗

Used to configure IBM Process Mining.
Placed in pm key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable process mining. Yes true, false
cr_custom Additional customization for Process Mining. Contents will be merged into PM CR yaml file. Arrays are overwritten. No Object

Robotic Process Automation properties🔗

Used to configure IBM Robotic Process Automation.
Placed in rpa key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable rpa. Yes true, false
cr_custom Additional customization for Process Mining. Contents will be merged into RPA CR yaml file. Arrays are overwritten. No Object

Other properties🔗

Used to configure extra UIs.
The following properties are defined on the project level.

Property Description Mandatory Allowed values
cloudbeaver_enabled Set to true to enable CloudBeaver (PostgreSQL, DB2, MSSQL UI). Yes true, false
roundcube_enabled Set to true to enable Roundcube. Client for mail. Yes true, false
cerebro_enabled Set to true to enable Cerebro. Client for ElasticSearch in CP4BA. Yes true, false
akhq_enabled Set to true to enable AKHQ. Client for Kafka in CP4BA. Yes true, false
mongo_express_enabled Set to true to enable Mongo Express. Client for MongoDB. Yes true, false
phpldapadmin_enabled Set to true to enable phpLDAPadmin. Client for OpenLDAP. Yes true, false
opensearch_dashboards_enabled Set to true to enable OpenSearch Dashboards. Client for OpenSearch. Yes true, false
\ No newline at end of file diff --git a/30-reference/configuration/cp4ba/index.html b/30-reference/configuration/cp4ba/index.html new file mode 100644 index 000000000..245c4fb20 --- /dev/null +++ b/30-reference/configuration/cp4ba/index.html @@ -0,0 +1 @@ + Cloud Pak for Business Automation - Cloud Pak Deployer
Skip to content

Cloud Pak for Business Automation🔗

Contains CP4BA version 23.0.2 iFix 3.
RPA and Process Mining are currently not deployed due to a discrepancy in the Cloud Pak Foundational Services version.
Contains IPM version 1.14.4. Contains RPA version 23.0.15.

Disclaimer ✋🔗

This is not official IBM documentation.
Absolutely no warranties, no support, no responsibility for anything.
Use it at your own risk and always follow the official IBM documentation.
It is always your responsibility to make sure you are license compliant when using this repository to install IBM Cloud Pak for Business Automation.

Please do not hesitate to create an issue here if needed. Your feedback is appreciated.

Not for production use (nor for dev, test or prod environments). Suitable for demo and PoC environments, albeit deployed with the production deployment type.

!Important - Keep in mind that this deployment contains capabilities (those not bundled with CP4BA) which are not eligible to run on Worker Nodes covered by CP4BA OCP Restricted licenses. More info at https://www.ibm.com/docs/en/cloud-paks/1.0?topic=clusters-restricted-openshift-entitlement.

Documentation base 📝🔗

Deploying CP4BA is based on official documentation which is located at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest.

Deployment of other parts is also based on respective official documentations.

Benefits 🚀🔗

  • Automatic deployment of the whole platform where you don't need to take care about almost any prerequisites
  • OCP Ingress certificate is used for Routes, so there is only one certificate you need to trust on your local machine to trust all URLs of the whole platform
  • A trusted certificate in the browser also enables you to save passwords
  • Wherever possible a common admin user cpadmin with an adjustable password is used, so you don't need to remember multiple credentials when you want to access the platform (convenience also comes with responsibility - so you don't want to expose your platform to the whole world)
  • The whole platform is running on containers, so you don't need to manually prepare anything on traditional VMs and take care of them including required prerequisites
  • Many otherwise manual post-deployment steps have been automated
  • Pre-integrated and automatically connected extras are deployed in the platform for easier access/management/troubleshooting
  • You have a working Production deployment which you can use as a reference for further custom deployments

General information 📢🔗

What is not included:

  • ICCs - not covered.
  • FNCM External Share - Currently not supported with ZEN & IAM as per the FNCM limitations
  • Asset Repository - it is more part of CP4I.
  • Workflow Server and Workstream Services - this is a dev deployment. BAW Authoring and (BAW + IAWS) are mutually exclusive in a single project.
  • ADP Runtime deployment - this is a dev deployment.

What is in the package 📦🔗

When you perform a full deployment, you get the full CP4BA platform as seen in the picture. You can also omit some capabilities - this is covered later in this doc.

More details about each section from the picture follow below it.

images/cp4ba-installation.png

Extras section🔗

Contains extra software which makes working with the platform even easier.

  • phpLDAPadmin - Web UI for OpenLDAP directory making it easier to admin and troubleshoot the LDAP.
  • Gitea - Contains Git server with web UI and is used for ADS and ADP for project sharing and publishing. Organizations for ADS and ADP are automatically created. Gitea is connected to OpenLDAP for authentication and authorization.
  • Nexus - Repository manager which contains pushed ADS java libraries needed for custom development and also for publishing custom ADS jars. Nexus is connected to OpenLDAP for authentication and authorization.
  • Roundcube - Web UI for included Mail server to be able to browse incoming emails.
  • Cerebro - Web UI ElasticSearch browser automatically connected to the ES instance deployed with CP4BA.
  • AKHQ - Web UI Kafka browser automatically connected to the Kafka instance deployed with CP4BA.
  • OpenSearch Dashboards - Web UI OpenSearch dashboard tool automatically connected to OpenSearch instance deployed with CP4BA.
  • Mail server - For various mail integrations e.g. from BAN, BAW and RPA.
  • Mongo Express - Web UI for the MongoDB databases of CP4BA and Process Mining, to more easily troubleshoot the DBs.
  • CloudBeaver - Web UI for PostgreSQL and MSSQL databases making it easier to admin and troubleshoot the DBs.

CP4BA (Cloud Pak for Business Automation) section🔗

CP4BA capabilities🔗

CP4BA capabilities are in purple color.

More info for CP4BA capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest.

More specifically in overview of patterns at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments.

Pink color is used for CPFS dedicated capabilities.

More info for CPFS dedicated capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/foundational-services/latest.

Magenta color is used for additional capabilities.

More info for Process Mining is available in official docs at https://www.ibm.com/docs/en/process-mining/latest.

More info for RPA is available in official docs at https://www.ibm.com/docs/en/rpa/latest.

Assets are currently not deployed.

CPFS (Cloud Pak Foundational Services) section🔗

Contains services which are reused by Cloud Paks.

More info available in official docs at https://www.ibm.com/docs/en/cpfs.

  • License metering - Tracks license usage.
  • Certificate Manager - Provides certificate handling.

Pre-requisites section🔗

Contains prerequisites for the whole platform.

  • PostgreSQL - Database storage for the majority of capabilities.
  • OpenLDAP - Directory solution for users and groups definition.
  • MSSQL server - Database storage for RPA server.
  • MongoDB - Database storage for ADS and Process Mining.

Environments used for installation 💻🔗

With proper sizing of the cluster and the provided RWX File and RWO Block Storage Classes, CP4BA deployed with the Deployer should work on any OpenShift 4.14 cluster whose Worker Nodes in total have 60 CPU cores and 128 GB of memory free for requests.

Automated post-deployment tasks ✅🔗

For your convenience the following post-deployment setup tasks have been automated:

Usage & operations 📇🔗

Endpoints, access info and other useful information are available after installation in Project cloud-pak-deployer, in the usage.md file of ConfigMap cp4ba-usage. It is best to copy the contents and open them in a nice Markdown editor like VSCode. The ConfigMap name can begin with a different prefix if you customized the main CP4BA project name.
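
For example, a minimal sketch for dumping the usage information to your terminal (assuming the default cloud-pak-deployer project and ConfigMap names):

oc extract configmap/cp4ba-usage -n cloud-pak-deployer --keys=usage.md --to=-
+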

Optional post deployment steps ➡️🔗

CP4BA
Review and perform post-deploy manual steps for CP4BA as specified in Project cloud-pak-deployer, in the postdeploy.md file of ConfigMap cp4ba-postdeploy. It is best to copy the contents and open them in a nice Markdown editor like VSCode. The ConfigMap name can begin with a different prefix if you customized the main CP4BA project name.

RPA
Review and perform post-deploy manual steps for RPA as specified in Project cloud-pak-deployer, in the postdeploy.md file of ConfigMap cp4ba-rpa-postdeploy. It is best to copy the contents and open them in a nice Markdown editor like VSCode. The ConfigMap name can begin with a different prefix if you customized the main CP4BA project name.

Process Mining
Review and perform post-deploy manual steps for IPM as specified in Project cloud-pak-deployer, in the postdeploy.md file of ConfigMap cp4ba-pm-postdeploy. It is best to copy the contents and open them in a nice Markdown editor like VSCode. The ConfigMap name can begin with a different prefix if you customized the main CP4BA project name.

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-access-control/index.html b/30-reference/configuration/cp4d-access-control/index.html new file mode 100644 index 000000000..dda54de76 --- /dev/null +++ b/30-reference/configuration/cp4d-access-control/index.html @@ -0,0 +1,93 @@ + Access Control - Cloud Pak Deployer
Skip to content

Cloud Pak for Data access control🔗

Cloud Pak for Data can connect to an external identity provider (IdP) for user authentication. This function is delegated to Foundational Services IAM. In addition to user authentication, the IdP's groups can be mapped to Cloud Pak for Data user groups for access control.

Roles - zen_role🔗

Cloud Pak Deployer can be used to define user-defined roles in Cloud Pak for Data. Roles identify the permissions that a user or user group has on the platform.

zen_role:
+- name: monitor-role
+  description: User-defined role for monitoring the platform
+  state: installed
+  permissions:
+  - monitor_platform
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Cloud Pak for Data role Yes
description Description of the Cloud Pak for Data role Yes
state Indicates whether the role must be installed or removed No installed (default), removed
permissions[] List of permissions to grant to the role Yes

To find the permissions that are allowed, you can use the following REST API (GET) after authenticating to the platform: https://$CP4D_URL/icp4d-api/v1/permissions.
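
For example, a minimal sketch using curl and jq (assuming CP4D_URL and CP4D_ADMIN_PASSWORD are set, mirroring the asset script shown later in this documentation):

# Obtain a bearer token, then list all permissions known to the platform
+cp4d_token=$(curl -s -k -H 'Content-Type: application/json' -X POST $CP4D_URL/icp4d-api/v1/authorize -d '{"username": "admin", "password": "'$CP4D_ADMIN_PASSWORD'"}' | jq -r .token)
+curl -s -k -H "Authorization: Bearer ${cp4d_token}" $CP4D_URL/icp4d-api/v1/permissions | jq .
+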

Access Control - zen_access_control🔗

The zen_access_control object controls the creation of Zen user groups that map identity provider (IdP) groups and define the roles of the user group. A user_groups entry must contain at least one role and must reference the associated IdP group(s).

Example with Red Hat SSO (Keycloak) authentication

zen_access_control:
+- project: cpd
+  openshift_cluster_name: "{{ env_id }}"
+  keycloak_name: ibm-keycloak
+  user_groups:
+  - name: cp4d-admins
+    description: Cloud Pak for Data Administrators
+    roles:
+    - Administrator
+    keycloak_groups:
+    - kc-cp4d-admins
+  - name: cp4d-data-engineers
+    description: Cloud Pak for Data Data Engineers
+    roles:
+    - User
+    keycloak_groups:
+    - kc-cp4d-data-engineers
+  - name: cp4d-data-scientists
+    description: Cloud Pak for Data Data Scientists
+    roles:
+    - User
+    keycloak_groups:
+    - kc-cp4d-data-scientists
+  - name: cp4d-monitor
+    description: Cloud Pak for Data monitoring
+    roles:
+    - monitor-role
+    keycloak_groups:
+    - kc-cp4d-monitor
+

Example with OpenLDAP authentication

zen_access_control:
+- project: cpd
+  openshift_cluster_name: "{{ env_id }}"
+  demo_openldap_name: ibm-openldap
+  user_groups:
+  - name: cp4d-admins
+    description: Cloud Pak for Data Administrators
+    roles:
+    - Administrator
+    ldap_groups:
+    - cn=cp4d-admins,ou=Groups,dc=cp,dc=internal
+  - name: cp4d-data-engineers
+    description: Cloud Pak for Data Data Engineers
+    roles:
+    - User
+    ldap_groups:
+    - cn=cp4d-data-engineers,ou=Groups,dc=cp,dc=internal
+  - name: cp4d-data-scientists
+    description: Cloud Pak for Data Data Scientists
+    roles:
+    - User
+    ldap_groups:
+    - cn=cp4d-data-scientists,ou=Groups,dc=cp,dc=internal
+  - name: cp4d-monitor
+    description: Cloud Pak for Data monitoring
+    roles:
+    - monitor-role
+    ldap_groups:
+    - cn=cp4d-monitor,ou=Groups,dc=cp,dc=internal
+

Property explanation🔗

Property Description Mandatory Allowed values
project Project of the cp4d instance Yes
openshift_cluster_name Reference to the OpenShift cluster name Yes
keycloak_name Name of the Red Hat SSO (Keycloak) instance on the same OpenShift cluster No
demo_openldap_name Name of the OpenLDAP instance defined in the demo_openldap resource No
user_groups[] Cloud Pak for Data user groups to be configured Yes
.name Name of the CP4D user group Yes
.description Description of the CP4D user group No
.roles[] List of CP4D roles to assign to the user group Yes
.keycloak_groups[] List of Red Hat SSO (Keycloak) groups to assign to the CP4D user group Yes if IdP is Keycloak
.ldap_groups[] List of OpenLDAP groups to assign to the CP4D user group Yes if IdP is OpenLDAP

role values: The following roles are defined by default in Cloud Pak for Data: - Administrator - User

Further roles can be defined in the zen object and can be referenced by the user_groups.roles[] property.

During the creation of User Group(s) the following validations are performed: - The provided role(s) are available in Cloud Pak for Data

Provisioned instance authorization - cp4d_instance_configuration🔗

When using Cloud Pak for Data LDAP connectivity and User Groups, the User Groups can be assigned to grant the users of the LDAP groups access to the provisioned instance(s).

Currently supported instance authorization:
- Cognos Analytics (ca)

Cognos Analytics instance authorization🔗

Cognos Analytics Authorization

cp4d_instance_configuration:
+- project: zen-sample                # Mandatory
+  openshift_cluster_name: sample     # Mandatory
+  cartridges:
+  - name: cognos_analytics
+    manage_access:                                  # Optional, requires LDAP connectivity
+    - ca_role: Analytics Viewer                     # Mandatory, one of the CA Access roles
+      cp4d_user_group: CA_Analytics_Viewer          # Mandatory, the CP4D User Group Name
+    - ca_role: Analytics Administrators             # Mandatory, one of the CA Access roles
+      cp4d_user_group: CA_Analytics_Administrators  # Mandatory, the CP4D User Group Name
+

A Cognos Analytics (ca) instance can have multiple manage_access entries. Each entry consists of 1 ca_role and 1 cp4d_user_group element. The ca_role must be one of the following possible values: - Analytics Administrators - Analytics Explorers - Analytics Users - Analytics Viewer

During the configuration of the instance authorization the following validations are performed: - LDAP configuration is completed - The provided ca_role is valid - The provided cp4d_user_group exists

Cloud Pak for Data LDAP configuration (deprecated)🔗

LDAP_Overview

IBM Cloud Pak for Data can connect to an LDAP user registry in order for users to log on with their LDAP credentials. The configuration of LDAP can be specified in a separate YAML file in the config folder, or included in an existing YAML file.

LDAP configuration - cp4d_ldap_config🔗

A cp4d_ldap_config entry contains the connectivity information for the LDAP user registry. The project and openshift_cluster_name values uniquely identify the Cloud Pak for Data instance. The ldap_domain_search_password_vault entry contains a reference to the vault, which means that as a preparation the LDAP bind user password must be stored in the vault used by the Cloud Pak Deployer, under the key referenced in the configuration. If the password is not available, the Cloud Pak Deployer will fail and will not be able to configure the LDAP connectivity.

# Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own LDAP configuration
+cp4d_ldap_config:
+- project: cpd-instance
+  openshift_cluster_name: sample                                         # Mandatory
+  ldap_host: ldaps://ldap-host                                           # Mandatory
+  ldap_port: 636                                                         # Mandatory
+  ldap_user_search_base: ou=users,dc=ibm,dc=com                          # Mandatory
+  ldap_user_search_field: uid                                            # Mandatory
+  ldap_domain_search_user: uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com # Mandatory
+  ldap_domain_search_password_vault: ldap_bind_password                  # Mandatory, Password vault reference
+  auto_signup: "false"                                                   # Mandatory
+  ldap_group_search_base: ou=groups,dc=ibm,dc=com                        # Optional, but mandatory when using user groups
+  ldap_group_search_field: cn                                            # Optional, but mandatory when using user groups
+  ldap_mapping_first_name: cn                                            # Optional, but mandatory when using user groups
+  ldap_mapping_last_name: sn                                             # Optional, but mandatory when using user groups
+  ldap_mapping_email: mail                                               # Optional, but mandatory when using user groups
+  ldap_mapping_group_membership: memberOf                                # Optional, but mandatory when using user groups
+  ldap_mapping_group_member: member                                      # Optional, but mandatory when using user groups
+

The above configuration uses the LDAPS protocol to connect to port 636 on the ldap-host server. This server can be a private server if an upstream DNS server is also defined for the OpenShift cluster that runs Cloud Pak for Data. The distinguished name uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com is used as the bind user for the LDAP server and its password is retrieved from vault secret ldap_bind_password.
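
The bind password can be stored in the vault before running the deployer; a minimal sketch (assuming the deployer's vault set command with default vault settings, and a placeholder password value):

./cp-deploy.sh vault set \
+    --vault-secret ldap_bind_password \
+    --vault-secret-value <bind_user_password>
+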

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-assets/index.html b/30-reference/configuration/cp4d-assets/index.html new file mode 100644 index 000000000..42e6b5df1 --- /dev/null +++ b/30-reference/configuration/cp4d-assets/index.html @@ -0,0 +1,103 @@ + Assets - Cloud Pak Deployer
Skip to content

Cloud Pak Asset configuration🔗

The Cloud Pak Deployer can implement demo assets and accelerators as part of the deployment process to standardize standing up fully-featured demo environments, or to test patches or new versions of the Cloud Pak using pre-defined assets.

Node changes for ROKS and Satellite clusters🔗

If you put a script named apply-custom-node-settings.sh in the CONFIG_DIR/assets directory, it will be run as part of applying the node settings. This way you can override the existing node settings applied by the deployer or update the compute nodes with new settings. For more information regarding the apply-custom-node-settings.sh script, go to Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite.
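
A minimal sketch of such a script (the contents are purely illustrative; substitute whatever node-level settings your cluster needs):

#!/bin/bash
+# Illustrative only: label all worker nodes so workloads can be pinned to them
+for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name);do
+    oc label --overwrite $node custom-settings=applied
+done
+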

cp4d_asset🔗

A cp4d_asset entry defines one or more assets to be deployed for a specific Cloud Pak for Data instance (OpenShift project). In the configuration, a directory relative to the configuration directory (CONFIG_DIR) is specified. For example, if the directory where the configuration is stored is $HOME/cpd-config/sample and you specify assets as the asset directory, all assets under $HOME/cpd-config/sample/assets are processed.

You can create one or more subdirectories under the specified location, each holding an asset to be deployed. The deployer finds all cp4d-asset.sh scripts and cp4d-asset.yaml Ansible task files and runs them.

The following runtime attributes will be set prior to running the shell script or the Ansible task:

  • If the Cloud Pak for Data instance has the Common Core Services (CCS) custom resource installed, cpdctl is configured for the current Cloud Pak for Data instance and the current context is set to the admin user of the instance. This means you can run all cpdctl commands without first having to login to Cloud Pak for Data.
  • The current working directory is set to the directory holding the cp4d-asset.sh script.
  • When running the cp4d-asset.sh shell script, the following environment variables are available:
    - CP4D_URL: Cloud Pak for Data URL
    - CP4D_ADMIN_PASSWORD: Cloud Pak for Data admin password
    - CP4D_OCP_PROJECT: OpenShift project that holds the Cloud Pak for Data instance
    - KUBECONFIG: OpenShift configuration file that allows you to run oc commands for the cluster

cp4d_asset:
+- name: sample-asset
+  project: cpd
+  asset_location: cp4d-assets
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the asset to be deployed. You can specify as many assets as wanted Yes
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
asset_location Directory holding the asset(s). This is a directory relative to the config directory (CONFIG_DIR) that was passed to the deployer Yes

Asset example🔗

Below is an example asset that implements the Customer Attrition industry accelerator, which can be found here: https://github.com/IBM/Industry-Accelerators/blob/master/CPD%204.0.1.0/utilities-customer-attrition-prediction-industry-accelerator.tar.gz

To implement:

  • Download the tar.gz file to the cp4d-assets directory in the specified configuration directory
  • Create the cp4d-asset.sh shell script (example below)
  • Add a cp4d_asset entry to the Cloud Pak for Data config file in the config directory (or in any other file with extension .yaml)

cp4d-asset.sh shell script:

#!/bin/bash
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )
+
+# Function to retrieve project by name
+function retrieve_project {
+    project_name=$1
+
+    # First check if project already exists
+    project_id=$(cpdctl project list \
+        --output json | \
+        jq -r --arg project_name $project_name \
+        'if .total_results==0 then "" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end')
+
+    echo $project_id
+}
+
+# Function to create a project
+function create_project {
+    project_name=$1
+
+    retrieve_project $project_name
+
+    if [ "$project_id" != "" ];then
+        echo "Project $project_name already exists"
+        return
+    else
+        echo "Creating project $project_name"
+        storage_id=$(uuidgen)
+        storage=$(jq --arg storage_id $storage_id '. | .guid=$storage_id | .type="assetfiles"' <<< '{}')
+        cpdctl project create --name $project_name --storage "$storage"
+    fi
+
+    # Find project_id to return
+    project_id=$(cpdctl project list \
+        --output json | \
+        jq -r --arg project_name $project_name \
+        'if .total_results==0 then "" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end')
+}
+
+# Function to import a project
+function import_project {
+    project_id=$1
+    zip_file=$2
+    import_id=$(cpdctl asset import start \
+        --project-id $project_id --import-file $zip_file \
+        --output json --jmes-query "metadata.id" --raw-output)
+    
+    cpdctl asset import get --project-id $project_id --import-id $import_id --output json
+
+}
+
+# Function to run jobs
+function run_jobs {
+    project_id=$1
+    for job in $(cpdctl job list --project-id $project_id \
+        --output json | jq -r '.results[] | .metadata.asset_id');do
+        cpdctl job run create --project-id $project_id --job-id $job --job-run "{}"
+    done
+}
+
+#
+# Start of the asset code
+#
+
+# Unpack the utilities-customer-attrition-prediction-industry-accelerator directory
+rm -rf /tmp/utilities-customer-attrition-prediction-industry-accelerator
+tar xzf utilities-customer-attrition-prediction-industry-accelerator.tar.gz -C /tmp
+asset_dir=/tmp/utilities-customer-attrition-prediction-industry-accelerator
+
+# Change to the asset directory
+pushd ${asset_dir} > /dev/null
+
+# Log on to Cloud Pak for Data with the admin user
+cp4d_token=$(curl -s -k -H 'Content-Type: application/json' -X POST $CP4D_URL/icp4d-api/v1/authorize -d '{"username": "admin", "password": "'$CP4D_ADMIN_PASSWORD'"}' | jq -r .token)
+
+# Import categories
+curl -s -k -H 'accept: application/json' -H "Authorization: Bearer ${cp4d_token}" -H "content-type: multipart/form-data" -X POST $CP4D_URL/v3/governance_artifact_types/category/import?merge_option=all -F "file=@./utilities-customer-attrition-prediction-glossary-categories.csv;type=text/csv"
+
+# Import glossary terms
+curl -s -k -H 'accept: application/json' -H "Authorization: Bearer ${cp4d_token}" -H "content-type: multipart/form-data" -X POST $CP4D_URL/v3/governance_artifact_types/glossary_term/import?merge_option=all -F "file=@./utilities-customer-attrition-prediction-glossary-terms.csv;type=text/csv"
+
+# Check if customer-attrition project already exists. If so, do nothing
+project_id=$(retrieve_project "customer-attrition")
+
+# If project does not exist, import it and run jobs
+if [ "$project_id" == "" ];then
+    create_project "customer-attrition"
+    import_project $project_id \
+        /tmp/utilities-customer-attrition-prediction-industry-accelerator/utilities-customer-attrition-prediction-analytics-project.zip
+    run_jobs $project_id
+else
+    echo "Skipping deployment of CP4D asset, project customer-attrition already exists"
+fi
+
+# Return to original directory
+popd > /dev/null
+
+exit 0
+

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-cartridges/index.html b/30-reference/configuration/cp4d-cartridges/index.html new file mode 100644 index 000000000..ab7bfa417 --- /dev/null +++ b/30-reference/configuration/cp4d-cartridges/index.html @@ -0,0 +1,30 @@ + Cartridges - Cloud Pak Deployer
Skip to content

Cloud Pak for Data cartridges🔗

Defines the services (cartridges) which must be installed into the Cloud Pak for Data instances. The cartridges will be configured with the storage class defined at the Cloud Pak for Data object level. For each cartridge you can specify whether it must be installed or removed by specifying the state. If a cartridge is installed and the state is changed to removed, the cartridge and all of its instances are removed by the deployer when it is run.

An example Cloud Pak for Data object with cartridges is below:

cp4d:
+- project: cpd-instance
+  cp4d_version: 4.8.3
+
+  cartridges:
+  - name: cpfs
+
+  - name: cpd_platform
+
+  - name: db2oltp
+    size: small
+    instances:
+    - name: db2-instance
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20
+      transactionlog_size_gb: 20
+    state: installed
+
+  - name: wkc
+    size: small
+    state: removed
+
+  - name: wml
+    size: small
+    state: installed
+
+  - name: ws
+    state: installed
+

When run, the deployer installs the Db2 OLTP (db2oltp), Watson Machine Learning (wml) and Watson Studio (ws) cartridges. If the Watson Knowledge Catalog (wkc) is installed in the cpd-instance OpenShift project, it is removed.

After the deployer installs Db2 OLTP, a new Db2 instance is created with the specified attributes.

Cloud Pak for Data cartridges🔗

cp4d.cartridges🔗

This is a list of cartridges that will be installed in the Cloud Pak for Data instance. Every cartridge is identified by its name.

Some cartridges may require additional information to correctly install or to create an instance for the cartridge. Below you will find a list of all tested Cloud Pak for Data cartridges and their specific properties.

Properties for all cartridges🔗

Property Description Mandatory Allowed values
name Name of the cartridge Yes
state Whether the cartridge must be installed or removed. If not specified, the cartridge will be installed No installed, removed
installation_options Record of properties that will be applied to the spec of the OpenShift Custom Resource No

Cartridge cpfs or cp-foundation🔗

Defines the Cloud Pak Foundational Services (fka Common Services) which are required for all Cloud Pak for Data installations. Cloud Pak for Data Foundational Services provide functionalities around certificate management, license service, identity and access management (IAM), etc.

This cartridge is mandatory for every Cloud Pak for Data instance.

Cartridge cpd_platform or lite🔗

Defines the Cloud Pak for Data platform operator (fka "lite") which installs the base services needed to operate Cloud Pak for Data, such as the Zen metastore, Zen watchdog and the user interface.

This cartridge is mandatory for every Cloud Pak for Data instance.

Cartridge wkc🔗

Manages the Watson Knowledge Catalog installation for the Cloud Pak for Data instance.

Additional properties for cartridge wkc🔗

Property Description Mandatory Allowed values
size Scale configuration of the cartridge No small (default), medium, large
installation_options.install_wkc_core_only Install only the core of WKC? No True, False (default)
installation_options.enableKnowledgeGraph Enable the knowledge graph for business lineage? No True, False (default)
installation_options.enableDataQuality Enable data quality for WKC? No True, False (default)
installation_options.enableMANTA Enable MANTA? No True, False (default)
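
For example, a sketch of a wkc cartridge entry using these installation options (the values are purely illustrative):

cartridges:
+  - name: wkc
+    size: small
+    state: installed
+    installation_options:
+      enableKnowledgeGraph: True
+      enableDataQuality: True
+      enableMANTA: False
+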
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-connections/index.html b/30-reference/configuration/cp4d-connections/index.html new file mode 100644 index 000000000..e377d7dfe --- /dev/null +++ b/30-reference/configuration/cp4d-connections/index.html @@ -0,0 +1,20 @@ + Platform connections - Cloud Pak Deployer
Skip to content

Cloud Pak for Data platform connections🔗

Cloud Pak for Data platform connection - cp4d_connection🔗

The cp4d_connection object can be used to create Global Platform connections.

cp4d_connection:
+- name: connection_name                                 # Name of the connection, must be unique
+  type: database                                        # Type, currently supported: [database]
+  cp4d_instance: cpd                                    # CP4D instance on which the connection must be created
+  openshift_cluster_name: cluster_name                  # OpenShift cluster name on which the cp4d_instance is deployed
+  database_type: db2                                    # Type of connection
+  database_hostname: hostname                           # Hostname of the connection
+  database_port: 30556                                  # Port of the connection
+  database_name: bludb                                  # Database name of the connection
+  database_port_ssl: true                               # enable ssl flag
+  database_credentials_username: 77066f69               # Username of the datasource
+  database_credentials_password_secret: db-credentials  # Vault lookup name to contain the password
+  database_ssl_certificate_secret: db-ssl-cert          # Vault lookup name to contain the SSL certificate
+

Cloud Pak for Data backup and restore platform connections - cp4d_backup_restore_connections🔗

The cp4d_backup_restore_connections can be used to backup all current configured Global Platform connections, which are either created by the Cloud Pak Deployer or added manually. The backup is stored in the status/cp4d/exports folder as a json file.

A backup file can be used to restore Global Platform connections. A flag can be used to skip the restore of a Global Platform connection if one with the same name already exists.

Using the Cloud Pak Deployer cp4d_backup_restore_connections capability implements the following:

  • Connect to the IBM Cloud Pak for Data instance specified using cp4d_instance and openshift_cluster_name
  • If connections_backup_file is specified, export all Global Platform connections to the specified file in the status/cp4d/export/connections folder
  • If connections_restore_file is specified, load the file and restore the Global Platform connections
  • The connections_restore_overwrite flag (true/false) indicates whether a Global Platform Connection will be replaced if one with the same name already exists

cp4d_backup_restore_connections:
+- cp4d_instance: cpd
+  openshift_cluster_name: "{{ env_id }}"
+  connections_backup_file: "{{ env_id }}_cpd_connections.json"
+  connections_restore_file: "{{ env_id }}_cpd_connections.json"
+  connections_restore_overwrite: false
+
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-instances/index.html b/30-reference/configuration/cp4d-instances/index.html new file mode 100644 index 000000000..e5a07d0ea --- /dev/null +++ b/30-reference/configuration/cp4d-instances/index.html @@ -0,0 +1,108 @@ + Instances - Cloud Pak Deployer
Skip to content

Cloud Pak for Data instances🔗

Manage Cloud Pak for Data instances🔗

Some cartridges have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, the deployer can manage creating and deleting the instances.

The following Cloud Pak for Data cartridges are currently supported for managing instances:

Analytics engine powered by Apache Spark Instances🔗

Analytics Engine instances can be defined by adding the instances section to the cartridges entry of cartridge analytics-engine. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: analytics-engine
+    size: small
+    state: installed
+    instances:
+    - name: analyticsengine-instance
+      storage_size_gb: 50
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
storage_size_gb Size of the storage allocated to the instance Yes numeric value

DataStage instances🔗

DataStage instances can be defined by adding the instances section to the cartridges entry of cartridge datastage-ent-plus. The following example shows the configuration to define an instance.

DataStage, upon deployment, always creates a default instance called ds-px-default. This instance cannot be configured in the instances section.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: datastage-ent-plus
+    state: installed
+
+    instances:
+    - name: ds-instance
+      # Optional settings
+      description: "datastage ds-instance"
+      size: medium
+      storage_class: efs-nfs-client
+      storage_size_gb: 60
+      # Optional Custom Scale options
+      scale_px_runtime:
+        replicas: 2
+        cpu_request: 500m
+        cpu_limit: 2
+        memory_request: 2Gi
+        memory_limit: 4Gi
+      scale_px_compute:
+        replicas: 2
+        cpu_request: 1
+        cpu_limit: 3
+        memory_request: 4Gi
+        memory_limit: 12Gi   
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
description Description of the instance No
size Size of the DataStage instance No small (default), medium, large
storage_class Override the default storage class No
storage_size_gb Storage size allocated to the DataStage instance No numeric

Optionally, the default px_runtime and px_compute instances of the DataStage instance can be tweaked. Both scale_px_runtime and scale_px_compute must be specified when used, and all properties must be specified.

Property Description Mandatory
replicas Number of replicas Yes
cpu_request CPU Request value Yes
memory_request Memory Request value Yes
cpu_limit CPU limit value Yes
memory_limit Memory limit value Yes

Db2 OLTP Instances🔗

DB2 OLTP instances can be defined by adding the instances section to the cartridges entry of cartridge db2. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: db2
+    size: small
+    state: installed
+    instances:
+    - name: db2-instance
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20  
+      transactionlog_size_gb: 20
+    
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
metadata_size_gb Size of the metadata store Yes numeric value
data_size_gb Size of the data store Yes numeric value
backup_size_gb Size of the backup store Yes numeric value
transactionlog_size_gb Size of the transactionlog store Yes numeric value

Data Virtualization Instances🔗

Data Virtualization instances can be defined by adding the instances section to the cartridges entry of cartridge dv. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: dv
+    size: small
+    state: installed
+    instances:
+    - name: data-virtualization
+
Property Description Mandatory Allowed Values
name Name of the instance Yes

Cognos Analytics Instance🔗

A Cognos Analytics instance can be defined by adding the instances section to the cartridges entry of cartridge ca. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: ca
+    size: small
+    state: installed
+    instances:
+    - name: ca-instance
+      metastore_ref: ca-metastore
+
Property Description Mandatory
name Name of the instance Yes
metastore_ref Name of the DB2 instance used for the Cognos Repository database Yes

The Cognos Content Repository database can use an IBM Cloud Pak for Data Db2 OLTP instance. The Cloud Pak Deployer first determines whether an existing Db2 OLTP instance with the name specified in metastore_ref exists. If so, this Db2 OLTP instance is used and the database is prepared using the Cognos DB2 script prior to provisioning the Cognos instance.

EDB Postgres for Cloud Pak for Data instances🔗

EnterpriseDB instances can be defined by adding the instances section to the cartridges entry of cartridge edb_cp4d. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+
+  # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault
+  # before deploying
+  - name: edb_cp4d
+    size: small
+    state: installed
+    instances:
+    - name: instance1
+      version: "13.5"
+      #Optional Parameters
+      type: Standard
+      members: 1
+      size_gb: 50
+      resource_request_cpu: 1000m
+      resource_request_memory: 4Gi
+      resource_limit_cpu: 1000m
+      resource_limit_memory: 4Gi
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
version Version of the EDB Postgres instance Yes 12.11, 13.5
type Enterprise or Standard version No Standard (default), Enterprise
members Number of members of the instance No number, 1 (default)
size_gb Storage Size allocated to the instance No number, 50 (default)
resource_request_cpu Request CPU of the instance No 1000m (default)
resource_request_memory Request Memory of the instance No 4Gi (default)
resource_limit_cpu Limit CPU of the instance No 1000m (default)
resource_limit_memory Limit Memory of the instance No 4Gi (default)

OpenPages Instance🔗

An OpenPages instance can be defined by adding the instances section to the cartridges entry of cartridge openpages. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: openpages
+    state: installed
+    instances:
+    - name: openpages-instance
+      size: xsmall
+
Property Description Mandatory
name Name of the instance Yes
size The size of the OpenPages instance, default is xsmall No
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-saml/index.html b/30-reference/configuration/cp4d-saml/index.html new file mode 100644 index 000000000..10358e3bb --- /dev/null +++ b/30-reference/configuration/cp4d-saml/index.html @@ -0,0 +1,10 @@ + Cloud Pak for Data SAML configuration - Cloud Pak Deployer

Cloud Pak for Data SAML configuration🔗

You can configure Single Sign-on (SSO) by specifying a SAML server for the Cloud Pak for Data instance, which will take care of authenticating users. SAML configuration can be used in combination with the Cloud Pak for Data LDAP configuration, in which case LDAP complements the identity with access management (groups) for users.

SAML configuration - cp4d_saml_config🔗

A cp4d_saml_config entry holds the connection information, certificates and field configuration needed in the exchange between Cloud Pak for Data user management and the identity provider (idP). An entry must be created for every Cloud Pak for Data project that requires SAML authentication.

When a cp4d_saml_config entry exists for a certain cp4d project, the user management pods are updated with a samlConfig.json file and then restarted. If an entry is removed later, the file is removed and the pods restarted again. When no changes are needed, the file in the pod is left untouched and no restart takes place.

For more information regarding the Cloud Pak for Data SAML configuration, check the single sign-on documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=client-configuring-sso

cp4d_saml_config:
+- project: cpd
+  entrypoint: "https://prepiam.ice.ibmcloud.com/saml/sps/saml20ip/saml20/login"
+  field_to_authenticate: email
+  sp_cert_secret: "{{ env_id }}-cpd-sp-cert"
+  idp_cert_secret: "{{ env_id }}-cpd-idp-cert"
+  issuer: "cp4d"
+  identifier_format: ""
+  callback_url: ""
+

The above configuration delegates authentication to the IBM preproduction IAM server, authenticating users via their e-mail address. An issuer must be configured in the identity provider (idP) and the idP's certificate must be kept in the vault so Cloud Pak for Data can confirm the idP's identity.

Property explanation🔗

Property Description Mandatory Allowed values
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
entrypoint URL of the identity provider (idP) login page Yes
field_to_authenticate Name of the parameter to authenticate with the idP Yes
sp_cert_secret Vault secret that holds the private certificate to authenticate to the idP. If not specified, requests will not be signed. No
idp_cert_secret Vault secret that holds the public certificate of the idP. This confirms the identity of the idP Yes
issuer The name you chose to register the Cloud Pak for Data instance with your idP Yes
identifier_format Format of the requests from Cloud Pak for Data to the idP. If not specified, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress is used No
callback_url Specify the callback URL if you want to override the default of cp4d_url/auth/login/sso/callback No

The callbackUrl field in the samlConfig.json file is automatically populated by the deployer if it is not specified by the cp4d_saml_config entry. It then consists of the Cloud Pak for Data base URL appended with /auth/login/sso/callback.

Before running the deployer with SAML configuration, ensure that the secret configured for idp_cert_secret exists in the vault. Check Vault configuration for instructions on adding secrets to the vault.
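
For example, the idP certificate could be stored in the vault from a PEM file before running the deployer. This is a sketch only; the secret name shows env_id already substituted (pluto-01 for illustration), the file path is arbitrary, and the Vault configuration section remains the authoritative reference for the vault subcommand:

./cp-deploy.sh vault set \
    --vault-secret pluto-01-cpd-idp-cert \
    --vault-secret-file /tmp/idp-cert.pem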

\ No newline at end of file diff --git a/30-reference/configuration/cpd-global-config/index.html b/30-reference/configuration/cpd-global-config/index.html new file mode 100644 index 000000000..adbd4e9c9 --- /dev/null +++ b/30-reference/configuration/cpd-global-config/index.html @@ -0,0 +1,8 @@ + Global config - Cloud Pak Deployer

Global configuration for Cloud Pak Deployer🔗

global_config🔗

Cloud Pak Deployer can use properties set in the global configuration (global_config) during the deployment process and also as substitution variables in the configuration, such as {{ env_id }} and {{ ibm_cloud_region }}.

The following global_config variables are automatically copied into a "simple" form so they can be referenced in the configuration file(s) and also overridden using the command line.

Variable name Description
environment_name Name used to group secrets, typically you will specify sample
cloud_platform Cloud platform applicable to configuration, such as ibm-cloud, aws, azure
env_id Environment ID used in various other configuration objects
ibm_cloud_region When Cloud Platform is ibm-cloud, the region into which the ROKS cluster is deployed
aws_region When Cloud Platform is aws, the region into which the ROSA/self-managed OpenShift cluster is deployed
azure_location When Cloud Platform is azure, the region into which the ARO OpenShift cluster is deployed
universal_admin_user User name to be used for admin user (currently not used)
universal_password Password to be used for all (admin) users if not specified in the vault
confirm_destroy Is destroying of clusters, services/cartridges and instances allowed?

For all other variables, you can refer to the qualified form, for example: "{{ global_config.division }}"

Sample global configuration:

global_config:
+  environment_name: sample
+  cloud_platform: ibm-cloud
+  env_id: pluto-01
+  ibm_cloud_region: eu-de
+  universal_password: very_secure_Passw0rd$
+  confirm_destroy: False
+

If you run the cp-deploy.sh command and specify -e env_id=jupiter-03, this will override the value in the global_config object. The same applies to the other variables.
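
For example, using the env apply subcommand (multiple -e flags can be combined to override several variables at once):

./cp-deploy.sh env apply -e env_id=jupiter-03 -e ibm_cloud_region=eu-gb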

\ No newline at end of file diff --git a/30-reference/configuration/cpd-objects/index.html b/30-reference/configuration/cpd-objects/index.html new file mode 100644 index 000000000..06f2962c9 --- /dev/null +++ b/30-reference/configuration/cpd-objects/index.html @@ -0,0 +1 @@ + Objects overview - Cloud Pak Deployer

Configuration objects🔗

All objects used by the Cloud Pak Deployer are defined in yaml files in the config directory. You can create a single yaml file holding all objects, or group objects in individual yaml files. At deployment time, all yaml files in the config directory are merged.
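For example, the config directory could be organized as follows; the file names shown are arbitrary, only the location and the yaml extension matter:

config/
├── global-config.yaml
├── openshift.yaml
└── cp4d.yaml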

To make it easier to navigate the different object types, they have been grouped in different tabs. You can also use the index below to find the definitions.

Configuration🔗

Infrastructure🔗

  • Infrastructure objects
  • Provider
  • Resource groups
  • Virtual Private Clouds (VPCs)
  • Security groups
  • Security rules
  • Address prefixes
  • Subnets
  • Floating IPs
  • Virtual Server Instances (VSIs)
  • NFS Servers
  • SSH keys
  • Transit Gateways

OpenShift object types🔗

Cloud Pak for Data Cartridges object types🔗

\ No newline at end of file diff --git a/30-reference/configuration/demo-openldap/index.html b/30-reference/configuration/demo-openldap/index.html new file mode 100644 index 000000000..f061336a5 --- /dev/null +++ b/30-reference/configuration/demo-openldap/index.html @@ -0,0 +1,61 @@ + Demo OpenLDAP - Cloud Pak Deployer

OpenLDAP configuration (for demonstration purposes only)🔗

You can install an OpenLDAP service on your OpenShift cluster for demonstration and testing purposes. This way you can experiment with LDAP identity providers in Foundational Services if you don't (yet) have access to an enterprise-ready LDAP service in the organization's infrastructure services.

Note Installing an OpenLDAP server must only be done if you have unrestricted OpenShift Container Platform entitlements. When using the Cloud Pak entitlements for Red Hat OpenShift, installing third-party applications like Bitnami OpenLDAP is not allowed.

Demonstration OpenLDAP configuration - demo_openldap🔗

A demo_openldap resource in the configuration indicates that the Bitnami OpenLDAP service is installed on the specified OpenShift cluster. The default OpenShift project for the OpenLDAP service is openldap. You can install several instances on the same OpenShift cluster if necessary, each with its own name and openldap_project project.

Sample configuration

demo_openldap:
+- name: cp4d-openldap
+  openshift_cluster_name: "{{ env_id }}"
+  openldap_project: openldap
+  ldap_config:
+    ldap_tls: True
+    bind_admin_user: cn=admin,dc=cp,dc=internal
+    base_dn: dc=cp,dc=internal
+    base_dc: cp
+    base_domain: cp.internal
+    user_ou: Users
+    user_id_attribute: uid
+    user_display_name_attribute: cn
+    user_base_dn: ou=Users,dc=cp,dc=internal
+    user_object_class: inetOrgPerson
+    group_ou: Groups
+    group_id_attribute: cn
+    group_display_name_attribute: cn
+    group_base_dn: ou=Groups,dc=cp,dc=internal
+    group_object_class: groupOfUniqueNames
+    group_member_attribute: uniqueMember
+  users:
+  - uid: ttoussaint
+    givenName: Tara
+    sn: Toussaint
+    mail: ttoussaint@cp.internal
+  - uid: rramones
+    givenName: Rosa
+    sn: Ramones
+    mail: rramones@cp.internal
+    # password: specific_password_for_the_user
+  - uid: ssharpe
+    givenName: Shelly
+    sn: Sharpe
+    mail: ssharpe@cp.internal
+    # password: specific_password_for_the_user
+  - uid: pprimo
+    givenName: Paco
+    sn: Primo
+    mail: pprimo@cp.internal
+    # password: specific_password_for_the_user
+  - uid: rroller
+    givenName: Rico
+    sn: Roller
+    mail: rroller@cp.internal
+    # password: specific_password_for_the_user
+  groups:
+  - cn: cp4d-admins
+    members:
+    - uid=ttoussaint,ou=Users,dc=cp,dc=internal
+  - cn: cp4d-data-engineers
+    members:
+    - uid=rramones,ou=Users,dc=cp,dc=internal
+    - uid=ssharpe,ou=Users,dc=cp,dc=internal
+  - cn: cp4d-data-scientists
+    members:
+    - uid=pprimo,ou=Users,dc=cp,dc=internal
+    - uid=ssharpe,ou=Users,dc=cp,dc=internal
+    - uid=rroller,ou=Users,dc=cp,dc=internal
+  state: installed
+

The above configuration installs the OpenLDAP service in OpenShift project openldap and configures it for domain cp.internal. Subsequently, an LDIF file with the Organizational Units, Groups and Users is generated and then the OpenLDAP service is started.

The OpenLDAP name is referenced in the Cloud Pak for Data Access Control resource and this is also where the mapping from LDAP groups to Cloud Pak for Data groups takes place.
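
A heavily hedged sketch of such a reference is shown below; the authoritative schema is documented with the zen_access_control resource itself, and the property names ldap_name, user_groups and ldap_groups used here are hypothetical illustrations only:

zen_access_control:
- project: cpd-instance
  openshift_cluster_name: "{{ env_id }}"
  ldap_name: cp4d-openldap                 # hypothetical property; references the demo_openldap name above
  user_groups:                             # hypothetical property; maps LDAP groups to Cloud Pak for Data groups
  - name: cp4d-admins
    ldap_groups:
    - cn=cp4d-admins,ou=Groups,dc=cp,dc=internal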

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the OpenLDAP server, for reference by zen_access_control Yes
openshift_cluster_name Name of OpenShift cluster into which the OpenLDAP service is installed Yes, if more than 1 openshift resource in the configuration
openldap_project OpenShift project into which the OpenLDAP server is installed No, default is openldap
ldap_config LDAP configuration Yes
.ldap_tls Set to True if the LDAPS protocol must be used to communicate with the LDAP server No False (default), True
.bind_admin_user Distinguished name of the user to bind (login) to the LDAP server Yes
.base_dn Base domain name, specify through dc components Yes
.base_dc First dc component in the base_dn Yes
.base_domain Base domain of the LDAP root, for example cp.internal Yes
.user_ou Organizational Unit of users, typically Users Yes
.user_id_attribute Attribute used to identify user, typically uid Yes
.user_display_name_attribute Common name of the user, typically cn Yes
.user_base_dn Base domain name of users, typically user_ou, followed by base_dn Yes
.user_object_class Object class of the users, typically inetOrgPerson Yes
.group_ou Organizational Unit of groups, typically Groups Yes
.group_id_attribute Attribute used to identify the group, typically cn Yes
.group_display_name_attribute Common name of the group, typically cn Yes
.group_base_dn Base domain name of groups, typically group_ou, followed by base_dn Yes
.group_object_class Object class of the groups, typically groupOfUniqueNames Yes
.group_member_attribute Attribute used for a member (user) of a group, typically uniqueMember Yes
users[] List of users to be added to the LDAP configuration Yes
.uid User identifier that is used to login to the platform Yes
.givenName First name of the user Yes
.sn Surname of the user Yes
.mail e-mail address of the user Yes
.password Password to be assigned to the user. If not specified, the universal password is used No
groups[] List of groups to be added to the LDAP configuration Yes
.cn Group identifier, together with the group_ou and base_dn, this will define the group to map to the Cloud Pak group(s) Yes
.members[] List of user distinguished names to be added as members to the group Yes
state Indicates whether or not OpenLDAP must be installed Yes installed, removed
\ No newline at end of file diff --git a/30-reference/configuration/dns/index.html b/30-reference/configuration/dns/index.html new file mode 100644 index 000000000..c72428280 --- /dev/null +++ b/30-reference/configuration/dns/index.html @@ -0,0 +1,11 @@ + DNS - Cloud Pak Deployer

Upstream DNS servers for OpenShift🔗

When deploying OpenShift in a private network, one may want to reach additional private network services by their host name. Examples could be a database server, Hadoop cluster or an LDAP server. OpenShift provides a DNS operator which deploys and manages CoreDNS, which takes care of name resolution for pods running inside the container platform; passing name resolution for specific zones to other DNS servers is known as DNS forwarding.

If the services that need to be reachable are registered on public DNS servers, you typically do not have to configure upstream DNS servers.

The upstream DNS used for a particular OpenShift cluster is configured like this:

openshift:
+- name: sample
+...
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+

The zones which have been defined for each of the upstream_dns configurations control which DNS server(s) will be used for name resolution. For example, if example.com is given as the zone and an upstream DNS server of 172.31.2.73:53, any host name matching *.example.com will be resolved using DNS server 172.31.2.73 and port 53.

If you want to remove the upstream DNS that was previously configured, you can change the deployer configuration as below and run the deployer. Removing the upstream_dns element altogether will not make changes to the OpenShift DNS operator.

  upstream_dns: []
+

See https://docs.openshift.com/container-platform/4.8/networking/dns-operator.html for more information about the operator that is configured by specifying upstream DNS servers.
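
On the cluster, the upstream_dns entry surfaces in the DNS operator's default resource. Assuming the deployer maps the entry onto the operator's spec.servers list, oc get dns.operator/default -o yaml would show roughly the following:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: sample-dns              # from the upstream_dns name
    zones:
    - example.com                 # from the upstream_dns zones
    forwardPlugin:
      upstreams:
      - 172.31.2.73:53            # from the upstream_dns dns_servers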

Property explanation🔗

Property Description Mandatory Allowed values
upstream_dns[] List of alternative upstream DNS server(s) for OpenShift No
name Name of the upstream DNS entry Yes
zones Specification of one or more zone for which the DNS server is applicable Yes
dns_servers One or more DNS servers (host:port) that will resolve host names in the specified zone Yes
\ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-context-deployment-basic.png b/30-reference/configuration/images/cloud-pak-context-deployment-basic.png new file mode 100644 index 000000000..1f01ab267 Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-context-deployment-basic.png differ diff --git a/30-reference/configuration/images/cloud-pak-context-deployment-full.png b/30-reference/configuration/images/cloud-pak-context-deployment-full.png new file mode 100644 index 000000000..6cecab493 Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-context-deployment-full.png differ diff --git a/30-reference/configuration/images/cloud-pak-context-deployment.drawio b/30-reference/configuration/images/cloud-pak-context-deployment.drawio new file mode 100644 index 000000000..19eb76b5e --- /dev/null +++ b/30-reference/configuration/images/cloud-pak-context-deployment.drawio @@ -0,0 +1 @@ +(compressed draw.io diagram source omitted)
\ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-deployer-logging.drawio b/30-reference/configuration/images/cloud-pak-deployer-logging.drawio new file mode 100644 index 000000000..c44403b47 --- /dev/null +++ b/30-reference/configuration/images/cloud-pak-deployer-logging.drawio @@ -0,0 +1 @@ +(compressed draw.io diagram source omitted)
r/CiJcgyh89Y2FbP7mNWSlcX42hWVYnwi1rfiBfdsl03xYUgbotuLkkMkXqoDInFoENEf9Hh1SFX088B2PedS9Oej+XWonJ0nyW/qvZOS9GxuVC6unzAeg9vlRtMHRdOlYOFM96D2lRjtYxjBuj4PAn9VavVPfIYTFPpu4dRY6VRd01/asy6MTOBCeq/ZY7CYuz/hiisYtasn32o21rqq6mu63YtoZyihLdnGDcOb6eT+EMqGCL8r20GIN0mS9UXfiVdjIlfDg8oDM+BMcDoih3SkHOqzYiLeUSt+YU9QEY/6+fvR1tdolpInzmJvZM9TsRbZb/tF577SyziMY33mZUbgwLQizZ6Td6elZIaDuPYBu/Kqu/YchMX7ehXQbcKPfLm6denGAryNCUIPxfBohEAfEao7zhSqPyr3bzy+RQVf/MfFynJJtRDpvahv8nVxNVJM8VLVqrLW8E9J2gnXmH8jW1C9Vrau8IrrIVP/Wd0g0Lc9qX62n60yyJdtgZl6EHfJvoP4qdQW++mvxIZed8o1thgwOdx5hMryfdTXnTd0vijx5kekui4eFio9aGNaDNlSHKPsRLlO69C4Fu4WXn4bSwzOmDC3/O2mp/1CYmAKu7+GFEqXdh40WkTxP+6Hsdzg8rQZkm6EN77W9p76xSBJQHU7Ch5UckbinAR9+xTSdIK5s0ym24dwJOHovwL0SPOBDzD6mYwIcdYtwJoHQcS+h76X54RQGVsCXZYQy9wfUpfnQloFbKeZPuRGFKi8c7kxv/TM1xQ1MKLDjBkG/4QI+7Uo/BP1jKZlnW53c2ZFyi6kNu0FBCmWgnv8wnj2qju6jzR5fP94BdYaLBBggwuX/IFtXhj4VjSG+M11oWi0icBirzb430AZJv6bjw62QWMKwsBeDNg2kKZ8ri+bIYOyiC8f4kt45YSKU6wlFLVPmZZqwjATl/Ts9/yNjnXhkckTy9B0Mf+ix+NvA1Ak8Gss8YHKHBvn/oU8/ZL/9cX5OpJKItroF8duHCMtDGGHRYeJGKMwBEtbF9qemopNG5+Do+cbqS3fnPv2hqWGMxzKwIsNIGzJTOcZmnrkDHR+qW4ERErjIszehlm3hTGBseuvwYSg8yaYbHXlkw/4j+t4p7V3s2fZ/0FfV/FKKzTFSz5s/4O+gAwR1RmV/x/ujrs5/l/tKb7BV/1fBPr3wSNftvz690EYHEK5/0HZ/hLysc+35YXY0H9/BVsUgG/c/35FYOr/gAw9OHTW2Vb9O4xB/+e/g1Vel9V/t4Ch/wf77+vx+u9Q+f9d4G8g0t9lgXG/2PzFeP/dxd/PCFRn/13TrfUqoYURgtb/7eHMTLX0f8n/HiTu9vzfxz57Vr8GFerGF/MQ/4O8C/G+mHTsujzdwHbS4PPrdnf/fX6t4gn8WPdx+f7LgHdSp3GnxEneGeNab/U4vH9Pxm0b+/cDHfgDE6dtuYz7kLFj954V/WZ5Ee/d9v87w6erS/DNbZzeo/E6vTfw/lrUV/4+EvN3wc//PQr93yPgVPEW/w/6+ffrC5GH90nY2mN064RkoRzBumu2W3Fu+f6kiO//WJr9hOB4gLI5/f5bvpLBmZ6FDfqdZfiZRWgANITJ8GhP8h1FvSUX+/t8v1CGySpCzF+xfchB86fkWlGqTOf+7BzfzR/mU8LMp5bs8Jmql2prSxkEKsMrk7jdkbyum7yDshAmyiOwcRGf61EfHGiy7HY2ux71tyfBGiTHM8Hg+mqe4s8DnHoqUhh+PfgHQzouAvgizSaqftluv/QIgTxkphYJyCFlt0M9L5+Sfi8/7Ll+ExojJH9rRJVwNwKgBNNf58iFh8SEdeuldchJqmARQQS3BHK8ESoTTKpNpNQJoPgnQTmuusyBAv5Kh+9SAzM/nV+3zJVj/HVqhEY2ZBU4fLTCTgboT1V2Rze0H+ngv0J54qo8aEn7tC5ymMLLCRbhhbLpKRDJx9H1D5+baYv0LJqQVzKIGqt/OdEH024L0pFpS+GrAND0nqDIbg3gA+U9N31OeXZGFY17jsTi9CkDBCqk0SIVrgSxDdhVtRWNX5O8Varc35KmXirA8C9NDREr0YrXSWmmMSUV5d3FB/ELuUxfBfsrYfpjDSl248MLjUYN5Uo2h7n2h3Zb1CwCUEWM79ftqapP6HGYfJqd5Ysl1H/Ptc8/krFFofjA8ck3osUcVenUjLNN+o8WxeSvUf5HenErt7w7LlDjgAlsWFB1umGoWaaWWZv/bT0QmRG5XtWY6ir3kZcjiBPMUHmWPzqkIfUnuYowO5z7ZcLG3+zQ+GUveYD+Tf3LwP63Ll7Pmzs/zbxqGdFyLG/2zUd0+cWBu7haSd2xwH55NBo9YCY1yYxncPsKP3EWvCoUuUKfAuu4PNvi0gsJKvU0tWECfoCcMIn43fyibZK7j5P7r7uyhNVsLGH8rKqv9Wf+twMQ2TzD5lKrEGY+GVIWWE7D+ZvcuZfDMGlfd6zOngomv7pBt8f+UHdpLT+Dz+0UKewnG3Jo7bVCO9M9WHslcif8hNlZ14tR9CqCqFgv69uv9/xNiFxP+vgcpwSK7Ewio0dOQen58NF5bf4az3QgsGeYo+maFh5qPL8ge9VCW/K8tr7mqXpYp0WNM3KSXWampYuxaHGuzfji9iwVk4/Md7StrHN6jnf7VPqUv8WW1ybkpbOqv24s/vDA93UdHT3pIui6zrO8nochOFPqWSjIcsrvpxTw4sZf0llzyN9Iz+DnhEtRMWn0cnbblJiSO0vWmfoP/zNj8fzUZ3pZf2h/Y5blZTE92N2b9yBqE0KtKKDvzH443pJ4ybrlSce39KMIDQg9PMiqkrGdYVMrZAe7/u60HLH+6pc4Xf3mNloIw4Y6pQnz0fN0jAxdMPl2wnT6iktq5pC640Wlx2BwRTfSijq4bXojDhAyDwFRE9wf0sG/58KW1gYGRNOizws1szlkbJC72GJgh1SwycHPy5ziuvVoa8QzdBmemUPTNv2eX2nMVb91SnwGXUP3EYYbWfacalnUsmvKnDGfsJMJna3zzV/55xko4q8ndamtC4TFWBB/+YpHQNTKGEif50eIWAHqM9KSCeCd/Rq81ywdYuNZp+oD49lTwMi1yP0AgAyum9h3DAK0/8Dm328bxJOrWE5yJfv+CayXfPxxs+edwNDuGnndouszYpjBLtVlJS/bvJSZYNV3oePdUCEwPqGqUooGBui1ffWsIItcNincf5VfhC2cPxZmnAd0jtJk40CzmRSRmX7m3/DIy/lB0EUHdc3Hsf5LUOiE4NOgQCG+7MWc7nkU/1+WrmLBUSWK/hIaYBkI7i473N35+kf1vN3M9HQCVVfOuSqabCEwIAwPXR0nc+RXHCfra7GL9vLpo0nSmedtFrbl3yKOd97w2lYvonSP13mpnLIbJQcSqgL+oC1vXU9aBnEpmKbeGWVECBw9lIZYIR/tsoGOiRJkD0z1+dt4LHlJcRPWPwJcSdt+mGp1sb0IJ/OZR8iY/DiQ0ffMxcVsmb+1f//1Aj711xgXr1pdIVIqhikw+iIYMewEP7nPHsn/Yl
nWX9mIGKTUYpFnSbcfa0xNi9b1LMtItCzppBXknpMEdW5k5zz+OtM07/+g01qkGfiX38V+qvKcYjqipzjXUw6Ry2vwz5uzU9DPC603r17fNBBdc6/+dka1TCmu3WLFVWC7z6T052EY2KSAsG/VD0bPTq+FJL7x10hoAz/pqG2vWdXc0lVNhfxoGUp+KPAWV4yigYt+vlX2UoBrFBUm+H0xkw+sX+n5nDet8iHrz5ejhT31WmcM/mY7izcYmUU/sDg2gc67LB0Bct6UXx3dIvtDYn8391nc4G+9IcD4oYWFGSDtHtEMOv/lkKRRf2b0N1Kg+BUGgHocyR3JUmeXFI+bENWOeDnl4k2PaVfPnK69/Q16bWcW+uW3trHuBi0Sk2cWPTcIvpjp9U3b0GtOwLA3bsdd9DBkLjBkuYZLxICUhsMiLfCHnZPGpmd1xvi9ls2oscXwVRrlwaojRftzBLYxuH+7rDIZJU4GitUCr2sROPOviobM/wLjNgi4J/fuBxY4uUgAdzxWsfkpX2m2LuQzsH+jm5m/VFxxDuzTnZ6rT4vyt5m2pw2qlr5UL2H08vL6PcvHqs3QHG+ecgJhc7soEV+W0tCm2+9fqfpufYwjiRaCRmJw1DMqEpLPtztz6gZ1L68DFaIiB1OXu6V2x+4rydbXM8lbMAwdO0AJz6HrgliF1refbfecv+IwvqgTQpjE0psBQVXe5fe6EDAEUS6mZv2/RtcXdIGXIR2DxPQiEGUX8V4W+SpbGj7nfiurpoQf263N5kvYaMU9CN4UZ7elZA+HpWNesakHu0G2v7/hgkdjwKiB8DrTiJxFKjXh/xXgG3QAgmcaCuK8K9SjIw9J7s7uKuf1m9ZWHPJa/1GyGtAao38LcBvWBzfwp2SZSeFQ8LRxU2HsJ0+pRCr0i+oIYW5Ff6RlsZwgiyDsWyv2QqCz4edHDHPu2rZmFf0ZrUwNnhEC5rczPgSpxr3MMMfmGUb0tPg3n0pzvdcxx6lRNQD7ExeD6A1FHC1fyep2J/oiOsDabRpz0ANr3WBgMYmlV0dW2bjCxZGHyx4R/taAZE+yGCIukg0elspXTE3G2pm9IpLz/7ptgv8VFGZz0GSuknhJTYyxamvHXB6EXJ5M7b2DUJtBuKWuFStDrMPMpF6eU5rk2SFkcysKeGsSHCvIUh6frID4hW44j5NpT7Z2TPgzCk356LrLIYw43GMe/D/7dQJZxL82IaD5GnitMp42IbYlMXH9UnzRhGSgGf2w69Pm3G1KUKa1UvCU/b1ZZs6Itb/9hh8y28W5AzNCDR892HJJNef6pwYEuO4IsdadfJWPn1bsC/Oye+aiYkx/CwDDB2ALOiYvsfJreEjsXR+DdIvrTQ0RgQfHBNRSppNuaZMm6NJqXjNixX+6e4aKfVSI5it7rgB/cRFP0+wsO0bJRyv/AgedSJPnNAZae/fP+gzL95pw3BoGd7L1l/j/8oe4pYKgwGGQv8nERu/106rANd12NAifmvZVs5PmVnr7OlKAARDTwwFnLltW9UvNPlaVjE4jHN36g27FngfIaP+C/HZEdy851ub0n2nHQyv8S91/wLYTDkNtdOxKX51KvPmFZRkPFCEDemiYZyd/oY5BQuImg4L6RQWJBC3LTVbocTxm9fwU/V2jbY43LDqRvx/4WSccbbsIEcgktSJBWqBbRgn+ETWWx5Yb9rU7yn6SWqQavSlTnVVDWTIvRAfSgB7737Yx6f8ybZmz+XO0qBejBFVOXRWNlkVFGAiEGAYTsNDKlvGaIzcLFIrQcmLohjKX/G6SAimGx6+s3kmPpjdo9+RQ0go3Tg4j2XXnikp27gTKTJHY/T65thVO9EseurvjdS4GNt5P87hs6TQtxs1BgxpuHfk1TcLWsmZrju0yYYPqhA82CGuO1roNlvC9jEETV1v7HYZvA0vPXxlxMyxHVrfTCnblyzlHiiSBPYYMgZFZAy+D+ZXKkn8WkBvwqOLqIGNZVZ5NBBjl5U4gcgG/rPQAudjohCulrqOxJzMgcof7T5ny33C0ruz99LoF9VPurpNsIlFSo3qVSlEH059RpVDYh6KuC4UMFzCnb2e39Y/70CZqVZvhxBQpKK/teSIaM5s5+UbFvtJxrG/pA8aH0n8l8iT2t4y13OQYcirO/7OTUf+6hiARdpbyBohqvdv5BsSfBmYXKD1AkVc01tOnpYE1kZqLOyoQfgM/n+Oim66PeXbdfOUikH/sR0by4MH+Yp31DAMbQlc5x8vlkkoFrforY4QqQq4WiR0jOuVjyRV1QqG39TciBbMxBUk6OYhEOSHek1cbNgfQTTVZyzRtsXw870mvB6gDyXZ5quLXh7H76aVpuNcVV5SPu+P18RcAWxlWXgtr+331h7RKgFCx37+09V/yjemGzg2x7cJmLXYaaH/hVTkIaohjjUPrKtQpE/b5Kwi+exRN0phn3S7vAkHtEGCshZ+667mufl/PMZgNCUPUgW0aePHf3FE9YRojuT+QhAnf9KPQCqZICHj40USDI6zg/XNK7OxIcuRPCygdCPRWNY4ocjGe01Z8fikn4IO/JKdwzmix0PJaH2yVGSE9j+hi+S7zcL5PMIlh6yyif0Lr58e/3qIBGomG/mvSv36o90H3GT28an6/sruCKkM/NMf+EnASjA3iJXCYJKiE2Z5XiQ7IHn8pPhT3uv/D0aWasKOJ4W6l2rPf9R8w17E3Cj26WdApRWsyabs/cQGVVF+VGaRfPinYvorNKYDaVLIa9tz/UooFKfzP5cVPQl9FiP2GcxxoExiPgK9lxUPJ1AfPbv1tzzSi3XjUxCFVeEW+7fMpXTg7hKuqbMwQi2D9v9uoYALvN374sF90Zh0Z6zeMEPDGWEUGeKkW2/C+Cd1p1sLFKmuT4PegIkqNXsg+8Ke8P6vl6D+1gRe2Nr+3YxpKtUv/Vr3TuEKBASIBgGmmONmmaaCU+oN5e0fxD9hPylGWiX6Pa0ozV2bsS4xvaC0+V9AQ5JxSjpV9P4ouslekF193a67SifQwxSrC92GV37kfcdZABO4jyEXxW1FekFupfSJpiphYnxZ5PvjWzkHhwMGWx5EFCJlzdhJPqkkIdxJNIq8tz9NZmXf8jF5WYucpgJySFwzWPKI49XX79JxfZDBbs5BYUJq6R7JLL2Jp/AvptzPp4rbwo9wYH2mt7SlskxZTvhuBEwDCcCRhwy6yzNPVJ0y4RQ9Dohl0LAvPq47fbMhNlgCHJhEYlMuxY9tqNwCqK42DWltUfhnrrjCZgxJ7jDyh0OseYJ+PNrGNitGQxfaaFabOnCUwSbVbAdoTUkvB+N56BC75yt22uhxJuxhq9Vw4UopovbBKZmVv7sk6TJwJU1o1W+87mf35PY+5mdKg63HIUakJYtTRE0DLBg2uLaMIRW3ldY+zD6ibof0pfqSylJBebXHawri/WSLXbFCHRfMcp2GlOmAaX5Kp421zWzqVMIbq3XcvDOIXy3eRD03+2431Mv+w3/tn/Jv0+rKcZwtlN/b/5kI9H3rAbyqDwj7rAT1GD/WFMAc+acGVWNbFU
LufZtyCro/bLz/7x/TBKqXl+xkxsjkFRbhR+6CSr1n3psmWZU3t/P0NOVJwYWpVjq9vKp+G4u1vGl1e0PJTdFe574r9dk+WQa5rN44H6+lad1EYXswkw+O41bXUAjvpoECB1hXv0/5xtbUMH321rpeBswnxT+2sZKxdrqVaMkmRB92tBEITLPWqTI9Fcy3+mdo7UGnlIygrvsnScSg4U8sorDDIdnp1Wmt5+ZqAWnUSTqKjgW2+PnuuWInK/iVbY+l8xVk0K/d9QXlyxRL7jnVHx+Mja4T+hCScL14kvswccxq2jYQKq5tOq0OYfxW5yOklz2FrY1l7FFU77Lwvq6MPCrjohMhExzvd+STA5PyypbHoKUS9+ClOOAVu8OH9ozj/+FkIxnzdOJrCsnJaeAIvp1/aAxaOQEJGDJzJUsbjEKblelexS8ISUNFeEMnyR9NWxYytMCO2VL+GwI0dN/lKpt23+wcG45C5ldy3KfaT6/ymq6/4jjQO7Rr8GEMCkVCNiM02S+SPW5KimFrvgVRcKE+OJTcoecL+7ElsJ2JTw/JlKNBXmzfZEMEArAOgWrM5QszaMMFUotVQn0NUUTjQ7+W1MQYaKUdfKOhaFFnPYzo1fH3KxRAPJ7s36aYQYrruFwFwp/1/aMWiycQwVjZVWlYeIvKPqapucZ7g3mnjqMYI+U4rj6VHsb+kEjOBecMOBJSIHuxIw3xZ6Gi7ieNT7XwV2ktXy66yd/UZ0i9CE1JoxvD0/qvnmOuq0KLVB54pyIHHyTgq53/rwuf41GjOuI3E5WkPT9+aW7LLBZCP6AP4tVnbN9C183hdiJtoPd4gVJgLXU00ZREY6kVRAKF/4R2mCO3xbNQdrVi89fquMu8HsZ6ty+myEM32OFj7+4W9FNu9B6qQOhABoXC0wKAXj1r9zN5q39Y4188DulcjnBYU/x46hD/IQ/xzJxyp/T9GofnrLkR1STm0mX9ZLNLNGoqu8by5pHiBuglNi4CoWrmn7fPZLCZWTupzxDvPjnByd5Ju4NjpgPvzxxHOr0zgkwQ6zFTEre/LhhJyDELxReDHVfytiiagrz2gr+Prsl2WoyHqHzR12e5atAvuzG+inehZvdAeJwmC2IuP/495tmfgJkiGEZWkioGPNUxj8qBoTla1j8aWFw8xHFPwRI/S/qc3J8RAmR4qobqsLB4kXzcW1e1pgWXt3CIQrEVSVO7RSe+eGK4wUZUGd5wSlvHPlzVI+V9eK0Wpi/dkArsAWdRRimBBSpnjduVezc5dX+NDH5gVCJmSo6RuPs7OkylJ1B8/z8547CZXcfgy3Z3wYFRvFCrH4XMRWk+2ZI4EfbXhQ/YWQgGf0Gw8Ep06nwRLB70WG7qM361uUdjazkt+V6zzPp07c+a8KTFIVk4QFVj38uhJFR6+WwXZmSvc4jrxQO5A4lGOMW3euvRaVa0YpZ4sSh7YzRk6C7U0J7W/0R8J74MEHH11ayTQH34VbfeuSgnia5WyfO7iOCiLps30kpbIVfeuF3myFO/o4b/Q2hyQoLb1KKr3Cn+vyfrhQQuW1nO8lbXG5xtzTMxJ4vco8j0IjKQqPoL9sjx30vWXrWH0ig2fj7QCQ5hbh01/2UltxU9a7k2D12BPD01h9JWT2zWkmV2hYeA7Nix+OpJZHPMj38leimwoDBemY9RaiLcgwINxYiR57pHWwzWaVPDVNt/71Tlx4L6Z5tWO4s/r5JlzGEEbN/T1xFffPqX6LSHcqntfkunVWbHdWpa/8Bbe9RUemFg2Tf/j1tlNAbocz6arcEbM955NW3SyL99vVIFMA8sTYoZwfy/RLEWg5R/c8cCJOf+EOtKRnnBrnL19673E9/9XDyAjARTMPsX5oDCT+9qlFzLLb5ExI1cG8POWv+7PwWoN+x48SyXc8CjWqCh0zY6qYg4vhI/aI7cNdAGBG34ZrBoaHMYrtJVBxeb6jQxdcKVwr3oPXqEDTqSLgyOuU7gyS7MUf8pHbguFjmaE7CM63o2C/v2tIAaRxh0ArDlJfAM18yWBKEWmNpyaw7L6Adag4r+v7kJ9vbmxRLFWn+Qnttr7sEZwRtLbJ6N3NQBIBONUun7x40zWq1Euqt6rkWeqALMsf6uZdCb03OykOj2/mfalSIfchRslXnuy5ST821PVfRkasLXJ9TLhKQCCV1voQ5FAqe4Rzd20XYnYn8kNa8m/SvPPrSe5ASJI5DQ92jhzsYVZ9awT8Zn9kdUOlHHmesdZWmRqzBA7YaPZC2xs0Aj5KfvBgztTkJXlEZBzIOP3A9ONMEZzBv50MicJZUOJ6kwrtDpOD2H5C1mc8P2oNBIMTg4QYCPmFEGHZg6zEj0F122lNyN7QkdQCvFXRNRICrDydb9v+1/ekKqR9MriyjS7xsRqT/4JNGOtQSitSZ/pLpad5MIThfHAAoZ5ciXQvWi3fK+pZQsLvvNXM4Houh7JA3CH7nToDBMRytx+CRnr4QWkYtksgEFnF1l1JVVM2aGnDbARLApTOA/4UVpXY+gwrFF4at0/MlaayMfJEZkiD6fV9MCkbZSkhq2IvH4Q/iZCPBo4G4hCE7x0cRWe3fUph4mNR3qVQYxpV16Ojz5BaL/4VjYuyA6zzGD8ymbMja7tF/3lhbEEJCH8acJ8j7T2E1nbYja+ncyRopcFZLooQoTBNRKfDET3OWP7iVg7MpqMOxHkq2Jc3h4tDS0eUxlwTYoZeC997foe6Azp1/ZPK9ZUFBLTUjYHbOHkMlrqQqPPAmFHQBil7ayUet2Zbszs2UdWqL5UeHVtjx2g6TDWHwCxyV8d9CedlVeYuU1wAKmzXwbD7n7C6LJn6jVzck3hvnwLPPkv3Pf8pY7oHiFDcyfd/IhqhZVecCLP4ZOjEhxfyNl4/c4P3Qmzf2OWPM+1W7E2s0/+Im2gqbwi/O2IutlfKbALEUP3w+8sKMvh1BNoXwPWsnFKAugR1mrha3Y6kFMbu7C6pnl95H2jXnYGQTCejeQ0pJA5t/bSL39ZIEPytNcKWrU1lY3vtzFLjx2DYcY9kJ8PemBD49X41z9rd4DYlZnh8lb5RTkg1yDQj/U3V3TAXgI/rckrsed+pvXi3AsxbISoexZaPrSq41DxP+Rv51+pNC5oVgnOrUm7LCVWah+V/KNchfjl/49TA8mNbPvamV9KQCzZEX1gjangJ/MJLvKv7Tqmpq8pDRDdSvKP1UFPBpP8ZVtc7q849Fc89Uak1pZ+GljDtbhy37Mo7YiEusYme+y+JvZjp0fOdiCP4GRpfuIfMjmN/vQBM3/vWeIn7cX8XCXDz54LB1HzHIGSWfAcEZW9tFPW3XVUMv4a2aAx9BdiAHENRYJptL9RvqHpNEP03IlZVRdCQUa6wdsP15jGt/0elTRYjPwnkZ+DXdsVXt1En4V/vSL0bKVPd2LxOm/bVIdTCn3eG/o1zoCiiornHMu1N8A2cx2Vjest2N9svlfy/irjUkeSP7/lGJRVDY/w5+4gaEnrWWRMBOBpxv1XpUOiAppJx1hs2fGUXT3ltAbOGSgW/vHwzlQd6ama
j36lJig1P/E/OEgF+L4fEO3IsZXG9M/xl6pXjVg3jFaE+uV7N93VfZ79N+tZokaGplIjch+Luq+RQeKQVUo8i0Hgvg/4F3UFeqxTwOmKC2oAs76mWJBO45bmbxPiEaE//TuL33bZCT30Q05s5Xgq7QrfGDACd0fh51vITfg1/u8ySXKanb2bJ0Mso2kso0BZinBQEKkYVcQ97T5bBYDSPWQmvTdX/L4bIlHML8x7CQdtiot3Xxzv6mW8fUC+N2AL5TvJSvGsMJWfm01yUxylCYSw05ptTUAe1velis8/oYf9l19VPG8AYeAAtxecWbIGzEXLi8S1U9cNHR8x3o60quIGRjrLvyFwqbUyo1VCp4s90RHqXsMU2UeFxkqxqoEdnel8iY6Qqiwl0F+yldlodBdWJWz4JxRJBF/UGBR6PgZ8ssAv1gP2VN9yxO61A2EHdXYPMdwMDyQbC6rIbM2pN/97yr7XDDermrhl67yqBAipoXvo0X/lO9eWXadj6NFcHTLOKScTOiSr/MXyab1RBCezCRZbZNN8wk2GJeSviEe1QWwXCwxixSYQUuKcfNS+8lmAXYCm6RyLBq8OPkBSfMLSICjaL8ntr372AdzRKfht0y33AB0aYfCfeJPcUzF2rtr+9lnnloZatao5n1b2lDBsJ+Lvpknvryy92H0nMMZggX62GIbA0JRfUWIkN0pNjv5NzHwydzx/HS8ZtPl2OoTCa2QzR8mvO1ZG53GUfVvY45j7iPkHmCnp8mrjAClndJUC4+j2vd2j5pkf4Ae0WMSSVPjpOcVbNALdxw6P3xAmK8eToD1VjuQo/9YATGSeBNltX2DyyeIvxyn8KZcP2/SvUlcFKiPpJ9USe3eshL38OH8MEB+hgK0ni1MZ2qS09mHJw6udJdI0MfrHeD3kPEy06iHzYgKzP9EdANKXlfhUzFRZ384xCwZWiW4n8PQwRwR96LhbYo4bHFhZFMcL+POsOCbbJslx6HXxkpEpUr/z0Sb+knrJa6YdvxaJcO3VQKNe4YDz+OrlTY7lyv1wUw/RAy6i2hFiBAFvmisu5VdeEnxt52I+K5bJ/Qm/IBlYTfLcqOeiIIDr1PZY9hAU1Wh7CSV8iG3sFJih2bbWV9ImAe2tCaPUjhfzV8N4cMlxck3WnvQu2zcapn2/tjd36wfxWAzP7uC9CcfcDYMkm+7GW/Z99Zh3NbvQwpNuFGCE0uHleKnz+YYVMioMXnIiPV+ljnjEz2dLsxIhnQezw2mys/I8f644bXHCqfZYYCFlHXTdE23C+1AVmkZbpXt7+Lrg3sERVSIuIKp0eR1wX+NHJbrfidVFTCrP2mmOPXwfE4RGvJTakiA99I1CFapx/mD7317wsHB/P1DBzUFsehN0DdMV4r/sRhS/twziK3YL0Gz7YzZd3VusYqTolm1JmvGyVcwG+0rebfFZIkMk+bGsvzk2UDscd6s+zTZJPlQdpRgtYa6wrvuS46sd29eTmZJVoIwb3K1Nn6CEnJfAF/npg4OvDbGq4THF/iqDsQgBWX692Q3IXwwz9Ec5bRztcIcpneragJ0webF5pYgTK19A7nTL5L8KjuXju2b9onakk0KzdiA3/tU8G4bVo80A1kmCxZLy/YCIRUCugMVwvFbfB3CBwOejJUpR4A+s8Iovhms+eoSrwDqGcxBrE6XfQJjtCxo1RpRvf06qVAjpOIVmkBwkmeC6qCwezLP1V7vHKu67dPUVn5LEvd9aY7ybpkVZDcIj19YNx7Wn/JD52HFZRcDCCvyD9/MTE0GTBa+fGyKw3UqHvvdpheSaTpGtOLQtMhcS3z51SZfSbak8/eIgIkD6OG3c1uOHyra5WVMYEzkCQPDi6gVsn3gRjK0nmILfYpFa1804CPCl5OyDLWh0uZ5qFGpqht3T/7mH9GgGg6ePZ8O6G2HOehih/hk6xP8rGYAx+m5a1c9QZVRBrK/fU5LsrzLkxWjTzyREnOOLtIX/t7TLV4oXtvY/Z1qL8d/yDHrwZ+hWG/lvQNMub2a0KHGiw+XKDC+Hpr9cxWvCS8F5Weym2J+awgBpIa6+0vsgdsU8qegGVZqbkiEpYVs+Av0thVqxg9IcNXkvQdNh+C5Zqv1CJKpe1jxwfRjgJCSjn3a6bNmxAIRORG/oz1SdOskdrV+71hs3nz2CoqT3agDRJUXZ5/WJEE/m1DVEzJ8nWWkByPPjZfhoZC6VBIusgdpgAsl5/PrFoQvDw/a1aJXvj3YDuXS6GwKL0z95sdLOYKjBD/OpFk6Vs5a1eLGU96LsTCDV8G+opKW39GTFnSuWY3NP/jftzIbQ80gShrlT+1vO5dZpE9g3NkSboo5KKVf4OjpXY//CDRwMbY5KfB7NmFbJLfg8OD5STr72DhUEI67k3Ar61XPKqa+UFv39v6EWUw0gPyJ74+P3y7wW0bVojw+SbJVu7vO3aLVlb98jQpRlnpX02Pr5AwoGA21ZqCxfbxFcLrsGxSq7UopCTp4/dHyjtbKFycRIYm+/SKjV1ODyvZQPM4AroatZh8whS9q4eihn4V+eEo2KyyD6YBdyMaR7U5wjUGTHMOiA/mgaUVhD+b4fFNvf1lg4sP6Mu2sWT56Byhzhh9A3/i87koNaauBkRdhe1DxA2L6Fzcj6MVMSblJ6P+AdBp38ud+6HTctDJfgdcXdYe2BAIv4BKCZg0oCJo5rd6NXwyvc51p7pG5d8sUoG1MlzZdWm7iKASFtkYH5IkcFnVAdD6E3X+hXM3ojGUWUatcNxR2lZul70oF5bhNU8Xt87OdywdeX7p+Xgi5Bzy4J1GTMZxpVS8tf3wYkPf6CaCUw/cxr3wYZKUVzz4T9F+dMWL7EkWqIhKnhLxHUOIis4JvHiOcEz/XWXKeMkUaXY9innKRH11iQ413R5Cc9mKUZajbR3sCCc7f4UgGT6QjCHfR1+pERTs+o0MkoGPRNP/u1mTE5Yl2jX8spOk06jhvOPQWh3bvDDIz8h7TQrhqmFnYniJRTfqyVY8+xr2tLymT7nYas9ZlEeTMBLxmu3HO5FJRu4fHQYV9xXVzwyjFxFzIU/nysmBLbPWyouU8lKvaoOpu31xMoVPACcW9UmWEu0KSusiG4O3pMeH75m4+HHkJ2Os2Xh5uHZL6YeFIyhL7w8QGyz52yk28dqGZawzzboehiq2TngfD02rrqMfY59jkFJYdCqP/WPKMoRPi6sb0rpR8hwNukoZpkVCRZsDY+YQtHo7bkQJ3Ql26BbXLXYukaXv/+CRvwT6iJsx0wb4xCwQxX/ZbzvKv5vWuEB71M2FYPQwyRrJ/jTQgvwEjTi4N9qAM1An9IyQcWHa0nog5DQMyCpowDO4Sw6z581WsuK6LL12GDeYmBHWLF9iP9lmSZvfwPIYj27KlF8xpy5uYf5bIfY2qiioWMEJWGzyaNqaEIFRrU/bdH8OT+IH/1E7y/NYmdK870OdYZA7rV/jPwuapH0MYbWc4t4uKWHHUUkPpj6eabNtKxuW4+JFBpsOpMdP5v3YTusSfWrGTK8DVS+QBc9agA08HiUULBPfM
opmHq0nTSqj4vk2bEmPAf0WFAbKvxXa+9/ehc1T/iwNAN+mGwrCodcQKR5FiisBnm+Q9K7A44IiVp1H0nDox4eWz6V116QWyio5EDgnTH0yftLnQ0uliTitcuLFtPVzft8Don0UZev3NhqIuSJJjkibbcJDbgdt1kcphajSyCuIv8Bc06Ne3KLPav2xrEoR4wMU7Aa/HIvz5wut1iI4soWjUhbQu+hwe0m2xh8vTL7nxu0kE3pbOA501F81gAp93bRGCmWGNPDk7r35JnIOhEjxbaKgm5VS9OSm/TjvkXa4kqSVypTurooPz1BOjphMPsyDKCGLbi54n++g90JHC8LSZ5x+0kNS7nOVqICNYZlXbACkwuauWWQoM5x9GVIo+XBrXXlIfvOVrB+HsW5QWo6J88LVz+pZVlT9O/8tPuyXV3UfroTLVfcdgJ7C7IBjiM1uec9qtv4zum3hFO0KFqxt/SWHEOoMY3Xsdpg3IYPeiUL/8r+srfSQnXk6qWOFVlwdUXKKUyhcaAAurCihCz+d7P56B/u2Hb8OUGHVwJzPIc5z9qLa44CASpf20GQP1yQORXcAYQMHhE3rdIzfxOUMM6MQ3z+W7H/ggA9HLj/dExCW/PWg0/Mxl63aCFGJtWsTFbMuRsIIBstH/7gvj9h378FKdQFgSqyReW25e5SX01p1WQk3gKrX5GawKYSlIDYPy9HdGP48/Hj2Ix7KlfzOmLyhbfA1HBtR9znJqe/kFhs9+1COfhblCCYhib7zKZ1e9Jm1JkSepzyh+dbNYLt750dhNRtaKoEP5tq4P+ZACVueJJNpR4aW2aPXo/DOvtvGA1kz4vco1ZQYCshLe+YpeGmKtFAlEoTCP0fRWNY+KbfOuEbrk5tSuyAy/ue17satmhv9dKWk52EShmpOl2ANtZQFmBOFk1Ii0ji79iKBzCuZEtzH8c6lo2S8+hFtzcipEYCnSTxI2CnZyk94l7aPEFhq/HGOxthesPFPVrrycalIpbgvmb4awkA2mlPGyl2/L7VdyUg1BB8bcgVWV2SSuyNeLX7epm99cMZAuW5tEn+Y0TaTS7r8jTZ3xadQ1nFS+e/Ws2gNkBbmAAlSL0Xw8RIA9ZCGsBI9ovO94DMFGBY9jxb2MVwuhGZPR3iTcFlM4MbUHsaOOrZrIxkuVa2Px9XhQ8nB0TSTZAAUPJ7IjDcdeVuuhn+lmEUHYASN4iyQ7UA+Z3PCoc0jAIOXdCaovIQi8Gyz3bTtx98Bb2cB0dzmKvJmt3y9vSTDONqP9eDFgywpXUCey951jiF6YlqI4RwEZpL9eqm/ezjgFb3XO9zJf3I0vhO3/xMjV8WNuPULBVMflbVgaq2/Ccq4bihki8rZ0raAwLbIR8v8tpYFT62HlxqkCv0nPPGDxl6naKRniF7m74tJew7wL520B4jfgN+P0zlzHmt+oubi4TPHTtnSjcE4NAEpyc3RWEqOkvbe6fT06sk7wvLTRPLzqXGVetmOX22i17DTLQV6vET2Yq3bluwy8qeOyc+2PGVdYYL4TebJF/GL8m2lG2Mh4IQ5IMZwKYLHwiWiciRxtSqNxMhK6YFTOXwQM5ptKb4z0+ZPxVHEM91y+0uUBGkvtLT51eD1G7YZYrhaiO0fUwD8JGX/mbOuWFi3iJvXwoi/1nPXbnOGmAmIdRkBkt/EL678QJWfaNiBvhG4GtVoFgzTU5ahnGdWrVrmIpihJVFczfpIUP8sFrVCK4G/GYgxs70xagKmDSpm3MljVp8fJDKtqREccz5vjLIAH4XrPa9JCvCJ6BD3niDlGLb3Z7V8WlC3HizEIureSBKKHwbOaqMmcmcV6yw9bpdX70RhIElRR+DPaaZ/DUl8DJfQXalhpLT9eP40fYHNjvk4wvUzgy1NDVT/WeQLlmSRCaiS3xEQY/UU7KF4hGGA0B8oRuFLGEJv6oWvk99Jx+nDaA9sxZXWf7SpZfk+JmjW2/zBEkvDftOzdQSLf8mxIru4gjriLKBhm6tK9n5gxlTLe/RbsYapD7YBwU/P8YZ1qh6k/4HCBaOG/YVy4y59vH0tzAoiy7vpjCToycs9//QOTa6b562K/Ry2umT0z1UkyvT60IYfLjSxUdh8HV3Rj6ubpMme3cdLakCbHIZHKntgxif8uOtrNYRWmjWAIdm6TZOz3xb6odl8ZVGwuj2eLTr2PQJuv/evR7Cj9Z3/8L3kaKQES+shmf3c9+p84rUhYn9/E/L9vzS7ycbjBh6WVXm9w+kXmbAdJYi3GG8mRaza++aKu63wdjQwvepBzYEyIgKY77ywqBKMdTAqL/9Br4SzU6wcKLmDbFtbdKpuRzyYP8Zk2z+2ZNctn8wJY4Ozxr1xw0imwoQUf0WV/sEedf/4yhVQDVOf7dc+8tfzjmhCtxkF02HOuHlcRmfK8LZ0FdUrVZoWlft+HOiN7KpkTYkOc0zZ1KBErw2U266xGpQc3q8JfHNDVGgt3I56Ngr86171ctyNGdq2eFkEKoAuGOHAKXqnvN8/S8pgQiFZ3zEWpq1Is/QPQhSPYJjSFttdPemWtqFCtPEKhRJ3MKIynsc235+Udodykrghd0UH87e7thLXIwlsAV6oth57Gz51Gy+ym77EZjX31UJ7msRMdbWoVZht0iYIkep/orz7aq3U/Ll/Xhm6mNLWmvU/wNqZA7emZN1BAtms0nYW3LT66+2UzmyoYYSdNcugZBqMkIyDm6qYDcbWUzS9ZyRhe5eL9U6Xt/TS5i5osJLvbfFXIIszqo3KQ7CATX7/34m5feg/niefSmmiNwNwixZYdfm/lzvlA9e5h2wnOE/GjfpUfLZQ9W/ZMYt6zw83fFsE6wBEPXt/ZpDiL6RnZuNJbg3GdUIShaYALc3FCzpkjyYr3nJD75ktULq0tnUymsGKpXqb2UzTNpAvuGGgGtOQBae3q9ELaLLbEqoN5K/Q+02s7L2Bzrya6Jh067y+g4rO2ei5wl8UFTP418Z1/hBpbFMLzsNF2f2KZ57vQ8iX1k7Y2GgOZcEgwfRyIeC/rX3BMbjI+3M6aRpl/QjqSrfasNWrH3LGX3ie6Q1f7RrQlfyK/SEm7v3pWursicsr9tBMEIBixcoqMH64VnLxrSfYTvs3xYwtnQlYcJBaN2dB2bH8AZDF7DCZJInX5qv/K8teYFoQEfBBhNT20jxtfHlqQQdwZaguDX8mrea0t2N6ip6NWTuhYkX7SnBVnGSoZZKbHPvXNX0Ez2+sBQA345KfH0CKnevqzrvjnRV/DGgGpfiV9sM3xZ1tx4QhJAkqIAIbkPehvDd+DkLl4x7+MDgLnjqEjwdim/vl+d6J+YSLwkx+P++dvl+iVg9Btxn9Dz5DHt58nJpzZOl/CnOLT7KZODLdtHvwTFh/i/7iHMok7fT7Mhujj5r2uEfRHvzGxtmKX2MPwwtpN1sxCgsBJhUyP3k/qJsgH9TtheGFdvTuzWL8bh91NwYJ5TB9pLU68lQpmHUVaePFmy5aPjdcbTcnjniBFD/G6nUNg2+CDr5aY+6/wfQAPy0hGdFpdcNJqJHvOPXH
IhJfvqLkFFt80FPlNTTodu8mIvXO+/rwTGhcahtYmsT35NDWWoUvuFg3WQa0/T7YUGnRe87b/m+G8JMdUEhTPjAz8I1Z3z8AOIQbqs201kj1zrgkngo7zGdVzauMGchaw2m19CrTSR+VGYC2GvuKI/psJfhxXFUFN1hjC0y7NyoTVXQJx/vSHmlug+RS/Uw2beIRMzkHvEtVIEsrDIICu4/fi6zfJ6ZfBlQ5gso5ywUEnKo3otDwgYFETTXa3N62/MGD+g64L5lC/nWGwntF1XEvxnpzHp/fjclBNq0acH9389vCVJkwd/Q4zmg70eX3CV+wdyvnoTZyK5F3dT7dIP6vkA35noW3Tc642dIcCJFOB3PlXM+SPLAb9UbdqJuhTNz6UirOVKTa/XgA4FinH0/Np+MWFQ8wxJ7wkj7JvTIvhfGIFLfG6ar6nONULzlGJmqvWzZhvCEqX5ua6OK2Be38M7mbsd3fv2y817A9PYBfL5dMlnVAV9CsXJQ763Ypwo6EUrejVq1MN81qT2viMbYk6kpwWZ5yiewO9zPCmUlhveksLnmYAp8R5QKkuj308FNnyAmDlnFB+CfIYoG6YqCe5hSIxjezHLB5/IJmuuOemDEHapVxhHFKG6qMh6Jg5fzRcxI2Dr8DUBzJMbHKN+B15qysHJMgO0aJb7b/MmlPsgL5mmwqSJDAlUoMJPHwCya0RswwevTzS9ZystVKxsa7sbiNO8OnE4ohdHqsNzaMvNLikEy1HYVUdqiXrojlkCzFkNSo6WyuGKY/vsxJe8RtcCerPUrHhYVCCap/orf9HjHN9z759fPBDZ3zIpmsaAi4HKQY/68BtVqHBua5BBahSWjOVmggUbQrIsCEJIHY3xvdJJHv3pHkCKzyFXP3O4WPEU58N1kksBKQRkxymABr/Hc3CYeRU+e6FvQcV7eITErz3bU0n0JtIA7BQs3/nJkkvMhI4PAZpDBfo4xoIMk/+yAKESuZ5xpFl43xbuaY+8RAXP/5IQSp5TfVc0iwxyK4zBY4YCfwaqS6/HdhwUdZ4ZdJjABNHBmv1x76lHuUTWLOALotgflKFHm0esoCz0uUD6wJviQpHhCo04m1iYR8LTqNWv269Be9kkd7EUCJvQdEJBD7Dfm+xUluXE+zlLrsnL6vJsK0tOey9SJPZTclJTHkZRadliizswMF+hsAB59PWOF17n0/MW3S12HmbWo+Pyyr5ZnBxDpCSXPh38ZOBLgvvyfln2gK7y9eeoq4OhEFCHY8VE6wVTdJb8EWOcLN2QHyZmpJbV+Tm8ej8gdE7nAXLEJS2xeHgE6804E+xJPYKhZQgtER8o3fT14JVrYQeU4kIc17jtazjmwSPFb2Sp9Tw8+PAQ8gemPsKrZVMg9NPnVPKlJHNPqDomvI6dw8LlNHU85ej5+fEqBZGfV+bTByPQMHHxdOe/xWXasyNJON1d8+y7Ky5+JpG7ZHoLSfIxRu1zDr8v48Of4yA4N+izJMRPiFVAoKcqOwnJ721wObsqXStdjRcqA3TQ97/ky7hxabq3e4OikVeMiDvAcYpQ6wlu6dmFGi8JhhOSCV+HstrZJMiBRJHl55zqKpkUf3ULt3E1f/m/djddG/w4+YC+CrjfmuoCTJuzIJ/wsfRi7Bc6y+5Z2omX6ughID8YqVFcWRo3zbnn2b0tQaG/gSgIYer3+kdoevfn/B7+6gQ1Tbd0TvJeC1NLXaA4DPKTOy+eXXZj0OmzXYSscVEkeW3ND1HtiLPqx9d32QqKZ/ubzMotRHPapskHlxXP+S3p0RP6Cx6OHE9Ha2qiwlF8qQeF6BclHqHtneiWWL5u8CY3Yy8wVFVDx8qyXuuJTJGG94VicA6P1QYFJDCOmxQk3uOyrzhhJM5ZIhdUJeFrKXDQBhBKfZljYkD2bsOvphuAUwGhjiIVC89s+V23/phQ/FfNVeINQmziPaOPDdogJQb3KhhRWUzUfvSm64mmSMFDBOfRtRgeWiAzlIXGQk5p9nP3UoI71wv17+yJbeVYTSe6aqOYN2UYhvg3XTkzFJlbRXL2t5NaRXvUHk9XXS50yV3ggWgQxA2E5vl8tj9qgGmLLZ4cZ4sfk2cv2Bf6DM0u9DsIJK4MRrQHR0FP2zdj2mqSHSdgPdHrQQylVpu9Ld1Fp44Y3lMWk+GuoK/sZaG+/SC2esC/2j31cPv+fVVUGwEalA2ohHsaNL6iFUs2HY3sebEYWI2omSruD69TgiJjEEXmI9yYX/dv7rcD1i/SBWEw7kEE2PTkmTqLLCJ8cnzF87r5Hf77FAkIY6WdhvTdNKgfvKxxDNd5wWVw7mKiPqA27xppu8yL+QfE034llUSdIzeXlR+ypDadyrDJPvuwZgXkrOH/NpTcry3VqhqiQJzI7tGn9pxyvdw7j3foQVJsLmQCgFICRG6nbce3LRBWV0APzx8G56Ompm+ugyP2ByqNaQc+lcZpOGV25kKXCQu97/ubVdRgbqKY5ANT3TdcV9exYJW7uTJmrWx/RuzDwjHcE0WlGsuAhJS1Iuv73j//yUCMi2SgLGMrBUYkebKkn0kabPwKjC1bU7/C8foSPJDsXdMyxEUnBaGMiT6w+fiHOk28klw4zAunVHZ4oY6A230QtbY2eex/UCrkLJfvPHjNHp2X3hd25i8Wm2iLXtQLfOQoGy/WFRhPHaECsob7xa1ySknTsPnmey6RhyXyv/j17QBxOHCjTucsUxzDRHrN9mErY0A5i1AY7hR+9N90Sbveg1Q/MWB/JUMD++8TTuEgVhAE5Tid/qKr/SUwndMOz9YgTbCasax/X79b59hhqhEsoxNs8/HiVRqHn3gaavPFviWRo+mt/n4q1UMET59pqB++MEwvWpC76EwTrYfbSKnktrbVhjMrm8zlu0agLCUbhlHseJqu64aneG+KLXLR44g1hYOQ9/3VsYg39rae3PNrtscL6enUL7POI2XKUtvhZ8cvMpjlSJNOK5PdHdwZ7zbxyWGbdUKq6P8ls6+B/Rvzceoz+lO2009dJG4ZOvdEoCJzcJw2d5y/w0vAQYVi6yzfupGTh0KB0rd19e/I6uEFMzXH12zqWMKpqPy98dyyk4ugh/HKL6YR61VqSbD7t6ZnNcbsFHKjVYZx6sgeyzz0+Vf8wD3KgX4ApjRXv7UqZfcehM+smIVqMfF/ghdvnsNPIjSabZY6euvMSZfj9yVwKrERnn20fFdPX/r23NmVFNviWt/uLnZSw5pQL53Ny0rnqyCq2uqXFWS//M3/SP6Wg1uhlDWfK/Y0OxPn1lNgWnyo756q/gj6bnK4mnVDCEcE7DaqRmv9xAR6ywSBIhenGdiQoujrOpGxasq76Q6TeQgVEzfmp4OtwJg0vDaQdRoYVyuJ7EiyiyzdsGl6nV/Ej9/EcFt/bHNL/KwtiFzcqgOqr8Hpe9sT38M3LYv3V2f9xtvmUopdot2syieoj3syGsAXel51n0xNx0PkbaOK5cpaRkEHC5CW2khQxEicnXdl188O0+B2rO1jht/kzzIimVUbEYhERybPZdX6zWNKqqVomblx+
+Ji2FEg4k3bEcwtOLWOiFQNs0XsLgSRWNVkGVmN+6G/3LFV+ObkTX37/B4rYVfyL3HwJeTShRy33LbF8vNCGyf9u9EsNYbZnFJ8gdt/U5o6nZ3pUBPZdY3VYpqzKMw/smQpUCSdZzEMmIIUGIqsplWByuvnLxhvLJ1H/WxtJD2aRIch/8vE4uHu59ei1FXhy5Sb7BqqUQQDIn0EDsIIT7yG+hmDlkJGCSXGV+/J/Oh781dunP91iEJ/bfHPPSGttkLe+fSK4izoTRp9mm9/e2mMO/Y/w8hN5t0IfPkqA0PhbdoXm6TIzIP+tsLT+LbqxJAbgi5jviMOfdcTe0kF67th2te580xzXpzjnlJAvfQia8aNjer5svvfXcUri2sotVAT65te4oZC1QDH3kvuGiSKCGLIEjk4kdYoE95Y18juvxb7y8OQtCuQ+Y8PRvmsBG5vvlmz9Yy6XIKFZIk823n2ATgEYNdRbPwYQXijSTuCoJeZnZ/yr1QkPQJwla8ow1PL8ccVny/8Upe/LYnBeUUo+PG+FGMra+/LdWUrNYNvPF2F74cB5X53/C3qRv8ygXICqv+4rkpJsrxtvbalovqg71P+HOdBrL+M3EeaznFgyIvodCfk5m6s7QIubt3u+1Wzv/Fi03qIX/CludUlR6H7N8AtOgtBGNrzui6YVins0793lUbSwtXIKpedaEXQLDXK9V9t77E0ObScBz4Nl2TAm2XBFry3mwl47z2eXkB1U+TVpRgTI02vus6PQgEn3ZfmZFrpzYuKVSlCO9z3WVPq+GC7KQlda0oNh4VpWJWFR6n51OIk+Vno84o9K0kAuABVv6/DykNZabp+Gm/Z9Ph4810KveWk1nwg+uvlzfMDYX4V/gFmtWlchiLSAMKjQF7e0YZVMJjG8Pc7WfB6OGZFYgBha+k+i+UbgN/AQJ39Ubm3gh7SpovfN4Stovd9McrnkN3oZrkN/zoAgpRrKJmckJI95C4+4kkSVo17E1HeBiwmLS6JedZqFIFrINH4gkc8uSIZ01p3j120guBrVT8maxCiA4vpfp1TfFmIqJgTJyyOmfbacBXkRs4WnPaCjHVonOqiLC9aqUhVChh2rfGF5RThFx8Erpb40PFqFvLqiU3N4eJ8k66UnFK4rRl0Zw0m53394BC9X/KwKf6+LPOCm1KieAhD3xIKCC+7fvbZA9MG0dur4BC62oRHpdi2T/eAIL7P/XeQ7jLTjsA5BIoBLYHdez5i0P3rR5mndqbH29miubwJS6/DZ7tH5QUSLf1rqIQ25ivuiwUiqAsa0VLH4Gh6znwdbFFgQEwK2WOn1/RrAYtXl7k8RxDYYSHDymjv3bHaWlyFD8HWWad1hvqbx4aSAVZ+M2SdS33Tc50d6MXykh/FH9QXdhJc7Hf8OHquFNj78CLAHrm0hTcpFcqyvBO8Uxe3owu7h3Nq+7KN2vHa0y5p0b7RVVRD5T2gGn8TjVf01z/3GF2u5fJDYUYla+dGopiUjb38KZYy0RNyAnnM1OHq6pKg1tIrrNAYxzUXFmP182tmamhWXX3E0pHesq1HPYIYGd6QLm9aiX7TlVhk7jbu8p0nwNGjO5xF8n2wIXkFAQtpi2hSLiH3jCKZibzqHtRE48y38+Azfhi5Uf6JOFCWRi6kogpY7Yf/IiqKie/xO+2P5r1br34LvyWtXLz44OJvXRXboAWEAMrYRvQBDUBldB1aRpxD/Jl5FON4jovbfOCYHwqVe278mx/fFzed5XfxMmJCudSJn+TbS7Kz4AxOXTPiuJhEP9RUIFCs3guGEUF5/8BBkjuTIE0DI3uOfbQOWIKgJT5uY6PPtkd+DnWB5gu8vjP0bEuliN9PYKSvQt40lKxko5OUtx7G/iZFdr/xx5BOHteCwDLC9ii9pysgiz4u9TiOTk2xyBHdQ4kGovubRI1wJzcI5v3qw8lBGm+zYShiGkDOR36NYBgDGF3TS00kVGcmCVa1RoGVHWG8WA8WC2DqYJQuQt5kY3IAENnqDBK6KlXH/dECi/VxWxLAc7tPFSPlVaw3RRfyqNHKkTZBe9lBTKm18875Ax4A8R1ucoQ1sNAlzzc+gVkYkxBjid25ttHW/Dj7u2V64i5auNiEm7z8FLJrORBaC2JsAi6raZDB2Td+k5L4FsdQmJImM3bxuia6LTgTn280kie4NcneGxmGB6qhFtgdZ/Fh2puUJfM67uwr0KaBUKn9qAye8tIHpyKCMQDPXZWVl+6bcnrWG4SiB8m7O3miPgTJSkbs6vq19xfqMzQ7bA3bhOj9q5Mq3cENITbLW2wXTHpg5Q0WcfrC4q+tb5ktsa/Kxz2B1UKuJRJgbMNajsIEccpsrT9yBu71BaUt8hbWHHnsMtyjjA3m2tJvX2N+qnu2I2X417j5ZQBvt3ExUUC672PwB9mzp71mgdSpz1wXXLjbkbegkDOxR7fKG+L2fa90YW+w8+90EjDpHlwMEU8N2yrx06s31NSt0hJLZc7zu6lbUIcp5NOB1bU7zsy4t5j7rqNjmaPVrHJpODLafV/kzQxqhoBdOQewIzf++WgUoqHqprJxsPZ9RyTDymPsUg6tS3D8/g3x77HDqbdv37Yehxec/6aAl+zB+vqO0h5YxhZWo9cRtG8rKSpa+9G4456WOKZ+QIZyPIZlL3ghDQoI8vTem+F7UoFMwpF66r/P37CunZi1VRx8wLWzA03+29zKJD7QDk1GdGrgad16CI60lc6MUOkb4rw730zQ3/QHcYmTzS19TZBUoUGC036PHNvAm8ji5KL9ARIAIT7RFc6trvqAex74uaLnlST7IdnvaAOK50ggK8M8qjbjrS/53OGGRd8J2/42YdA8sN+/0MipWqDzdFw8eILyKs4dKdVavXgCXsizQDAoiCA/KE70myN+xhnmZrTz+gjoWNX1ZgnGmTtz727nWXQmj1Q8TyEqG7LsnYb2d90VJhRw3L3wtjuT5JL7PV2E2qV4XOi+Gf3mr+1JogWeVimsZeS7Q8qsfKymg02Sc0hng9oj50noQR7qDKV3razkjnfz1TgJ13/jJVkQBLGKx33NRIdQh6DmPpN5lJ6xXHRsPQ6LF1e0YJjO5yMVdCtxzjTw3CGxrDl0N02/uJwYtHLEPZJXZ/sxxA0kxu5DKBgO+Y3wM8ULUcRuwEmYMzJNtjydSJ4S1+M7dmzdfqXw+wFvTJg5Xd+J8RQGCO5v+fP58DspmiznVKwsQovsFg/VOi3DubSbTax1hm4ujy5Lzpn/af58SwPCXPsYNWvrTnidsE7ZAMj8W8d5drlfXY98SAU8s8aSl5bb9Iu2SRFGkuKDbu7WI5N7lVoLrLCIQaOocs8Tg5IFznWu4vaTUOJf+wA1Bh8d8mWqxMmvotiyrnSPlAXnau7eRIU8/LjPL3acJDk/2/1GfR8RvqHYUOj8NU62KHnN46lSeotCl7X+OdIaJd/uDV1xXg/gfUUutbWeJTGcAbv3NSrj/TKg5PfXEpDxxgwMyTr9KD3QAMoICPcGLb3c90n2t+UF4L2N2cRVbDaGfrDcs/Rn/i6CzPs7
xRgRz9LvEn8bsA9Qv/krQ/Od/A2WzC4l2+DrqwmITBnMYP+VAfm7ZXTS4Aw/xL78qQ4QRBHH1XvyQ7qvvBEhykRnsfpk/p6fmmttx4naI/tAKSzKct+EygMGZJp6D+MqMM9nn4OwaHkjX4rk8Np/K1d0I/+E4WUXanEeo66joSruzcBgCumqpEL4FA943fbxkxic2YYUVbDfkDVNlzdMyguLD+snvck5gWpSpgmZtklRYQZ2EUZT+ISRSzXcyAzAOX4HSrXNEzbOv2QP90ZmRSB13e5x5hvuzreNWOEtA3oY8ucygZGIPxJruTVx6LpaOoqbV69+RqE4HuUXWK6b/x5zW7mxevyaCbGPpPpGfNTtPQP7+Y3rKnkPuYt+YsOTk2ndmZB4SH7I+XKkVKET44yuXia4IcTk6pWinesn2oK9iO2xin766tkestpMdG2Y8ZhT0ZQy/E2DQyjCaeYJ3yppj2Mm/uQcBDH8q8iDRhJMSwM4x7bSCE/I8Q3WESRr252q2le/iQPZRd/xk5a4eJuke833Xgl6SPa+Gm/mPs++FuT3V3QSOZFI0uQBYmg21ITbr3ZeN8mWiqbVRL/Unfr9vsoZ8pY9VRDPPXy6xuYNnn+HPP6wE6g1EYLIu9wYtU9lJ+DdFl2nb8DD9bMtJTNGDyHEbx7FHHTV4nZ0j/zasQ5vFE9gfo1LPySDsU1e0nJcpqRQAqeu2jr69ydcBfkbrlS1pg2kkqPYc6IXW0q8zYtn+DMCxbgdhBcNO9kvahg2DejGxAiEPU+j9bg6pW+KV+mniRr98JrMvt3XMLUvZaS0Qhh03Mtf/dijNUnEh9IMJ4BYueBxlVIe/iYmGnX4oBhSAxCifc1wE794FzPrVwjY44OgYVIWeCw7KBr8un++nDiHonfdnmG+Dk/l8WZpye/33hRJTpD1nRsF92ZJnQOcoiPK5Wf/VeCh1I3cD8so42lefG3kE+Y3oW8VVS3lRQaWFUA7bJBLjWEKJrMbJ7AnotfB7oGnr544pvRBm9N83GbTqob/1oE6Wy9tBL7lk9GLMxHnL49LIFRHyWqyocF0KiPz3+BYtb2d6+vvS1x9QRL1DzMQlwZESwx3QJLENyYpD7214kw3BW4149d23waJBwa1sbwNH0iZMk70VPvseZEoOTrejrB7N35C3oOYCPj2w+MQ3NZ94Q3Wdk10q1yDLwg4uvDDNV5kWw/f/aW/hsKJqvs3wcspe7MiS0+GMfQ45kiCxmclGt4HAucoiny/0Je7VtdBhutjK7ESdeVOr+V2Ae+9Ms0sv7IP5N6dQcGw/bw+8o3AuT+Lb5hgC+2hanpuAPMTnR3k4JW8on5Xr5fkBYVc328sJoE+vz4RzOd+w6M+STbviAiONP92oI1/8rnJkvsb2dtEkafWibwoqJ3Jky2M3Ali1yBKwXXx8d3hNYi+Udgz9ZL76mYftjUxQAtLKBeK8+ABRIZahwXoby8/MorsnG33plTox0FY7h00BQyLRKY379PMNVSw2geZQKmBjIJvBNLhUqa7EjvwaQK1v3P7PgiVfEVN6dtU9yGHmocAxHGdiVdYOGB8Puo/24+hyrnAj+u1TEh9c0rIRm9dASWKv1FS+wYG0yGU+CP/w9LyZYzCmeW3ZYiEvv/mo17/imj9Jc9EGsd34H68C/Kzw20jRVBmVCX75ULJZXyiASPreygebxBu+tiMPjyRKNURU+m/gnO0g6SwIG8fUsn2VaVUQdWc73QXLhsvvX9cU7ErerbEI166RGTnEgx5/p/EtCcYGwDKBvx9dvoXE0M3yOD7S5SQxyjkwbCOwwPx7CijSDfO67egrfM3FL+4VRABYipedxPmLKtMSjb6EDcASmtk3BZ/JzjdTrC0TYlxi6uhOaS0CwN+Tzig2aDpgy3PhMu1vrGi3b51fUbY+yZWIkWyvbt3eeoqgqcEY1Ztw+bGrvd5tBFK7jNIWZH7ihanK+b9Ip1NVoiJxCmF2EMNdr7TLvijWyWoCB2KO4C1zZEjg+Gmvc0rcb+myGWtAD9ep1cilwDMw+4hiauuwID8LTnlWj9bln1mKZc5QmUMvCPhiFde9F+eGG21h9cWMACsBkeToFdKoFPu6YUbzDUV3whqx1Feot1GkD29PY4Xu9mipMZLa+3rfsNtHKfYYqUhymCTJ3cZes9nRfj66+3l4jdLXGuGKm1r43qFuvmIlVAzQckntnyLRinLqQpgiMxryS9g4eu/Awm/n/LNlTjg53K3uR63iTxRHcXZjkmMXtnYtA3l36x0XGr88i4bzOgIkCN/VoXlrUioFwt9GZ1TkdBCFWzM6MEsEZcdyFQ2OP+7j2Z8zvAJDKKMLrH4pUYZPfw3UKR42YZ8lrWj2DSgKC6oArzuRk8Sq2KgapdAc+F1YRInTcHH15m8dXfK2TmW9JurDziaujX2uGDmBAtYZqtpCR2mbxwzOfPkSp7nFXGVIiTiSqoIvjw9qECa+sNb07LXJJ75y9wqhcJ8UIP/HUwFUlWBNcFSFvSdBzdz3JiP5Edy0PPbgZU2rp8vShzOeSGGZ7TM5BHQV++LwQXBS16mDye9dM9FkTLn12BJvrIigyqa89tslAr0PFkRX24S6CaUxyVttGYwH7IGq+R0AN8QoUnZpszqvxPXIGLvcNDkBfTW+E1qxb9Wk5HFLeyT2uP7Ki5OWAtBqcn1lu22hKK8zLv7hxX/6J6ee9tQcrdLJfhSN0IrhrJmbkTPjrv12NYcPIh67uUmY6vWwV7LhhkZI31Cm2Zc0vfo3mhNRifIj9s/KpF2s1wQPXkyy3ppZd2zoquOFEcs5nCWtLDDSZgJNhhSDI4NvFSrRL11XsxtpgJuHKLGMCIIf/D4Lwe+qQjwgzInLRGJEPgiZm16as3hO+e75623NWdNG60XEAvzleu9/9WEk1O/7J3Sm+6x+5WzvLJUcOsHS9fPhwahLZBB8TRTkkFNSrdGe6e+1lfV7wGF0/t6zBChXlKb3HHGYGaaQJlWRdeISJh5QQ0Ay3L7Or9dFSLW3gSboRAU+dY/ioM9qbFwUpKcBGTpOVmLt1fzGp9D5uTHfui3XolNcnvN7Zod0kRg1bTLco5vdq2+Yf/2N00LWfBzRvOnOLNtokL3knq9I9QmcIVrKVYn7vLM8UUg++O5/EZAgL9GwUG4AuJDIdrMyjXAbEtG9iEp5LkTBrgg34Q93f9Sku3W/fLyN/GHGajqgtjAoB5vQdUeLREDghEK3fy79QFQyyQcdhO+vHZCzB6NQZbZlO/7b5z/TiX3Qycc4M6KYonfxNbA+j1564FDf8D+e3KBg79vkOY335HbEvsabmE15jaA9NVkHiuSOEPVMg85jY7fyu5IwhWX4a9cnsDxxZc1XzdctldjnGiPH+SW6OBNSEVKusc0VkB83+Xjl6/KX2UuB8CDBc3H219DqP76lyspKiNNX8KU0HfsgIgX1bQcgeV9T90sIr6jTPR+Y3JZrsKeapoHP8kz8Fyu8o+bpdGX0tXbdLTnkD72cEHfmhS87ZGCUN8fpsOMmvIUO5dv8PDLueUSfCu
o8+HBwNgGaWR4Rv7A5ga/EUXi7LsLggKN6/rYAlSaJU+fCoY4w+szzXzl8flxnMVql3xUEDdJdGi0yn0WD+B+1Kxhfoxy44Xh3NejWUpnJR7FWQn7A72lOLwui4c3LqRZ1MMebxyLYebtOzMvJeM1XLDOq55mgk7JWohMn1fhDot1o5hWELt3cFRFKpgFgCdMEzja87rdGXUdLbAlCSG4eq2VXogEuKxASwpT86WEiueZ2aGdmaKL5jPlOATqaBujFDTUhO78WpWCH1oOXAIEN5D2wZZvFs94D8Om1uOi98jSF6w5XQS1Je5AvPzc9M1sG5AhewLssYmwO5/fGD0il2W9Pyrf/OrTbOC6pb0n9hJ6HGphSzMUJOwzLBfTEhWoBobRCiwTzuDHoyMOsQK9y4niLQfHJjJq/jGzEiTXrPXizqwfvSQz3oeD9Rp85bq27FLGefkmg06ZRFL9vpGJcLyZdwQfJ8ruMoixUfysIUTwYUoS/tXQQVPxZCJpkisFSWpO+Mcb+bbugJfLHtQQzq0zthTOsIEZBbLNmmU01+Z2h+PFMZNmOFoq9JzV1/HbioizHR3BXtBnFRP/S6+Fec57ud7dIckmJi4onwp0vBny3TrufhnaqfEjAoRRLBlrUbhRf6tS+riJdC/IvveFJJgLXQZ6F91kmn3B27sJMgAO0Yot5KW4/MNyjva5V8xWB0cqk2FFpuuEnJjh1/Tr1so6JpbtUE0WcOhcWMu+BfXK4YHlJhfLKDz2kFGq3Ot+EzWdNROP/399ln743q8MvZ5d3cP7xzEY37QcCyoFLJPlpZrndjUCESiWOJei7PFO8zZ3V289lx21Bk+ZZ8+HXGWdRlyNPM6CZo8wS6394Hn5YFwLG0aOH6abbo+vxH9kKdehiybbGgS8LawhQ9LemekcO72muavhNUC3TQluZ37uDqpu0LBS5SWl8mVEykuvS92hUC531fH0lHN3pLevnxtSQVim7YYDpxc2tigm7ao4M+YDEwSX/vg0pwM28/ihZK5hhA1/kdeGZ0L5EAG+v6l5GzYr2XWF2WasHqwKnidr5Mtu6syHBADOtSflSlc2alouOOWBVxZR2NmRmjW0qdbkQs+3Aadrkj5kzEcFUDJaJPzHRVEQ5Kckyc7iqhtcGxkFCrBQMCE8ibfPWKrBEglJPM8X+2gv5INo5ASyFeDo8g6tkww4bpUp3pAYV4RyLlg/+qEyZxpqzasAKmxbsTJWdQ1lb3sfPk5WRBaUY7ydqUlmTu4XUrjA5AssRcLW416HFr6x/FwhZyLG4VGIzEK+Wd2simqi8iKjdb39nIigCLgo60KPu/ZziaCvzT986otfTF7Imb6FTYM/HFFI3Glq/F57O7G0dr3ldPI7Y7WS7fxWjvtwSvFYOwd3pikNG8TBGX7o4XZj+ajfK+tsfxHFWiUJxgZhcSU9hyAxAZQWdu4wV8m26X1oJUFiVHZYlZQp2p4q6biheYsV4Ma1MkuyGk12ihrPO4pzWrklqcpP8wbaw60gz6QOAPf85e18ls1XspCZNJvKAkexlm/FZnp8AENrunpduJk3XKmTLDCNILSw7Ie+hb7BIV3TB0C+H9CByTVoPRbl1T/aOYb94IrNKXRpFAeIZ8015jtEVMrBgu00goWzrOnWZzfb5NF/kkvlVXhGmLphBD5lRroZdeWJlQaTodv2PbVJRifbQfQp4+/G0L8KwrLmNC9YprMM8Y7wzfrZ4HeSS1KBllYMM7dIJTlUOcuORa/8gXCwt/1xg9i9kYBUYCDl4/GAdCqu5aRgo8f+Zk1Y5bZr52yV+Fk/gfFW28RdpVWDhxDnn/Ejao/wTIvrQ/FSFIjG0B+9ihYJvl6zSiFhQrOLjae+dTi+YIN26TdOy/zmZMf37WysRJxhR2u3OhwRYG1l0ABGNQEprfP7PuUvB+rWBJENzXrnZIKlEEuRovQ+BA6vL14doG7ijnBLH6XBIZDNvxWOLM0o9EkHz2B348CoqtBmzWwECdf7GmBlq5C8QQRpBvKmRVJFtjiIwSeaXtK+EKqwpJzZleoZiO7HkanbT4iLanxfud7a4aGkFxNixK8JDnOBMUkJ19MxNxEfVaoUPd6I42Vh1A1lmI+2F1TDonS2f5iTzJZxa+UT7sJf41fbWgEVmR7fSIZGdkM4OzXtXe6YleXqxxTCKpm0Y7y+ZYXIhQtg/+0z074PEMuPtxIspQDWQp0mavSHb+FAEgPaMl9jAWpvM8amsc4J5PHynP1gDPZf1xkbY/t7DjcYt/bHtW70rwHex85932AmGook5334PvGYysoXG8k9VTHHEro/QPVqM2+JzHKaX1Vpu4Eky28wPHsj6PiBIlZlqUxSr4H76+fqJW7Ts/hmrY6tXU13E+3Dvm5s64KsRDj5gOz5COmTapOeIV0TexxeWNTB2e9i+VFdCPzrmHiMQg1v8xB1ws2iOSMObpkc3F0uRffFrDMQQEp6axsGZTiEqjgYHZ9BvXqzZVDtECsuyDPeuALgEPLCaZpi3M1Snw+msJVEKojCx5JEaHrgCG87y5hNbsm7qQSboc4wsuIKkQqNRPEigPgaIEoa81/ZfUlYEyGBVm+XSgBf6HBsbGoDZVqlvbM9cG8wnbkpeRqVBJNyhiauQzJ1Mkn28c370w2yRoxK4IvJqbCvoARwdMlUxKDXawWpsuI8o0zo+178Y012IjSEbSCsbCQ6Wty3VBZ5juVbuRstrZNerviWj8exzOML9olBhO9XPG0bdSR0+BS+ZaEosViu5odc80KqFTNbjdmcvOl5340o26qCBwQzQCVQ8vFOQODS8/zVpzuQp+fKI7QBJIy/aJiSgDrtwY5yyPtqQGcU2ydi88sR6wWfXCqkAaTuAtHUqmaagUoQf1JUTQ9gEiOpZg8jI9rkb0/H4oT8+Pg4MCv7nVHBqKe/Mew35BGbFBo0Nf8B6zkvA8nd48/ppunr9KXbPoCWJHKUBIySeiSY2+dK1FW3zunYBN4kQVaCbzzuj8SanR1dPPP4eGU8vbkitZ1HH7NGOlldwYpDjXdRFwKKG5ihqMfN8ASrx7PITbLEPgukFEYmC9MVco8dDC2ABpmpNgDorOWjMtl254/6Ki//rV4zi37JsqCX955dc7zuP47g5K0ye01J4+YZCZ63kGNh2gUVdNZK8wtpiu2Q1EsxretH7JL86rDEHk4B/U2x/5hQYUdFwG9rCxxO8Io6xuqJNp5fKL7TwdN5EHwbVFPzxxSy5n3DSFS1r+698ij00ookis5TafxxPi/1DweYth3SXtNQ82+PZa4tksxjo4EWqgphY5x+I/dbQ/465ppZkjeQxuHwW4fDHXFe1QYzfaU5S2o/QZl/pxh9r519ErypFAWRcmW3vdlFf+0iXXQyojk1TUOn9NUT3EgysCQkTuFW14rgowKOXphWD2DDUI4JWZtnAUyyNQuQ8vI9dMolJ+z/pi3yuq4PrBBIBfCRvQR+MJ0eCb1aVwoV0VXS7SVb1A9RFUw+wFtUUXRoFCvh3Qv5RNQ+YW/P/i5Q9Kxc+ldm0pJce5
dNKzawD2Dp59edbgx/IpX8+8g0WC6Vi3pFlV9cgsJBZVYQ5yBnxNDs30AdIvwJXFOABSckSdeMttDLSivjaZnrgvlqEY2iGgnkg21cr2jNByCG4rEk6SNaltXGrkzbtWdiNzTNsYtYdZduetm58xu7k8hzgJXx+KLrBixx+ZsBrAOV+yWkOH1/V7FfUAMESgkPdBIxIbohq/vGGirjHjqqueYYPGK2W4LBlMSzT7NX/b8315nuz7N/31wIhbZgNmTFo3n9lX75beVCkBbnXLAfHLpGDpm6DRX0H14Fre0gxmWp3jZr7C1FzjybNn8n18ZQitaCLJ+2FcE4SZZXlAz3SzC/on9930puOFTVstJyHPIEl2+sTIps67t/aWvUzmkaBB8yT7CpPvJllPZ4H1XmvbZH89IU/uLf1wTVg/O46ZjxCsvHI4JSoWXUo2YrPVsTfF9q/o4n0Ch9UdcVG3ulIdshylItZw4IZhFXozEBK2BqN5d4vm/3b9KEUeJkpa+VWPjJ2jiTAwVTlHFOK6NxcYKOla30duslGwaD0JgKDo5Y2tjDe5Ct5EIWIL4Haejt+IHvzxvc0Qop3mH3feSegfMfIlIbtxTZD5pxOBXYl1Ntqcpqqk8Xv9qRl+cZ2gGqpfi1rrktu/MmmiHCTBnWU5RH53n2giwGky3lDFrkDYdPxryRek4v4sPtIZGtaN2WX4XmuYYQQJKeTWkacGiUm7Y+qKZuTj2LsHy2MB91eE9PcKZlvUowPilIsjmJBOVf5tjBFNb3j3CdONn4HvnBCsI9EPNbIqUhZQ5VzrT1a9FKUx+TwF2HEyzT5DMclAheAA5wdNCzx4BFd308b2+X8UBhYkUiUw5PMkConaqr/kBfRbLbCfDjZfzNEr7f0OH7EHF75ztxhdkOgb1rHN2vOTTCgDve23Y8L1zjpEl1I2GGAzfiK60aDUALlG+kWCiSD7QfFPMmqHUB1HVJF/G3vIYbUgRV2T7T8J+BK3KZ4j0DNfJNru7W+h261U1CxVYmU8vANRpbCECe1x4NpincIApSBMDT2tQltXmI2eClSXnqGRZCdEAmU3K8czTzKnsgoPbh5YbxLmCPq/NG4mA/JD/w/siQImvHjmxaxibw14Ozc1dYK4vj79i5ByFq6VQrbFTdQTT4l8MOhvcWTRe8iVLtDVtnrDe3rvvLct8QyCjE4h3oBMYwwKzBWJvJWJ7jL+1Z5XrKswG5VfTem9TVHWDJ5pTSVqNn1ittnmXJGE77INoJiElTslBddycyvj6YvEL890y3N8Qbx/HyoO/YrRCBrcBe4V41YI9gMkC7uCcs7NFCLFxhIzCgyyuQS3y9bW+qczAK0RUmodPT+y5pE+/W7SQzT+4kFESRLnVT3bdBUhYn187aw5RT3YYcawcdadEca/KD/Nle+T1twg1csThV4UQ8uA4xRPZ5f6sYT+4hrNn2GChfIvMuNLoEGZ2920GKYk7UAOrcsKuzZXZUXEjQQlHLolLRT4FbMVsbv5YtsFbJa5u7+XnjiXcvju9r6xaXX8tLF+nNl62+kbgAe0qeDXmJ2OTb4S3YCp9XwtaW4FMZ2LqYQnNd2FWt27eeIHLGRyDaNupnItlJfZx1yZRtvWgFqUqqQD9n9uIf/zYSKZQyQHb+TdNTvwHWU7nVnX0CJHMz9SopwOjP90Notnc40ghwJ0RrT206V2EQQORhzQr9GxbtATtGXls/uTgeb99ZKtWsnd/jYPAeBrrFaUd7EP6MegjZy+oj9Q1BQtcBzOnaAVBEJk8WBUBT1wfXpCkXFx7ZSKoL2AF+Rd4+nYibO4oxBPVt0WFEFIV+AohmTAL9hB2lqmmrWe2UmqG88YlFuj6GCwXv7N8gmtzyCq89iW/P2x+GNXvzDbCBuYN/3zEvFHjJKoV9pyqXVdKAQKFTavTXUIaKfv3R3zFoVI7pyIMBmxpQ5c/J0CnUy7Pw+ENcAS4sX7KyOQeC+imSQokpamTjjmaO72CbJxoaHDb3dRtxD3rcfwNAId+rkNh7RWZfWecX8vQK7tvlx2NIShFrwm8UOVEENSCA7CYZxEIEgwAorm1guUoi1E109LaWrbg6MxnLdLqpBBMYSK8V69+wJTUeR2Bood2T2ak44ctSKhsi1tpPmsUPMYicvwMmGJnV/mgpCxd/3n9sy9mNtRkdTf8L/JiKh1uBPZvX7PwX+AFkb8r4XRqjOevXP0uCMsHS/8NeUTtF/6reOeMzyr9CwHshzP4LTHcnnw1dts4PxAb+3uYdUfDe5/rzkQT+fDyqdC3/LCHAv5F/FsusKsq/v4XA/4b8/Wq0/Fkq/ufNf82Qfj/5KvaTzh589/cJfv+HgCr98x3IqbQyJvkBAJZ/7cDUSNTkX/++2h61W/bnsj8Ly3q1fxfmYXvwzXsT8Nmdo6zWzBqj5P3rMUfjs1auXfv3z8s6D01GD+0wPyv90D+XUXnVtv++9C8QnEYZkSfPetRWRf+sJc+2ZvP/260H/9tNxiDgH3YZgbB/2maQAP55l//9e//3txj4py3N0iKz/n4c5rUciqGPWvY/Vqn/2HTg+fQf18jDMP7d6jpb18uq7vcm0bYO/0iIZ7vmy//PH4L3Zv+G/vtH5vx78z+frr+f0mgp/yex/zz4+7T/PTWelxu2Ocn+m134y8BrNBfZ+t9ch//X1J2zNlqr/R+f478i1e+rn3mOrv90wThU/br8pzvr78J/MA0KoP/ANBAI/S9k/3PH/2CC//lo/9/5Av4n0ftsafU+VTs8rgaQD/MRzekjGP8XJfIRv5hAERT4Z7HMiSRL/v8SSwT7R7F8VN0/iSX5X0gl8n9BKv9LXQ2i/7T91hr16bPjz+qwreP2OgrZPA//hwT4p+38BxH7z0RIszza2ufNqWKO0uq5/n9RpP9b8fsn2vzvjRCJ/Ns/MjuM/jMpYAz996v+D3Xkm6IdhvU/i86zQ6UypNl7xf8A \ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-deployer-logging.png b/30-reference/configuration/images/cloud-pak-deployer-logging.png new file mode 100644 index 000000000..d42c0bafa Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-deployer-logging.png differ diff --git a/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio b/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio new file mode 100644 index 000000000..4afd4ed0c --- /dev/null +++ 
b/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio @@ -0,0 +1 @@ +[base64-encoded draw.io diagram payload omitted]
ZpcA4un1v96h55gfEAS0W0SQVfnpO8Rb9ge5jh8dvCJOV40nQnipHcpQUOEYTmSdBdtsX6GVY/OU4hb/D5cM2/avUVYHKSPpJtcTeHSuhLz/OHwPoIxTw9WRxKkOblNY+LHl4tbNEmiZK/xivh5yHiVY9ZF5MYPYnsgNA+rISn4qZKuvbOWb9smxEtxN4epgjgj50zC1Rxw0OtCyK4wX8eVYck22T5Dj0unjJnylSv/PRJv6Sesnrph2/Folw7dVAo17jgPP46uVNjuXKxbmph+gBExHtCFGCgDfNFZfyKy8JtrZzMZ8Vy+T+hF2QDLwmeW7Uc1EQwHVqeyx7CIpqtL2EEj5EN3YKzNBsW+sraZOA9NaEUmrHi/l7wniwyXFyTdae9C7bNxqqfb+2N3cr/vFYFMvu4N0Jx9wNgySb7sZa9n31mHc1u9DCk24U4ITS4eV4qYN/w+ozKgxWciI9X6X+8Yifz5ZmJUI6HwMfSnZWnufPFactRjjVHgsspKyDrnuiTXg4VSFptFW6t4dvCO4d7KNKxLX9zcC+DrivsaMS3e/E6iIqlWftNMcevo8JpBEvpbYkSA99oxCFapw/2A7o/x4W7u9HgaWC2PQm6Bqmq4//shtR/N4y0FfsFqDZ9sdsurq3aMVI0S3bkjRjZauYDfqVvNvis0SGSBK3LB2IO1A7HHerPs02ST5UHaUYLWGusK77kuOrHds3kpmSVSCMG9ytTZ+g05mXwBf56YOBrw3RquFRxf4qg7EIAVl+vdkNyF8MMzSunDaGdJjDlE51bcBPmLzYvFbEiZUvfO50y+S/Co4F912zflH7p5NCs3YgN/7VPBuG1aP9XVIvCRZLyvcDFIuAXAGL4Xitvg8QAkHMR0qEosAPrPCaL4ppPnKEq8A6hnMQaxOl30CY7QsaNUaUb39OqlQI6TiFZpAcJJnguqgsHsyz9Ve7Ryvuu3T1FZ+SxL3fWqO8m6ZFWQ3CI9fWDce1p/w+87FjsvrRgdiPY/38xETQZMEb54ZoK7hdh773aYXkmk6RrTi0LTLXJ7596pIupdtSefrFQQSuWqHTxm09fqhsm5s1hTE/RwAIXly9gA2PF8HYeoIp+C0WqXXdjIMAX0rOPgeYQLmeahRqaobe0/+5h/RoBoOnj2dDu/vDnPUwQv0zdB//r2QARum7aVU/Q5RRBVpfv6ck2V9lyIvRpp9J+HGO76ctfLDJta8UL2ztf860FuP/g0sGf4ZutZFBDtLZ5c2MFiVOdLhcmeHl0PSXq3hNeCk4L4vdFPtTUxggLcTVV3ofxK6YJxXdoEpzU7JPStiW/4FA5m5Y0YPSHDV5N0HTYfguWar9QiSiXtY8cH0YYCQkI3g7XbbsWABCJ6I39GeqTp3kjtavXeuNm8/+gyCk954AokuKss/r80M8mVPXEDHjT7LSwl/nvpdho5G5VBIssgZqg4lPzmPXLw5dGB62r0WrfH+0G8il090QWJyO58VKO4OhBj/Up1o4Vc5a1uLFUt6NsjOBVEHXHzdYektPVty5Yjk29+R/085sCD2PJGGYO7W/5VxunTaBfWP7aFPUUSnlCl9H52r0n9zAwdDmqAT+aMa0Sm7B58GBSzn5+jtEEIy4knMr6FfPKae+Ulrk1/1D0qhqAPsR2Rsbv18wfhk0/PFBkq3SzeFgGhPXsrfvESHCMs9Kemz9/AEFg4G2LFSWr7cILpddg2KVXSlFISfPOB3fSK1sYTIxktjbLxJqNTW4fC/lwwzgSuhq1iFzyJI2rh7KWfiXp0SjYjJQH+xCLoZ0b4pz/OvTMww6oHFNIwprKN/3g2L72xoLB4MweNcsljwDlTnC70Pf2L/sSA5qqUGQFWF7UfPgw/YtbEbWj5mScJPS+wHvMOjkz/3W7bhpYbgEbyjuDmsPBFjEJgDNHEQSUHFcuxu5Gl7h8GvtP3Xrki9G2Zgqab602sRVDAhp+xmY7+eooBOq4yH05gv5akZvJKOIUO26IZij1Cx9Tzpwz22CKH6Pjf1cLtj60v3zUpAl6NklgZqMwadRtbT8jW3A0uMvUCuB62de/zbIn1I090zYf3HOhOVLHKmGSJga/hJBjQFlBds8RjwneK635jpllDS6HEXxcpIeXWNBjndFkp/0oJZmqNlEewML1t3iS2XzF5wg3EFfpx8ZYfSMCJ2MwOABnv3azJgc0a7Rr+UUnSYdxw3jnoLQ7t1hBkb+Q1pIVw1TC7sTRMopP9bKsefo17UlZbL9Tvus9ZlEefM3AyVcuedyKSjdwuOhw77iurjglWPirs9Q+POxokps97Ch5j6VqOij6mzeXk+gUMELxL1RZYa5QJK6yobg7ugx4fmlAZ+MHEJ2Os2Xh5uHZL6oeFIyhLzw8QG2z52yk29/47DWMM92KLrYKtl5YDy9tq56jOLHPqeg5FAI9d+aZxT1Eb5ubO9K6UcfEG3SUE0yKpIsWBufsIWjUVtycJyQl26BZtxrsXQNq3//jA3EJ8TE2A64N0ahYIarfst53tX87vWH/5tvv9XDEEMk6+dYE8ILcNL04qA4dSBG4A8p+cCio/VE1KEfoFnQlHGghxB2Hc5XveayIrJ8HTaYlxj4IVZscem3JMvs5X8IQbRnTy2a15EzN/8ol/0YUxNVLGSEiDTgmzSmhiJUSFD33/6DJTf++auf4P2tSexccSb8WGcUnK32n4PPVT2CNt7Icm4RF7fkqKOA1B9LN9+0kY7NdfMhgUqDVWei83/rJnSPPbFmJVOGr5HK32i+RwWYDhaPEgrumUdQDVWXppNW9XmZNCPGhP+IDgO0rcZ3vfb2o3NV/4gDQzcIzqBZVTriBJTkWKLQGeZ5HCF252/qfdKo+04cKPHy2PSvuvSC2ERHIgeIdMfTJ+0udDSyWJOK1S4sW09XN+3wBifR/rxx50IRFyFJEyCIlpvEBuyum0wOU6uRRRB3kb+gWaemXZnF/g1bgzjUAyrGCXgtHrQXgaVqt9jIIopWTUjbgu/hgdNNtjB5+mV3PjfpIJvSWSDypqJ5LIDT7m0iMFOssScHp/VvyTMgOtGjhbRKQm7Vi5PS27Rj/sVaokoSV6qTOjIofz0BejphMDuyjCCGrYg/0V//gf4JHG+LSd5xO0mNy3mOFiKCdUalnQNYSdTKLYUEc44hK0UeLw1qrykP33W0gvH3LMoLUJE/e1q4/Esry56mf+Wn3ZPr7qL00Zlqv+KwE9hdPhvgMFqfc9qvvo3vmHpHOEGHqhlg9RNxDqDGN97AaYNyGD3olC//K/rK30kJ05OqljhVZcHWFwilMoXGgALqwoo+ZvO9H/ygf7th2/DlBh1cCczyHOc/ai2uf9MW1L82A3D8ckDkV7AGEHB4RN63n5r5naCGdWIaBv9ux/4IAPRy443rqIS1Z62G+EyGXjdoIcqmVWzMlgw5GxCQjRYGKRZ+/yG4n2IUwgKhmnxhuX2Zm9RXc1oFOYml0OpntCaASxZqAIy/tyP6cYzjfhSLYU/9Yk5fVLb4Hh8VbPsxx6np6TgCm/2uRRgPd4MSFMPYfJfJrH5P2p
QiS1L4KeM62awXZn3p7CaiakUQIQTiTgH92QAic8WTbAjx0to0e/R+GNbbecFqJuEvco1ZQYCshLe+YpeGqKtFAlEoTCP0fRWNY+KbfOuEbrk5tSuyAy/ue17satkhv9dLWk52EQhqpOl2/M2vhbLi42TV+GkZWfwVQ+EQzv3ZwvzHIa5ls/QcasHNrSiJIuBskphRsJOT9D5xDy22wPD1GIO9rXCNQ1G/9nqiQam4Jai/Gc5KMpBWysNWui2/X8VNOR8qKICS8kaCXdKKbI34dbu62f01A9nSYG7FSX7jRBrN7ivy9BmfVl3DWcWLZ/+6DeB2/u4PApUi9F8PESAPWQhrASPaLzvegwoYC8OOAKzVH0Y3IqO/S6wpoHRmaAtiRxtbNZONP1muhc3f50XBw9kxkWQDFDCUzI4YHHddqYt+pp9FCGUHgOTtJ9nB8YD5HYsKhzQMQs6dkNoistCLwXLPthN3cHEKbQ/X0WEs+p5k7W55W5ppphH134sBS0a4kjqBvXcdS+xCtQTRUQL4KO3lWnXzftYxoKt7rpf58v7PUvjOn16mhg9r+xGiu86cIEBUAtVtWM5VQ3FDJNbWzhU0htWxo/d+l9PAiITbeXGqf8M2zj1jsJSp2yka4RW6uwFvL2HfBfK3AXmN+A3Y/TOXMea36i5uLhM8ZO2dKNwTg/gkGDm7K5Co6S9t7jieE+sk70sLzdOLzmXGVStmub12y16HDM6rVWInM5XuXLfhFxE8ds79MeMqa4wXQm+2yD+MXxPtCFsZD4R+kgxjApgsfCJaJyJHGlKo3EyErpgVM5fBAjmm0pvjPT5k/FUcQz3XL6S5QEaS+0tPnV4PUbthliv1UR2j62EeyEZf+Zs65YWJWIm+fCiL/Wc9duc4/+7KGEZBZrTwC+m/EyNk2TciboTvD2y1CgRrrslRyzCuU6t2FUtRlKiqP7AKAv7BsRqRCO7+eMzBjZ1pC1AVMGnTNmbLmrR4+SEV7Z8RwzLm+MsgAfhes9r0kK8JnoEPeeIOUYtvdntXxaULceLMQi6t5IEoIfBs5qoyZyZxXrLD1ul14nojCYJKCj8Gfd0zeOpL4OS+Am1LjaWnK+74EToH9vsk48sUjgwxdBWv3hUo1ywJQjOxJT5C4SfKSfkCaoTRECBP6EYRS2jij6qV30PPKe60AbRnzuo621ey/JoUN2ts+2WOIOHdad/5m5PnlhMILrL7ccRVRNggQ5b2jcycoYzpBhCujiIGuQ/GQcHK/9XjClXj4XMAtXDe0K9cZM63j6W5gUVZdn0xhZ34c85+/wPKtdN99bBfo5fXTHhM9VJMr0+tCGHy40sVGYfB1d0Y+rm6TJnt3HS2pAmxyGRyp7bMx/6WHW1nsYrQRrEEOjpJs3d6IvAQApfGVRsLo9li069jkCbr4b8QSGEn6/t/4m2kCETkK5uB7372O3VekbI4uY//edmeX+LldIMJSy+72uT2iczbDD6NtRhnKE+m1fzqi7aq+30wNrTgTcqBPyECkuK4v6wQUDmeEhD9p9fAH6rRCRZeRLUprr1VMiWfS57Pb9Y0u2/WJJdNHLbE2eFZu+agUWRDCToifH2xR5x//TOGVgFU5/h3z727jHPMCVfiILtsONYPK4nN+G4X9jfzs9qs0LSv23Dnj97KpkTYkOc0zZ1KBELw2U266xGpQc3q8JdHNTX+BLuRz0fBXp1r3++xIEd3rp4V+hRCFQh35BCYVN1rnqfnNSUQqeic/6GmRr34A6gPQbJPSAxpq532zlxTo1h5gkCNOplTKEmh+LXl5x+h3aWsCF7QQVHgdbphLXKxyU9XqC+GncfOnkfJ7qfsshuNfc+jOsllJTre0irMMuwWAUv0ONVfebZV7X5avqwP30xtdEl7neJvSIXc0TNrooZo0WzwhLUtP7n6ZjOZKxviT5rm0jUIQk3+je1HNhWQu61sZslazugiF++XKn3vr8lFzHwxwcX+u0Luw6wOIjfpDoTg+t0ff/PSezBfPI/cVHME7gZ9bNnh12bGzxeqZw/TTlj+IXHtu/RIuezBquOJccsKP39XFO0ESzB0fWuf5iCib2TnRmMJzn1G1QdBClSAmxtq1vSTvFjvOQk8X7J6YXXpbCqFFUP1KrWXsnkmTaDfUCOgNQdAa0+vF8J2sSVWBdRbqY9Dq+28jM2xnuyaeOi0u4yOw9ruuchZEh+07dMfMIeZG1gWRbGy03R9YpvmudPzJPaRtTcaAifnkmD4OBLxWJC/5p7YYHysnVGNNP2CdiRd7Vtt0Iq9Zym7T3SHrHZctyZsIb9KS7i9e1e6un7mlP1tIxAjmBbQvOhBe+HZi4Z0H+H7LDhLOBuy8jChoNSOrGPzAziDwWo4+SRSp5/arzxvrXlBaMAHAUrTU9uI8YXbkhRizkBLEPx6Xs17fcnuBjUVveekrgXJF+1p+SxjJcOslNjn3rkraCZ7Y2CogbiclFh6hFRvX9Z135zoK1hjQLWvxC+2Gb4sa248IQkgSVEASQ5HbmP4DpzcxSvq4T4AmDuGiARvl/Ib+9WJ/omJxEtyPO448A71l4CRb8ThoefJY9rPk5NPbZwu4U9xaBcvk4Mt20e/BMWH+L/uIdSiTt9PsyG6OPmva4R9Ee/MbG2YpfYw/FC2k3WzEKCwEmFTI/eT+omyAf1O2F4YV29O9NYvxuH3U3BgnlMH2ktTryVCmYcRVp48WbLlo+N1xtNyeOeIEf343U4hsG3wQdbLTX3W+T+ABuylIzotLrloNBM95h+55EJK9tVdgopumwtspqacDt3kxV6Y3n9fC4wLjUNq87M++TU1lKFK7RcO1kGuPU23Fxp0XvC2/7pjwGZOqgkKZ8YGfhCqO+fhBxCDdFm3m8geudYFk8BGeY3ruLQxgzkLWW02v4RaaSLzozAXwl4xRX9Mhb8OK4qhpuoMYWiXZ+VCa66AOf96Q8wt0X2KXqiHzbxDJmYg94hrpQhkYZFBVnD78XWb5fXKYMv2YbKMcsJCJSmP6rU8IGBQEE13tTavvzFj/ICuCwYvX86x2E5ou64k+M9Oo9L78bkpJ9SiTw/m/3p4S5ImD0BALOeDvR5fcJX7B3K+ehNnIrkXd1Pt0g/q+QDbmehbdNwbjZ0hwIgU4Hc+VcwZl+WAX6o27URdiubnUj+s5UpNr9eADgWKcfT82n5RYVDz7JPeE0rYN6dF8D8ZgUt8bpqvqc41QvOUYmaqFV+z7cMSpYlfV8cVMK/v4Z3M3Y7sffvl5r2BafQC+Xy65DOqgvBCcfKQ760YIwp60YpejRr1MJ81qb3vyIaoE+lpQeY5giXw+xxPCqXlhrWkgD8TcCXeA0plaeSLVx4IwkCYMAqcIJ8hyoapSoJ7GBLj2F7MgmMT2WTNNSd9EMIu9RrjiHyoLiqynonD9+SLqBGwdfi6AObJDY5RvwMvNeXgZJkBWjTL/bd5E8Lhn5dMU2HSRIYEKlDhpw8A2TUituGDNyaa3rOVFiJWtrXdDcRpXp04HNGLI9VhObSBYZqFYDkKu+qfWqIeumOWAHVWg5Kjp
XK44tjwnfiS1+ha4NwsNSseFhWI5qn+yl/0OMf33PvnFw9ERvzVk9MoCDFQOehRH36jChHObQ0ySI3CkrHcTLBgQ0iW5fMhpI5G+V7pJI/GuweQ4nPIVXwOFyue4ny4TnIpIIWA7DgF0OD3eA4GM++Bz17oW1DxHh4h8WvP9lQSvYk0ADsFy3d+suQSM6FjQ4DkUIE8jrF8hsl/WYBQiVzPONIsvG8L97RHXqKC5X9JCCXPqb4rmkUGuRXG4FFDgfGB6tLrsR0HQZxnBh0mMEF0sGbj7j31CJfImgViQRT7gzL0SPOIFZSFPhdIOLwpLhQZrtCIs4mGeSQ8jVr9uv0atJdNchdLAdmEphMKeoD/3mSnsiwn3s9Zck1eVpdnW1ly2nuRItGfkpOa8jCKSssWW9yBgfoKhQafR1/veOF1Pj1v0d1i52FmPTour+ybxcnRj5TkEt7BT/Z3S999eb8se/6ucv056uqgCASOw7GiovWCKTpL/ogxRpZuyA8TM1LL6vwcXr0fIJ3TefA54pKWWCw8gvVmnAn2pP6DImUILREfKN309eCVa2EHlOJCHNe47es45sEjxW9kqfU8PNjwEDIOU7jwnrIpEPoJP5V8KcncE6qOCa9j59BwOU0dSzl6fn68SkEk/tp8+qAEEiYulu78t7hMe3YkCaO7a559d8VEfBK5S6a3kCQfY9Twc/h9GR/Gj4Pg3KDPkhA7IVYBQk9VdtInv7fB5eyqdK10NV6oDNBB3/+SL+PGpene7g2KRl4zIu4AwyhCrSe4pWcXarwkGE5IJnwdympnkyAHEkWWn3Oqq2RS/NUt3MbV/OX/2t10DVxGhoO+CrjfmuoCTJuzIJ/w0fRi7Bc6y+5Z2omX6sghfH7wp0YwZWncNOeeZ/e2BIH+BqJ8CFO/1z9C07s/5/fwVyeoabqlc5L3Wpha6gLFYZCf3Hnx7LIbg06f7SJkjYt8ktfX/D6qHXFW/fj6LltB8Wyg/JvkFqI5bdPkg8uK5/yW9OgJ/QULR46nozU1EeEovtSDQPSLEo/Q9k5kSyxfN3iTm9EXGKqqoaNlWa/1RKafhveFYnAOj9UGBSQwjpsUJN7jsq84oSTGWSIXVCXha+AOetoARqkvc0wMn73bsKvpBhBUgNRRpGLhmS2/69YfE4r/qrlKrPkQm3jPyGODNkiJwbwK/qgsKmo/etP1RFOk4CGC8+haFAstkBnKQmMhpzT7uXspwZ3rhfp39sS2cqymE121UcybMgxDBDy5yAxF5laRnP3tpFbRHrXH01WXC11yF3hgGgRxA6N5cHz7owaottjiyXG2iJs8e8G+0GdIdiHfQSAxZTCiPTgKetq+GdNWk+w4AeuJXg80lFpt9rZ0F506YnhPWVSGu4K+speF+vbzsdUD/tXuqYfb9++rotoIkKBsQCXc0yDxFa1osulIZM+LxcBqRM1UceO8TgmKjEIUmY9wY37dvzGpjgkWvSAMxj2IAJ2ePFNnkf0IeI6tWF43v8N/nyIBMlbaaZ++mwYVx8oaQzGdF1wG4y4m6gNq866Rtsu8mH/APO3XUknEOXJzWfkhS2rTqQyb7DOcNStgZw0PdKDhfn2pVtUQBXQiu0ee2nPK9XLvPN6h55OicyETAJQSQLmdth3btkBYXQE5PH8YHFxNTd9cB0fsD0Qa0w58Ko3RcMrszIUsExp6YNpkFTWomygm+cBU9w3X1XUsWOVuroxZK9ufEcVZOIZ7oqhUYxk+IWWtn/V975//ZEDjIhkoy9hKgT+SPFnSzyQNNn4NxpatqV/heH0JHkj2rmkZYqKTAiljog90Pv6hThOrJBcO88IplR1eqCPgdh+o1tYmj/0PSoWc5fKdB6/ZI/PS+8LO/GmxibboRb3AR46w8WJdgfHUESJ81nC/uFVOKWkaNt981yXy0ET+p1/ff9dpHphRp3OWKY5hfnrN9mErY0A5i1AY7hTi+m+6pF3vQaqfGNC/kqGB/fcJp3AQKxBBOU6nv8hqfwlU57TDszVIE6xmLOvf1+/WOXaYakwCuBVs8/HiVRqHn3gaavNFvyWRI+mt/n4q1UMET59pqB++MEwvWpC76EwTrYfbSKnktrbVhjMrm8zlu/5AWUo2DKPY8TRd1w1P8d4UW+QixxFrCgd93vdXxyLe2Nt6cs+v2R4rpKdTv8w6j5QpS22HnR2/yCxQQEmnlcnuDu6Md5v45NDNOiFV9P+S2dfA/o35OPUZ+Snb6afuJ24ZOvdEcETm4Dht7jh/h5eAhQrF1lm+dSMnD4WAQ9/W1b8lq4cXzNQcX7OpYwmnovL3xnPLTi6CHsYrv5hGrFepJcEuqD8nVmPMTiE3WmUYp47s0cxDnv8v+XmUA8EBpjRXv7UqZfeeD59ZMQvVYuL/BC/ePIefRGg02yx19NaZky7H7kvgVGIjPPto+a6evvTtubMrKbbFtb7dXeykhjWhXjqbl5XOV0FUtdUvK8h++Zv/kYAc0WmFUtbgV+xpdibOrafAtPhQ3z1V/RH03eRwNeuGEI6fzmrparRWPCaQWyYI5HNxmoEOKYK8ofMzVk15N91hMg+houLG/HSXhnJUGl4fyDoNjKmVRHYk2UWWbtg0vc4v4sduYritP7a5JX7WFkQubtUB1dfg9L3tie/im5bF+6uzfuNtcynFLpFuVmVwKSj3ZDSAL/S86j6Zmo73kbeNKpYraxkFGSxAWmojQT5G4uy8K7t+dpgGt6NtHzP8JuPL+Mms2oiAEh2ZPJdV6zePKamWomXmxu2LiWFHAcWbtiOYWzBqHT9SNcwWsbsQRKJVk2VkNe6H/nLHVuGbkzf1Df89VsKu5F/i4EvIpQs5brlti+XnhTZO+nejWWoMszml+AKz/6Y0dTo706Emsusaq8U0Z1GY47JkKVAknWcxDKjyKVDks5pWBSqvnz8x3lg6j/rZ2kh6NIkMQ/6XicXC3c+vRamrwpcpN9k1RKMIBih9BAZkhCdeQ/2MQUsho4QS46v3ZOL63vyVG+d/HaLQX1v8c0+fVlsh73x6RXEW5CaNPs23v5ntxh37+DByk3k3Al++h4GhsDbti01SZOZBflvhaXxbdWLIDUGXMd8Rg77rib6kgvXdMO3r3HmmOS/OcU/BJRecXmTNuLFRPV92/7ureGUxDaEWamJ900vcUKgaENh7yV2DRBGBhiyRgxNpjTJhjXWN7P5r0b88DEm7Apn/+GCUz0rg9uabNVvPqMslWJ8skWc7z3CAQwB2HcXGjz8f3mjSjiDoZWbnp/wrFUmPAGzla8rw1HL8ccXnC7/UxQK+PzivCAG/3pdibGXtfbmubKVm8I2nq7D9MKDc7w6Q5NWRv0ygnIDqP66rUpIsb1uvbamocOR9yp/jPB/rLyOHS9M5Dgx5EZ3uhNzcjbVdwMWt232/avY3XmxaD7ELvjS3uuQodP8GuEVnIQhDe17XBdMqheL9u1dpJC1c/VnlshOtCJqlRrns7OEl1a5VsRuf52pobXqx3ZxGnj1npssiDKIp4uvUAnp10+IqjWXD379JQ8iD6Aa8DquM
Va0bxmWCsunpZfN99gHlpPZyogZgecvyQpi/Cv8Qt7ssqSIJbSHxdSDAdvRxE81fawbHk65EM56LKv8gce+YIU+UB0KAMNDk/1zuo2KnvBuSACRsDXue+6d+T8WLH5bbCcGFULTaItnixIwaPt4aoL4s4/V0tDHt79BqMdKaWlejxTG8hTJDrETMUxua/zr7GfCbUVFiq5s3ZI1ifOIJM2xLRqwrGZdL6kbluTB+F22i0ir5SjB+mLMuQ9B9nBdlJ5eZRkPjobeBuF4SAvBB6Olp8DmBZ6Hugdy1AikvkHSllYwmHN1kenu0OF8IwlPy/5KHbfn/y/4AuKlkmv/gGCih+BBVPywBe+L6KPlHHZ5i31jIpJb7/u1fEMQPRWBNrLIujCtyLonhUEfiz1FM+Of5m0dZZE5uJPvVYYWyi+tgIFd3xNUNkx3zN1AJay1w3FcbRjEPNuO1SeDJ8t3lPtmyxKGEEvM3Tm+ZYEOr31SFssQfuMejH6tgg/8kWmdzNTGGe29f9hUZII/9SUdEBTjk694aSM/1TmiUK9h+jHhRX9TLSHk8yUv0PDl0jhEgwAG99ZW3aO2T50Uv+pch7Wcf9a/lNM7tmI3rd5dTMZLzYJukRSpoUE2EVOdVA/Bz/2cojVJ9adysFf3aKQyX82lQvuVapUZKzTCPWwZS330aNnp2RzWWEITuIVKiff+GmZq63dRfqXJlULb1ukcYp6LnYyi7XmFCtpGrwj3mUwUAezGTN15lKrzYkLrDkP3oq2TRHqkMP1W2UmUz/E8bTwvfLWPwC6LYi4tvzMGKPHERHdfQ5rz2F9NxQgrnX7c/VgxeswUdAkpauWQN4DXY+zpxYBuKIOznmPEXNiHtZxifdSI4NFh+r2Ocrmn12i+S8GOpce8HC2Bfh/Jh8uIpgSGmtEdfxEWBWZK9jeRI5lkxxyUU9qXnEv0k2rPiOBlWzx84SAt3FuV5/Cm+65ydC1cwbEsvbWyNxfGp76mtn+WGb2H5vMtSq5LwDc0MOORdx6haMXtZBfUwjpCW+QP0x4hJX2pB4jnp+LQxMDWUx1+Pfomj29AsesbPWGGh5HHgQ1Du4kbReoA/nF209XcH+cS/FlKKid9iBMGhn6EblS6RmrtQJKvZk8gqrjjdrI9IJTT3CMaUEW+xCTVCqGL3JvW5a80ggsmGy+2lLSnke/23TtDqLreHZkpl0hn1zNqwu50wobXGVbG+h14AIYwPNSE6XBqyH5jf0CrNWUzw1Ok9x+waflqCw7Z86ZBsQmqjXVn/HLJnux+sEaXEgjxW1z8m5zzEQ8kSKI6hcTVLF/zmDV3yOnghv0I8URe8t+kxmDlOhJqplfiT5MlpObucp8s2HSw40JaJ0pnzugye9rMXp6KiOULvp6obLz8P7Q6sP4rlAFNPf/Fkc4qynU743Q/bEKz0d2wPxB73GTUG4JNqwyVMMbGqR+pWXH5h5QOXSQZgseAYe+7ILHD5hC+yesR1ZApNXdQocZSibpVvzVfJ4aO5P1mHgsKas0i8H/c6Y/N375kwNHiQGb7jyjkhmA+/jvDjtR4uiWgvvAF/VHxnPhoWytzmKgzRQ/oDBQWFnIW/vlXZUW8YBrWPBpNd/rqToNnwkXKMeXrcN5mfgd/QMq/OKjxTOD/o537F3F+pXC6ibf155eazJ5ywTa5tTXa7KZXpKlgvAOT9G7UchftqCRFXaYPr9ShkSzdt7RBwEwSuREW1/3MqJbJv0Q0GIPEfictpT+A8jpFEN1LooMG8Yk82MA6M8eEqsfEGu8+wA6Ok6HgbJvNJBkbmfs0LMtTzDSxHyYtZWH4+vjH4C/LMGpTLBNrMg/D+Du+7+bd1qkuMhH71sMUL7aPO0gvtsHTC5haZt334ILG+MbkZqUNLXk8fWCn2d+GjtCbp7lWBLsqa2KLh5YCWYwcCiSxOKbs/QAKh5De+o6UztADyrpO4Nuy60/Q4ZacBsJznKCivoiKudxPUl3yfaMdjYcb3/4cw6D48HMJn4jQ9NHgmKcHF3X7NeROt2ZufzBCAPOsHgUUJ5kfVjQFNm64kx72ccQFHwKa6aXZbNK/CXQZvv66yt3i05nka1diIZZ8scoTtUH+RSBDeTXT9laa3MhzZKjYezRNiL+QMyF87s8yIPKPRePdTnh6t8uqNmi4+y+4pXy3mTJwvYyd1assnexp1ow6iX+7WTblBSNZ0RVHULl/6mksuqY1hw31n66x8c72ZxH4Ji5/UjGha7vcrl0wnc+488twps6w19g/DAFxOjno1ET7Fa4vzBuL2IyXeu1EIEvE7GeSqH2Go08KzuORUlu5FNlM8LW2nMPVs0wlyJHzhBxcXzjAOcrrE8YMMj/L9fvmDkiyWc2tWkT6r4pXvrvV6TnBZv1h45479Up19nl4L/+f5iz0LSWsbEsxq7CflDdK+FBOiCqFJivz2BMOIg48G+VaDp//19l7Lkis5tuDX1GOXUYtH6qDW8qWNKqi15tdfMjJrpu/UtTEbs+5J2+fkTgaDwh0OrAXAgV9XmOnnbZNjjCSlB93crU+m9yq3NlhhMYvGceWdJwalC/w1+IrfT0JNfuUDtAR8dMiHrVL3exXFlneld2QcOFdz9wYqlOEnfUGx4yTJB/keNNr7iPANJabKfF/j5Eiy3zxMlTZaFLrs9c+W1jj9dK/rivd7AO8rcqnt9SyJ4Qy5va9RBe+XASU/v5KArD/mYETWGaX2QAOoIyDeG7T0St+n+d+SF4D/FmaTVqnZWObBcs+hV5tqBILMuwt1MyKdZdClwTZgFFC/8StTD9zv6yyZPVpxfn1SRUShTXZw/q4B5bPlTNrgrDAkgUJVBwiiiOsZPUmR3rveiAhl47NYA/L77p+aa33Hidon+1AtbNr23oDKAwYUhn4346qwIOTUQdiMspHvjHzhtf9UnuTFwQnDyy7W0jzGXcdAVdJbockW8lXJhUgVD3jd9pFKTd5qI5ouuE/EWZYnmBbtRwXFBWlv8W6oWbRlQZZj0XSUg12MMTQ+YeRSDTcyA/AXv0O12uYJG+dfsId/PbMSkHle95D5hr+/20as8JYDPQwFc5nCSCwcqb3cujR0XS0fxS1oVz+jUJKMygss1y14t7mt/Fg9vGZCnCOtPrEQd3vPwsH3xg2NvIevh1KJ6SvptO5sRDxTfijf5cjowiDGGV39XPQiiP1qV4Z2XpDqC/YitscqBtmrZ3vIbnPJc2DWZ09VV8voRQ8HQhNuM0/4Vsl7krAJ9eUhiBVeRR42smjZOsC7jp3FeEqOr7OOIDnH6TTNufpNGsgu/oxUVuLSbZHeNd97JRoR2Qdasln7PAd6+L0/kpsqqUySlgAQQ7OhFtx+9PO6Sa5Udb0m+qXutM/nVc6Qv+yZivjeETA1Nm/w/Nvk8UecQL2JEUTZlcasAzo/Af+2mTp7HR5ekG8ZmbNGBCFB8yjmsKsWr2N65FeOdXi9eCL7K1xKkSzGNd+SUZIyI8USOA3NMdC/t/BU5K+7UtObNpRLnubOiVkcOfU3P5lhagSKcTsIPx52sl+0KGo
a0EuIEYh6gUHrcXXLwJKuMshSLf7hNYV7q69hWl8qSGlHMOh6V7AGic/osoQPpRVNALHy4UOVMgH+pBYad/igmnIDEJJzzXCTvHgXs+p3EXAHhaBRWhZ4orgoGv6qf76SOEeSf92+ab2Ep/IFq7SV93tviORLkPX9NQv+jZK6BzjFR/xVnvHXgGembuR+REYdT+sSavM7YUETBXZR1fK3yMGyAhiXC79yY1qixe7mCeyp5Hewd+DZqyeOKXvQ5jQft9W0mhm8eaDu1ssbgW/fyeylmUi+r4zLIFTH6Wpxkcl2GqsIn/BY9b2d6+vvS1x9QRL1DzMQlw7ESwJ3QJomNyarz3zrxZltKtzq5q/svgMSDwxqE2UbKEidcl7yNefsBYkoeSbZjqh7B35C3o2YCPjWw+MR3DEC8XXWdk18a3yDLwg4evAjNX7s2I/c/Z1/HYVTzQhuQlAy7uYkjplMc+hxzJVFXchLNLoPBP6iKPL5QB/+Wj0XGS7KURM17sqdWcvtAt5r5bpVfpQA+Pp3DoXD9mN95OuB834W37TAFtojzfK9EBYmJj/IwS8FVfusfi8rCwp5QdDYbApRvzoRLHW/7tGAJJu3RQRPWn8r0Ca/9bkpsgf+Eubj2NfqVFlU1MmVyRFH/gSxa5Dk8LqE5O7wGkRfL+yZ+el9dXMAO7oUooUtlgvN+/AAIkNtwCL0t5YfGcfOl2v3plSZhyAs9w5aIobFEttb92l9dVS02weZQJmJjGJghvLh0Za3EjtANaHW31/nPgiNfJea2reZEUAuPQ8hiOMGm6yweMD4/Osu9uJ/VD0X+KFey4TUN69GXPzmFdCSxL8e630Dw+kQS/xZ/8PSCmWCwrkdtGWEREHwxqNefkW0wfLNJQbHd+B+2AVJ7XDbyDGUm1XJffhI9tiAaMDY/hyqL5iElz02o49OJM4MxFL7j+ge7SCrHCg4h1xyfVWpVVg159u7hc/Hy+gfaip1Rc+VeCzIl4TsfIohz+9pwviiuQGgYsKfZ6R/PjF0g0yhvyQZeYzCNxzWcXggnhPnNOkl3/pNaOuCDcUvfhUlgJiKl27CvG2XacnFFHEDoLzG5m0Ld4oz7QTL25Sat7SaukvKuzjg94QDugNaAdgKbLRc6+sr2p3bMGaEu29iJTIk37t7V6auIgRaNGfNMR1+7PpAQBux5KlBzotvoOpJtmL+z9PZ5IWUyrxaSD3UYOfb7UI4ulWGisil+QNY2y9y5DDctLd1pd7Hkvi8FeGHdfolconAPOw+knraCgzI35RTvg3yZdlnjvbYI1LH0D9SnnjXi/GLE6Ot/sjaAoaA3eBoGvZqCXTqPb1wg72m4hND7TgqS7w7CLJnt88LUjfbtNz4Wa1/vE+0jeOU2Jw8xDlsCeSuQO/+rBhff7W9PPzmiGvNUbVtHdyoUO87YiXUTFBKJXZgMyhtu1UBDLF1Ld8LWIT6z0YU/kOVb6zEBanL2+Z63CbyRA0U5zo2NXt147I2Utg3qIjLTVDeZYOZHQHy5M+qcIIdi/Vio6+g8xoS2aiKjTkzWCXicQOZKSYffPbRSs4ZPoFBUtAlkT70qKBH8DqKVD/fEGpZO5rLQprmwyrE6270ZakqBrr2CPQrvhQmdbMMfLjO5K+7W87usWSfr/aAo6lbE58PZ160gWW2m5YwYObGMYu3Tr4UBEGVVjlGYr6ki/AjMIMGZFkwvDkte03iebDMrVqoLIWawm9jKpBpKqyLb6dXnmEefM6P35GkZBc9Px1Y6eNKfVDicM8LMX2zZSefgD5GXwweCF7KMlG8/M77V5Joa34NlhyoKzJokjW/xUbp0PimKxIoTQrdhPpQ0kZvBuuZ1nCV3Q4QGiKyaMdSOOO34xpEnB0Om28BvTl+k1YJr9VkFWmL+rT2hb5KihPWI1BuvkbLdVtK037u3/0jin90T8+/ZSj526NTfKkbsZUiRbc2oufG3X5s6xc8iHrulSbnqtbFXsuGmTkrU5HDsB4Z+ExvthZrECTl9Y9KZLz8K0q+MlllvbSK4dvxVceqKxVzNMt61OEkzIYbDKkmz4V+pleS0bov5rYyETcPSWdZCYQpPPkrgW8oAqRQ9mRkIhXDQMLszcjsOQoXMusF+y3NWTNm64fEwn6Ueu9/OeHk1C97p/aWd+xB5S7vWir4lcKylaIYENpCBZROKyNZ1KINe3R2+mN/NOMeUDi7r8cMEdolt+md5CxmZSmU61V8jYiMWRfUALCitC/57aoIsfcm3EyVoMk3/1EanElLxJOWlTQkS9/NW7y9mtf4HAqvPPbDuI1KatLbb27P6pAmBqumXZZzfKNr9Q0Hd7DpesSB1BnPVHHm20RH3iX3RkdoTeiJ11KsbtJ9czeQgPwPc/m1gAB/hYLDaAWkZ4YYKy/XEHNsBdmHtFDmThzg4u3UzTP9LyTZbt0vLn8Tf4SBri6IC036YQua/miJBBDNSOzm36UPgF4m8XCa6JW1E2L3eAzz3KGDIHj9/HcmexST8oA3q6otfVJHB+t3560PDv0BB+/OBR7+vE6aFxYw/JY613CLqzm3IWSsFvtYkdQdqpZ9ptPshK3sjjRacQX+KOUJHB98Wb/rhivOao4T4wuD0hIdvImZRMv3mCUqiO+7cvziVd9XmSsh8GBB62H7awTVn+DyZFVj5elDWDL6th2Q8KKaliO0/c9pWEUsdLSF3q9PLv9qsK9Z1iFMygw8p2vCQ7N05lK7epuO9hyyxx4u6JuTgrc9UhDae2Mmyunpm2Hn8gkfeTm3rwzfKupSAhia2yCPrMAqFGxt8OtRJM6+uyAo1PmuT2xAYzjyDOhwSHK8PrM8UB/Oj+McVnvko4L4SWYis1Xvs3gA96NmTYsyy00Qh3Nfj2Yp3ZV4FGcl7g/0lpPoumwB3viI4VAfe9g4lsDsW3dmXkrWb/hwnVcjy0WDVvQImahX4Q6LfaOYXhC7f/B0RaqYDYAnzBA42guG05l1HS+wLYsRuPqtnV2IDHicyMgqWwuljErnmTuRk1uSh35n2nUJ1NU3Vi0YqIm8+bUqhTC0PLiECG4i7YMt3yie+W6GzeyHovfI0hecNV0EvaXeQLzy3PTN7JiQqfgi7HOpuLtU8gPNX0Ux+qMKrI8xzSZu2Pq7Yy9lxqEWtyxHQcI5o3KxbEmFamAY7dC24Bx+GB1xSBXoX26cbF9wbGKzFh4zK0NKzdkv7sz70U9z83042KjBd13XtlMquKDcZNipk0Rqn9czEY03C77AW1K8ZZASs/hZQ4gQoowkgqthwqYSyFTWZU8O08yacMofhbbugFfKHtQQza07tjTOcqEVh4rDWWU819Z2R+PFs5Nuunom9rzd18lbioh3XAPBXtBnF5PwC69F36/gf43ujkgutXBRpSrQ9Wco8Oqk+0VopyaICRBGsXSsJfFGg63KmOMmsr0g+z4Q03AuDAXoPXRTGO4Fb+8gKAA4xCu2kJfqCY/IuTp1r5ijDa5cpsOKTNcJuQkrrNnHq9V1TG3HpZs85NG5sJd9C+uVx0PbSy+OVQXsmUa58q77Dd
R09kw8/P+iln743O8aepld3cM75ZpsYNmuDZUilivKUs1zu5qhBBRL8pXj/GGn3/brrf56LjtqD746z34Aeeo6jbgW+7wNzT5hlXpL4d/ywbg2Noy8MEw30x4fWaAU+WtAF0O2NQj4W1RDpqx7r3Hnptc0dzW8hui2qeHtzs/VQc0LG06u/LRUP6xE+9l1aTsUKeWuub6R8d6O9M71oyEVhOX6brpwdmFji2Lyrkkzaz0wQfQYKmB4A3DYh4eSXx0jHPiDvDY8F8tnEuD7k1m36XCyU1eYYyXawWngeXLmd9ktg6VIAOA9Z1KvbOXipuXDUxkEdZHEnRvpWUebak0v9HwLcHoWGUDmfFQAraBFKlAeioKgMKVpfhZX3eD6yKpQiEWiBeFpslFjqYVLLKbJPF/co70QCtHJCeQqwDWUHVonBXC9Klf9ITWvGOU9sH70Q2XNDNRaVwFU2LZiZaIZOsrdzj5Qbl7ENvTFBCfX0tyavA+k8qElFFiGRK3Pv4QWvrHvuULuRIzDoxDZhXyjunkV10Tlx2br+fs5EWER8nHeRT5/7ecSQx9HeOQ0kD6YspAzc4ubDlM8Ucj8aenCXvs7sbROvX2Z9LfHaiXb+c0cD+CMFrB2Du9cVxsuTMIzopjh9hLlqN8z63x/EcVapSnGhVFxpT2PIAkBlDZ27jBfKY7lU4yaIgmquJxGKjTjTJV83NC8JSpw43qZp3mNpjtNj+cdJ19GvWW5+p7WDbSHV0G+RR8A7gfLW/ksn690IXN5ttQFjhP9uxWb5QshDK3Z6nfRZt1wpU2KyDai2MJKEAU2+jqHDN0YAOV+QAem1KD9WJRX/+jnGPWDJzWn2GVxEiK+PddY4BJxqYQLtjMIFs2KbtjUbrXpo/9kj/5W0Rlj2oYR+JSb2WbWlS9VOkxGXtv39CabneKEMVUmn41lfhmEZc3rfrhMZxnhHRFY9TPAbyeXtAJtvRhmfpFLcqi+HDcWvfoHwsH+9ocGcXsjA5nIQirlC4B8qp7tZmBjJMFmT1jltWvnbpVErVRovtk2SVfp1eAjxPmn/YjWIwLb4sZQvDMKxGMUjH7FSIRQr3mlkjChO8Um0J86Gl+wwXjM66dl89cyJ/ftbpxMnFHH6Lc2HDFgb2XYAGY1ARljCPs+fV8JNOwJIhuG88/JAksxkWNV7QMIHF4uXh2gYeGueMuU2uAQyH0/FY4szSj2aQfPYHfjwKhp0GbPXAyJ1/sa4NtBXtkggrRCZdNjuSJbHMTgE80ueV8ITVwy3upK7Qwlj3IV+g5S4qKbIFCvN3d4KJnFgljpY4HDXGBsWsL1dMxNLMSVJscPG3H9PIq7oYy+o+OH1bConRMc1qRwZdLa3wn34I/5y22tgIrMjk+sQCO3IbyTWc6udOzK8fVjCmGNTNsxWd+0QuTCRbD/9Lnl3AeIfY83EyyjAc5G3SZujEdu4VCWQsa2XmMB6m8xxqaxzwkU8PKcg3AM91/VGQfj+nuONhi394daN8bHBO9j5z+vMxONJJL3KaFPfbayv4uDfH1NtcYSuimgerWZv8RWOc2vqnS8UFaU1xmevx50/EARu7I1Nq3X0PvVc/VTr+k5fLNX19GvpruJ9hFfL3EMUVFjnHxA9nxEzEm3ac+SnoU9hBeWDHAOukR5VBcC/yomHqNYw9s8xJ14c+iXlQavTA/+Lpei+2D2GYogLb+5DYM6HGJVHKyBz6BRvdEyqHaJFReVGW88EXAJZeF1XTXvZqnPB1M4aioXRBFgaSo2PXBEt5Pn7Ka05N1UosPSZxTbSYXIhU6ieBFCQg0QJYMF79p9p7AmIgKt3iqVAL4w0dg49AYqjMb4Z3vg/mC5c1MKDCqLFu0OTVJHZObmshLgm/+nGmSNmJUoFJNbYR9RDeH4UuiYRa/XCtJlxftmmTL3vQTHmu5EZIrbQNj5SHSMtG+ZIgk8J7RKN9p6J79S8SkfxrHM4wv2iUGC73d5Og7qyuhAFYFtoyix2J4eRHzzQqoVs1qd3dxv0wuBF9OOXYUPCGaBSqSV4+2AwGfn+ctPdyHf+KrPog0hcfx5w9QUNBgfdtVD2VcTOuPEORFHWI7EKIT00iAdIA0PiKdWs7IcVMOEylAtO4BJiuWaO8ycaNO/NR2LEwqSg3JhTgk6s4JR33h92K/LI7FoNGxqgQLr+VuGsrcn1Oll2Uv6sm0fQFuWeFoGRlk7Uszrv2rcVbfBG9gE3iRBVmJgPvRH5qzOiS+BfThemUxvrEhr5zHA7JFJV0+0k0gXPNSDgOIGZijucSs6wephFl+LLDFqgdTCzBVxuiL+sYORDTAgO9UmAJ21clQW1+7CUV/lFbzZa1bRL3ke9srec+sXr3vKFd1vq85+UzK4dcai7y/kWFhOQYedvTLCQlpSO6T1UkzrSkld+r06LHWGU0TfwS4oCyqcuAiFbW2Bww3fpY5xRqqP5wdK7mzwDQEE3wLV9ExZYt68bxhLmv4x/Hc9ir28IqlqCHSWUC71zv7hAtO2Q/prGmrhrbHMt0Wa+1w8MGJVIVyCM6/nfmvIX8VcK0+/DaTzOPzm4fBH8q1qk50+8pyndZCi7L9mjLnXzjkJwVKLgsj4stve6GKwdrEhuTnRnLquo1P26gl+JFlYFlO38KprRfBRBUc/yqoHsGEoz0acI3AAJju6Dcjf8t10yqcnHPy6LQqGYQycGMoFQCl+Cj+YzojFXqsrlY6ZKu32kivqZ1JVTDnAW9JQdGhUOxW8C6Fiep+wt2Z/F6pGXi79u2ayklx7j8sqLnQOYOnnl043ZjCR6vfzrGmwXCoP9Yvqe/EpCoeVVUG8i5wxy3B/HXWI+MdxTQM2nJIkU7P6wiwro46nba0LFmhFPEpaLJIPtvH8orUegBhJx5Jmz9Ky7TbxFMapfQu7oWlOPMSuu2wzys6bX9+dTJ4DrI7HB103YEnKXw9gA6i8DyEn2Xtf1XlBDRCqJTwwacxG6Ias3utrqMx76OjmmhPwSLhuCQdLls4+y1/1/17cYLs/z/55YyE02oL5kBeP5g1W5pW3lY9ARpq/ovPg0DV2ycxr6LCnBA20t4MYl6V6y6xxtxy782w5wp1eG0uregtyQtZWBOum+beiFbhfwvld+tfnzeSGI00rK/2LQ77oCY2dy7Fjf/YPY4/6OU2DGEDWCTYVpVxm6Yz3UeX+a3t0P8vgD/55TVA9uA9Nx8x3sVA+EZYqo6A+PdvZ2Vrg+1LzZzyBRu2Luq64xC9NxYlQjm55a0Awm7ganQ05EdO6ucS/+3b/Ok2YJU5WxlpJRZCujTu5UDjFOe+2CpoUJ+ja+cpst1FyUTiIjaXi4IhljTO8G9lKPuIA4nOQptGOFHxTr3NHL+Rkh733kXsW/v4QkdZ4pcRRaM7jdOhcbrVlGqdrAVP8ckdemWcZF6iW4le65radzp8YlohydVhPSRnd59kLshgsrlRyaFE2HD5Z60bqObsIit8jIl/Rui0/KiPwDSGCJDNb8jTg0Kg0bX3QTd2cRh5j39nGAtQVfCPF2ZbzK9GkMpDkviQSln+FY
wcz2Ngp8Tpxsgl8ksIKwjsQ61MipSnnLl3OjP0r0crQlEXgnsuLtmUJOQ7KhCACBzi66NljwGJ4Af5tb4/1QXHiJCJXD182QaidqqumoI8qO+0EBMky/noJ36/r8H2IpL2/O3FF+Q6BvWce3a84NMKCO947TjIvfONmaXUjUY4DNxKorRYPQAuUr6dYLFIK2g+afQPUhggahmxI+Jteww8Zgmpcn+v4z8AVX4UWfBM1v5tS3a3923RrWISGrWyulaFnNo4YgoKgPxpMV/lBEuUYgKe1qUt68xGrwUuL9rUzKsT4gCy25AX3aOZV8UFA66PLi5JdxB6q83ri4CAiKXh/1pCq6MeObHrOpfDHh/NzVzk7T5LP2HkHIenZVKtcXN1hPASXyw2m/yZNF4KF0u0N22diNLdhBMty3xDIqsTiH+gEJjDAruFYW+lYnuMv7Fl9jUzgQnKrmL236Ks7wJL70mpbjb5Vr4x1liVruu2DaCcgIS3ZRg3Dm8jkojBlhYTPmW2vizdJkuVB34lXISJXgb3Kv2rAGcF0gHZpTznYZ8REvKJGZEFPUCGP+Pjb3lTnYBaSJ05iZ2T3XTIW3q3bSea+0skoiCJd5mVG4ICkIk2ek7eHpWSGA7n2Drryorv2FITfZ3iVd7cJP/DF4laFGwvgOiQQ2X/7W8MEco9g3XHGUP0QuX+h8SUq6OzfLlIUc6qFUOdFXZ0vs6vhYooWqlYWlYZSBW4nXG3+SrbAeqWs7df7njee+vfiBoG+bkn5sf1skd942RqYqQdwp+w7kJ9KzXc7/AVb4fNKudoWAzoHWw9TGb6Luqr1+tYXJd6kRKJt434m0p00xtmQLcUxilaUq7QKjXPmLuHht7FEo7QJcvOvm572CbGe/trd2adAOjdTr5EijP64H8JwvcuTZoi7EVr7WtN5KosAkgDrdhTcsOQM2DEK+kp9pfF4687SmW7vwp6Eg/8I0C1NO9qDMDUaEeQsa4DUNwSJXQewp+eEQBFbAlkUAENfFK7L01daBGQj6S7kBvhd8s7pxvzc0awpam+JDjOmaZQKIYa1CJSKOlrTsla32ymzImUTUpv0AgwXC8HdP2E8eeUVXXua3L6/PwJr9dbrYAO/Lv5527zQ4KVoNPaZqq+ikSYEip1ao7+CMnT8q4/+tkGjv5iBPBiwqQFNoU6WyaBemcWHD/EFuHBCySnWHIoaVaSFmtD0yCUdwx6fwbFONDJ5bO7rNuYf9Lj/GoBCgV8hif8umX3l3J/L0y/4T/c9HkNSSlgTfeLYjWOoAQFkt8gwEWMYBEBpbUPbU1OxbuKjd/R8xbWZzTm2Myw1nMBQfq1Y/7ot6fE4QlOPnJ7MT9WNXpHSuAix137SbWFIQOT8bTDByLwORltd+IR6/3At7zT2ZnYM8w/4MRWPtAJjPOf9+g/4AWSQqE6w/J/cFbdT/B/a/WUDVv0PCPhz4p7Pa37+ORF8D8HcP2CmO4V86PJ1fiA28PfTt0XB+43r78nY3yscVbaWf44hwD/JPwfLvCrKv/dH4H8if78bL38OFf/X1X/VkH73fDX7yeQPwPv7CL/fIaDK/t9e4++F97jd8j/n/TmwrFf798BSxuP7a9XFxfM3/b5ylcatEid5awxLtVZD/3yeDOs6dM8J7fsBHadNMQ9bnzFDO8zP51n+jbd2/S9XoNqqeL+5DuNzNF7GPH3f+Vud+fPQ9O+G1L+OAv868l4qXuN/wNSffz4IuH/AGVN5tG4dgCwUwzutmu2WnFs8v0nh8z/GYKj373clyefzt3dxLWd6FtLrYOb4RfLq4deOkEreSlQ8O82lTnFjlkZdtKo51q6n+KvxiLXGDKlnup1im6j1ZpM9HPeVbdsugGZEZYnp2YILOImW0uAUI5qZGs3RjhzMxg8uQ53VPKpm82ydGoB0jNxW/TW5JABU+HWus08Ma0AHO3wpgiIo63v4V19tCzMa0tXB/PxML18rDPFzTODZBpZpwJnEw7aYo5A2GMfW3Afh7E71jzNx8IrmuAbhr/Nqh3BX25M3eQfb5hnRvy3xqYBVcd9CkWZr0O77Vty1vWXmaP2RXZi90QZM92DwtFfmcvADre9DcFSIvJ4pL1QSPI0/HxgTQfBF8B9TZSmo3BOMFEAfKsrFrJjoY6CHQNnI3qK+l26ByLcXmh3LLFfbBbJDylJsidpDYL96aye/5YFnxcFSh3Zs3sFnid36YVtkIa1Ki9GhWxISBnVQIH3oz4974GdfYkZ5+KwZRfJolXWte9dmTbCr6+W4OHLESkvUPSSXO/URVdHeRZ2KCfL3ZVuAH/kl7WVbigzJryPHJCQ3HBrSOE1ZTb6JYm/ZLUV+xhGgHCTPkHa+M6VKv/h+pA4HrMY29KjXi1iQbDtCceRESrJaEW2kJeiwDd9P0Mu+oV8Ok1pAD3WlffPW0UB67ivZ3uZ5cvdYYtGut1X6DeXr1V1xcrlU1axcnxSyegLSXbngGgQfm0qS72l/Qp8833DMGIo3jQVm1RL7eR+M+U72m+ULrXNadsvhtSseQ7+cDaXwjtuyResZZBXSJIXeyT7SuVWx4V9lo20HskdcgL5N2RphMKLxJFaL3hawv+J5L2dSffv2A+xFmfSDNorMTZAve4T9u2uQb44HdGjtGPodaNoCE59+O8bEW5jxjc6onaM+z/Z1fNviR4mWVTm0Uf2eMeKZSCQ/E/J9t+zjgeH3u8FZBSXWmiplSoWCVlMH8TUW548nhM/PbZ5k/1kS8iMApDsarjMwCGXqNKUgyLDsyUeWJjnKeQdqgexjNZ1kN0pNDg3AFhRgsLhWqGrh82eq7IpmY2eunF4x8IJUh5DGkyNmZ1jdyJmMUlotXlStRKxCGAukT6WgCI/A2fOHPlLhqoVWrAX7exPZg1uiRxoEc/dSVAEZQhoNaflQSKCZEknTKlR2wz5H1o74k11dlie7nB4HdhxYQMu5VWOLTPXatVmxgopwCjzLJ1dRBbdveP0oEJ0dw9WNIa7qjgWwdvmXsxUKj6Atne7idGLZmDqxFGG8EvWwfyJVFPwo8otABSgivxYwy9eOYNifsSTUv63JL8a5TIpaOzgpB0xn6xvHFWPAHkXvvCbeiHdAih/h0j9Gx8pxNxYMBYGWTFYfD875ZyhIIBJMDsTI7/lziEe7GpOjfzTdIZ8jHnufqeEZc3pXIQgtMIXc6mo3wCMr4q3d6i1CPR2/360HWdqizhdOn7vPYH8ZsEaPr2vjC5OPzhUttf9tqF/mOJsECwi4YcwoeXHGzJBKoYYhxqR85tFDWEtVHjN54azfC3H3xNXpMjHwzui5SoSHJsabthZ23LaUbD5PmYNQi0AdPvdA49qc1plnBo+W4NcAIM327dBhlptzDg6fOWLutHYxz3NS2e1RVqNaNvEokSJqnjgT+zBItfyKg6NL09gVKZOCR11Lago14VVbv922PG94h/i1GytX+YZHbgLXr1LuYYYrtNdnqdctNXi8+UpGuRkn2gfYBUraI52Xb1+YK5qRuy0Bg3+252JasJycywOX38RVe9bSKUrZ
5oiNLXkSkHKrq+l+1Y1K8kmRnC8/K2Y21WOteh6c1W8+9JtWD5gPEYdW4bI0Zl+oUebK74W8iQRLLLr+WVSP5dGNk9CJX6mlO2evCUXG9sEfn8kenN7sBWZpee5UQL2abWIc6Xx0vaQtY7953WPPkqHIN0+dJjtTlKRdG+a/iTSvwSEIyoghcvV+UWEsYF5V5THKb9/Etx6jLDdb8vuBkw0W22yDo/JEkAxSolVAmBXguK4BuKl1xatLnf1vRUW9DF3+Vp8RbtDTmwa4NI9Hxzywe8mNz0XE3ssUQ0j90/ma/67o5o0rJZzalrQ4eT1KdR3cG3fdq+Nq2hXKBc8wQ0IYMv76IdGa5N4SCl08bNcUrsYlPl1eunJJne6jr2QlbqZRILa7SRhUHGXBP24R1dMmjA+UDcr8HNlXClZrdPTFh72PJO6JNJkVzjivQi7Hz9hHXRiGAIHTmbd++3cxV+IZw51IfLas9a0Oq1xv05VX++YbpEBZGwVD/Mb7kRqe7oRTS10q9cOsV4ca36UkWqMkz7PW32jifkrV4cCXrsz9yNmWrPA1PgjmIz2D/B1TWe2rOxdPzOCxFgIi1btc3/bIgT5VZv2sGvUoys3KK2gJZLSXJUUhcyqZc3wwo/rMPCatBpXS9x2HcnWJxSKuzVhJlAjVps9dPyPdHeuHdF2MSXxYO1M7AW1swgMVsJlLbjHiSyj8hb+hMc9wuNO1nbMVT4RSYs2KWaQZZ9r58YFaPI6znB2TA7rmQQ03LNoO61CY1TnP+qFBTbriyYQ60i8qsK4AEcpNptDA25yArQVMi3axCrIfZdKeee+Mt9vqJT8nwqUGUManW1PxTRCtrqiaumoEJK0409ehHa08X29zBdJiec4xkPAU1bZoOSelXGLK64UMjvzFR9TMgDs3rFdxUD18X1U+73VIE79dydBaR+kGh9ozfkl9UUfofPj2xWC0H1iZAxDGuSQCpZdJYdmaNa0JEUwMv2tcDWTKbqyEsel4sbQmsVdu12BpuBqqasNaFroA6JUGC//t3PT9KmiQZWGv3dzxIEuOGnhKPqREuCH7Cvz6zzJ6fgTD0t4UdoIo3qSr3m/U49FcdxmrJjhI3WJj+e5PUfVxCy7XP3JHWL3LCb+aFCgKwiobQl7NDGzBfr/7uh7fHv3SyNYeyzPIB1WIIlh9pND3mtGnwU2OK2mBTWz2toU9EL2rpCo+/PFR4VSe/HbfzUuYT0zYk1NjMVPd5BI5urQXVsD++teXDuPXfTddqi2D1XAAkqFilpKwmLYfgAqW3+DIzE1TejTKvnTBegHNu2XrajnvVXAC4h6uUmRjap8aJQSRBjlIW5IOvQRUlODt82cM99db/C0QUr8CLQqvck08b83OUBEiLvMX3fPFVamwKVgODoGdHZ6BRwC2H2ZCPjK6aWYVf52HFRCM/MG3F8MUu+6muiHU/hisNlbY04kYFErBlermEPh9s/v57uaZB+4/eP5IjpXX2wdTUGekYIfIsdMWmKQ7SF96yabFin2+oAYdgJpzXOihgrxXm1557AnKCiXaZ7VuK4R/+XYfwYvBFpdpaU5h3Xl9V9+g6j4rhKdG8QlWGWMPyPnbHkXQGwDqiT/IEtP7sYkCd4O/u1eBRq/99kxoQo21McROtKgwcGIydrKpGqsijPDlzMu2A/8eAZyySyXHP+Q73jFHrfHzVq/jMfi+VyG93aMXVYbz2PHEMdLTUYkvUgtZK+zBX8/A54fyGP1TLylb4V5MdX6dyUWEVosbc+hjsnvRvtoId06eD7UX3lb8kLZrd/YNFreS5Ij9rFSiF/Fh3nyHIga5pQ5l0+wLUmBxcgME8NEeLuP4dfk8MUed8f052o/1wPhhOiMAz8AXwfBIhumPYa1lTMc8quHLvnEYT9MC8M+DvuuCpLDZh/gyC0y/UAyeh0hf/jzaf/GOjccnfDvdnaeycLL41aDct30ZaFdiKnn5KbS2RgL6h61+jRmGMElWCPMyMPZNNphb8I14RYCo3SYQcRHoiOm0RCiO121WZheRWwb6lZDvbWLxiYVgcVqIXu3cy+TBHX8VDuwG2S8TaM8jzfgCHCj3HOZGcbAHdHGk2uzp68ANi9CePnGM0ijsGuwPAqd8uO1Ly+6wx6C+g8KvwI3QMJ6vQOQeD0GFhKPmh/cEyMGFuB/2xAtiCxRKeQgg7lvMK+FGRDb091mNWeCBUzAJYoU+yuNwX883BKjsi2d/+7bSCQMnbnrm2DB+zvEX5d7gDB379/sFSbS2hpS/yJyWqTyaaOjF85oM5lGj3dUXflH/GzUZ48gGBE1nATLIYUyz7HeZPspikl83KurEwRvYAjZIgr24fd7HeNj2m4h9sO5oBSS5ADAM0WsSQzLoLwugRrrdLZwlHUiaKvYCiO7D+co3AJVA2V6T3wViJiazkTtMwDXxN3pcEh9YxVFhEbsK6/4GMJ53MCjFSPmcRtcMsi4pwpW6gQFbw2WoxyB5hAXb0hjSygutIkeM9teaqP1kVavsxerJKGoaVk34gD/KcY3RySFiNnt7uATqnpyqEBjIAQFkVaVJwwNAxUb0l1s+hkrgaL8iRA0B9/tupPBtycZbRtz41HSWJXQmnkD6N1kXufuApO4SBq90lW0ViJVBzMhRE4cotvwZtTbCFvzQDMy28is8bpsvMN/t/QCCzWw5e7NsvZaXuIMGRPNEDVQs4BVHr2um5NpqlINiUsBP0ZLMbsYwzlq9+aNivY3KUAy7/G6//h9aG6sFihjGdWQU1IkDOMi45A2uNaUNx2Gx5sTNuvQIv/fRIZntMUnQmmWeZoFzdAGFveWrXb0MJptzY/mx7znvAZzJuMR5vZTGgu3JGbt7qk/wH6kLwJSpprLIP1vSNBtWeTFhEg+/woN8sMDRC9+B5K3GiDoE6t2pYZAHXz43giO3IM31Bh4ONRkpMX19bAfJ/QDm0X0Y4xL3jKwL22je7b54/MK2D5r/4ba3zzO/POA8DusFNdduaj5seRRBDsj+YCUN4t2gqIAf8Mphzi+mcVR/fh/yG2i6PVV8DXBL8jnhAI71Fv3DvTLnS3BIy/n8cxrQTcG2olbiDegjJsxrKlfzF9noWnAtbdApTjX+xIgQiQtcEHeXjq2sVGREy18N7UT+y08jY16+g/BzXPmrVQPi/sEyvY+Cr6/WMiIMoLBFWVIlo4MB1oUqbw+zI16sOiHKtzEtH/lgxbjQOhVbIcO/7hhKuQgfC+vM68of+1wM7xKVTw6NPFmdVmWeBush4NLD0EsFWKYus0Uphy1kWXSaetsSYxtSInlFeKFMW9VDiWsrkuzkyoe1nYGH7aSBERQ7byrsEgGSSk3UlRn9ZjmZ38dpHSdedBjOSjpZ/Krd6iwSAfjjleMhXOtYOFnhAYfI3n/N7ve+dO+CtDo8pMdi4Ay2DdODiwYJeajPRN+f5rS0tHcvqlfgcLjpX85g8QcHmn7Wj1cPqpQnLFLgtneBLvwBUkBr0Mi8qjP0+SwZM5+zs0yenNtoCOgPNOGryv6Z8/4irqpZycUWTzTGPK4pXQ2
UmBbbr5GNi1wmL31Aw08Lzbye2e74AWd20MQodF8WesnICYzYCFYCSB7MYTIsFc5cSDzz3rXy0n826bjJKegZkpZsL2Sr8xRToyQMFQ+S4JP0Ltb8TEsC50UnDB4z8zxaPkchOTLpZIILu3IdmuYWGXVP9cHwHKUgI9es3DuPLnq161t9ZrUZN0hPY35rJXwY2kfgqnqEz73lZtpcFP3E/iTFDhM7itQ+PO1gGs12hxUVo3o6T7phgVlA/JJWJSlySRfUnW7W7aZouMJTEGKyglW5z5zbqmKVLxt7wBjuDVvPhdN1ZJdrV484SR08xOtwpZsvlcS3Dh31Vi9PS4Gm4da7a0aNQK1HkUhryBb05gyFwwhgNdlUypa5Ctot+jBm04nRZSV3TarCDtKVauguklQOYEqh1uMJcuTlZ/koFZe08vz6da2JWSsuUiSJ2j7H9UyFwQReDrPvItbPK20lrX5VToB7pUh+umlwkF+TpzF6XsPXbTJQi/Znegj1h7QuOmZt2WcuQbizrVXMe3pBbv/rJcWWOA/jIJIan8IM3g1JsCwZObpFnRfIw/zbovSe+b0fMN8+bKpfLqb+YOvVSsRJz9+k9syJTn7E/o33jHGCmesaSoQ+6b4cFMRDznPDGdzsowuP1U2yN9b1dZZnXY4TsFCP1bmp6vpjbvzHYIBr5J++sKYLkyevhKfvhwEl6EbDtcKsiIvmPCinfIzjVEFnIWd2w6O1SLPUFhg9pdm6JzLv4kKzEtPvxWJMu8i+Mxp0p8VL3s+RpUTJnCXNqTk294zzu9HEhQ80CulBaM7vjOkFkumD44FewIKxxHY0V02NP1VoY99M7DNZ/Xob7vsmvyaD3KCuvI4OtjYWoYL82fIGsx0S6OcW+UvSZrl4VKA/+r+diS9IOf68uP7bST1CUbxyXVv04g8yDdg2M8civpsAHsL9jgU6PjpNgcYTiLtBll4PX65QUytwwNWBhfDu0hyYOI5mZUzvRA+n1v25kkGM8AXHFDDA93HPWDxuvMRf36EkCFJjh/D4uU7GFatZLrZD3eynT3I2lejjgZesGWv+DyS4HyxvfPYz0GUTTIn/8DBJ1ERZMSQpFM9omOBYewSg46tweL1F0fJOot7FzFkmRv9ryNE8Gpcq6KACkvQWY5qTU4s+xOcD+IOp/plPoSo4jQPazzhbD2ysPFP2M7m2uIa7M3zSI0BA15BujK20BPSUVR+V5WjizG3qCDCkDcVSS2nuR69CH8vxrFSp0YjQt27QE9tHdzp3Y680G5+OaKe8xHTO2xtltiXP1kUFgUXvEUWZ9zVx7p6V4D9zHvms/e6GUHhZISVpHi1lTDBvAuzIr2P242MHv0SnHoGBOzTQog+M7U8XaA+OujU++c6tN4LUVxAqVJ54e7QnK+LTvPIDJz6wQrsVrlGtRw1GHgPL7Bt7Go7n6n0s5KAvjvAkaTM7cR+lnWYrHFlZCQYllzw/N3Owt4t8jSciOImA13xi6bwSZGP/Oj8PJAHXqnrvUpKy28FjUN1/FhHWRFNKerLizlJk6bNpc7CMSTkoiIfAXNOMQ+EwcIXSPxNESYQdfVmUBiCLpslh5YYHmyDHr8QGD6oAI9tzHMfduLF/QdSbDspWfF+AbfHaU+mm8ge/HgW0WAMHbMH5x5sCkIbDUCYTile5J3BGsgP6oV4P2VAvduXTYxz2wX1FPgWIDZS0N3pSIi3R8vgt4agZNrIyDdqk0y2Q+S9EONPBlV649w8fnV1x1UdSBdKRe5grO4NvJftbCyeIyPCCk74i8NuB0xqs7wh6HkvWc/7HB7tIUZiNTmQ0nji3ImLtUx9sOK2U3cgzT3z58w9xoaO45Kj7sWIcttNinq5OA2rhMxkf8ThxrToP1MK4Twvj2cAfKmZEXqs6LkwjAXvMrAIgA3RsgC3ZAmf4kBSxH7lkBFqRrKnll+Y2PkrD8MfWnvBnKecGZGRenB9FpgXeUNYsNVAw3FcsSKi6BhYu0JwkshbVYI2TG1uGOrZcBOW9kggLu/Wh2HcK6NnPSkGdEcDKBdcdmMx9nwOo1M/l+sYxRv8aPUbGnOSSD+xCcBIhneqmyGJK6MbciFLi17iErwFE+qoy9n3ftkOyqSQKINYyu15if6kdYYNfw1BzG/3bdYvUfZnSxRvYopjeiCHCmqiq1DazEYqDoRI2VEZSH+XZE83107kdBgsoL9Kcxpq4Ai9x2mlnO1S1pqUIKgxvhIDotm18Z9EiHvuLpYAG6XD9TbsClHIS81vJu9C0CIIZJD8aPay/FIUmoqznsZuuLhFy6lZ7cIFFAFxzfVu70ujY9m95Wir6Kp4nRtrMLxRTALvs2g+10DSD+OpgNqA4cTK/iPbbuNTTLRl9ZFl8Q/X/g8kL7+b//5q8gKL/lrwAI//EwH/PXoD/G1IX0GZg0Y1i7sQ3Ukbou9xL/gME//9LXfgHBH9/f/47khfKdR2fwaD+mOYxWf65HlVX/DN9iwXy4zx8qzb/z9/ZL0cDERAhSQDFEAQgIAzC3lHmZyzMu50n/xMBgPP575/1WPxPCgD0vwsAhP979sq/Dv3X2f/Xsf/+2f/3xBWhWsu3mzvQDX21vrGi5+bYL+Ekef6BFe9vy7DNb7l2YM7Hd/6Hv8/yX6XmGZL31HLtnkdiwfdb6zw0+b+koR/6V5ieaWr/H4fivzLR5t//U55LV2XZexP6KKs1t8c4fe94zPErQT+Be0XnJyz/YxMJ/u8TCSP/PpEg8H+YSfj/+0y+Yjo82u///kx4XrVUhyx/z/hf \ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-deployer-monitors.png b/30-reference/configuration/images/cloud-pak-deployer-monitors.png new file mode 100644 index 000000000..77cc0d0b2 Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-deployer-monitors.png differ diff --git a/30-reference/configuration/images/cognos_authorization.png b/30-reference/configuration/images/cognos_authorization.png new file mode 100644 index 000000000..6f042f56f Binary files /dev/null and b/30-reference/configuration/images/cognos_authorization.png differ diff --git a/30-reference/configuration/images/cp4ba-installation.png b/30-reference/configuration/images/cp4ba-installation.png new file mode 100644 index 
000000000..85a4bb6ee Binary files /dev/null and b/30-reference/configuration/images/cp4ba-installation.png differ diff --git a/30-reference/configuration/images/cp4d_events.png b/30-reference/configuration/images/cp4d_events.png new file mode 100644 index 000000000..a94020363 Binary files /dev/null and b/30-reference/configuration/images/cp4d_events.png differ diff --git a/30-reference/configuration/images/cp4d_monitors.png b/30-reference/configuration/images/cp4d_monitors.png new file mode 100644 index 000000000..c5f86d62d Binary files /dev/null and b/30-reference/configuration/images/cp4d_monitors.png differ diff --git a/30-reference/configuration/images/ldap_user_groups.png b/30-reference/configuration/images/ldap_user_groups.png new file mode 100644 index 000000000..2fed31bff Binary files /dev/null and b/30-reference/configuration/images/ldap_user_groups.png differ diff --git a/30-reference/configuration/infrastructure/index.html b/30-reference/configuration/infrastructure/index.html new file mode 100644 index 000000000..dcf9e61eb --- /dev/null +++ b/30-reference/configuration/infrastructure/index.html @@ -0,0 +1,162 @@ + Infrastructure - Cloud Pak Deployer
Skip to content

Infrastructure🔗

For some of the cloud platforms, you must explicitly specify the infrastructure layer on which the OpenShift cluster(s) will be provisioned, or you can override the defaults.

For IBM Cloud, you can configure the VPC, subnets, NFS server(s), other Virtual Server Instance(s) and a number of other objects. When provisioning OpenShift on vSphere, you can configure data center, data store, network and virtual machine definitions. For Azure ARO you configure a single object with information about the virtual network (vnet) to be used and the node server profiles. When deploying OpenShift on AWS you can specify an EFS server if you want to use elastic storage.

This page lists all the objects you can configure for each of the supported cloud providers:

  • IBM Cloud
  • Microsoft Azure
  • Amazon AWS
  • vSphere

IBM Cloud🔗

For IBM Cloud, the following object types are supported:

IBM Cloud provider🔗

Defines the provider that Terraform will use for managing the IBM Cloud assets.

provider:
+- name: ibm
+  region: eu-de
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the provider No ibm
region Region to connect to Yes Any IBM Cloud region

IBM Cloud resource_group🔗

The resource group is for cloud asset grouping purposes. You can define multiple resource groups in your IBM Cloud account to group the provisioned assets. If you do not need to group your assets, choose default.

resource_group:
+- name: default
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the existing resource group Yes

IBM Cloud ssh_keys🔗

SSH keys to connect to Virtual Server Instances (VSIs). If you have VSIs in your VPC, you will need an SSH key to connect to them. SSH keys defined here will be looked up in the vault and created if they don't exist already.

ssh_keys:
+- name: vsi-access
+  managed: True
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the SSH key in IBM Cloud Yes
managed Determines if the SSH key will be created if it doesn't exist No True (default), False
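
Conversely, to reference an SSH key that already exists in IBM Cloud without the deployer creating it, managed can be set to False. A minimal sketch (the key name is illustrative):

ssh_keys:
+- name: existing-key
+  managed: False
+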

IBM Cloud security_rule🔗

Defines the services (or ports) which are allowed within the context of a VPC and/or VSI.

security_rule:
+- name: https
+  tcp: {port_min: 443, port_max: 443}
+- name: ssh
+  tcp: {port_min: 22, port_max: 22}
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the security rule Yes
tcp Range of tcp ports (port_min and port_max) to allow No 1-65535
udp Range of udp ports (port_min and port_max) to allow No 1-65535
icmp ICMP type and code for IPv4 (code and type) to allow No 0-255 for code, 0-254 for type
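
By analogy with the tcp example above, a udp rule uses the same shape. A sketch allowing DNS traffic (the rule name is illustrative):

security_rule:
+- name: dns
+  udp: {port_min: 53, port_max: 53}
+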

IBM Cloud vpc🔗

Defines the virtual private cloud which groups the provisioned objects (including VSIs and OpenShift cluster).

vpc:
+- name: sample
+  allow_inbound: ['ssh', 'https']
+  classic_access: false
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Private Cloud Yes
managed Controls whether the VPC is managed. The default is True. Only set to False if the VPC is not managed but only referenced by other objects such as transit gateways. No True (default), False
allow_inbound Security rules which are allowed for inbound traffic No Existing security_rule
classic_access Connect VPC to IBM Cloud classic infrastructure resources No false (default), true

IBM Cloud address_prefix🔗

Defines the zones used within the VPC, along with the subnet the addresses will be issued for.

address_prefix:
+- name: sample-zone-1
+  vpc: sample
+  zone: eu-de-1
+  cidr: 10.27.0.0/26
+- name: sample-zone-2
+  vpc: sample
+  zone: eu-de-2
+  cidr: 10.27.0.64/26
+- name: sample-zone-3
+  vpc: sample
+  zone: eu-de-3
+  cidr: 10.27.0.128/26
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the zone Yes
zone Zone in the IBM Cloud Yes
cidr Address range that IPs in this zone will fall into Yes
vpc Virtual Private Cloud this address prefix belongs to Yes, inferred from vpc Existing vpc

IBM Cloud subnet🔗

Defines the subnet that Virtual Server Instances and ROKS compute nodes will be attached to.

subnet:
+- name: sample-subnet-zone-1
+  address_prefix: sample-zone-1
+  ipv4_cidr_block: 10.27.0.0/26
+  zone: eu-de-1
+  vpc: sample
+  network_acl: sample-acl
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the subnet Yes
zone Zone this subnet belongs to Yes, inferred from address_prefix->zone
ipv4_cidr_block Address range that IPs in this subnet will fall into Yes, inferred from address_prefix->cidr Subrange of the zone's cidr
address_prefix Address prefix definition this subnet belongs to Yes, inferred from address_prefix Existing address_prefix
vpc Virtual Private Cloud this subnet belongs to Yes, inferred from address_prefix->vpc Existing vpc
network_acl Reference to the network access control list protecting this subnet No
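
Because zone, ipv4_cidr_block and vpc can all be inferred from the referenced address_prefix, a subnet definition can be kept minimal. A sketch, assuming the inference works as documented above:

subnet:
+- name: sample-subnet-zone-1
+  address_prefix: sample-zone-1
+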

IBM Cloud network_acl🔗

Defines the network access control list to be associated with subnets to allow or deny traffic from or to external connections. The rules are processed in sequence per direction. Rules that appear higher in the list will be processed first.

network_acl:
+- name: "{{ env_id }}-acl"
+  vpc_name: "{{ env_id }}"
+  rules:
+  - name: inbound-ssh
+    action: allow               # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: inbound
+    tcp:
+      source_port_min: 1        # optional
+      source_port_max: 65535    # optional
+      dest_port_min: 22         # optional
+      dest_port_max: 22         # optional
+  - name: output-udp
+    action: deny                # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: outbound
+    udp:
+      source_port_min: 1        # optional
+      source_port_max: 65535    # optional
+      dest_port_min: 1000       # optional
+      dest_port_max: 2000       # optional
+  - name: output-icmp
+    action: allow               # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: outbound
+    icmp:
+      code: 1
+      type: 1
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the network access control list Yes
vpc_name Virtual Private Cloud this network ACL belongs to Yes
rules Rules to be applied; every rule is an entry in the list Yes
rules.name Unique name of the rule Yes
rules.action Defines whether the traffic is allowed or denied Yes allow, deny
rules.source Source address range that defines the rule Yes
rules.destination Destination address range that defines the rule Yes
rules.direction Inbound or outbound direction of the traffic Yes inbound, outbound
rules.tcp Rule for TCP traffic No
rules.tcp.source_port_min Low value of the source port range No, default=1 1-65535
rules.tcp.source_port_max High value of the source port range No, default=65535 1-65535
rules.tcp.dest_port_min Low value of the destination port range No, default=1 1-65535
rules.tcp.dest_port_max High value of the destination port range No, default=65535 1-65535
rules.udp Rule for UDP traffic No
rules.udp.source_port_min Low value of the source port range No, default=1 1-65535
rules.udp.source_port_max High value of the source port range No, default=65535 1-65535
rules.udp.dest_port_min Low value of the destination port range No, default=1 1-65535
rules.udp.dest_port_max High value of the destination port range No, default=65535 1-65535
rules.icmp Rule for ICMP traffic No
rules.icmp.code ICMP traffic code No, default=all 0-255
rules.icmp.type ICMP traffic type No, default=all 0-254

IBM Cloud vsi🔗

Defines a Virtual Server Instance within the VPC.

vsi:
+- name: sample-bastion
+  infrastructure:
+    type: vpc
+    keys:
+    - "vsi-access"
+    image: ibm-redhat-8-3-minimal-amd64-3
+    subnet: sample-subnet-zone-1
+    primary_ipv4_address: 10.27.0.4
+    public_ip: True
+    vpc_name: sample
+    zone: eu-de-1
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Server Instance Yes
infrastructure Infrastructure attributes Yes
infrastructure.type Infrastructure type Yes vpc
infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True
infrastructure.keys List of SSH keys to attach to the VSI Yes, inferred from ssh_keys Existing ssh_keys
infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud
infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud
infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet
infrastructure.primary_ipv4_address IPv4 address that will be assigned to the VSI No If specified, address in the subnet range
infrastructure.public_ip Must a public IP address be attached to this VSI? No False (default), True
infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc
infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone

IBM Cloud transit_gateway🔗

Connects one or more VPCs to each other.

transit_gateway:
+- name: sample-tgw
+  location: eu-de
+  connections:
+  - vpc: other-vpc
+  - vpc: sample
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the transit gateway Yes
location IBM Cloud location of the transit gateway Yes
connections Defines which VPCs must be included in the transit gateway Yes
connections.vpc Defines the VPC to include. Every VPC must exist in the configuration, even if not managed by this configuration. When referencing an existing VPC, make sure that there is a vpc object of that name with managed set to False, as shown in the sketch below. Yes Existing vpc
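
For example, when the transit gateway above references an existing VPC named other-vpc that is not managed by this configuration, a minimal unmanaged vpc entry could look like this (the name other-vpc is illustrative):

vpc:
+- name: other-vpc
+  managed: False
+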

IBM Cloud nfs_server🔗

Defines a Virtual Server Instance within the VPC that will be used as an NFS server.

nfs_server:
+- name: sample-nfs
+  infrastructure:
+    type: vpc
+    vpc_name: sample
+    subnet: sample-subnet-zone-1
+    zone: eu-de-1
+    primary_ipv4_address: 10.27.0.5
+    image: ibm-redhat-8-3-minimal-amd64-3
+    profile: cx2-2x4
+    bastion_host: sample-bastion
+    storage_folder: /data/nfs
+    storage_profile: 10iops-tier
+    keys:
+      - "sample-nfs-provision"
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Server Instance Yes
infrastructure Infrastructure attributes Yes
infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud
infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud
infrastructure.type Type of infrastructure for the NFS server Yes vpc
infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc
infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet
infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone
infrastructure.primary_ipv4_address IPv4 address that will be assigned to the VSI No If specified, address in the subnet range
infrastructure.bastion_host Specify the VSI of the bastion to reach this NFS server No
infrastructure.storage_profile Storage profile that will be used Yes 3iops-tier, 5iops-tier, 10iops-tier
infrastructure.volume_size_gb Size of the NFS server data volume Yes
infrastructure.storage_folder Folder that holds the data; this folder will be mounted by the NFS storage class Yes
infrastructure.keys List of SSH keys to attach to the NFS server VSI Yes, inferred from ssh_keys Existing ssh_keys
infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True

IBM Cloud cos🔗

Defines an IBM Cloud Object Storage (COS) instance and allows you to create buckets.

cos:
+- name: "{{ env_id }}-cos"
+  plan: standard
+  location: global
+  serviceids:
+  - name: "{{ env_id }}-cos-serviceid"
+    roles: ["Manager", "Viewer", "Administrator"]
+  buckets:
+  - name: bucketone6c9d6840
+    cross_region_location: eu
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Cloud Object Storage instance Yes
plan Plan of the Cloud Object Storage instance Yes
location Location of the Cloud Object Storage instance Yes
serviceids Collection of references to defined serviceids No
serviceids.name Name of the serviceid Yes
serviceids.roles An array of strings to define which roles should be granted to the serviceid Yes
buckets Collection of buckets that should be created inside the cos instance No
buckets[].name Name of the bucket No
buckets[].storage_class Storage class of the bucket No standard (default), vault, cold, flex, smart
buckets[].endpoint_type Endpoint type of the bucket No public (default), private
buckets[].cross_region_location If you use this parameter, do not set single_site_location or region_location at the same time. Yes (one of) us, eu, ap
buckets[].region_location If you set this parameter, do not set single_site_location or cross_region_location at the same time. Yes (one of) au-syd, eu-de, eu-gb, jp-tok, us-east, us-south, ca-tor, jp-osa, br-sao
buckets[].single_site_location If you set this parameter, do not set region_location or cross_region_location at the same time. Yes (one of) ams03, che01, hkg02, mel01, mex01, mil01, mon01, osl01, par01, sjc04, sao01, seo01, sng01, and tor01
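
To illustrate the mutually exclusive location parameters, a bucket pinned to a single region would set region_location instead of cross_region_location. A sketch (bucket name and storage class are illustrative):

buckets:
+- name: bucketregional6c9d6840
+  storage_class: smart
+  region_location: eu-de
+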

serviceid🔗

Defines an iam_service_id that can be granted role-based access rights by attaching iam_policies to it.

serviceid:
+- name: sample-serviceid
+  description: to access ibmcloud services from external
+  servicekeys:
+  - name: primarykey
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the serviceid Yes
description Short description of the serviceid No
servicekeys Collection of servicekeys that should be created for the parent serviceid No
servicekeys.name Name of the servicekey Yes

Microsoft Azure🔗

For Microsoft Azure, the following object type is supported:

Azure🔗

Defines an infrastructure configuration onto which OpenShift will be provisioned.

azure:
+- name: sample
+  resource_group:
+    name: sample
+    location: westeurope
+  vnet:
+    name: vnet
+    address_space: 10.0.0.0/22
+  control_plane:
+    subnet:
+      name: control-plane-subnet
+      address_prefixes: 10.0.0.0/23
+  compute:
+    subnet:
+      name: compute-subnet
+      address_prefixes: 10.0.2.0/23
+

Properties explanation🔗

Property Description Mandatory Allowed values
name Name of the azure definition object, will be referenced by openshift Yes
resource_group Resource group attributes Yes
resource_group.name Name of the resource group (will be provisioned) Yes unique value, it must not exist
resource_group.location Azure location Yes to pick a different location, run: az account list-locations -o table
vnet Virtual network attributes Yes
vnet.name Name of the virtual network Yes
vnet.address_space Address space of the virtual network Yes
control_plane Control plane (master) nodes attributes Yes
control_plane.subnet Control plane nodes subnet attributes Yes
control_plane.subnet.name Name of the control plane nodes subnet Yes
control_plane.subnet.address_prefixes Address prefixes of the control plane nodes subnet (comma-separated if more than one) Yes
control_plane.vm Control plane nodes virtual machine attributes Yes
control_plane.vm.size Virtual machine size (aka flavour) of the control plane nodes Yes Standard_D8s_v3, Standard_D16s_v3, Standard_D32s_v3
compute Compute (worker) nodes attributes Yes
compute.subnet Compute nodes subnet attributes Yes
compute.subnet.name Name of the compute nodes subnet Yes
compute.subnet.address_prefixes Address prefixes of the compute nodes subnet (comma-separated if more than one) Yes
compute.vm Compute nodes virtual machine attributes Yes
compute.vm.size Virtual machine size (aka flavour) of the compute nodes Yes See the full list of supported virtual machine sizes
compute.vm.disk_size_gb Disk size in GBs of the compute nodes virtual machine Yes minimum value is 128
compute.vm.count Number of compute nodes virtual machines Yes minimum value is 3
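
The sample azure object above omits the mandatory vm attributes. The following sketch shows where the control_plane.vm and compute.vm sections fit; the sizes, disk size and count are illustrative values within the documented limits:

azure:
+- name: sample
+  resource_group:
+    name: sample
+    location: westeurope
+  vnet:
+    name: vnet
+    address_space: 10.0.0.0/22
+  control_plane:
+    subnet:
+      name: control-plane-subnet
+      address_prefixes: 10.0.0.0/23
+    vm:
+      size: Standard_D8s_v3
+  compute:
+    subnet:
+      name: compute-subnet
+      address_prefixes: 10.0.2.0/23
+    vm:
+      size: Standard_D16s_v3
+      disk_size_gb: 128
+      count: 3
+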

Amazon🔗

For Amazon AWS, the following object types are supported:

AWS EFS Server nfs_server🔗

Defines a new Elastic File System (EFS) service that is connected to the OpenShift cluster within the same VPC. The file storage will be used as the back-end for the efs-nfs-client OpenShift storage class.

nfs_server:
+- name: sample-elastic
+  infrastructure:
+    aws_region: eu-west-1
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the EFS File System service to be created Yes
infrastructure Infrastructure attributes Yes
infrastructure.aws_region AWS region where the storage will be provisioned Yes

vSphere🔗

For vSphere, the following object types are supported:

vSphere vsphere🔗

Defines the vSphere vCenter onto which OpenShift will be provisioned.

vsphere:
+- name: sample
+  vcenter: 10.99.92.13
+  datacenter: Datacenter1
+  datastore: Datastore1
+  cluster: Cluster1
+  network: "VM Network"
+  folder: /Datacenter1/vm/sample
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the vSphere definition, will be referenced by openshift Yes
vcenter Host or IP address of the vSphere vCenter server Yes
datacenter vSphere Data Center to be used for the virtual machines Yes
datastore vSphere Datastore to be used for the virtual machines Yes
cluster vSphere cluster to be used for the virtual machines Yes
resource_pool vSphere resource pool No
network vSphere network to be used for the virtual machines Yes
folder Fully qualified folder name into which the OpenShift cluster will be placed; the folder must exist Yes

vSphere vm_definition🔗

Defines the virtual machine properties to be used for the control-plane nodes and compute nodes.

vm_definition:
+- name: control-plane
+  vcpu: 8
+  memory_mb: 32768
+  boot_disk_size_gb: 100
+- name: compute
+  vcpu: 16
+  memory_mb: 65536
+  boot_disk_size_gb: 200
+  # Optional overrides for vsphere properties
+  # datastore: Datastore1
+  # network: "VM Network"
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the VM definition, will be referenced by openshift Yes
vcpu Number of virtual CPUs to be assigned to the VMs Yes
memory_mb Amount of memory in MiB of the virtual machines Yes
boot_disk_size_gb Size of the virtual machine boot disk in GiB Yes
datastore vSphere Datastore to be used for the virtual machines, overrides vsphere.datastore No
network vSphere network to be used for the virtual machines, overrides vsphere.network No

vSphere nfs_server🔗

Defines an existing NFS server that will be used for the OpenShift NFS storage class.

nfs_server:
+- name: sample-nfs
+  infrastructure:
+    host_ip: 10.99.92.31
+    storage_folder: /data/nfs
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the NFS server Yes
infrastructure Infrastructure attributes Yes
infrastructure.host_ip Host or IP address of the NFS server Yes
infrastructure.storage_folder Folder that holds the data; this folder will be mounted by the NFS storage class Yes
\ No newline at end of file diff --git a/30-reference/configuration/logging-auditing/index.html b/30-reference/configuration/logging-auditing/index.html new file mode 100644 index 000000000..9658deabd --- /dev/null +++ b/30-reference/configuration/logging-auditing/index.html @@ -0,0 +1,42 @@ + Logging and auditing - Cloud Pak Deployer
Skip to content

Logging and auditing for Cloud Paks🔗

For logging and auditing of Cloud Pak for Data, we make use of the OpenShift logging framework, which delivers a lot of flexibility in capturing logs from applications, storing them in an ElasticSearch datastore in the cluster (currently not supported by the deployer), or forwarding the log entries to external log collectors such as ElasticSearch, Fluentd, Loki and others.

Logging overview

OpenShift logging captures 3 types of log entries from workloads running on the cluster:

  • infrastructure - logs generated by OpenShift processes
  • audit - audit logs generated by applications as well as OpenShift
  • application - all other applications on the cluster

Logging configuration - openshift_logging🔗

Defines how OpenShift forwards the logs to external log collectors. Currently, the following log collector types are supported:

  • loki

When OpenShift logging is activated via the openshift_logging object, all 3 logging types are activated automatically. You can specify logging_output items to forward log records to the log collector of your choice. In the below example, the application logs are forwarded to a Loki server at https://loki-application.sample.com and the audit logs to https://loki-audit.sample.com; both use the same certificate to connect:

openshift_logging:
+- openshift_cluster_name: pluto-01
+  configure_es_log_store: False
+  cluster_wide_logging:
+  - input: application
+    logging_name: loki-application
+  - input: infrastructure
+    logging_name: loki-application
+  - input: audit
+    logging_name: loki-audit
+  logging_output:
+  - name: loki-application
+    type: loki
+    url: https://loki-application.sample.com
+    certificates:
+      cert: pluto-01-loki-cert
+      key: pluto-01-loki-key
+      ca: pluto-01-loki-ca
+  - name: loki-audit
+    type: loki
+    url: https://loki-audit.sample.com
+    certificates:
+      cert: pluto-01-loki-cert
+      key: pluto-01-loki-key
+      ca: pluto-01-loki-ca
+

Cloud Pak for Data and Foundational Services application logs are automatically picked up and forwarded to the loki-application logging destination and no additional configuration is needed.

Property explanation🔗

Property Description Mandatory Allowed values
openshift_cluster_name Name of the OpenShift cluster to configure the logging for Yes
configure_es_log_store Must internal ElasticSearch log store and Kibana be provisioned? (default False) No True, False (default)
cluster_wide_logging Defines which classes of log records will be sent to the log collectors No
cluster_wide_logging.input Specifies the class of OpenShift log records to forward Yes application, infrastructure, audit
cluster_wide_logging.logging_name Specifies the logging_output to send the records to. If not specified, records will be sent to the internal log store only No
cluster_wide_logging.labels Specify your own labels to be added to the log records. Every logging input/output combination can have its own labels No
logging_output Defines the log collectors. If configure_es_log_store is True, output will always be sent to the internal ES log store No
logging_output.name Log collector name, referenced by cluster_wide_logging or cp4d_audit Yes
logging_output.type Type of the log collector, currently only loki is possible Yes loki
logging_output.url URL of the log collector; this URL must be reachable from within the cluster Yes
logging_output.certificates Defines the vault secrets that hold the certificate elements Yes, if url is https
logging_output.certificates.cert Public certificate to connect to the URL Yes
logging_output.certificates.key Private key to connect to the URL Yes
logging_output.certificates.ca Certificate Authority bundle to connect to the URL Yes

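The certificate elements referenced above must exist in the vault before the deployer runs. A minimal sketch of creating them with the deployer's vault command, assuming the PEM files are available on local disk (the file names are hypothetical):

./cp-deploy.sh vault set \
+  -vs pluto-01-loki-cert \
+  -vsf /tmp/loki-cert.pem
+./cp-deploy.sh vault set \
+  -vs pluto-01-loki-key \
+  -vsf /tmp/loki-key.pem
+./cp-deploy.sh vault set \
+  -vs pluto-01-loki-ca \
+  -vsf /tmp/loki-ca.pem
+
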
If you also want to activate audit logging for Cloud Pak for Data, you can do this by adding a cp4d_audit_config object to your configuration. In the below example, the Cloud Pak for Data audit logger is configured to write log records to the standard output (stdout) of the pods, after which they are forwarded to the loki-audit logging destination by a ClusterLogForwarder custom resource. Optionally, labels can be specified, which are added to the ClusterLogForwarder custom resource pipeline entry.

cp4d_audit_config:
+- project: cpd
+  audit_replicas: 2
+  audit_output:
+  - type: openshift-logging
+    logging_name: loki-audit
+    labels:
+      cluster_name: "{{ env_id }}"    
+

Info

Because audit log entries are written to the standard output, they will also be picked up by the generic application log forwarder and will therefore also appear in the application logging destination.

Cloud Pak for Data audit configuration🔗

IBM Cloud Pak for Data has a centralized auditing component for base platform and services auditable events. Audit events include login and logout to the platform, creation and deletion of connections and many more. Services that support auditing are documented here: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=data-services-that-support-audit-logging

The Cloud Pak Deployer simplifies the recording of audit log entries by means of the OpenShift logging framework, which can in turn be configured to forward entries to various log collectors such as Fluentd, Loki and ElasticSearch.

Audit configuration - cp4d_audit_config🔗

A cp4d_audit_config entry defines the audit configuration for a Cloud Pak for Data instance (OpenShift project). The main configuration items are the number of replicas and the output. Currently only one output type is supported: openshift-logging, which allows the OpenShift logging framework to pick up audit entries and forward them to the designated collectors.

When a cp4d_audit_config entry exists for a certain cp4d project, the zen-audit-config ConfigMap is updated and then the audit logging deployment is restarted. If no configuration changes have been made, no restart is done.

Additionally, for the audit_output entries, the OpenShift logging ClusterLogForwarder instance is updated to forward audit entries to the designated logging output. In the example below, auditing is configured with 2 replicas, and an input and pipeline are added to the ClusterLogForwarder instance so that output is sent to the matching channel defined in openshift_logging.logging_output.

cp4d_audit_config:
+- project: cpd
+  audit_replicas: 2
+  audit_output:
+  - type: openshift-logging
+    logging_name: loki-audit
+    labels:
+      cluster_name: "{{ env_id }}"
+

Property explanation🔗

Property Description Mandatory Allowed values
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
audit_replicas Number of replicas for the Cloud Pak for Data audit logger. No (default 1)
audit_output Defines where the audit logs should be written to Yes
audit_output.type Type of auditing output, defines where audit logging entries will be written Yes openshift-logging
audit_output.logging_name Name of the logging_output entry in the openshift_logging object. This logging_output entry must exist. Yes
audit_output.labels Optional list of labels set to the ClusterLogForwarder custom resource pipeline No
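
To verify the resulting audit configuration, you could inspect the ConfigMap that the deployer updates (a sketch, assuming the cp4d project is cpd as in the example above):

oc get configmap zen-audit-config -n cpd -o yaml
+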
\ No newline at end of file diff --git a/30-reference/configuration/monitoring/index.html b/30-reference/configuration/monitoring/index.html new file mode 100644 index 000000000..9a2178b69 --- /dev/null +++ b/30-reference/configuration/monitoring/index.html @@ -0,0 +1,75 @@ + Monitoring - Cloud Pak Deployer
Skip to content

Monitoring OpenShift and Cloud Paks🔗

For monitoring of Cloud Pak for Data we make use of the OpenShift Monitoring framework. The observations generated by Cloud Pak for Data are pushed to the OpenShift Monitoring Prometheus endpoint. This will allow (external) monitoring tools to combine the observations from the OpenShift platform and Cloud Pak for Data from a single source.

Monitoring overview

OpenShift monitoring🔗

To deploy Cloud Pak for Data monitors, it is mandatory to also enable OpenShift monitoring, which is activated via the openshift_monitoring object.

openshift_monitoring:
+- openshift_cluster_name: pluto-01
+  user_workload: enabled
+  remote_rewrite_url: http://www.example.com:1234/receive
+  retention_period: 15d
+  pvc_storage_class: ibmc-vpc-block-retain-general-purpose
+  pvc_storage_size_gb: 100
+  grafana_operator: enabled
+  grafana_project: grafana
+  labels:
+    cluster_name: pluto-01
+
Property Description Mandatory Allowed values
user_workload Allow pushing Prometheus metrics to OpenShift (must be set to True for monitoring to work) Yes True, False
pvc_storage_class Storage class to keep persistent monitoring data No Valid storage class
pvc_storage_size_gb Size of the PVC holding the monitoring data Yes if pvc_storage_class is set
remote_rewrite_url Set this value to redirect metrics to a remote Prometheus No
retention_period Number of seconds (s), minutes (m), hours (h), days (d), weeks (w), years (y) to retain monitoring data. Default is 15d Yes
labels Additional labels to be added to the metrics No
grafana_operator Enable Grafana community operator? No False (default), True
grafana_project If enabled, project in which to enable the Grafana operator Yes, if grafana_operator enabled

Note Labels must be specified as a YAML record where each line is a key-value. The labels will be added to the prometheus key of the user-workload-monitoring-config ConfigMap and to the prometheusK8S key of the cluster-monitoring-config ConfigMap.

Note When the Grafana operator is enabled, you can build your own Grafana dashboards based on the metrics collected by Prometheus. When installed, Grafana creates a local admin user with user name root and password secret. Grafana can be accessed using the OpenShift route that is created in the project specified by grafana_project.
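
A quick way to find the Grafana URL after deployment (a sketch, assuming grafana_project is set to grafana as in the sample above):

oc get route -n grafana
+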

Cloud Pak for Data monitoring🔗

The observations of Cloud Pak for Data are generated using the zen-watchdog component, which is part of the cpd_platform cartridge and therefore available on each instance of Cloud Pak for Data. Part of the zen-watchdog installation is a set of monitors which focus on the technical deployment of Cloud Pak for Data (e.g. running pods and bound Persistent Volume Claims (pvcs)).

Additional monitors which focus more on the operational usage of Cloud Pak for Data can be deployed as well. These monitors are maintained in a separate Git repository and can be accessed at IBM/cp4d-monitors. Using the Cloud Pak Deployer, monitors can be deployed that use the Cloud Pak for Data zen-watchdog monitor framework. This allows adding custom monitors to the zen-watchdog, making them visible in the Cloud Pak for Data metrics.

Cloud Pak for Data Monitors Overview

Using the Cloud Pak Deployer, the cp4d_monitors capability implements the following:

  • Create the Cloud Pak for Data ServiceMonitor endpoint to forward zen-watchdog monitor events to OpenShift Cluster monitoring
  • Create source repository auth secrets (optional, if pulling monitors from a secure repo)
  • Create target container registry auth secrets (optional, if pushing monitor images to a secure container registry)
  • Deploy custom monitors, which will be added to the zen-watchdog monitor framework

For custom monitors to be deployed, it is mandatory to enable the OpenShift user-workload monitoring, as specified in OpenShift monitoring.

The Cloud Pak for Data monitors are specified in a cp4d_monitors definition.

cp4d_monitors:
+- name: cp4d-monitor-set-1
+  cp4d_instance: zen-45
+  openshift_cluster_name: pluto-01
+  default_monitor_source_repo: https://github.com/IBM/cp4d-monitors
+  #default_monitor_source_token_secret: monitors_source_repo_secret
+  #default_monitor_target_cr: de.icr.io/monitorrepo  
+  #default_monitor_target_cr_user_secret: monitors_target_cr_username
+  #default_monitor_target_cr_password_secret: monitors_target_cr_password
+  # List of monitors
+  monitors:
+  - name: cp4dplatformcognosconnectionsinfo
+    context: cp4d-cognos-connections-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformcognostaskinfo
+    context: cp4d-cognos-task-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformglobalconnections
+    context: cp4d-platform-global-connections
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudiojobinfo
+    context: cp4d-watsonstudio-job-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudiojobscheduleinfo
+    context: cp4d-watsonstudio-job-schedule-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudioruntimeusage
+    context: cp4d-watsonstudio-runtime-usage
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonknowledgecataloginfo
+    context: cp4d-wkc-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwmldeploymentspaceinfo
+    context: cp4d-wml-deployment-space-info
+    label: latest  
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwmldeploymentspacejobinfo
+    context: cp4d-wml-deployment-space-job-info
+    label: latest
+    schedule: "*/15 * * * *"
+

Each cp4d_monitors entry contains a set of default settings, which are applicable to the monitors list. These defaults can be overwritten per monitor if needed.

Property Description Mandatory Allowed values
name The name of the monitor set Yes lowercase RFC 1123 subdomain (1)
cp4d_instance The OpenShift project (namespace) on which the Cloud Pak for Data instance resides Yes
openshift_cluster_name The Openshift cluster name Yes
default_monitor_source_repo The default repository location of all monitors located in the monitors section No
default_monitor_source_token_secret The default repo access token secret name, must be available in the vault No
default_monitor_target_cr The default target container registry (cr) for the monitor image to be pushed. When omitted, the OpenShift internal registry is used No
default_monitor_target_cr_user_secret The default target container registry user name secret name used to push the monitor image. Must be available in the vault No
default_monitor_target_cr_password_secret The default target container registry password secret name used to push the monitor image. Must be available in the vault No
monitors List of monitors Yes

Per monitors entry, the following settings are specified:

Property Description Mandatory Allowed values
name The name of the monitor entry Yes lowercase RFC 1123 subdomain (1)
monitor_source_repo Overrides default_monitor_source_repo for this single monitor No
monitor_source_token_secret Overrides default_monitor_source_token_secret for this single monitor No
monitor_target_cr Overrides default_monitor_target_cr for this single monitor No
monitor_target_cr_user_secret Overrides default_monitor_target_cr_user_secret for this single monitor No
monitor_target_cr_password_secret Overrides default_monitor_target_cr_password_secret for this single monitor No
context Sets the context of the monitor in the source repo (sub-folder name) Yes
label Sets the label of the pushed image, defaults to latest No
schedule Sets the schedule of the generated Cloud Pak for Data monitor cronjob Yes

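Each default can also be overridden for an individual monitor. A minimal sketch of a monitors entry that pulls from a different repository and runs every 30 minutes (the monitor name, repository URL and context are hypothetical):

monitors:
+- name: mycustommonitor
+  monitor_source_repo: https://github.com/example-org/my-monitors
+  context: my-custom-monitor
+  label: latest
+  schedule: "*/30 * * * *"
+
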
Each monitor has a set of event_types, which contain the observations generated by the monitor. These event types are retrieved directly from the GitHub repository; each context (sub-folder) is expected to contain a file called event_types.yml. During deployment of the monitor this file is retrieved and used to populate the monitor's event_types.

If the Deployer runs and the monitor is already deployed, the following process is used:

  • The build process is restarted to ensure the latest image of the monitor is used
  • A comparison is made between the monitor's current configuration and the configuration created by the Deployer. If these are identical, the monitor's configuration is left as-is; if they differ, the monitor's configuration is rebuilt and the monitor is re-deployed.
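
To follow the monitor build process, you could watch the builds in the Cloud Pak for Data project (a sketch, assuming the instance runs in project zen-45 as in the sample above):

oc get builds -n zen-45
+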

Example monitor - global platform connections🔗

This monitor counts the number of Global Platform connections and, for each Global Platform Connection, executes a test to check whether the connection can still be established.

Generated metrics🔗

Once the monitor is deployed, the following metrics are available in IBM Cloud Pak for Data.

Overview Events and Alerts

On the Platform Management Events page the following entries are added:

  • Cloud Pak for Data Global Connections Count
  • Global Connection - <Global Connection Name> (for each connection)

Using the IBM Cloud Pak for Data Prometheus endpoint🔗

https://<CP4D-BASE-URL>/zen/metrics
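
To inspect the metrics, you could query the endpoint with a Cloud Pak for Data bearer token (a sketch; how the token is acquired depends on your environment):

curl -k -H "Authorization: Bearer $CP4D_TOKEN" \
+  https://<CP4D-BASE-URL>/zen/metrics
+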

The endpoint generates 2 types of metrics:

  • global_connections_count
    Provides the number of available connections
  • global_connection_valid
    For each connection, a test action is performed
    • 1 (Test Connection success)
    • 0 (Test connection failed)
# HELP global_connections_count 
+# TYPE global_connections_count gauge
+global_connections_count{event_type="global_connections_count",monitor_type="cp4d_platform_global_connections",reference="Cloud Pak for Data Global Connections Count"} 2
+
+# HELP global_connection_valid 
+# TYPE global_connection_valid gauge
+global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos MetaStore Connection"} 1
+global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos non-shared"} 0
+

Zen Watchdog metrics (used in platform management events):

  • watchdog_cp4d_platform_global_connections_global_connections_count
  • watchdog_cp4d_platform_global_connections_global_connection_valid (for each connection)

Zen Watchdog metrics can have the following values:

  • 2 (info)
  • 1 (warning)
  • 0 (critical)

# HELP watchdog_cp4d_platform_global_connections_global_connection_valid 
+# TYPE watchdog_cp4d_platform_global_connections_global_connection_valid gauge
+watchdog_cp4d_platform_global_connections_global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos MetaStore Connection"} 2
+watchdog_cp4d_platform_global_connections_global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos non-shared"} 1
+
+# HELP watchdog_cp4d_platform_global_connections_global_connections_count 
+# TYPE watchdog_cp4d_platform_global_connections_global_connections_count gauge
+watchdog_cp4d_platform_global_connections_global_connections_count{event_type="global_connections_count",monitor_type="cp4d_platform_global_connections",reference="Cloud Pak for Data Global Connections Count"} 2
+
\ No newline at end of file diff --git a/30-reference/configuration/openshift/index.html b/30-reference/configuration/openshift/index.html new file mode 100644 index 000000000..6e3581aa8 --- /dev/null +++ b/30-reference/configuration/openshift/index.html @@ -0,0 +1,235 @@ + OpenShift - Cloud Pak Deployer
Skip to content

OpenShift cluster(s)🔗

You can configure one or more OpenShift clusters that will be laid down on the specified infrastructure, or which already exist.

Depending on the cloud platform on which the OpenShift cluster will be provisioned, different installation methods apply. For IBM Cloud, Terraform is used, whereas for vSphere the IPI installer is used. On AWS (ROSA), the rosa CLI is used to create and modify ROSA clusters. Each of the platforms has slightly different properties for the openshift objects.

openshift🔗

For OpenShift, there are 5 flavours:

  • Existing OpenShift
  • OpenShift on IBM Cloud (ROKS)
  • OpenShift on vSphere
  • OpenShift on AWS (self-managed or ROSA)
  • OpenShift on Microsoft Azure (ARO)

Every OpenShift cluster definition consists of a few mandatory properties that control which version of OpenShift is installed, the number and flavour of control plane and compute nodes, and the underlying infrastructure, dependent on the cloud platform on which it is provisioned. Storage is a mandatory element for every openshift definition. For a list of supported storage types per cloud platform, refer to Supported storage types.

Additionally, one can configure Upstream DNS Servers and OpenShift logging.

The Multicloud Object Gateway (MCG) supports access to S3-compatible object storage via an underpinning block/file storage class, through the Noobaa operator. Some Cloud Pak for Data services such as Watson Assistant need object storage to run. MCG does not need to be installed if OpenShift Data Foundation (fka OCS) is also installed, as the ODF operator already includes Noobaa.

Existing OpenShift🔗

When using the Cloud Pak Deployer on an existing OpenShift cluster, the scripts assume that the cluster is already operational and that any storage classes have been pre-created. The deployer accesses the cluster through a vault secret with the kubeconfig information; the name of the secret is <name>-kubeconfig.
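
For the sample cluster below, a minimal sketch of storing the kubeconfig in the vault (the file path is an assumption):

./cp-deploy.sh vault set \
+  -vs sample-kubeconfig \
+  -vsf ~/.kube/config
+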

openshift:
+- name: sample
+  ocp_version: 4.8
+  cluster_name: sample
+  domain_name: example.com
+  cloud_native_toolkit: False
+  oadp: False
+  infrastructure:
+    type: standard
+    processor_architecture: amd64
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+  gpu:
+    install: auto
+  openshift_ai:
+    install: auto
+    channel: auto
+  mcg:
+    install: True
+    storage_type: storage-class
+    storage_class: managed-nfs-storage
+  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+    # ocp_storage_class_file: managed-nfs-storage
+    # ocp_storage_class_block: managed-nfs-storage
+

Property explanation for existing OpenShift clusters🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version of the cluster, used to download the client. If you want to install 4.10, specify "4.10" Yes >= 4.6
cluster_name Name of the cluster (part of the FQDN) Yes
domain_name Domain name of the cluster (part of the FQDN) Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure.type Infrastructure OpenShift is deployed on, see below for additional explanation No detect (default), standard, aws-self-managed, aws-rosa, azure-aro, vsphere
infrastructure.processor_architecture Architecture of the processor that the OpenShift cluster is deployed on No amd64 (default), ppc64le, s390x
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
infrastructure.type - Type of infrastructure🔗

When deploying on existing OpenShift, the underlying infrastructure can pose some restrictions on the capabilities available. For example, Red Hat OpenShift on IBM Cloud (aka ROKS) does not include the Machine Config Operator, and ROSA on AWS does not allow setting labels for Machine Config Pools. This means that node settings required for Cloud Pak for Data must be applied in a non-standard manner.

The following values are allowed for infrastructure.type:

  • detect (default): The deployer will attempt to detect the underlying cloud infrastructure. This is done by retrieving the existing storage classes and then inferring the cloud type.
  • standard: The deployer will assume a standard OpenShift cluster with no further restrictions. This is the fallback value for detect if the underlying infra cannot be detected.
  • aws-self-managed: A self-managed OpenShift cluster on AWS. No restrictions.
  • aws-rosa: Managed Red Hat OpenShift on AWS. Some restrictions with regards to Machine Config Pools apply.
  • azure-aro: Managed Red Hat OpenShift on Azure. No known restrictions.
  • vsphere: OpenShift on vSphere. No known restrictions.
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to use in the OpenShift cluster Yes nfs, ocs, aws-elastic, auto, custom
ocp_storage_class_file OpenShift storage class to use for file storage if different from default for storage_type Yes if storage_type is custom
ocp_storage_class_block OpenShift storage class to use for block storage if different from default for storage_type Yes if storage_type is custom

Info

The custom storage_type can be used in case you want to use a non-standard storage class(es). In this case the storage class(es) must be already configured on the OCP cluster and set in the respective ocp_storage_class_file and ocp_storage_class_block variables
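
A sketch of such a custom storage definition, using hypothetical pre-existing storage class names:

openshift_storage:
+- storage_name: custom-storage
+  storage_type: custom
+  ocp_storage_class_file: example-file-class
+  ocp_storage_class_block: example-block-class
+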

Info

The auto storage_type will let the deployer automatically detect the storage type based on the existing storage classes in the OpenShift cluster.

Supported storage types🔗

An openshift definition always includes the type(s) of storage that it will provide. When the OpenShift cluster is provisioned by the deployer, the necessary infrastructure and storage class(es) are also configured. In case an existing OpenShift cluster is referenced by the configuration, the storage classes are expected to exist already.

The table below indicates which storage classes are supported by the Cloud Pak Deployer per cloud infrastructure.

Warning

The ability to provision or use certain storage types does not imply support by the Cloud Paks or by OpenShift itself. There are several restrictions for production use of OpenShift Data Foundation, for example on ROSA.

Cloud Provider NFS Storage OCS/ODF Storage Portworx Elastic Custom (2)
ibm-cloud Yes Yes Yes No Yes
vsphere Yes (1) Yes No No Yes
aws No Yes No Yes (3) Yes
azure No Yes No No Yes
existing-ocp Yes Yes No Yes Yes
  • (1) An existing NFS server can be specified so that the deployer configures the managed-nfs-storage storage class. The deployer will not provision or change the NFS server itself.
  • (2) If you specify a custom storage type, you must specify the storage class to be used for block (RWO) and file (RWX) storage.
  • (3) Specifying this storage type means that Elastic File Storage (EFS) and Elastic Block Storage (EBS) storage classes will be used. For EFS, an nfs_server object is required to define the "file server" storage on AWS.

OpenShift on IBM Cloud (ROKS)🔗

VPC-based OpenShift cluster on IBM Cloud, using the Red Hat OpenShift Kubernetes Services (ROKS).

openshift:
+- name: sample
+  managed: True
+  ocp_version: 4.8
+  compute_flavour: bx2.16x64
+  secondary_storage: 900gb.10iops-tier
+  compute_nodes: 3
+  cloud_native_toolkit: False
+  oadp: False
+  infrastructure:
+    type: vpc
+    vpc_name: sample
+    subnets:
+    - sample-subnet-zone-1
+    - sample-subnet-zone-2
+    - sample-subnet-zone-3
+    cos_name: sample-cos
+    private_only: False
+    deny_node_ports: False
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+  mcg:
+    install: True
+    storage_type: storage-class
+    storage_class: managed-nfs-storage
+  openshift_ai:
+    install: auto
+    channel: auto
+  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+    nfs_server_name: sample-nfs
+  - storage_name: ocs-storage
+    storage_type: ocs
+    storage_flavour: bx2.16x64
+    secondary_storage: 900gb.10iops-tier
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 500
+    ocs_version: 4.8.0
+  - storage_name: pwx-storage
+    storage_type: pwx 
+    pwx_etcd_location: {{ ibm_cloud_region }}
+    pwx_storage_size_gb: 200 
+    pwx_storage_iops: 10 
+    pwx_storage_profile: "10iops-tier"
+    stork_version: 2.6.2
+    portworx_version: 2.7.2
+

Property explanation OpenShift clusters on IBM Cloud (ROKS)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
managed Is the ROKS cluster managed by this deployer? See note below. No True (default), False
ocp_version ROKS Kubernetes version. If you want to install 4.10, specify "4.10" Yes >= 4.6
compute_flavour Type of compute node to be used Yes Node flavours
secondary_storage Additional storage to be added to the compute servers No 900gb.10iops-tier, …
compute_nodes Total number of compute nodes. This must be a multiple of the number of subnets Yes Integer
resource_group IBM Cloud resource group for the ROKS cluster Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure.type Type of infrastructure to provision ROKS cluster on No vpc
infrastructure.vpc_name Name of the VPC if type is vpc Yes, inferred from vpc Existing VPC
infrastructure.subnets List of subnets within the VPC to use. Either 1 or 3 subnets must be specified Yes Existing subnet
infrastructure.cos_name Reference to the cos object created for this cluster Yes Existing cos object
infrastructure.private_only If true, it indicates that the ROKS cluster must be provisioned without public endpoints No True, False (default)
infrastructure.deny_node_ports If true, the Allow ICMP, TCP and UDP rules for the security group associated with the ROKS cluster are removed if present. If false, the Allow ICMP, TCP and UDP rules are added if not present. No True, False (default)
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

The managed attribute indicates whether the ROKS cluster is managed by the Cloud Pak Deployer. If set to False, the deployer will not provision the ROKS cluster but expects it to already be available in the VPC. You can still use the deployer to create the VPC, the subnets, NFS servers and other infrastructure, but first run it without an openshift element. Once the VPC has been created, manually create an OpenShift cluster in the VPC and then add the openshift element with managed set to False. If you intend to use OpenShift Container Storage, you must also activate the add-on and create the OcsCluster custom resource.

Warning

If you set infrastructure.private_only to True, the server from which you run the deployer must be able to access the ROKS cluster via its private endpoint, either by establishing a VPN to the cluster's VPC, or by making sure the deployer runs on a server that has a connection with the ROKS VPC via a transit gateway.

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes nfs, ocs or pwx
storage_flavour Type of compute node to be used for the storage nodes Yes Node flavours, default is bx2.16x64
secondary_storage Additional storage to be added to the storage server No 900gb.10iops-tier, …
nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_version Version of OCS (ODF) to be deployed. If left empty, the latest version will be deployed No >= 4.6
pwx_etcd_location Location where the etcd service will be deployed, typically the same region as the ROKS cluster Yes if storage_type is pwx
pwx_storage_size_gb Size of the Portworx storage that will be provisioned Yes if storage_type is pwx
pwx_storage_iops IOPS for the storage volumes that will be provisioned Yes if storage_type is pwx
pwx_storage_profile IOPS storage tier the storage volumes that will be provisioned Yes if storage_type is pwx
stork_version Version of the Portworx storage orchestration layer for Kubernetes Yes if storage_type is pwx
portworx_version Version of the Portworx storage provider Yes if storage_type is pwx

Warning

When deploying a ROKS cluster with OpenShift Data Foundation (fka OpenShift Container Storage/OCS), the minimum version of OpenShift is 4.7.

OpenShift on vSphere🔗

openshift:
+- name: sample
+  domain_name: example.com
+  vsphere_name: sample
+  ocp_version: 4.8
+  control_plane_nodes: 3
+  control_plane_vm_definition: control-plane
+  compute_nodes: 3
+  compute_vm_definition: compute
+  api_vip: 10.99.92.51
+  ingress_vip: 10.99.92.52
+  cloud_native_toolkit: False
+  oadp: False
+  infrastructure:
+    openshift_cluster_network_cidr: 10.128.0.0/14
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+  gpu:
+    install: auto
+  openshift_ai:
+    install: auto
+    channel: auto
+  mcg:
+    install: True
+    storage_type: storage-class
+    storage_class: thin
+  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+    nfs_server_name: sample-nfs
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 512
+    ocs_dynamic_storage_class: thin
+

Property explanation OpenShift clusters on vSphere🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
domain_name Domain name of the cluster, this will also depict the route to the API and ingress endpoints Yes
ocp_version OpenShift version. If you want to install 4.10, specify "4.10" Yes >= 4.6
control_plane_nodes Total number of control plane nodes, typically 3 Yes Integer
control_plane_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the control plane nodes Yes Existing vm_definition
compute_nodes Total number of compute nodes Yes Integer
compute_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the compute nodes Yes Existing vm_definition
api_vip Virtual IP address that the installer will provision for the API server Yes
ingress_vip Virtual IP address that the installer will provision for the ingress server Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties No
infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes nfs or ocs
nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No >= 4.6
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. On vSphere clusters, thin is usually available after OpenShift installation Yes if storage_type is ocs
storage_vm_definition VM Definition that defines the virtual machine attributes for the OCS nodes Yes if storage_type is ocs

OpenShift on AWS - self-managed🔗

nfs_server:
+- name: sample-elastic
+  infrastructure:
+    aws_region: eu-west-1
+
+openshift:
+- name: sample
+  ocp_version: 4.10.34
+  domain_name: cp-deployer.eu
+  compute_flavour: m5.4xlarge
+  compute_nodes: 3
+  cloud_native_toolkit: False
+  oadp: False
+  infrastructure:
+    type: self-managed
+    aws_region: eu-central-1
+    multi_zone: True
+    credentials_mode: Manual
+    private_only: True
+    machine_cidr: 10.2.1.0/24
+    openshift_cluster_network_cidr: 10.128.0.0/14
+    subnet_ids:
+    - subnet-06bbef28f585a0dd3
+    - subnet-0ea5ac344c0fbadf5
+    hosted_zone_id: Z08291873MCIC4TMIK4UP
+    ami_id: ami-09249dd86b1933dd5
+  mcg:
+    install: True
+    storage_type: storage-class
+    storage_class: gp3-csi
+  openshift_storage:
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 512
+  - storage_name: sample-elastic
+    storage_type: aws-elastic
+

Property explanation OpenShift clusters on AWS (self-managed)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version, specified as x.y.z Yes >= 4.6
domain_name Base domain name of the cluster. Together with the name, this will be the domain of the OpenShift cluster. Yes
control_plane_flavour Flavour of the AWS servers used for the control plane nodes. m5.xlarge is a node with 4 cores and 16 GB of memory Yes
control_plane_nodes Total number of control plane nodes Yes Integer
compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes
compute_nodes Total number of compute nodes Yes Integer
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties Yes
infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed
infrastructure.aws_region Region of AWS where cluster is deployed. Yes
infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False
infrastructure.credentials_mode Security requirement of the Cloud Credential Operator (CCO) when doing installations with temporary AWS security credentials. Default (omit) is automatically handled by CCO. No Manual, Mint
infrastructure.machine_cidr Machine CIDR. This value will be used to create the VPC and its subnets. In case of an existing VPC, specify the CIDR of that VPC. No CIDR
infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR
infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr and hosted_zone_id. No Existing subnet IDs
infrastructure.private_only Indicates whether the OpenShift cluster must be provisioned without access from the internet. Default is True No True, False
infrastructure.hosted_zone_id ID of the AWS Route 53 hosted zone that controls the DNS entries. If not specified, the OpenShift installer will create a hosted zone for the specified domain_name. This attribute is only needed if you create the OpenShift cluster in an existing VPC No
infrastructure.control_plane_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the control plane nodes during cluster creation No
infrastructure.compute_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the compute nodes during cluster creation No
infrastructure.ami_id ID of the AWS AMI to boot all images No
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example:

    machine_cidr: 10.243.0.0/24
+    subnet_ids:
+    - subnet-0e63f662bb1842e8a
+    - subnet-0673351cd49877269
+    - subnet-00b007a7c2677cdbc
+    - subnet-02b676f92c83f4422
+    - subnet-0f1b03a02973508ed
+    - subnet-027ca7cc695ce8515
+

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No

OpenShift on AWS - ROSA🔗

nfs_server:
+- name: sample-elastic
+  infrastructure:
+    aws_region: eu-west-1
+
+openshift:
+- name: sample
+  ocp_version: 4.10.34
+  compute_flavour: m5.4xlarge
+  compute_nodes: 3
+  cloud_native_toolkit: False
+  oadp: False
+  infrastructure:
+    type: rosa
+    aws_region: eu-central-1
+    multi_zone: True
+    use_sts: False
+    credentials_mode: Manual
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+  gpu:
+    install: auto
+  openshift_ai:
+    install: auto
+    channel: auto
+  mcg:
+    install: True
+    storage_type: storage-class
+    storage_class: gp3-csi
+  openshift_storage:
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 512
+  - storage_name: sample-elastic
+    storage_type: aws-elastic
+

Property explanation OpenShift clusters on AWS (ROSA)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version, specified as x.y.z Yes >= 4.6
compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties Yes
infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed
infrastructure.aws_region Region of AWS where cluster is deployed. Yes
infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False
infrastructure.use_sts Determines whether AWS Security Token Service must be used by the ROSA installer. Default is False. No True, False (default)
infrastructure.credentials_mode Change the security requirement of the Cloud Credential Operator (CCO). Default (omit) is automatically handled by CCO. No Manual, Mint
infrastructure.machine_cidr Machine CIDR, for example 10.243.0.0/16. No CIDR
infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr. No Existing subnet IDs
compute_nodes Total number of compute nodes Yes Integer
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example:

    machine_cidr: 10.243.0.0/24
+    subnet_ids:
+    - subnet-0e63f662bb1842e8a
+    - subnet-0673351cd49877269
+    - subnet-00b007a7c2677cdbc
+    - subnet-02b676f92c83f4422
+    - subnet-0f1b03a02973508ed
+    - subnet-027ca7cc695ce8515
+

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No

OpenShift on Microsoft Azure (ARO)🔗

openshift:
+- name: sample
+  azure_name: sample
+  domain_name: example.com
+  ocp_version: 4.10.54
+  cloud_native_toolkit: False
+  oadp: False
+  network:
+    pod_cidr: "10.128.0.0/14"
+    service_cidr: "172.30.0.0/16"
+  gpu:
+    install: auto
+  openshift_ai:
+    install: auto
+    channel: auto
+  openshift_storage:
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 512
+    ocs_dynamic_storage_class: managed-premium
+

Property explanation for OpenShift cluster on Microsoft Azure (ARO)🔗

Warning

You cannot choose the OCP version of the ARO cluster; the latest available version is provisioned automatically, regardless of the value specified in the "ocp_version" parameter. The "ocp_version" parameter is still mandatory for compatibility with other layers of the provisioning, such as the OpenShift client: the value is used by the process which downloads and installs the oc client. Please specify the value according to the OCP version that will be provisioned.

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
azure_name Name of the azure element in the configuration Yes
domain_name Domain name of the cluster, if you want to override the name generated by Azure No
ocp_version The OpenShift version. If you want to install 4.10, specify "4.10" Yes >= 4.6
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
network Cluster network attributes Yes
network.pod_cidr CIDR of pod network Yes Must be /18 or larger
network.service_cidr CIDR of service network Yes Must be /18 or larger
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS servers(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall). auto will install the operators if needed by any of the Cloud Pak/watsonx Yes auto, True, False
openshift_ai Control installation of OpenShift AI No
openshift_ai.install Must OpenShift AI be installed (Once installed, False does not uninstall). auto will install OpenShift AI if needed by any of the Cloud Pak/watsonx components Yes auto, True, False
openshift_ai.channel Which operator channel must be installed No auto (default), stable, …
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs or nfs
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label (or rather a name) to be used for the dedicated OCS nodes in the cluster, combined with the Azure location and zone id Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. In Azure, you must select managed-premium Yes if storage_type is ocs managed-premium
\ No newline at end of file diff --git a/30-reference/configuration/private-registry/index.html b/30-reference/configuration/private-registry/index.html new file mode 100644 index 000000000..11bd7620f --- /dev/null +++ b/30-reference/configuration/private-registry/index.html @@ -0,0 +1,58 @@ + Private registries - Cloud Pak Deployer
Skip to content

Private registry🔗

In cases where the OpenShift cluster is in an environment with limited internet connectivity, you may want OpenShift to pull Cloud Pak images from a private image registry (aka container registry). There may also be other reasons for choosing a private registry over the entitled registry.

Configuring a private registry🔗

The below steps outline how to configure a private registry for a Cloud Pak deployment. When the image_registry object is referenced by the Cloud Pak object (such as cp4d), the deployer makes the following changes in OpenShift so that images are pulled from the private registry:

  • Global pull secret: The image registry's credentials are retrieved from the vault (the secret name must be image-registry-<name>) and an entry for the registry is added to the global pull secret (secret pull-secret in project openshift-config).
  • ImageContentSourcePolicy: This is a mapping between the original location of the image, for example quay.io/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 is mapped to registry.coc.uk.ibm.com:15000/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29.
  • Image registry settings: OpenShift keeps image registry settings in custom resource image.config.openshift.io/cluster. If a private registry with a self-signed certificate is configured, the certificate authority's PEM bundle must be created as a ConfigMap in the openshift-config project. The deployer uses the vault secret referenced in the registry_trusted_ca_secret property to create or update the ConfigMap so that OpenShift can connect to the registry in a secure manner. Alternatively, you can add the registry_insecure: true property to pull images without checking the certificate.

image_registry🔗

Defines a private registry from which the Cloud Pak container images will be pulled. Additionally, if the Cloud Pak entitlement key was specified at run time of the deployer, the images defined by the case files will be mirrored to this private registry.

image_registry:
+- name: cpd463
+  registry_host_name: registry.example.com
+  registry_port: 5000
+  registry_insecure: false
+  registry_trusted_ca_secret: cpd463-ca-bundle
+

Properties🔗

Property Description Mandatory Allowed values
name Name by which the image registry is identified. Yes
registry_host_name Host name or IP address of the registry server Yes
registry_port Port that the image registry listens on. Default is the https port (443) No
registry_namespace Namespace (path) within the registry that holds the Cloud Pak images. Mandatory only when using the IBM Cloud Container Registry (ICR) No
registry_insecure Defines whether insecure registry access with a self-signed certificate is allowed No True, False (default)
registry_trusted_ca_secret Defines the vault secret which holds the certificate authority bundle that must be used when connecting to this private registry. This parameter cannot be specified if registry_insecure is also specified. No

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.

When mirroring images, the deployer connects to the registry using the host name and port. If the port is omitted, the standard https protocol (443) is used. If a registry_namespace is specified, for example when using the IBM Container Registry on IBM Cloud, it will be appended to the registry URL.

The user and password to connect to the registry will be retrieved from the vault, using secret image-registry-<your_image_registry_name>; the secret must be stored in the format registry_user:registry_password. For example, if you want to connect to the image registry cpd463 with user admin and password very_s3cret, you would create a secret as follows:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+
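
To verify that the secret was stored correctly, you can retrieve it again; a quick check using the deployer's vault get command:

./cp-deploy.sh vault get \
+  -vs image-registry-cpd463
+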

If you need to connect to a private registry which is not signed by a public certificate authority, you have two choices:

  • Store the PEM certificate that holds the CA bundle in a vault secret and specify that secret for the registry_trusted_ca_secret property. This is the recommended method for private registries.
  • Specify registry_insecure: true (not recommended): This means that the registry (and port) will be marked as insecure and OpenShift will pull images from it, even if its certificate is self-signed.

For example, if you have a file /tmp/ca.crt with the PEM certificate for the certificate authority, you can do the following:

./cp-deploy.sh vault set \
+  -vs cpd463-ca-bundle \
+  -vsf /tmp/ca.crt
+

This will create a vault secret which the deployer will use to populate a configmap in the openshift-config project, which in turn is referenced by the image.config.openshift.io/cluster custom resource. For the above configuration, configmap cpd463-ca-bundle would be created and the image.config.openshift.io/cluster resource would look something like this:

apiVersion: config.openshift.io/v1
+kind: Image
+metadata:
+...
+...
+  name: cluster
+spec:
+  additionalTrustedCA:
+    name: cpd463-ca-bundle
+
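
To confirm the result on the cluster, you can display the resource; this assumes you are logged in to OpenShift with the oc client:

oc get image.config.openshift.io/cluster -o yaml
+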

Using the IBM Container Registry as a private registry🔗

If you want to use a private registry when running the deployer for a ROKS cluster on IBM Cloud, you must use the IBM Container Registry (ICR) service. The deployer will automatically create the specified namespace in the ICR and set up the credentials accordingly. Configure an image_registry object with the host name of the private registry and the namespace that holds the images. An example of using the ICR as a private registry:

image_registry:
+- name: cpd463
+  registry_host_name: de.icr.io
+  registry_namespace: cpd463
+

The registry host name must end with icr.io and the registry namespace is mandatory. No other properties are needed; the deployer will retrieve them from IBM Cloud.

If you have already created the ICR namespace, create a vault secret for the image registry credentials:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+

An example of configuring the private registry for a cp4d object is below:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.8.3
+  image_registry_name: cpd463
+

The Cloud Pak for Data installation refers to the cpd463 image_registry object.

If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.
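
For example, a run that skips the mirroring step could look like this; any other command-line arguments depend on your environment:

./cp-deploy.sh env apply --skip-mirror-images
+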

Using a private registry for the Cloud Pak installation (non-IBM Cloud)🔗

Configure an image_registry object with the host name of the private registry and some optional properties such as port number, CA certificate and whether insecure access to the registry is allowed.

Example:

image_registry:
+- name: cpd463
+  registry_host_name: registry.example.com
+  registry_port: 5000
+  registry_insecure: false
+  registry_trusted_ca_secret: cpd463-ca-bundle
+

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.

To create the vault secret for the image registry credentials:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+

To create the vault secret for the CA bundle:

./cp-deploy.sh vault set \
+  -vs cpd463-ca-bundle \
+  -vsf /tmp/ca.crt
+

Where ca.crt looks something like this:

-----BEGIN CERTIFICATE-----
+MIIFszCCA5ugAwIBAgIUT02v9OdgdvjgQVslCuL0wwCVaE8wDQYJKoZIhvcNAQEL
+BQAwaTELMAkGA1UEBhMCVVMxETAPBgNVBAgMCE5ldyBZb3JrMQ8wDQYDVQQHDAZB
+cm1vbmsxFjAUBgNVBAoMDUlCTSBDbG91ZCBQYWsxHjAcBgNVBAMMFUlCTSBDbG91
+...
+mcutkgtbkq31XYZj0CiM451Qp8KnTx0=
+-----END CERTIFICATE-----
+
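
To sanity-check the PEM file before storing it in the vault, you can inspect it with openssl; this is a quick check and not required by the deployer:

openssl x509 -in /tmp/ca.crt -noout -subject -issuer -dates
+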

An example of configuring the private registry for a cp4d object is below:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.8.3
+  image_registry_name: cpd463
+

The Cloud Pak for Data installation refers to the cpd463 image_registry object.

If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.

\ No newline at end of file diff --git a/30-reference/configuration/redhat-sso/index.html b/30-reference/configuration/redhat-sso/index.html new file mode 100644 index 000000000..dc52562be --- /dev/null +++ b/30-reference/configuration/redhat-sso/index.html @@ -0,0 +1,13 @@ + Red Hat SSO - Cloud Pak Deployer
Skip to content

Red Hat Single Sign-on (SSO) configuration🔗

You can configure Red Hat Single Sign-on (SSO) to be installed on the OpenShift cluster as an Identity Provider (IdP). Red Hat SSO implements the open-source Keycloak project which offers a user registry and can also federate other IdPs.

Red Hat SSO configuration - openshift_redhat_sso🔗

An openshift_redhat_sso resource indicates that the Red Hat Single Sign-on operator must be installed on the referenced OpenShift cluster. A single SSO configuration can have only 1 Keycloak realm defined. The Keycloak realm holds all configuration needed for authentication. If you want to host more than 1 Keycloak server on the cluster, specify multiple openshift_redhat_sso entries, each with its own keycloak_name. The keycloak_name also determines the OpenShift project that will be created.

openshift_redhat_sso:
+- openshift_cluster_name: "{{ env_id }}"
+  keycloak_name: ibm-keycloak
+  groups:
+  - name: kc-cp4d-admins
+    state: present
+  - name: kc-cp4d-data-engineers
+    state: present
+  - name: kc-cp4d-data-scientists
+    state: present
+  - name: kc-cp4d-monitors
+    state: present
+

The above configuration installs the Red Hat SSO operator in OpenShift project ibm-keycloak and creates a Keycloak instance named ibm-keycloak. The instance has a single realm (master) which contains the groups, users and clients that are then leveraged by Cloud Pak Foundational Services.

Currently you can only define Keycloak groups which are later mapped to Cloud Pak for Data user groups. Creating users and setting up federated identity providers must be done by logging into Keycloak.

The Keycloak name is referenced in the Zen Access Control resource and this is also where the mapping from Keycloak groups to Cloud Pak for Data groups takes place.

Property explanation🔗

Property Description Mandatory Allowed values
openshift_cluster_name Name of OpenShift cluster onto which the Red Hat SSO operator is installed Yes, if more than 1 openshift resource in the configuration
keycloak_name Name of the Keycloak server, this also determines the name of the project into which the Keycloak server will be created Yes
.groups[] Groups that will be created in the Keycloak realm Yes
.name Name of the Keycloak group Yes
.state Whether the group is present or absent Yes present, absent
\ No newline at end of file diff --git a/30-reference/configuration/topologies/index.html b/30-reference/configuration/topologies/index.html new file mode 100644 index 000000000..54ffdd842 --- /dev/null +++ b/30-reference/configuration/topologies/index.html @@ -0,0 +1 @@ + Topologies - Cloud Pak Deployer
Skip to content

Deployment topologies🔗

Configuration of the topology to be deployed typically boils down to choosing the cloud infrastructure you want to deploy, then choosing the type of OpenShift and storage, integrating with infrastructure services and then setting up the Cloud Pak(s). For most initial implementations, a basic deployment will suffice and later this can be extended with additional configuration.

Depicted below is the basic deployment topology, followed by a topology with all bells and whistles.

Basic deployment🔗

Basic deployment

For more details on each of the configuration elements, refer to:

Extended deployment🔗

Extended deployment

For more details about extended deployment, refer to:

\ No newline at end of file diff --git a/30-reference/configuration/vault/index.html b/30-reference/configuration/vault/index.html new file mode 100644 index 000000000..d97bcd93c --- /dev/null +++ b/30-reference/configuration/vault/index.html @@ -0,0 +1,4 @@ + Vault - Cloud Pak Deployer
Skip to content

Vault configuration🔗

Throughout the deployment process, the Cloud Pak Deployer will create secrets in a vault and retrieve them later. Examples of secrets are SSH keys and the Cloud Pak for Data admin password. Additionally, when provisioning infrastructure on IBM Cloud, the resulting Terraform state file is also stored in the vault so it can be used later if the configuration needs to be changed.

Configuration of the vault is done through a vault object in the configuration. If you want to use the file-based vault in the status directory, you do not need to configure anything.

The following Vault implementations can be used to store and retrieve secrets:

  • File Vault (no encryption)
  • IBM Cloud Secrets Manager
  • Hashicorp Vault (token authentication)
  • Hashicorp Vault (certificate authentication)

The File Vault is the default vault and also the simplest. It does not require a password and all secrets are stored in base-64 encoding in a properties file under the <status_directory>/vault directory. The name of the vault file is the environment_name you specified in the global configuration, inventory file or at the command line.

All of the other vault options require some secret manager (IBM Cloud service or Hashicorp Vault) to be available and you need to specify a password or provide a certificate.
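
Regardless of the chosen implementation, secrets can be listed and retrieved through the deployer's vault commands; for example (the secret name below is illustrative):

./cp-deploy.sh vault list
+# retrieve a single secret; replace with a secret name from the list output
+./cp-deploy.sh vault get \
+  -vs cp4d_admin_password
+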

Sample Vault config:

vault:
+  vault_type: file-vault
+  vault_authentication_type: none
+

Properties for all vault implementations🔗

Property Description Mandatory Allowed values
vault_type Chosen implementation of the vault Yes file-vault, ibmcloud-vault, hashicorp-vault

Properties for file-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the file vault No none

Properties for ibmcloud-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the IBM Cloud Secrets Manager vault No api-key
vault_url URL for the IBM Cloud secrets manager instance Yes

Properties for hashicorp-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the Hashicorp vault No api-key, certificate
vault_url URL for the Hashicorp vault, this is typically https://hostname:8200 Yes
vault_api_key When authentication type is api-key, the field to authenticate with Yes
vault_secret_path Default secret path to store and retrieve secrets into/from Yes
vault_secret_field Default field to store or retrieve secrets Yes
vault_secret_path_append_group Determines whether or not the secret group will be appended to the path Yes True (default), False
vault_secret_base64 Indicates whether secrets are stored in base64 format in the Hashicorp Vault Yes True (default), False
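
As an illustration, a Hashicorp Vault configuration using the properties above might look as follows; the URL, token and secret path are placeholders only:

vault:
+  vault_type: hashicorp-vault
+  vault_authentication_type: api-key
+  vault_url: https://vault.example.com:8200   # placeholder host name
+  vault_api_key: <vault-token>                # placeholder value
+  vault_secret_path: secret/cloud-pak-deployer
+  vault_secret_field: value
+  vault_secret_path_append_group: True
+  vault_secret_base64: True
+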
\ No newline at end of file diff --git a/30-reference/process/configure-cloud-pak/index.html b/30-reference/process/configure-cloud-pak/index.html new file mode 100644 index 000000000..8e7df1811 --- /dev/null +++ b/30-reference/process/configure-cloud-pak/index.html @@ -0,0 +1 @@ + Configure Cloud Paks - Cloud Pak Deployer
Skip to content

Configure the Cloud Pak(s)🔗

This stage focuses on post-installation configuration of the Cloud Paks and cartridges.

Cloud Pak for Data🔗

Web interface certificate🔗

When provisioning on IBM Cloud ROKS, a CA-signed certificate for the ingress subdomain is automatically generated in the IBM Cloud certificate manager. The deployer retrieves the certificate and adds it to the secret that stores the certificate key. This will avoid getting a warning when opening the Cloud Pak for Data home page.

Configure identity and access management🔗

For Cloud Pak for Data you can configure:

  • SAML for Single Sign-on. When specified in the cp4d_saml_config object, the deployer configures the user management pods to redirect logins to the identity provider (IdP) of choice.
  • LDAP configuration. LDAP can be used both for authentication (if no SSO has been configured) and for access management by mapping LDAP groups to Cloud Pak for Data user groups. Specify the LDAP or LDAPS properties in the cp4d_ldap_config object so that the deployer configures it for Cloud Pak for Data. If SAML has been configured for authentication, the configured LDAP server is only used for access management.
  • User group configuration. This creates user-defined user groups in Cloud Pak for Data to match the LDAP configuration. The configuration object used for this is cp4d_user_group_configuration.

Provision instances🔗

Some cartridges such as Data Virtualization have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, this step provisions them. The following Cloud Pak for Data cartridges are currently supported for creating instances:

  • Analytics engine powered by Apache Spark (analytics-engine)
  • Db2 OLTP (db2)
  • Cognos Analytics (ca)
  • Data Virtualization (dv)

Configure instance access🔗

Cloud Pak for Data does not support group-defined access to cartridge instances. After creation of the instances (and also when the deployer is run with the --cp-config-only flag), the permissions of users accessing the instance are configured.

For Cognos Analytics, the Cognos Authorization process is run to apply user group permissions to the Cognos Analytics instance.

Create or change platform connections🔗

Cloud Pak for Data defines data source connections at the platform level and these can be reused in some cartridges like Watson Knowledge Catalog and Watson Studio. The cp4d_connection object defines each of the platform connections that must be managed by the deployer.

Backup and restore connections🔗

If you want to back up or restore platform connections, the cp4d_backup_restore_connections object defines the JSON file that will be used for backup and restore.

\ No newline at end of file diff --git a/30-reference/process/configure-infra/index.html b/30-reference/process/configure-infra/index.html new file mode 100644 index 000000000..6c8b04477 --- /dev/null +++ b/30-reference/process/configure-infra/index.html @@ -0,0 +1 @@ + Configure infrastructure - Cloud Pak Deployer
Skip to content

Configure infrastructure🔗

This stage focuses on the configuration of the provisioned infrastructure.

Configure infrastructure for IBM Cloud🔗

Configure the VPC bastion server(s)🔗

In a configuration scenario where NFS is used for OpenShift storage, the NFS server must be provisioned as a VSI within the VPC that contains the OpenShift cluster. It is best practice to shield the NFS server from the outside world by using a jump host (bastion) to access it.

This step configures the bastion host, which has a public IP address, to serve as a jump host to access other servers and services within the VPC.

Configure the VPC NFS server(s)🔗

Configures the NFS server using the specs in the nfs_server configuration object(s). It installs the required packages and sets up the NFSv4 service. Additionally, it will format the empty volume as xfs and export it so it can be used by the managed-nfs-storage storage class in the OpenShift cluster.

Configure the OpenShift storage classes🔗

This step takes care of configuring the storage classes in the OpenShift cluster. Storage classes are an abstraction of the underlying physical and virtual storage. When run, it processes the openshift_storage elements within the current openshift configuration object.

Two types of storage classes can be automatically created and configured:

NFS Storage🔗

Creates the managed-nfs-storage OpenShift storage class using the specified nfs_server_name which references an nfs_server configuration object.
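
Once created, you can verify that the storage class exists; a quick check assuming you are logged in to the cluster with oc:

oc get storageclass managed-nfs-storage
+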

OCS Storage🔗

Activates the ROKS cluster's OpenShift Container Storage add-on to install the operator into the cluster. Once finished with the preparation, the OcsCluster OpenShift object is created to provision the storage cluster. As the backing storage the ibmc-vpc-block-metro-10iops-tier storage class is used, which has the appropriate IO characteristics for the Cloud Paks.

Info

Both NFS and OCS storage classes can be created but only 1 storage class of each type can exist in the cluster at the moment. If more than one storage class of the same type is specified, the configuration will fail.

\ No newline at end of file diff --git a/30-reference/process/cp4d-cartridges/cognos-authorization/index.html b/30-reference/process/cp4d-cartridges/cognos-authorization/index.html new file mode 100644 index 000000000..9adb1d93d --- /dev/null +++ b/30-reference/process/cp4d-cartridges/cognos-authorization/index.html @@ -0,0 +1,22 @@ + Automated Cognos Authorization using LDAP groups - Cloud Pak Deployer
Skip to content

Automated Cognos Authorization using LDAP groups🔗

Authorization Overview

Description🔗

The automated Cognos authorization capability uses LDAP groups to assign users to a Cognos Analytics Role, which allows these users to log in to IBM Cloud Pak for Data and access the Cognos Analytics instance. This capability performs the following tasks:

  • Create a User Group and assign the associated LDAP Group(s) and Cloud Pak for Data role(s)
  • For each member of the LDAP Group(s) that are part of the User Group, create the user as a Cloud Pak for Data User and assign the Cloud Pak for Data role(s)
  • For each member of the LDAP Group(s) that are part of the User Group, assign membership to the Cognos Analytics instance and authorize for the Cognos Analytics Role

If the User Group is already present, validate that all LDAP Group(s) are associated with the User Group and add any LDAP Group(s) not yet associated. Existing LDAP groups will not be removed from the User Group.

If a User is already present in Cloud Pak for Data, it will not be updated.

If a user is already associated with the Cognos Analytics instance, its original membership is kept and not updated.

Pre-requisites🔗

Prior to running the script, ensure:

  • LDAP configuration in IBM Cloud Pak for Data is completed and validated
  • The Cognos Analytics instance is provisioned and running in IBM Cloud Pak for Data
  • The role(s) that will be associated with the User Group are present in IBM Cloud Pak for Data

Usage of the Script🔗

The script is available in automation-roles/50-install-cloud-pak/cp4d-service/files/assign_CA_authorization.sh.

Run the script without arguments to show its usage help.

# ./assign_CA_authorization.sh                                                                               
+Usage:
+
+assign_CA_authorization.sh
+  <CLOUD_PAK_FOR_DATA_URL>
+  <CLOUD_PAK_FOR_DATA_LOGIN_USER>
+  <CLOUD_PAK_FOR_DATA_LOGIN_PASSWORD>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_NAME>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_DESCRIPTION>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_ROLES_ASSIGNMENT>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_LDAP_GROUPS_MAPPING>
+  <CLOUD_PAK_FOR_DATA_COGNOS_ANALYTICS_ROLE>
+

  • The URL to the IBM Cloud Pak for Data instance
  • The login user to IBM Cloud Pak for Data, e.g. the admin user
  • The login password to IBM Cloud Pak for Data
  • The Cloud Pak for Data User Group Name
  • The Cloud Pak for Data User Group Description
  • The Cloud Pak for Data roles associated to the User Group. Use a ;-separated list to assign multiple roles
  • The LDAP Groups associated to the User Group. Use a ;-separated list to assign multiple LDAP groups
  • The Cognos Analytics Role each member of the User Group will be associated with, which must be one of:
  • Analytics Administrators
  • Analytics Explorers
  • Analytics Users
  • Analytics Viewer

Running the script🔗

Using the command example provided by the ./assign_CA_authorization.sh command, run the script with its arguments:

# ./assign_CA_authorization.sh \
+  https://...... \
+  admin \
+  ******** \
+  "Cognos User Group" \
+  "Cognos User Group Description" \
+  "wkc_data_scientist_role;zen_administrator_role" \
+  "cn=ca_group,ou=groups,dc=ibm,dc=com" \
+  "Analytics Viewer"
+
The script execution will run through the following tasks:

Validation
Confirm all required arguments are provided.
Confirm at least 1 User Group Role assignment is provided.
Confirm at least 1 LDAP Group is provided.

Login to Cloud Pak for Data and generate a Bearer token
Using the provided IBM Cloud Pak for Data URL, username and password, log in to Cloud Pak for Data and generate the Bearer token used for subsequent commands. Exit with an error if the login to IBM Cloud Pak for Data fails.

Confirm the provided User Group role(s) are present in Cloud Pak for Data
Acquire all Cloud Pak for Data roles and confirm the provided User Group role(s) are one of the existing Cloud Pak for Data roles. Exit with an error if a role is provided which is not currently present in IBM Cloud Pak for Data.

Confirm the provided Cognos Analytics role is valid
Ensure the provided Cognos Analytics role is one of the available Cognos Analytics roles. Exit with an error if a Cognos Analytics role is provided that does not match with the available Cognos Analytics roles.

Confirm LDAP is configured in IBM Cloud Pak for Data
Ensures the LDAP configuration is completed. Exit with an error if there is no current LDAP configuration.

Confirm the provided LDAP groups are present in the LDAP User Registry
Using IBM Cloud Pak for Data, query whether the provided LDAP groups are present in the LDAP User registry. Exit with an error if an LDAP Group is not available.

Confirm if the IBM Cloud Pak for Data User Group exists
Queries the IBM Cloud Pak for Data User Groups. If the provided User Group exists, acquire the Group ID.

If the IBM Cloud Pak for Data User Group does not exist, create it
If the User Group does not exist, create it, and assign the IBM Cloud Pak for Data Roles and LDAP Groups to the new User Group

If the IBM Cloud Pak for Data User Group does exist, validate the associated LDAP Groups
If the User Group already exists, confirm all provided LDAP groups are associated with the User Group. Add LDAP groups that are not yet associated.

Get the Cognos Analytics instance ID
Queries the IBM Cloud Pak for Data service instances and acquires the Cognos Analytics instance ID. Exit with an error if no Cognos Analytics instance is available.

Ensure each user member of the IBM Cloud Pak for Data User Group is an existing user
For each user that is a member of the provided LDAP groups, ensure the user exists as an IBM Cloud Pak for Data User. Create a new user with the provided User Group role(s) if the user is not yet available. Any existing User(s) will not be updated. If Users are removed from an LDAP Group, these users will not be removed from Cloud Pak for Data.

Ensure each user member of the IBM Cloud Pak for Data User Group is associated to the Cognos Analytics instance
For each user that is a member of the provided LDAP groups, ensure the user is associated with the Cognos Analytics instance with the provided Cognos Analytics role. Any user that is already associated with the Cognos Analytics instance will have its Cognos Analytics role updated to the provided Cognos Analytics Role.

\ No newline at end of file diff --git a/30-reference/process/cp4d-cartridges/cognos_authorization.png b/30-reference/process/cp4d-cartridges/cognos_authorization.png new file mode 100644 index 000000000..6f042f56f Binary files /dev/null and b/30-reference/process/cp4d-cartridges/cognos_authorization.png differ diff --git a/30-reference/process/deploy-assets/index.html b/30-reference/process/deploy-assets/index.html new file mode 100644 index 000000000..c4c33acda --- /dev/null +++ b/30-reference/process/deploy-assets/index.html @@ -0,0 +1 @@ + Deploy assets - Cloud Pak Deployer
Skip to content

Deploy Cloud Pak assets🔗

Cloud Pak for Data🔗

For Cloud Pak for Data, this stage does the following:

  • Deploy Cloud Pak for Data assets which are defined with object cp4d_asset
  • Deploy the Cloud Pak for Data monitors identified with cp4d_monitors elements.

Deploy Cloud Pak for Data assets🔗

See cp4d_asset for more details.

Cloud Pak for Data monitors🔗

See cp4d_monitors for more details.

\ No newline at end of file diff --git a/30-reference/process/images/provisioning-process.drawio b/30-reference/process/images/provisioning-process.drawio new file mode 100644 index 000000000..e61717386 --- /dev/null +++ b/30-reference/process/images/provisioning-process.drawio @@ -0,0 +1 @@ +7Zldb5swFIZ/TS5b8ZHwcZmk7Vapkyql2rqryg0OuDUcZEwC+/WzwQkwKKXS2kQiuUh8Xn/EPi/PISITcxlm3xiKgx/gYToxNC+bmFcTw9Bd0xQfUslLxXGNUvAZ8dSgSliRP1iJmlJT4uGkMZADUE7ipriGKMJr3tAQY7BrDtsAbX5rjHzcElZrRNvqL+LxQJ3CsCv9OyZ+sP9m3XLLnhDtB6uTJAHyYFeTzOuJuWQAvGyF2RJTmbx9Xsp5N2/0HjbGcMSHTMiMe3b3RNhDPH/IYf38sshuLnRlT8Lz/YmxJxKgQmA8AB8iRK8rdcEgjTwsl9VEVI25A4iFqAvxBXOeKzdRykFIAQ+p6sUZ4Y+19m+51OVMRVeZWrkIchVsIOJqQUNX8RIosGLX5k3xEnp5HnmIN/OkpARStsZ9yVHXG2I+5j3jrIObAgMMIeYsF/MYpoiTbXMfSF2P/mFcZZloKNc+4mC57hbR9IDO5cSwqNjx4pmJli9bPxElntgORC3DKztlXncB4XgVoyIzOwF107qaDbrVY8OGUFrTNc11NWmj2IYfCW0t3MCic7HFjBPB21x1hMTziqtMnUt046zfynbq1YQL21bwqepjz1S8q1jWTaUFNY4t7ZPsmrXsmkcJeRYn/tcWUS9i2UxI5FM8l7VsgD3NtFviVdhRlMJ9AROo2frUcKr3/QjlrHZpaM0hHwbw//rmdPrWZZz9WcaJJJ0rZW8FHFAp7WNWSquFntFVKe8ZlikYU5m0tFMrk/rsTFs/RQNoc45Jm92izeymDbYkKX6WaLfRhqExcTc1T44768xdP08DuHOPyZ3T4m7axd0Sog3xU3mfGx93B35Ohzv7zF0/TwO407Vu278GPLcF3qwLvNso4ahYbUkh9cTnPXodE3zuybHnnNl7h6kh8L1h+xc9BdNa9Fnv3fbGyZ9zavgdno+fPH4iwyx/rAe1WTKsphXRMbEd+uxaN46Kbfvptd2F7RWOKchtzZME82REuBqzU+O1vGIanjldnq1CeJVl9gEno3JsanydYyKs/kAs+mp/w5rXfwE= \ No newline at end of file diff --git a/30-reference/process/images/provisioning-process.png b/30-reference/process/images/provisioning-process.png new file mode 100644 index 000000000..382f40168 Binary files /dev/null and b/30-reference/process/images/provisioning-process.png differ diff --git a/30-reference/process/install-cloud-pak/index.html b/30-reference/process/install-cloud-pak/index.html new file mode 100644 index 000000000..60dda3e8c --- /dev/null +++ b/30-reference/process/install-cloud-pak/index.html @@ -0,0 +1,13 @@ + Install the Cloud Pak - Cloud Pak Deployer
Skip to content

Install the Cloud Pak(s)🔗

This stage focuses on preparing the OpenShift cluster for installing the Cloud Pak(s) and then proceeds with the installation of the Cloud Paks and the cartridges. The documentation below starts with a list of steps that are executed for all Cloud Paks, then proceeds with Cloud Pak-specific activities. The execution of the steps may slightly differ from the sequence in the documentation.

Sections:

Remove Cloud Pak for Data🔗

Before going ahead with the mirroring of container images and installation of Cloud Pak for Data, the previous configuration (if any) is retrieved from the vault to determine if a Cloud Pak for Data instance has been removed. If a previously installed cp4d object no longer exists in the current configuration, its associated instance is removed from the OpenShift cluster.

First, the custom resources are removed from the OpenShift project, with a grace period of 5 minutes. After the grace period has expired, OpenShift forcefully deletes the custom resource and its associated definitions. Then, the control plane custom resource Ibmcpd is removed and finally the namespace (project). For the namespace deletion, a grace period of 10 minutes is applied.

Prepare private image registry🔗

When installing the Cloud Paks, images must be pulled from an image registry. All Cloud Paks support pulling images directly from the IBM Entitled Registry using the entitlement key, but there may be situations where this is not possible, for example in air-gapped environments, or when images must be scanned for vulnerabilities before they are allowed to be used. In those cases, a private registry will have to be set up.

The Cloud Pak Deployer can mirror images from the entitled registry to a private registry. On IBM Cloud, the deployer is also capable of creating a namespace in the IBM Container Registry and mirroring the images to that namespace.

When a private registry has been specified in the Cloud Pak entry (using the image_registry_name property), the necessary OpenShift configuration changes will also be made.

Create IBM Container Registry namespace (IBM Cloud only)🔗

If OpenShift is deployed on IBM Cloud (ROKS), the IBM Container Registry should be used as the private registry from which the images will be pulled. Images in the ICR are organized by namespace and can be accessed using an API key issued for a service account. If an image_registry object is specified in the configuration, this process will take care of creating the service account and the API key, and it will store the API key in the vault.

Connect to the specified private image registry🔗

If an image registry has been specified for the Cloud Pak using the image_registry_name property, the referenced image_registry entry is looked up in the configuration and the credentials are retrieved from the vault. Then the connection to the registry is tested by logging on.

Install Cloud Pak for Data and cartridges🔗

Prepare OpenShift cluster for Cloud Pak installation🔗

Cloud Pak for Data requires a number of cluster-wide settings:

  • Create an ImageContentSourcePolicy if images must be pulled from a private registry
  • Set the global pull secret with the credentials to pull images from the entitled or private image registry
  • Create a Tuned object to set kernel semaphores and other properties of CoreOS containers being spun up
  • Allow unsafe system controls in the Kubelet configuration
  • Set PIDs limit and default ulimit for the CRI-O configuration

For all OpenShift clusters, except ROKS on IBM Cloud, these settings are applied using OpenShift configuration objects and then picked up by the Machine Config Operator. This operator will then apply the settings to the control plane and compute nodes as appropriate and reload them one by one.

To avoid having to reload the nodes more than once, the Machine Config Operator is paused before the settings are applied. After all settings have been applied, the Machine Config Operator is released and the deployment process then waits until all nodes are ready with the configuration applied.

Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite🔗

As mentioned before, ROKS on IBM Cloud does not include the Machine Config Operator and would normally require the compute nodes to be reloaded (classic ROKS) or replaced (ROKS on VPC) to make the changes effective. While implementing this process, we have experienced intermittent reliability issues where replacement of nodes never finished or the cluster ended up in an unusable state. To avoid this, the process applies the settings in a different manner.

On every node, a cron job is created which starts every 5 minutes. It runs a script that checks if any of the cluster-wide settings must be (re-)applied, then updates the local system and restarts the crio and kubelet daemons. If no settings are to be adjusted, the daemons will not be restarted and therefore the cron job has minimal or no effect on the running applications.

Compute node changes that are made by the cron job:

  • ImageContentSourcePolicy: File /etc/containers/registries.conf is updated to include registry mirrors for the private registry (see the sketch below).
  • Kubelet: File /etc/kubernetes/kubelet.conf is appended with the allowedUnsafeSysctls entries.
  • CRI-O: pids_limit and default_ulimit changes are made to the /etc/crio/crio.conf file.
  • Pull secret: The registry and credentials are appended to the /.docker/config.json configuration.
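
As an illustration, a mirror entry of the kind written to /etc/containers/registries.conf could look as follows; the private registry host name is an example only:

[[registry]]
+  location = "quay.io/opencloudio"
+
+  # mirror on the private registry (example host name)
+  [[registry.mirror]]
+    location = "registry.example.com:5000/opencloudio"
+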

There are scenarios, especially on IBM Cloud Satellite, where custom changes must be applied to the compute nodes. This is possible by adding an apply-custom-node-settings.sh script to the assets directory within the CONFIG_DIR directory. Once Kubelet, CRI-O and other changes have been applied, this script (if it exists) is run to apply any additional configuration changes to the compute node.

By setting the NODE_UPDATED script variable to 1 you can tell the deployer to restart the crio and kubelet daemons.

WARNING: You should never set the NODE_UPDATED script variable to 0 as this will cause previous changes to the pull secret, ImageContentSourcePolicy and others not to become effective.

WARNING: Do not end the script with the exit command; this will stop the calling script from running and therefore not restart the daemons.

Sample script:

#!/bin/bash
+
+#
+# This is a sample script that will cause the crio and kubelet daemons to be restarted once by checking
+# file /tmp/apply-custom-node-settings-run. If the file doesn't exist, it creates it and sets NODE_UPDATED to 1.
+# The deployer will observe that the node has been updated and restart the daemons.
+#
+
+if [ ! -e /tmp/apply-custom-node-settings-run ];then
+    touch /tmp/apply-custom-node-settings-run
+    NODE_UPDATED=1
+fi
+

Mirror images to the private registry🔗

If a private image registry is specified, and if the IBM Cloud Pak entitlement key is available in the vault (cp_entitlement_key secret), the Cloud Pak case files for the Foundational Services, the Cloud Pak control plane and the cartridges are downloaded to a subdirectory of the specified status directory. Then all images defined for the cartridges are mirrored from the entitled registry to the private image registry. Depending on network speed and how many cartridges have been configured, the mirroring can take a very long time (12+ hours). Images which have already been mirrored to the private registry are skipped by the mirroring process.

Even if all images have been mirrored, the act of checking existence and digests can still take some time (10-15 minutes). To avoid this, you can remove the cp_entitlement_key secret from the vault and unset the CP_ENTITLEMENT_KEY environment variable before running the Cloud Pak Deployer.
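
For example, unsetting the environment variable before a run (removal of the vault secret is also needed, as described above; any other env apply arguments depend on your environment):

unset CP_ENTITLEMENT_KEY
+./cp-deploy.sh env apply
+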

Create catalog sources🔗

The images of the operators which control the Cloud Pak are defined in OpenShift CatalogSource objects which reside in the openshift-marketplace project. Operator subscriptions subsequently reference the catalog source and define the update channel. When images are pulled from the entitled registry, most subscriptions reference the same ibm-operator-catalog catalog source (and also a Db2U catalog source). If images are pulled from a private registry, the control plane and also each cartridge reference their own catalog source in the openshift-marketplace project.

This step creates the necessary catalog sources, dependent on whether the entitled registry or a private registry is used. For the entitled registry, it creates the catalog source directly using a YAML template; when using a private registry, the cloudctl case command is used for the control plane and every cartridge to install the catalog sources and their dependencies.
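
For reference, the entitled registry catalog source that most subscriptions reference is the publicly documented ibm-operator-catalog; a sketch of such a CatalogSource object:

apiVersion: operators.coreos.com/v1alpha1
+kind: CatalogSource
+metadata:
+  name: ibm-operator-catalog
+  namespace: openshift-marketplace
+spec:
+  displayName: IBM Operator Catalog
+  publisher: IBM
+  sourceType: grpc
+  image: icr.io/cpopen/ibm-operator-catalog:latest
+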

Get OpenShift storage classes🔗

Most custom resources defined by the cartridge operators require some back-end storage. To be able to reference the correct OpenShift storage classes, they are retrieved based on the openshift_storage_name property of the Cloud Pak object.

Prepare the Cloud Pak for Data operator🔗

When using the express install, the Cloud Pak for Data operator also installs the Cloud Pak Foundational Services. Subsequently, this part of the deployer:

  • Creates the operator project if it doesn't exist already
  • Creates an OperatorGroup
  • Installs the license service and certificate manager
  • Creates the platform operator subscription
  • Waits until the ClusterServiceVersion objects for the platform operator and Operand Deployment Lifecycle Manager have been created

Install the Cloud Pak for Data control plane🔗

When the Cloud Pak for Data operator has been installed, the process continues by creating an OperandRequest object for the platform operator, which manages the project in which the Cloud Pak for Data instance is installed. Then it creates an Ibmcpd custom resource in the project, which installs the control plane with nginx, the metastore, etc.

The Cloud Pak for Data control plane is a pre-requisite for all cartridges, so at this stage the deployer waits until the Ibmcpd status reaches the Completed state.

Once the control plane has been installed successfully, the deployer generates a new strong 25-character password for the Cloud Pak for Data admin user and stores this into the vault. Additionally, the admin-user-details secret in the OpenShift project is updated with the new password.
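
If you need to look up the generated admin password later, it can be read from the vault or extracted from the OpenShift secret; a quick check assuming the Cloud Pak for Data project is cpd-instance and the secret uses the usual initial_admin_password key (both assumptions, adjust for your environment):

# project name and key name are assumptions
+oc extract secret/admin-user-details -n cpd-instance \
+  --keys=initial_admin_password --to=-
+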

Install the specified Cloud Pak for Data cartridges🔗

Now that the control plane has been installed in the specified OpenShift project, cartridges can be installed. Every cartridge is controlled by its own operator subscription in the operators project and a custom resource. The deployer iterates twice over the specified cartridges, first to create the operator subscriptions, then to create the custom resources.

Create cartridge operator subscriptions🔗

This step creates subscription objects for each cartridge in the operators project, using a YAML template that is included in the deployer code and the subscription_channel specified in the cartridge definition. Keeping the subscription channel separate provides flexibility when new subscription channels become available over time.

Once the subscription has been created, the deployer waits for the associated CSV(s) to be created and reach the Installed state.

Delete obsolete cartridges🔗

If this is not the first installation, earlier configured cartridges may have been removed. This step iterates over all supported cartridges and checks whether the cartridge has been installed and whether it still exists in the configuration of the current cp4d object. If the cartridge is no longer defined, its custom resource is removed; the operator will then take care of removing all OpenShift configuration.

Install the cartridges🔗

This step creates the Custom Resources for each cartridge; this is the actual installation of the cartridge. Cartridges can be installed in parallel to a certain extent and the operator will wait for the dependencies to be installed first before starting the processes. For example, if Watson Studio and Watson Machine Learning are installed, both have a dependency on the Common Core Services (CCS) and will wait for the CCS object to reach the Completed state before proceeding with the install. Once that is the case, both WS and WML will run the installation process in parallel.

Wait until all cartridges are ready🔗

Installation of the cartridges can take a very long time; up to 5 hours for Watson Knowledge Catalog. While cartridges are being installed, the deployer checks the states of all cartridges on a regular basis and reports these in a log file. The deployer will retry until all specified cartridges have reached the Completed state.

Configure LDAP authentication for Cloud Pak for Data🔗

If LDAP has been configured for the Cloud Pak for Data element, it will be configured after all cartridges have finished installing.

\ No newline at end of file diff --git a/30-reference/process/overview/index.html b/30-reference/process/overview/index.html new file mode 100644 index 000000000..f6d231fa2 --- /dev/null +++ b/30-reference/process/overview/index.html @@ -0,0 +1 @@ + Overview - Cloud Pak Deployer
Skip to content

Deployment process overview🔗

Deployment process overview

When running the Cloud Pak Deployer (./cp-deploy.sh env apply), a series of pre-defined stages are followed to arrive at the desired end-state.

10 - Validation🔗

In this stage, the following activities are executed:

  • Is the specified cloud platform in the inventory file supported?
  • Are the mandatory variables defined?
  • Can the deployer connect to the specified vault?

20 - Prepare🔗

In this stage, the following activities are executed:

  • Read the configuration files from the config directory
  • Replace variable placeholders in the configuration with the extra parameters passed to the cp-deploy command
  • Expand the configuration with defaults from the defaults directory
  • Run the "linter" to check the object attributes in the configuration and their relations
  • Generate the Terraform scripts to provision the infrastructure (IBM Cloud only)
  • Download all CLIs needed for the selected cloud platform and cloud pak(s), if not air-gapped

30 - Provision infra🔗

In this stage, the following activities are executed:

  • Run Terraform to create or change the infrastructure components for IBM Cloud
  • Run the OpenShift installer-provisioned infrastructure (IPI) installer for AWS (ROSA), Azure (ARO) or vSphere

40 - Configure infra🔗

In this stage, the following activities are executed:

  • Configure the VPC bastion and NFS server(s) for IBM Cloud
  • Configure the OpenShift storage classes, or validate the existing storage classes if an existing OpenShift cluster is used
  • Configure OpenShift logging

50 - Install Cloud Pak🔗

In this stage, the following activities are executed:

  • Create the IBM Container Registry namespace for IBM Cloud
  • Connect to the specified image registry and create ImageContentSourcePolicy
  • Prepare OpenShift cluster for Cloud Pak for Data installation
  • Mirror images to the private registry
  • Install Cloud Pak for Data control plane
  • Configure Foundational Services license service
  • Install specified Cloud Pak for Data cartridges

60 - Configure Cloud Pak🔗

In this stage, the following activities are executed:

  • Add OpenShift signed certificate to Cloud Pak for Data web server when on IBM Cloud
  • Configure LDAP for Cloud Pak for Data
  • Configure SAML authentication for Cloud Pak for Data
  • Configure auditing for Cloud Pak for Data
  • Configure instance for the cartridges (Analytics engine, Db2, Cognos Analytics, Data Virtualization, …)
  • Configure instance authorization using the LDAP group mapping

70 - Deploy Assets🔗

  • Configure Cloud Pak for Data monitors
  • Install Cloud Pak for Data assets

80 - Smoke Tests🔗

In this stage, the following activities are executed:

  • Show the Cloud Pak for Data URL and admin password
\ No newline at end of file diff --git a/30-reference/process/prepare/index.html b/30-reference/process/prepare/index.html new file mode 100644 index 000000000..99236d48c --- /dev/null +++ b/30-reference/process/prepare/index.html @@ -0,0 +1 @@ + Prepare deployment - Cloud Pak Deployer
Skip to content

Prepare the deployer🔗

This stage mainly takes care of checking the configuration and expanding it where necessary so it can be used by subsequent stages. Additionally, the preparation also calls the roles that will generate Terraform or other configuration files which are needed for provisioning and configuration.

Generator🔗

All yaml files in the config directory of the specified CONFIG_DIR are processed and a composite JSON object, all_config, is created which contains all of the configuration.

While processing the objects defined in the config directory files, the defaults directory is also processed to determine if any supplemental "default" variables must be added to the configuration objects. This makes it easy, for example, to ensure VSIs always use the correct Red Hat Enterprise Linux image available on IBM Cloud.

You will find the generator roles under the automation-generators directory. There are cloud-provider dependent roles such as openshift which have a structure dependent on the chosen cloud provider and there are generic roles such as cp4d which are not dependent on the cloud provider.

To find the appropriate role for the object, the generator first checks if the role is found under the specified cloud provider directory. If not found, it will call the role under generic.

Linting🔗

Each of the objects has a syntax-checking module called preprocessor.py. This Python program checks the attributes of the object in question and can also add defaults for properties which are missing. All errors found are collected and displayed at the end of the generator run.

\ No newline at end of file diff --git a/30-reference/process/provision-infra/index.html b/30-reference/process/provision-infra/index.html new file mode 100644 index 000000000..fadb3c986 --- /dev/null +++ b/30-reference/process/provision-infra/index.html @@ -0,0 +1 @@ + Provision infrastructure - Cloud Pak Deployer
Skip to content

Provision infrastructure🔗

This stage will provision the infrastructure that was defined in the input configuration files. Currently, this has only been implemented for IBM Cloud.

IBM Cloud🔗

The IBM Cloud infrastructure provisioning runs Terraform to initially provision the infrastructure components such as VPC, VSIs, security groups, ROKS cluster and others. Also, if changes have been made in the configuration, Terraform will attempt to make the changes to reach the desired end-state.

Based on the chosen action (apply or destroy), Terraform is instructed to provision or change the infrastructure components or to destroy everything.

The Terraform state file (tfstate) is maintained in the vault and is critical to enable dynamic updates to the infrastructure. If the state file is lost or corrupted, updates to the infrastructure will have to be done manually. The Ansible tasks have been built in a way that the Terraform state file is always persisted into the vault, even if the apply or destroy process has failed.

There are 3 main steps:

Terraform init🔗

This step initializes the Terraform provider (ibm) with the correct version. If needed, the Terraform modules for the provider are downloaded or updated.

Terraform plan🔗

Applying changes to the infrastructure using Terraform based on the input configuration files may cause critical components to be replaced (destroyed and recreated). The plan step checks what will be changed. If infrastructure components are destroyed and the --confirm-destroy parameter has not been specified for the deployer, the process is aborted.
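
For example, to allow the deployer to proceed when the plan shows that infrastructure components will be destroyed; the flag is as documented above, other arguments depend on your environment:

./cp-deploy.sh env apply --confirm-destroy
+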

Terraform apply or Terraform destroy🔗

This is the execution of the plan and will provision new infrastructure (apply) or destroy everything (destroy).

While the Terraform apply or destroy process is running, a .tfstate file is updated on disk. When the command completes, the deployer writes this as a secret to the vault so it can be used next time to update (or destroy) the infrastructure components.

\ No newline at end of file diff --git a/30-reference/process/smoke-tests/index.html b/30-reference/process/smoke-tests/index.html new file mode 100644 index 000000000..68d8d3d14 --- /dev/null +++ b/30-reference/process/smoke-tests/index.html @@ -0,0 +1,2 @@ + Smoke tests - Cloud Pak Deployer
Skip to content

Smoke tests🔗

This is the final stage before returning control to the process that started the deployer. Here, tests are run to check that the Cloud Pak and its cartridges have been deployed correctly and that everything is running as expected.

The method for smoke tests should be dynamic, for example by referencing a Git repository and context (directory within the repository); the code within that directory then deploys the asset(s).

Cloud Pak for Data smoke tests🔗

Show the Cloud Pak for Data URL and admin password🔗

This "smoke test" finds the route of the Cloud Pak for Data instance(s) and retrieves the admin password from the vault which is then displayed.

Example:

['CP4D URL: https://cpd-cpd.fke09-10-a939e0e6a37f1ce85dbfddbb7ab97418-0000.eu-gb.containers.appdomain.cloud', 'CP4D admin password: ITnotgXcMTcGliiPvVLwApmsV']
+

With this information you can go to the Cloud Pak for Data URL and login using the admin user.

\ No newline at end of file diff --git a/30-reference/process/validate/index.html b/30-reference/process/validate/index.html new file mode 100644 index 000000000..426f037ff --- /dev/null +++ b/30-reference/process/validate/index.html @@ -0,0 +1 @@ + Validate - Cloud Pak Deployer
Skip to content

10 - Validation - Validate the configuration🔗

In this stage, the following activities are executed:

  • Is the specified cloud platform in the inventory file supported?
  • Are the mandatory variables defined?
  • Can the deployer connect to the specified vault?
\ No newline at end of file diff --git a/30-reference/timings/index.html b/30-reference/timings/index.html new file mode 100644 index 000000000..de81395af --- /dev/null +++ b/30-reference/timings/index.html @@ -0,0 +1 @@ + Timings - Cloud Pak Deployer
Skip to content

Timings for the deployment🔗

Duration of the overall deployment process🔗

Phase Step Time in minutes Comments
10 - Validation 3
20 - Prepare Generators 3
30 - Provision infrastructure Create VPC 1
Create VSI without storage 5
Create VSI with storage 10
Create VPC ROKS cluster 45
Install ROKS OCS add-on and create storage classes 45
40 - Configure infrastructure Install NFS on VSIs 10
Create NFS storage classes 5
Create private container registry namespace 5
50 - Install Cloud Pak Prepare OpenShift for Cloud Pak for Data install 60 During this step, the compute nodes may be replaced and also the Kubernetes services may be restarted.
Mirror Cloud Pak for Data images to private registry (only done when using private registry) 30-600 If the entitled registry is used, this step will be skipped. When using a private registry, if images have already been mirrored, the duration will be much shorter, approximately 10 minutes.
Install Cloud Pak for Data control plane 20
Create Cloud Pak for Data subscriptions for cartridges 15
Install cartridges 20-300 The amount of time really depends on the cartridges being installed. In the table below you will find an estimate of the installation time for each cartridge. Cartridges will be installed in parallel through the operators.
60 - Configure Cloud Pak Configure Cloud Pak for Data LDAP 5
Provision instances for cartridges 30-60 For cartridges that have instances defined. Creation of the instances will run in parallel where possible.
Configure cartridge and instance permissions based on LDAP config 10
70 - Deploy assets No activities yet 0
80 - Smoke tests Show Cloud Pak for Data cluster details 1

Cloud Pak for Data cartridge deployment🔗

Cartridge Full name Installation time Instance provisioning time Dependencies
cpd_platform Cloud Pak for Data control plane 20 N/A
ccs Common Core Services 75 N/A
db2aas Db2 as a Service 30 N/A
iis Information Server 60 N/A ccs, db2aas
ca Cognos Analytics 20 45 ccs
planning-analytics Planning Analytics 15 N/A
watson_assistant Watson Assistant 70 N/A
watson-discovery Watson Discovery 100 N/A
watson-ks Watson Knowledge Studio 20 N/A
watson-speech Watson Speech to Text and Text to Speech 20 N/A
wkc Watson Knowledge Catalog 90 N/A ccs, db2aas, iis
wml Watson Machine Learning 45 N/A ccs
ws Watson Studio 30 N/A ccs

Examples:

  • Cloud Pak for Data installation with just Cognos Analytics will take 20 (control plane) + 75 (ccs) + 20 (ca) + 45 (ca instance) = ~160 minutes
  • Cloud Pak for Data installation with Cognos Analytics and Watson Studio will take 20 (control plane) + 75 (ccs) + 45 (ws+ca) + 45 (ca instance) = ~185 minutes
  • Cloud Pak for Data installation with just Watson Knowledge Catalog will take 20 (control plane) + 75 (ccs) + 30 (db2aas) + 60 (iis) + 90 (wkc) = ~275 minutes
  • Cloud Pak for Data installation with Watson Knowledge Catalog and Watson Studio will take the same time because WS will finish 30 minutes after installing CCS, while WKC will take a lot longer to complete
\ No newline at end of file diff --git a/40-troubleshooting/cp4d-uninstall/index.html b/40-troubleshooting/cp4d-uninstall/index.html new file mode 100644 index 000000000..7a315ba6d --- /dev/null +++ b/40-troubleshooting/cp4d-uninstall/index.html @@ -0,0 +1 @@ + Cloud Pak for Data uninstall - Cloud Pak Deployer
Skip to content

Uninstall Cloud Pak for Data and Foundational Services🔗

For convenience, the Cloud Pak Deployer includes a script that removes the Cloud Pak for Data instance from the OpenShift cluster, then Cloud Pak Foundational Services and finally the catalog sources and CRDs.

Steps:

  • Make sure you are connected to the OpenShift cluster
  • Run script ./scripts/cp4d/cp4d-delete-instance.sh <CP4D_project>

You will have to confirm that you want to delete the instance and all other artifacts.

Warning

Please be very careful with this command. Ensure you are connected to the correct OpenShift cluster and that no other Cloud Paks use the operator namespace. The action cannot be undone.

\ No newline at end of file diff --git a/40-troubleshooting/ibm-cloud-access-nfs-server/index.html b/40-troubleshooting/ibm-cloud-access-nfs-server/index.html new file mode 100644 index 000000000..b47fa7121 --- /dev/null +++ b/40-troubleshooting/ibm-cloud-access-nfs-server/index.html @@ -0,0 +1,69 @@ + Access NFS server provisioned on IBM Cloud - Cloud Pak Deployer
Skip to content

Access NFS server provisioned on IBM Cloud🔗

When choosing the "simple" sample configuration for ROKS VPC on IBM Cloud, the deployer also provisions a Virtual Server Instance and installs a standard NFS server on it. In some cases you may want to get access to the NFS server for troubleshooting.

For security reasons, the NFS server can only be reached via a bastion server that is connected to the internet; the bastion serves as a jump host, avoiding exposure of NFS volumes to the outside world and providing an extra layer of protection. Additionally, password login is disabled on both the bastion and NFS servers and one must use the private SSH key to connect.

Start the command line within the container🔗

Getting SSH access to the NFS server is easiest from within the deployer container as it has all tools installed to extract the IP addresses from the Terraform state file.

Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

Start the deployer command line.

./cp-deploy.sh env command
+

-------------------------------------------------------------------------------
+Entering Cloud Pak Deployer command line in a container.
+Use the "exit" command to leave the container and return to the hosting server.
+-------------------------------------------------------------------------------
+Installing OpenShift client
+Current OpenShift context: pluto-01
+

Obtain private SSH key🔗

Access to both the bastion and NFS servers is typically protected by the same SSH key, which is stored in the vault. To list all vault secrets, run the command below.

cd /cloud-pak-deployer
+./cp-deploy.sh vault list
+
./cp-deploy.sh vault list
+
+Starting Automation script...
+
+PLAY [Secrets] *****************************************************************
+Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-terraform-tfstate
+- cp4d_admin_zen_40_fke34d
+- sample-all-config
+- pluto-01-provision-ssh-key
+- pluto-01-provision-ssh-pub-key
+
+PLAY RECAP *********************************************************************
+localhost                  : ok=11   changed=0    unreachable=0    failed=0    skipped=21   rescued=0    ignored=0
+

Then, retrieve the private key (in the above example pluto-01-provision-ssh-key) to an output file in your ~/.ssh directory and make sure it has the correct private key format (a new line at the end) and permissions (600).

SSH_FILE=~/.ssh/pluto-01-rsa
+mkdir -p ~/.ssh
+chmod 700 ~/.ssh
+./cp-deploy.sh vault get -vs pluto-01-provision-ssh-key \
+    -vsf $SSH_FILE
+echo -e "\n" >> $SSH_FILE
+chmod 600 $SSH_FILE
+
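
To verify that the retrieved key is valid, you can derive its public key from it; ssh-keygen reports an error if the format is incorrect:

ssh-keygen -y -f $SSH_FILE
+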

Find the IP addresses🔗

To connect to the NFS server, you need the public IP address of the bastion server and the private IP address of the NFS server. These can of course be retrieved from the IBM Cloud resource list (https://cloud.ibm.com/resources), but they are also kept in the Terraform "tfstate" file.

./cp-deploy.sh vault get -vs sample-terraform-tfstate \
+    -vsf /tmp/sample-terraform-tfstate
+

The commands below do not produce the prettiest output, but you should be able to extract the IP addresses from it.

For the bastion node public (floating) IP address:

cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E "ibm_is_float"
+

  "type": "ibm_is_floating_ip",
+  "name": "pluto_01_bastion",
+  "provider": "provider[\"registry.terraform.io/ibm-cloud/ibm\"]",
+  "instances": [
+    {
+      "schema_version": 0,
+      "attributes": {
+        "address": "149.81.215.172",
+...
+        "name": "pluto-01-bastion",
+

For the NFS server:

cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E "ibm_is_instance|primary_network_interface"
+

...
+--
+  "type": "ibm_is_instance",
+  "name": "pluto_01_nfs",
+  "provider": "provider[\"registry.terraform.io/ibm-cloud/ibm\"]",
+  "instances": [
+...
+--
+        "primary_network_interface": [
+...
+            "name": "pluto-01-nfs-nic",
+            "port_speed": 0,
+            "primary_ipv4_address": "10.227.0.138",
+

In the above examples, the IP addresses are:

  • Bastion public IP address: 149.81.215.172
  • NFS server private IP address: 10.227.0.138
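
Alternatively, jq can select these attributes directly. A sketch, assuming the state structure shown in the output above:

# Bastion public (floating) IP address
+jq -r '.resources[] | select(.type=="ibm_is_floating_ip") | .instances[].attributes.address' /tmp/sample-terraform-tfstate
+# NFS server private IP address
+jq -r '.resources[] | select(.type=="ibm_is_instance" and (.name | test("nfs"))) | .instances[].attributes.primary_network_interface[].primary_ipv4_address' /tmp/sample-terraform-tfstate
+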

SSH to the NFS server🔗

Finally, to get command line access to the NFS server:

BASTION_IP=149.81.215.172
+NFS_IP=10.227.0.138
+ssh -i $SSH_FILE \
+  -o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
+  -i $SSH_FILE -W %h:%p -q $BASTION_IP" \
+  root@$NFS_IP
+

Stopping the session🔗

Once you've finished exploring the NFS server, you can exit from it:

exit
+

Finally, exit from the deployer container which is then terminated.

exit
+

\ No newline at end of file diff --git a/404.html b/404.html new file mode 100644 index 000000000..509da2097 --- /dev/null +++ b/404.html @@ -0,0 +1 @@ + Cloud Pak Deployer

404 - Not found

\ No newline at end of file diff --git a/50-advanced/advanced-configuration/index.html b/50-advanced/advanced-configuration/index.html new file mode 100644 index 000000000..8e3cdd0e8 --- /dev/null +++ b/50-advanced/advanced-configuration/index.html @@ -0,0 +1,59 @@ + Advanced configuration - Cloud Pak Deployer

Cloud Pak Deployer Advanced Configuration🔗

The Cloud Pak Deployer includes several samples which you can use to build your own configuration. You can find sample configuration yaml files in the sub-directories of the sample-configurations directory of the repository. Descriptions and topologies are also included in the sub-directories.

Warning

Do not make changes to the sample configurations in the cloud-pak-deployer directory; rather, copy them to your own home directory or somewhere else and make your changes there. If you store your own configuration under the repository's clone, you may not be able to update (pull) the repository with changes applied on GitHub, or you may accidentally overwrite your own configuration.

Warning

The deployer expects to manage all objects referenced in the configuration files, including the referenced OpenShift cluster and Cloud Pak installation. If you have already pre-provisioned the OpenShift cluster, choose a configuration with existing-ocp cloud platform. If the Cloud Pak has already been installed, unexpected and undesired activities may happen. The deployer has not been designed to alter a pre-provisioned OpenShift cluster or existing Cloud Pak installation.

Configuration steps - static sample configuration🔗

  1. Copy the static sample configuration directory to your own directory:
    mkdir -p $HOME/cpd-config/config
    +cp -r ./sample-configurations/roks-ocs-cp4d/config/* $HOME/cpd-config/config/
    +cd $HOME/cpd-config/config
    +
  2. Edit the "cp4d-....yaml" file and select the cartridges to be installed by changing the state to installed. Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True.
    nano cp4d-450.yaml
    +

The configuration typically works without any changes and will create all referenced objects, including the Virtual Private Cloud, subnets, SSH keys, ROKS cluster and OCS storage nodes. There is typically no need to change address prefixes or subnets; the IP addresses used by the provisioned components are private to the VPC and are not externally exposed.

Configuration steps - dynamically choose OpenShift and Cloud Pak🔗

  1. Copy the sample configuration directory to your own directory:
    mkdir -p $HOME/cpd-config/config
    +
  2. Copy the relevant OpenShift configuration file from the samples-configuration directory to the config directory, for example:
    cp ./sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/
    +
  3. Copy the relevant "cp4d-…" file from the samples-configuration directory to the config directory, for example:

    cp ./sample-configurations/sample-dynamic/config-samples/cp4d-463.yaml $HOME/cpd-config/config/
    +

  4. Edit the "$HOME/cpd-config/config/cp4d-....yaml" file and select the cartridges to be installed by changing the state to installed. Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True.

    nano $HOME/cpd-config/config/cp4d-463.yaml
    +

For more advanced configuration topics such as using a private registry, setting up transit gateways between VPCs, etc., go to the Advanced configuration section.

Directory structure🔗

Every configuration has a fixed directory structure, consisting of mandatory and optional subdirectories. Directory structure

Mandatory subdirectories:

  • config: Keeps one or more yaml files with your OpenShift and Cloud Pak configuration

Additionally, there are 3 optional subdirectories:

  • defaults: Directory that keeps the defaults which will be merged with your configuration
  • inventory: Keep global settings for the configuration such as environment name or other variables used in the configs
  • assets: Keeps directories of assets which must be deployed onto the Cloud Pak
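
For example, a complete configuration directory with all optional subdirectories could look like this (file names taken from the samples discussed below):

$HOME/cpd-config
+├── config
+│   ├── cp4d-463.yaml
+│   └── ocp-ibm-cloud-roks-ocs.yaml
+├── defaults
+├── inventory
+└── assets
+    └── customer-churn-demo
+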

config directory🔗

You can choose to keep only a single file per subdirectory or, for more complex configurations, you can create multiple yaml files. You can find a full list of all supported object types here: Configuration objects. The generator automatically merges all .yaml files in the config and defaults directories; files with different extensions are ignored. In the sample configurations, we split the configuration into OpenShift (ocp-...) and Cloud Pak (cp4d-...) files.

For example, your config directory could hold the following files:

cp4d-463.yaml
+ocp-ibm-cloud-roks-ocs.yaml
+

This will provision a ROKS cluster on IBM Cloud with OpenShift Data Foundation (fka OCS) and Cloud Pak for Data 4.6.3.

defaults directory (optional)🔗

Holds the defaults for all object types. If a certain object property has not been specified in the config directory, it will be retrieved from the defaults directory using the flavour specified in the configured object. If no flavour has been selected, the default flavour will be chosen.

You should not need this subdirectory in most circumstances.

assets directory (optional)🔗

Optional directory holding the assets you wish to deploy for the Cloud Pak. More information about Cloud Pak for Data assets which can be deployed can be found in object definition cp4d_asset. The directory can be named differently as well, for example cp4d-assets or customer-churn-demo.

inventory directory (optional)🔗

The Cloud Pak Deployer pipeline has been built using Ansible and it can be configured using "inventory" files. Inventory files allow you to specify global variables used throughout Ansible playbooks. In the current version of the Cloud Pak Deployer, the inventory directory has become fully optional as the global_config and vault objects have taken over its role. However, if there are certain global variables such as env_id you want to pass via an inventory file, you can also do this.

Vault secrets🔗

User passwords, certificates and other "secret" information is kept in the vault, which can be either a flat file (not encrypted), HashiCorp Vault or the IBM Cloud Secrets Manager service. Some of the deployment configurations require that the vault is pre-populated with secrets which are needed during the deployment. For example, a vSphere deployment needs the vSphere user and password to authenticate to vSphere, and Cloud Pak for Data SAML configuration requires the IdP certificate.

All samples default to the File Vault, meaning that the vault will be kept in the vault directory under the status directory you specify when you run the deployer. Detailed descriptions of the vault settings can be found in the sample inventory file and also here: vault settings.

Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

Set vSphere user secret:

./cp-deploy.sh vault set \
+    --vault-secret vsphere-user \
+    --vault-secret-value super_user@vsphere.local
+

Or, if you want to create the secret from an input file:

./cp-deploy.sh vault set \
+    --vault-secret kubeconfig \
+    --vault-secret-file ~/.kube/config
+

Using a GitHub repository for the configuration🔗

If the configuration is kept in a GitHub repository, you can set environment variables to have the deployer pull the GitHub repository to the current server before starting the process.

Set environment variables.

export CPD_CONFIG_GIT_REPO="https://github.com/IBM/cloud-pak-deployer-config.git"
+export CPD_CONFIG_GIT_REF="main"
+export CPD_CONFIG_GIT_CONTEXT=""
+

  • CPD_CONFIG_GIT_REPO: The clone URL of the GitHub repository that holds the configuration.
  • CPD_CONFIG_GIT_REF: The branch, tag or commit ID to be cloned. If not specified, the repository's default branch will be cloned.
  • CPD_CONFIG_GIT_CONTEXT: The directory within the GitHub repository that holds the configuration. This directory must contain the config directory under which the YAML files are kept.

Info

When specifying a GitHub repository, the contents will be copied under $STATUS_DIR/cpd-config and this directory is then set as the configuration directory.
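
Putting it together, a run that pulls the configuration from a Git repository could look like this (using the example repository URL from above):

export CPD_CONFIG_GIT_REPO="https://github.com/IBM/cloud-pak-deployer-config.git"
+export CPD_CONFIG_GIT_REF="main"
+export STATUS_DIR=$HOME/cpd-status
+./cp-deploy.sh env apply
+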

Using dynamic variables (extra variables)🔗

In some situations you may want to use a single configuration for deployment in different environments, such as development, acceptance test and production. The Cloud Pak Deployer uses the Jinja2 templating engine which is included in Ansible to pre-process the configuration. This allows you to dynamically adjust the configuration based on extra variables you specify at the command line.

Example:

./cp-deploy.sh env apply \
+  -e ibm_cloud_region=eu-de \
+  -e env_id=jupiter-03 [--accept-all-licenses]
+

This passes the env_id and ibm_cloud_region variables to the Cloud Pak Deployer, which can then populate variables in the configuration. In the sample configurations, the env_id is used to specify the name of the VPC, ROKS cluster and other objects, and overrides the value specified in the global_config definition. The ibm_cloud_region overrides the region specified in the inventory file.

...
+vpc:
+- name: "{{ env_id }}"
+  allow_inbound: ['ssh']
+
+address_prefix:
+### Prefixes for the client environment
+- name: "{{ env_id }}-zone-1"
+  vpc: "{{ env_id }}"
+  zone: {{ ibm_cloud_region }}-1
+  cidr: 10.231.0.0/26
+...
+

When running with the above cp-deploy.sh command, the snippet would be generated as:

...
+vpc:
+- name: "jupiter-03"
+  allow_inbound: ['ssh']
+
+address_prefix:
+### Prefixes for the client environment
+- name: "jupiter-03-zone-1"
+  vpc: "jupiter-03"
+  zone: eu-de-1
+  cidr: 10.231.0.0/26
+...
+

The ibm_cloud_region variable is specified in the inventory file. This is another method of specifying variables for dynamic configuration.

You can even include more complex constructs for dynamic configuration, with if statements, for loops and others.

An example where the OpenShift OCS storage classes would only be generated for a specific environment (jupiter-prod) would be:

  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+    nfs_server_name: "{{ env_id }}-nfs"
+{% if env_id == 'jupiter-prod' %}
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 500
+{% endif %}
+
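
A for loop works in a similar fashion. For instance, a hypothetical sketch that generates an address prefix per zone instead of repeating the block three times:

address_prefix:
+{% for zone_num in [1, 2, 3] %}
+- name: "{{ env_id }}-zone-{{ zone_num }}"
+  vpc: "{{ env_id }}"
+  zone: "{{ ibm_cloud_region }}-{{ zone_num }}"
+  cidr: 10.231.{{ zone_num - 1 }}.0/26
+{% endfor %}
+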

For a more comprehensive overview of Jinja2 templating, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html

\ No newline at end of file diff --git a/50-advanced/alternative-repo-reg/index.html b/50-advanced/alternative-repo-reg/index.html new file mode 100644 index 000000000..c82a522f3 --- /dev/null +++ b/50-advanced/alternative-repo-reg/index.html @@ -0,0 +1,32 @@ + Using alternative CASE repositories and registries - Cloud Pak Deployer

Using alternative repositories and registries🔗

Warning

In most scenarios you will not need this type of configuration.

Alternative repositories and registries are mainly geared towards pre-GA use of the Cloud Paks, where CASE files are downloaded from internal repositories and staging container image registries must be used because the images have not been released yet.

Building the Cloud Pak Deployer image🔗

By default the Cloud Pak Deployer image is built on top of the olm-utils images in icr.io. If you're working with a pre-release of the Cloud Pak OLM utils image, you can override the setting as follows:

export CPD_OLM_UTILS_V2_IMAGE=cp.staging.acme.com:4.8.0
+

Or, for Cloud Pak for Data 5.0:

export CPD_OLM_UTILS_V3_IMAGE=cp.staging.acme.com:5.0.0
+

Subsequently, run the install command:

./cp-deploy.sh build
+

Configuring the alternative repositories and registries🔗

When specifying a cp_alt_repo object in a YAML file, this is used for all Cloud Paks. The object triggers the following steps:

  • The following files are created in the /tmp/work directory in the container: play_env.sh, resolvers.yaml and resolvers_auth.
  • When downloading CASE files using the ibm-pak plug-in, play_env.sh sets the locations of the resolver and authorization files.
  • The locations of the CASE files for the Cloud Pak, Foundational Services and Open Content are set in an environment variable.
  • Registry mirrors are configured using an ImageContentSourcePolicy resource in the OpenShift cluster.
  • Registry credentials are added to the OpenShift cluster's global pull secret.

The cp_alt_repo is configured like this:

cp_alt_repo:
+  repo:
+    token_secret: github-internal-repo
+    cp_path: https://raw.internal-repo.acme.com/cpd-case-repo/4.8.0/promoted/case-repo-promoted
+    fs_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case
+    opencontent_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case
+  registry_pull_secrets:
+  - registry: cp.staging.acme.com
+    pull_secret: cp-staging
+  - registry: fs.staging.acme.com
+    pull_secret: cp-fs-staging
+  registry_mirrors:
+  - source: cp.icr.com/cp
+    mirrors:
+    - cp.staging.acme.com/cp
+  - source: cp.icr.io/cp/cpd
+    mirrors:
+    - cp.staging.acme.com/cp/cpd
+  - source: icr.io/cpopen
+    mirrors:
+    - fs.staging.acme.com/cp
+  - source: icr.io/cpopen/cpfs
+    mirrors:
+    - fs.staging.acme.com/cp
+

Property explanation🔗

Property Description Mandatory Allowed values
repo Repositories to be accessed and the Git token Yes
repo.token_secret Secret in the vault that holds the Git login token Yes
repo.cp_path Repository path where to find the Cloud Pak CASE files Yes
repo.fs_path Repository path where to find the Foundational Services CASE files Yes
repo.opencontent_path Repository path where to find the Open Content CASE files Yes
registry_pull_secrets List of registries and their pull secrets, used to configure the global pull secret Yes
.registry Registry host name Yes
.pull_secret Vault secret that holds the pull secret (user:password) for the registry Yes
registry_mirrors List of registries and their mirrors, used to configure the ImageContentSourcePolicy Yes
.source Registry and path referenced by the Cloud Pak/FS pod Yes
.mirrors List of alternate registry locations for this source Yes

Configuring the secrets🔗

Before running the deployer with a cp_alt_repo object, you need to ensure the referenced secrets are present in the vault.

For the GitHub token, you need to set the token (typically a deploy key) used to log in to GitHub or GitHub Enterprise.

./cp-deploy.sh vault set -vs github-internal-repo=abc123def456
+

For the registry credentials, specify the user and password separated by a colon (:):

./cp-deploy.sh vault set -vs cp-staging="cp-staging-user:cp-staging-password"
+

You can also set these tokens on the cp-deploy.sh env apply command line.

./cp-deploy.sh env apply -f -vs github-internal-repo=abc123def456 -vs cp-staging="cp-staging-user:cp-staging-password"
+

Running the deployer🔗

To run the deployer you can now use the standard process:

./cp-deploy.sh env apply -v
+

\ No newline at end of file diff --git a/50-advanced/apply-node-settings-non-mco/index.html b/50-advanced/apply-node-settings-non-mco/index.html new file mode 100644 index 000000000..d5ccf2635 --- /dev/null +++ b/50-advanced/apply-node-settings-non-mco/index.html @@ -0,0 +1,44 @@ + Apply OpenShift node settings when machine config operator does not exist - Cloud Pak Deployer

Apply OpenShift node settings when machine config operator does not exist🔗

Cloud Pak Deployer automatically applies cluster and node settings before installing the Cloud Pak(s). Sometimes you may also want to automate applying these node settings without installing the Cloud Pak. For convenience, the repository includes a script that makes the same changes normally done through automation: scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh.

To apply the node settings, do the following:

  • If images are pulled from the entitled registry, set the CP_ENTITLEMENT_KEY environment variable
  • If images are to be pulled from a private registry, set both the CPD_PRIVATE_REGISTRY and CPD_PRIVATE_REGISTRY_CREDS environment variables
  • Log in to the OpenShift cluster with cluster-admin permissions
  • Run the scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh script.

The CPD_PRIVATE_REGISTRY value must reference the registry host name and optionally the port and namespace that must prefix the images. For example, if the images are kept in https://de.icr.io/cp4d-470, you must specify de.icr.io/cp4d-470 for the CPD_PRIVATE_REGISTRY environment variable. If images are kept in https://cust-reg:5000, you must specify cust-reg:5000 for the CPD_PRIVATE_REGISTRY environment variable.

For the CPD_PRIVATE_REGISTRY_CREDS value, specify both the user and password in a single string, separated by a colon (:). For example: admin:secret_passw0rd.

Warning

When setting the private registry and its credentials, the script automatically creates the configuration that will set up the ImageContentSourcePolicy and global pull secret alternatives. This change cannot be undone using the script, and it is not possible to set the private registry and later revert to the entitled registry. Changing the private registry's credentials can be done by re-running the script with the new credentials.

Example🔗

export CPD_PRIVATE_REGISTRY=de.icr.io/cp4d-470
+export CPD_PRIVATE_REGISTRY_CREDS="iamapikey:U97KLPYF663AE4XAQL0"
+./scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh
+
Creating ConfigMaps and secret
+configmap "cloud-pak-node-fix-scripts" deleted
+configmap/cloud-pak-node-fix-scripts created
+configmap "cloud-pak-node-fix-config" deleted
+configmap/cloud-pak-node-fix-config created
+secret "cloud-pak-node-fix-secrets" deleted
+secret/cloud-pak-node-fix-secrets created
+Setting global pull secret
+/tmp/.dockerconfigjson
+info: pull-secret was not changed
+secret/cloud-pak-node-fix-secrets data updated
+Private registry specified, creating ImageContentSourcePolicy for registry de.icr.io/cp4d-470
+Generating Tuned config
+tuned.tuned.openshift.io/cp4d-ipc unchanged
+Writing fix scripts to config map
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+Creating service account for DaemonSet
+serviceaccount/cloud-pak-crontab-sa unchanged
+clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "cloud-pak-crontab-sa"
+Recreate DaemonSet
+daemonset.apps "cloud-pak-crontab-ds" deleted
+daemonset.apps/cloud-pak-crontab-ds created
+Showing running DaemonSet pods
+NAME                         READY   STATUS              RESTARTS   AGE
+cloud-pak-crontab-ds-b92f9   0/1     Terminating         0          12m
+cloud-pak-crontab-ds-f85lf   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-jlbvm   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-rbj65   1/1     Terminating         0          12m
+cloud-pak-crontab-ds-vckrs   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-x288p   1/1     Terminating         0          12m
+Waiting for 5 seconds for pods to start
+
+Showing running DaemonSet pods
+NAME                         READY   STATUS    RESTARTS   AGE
+cloud-pak-crontab-ds-f85lf   1/1     Running   0          5s
+cloud-pak-crontab-ds-jlbvm   1/1     Running   0          5s
+cloud-pak-crontab-ds-vckrs   1/1     Running   0          5s
+
\ No newline at end of file diff --git a/50-advanced/cp4d-pre-release/index.html new file mode 100644 index 000000000..50e183283 --- /dev/null +++ b/50-advanced/cp4d-pre-release/index.html @@ -0,0 +1 @@ + Installing pre-releases of Cloud Pak for Data - Cloud Pak Deployer
\ No newline at end of file diff --git a/50-advanced/gitops/index.html b/50-advanced/gitops/index.html new file mode 100644 index 000000000..08edb4769 --- /dev/null +++ b/50-advanced/gitops/index.html @@ -0,0 +1,14 @@ + Continuous Adoption using GitOps - Cloud Pak Deployer

GitOps🔗

The process of supporting multiple products, releases and patch levels within a release has great similarity to the git-flow model, which has been described very well by Vincent Driessen in his blog post: https://nvie.com/posts/a-successful-git-branching-model/. This model remains very popular with many software development teams.

Below is a description of how a git-flow could be implemented with the Cloud Pak Deployer. The following steps are covered:

  • Setting up the company's Git and image registry for the Cloud Paks
  • The git-flow change process
  • Feeding Cloud Pak changes into the process
  • Deploying the Cloud Pak changes

Environments, Git and registry🔗

Governed Process with Continuous Adoption.

There are 4 Cloud Pak environments within the company's domain: Dev, UAT, Pre-prod and Prod. Each of these environments has a namespace in the company's registry (or an isolated registry could be created per environment) and the Cloud Pak release installed is represented by manifests in a branch of the Git repository: respectively dev, uat, pp and prd.

Organizing registries by namespace has the advantage that duplication of images can be avoided. Each of the namespaces can have its own set of images that have been approved for running in the associated environment. The image itself is referenced by digest (i.e., checksum) and organized on disk as such. If one tries to copy an image to a different namespace within the same registry, only a new entry is created; the image itself is not duplicated because it already exists.
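
As an illustration, promoting an image between namespaces can be done with a tool such as skopeo. A sketch with hypothetical registry, namespaces and digest:

# Copy the wml image from the dev to the uat namespace; the registry only
+# records a new manifest reference, the image layers are not duplicated.
+skopeo copy --preserve-digests \
+  docker://registry.acme.com/dev/wml@sha256:<digest> \
+  docker://registry.acme.com/uat/wml:4.0.2
+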

The manifests (CASE files) representing the Cloud Pak components are present in each of the branches of the Git repository, or there is a configuration file that references the location of the case file, including the exact version number.

In the Cloud Pak Deployer, we have chosen to reference the CASE versions in the configuration, for example:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.8.3
+  openshift_storage_name: ocs-storage
+  cartridges:
+  - name: cpfs
+  - name: cpd_platform
+  - name: ws
+    state: installed
+  - name: wml
+    size: small
+    state: installed
+

If Cloud Pak for Data has been configured with a private registry in the deployer config, the deployer will mirror images from the IBM entitled registry to the private registry. In the above configuration, no private registry has been specified. The deployer will automatically download and use the CASE files to create the catalog sources.

Change process using git-flow🔗

With the initial status in place, the continuous adoption process may commence, using the principles of git-flow.

Git-flow addresses a couple of needs for continuous adoption:

  • Control and visibility over what software (version) runs in which environment; there is a central truth which describes the state of every environment managed
  • New features (in case of the deployer: new operator versions and custom resources) can be tested without affecting the pending releases or production implementation
  • While preparing for a new release, hot fixes can still be applied to the production environments

git-flow

The Git repository consists of 4 branches: dev, uat, pp and prd. At the start, release 4.0.0 is being implemented and it will go through the stages from dev to prd. When the installation has been tested in development, a pull request (PR) is done to promote to the uat branch. The PR is reviewed, and changes are then merged into the uat branch. After testing in the uat branch, the steps are repeated until the 4.0.0 release is eventually in production.

With each of the implementation and promotion steps, the registry namespaces associated with the particular branch are updated with the images described in the manifests kept in the Git repository. Additionally, the changes are installed in the respective environments. The details of these processes will be outlined later.

New patches are received, committed and installed on the dev branch on a regular basis and when no issues are found, the changes are gathered into a PR for uat. When no issues are found for 2 weeks, another PR is done for the pp branch and eventually for prd. During this promotion flow, new patches are still being received in dev.

While version 4.0.2 is running in production, a critical defect is found for which a hot fix is developed. The hot fix is first committed to the pp branch and tested and then a PR is made to promote it to the prd branch. In the meantime, the dev and uat branches continue with their own release schedule. The hot fix is included in 4.0.4 which will be promoted as part of the 4.0.5 release.

The uat, pp and prd branches can be protected by a branch protection rule so that changes from dev can only be promoted (via a pull request) after an approving review or, when the intention is to promote changes in a fully automated manner, after passing status checks and testing. Read Managing a branch protection rule to put these controls in place in GitHub, or Protected branches for GitLab.

With this flow, there is control over patches, promotion approvals and releases installed in each of the environments. Additional branches could be introduced if additional environments are in play or if different releases are being managed using the git-flow.

Feeding patches and releases into the flow🔗

As discussed above, patches are first "developed" in the dev branch, i.e., changes are fed into the Git repository, images are loaded into the company's registry (dev namespace) and then installed into the Dev environment.

The process of receiving and installing the patches is common for all Cloud Paks: the cloudctl case tool downloads the CASE file associated with the operator version and the same CASE file can be used to upload images into the company's registry. Then a Catalog Source is created which makes the images available to the operator subscriptions, which in turn manage the various custom resources in the Cloud Pak instance. For example, the ws operator manages the Ws custom resource and this CR ensures that OpenShift deployments, secrets, Config Maps, Stateful Sets, and so forth are managed within the Cloud Pak for Data instance project.

In the git-flow example, Watson Studio release 4.0.2 is installed by updating the Catalog Source. Detailed installation steps for Cloud Pak for Data can be found in the IBM documentation.

Deploying the Cloud Pak changes🔗

Now that the hard work of managing changes to the Git repository branches and image registry namespaces has been done, we can look at the (automatic) deployment of the changes.

In a continuous adoption workflow, the implementation of new releases and patches is automated by means of a pipeline, which allows for deployment and testing in a predictable and controlled manner. A pipeline executes a series of steps to inspect the change and then runs the command to install it in the respective environment. Moreover, after installation, tests can be executed automatically. The most popular tools for pipelines are ArgoCD, GitLab pipelines and Tekton (serverless).

To link the execution of a pipeline with the git-flow pull request, one can use ArgoCD or a GitHub/GitLab webhook. As soon as a PR is accepted and changes are applied to the Git branch, the pipeline is triggered and will run the Cloud Pak Deployer to automatically apply the changes according to the latest version.
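
As a sketch, a GitHub Actions workflow (all names hypothetical; it assumes a self-hosted runner on which the deployer image has already been built) could apply the configuration whenever the prd branch is updated:

name: apply-prd
+on:
+  push:
+    branches: [prd]
+jobs:
+  deploy:
+    runs-on: self-hosted
+    steps:
+    - uses: actions/checkout@v4
+    - name: Run Cloud Pak Deployer
+      run: |
+        export CONFIG_DIR=$GITHUB_WORKSPACE
+        export STATUS_DIR=$HOME/cpd-status
+        $HOME/cloud-pak-deployer/cp-deploy.sh env apply --accept-all-licenses
+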

\ No newline at end of file diff --git a/50-advanced/images/air-gapped-overview.drawio b/50-advanced/images/air-gapped-overview.drawio new file mode 100644 index 000000000..1d645e3da --- /dev/null +++ b/50-advanced/images/air-gapped-overview.drawio @@ -0,0 +1 @@ +7Vtbc9o4FP41zOw+xGN8Ax4DSbqdSbfMJrNtnjoCC6ONsVxZ3PrrV7IlXySR2ARIO7vpTEHysWSf85276LmT1e4DAenyEw5h3HPscNdzb3qO4/QHLvvgM/tiZjhyiomIoLCY6lcTD+gHFJO2mF2jEGYNQopxTFHanJzjJIFz2pgDhOBtk2yB4+auKYigNvEwB7E++wWFdCnewhlU839AFC3lzv1gVFxZAUks3iRbghBva1Pubc+dEIxp8W21m8CYM0/yZYF/ROD+yy143H/6SJ+++sOn1VWx2F2XW8QrbEC8Fi/1J6RbTJ5REsm3IDChp93V0XadErQBFHJhxXgdss/fQpQJwcHwd8Emupe8ZxxL+dfVLuLgstBsZc0wg9Y4JY/7/BKBEcIJm1nghD6IW2023kBCERPjdYwidv2G4pTNAjGK4YK97jhLwZwx4T4f3bhONfXIyW88vjCK4wmOMWHjBCds/THB6ySEodhpu0QUPrDb+NZb9qBsbklXMRv1+YKU4GcoV+g57qjP/5VXJKr45iHIluW6jDEUoAQSMRZPxkZXXn45jkGaoVn5ynCXgkTeTeB8TTK0gX/BrFAqPttS3BIxjIdwV9MDIf4PEK8gJXtGIq66EuRCy52RGG8rnfHk3LKmL768EQg9jcq1K8CxLwJzHfDnavjT8AVDpupiiAld4ggnIL6tZhVJVzT3OMcHl+I/kNK9sFtgTXFT+jksi4vDuoD7ubwQ/Vr7/sS3sXwxutmVYmWDvRwkjDdfc8Kh78uJp/rV6sZ81LhzCgli7C0x1Q0PGV6TOXyJTjCdAhLBFxcUUucSeBFfBMaAMhQ37boBK+LWKUbsTUpcDvwmLn1bgVvxSuKuuolTF1IAHqi4LV5ZW+iaELCvkaWcIHvhgfujxj6u1zC97EuxYqUXJbNaqcrn77OAxNNPD2j7OY2+323xdyCse11VJsJAT8Ezv3nFvBx76CDmRnNG2LeIf2Ow4ehZxzHkxAuCV3wszbxKzm11Voh1u4QJJ8WhYdmM8ZIalfUezFh40VAwadHnDMEc1prhX6EwLHSZG0JQmUshCra4P+75N+21QVoW1TqWQYjYpOHnTVbTtrgjqItbjI5FvSTBi0UGaU81od2gYnx373WrGjGjmXYzLGa+dXJAQVM/PU93QH2TA+qfzQH5GqvGIKM8WlE5VnmZ/uvxhDmwqYc/xqjDtkcju4PJ91pL4P04HGgcvoFpjPfMDmih5DOk82Vd8yG53cDCAOQMk9G5rcZhB2M5EQ3G3ChNcYZy2Ros0b1CMMOUMkt5SJR4TWMW9k3KhMY2mTk1NmYqj+bQYpo0hynNLJ7o5IT1+LXHA7WBMx6eAwfyalMRXcfyNU30DTBxB5Lw5DgZ6P4NJwsUrQkw6uP/YLkUWAY/H1iGGlgeKKDr7CIoqfP814LKXf53TrviKljRHbwJKd7Z8kvdvxtj1ssmmFV+2MgNrTJVPJQdily0yj+falfMuWi3GO/V5FHw8/XcUcTf584dPSWidI9MHV3FyPn9dqnjqUJ2Vw+TLMsSyRpd8hSOCRDRPIl7IWdjyBP0KlGZ8dkV+eWzt5omMIi0z+akHr85nbuyLVuYtTdCT5qSfc9ow86X3rl6qPQ55e4FxLrYpwTvdEmfNl5SvIznjfyc+NTOUQOi0VuqznEDIVhZTmilOSvOV1911PpqoHs/z7OGBv9Xzr7FAxrLRnqF/z0cYKOkygdTQJkIk3zGsc2F9srdOT+9v5M6eW5/5w6VAGvQzuF1LXF69rC5T6B0l1Rj6Dkv0b+5JGpk+shgB2HysFwgvTLZEhfHdO70ymzxGGihP8ZrHbDSleoxf73ndeWYgnvbzotHb+h8CYPe1EVX9Qf1ple/d2SXi3PClJrUI4RRe4WWWDi6GBk4gW6tHUOy4o/ebqrNQNJ7wM0Sf89h3LAngFCCwrzYfzZM627jY7IggIMZkg1LNs+5ud4VnBqCVzXSQUmGwrzXobawFR0EWVpECwu045gep7WGG3s+lGaw1oN7MRQ+HDQp+PacwWA8lkHRGMyfo1xPaySL/E9RAa3dzbUx7/VUnzdoFTEex2jG/kczljLchYAC/gG5rL4xvvDkYf9NCM/KNlHDUBiLHKVBUQzNaDSZdKkilGBqrZu+WnLq65FUYFDN4Fx1hH6LlsplYNUJC2DOQ41vIWImmGLOh7s87k8g/ZbrhgCCanMP2uYDHgfzh6JcWH6XGKwzLq4cW61FjjRguLZr9Q1BdtnGOz069CrTx/EnNnF7MGX/61AO3hVFR1qkEC7AOn+mA/ZIpna/iCmqKdP50KcWNw3nZy5rlfSiUe6lGfvWc7rmjf/LRcF6MK4+i/3bHK/SNffMk/yxMOGYyQdJcdhNP152roD5JKfEPHcQ5E79EgFzW1wHXXGtRcLy9OdrkbB3rvaOoxvUaw0Z71m3t7qd6jriDFlHabctZJy7PuGrZ7lU49e2IB8cSs9OfJZL3UcePjvrWS490avORPCCdt72Nh3mqhUWilI/P9fVO3z4Sz3t9a61/5aQdk5Z63fcYdCQryyIHotzCROlXnuSav9wn4IPm39GP8Bof73+/ueHv70rw7m/6zSN/zMCDFxfEeAbj9+Vy/TPIEEzP/R+zU/lzHpHNaG7HYgOr/nPPKq4i83cIc7GIqnlh0klxSzG8+fHJUrkBUHYJbZv6xX7bev7F3KfnuqNjnWfAxmbyYUG7dznyTCvn9ApD1baCQ4NDWqWtSYxBvywsT25/8g/eEjsBGDF4/BklqU5AtT7VogBh2SlF7Qs6z2M45Gd7dI4/Aydbfnzr4tZRj1pLCSviM/Y521kWKfqKNezPaWPDLaZa81jpFchvPyPzTPCEMGqylI/T9iqxOKassyup4K7Z4DsltdLGwPhteoZ4NA54ogfG1Y/qCugVP0s0b39Fw==7Vxbc5s4FP41ntl9sAcDNvAY20mbNk2yTbKb9CWDjWyrwYgK+dZfvxIIDJJ8DThOpslMbAlxO/rOp6NPR6kZ3cniE3bD8TfkAb+ma96iZvRquq43LYN+sJplUmM7elIxwtBLqpqrijv4G/BKjddOoQeiQkOCkE9gWKwcoCAAA1KoczFG82KzIfKLdw3dEZAq7gauL9f+Bz0y5m+hW6v6zwCOxumdm20nOTJx08b8TaKx66F5rso4rxldjBBJvk0WXeAz46V2qX/7ZzEy8M2w2zH+9abt/37/uq4nF7vY5xT+CjPXn/KXugZkjvALDEbpW2AQkHLvqkt3vcVw5hLAOstHU49+/uXBiHc
c8P7mZiLL1PbUYiH7OlmMGLgasD9p9BGFVifE98v4EAYjiAJaM0QBueOnarQ8A5hA2o1nPhzR4z2CQlrr8pIPhvR1O1HoDqgRruJSz9BXVfesec9kF4a+30U+wrQcoIBev4PRNPCAx+80H0MC7uhp7NZz+qC0bkwmPi012QUJRi8gvUJNN5wm+82OpKhiN/fcaJxdlxqGuDAAmJf5k9FS3YwP+74bRrCfvTJYhG6Qno3BYIojOAPfQZQ4FavdsbtTxFAbgkXOD3j3fwJoAghe0ib8qJGCnHu5YfPyfOUzpsPrxjl/MQ1e6XI/HWXXXgGOfuGYU+Pv0b75jO3+fNADd4Gjj0n3+SuHbB5/Er6AR12dFxEmYzRCgeufr2qFnl61uUIxPlgv/gSELDlvuVOCir0fwzI5aOc7uBn3FySPue9P7DaNFi/1Flm30sIyLQTUNo9xQ7vVSiue8kdXJ8alwpm3AENq3gxT++EhQlM8ABvacVIkLh6BTdczk3bM/hvRhYHvEorhIqsrkMJPvUWQvkeGSqtVRGVLE8CWvBA/K09w4oUEeLd14ULJG0sXioGbvc9OWL758tBvoZ/Dyfzs85k9utbDxUyB5S5n0Fv3hZ08ocNQVNPbPmO1PqbfRuwb7VfWvVPfB6zxEKMJK6c8LDZnZBollp+PQcCaIk9x2Yi+LlF605Xbp+N/wQNSyh1QiDHcScw8gZ6XOBtjKnfFZyEzZ2zgVqfW6inhutH1RfrKogR+k8JArKI16o2O7hS6nrPIochMm6DhMAKvxcrtF7D8cYOWv4wfw1/BmXbx7f73Lrw3orQW7uf6asPtNUS0iz5kmvIQ0VQNEU3R2Q4ZIpSmkgOjjhsRFk+IFluNA83tI7469MgHKMq4QNMcR1OT8saO3t4Db2dhOQiUg7wXQAbjvMsDfD4DiefHhkrjZk2MkNZGWTxO8xkb3aIIxn2qoKAroUEfEUIpcl0XoinxaUDWzaYamorfxKiVujocgAb1oAEISdRgU5C4YT6yrLEQytI7dhX9nx4VYjSdxhqiB7YU8DCstGHp+DD+4ON08GGdHj7Mt8FH3trvCyQX8U+VJGIIKJFHcRVGzBJGmMv6lx+P/j14vFrUQ/NiOv398EMR7tyErCNcX45bbzFaLCUAlcspQn+YptOKG5cNIwkxSlyJMJoB4E4autcIY1NUJwjUjRUrpAGfLkPFNBu2AixZbelwkUM+CQ9HUAUKMgAr3LqE9mIQ1+iaWhxK5YJUE9hFLtive7fO75upApub4G/yyqon+HVxYm7vOMM/YLI1+3n1bAdhv44mD9effHABP39J3/O46FHrRvpmJOwEOTqtP2Oa+YrO4roLyKzCVSQvbdH30eDlfgyDpJo3UmP3CDhUNnTKxuGu5KN8GjnavQlBcDceQllN2dFiBwjzckyVPAUcyk+xTVXPtBs5gMnr6HVdFaloWjzdfYWanoE0jzdDHLLzQnqzdqByziyhirNyAm/T2R3pKRQOlk/aelseTXVF5NVyKkJzS8JRUZSs6dQYWpfyB4ZeLE9Whei29CSXwRC7DMoAz2jcXOG9Leneq2WunJ5Ko7MIegrNFQ3zimy6Mia4oRuFSUw3hAsG606Y0/HpQ8IwAjlpf6P8uj60FSBu6pbV6aSha8cdvIxiV801GcY/ghdIq2jMIWOFevXZg5MRNbQP+/Qv7E/oX88lLvsArMOeqa0CAsnymfdgI5qNClyhnLRlnCJwjeN0u2tmRRsBtbN3msJKQzYHynlnW+Gc7RKmRco3sLfHuccB1V5IcAdsEH72IOVggpgZLuK5WQDIc+wZHAYi6a4l5zVDDmIPRVhftfaIkvdGRV3XREnFkWBhaEajqZgEsTXiarDhyHTZ+UYrzpnHxStFIkl9z4jslRg6kI08MHSn8TOt4aJ08v1OaCjnSpVhT9RpnDdmpKas1MSjNLXedECmbKXyaCFwU5YBLjnP1LN8FB4+UJiKD1b6atCOs6O9xbq6LqCgbckoyHTfPAz0MpaE1G8hrwndEZdMI8nMf5TdHZXdstAiTDEsTdb/VVhplaD+q19BnjCLlKH9NUCTcMpi526MHoQZs8eFIMlyk/PKqprVlpIeZhpWOw67jzGrrQxL0nQ1TfvcNl01KwPTG60lqdcFWhr7PTUSSqKeqMHcJqKknKRWVjg8CdMmWzFtUq0mVcc4spTQA6GPlooo4M/C9PaF6dKAop/a0CTrPtS6QziaYleZQfQHLcdDS9M8NbTYEh6On3t8SLrxnn100MLNQ8u1+52mffP9EsPvztM/fufrsRYQbQEoloCAXROEs7S6dRcqL0FYbVdZ0+lxEQ0y9VsbU2+Mt3uI0o6XDW8sZB7EHKZIJA7i6TCfp6nTjKMxDEMgi9dHyAvO6z7GHrDlbvn65Na61tB4fHkSmcHqNSdZ9Tg7BV5abWnYa0PDntx0KpsSTGFm1BIVt105RxSXLXs3zjnD2F3mmnF32v2BbW3jc7Wsje3pl+QJysW2rCqdhaEvK9any0aZe75620IJbJQF3mahL5sCViskqx1Sg9+SqwpMtSKudZuvTiD2cnbkQeud8aBlFvMMW1a7Eh4UM19T+WptTCi014/CgyqVq+1O2Lwu6Efso9FoJOHcBGKMMIsO1+0f4xvGQLYwmEtpENtSz6HgGscbzhQZEO+Hhtel5rwlDafxfbaR8Wg0rMjyubqU12qUmkFBxSgrzzmvhggShDuPjMbAh7LoYMY/tJ429CBYrSznJZSdlpUNlWa/YetWaZqGQJeq9VyLD5F5TcPWy9icocaGLJaewhCdpklnw+tTbuBds6uaFg7dGr3r8Jt60tbxN10grXwAFuN068ABuC6o+G1xT3/F4ocha7HimLdxdV+SRNA88JHLdj5rjOzogPmexq91qSKHjF+6YRe6Nt3Q+drxrF28qiFcobrxLE3AepeziC05/GZNtWj8Ch57S2VW3I1o6wdKs9K+c5HmSpoe6KIEzOWYtfKNo36usqYHmzrvNNbANa11YWYJxqexN05vTKP+s4dj4FYXzknanSIvS5WdV8Z/y1FuZJJDubMBM3Y5mfPKW8qjtiJ8TFJIa1vTaKREVCl7qmO02+uAKb/kRiMdnIRpKYJ2VRJMGcl3yjeQ08KbH87mYjSq2HZ6VJvLS3Mfzubigns25r6VzRXJxsaHM7qweG03j2d05X+pkpcXP5zNdVG0PiK51J+0y97dA14Y/SmA1/rw622gCOjMj2ZzaT2xOnKhxdX/40xi7tV/NTXO/wc=7VxrV6M6F/41rnXeD+3iWtqPtrWOc3SmWh11vrgQ0halBEN6O7/+TSBQSNKLFmqdNbqWNiFA2Hn2sy/Z9ETvTBbnyA7HV9AF/ommuIsTvXuiaWqrZZF/tGeZ9FhWI+kYIc9lg1YdA+8/wDoV1jv1XBAVBmIIfeyFxU4HBgFwcKHPRgjOi8OG0C/eNbRHQOgYOLYv9t57Lh4nvU3NWvV/A95onN5ZbbSSIxM7HcyeJBrbLpznuvSzE72DIMTJp8miA3wqvFQuN5eoq82v9XMbRI
P779/brRejllys955TTDaBme1P2VP9AHgO0asXjNLHQCDAJd9WFW7bR97MxoAulw+nLvn/j+tFbOmA+z8mKLxMpU9kFtKPk8WIwqvuPU/qz5CAqx2i22V8CIGRBwPSM4QBHrBTFdKeAYQ9spCnvjcix7sYhqTXZi0fDMnztqPQdogULuNWV9dWXbd0eNegF/Z8vwN9iEg7gAG5fhvBaeACl91pPvYwGJDT6K3nZKKkb4wnPmmp9IIYwVeQXuFE01sq/c2OpLiiN3ftaJxdlwgG214AEGuzmZFWzYgP+74dRt5z9shgEdpBejYCzhRF3gzcgChRK9q743pnmCFCBIucKjAAnAM4ARgtyRB2VE9xzhRd11l7vlIbo8X6xjmVMZqs02aqOsquvYIc+cBQJ0egevX2c/YAw85jcHU6Cv3zx36nJuL+Z4gJWGxyZsOny/+MyKcR/dRHcLEU8BdCL8AAnc2ItKJ0OVM9Vvj1WrvmDDV5IBEYGEbLjAf79jPw+zDycIzkrgPoTXMQvuQGPEOM4WQrxrPrwCn2CY46GUcqyXMUdGsGgD2pa249jEVRIVRqum7VzQJaDE2CFqPelMAl6y0dLyJhCYAALrEOrAkRHsMRhdPZqpejhtWYSxgTCl3vF4Dxkpk6e4phEToZpNJG38ZkGYO4R1PkvAEWHn7IWMDDj/QzkXHS6i5yh7rLj1EBttEI4A3iY+OoiDaiAAHfxoSXirZaspjs1D5Vwhx6NL3INBZPIBGcIgew0zhMZPPYCSZSgehNCa+AYDAeelgATCn21XnqX3xXRgtcC50nF9Ws3g+CVhGuyTS8oTiNbcZx4rluDF+BOwrmsKaJNNbrKeRnP6PICLKIa32TPVQ/agCpJAqzpz+p+5AMU1u7a0cGht0NZaMI34bWEKlPkxhKs7U/7zlL7e7fi953Z/mGem+GMvgJa7UUpTkkdZh/1rdfT+hciDiUjo0w8dFHxB2vDNSaLkzlIhgim6IZoJnnVHrz1gaPlfqZUSrtvGrZUZgY1aG3oFBthwB5ZGFimJK7kkgF9FddeSUQlW2tb8HB1iCBVLud+g5t23kdxeqXGzKMfzhkCw4uVTJvEgdB6f+uNyFhQc/3nslf4nGTv66NbfoP0BV4IhFZgD28fGJLUo9mo4L+S73ejCc4/mi1Op1YA8WF3QyRnTXO4l3ThuhsNCQK1yjBMTUv/xtoV1d3Z43h8m7YJc7Aq10TcfZJqHoXFGyHWu0n1yPEiiEVQy/2jgOAn+J4juGAZ9K1jLvGjkA6KUzXypS7KSXBoqYpHC5SAsjhQlf0uipxQ2kAVw04VDFsuWhfkY4zqnQ+WTsheLkpi5s+SEguGNrTeE5r6CgNgL4IE+WUqTr0cV6s3jocKUkNpuhOxpaXSG/q4CkC1bi2chUQHZILxjS1LFvEXAICU35ixeXf4ojKQ+i8v6zu+KDquyHARzINS8SArkpAkCUVS2cfvSGIfoBtPI0EKUevADtjxgYlJUpOaBrJ0trNCrIi0iSIjPAKWRE4HFJeIahzQIijOk0gxwMF68WsWmVg4cIGSzEFrKiysEHVzRLiBmkEKrFUHGUo/zhwEk6pB92J4QMRZfa4ESRJaDHrW1WwWkry1tCtRux5HyJYrQpMQgza0gQwSWNQowQsSXlHpJ2DEI48M2sq9PfYOCjxeaI6VZqIcHKy7VGdcTKLGGmaonEyZWmKqiCiirmBT8SIopg9o10BRvbI3mv1afRM6DlOrlaGDLXFWaIdw2m9Kp9FNQVgdEHow6XEN6wQIEdFFlsdlszPqo5AtO0ei3VQAhGNDBHu0BtNUcKmf8HyaWBRjSMDi2YIcDjA5l8uQ/J+ced36DZa0Kq36JrcWlrcIpW3QSdfOlHPuyxp6dH9A2VMYB7XvvCJNDczGzRAcWJyEEdRr50CIwmLhaNx3BONvTCkSQoJhmLV3pKzXRvzIBof2KsoIqalWK5m+8Tsclk2XQqijYAXlDYrrmJ3PcnXL0mVWakr7GIfBVI6hBBUBCoBiegz9Im+xo+YW9Lc0jXeprRUKxZvLYnRTsmAZriIZZweTmFQ2n7R4XKyza+aki0t4inSlm7t5NdWtk0k2QwV+SRwT2mZIwWIb0eR5xSFVbRIe5mXhLU3mRdJpcjGJONWO7Qt1FT2Yxm28Hyk2+Czq2vslXAhDj/8ZRK5VGf2JGEQnAc+tL+UGVq3k/IhM6RZzf0gksW4xTOqs0vGMfidZInQMqsno41cQRltrirK4lZaUralds2Qpk3LZqEDObl8FqS5m5MrXIfPxZrKbrRBaN9e5oYxDVzPcmu88nXz4sen81rhPZlBuei3BA47PSp9WKnAY0ED5PrwPmjn7ebGkr+qoW3wkORdnF2xzVdYCEpSEraFCTeVjfMyrY3jK8K2WDd6Goa+GCEcr3HO1PMYYsTMbTcKa6lyWK3QVO9Qx/V1bPfHzfCFObKAp83G4X33fgLv4EIxd+a0tObqy3CaZRTdfJP3DT+J0xrqfuPTAqRKOVDyLlqx1DhOM6zNqk08EuuiuPZniOCE4msMxMFhlsnJle3yg4gasdOVcHuV7xFzcspCR8TJqQ/OoJUt+wHSepoIsMsLsXpJul9SSKaVtX2b3wnitl/seaTXHd8T815G/EP6yUDXA6u0Xn77aKc6S12WblOUVkupssCkpjbk7FLYz2FrlU/pNLV6VTs6ppgfOQZznb65lb2t9cgGrn9zizZyKeI9zfdGPSovl7ev/eZdduuD9rvGZ/v4uoTy0nQ+9G9er29frlQXjM3lpTJzDclbqceAwlWA29LVvONIjYJubPEd4xaPyEIy6OPw3Og1Hk/IXBY8La4mWHg9ujx4Ls373z/+7TzPHfy7Bnvfbh6eXiTwfI+DFk59P++eZZ4VP3zlaCnzMQjoUOhKLhuRxxVfZzyANyYCdqM27+18KXU95dyj2DSVBnU7cBd1SMI93Yp3C4/PoRqG6G2osi8fUPk9mo94G5vi35yo2nYkrTIq/XWF9zl8Gxd6u8P3eRIWPX1BtH8LuLYXcO25/ulR/lW6uli/JdvCXX0TROn4+Nxy4b/4KL6De3z4MD4HH3/kO04lkQj/TuRubxwYJViY/ky7ve48POEH5yZYnt9h4Fzu4u78reYqvZqrJCxxBcUSj1CX8U0JWHrs/3q5efh2hh5+z29/jwaNa7sl8QdPHarU5XyfhfSWooOkCvhlEN3ueApAF8ihrTca66yh+JAbhfTht6izTY3cMsveYyvDKZU+geh0SNI7X1zmO7wYdlCZi4Zc/9NkrnLFOM0D4rz2pjpP3fn18iF8ea39PvvxK7qTmMY/TuZCWahyOJlfnD/1bq2fb7/Gy2dtYgxfkTuS8LmkVvBry9zkN72r4xbSXH35apIgW32FrX72fw== \ No newline at end of file diff --git a/50-advanced/images/air-gapped-portable.png b/50-advanced/images/air-gapped-portable.png new file mode 
100644 index 000000000..b809ed17a Binary files /dev/null and b/50-advanced/images/air-gapped-portable.png differ diff --git a/50-advanced/images/directory-structure.drawio b/50-advanced/images/directory-structure.drawio new file mode 100644 index 000000000..1ae684fe8 --- /dev/null +++ b/50-advanced/images/directory-structure.drawio @@ -0,0 +1 @@ +7Z3dk5s2EMD/Gj+eB0l8PsZ3l2SmaebayzTTvnQ4kG1yGFGQ7yN/fYUtsJFkm/MZUAuTh9hrWQjtT6tld+WboOvVy6fMT5e/khDHE2iELxN0M4EQAMNm/xWS163E9eBWsMiikDfaCe6jn5gLDS5dRyHOaw0pITGN0rowIEmCA1qT+VlGnuvN5iSuXzX1F1gS3Ad+LEu/RyFd8ruAzk7+GUeLZXllYHvbT1Z+2ZjfSb70Q/K8J0K3E3SdEUK3r1Yv1zguJq+cl+33Ph74tBpYhhPa5AvBX/iPr9+u5vZv6Vf0w0xurF++XPFenvx4zW+YD5a+ljOQP2IaFLdiTNAsJVFCcXb7xC5azCxgsurGigahny9xyN8s6SouG9GMPOJrEpOMSRKSsN5nsf+A4zuSRzQiCRMHuOicffCEMxoxJXwRGjwQSslqr8GHOFoUH1CSMilZ0zhK2FVKFopB+LxJ1Tkbb1rc2+plUTA7JfN5FOApIyjAKc2nBSObhvMojssRTyCChgNnLpPLM19OIxsVftkTcU18wmSFafbKmpSf2pwKviygyd8/7yADBpct9wBzuMznXC+qrneqZy+49t9AAhxJ6IUEhOokoN5JQCMJjUnwvOvrjx/bIcFx+ybBHEnQggQb9k2C1SsJwlRbRvFPW0LSjKRF94XbegkaTLdOgwlkGiwFDKbXEgz2CIM+MDg9w+CMMGgDg4V6hsEdYdAGBscA/cLgjTBoA4Pb9zZR+ix7NOT+Ko3xVZCa4fTVZ9oU6WD3SuuaLuc1xnOq0MgqCsPiy7MM59FP/2HTUYXW5pas2cS6KXpaU6biTbSxEUJcdAHVWNZpdw4AhW5gW849kKOAXDdRMs/8QStHsW46Vo4cmOPKIUE6ZNWonJ2OVSNHykI899cxM+gDVoyriFt5nepFjlttl8w0Sp4GqhRHscs4XSoFNUgw4CT8UOTsdjOxpxUcLvA9b0syuiQLkvjx7U56eNpyss4CfIyYbTvqZwtMj7TjYBVDOaqEE15WKctw7NPoqZ5qVM08v8JdAdjkYBIJeYLytvfNvwX3UoNCR1X2qdz1LKGj7cRIHTFl+a97zfgCODjgKqtbxjXtWsqSvdj2uKOsmtN3gNcgn9EWeCeBgloBJeoHgTOBQgJQtml1AhRyuwCqQVpEe0uG9AIPCnoUt6FzwZNMYkvgubAL8BpkYfbAC2I/z6NgP3AC6iAW8jufUpwlGwk0UOWllNUusGVgvYbA2loBawtbr4POBNYRt0S3HWAtw6pdx0PG8XEJC+mt7U3P6mBBNEg+9GaJYUOwLa3A/s9ZYiQmxFAX4DVIdGgPnl4WteKj1OO54JliRy35nhJ4nVi8BkkV7cFzhgFeteW2DJ5ldQBeac21BM9sCJ6rFXim+JBsnwueAITtwWk36DnQ7gC9BkXqvaHX9PFFr+dtUY/uuY8vSHx8ebeXVwsyX4IeKNETkGQeLSSG5ESAGOAXwvlSlv9gpkDODCii/srsQEbWSVgVPRzk9A21pa5X0xhS1JaqyGstMWAezqINVUWOU7eylqwiu1MVyRHPKCnKdAjvfNSR5fWtIzk4WBbYDNncAWGngwpz11Zy2r+ls3vq3PyD6Ozzt8fP5Hf7z/HEnSZnKjyFUW3tTIWSBNkxkUjQKY10jGZN3NrWgpeumAhtKY3keZdNIyl1Jvtbfp7j4Xpbbn0n9xoW0be2Q4yn7vrZIcQwCjAUJKhqpsRqmYuRILt0RbH0QNepBSxBPYqSNrPLhdrvObjhLlTbEBYqaFjc2NpC7ffc00hCRYLqdzQ6JUHOz22cKyDxMAyjbSNxT1XESTr1rhQHkTYaklfsQDWkipB0q6F+QyTDPjjoiDgAhUXtFocGcZIRh65wgA0PkraHgxy+KB6KrjZGfJovB2rGxTpSAHoPdMtxjD1FNTwf979UlVCKC2DTUHR7unpbZXlrsehjMeaTsWi9DmeJJRa7aNKby3tssScxJdFSdQ8wLlxZplavJkXcx5g6yZ5etbSm5MRBoR6sKXuW1JNYpHap8wlidK/0O9tlT5M67nexp1dVowXFMO25p1JtyaVp6ViqxF7p5LbLnial3Mfs2en8r14JYK3he0dlo7p+QJOS7GNG6TRAUCuAbNEFgsaZAIkH4gBwbd0AGiMrGkVWgOJnUzqNrEA5svKc/83u+webtunPKJXgGMjzurQV2LKmOg2tQE1+geBdrkM5Zl0tP0BnPjPJlr+lZ6aDQz7o1EhfuHC9mbyhrKI8j5LFBNpxYSoeMvZqsflVpaLNUGvOREXYtiVZFFX1uBhCamBR2Nvd3xjZqnT3l1rQ7b8= \ No newline at end of file diff --git a/50-advanced/images/directory-structure.png b/50-advanced/images/directory-structure.png new file mode 100644 index 000000000..024b17017 Binary files /dev/null and b/50-advanced/images/directory-structure.png differ diff --git a/50-advanced/images/git-flow.png b/50-advanced/images/git-flow.png new file mode 100644 index 000000000..2ead3b759 Binary files /dev/null and b/50-advanced/images/git-flow.png differ diff --git a/50-advanced/images/gitops-pictures.pptx b/50-advanced/images/gitops-pictures.pptx new file mode 100644 index 000000000..0e2912e49 Binary files /dev/null and b/50-advanced/images/gitops-pictures.pptx differ diff --git a/50-advanced/images/governed-process-ca.png b/50-advanced/images/governed-process-ca.png new file mode 100644 index 000000000..6332f8ec7 Binary files /dev/null and 
b/50-advanced/images/governed-process-ca.png differ diff --git a/50-advanced/images/not-air-gapped.png b/50-advanced/images/not-air-gapped.png new file mode 100644 index 000000000..f37c2b395 Binary files /dev/null and b/50-advanced/images/not-air-gapped.png differ diff --git a/50-advanced/images/semi-air-gapped.png b/50-advanced/images/semi-air-gapped.png new file mode 100644 index 000000000..0100abc05 Binary files /dev/null and b/50-advanced/images/semi-air-gapped.png differ diff --git a/50-advanced/locations-to-whitelist/index.html b/50-advanced/locations-to-whitelist/index.html new file mode 100644 index 000000000..a6440c670 --- /dev/null +++ b/50-advanced/locations-to-whitelist/index.html @@ -0,0 +1 @@ + Locations to whitelist on bastion - Cloud Pak Deployer

Locations to whitelist on bastion🔗

When building or running the deployer in an environment with strict policies for internet access, you may have to specify the list of URLs that need to be accessed by the deployer.

Locations to whitelist when building the deployer image🔗

Location Used for
registry.access.redhat.com Base image
icr.io olm-utils base image
cdn.redhat.com Installing operating system packages
cdn-ubi.redhat.com Installing operating system packages
rpm.releases.hashicorp.com HashiCorp Vault integration
dl.fedoraproject.org Extra Packages for Enterprise Linux (EPEL)
mirrors.fedoraproject.org EPEL mirror site
fedora.mirrorservice.org EPEL mirror site
pypi.org Python packages for deployer
galaxy.ansible.com Ansible Galaxy packages

Locations to whitelist when running the deployer for existing OpenShift🔗

Location Used for
github.com Case files, Cloud Pak clients: cloudctl, cpd-cli, cpdctl
gcr.io Google Container Registry (GCR)
objects.githubusercontent.com Binary content for github.com
raw.githubusercontent.com Binary content for github.com
mirror.openshift.com OpenShift client
ocsp.digicert.com Certificate checking
subscription.rhsm.redhat.com OpenShift subscriptions
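
To quickly verify that these locations can be reached from your server, you can loop over the host names. A sketch; extend the list to match your situation:

for host in github.com objects.githubusercontent.com mirror.openshift.com icr.io; do
+  curl -sS -o /dev/null -m 10 https://$host && echo "$host reachable" || echo "$host NOT reachable"
+done
+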
\ No newline at end of file diff --git a/50-advanced/private-registry-and-air-gapped/index.html b/50-advanced/private-registry-and-air-gapped/index.html new file mode 100644 index 000000000..f66fb072f --- /dev/null +++ b/50-advanced/private-registry-and-air-gapped/index.html @@ -0,0 +1,44 @@ + Private registry and air-gapped - Cloud Pak Deployer

Using a private registry🔗

Some environments, especially in situations where the OpenShift cluster cannot directly connect to the internet, require a private registry from which OpenShift pulls the Cloud Pak images. The Cloud Pak Deployer can mirror images from the entitled registry to a private registry that you want to use for the Cloud Pak(s). Also, if the infrastructure that holds the OpenShift cluster is fully disconnected from the internet, the Cloud Pak Deployer can build a portable registry which can be stored on a portable hard disk or pen drive and then shipped to the site.

Info

Note: In all cases, the deployer can work behind a proxy to access the internet. Go to Running behind a proxy for more information.

The below instructions are not limited to disconnected (air-gapped) OpenShift clusters, but are more generic for deployment using a private registry.

There are three use cases for mirroring images to a private registry and using it to install the Cloud Pak(s):

  1. Mirror images and install using a bastion server
  2. Mirror images with an internet-connected server, install using a bastion server
  3. Mirror images using a portable image registry

Use cases 1 and 3 are also outlined in the Cloud Pak for Data installation documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.5.x?topic=tasks-mirroring-images-your-private-container-registry

For specifying a private registry in the Cloud Pak Deployer configuration, please see Private registry. Example of specifying a private registry with a self-signed certificate in the configuration:

image_registry:
+- name: cpd453
+  registry_host_name: registry.coc.ibm.com
+  registry_port: 5000
+  registry_insecure: True
+

The cp4d instance must reference the image_registry object using the image_registry_name:

cp4d:
+- project: zen-45
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.5.3
+  openshift_storage_name: ocs-storage
+  image_registry_name: cpd453
+

Info

The deployer only supports using a private registry for the Cloud Pak images, not for OpenShift itself. Air-gapped installation of OpenShift is currently not in scope for the deployer.

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.

The 3 main directories that are needed for both types of air-gapped installations are:

  • Cloud Pak Deployer directory: cloud-pak-deployer
  • Configuration directory: The directory that holds all the Cloud Pak Deployer configuration
  • Status directory: The directory that will hold all downloads, vault secrets and the portable registry when applicable (use case 3)

For use cases 2 and 3, where the directories must be shipped to the air-gapped cluster, the Cloud Pak Deployer and Configuration directories will be stored in the Status directory for simplicity.

Use case 1 - Mirror images and install using a bastion server🔗

This is effectively the "not-air-gapped" scenario, where the following conditions apply:

  • The private registry is hosted inside the private cloud
  • The bastion server can connect to the internet and mirror images to the private image registry
  • The bastion server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • The bastion server can connect to OpenShift

Not-air-gapped

On the bastion server🔗

The bastion server is connected to the internet and OpenShift cluster.

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the bastion node, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using ./cp-deploy.sh build
  • Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry
  • Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key
  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example for a registry definition with name cpd453, create secret image-registry-cpd453.

    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    ./cp-deploy.sh env apply
    +
    The existence of the image_registry definition and its reference in the cp4d definition instruct the deployer to mirror images to the private registry and to configure the OpenShift cluster to pull images from the private registry. If you have already mirrored the Cloud Pak images, you can add the --skip-mirror-images parameter to speed up the deployment process.

Use case 2 - Mirror images with an internet-connected server, install using a bastion🔗

This use case is also sometimes referred to as "semi-air-gapped", where the following conditions apply:

  • The private registry is hosted outside of the private cloud that hosts the bastion server and OpenShift
  • An internet-connected server external to the private cloud can reach the entitled registry and the private registry
  • The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • The bastion server cannot connect to the internet
  • The bastion server can connect to OpenShift

Semi-air-gapped

Warning

Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster.

On the internet-connected server🔗

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the internet-connected server, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using ./cp-deploy.sh build
  • Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry
  • Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key
  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example for a registry definition with name cpd453, create secret image-registry-cpd453.
    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +
    If the status directory does not exist it is created at this point.

Diagram step 1🔗

  • Run the deployer using the ./cp-deploy.sh env download --skip-portable-registry command. For example:

    ./cp-deploy.sh env download \
    +    --skip-portable-registry
    +
    This will download all clients to the status directory and then mirror images from the entitled registry to the private registry. If mirroring fails, fix the issue and just run the env download again.

  • Before saving the status directory, you can optionally remove the entitlement key from the vault:

    ./cp-deploy.sh vault delete \
    +    -vs ibm_cp_entitlement_key
    +

Diagram step 2🔗

When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory and the deployer container image.

Diagram step 3🔗

Ship the status directory from the internet-connected server to the bastion server.

You can use tar with gzip mode or any other compression technique. The total size of the directories should be relatively small, typically < 5 GB.
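
For example, to package and later restore the status directory (archive location is illustrative):

# On the internet-connected server
+tar czf /tmp/cpd-status.tar.gz -C $STATUS_DIR .
+# On the bastion server
+mkdir -p $STATUS_DIR
+tar xzf /tmp/cpd-status.tar.gz -C $STATUS_DIR
+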

On the bastion server🔗

The bastion server is not connected to the internet but is connected to the private registry and the OpenShift cluster.

Diagram step 4🔗

We're using the instructions in Run on existing OpenShift, adding the CPD_AIRGAP environment variable and the --skip-mirror-images flag, to start the deployer:

  • Restore the status directory onto the bastion server
  • Export the STATUS_DIR environment variable to point to the status directory
  • Untar the cloud-pak-deployer scripts, for example:

    tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz
    +

  • Set the CPD_AIRGAP environment variable to true

    export CPD_AIRGAP=true
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Run the cp-deploy.sh env apply --skip-mirror-images command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    cd cloud-pak-deployer
    +./cp-deploy.sh env apply \
    +    --skip-mirror-images
    +

The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet; --skip-mirror-images indicates that images are already available in the private registry that is included in the configuration (image_registry).

Use case 3 - Mirror images using a portable image registry🔗

This use case is also usually referred to as "air-gapped", where the following conditions apply:

  • The private registry is hosted in the private cloud that hosts the bastion server and OpenShift
  • The bastion server cannot connect to the internet
  • The bastion server can connect to the private registry and the OpenShift cluster
  • The internet-connected server cannot connect to the private cloud
  • The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • You need a portable registry to fill the private registry with the Cloud Pak images

Air-gapped using portable registry

Warning

Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and therefore cannot provision an OpenShift cluster.

On the internet-connected server🔗

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the internet-connected server, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using ./cp-deploy.sh build
  • Create or update the directory with the configuration, making sure all your Cloud Paks and cartridges are specified
  • Export the CONFIG_DIR and STATUS_DIR environment variables to point to the configuration directory and the status directory, respectively
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key

Diagram step 1🔗

  • Run the deployer using the ./cp-deploy.sh env download command. For example:

    ./cp-deploy.sh env download
    +
    This will download all clients, start the portable registry and then mirror images from the entitled registry to the portable registry. The portable registry data is kept in the status directory. If mirroring fails, fix the issue and just run the env download again.

  • Before saving the status directory, you can optionally remove the entitlement key from the vault:

    ./cp-deploy.sh vault delete \
    +    -vs ibm_cp_entitlement_key
    +

See the download of watsonx.ai in action: https://ibm.box.com/v/cpd-air-gapped-download

Diagram step 2🔗

When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory, the deployer container image and the portable registry.

Diagram step 3🔗

Ship the status directory from the internet-connected server to the bastion server.

You can use tar with gzip compression or any other archiving technique. The status directory now holds all assets required for the air-gapped installation and its size can be substantial (100+ GB). You may want to use multi-volume tar files if you are transferring over the network, as shown below.
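
For example, you can split the archive into fixed-size parts by piping tar through split, and reassemble the parts on the bastion server (a sketch; the 50 GB part size is arbitrary):

tar czf - -C $STATUS_DIR . | split -b 50G - cpd-status.tar.gz.part-
+# On the bastion server, concatenate the parts and extract:
+mkdir -p $STATUS_DIR
+cat cpd-status.tar.gz.part-* | tar xzf - -C $STATUS_DIR
+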

On the bastion server🔗

The bastion server is not connected to the internet but is connected to the private registry and OpenShift cluster.

Diagram step 4🔗

See the air-gapped installation of Cloud Pak for Data in action: https://ibm.box.com/v/cpd-air-gapped-install. For the demonstration video, the download of the previous step has first been re-run to only download the Cloud Pak for Data control plane to avoid having to ship and upload ~700 GB.

We're using the instructions in Run on existing OpenShift, setting the CPD_AIRGAP environment variable.

  • Restore the status directory onto the bastion server. Make sure the volume to which you restore has enough space to hold the entire status directory, which includes the portable registry.
  • Export the STATUS_DIR environment variable to point to the status directory
  • Untar the cloud-pak-deployer scripts, for example:

    tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz
    +cd cloud-pak-deployer
    +

  • Set the CPD_AIRGAP environment variable to true

    export CPD_AIRGAP=true
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example, for a registry definition named cpd453, create the secret image-registry-cpd453.

    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +

  • Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    ./cp-deploy.sh env apply
    +
    The CPD_AIRGAP environment variable tells the deployer that it must not download anything from the internet. As a first action, the deployer mirrors images from the portable registry to the private registry included in the configuration (image_registry).

Running behind a proxy🔗

If the Cloud Pak Deployer is run from a server that has the HTTP proxy environment variables set up, i.e. the "proxy" environment variables are configured on the server and in the terminal session, it also applies these settings inside the deployer container.

The following environment variables are automatically applied to the deployer container if set up in the session running the cp-deploy.sh command:

  • http_proxy
  • https_proxy
  • no_proxy

If you do not want the deployer to use the proxy environment variables, you must remove them before running the cp-deploy.sh command:

unset http_proxy
+unset https_proxy
+unset no_proxy
+

Special settings for debug and DaemonSet images in air-gapped mode🔗

Specifically when running the deployer on IBM Cloud ROKS, certain OpenShift settings must be applied using DaemonSets in the kube-system namespace. Additionally, the deployer uses the oc debug node commands to retrieve kubelet and crio configuration files from the compute nodes.

The default container images used by the DaemonSets and oc debug node commands are based on Red Hat's Universal Base Image and will be pulled from Red Hat registries. This is typically not possible in air-gapped installations, hence different images must be used. It is your responsibility to copy suitable (preferably UBI) images to an image registry that is connected to the OpenShift cluster. Also, if a pull secret is needed to pull the image(s) from the registry, you must create the associated secret in the kube-system OpenShift project.

To configure alternative container images for the deployer to use, set the following properties in the .inv file kept in your configuration's inventory directory, or specify them as additional command line parameters for the cp-deploy.sh command.

If you do not set these values, the deployer assumes that the default images are used for DaemonSet and oc debug node.

  • cpd_oc_debug_image: container image to be used for the oc debug command. Example: registry.redhat.io/rhel8/support-tools:latest
  • cpd_ds_image: container image to be used for the DaemonSets that configure kubelet, etc. Example: registry.access.redhat.com/ubi8/ubi:latest
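
For example, the two properties could point to images you have copied to a registry reachable from the OpenShift cluster. This is a sketch assuming the .inv file holds simple property=value lines; the registry host and image paths are placeholders:

# Hypothetical entries in the configuration's inventory (.inv) file
+cpd_oc_debug_image=registry.internal.example.com:5000/rhel8/support-tools:latest
+cpd_ds_image=registry.internal.example.com:5000/ubi8/ubi:latest
+
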
\ No newline at end of file diff --git a/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html b/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html new file mode 100644 index 000000000..3cb03646d --- /dev/null +++ b/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html @@ -0,0 +1,342 @@ + Build image and run deployer on OpenShift - Cloud Pak Deployer

Build image and run deployer on OpenShift🔗

Create configuration🔗

export CONFIG_DIR=$HOME/cpd-config && mkdir -p $CONFIG_DIR/config
+
+cat << EOF > $CONFIG_DIR/config/cpd-config.yaml
+---
+global_config:
+  environment_name: demo
+  cloud_platform: existing-ocp
+  confirm_destroy: False
+
+openshift:
+- name: cpd-demo
+  ocp_version: "4.10"
+  cluster_name: cpd-demo
+  domain_name: example.com
+  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+
+cp4d:
+- project: cpd-instance
+  openshift_cluster_name: cpd-demo
+  cp4d_version: 4.8.3
+  accept_licenses: True
+  cartridges:
+  - name: cp-foundation
+    license_service:
+      state: disabled
+      threads_per_core: 2
+  - name: lite
+
+#
+# All tested cartridges. To install, change the "state" property to "installed". To uninstall, change the state
+# to "removed" or comment out the entire cartridge. Make sure that the "-" and properties are aligned with the lite
+# cartridge; the "-" is at position 3 and the property starts at position 5.
+#
+
+  - name: analyticsengine 
+    size: small 
+    state: removed
+
+  - name: bigsql
+    state: removed
+
+  - name: ca
+    size: small
+    instances:
+    - name: ca-instance
+      metastore_ref: ca-metastore
+    state: removed
+
+  - name: cde
+    state: removed
+
+  - name: datagate
+    state: removed
+
+  - name: datastage-ent-plus
+    state: removed
+
+    # The default instance is created automatically with the DataStage installation. If you want to create additional instances
+    # uncomment the section below and specify the various scaling options.
+
+    # instances:
+    #   - name: ds-instance
+    #     # Optional settings
+    #     description: "datastage ds-instance"
+    #     size: medium
+    #     storage_class: efs-nfs-client
+    #     storage_size_gb: 60
+    #     # Custom Scale options
+    #     scale_px_runtime:
+    #       replicas: 2
+    #       cpu_request: 500m
+    #       cpu_limit: 2
+    #       memory_request: 2Gi
+    #       memory_limit: 4Gi
+    #     scale_px_compute:
+    #       replicas: 2
+    #       cpu_request: 1
+    #       cpu_limit: 3
+    #       memory_request: 4Gi
+    #       memory_limit: 12Gi    
+
+  - name: db2
+    size: small
+    instances:
+    - name: ca-metastore
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20  
+      transactionlog_size_gb: 20
+    state: removed
+
+  - name: db2wh
+    state: removed
+
+  - name: dmc
+    state: removed
+
+  - name: dods
+    size: small
+    state: removed
+
+  - name: dp
+    size: small
+    state: removed
+
+  - name: dv
+    size: small 
+    instances:
+    - name: data-virtualization
+    state: removed
+
+  - name: hadoop
+    size: small
+    state: removed
+
+  - name: mdm
+    size: small
+    wkc_enabled: true
+    state: removed
+
+  - name: openpages
+    state: removed
+
+  - name: planning-analytics
+    state: removed
+
+  - name: rstudio
+    size: small
+    state: removed
+
+  - name: spss
+    state: removed
+
+  - name: voice-gateway
+    replicas: 1
+    state: removed
+
+  - name: watson-assistant
+    size: small
+    state: removed
+
+  - name: watson-discovery
+    state: removed
+
+  - name: watson-ks
+    size: small
+    state: removed
+
+  - name: watson-openscale
+    size: small
+    state: removed
+
+  - name: watson-speech
+    stt_size: xsmall
+    tts_size: xsmall
+    state: removed
+
+  - name: wkc
+    size: small
+    state: removed
+
+  - name: wml
+    size: small
+    state: installed
+
+  - name: wml-accelerator
+    replicas: 1
+    size: small
+    state: removed
+
+  - name: wsl
+    state: installed
+
+EOF
+

Log in to the OpenShift cluster🔗

Log in as a cluster administrator to be able to run the deployer with the correct permissions.

Prepare the deployer project🔗

oc new-project cloud-pak-deployer 
+
+oc project cloud-pak-deployer
+oc create serviceaccount cloud-pak-deployer-sa
+oc adm policy add-scc-to-user privileged -z cloud-pak-deployer-sa
+oc adm policy add-cluster-role-to-user cluster-admin -z cloud-pak-deployer-sa
+

Build deployer image and push to the internal registry🔗

Building the deployer image typically takes ~5 minutes. Only do this if the image has not been built yet.

cat << EOF | oc apply -f -
+apiVersion: image.openshift.io/v1
+kind: ImageStream
+metadata:
+  name: cloud-pak-deployer
+spec:
+  lookupPolicy:
+    local: true
+EOF
+
+cat << EOF | oc create -f -
+kind: Build
+apiVersion: build.openshift.io/v1
+metadata:
+  generateName: cloud-pak-deployer-bc-
+  namespace: cloud-pak-deployer
+spec:
+  serviceAccount: builder
+  source:
+    type: Git
+    git:
+      uri: 'https://github.com/IBM/cloud-pak-deployer'
+      ref: wizard
+  strategy:
+    type: Docker
+    dockerStrategy:
+      buildArgs:
+      - name: CPD_OLM_UTILS_V2_IMAGE
+        value: icr.io/cpopen/cpd/olm-utils-v2:latest
+      - name: CPD_OLM_UTILS_V3_IMAGE
+        value: icr.io/cpopen/cpd/olm-utils-v3:latest
+  output:
+    to:
+      kind: ImageStreamTag
+      name: 'cloud-pak-deployer:latest'
+  triggeredBy:
+    - message: Manually triggered
+EOF
+

Now, wait until the deployer image has been built.

oc get build -n cloud-pak-deployer -w
+

Set configuration🔗

oc create cm -n cloud-pak-deployer cloud-pak-deployer-config
+oc set data -n cloud-pak-deployer cm/cloud-pak-deployer-config \
+  --from-file=$CONFIG_DIR/config
+

Start the deployer job🔗

export CP_ENTITLEMENT_KEY=your_entitlement_key
+
+cat << EOF | oc apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: cloud-pak-deployer-status
+  namespace: cloud-pak-deployer
+spec:
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 10Gi
+EOF
+
+cat << EOF | oc apply -f -
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app: cloud-pak-deployer
+  name: cloud-pak-deployer
+  namespace: cloud-pak-deployer
+spec:
+  parallelism: 1
+  completions: 1
+  backoffLimit: 0
+  template:
+    metadata:
+      name: cloud-pak-deployer
+      labels:
+        app: cloud-pak-deployer
+    spec:
+      containers:
+      - name: cloud-pak-deployer
+        image: cloud-pak-deployer:latest
+        imagePullPolicy: Always
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+        env:
+        - name: CONFIG_DIR
+          value: /Data/cpd-config
+        - name: STATUS_DIR
+          value: /Data/cpd-status
+        - name: CP_ENTITLEMENT_KEY
+          value: ${CP_ENTITLEMENT_KEY}
+        volumeMounts:
+        - name: config-volume
+          mountPath: /Data/cpd-config/config
+        - name: status-volume
+          mountPath: /Data/cpd-status
+        command: ["/bin/sh","-xc"]
+        args: 
+          - /cloud-pak-deployer/cp-deploy.sh env apply -v
+      restartPolicy: Never
+      securityContext:
+        runAsUser: 0
+      serviceAccountName: cloud-pak-deployer-sa
+      volumes:
+      - name: config-volume
+        configMap:
+          name: cloud-pak-deployer-config
+      - name: status-volume
+        persistentVolumeClaim:
+          claimName: cloud-pak-deployer-status        
+EOF
+
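
To verify that the deployer job has started, you can list its pods:

oc get pods -n cloud-pak-deployer -l app=cloud-pak-deployer
+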

Optional: start debug job🔗

The debug job can be useful if you want to access the status directory of the deployer, for example when the deployer job has failed.

cat << EOF | oc apply -f -
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app: cloud-pak-deployer-debug
+  name: cloud-pak-deployer-debug
+  namespace: cloud-pak-deployer
+spec:
+  parallelism: 1
+  completions: 1
+  backoffLimit: 0
+  template:
+    metadata:
+      name: cloud-pak-deployer-debug
+      labels:
+        app: cloud-pak-deployer-debug
+    spec:
+      containers:
+      - name: cloud-pak-deployer-debug
+        image: cloud-pak-deployer:latest
+        imagePullPolicy: Always
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+        env:
+        - name: CONFIG_DIR
+          value: /Data/cpd-config
+        - name: STATUS_DIR
+          value: /Data/cpd-status
+        volumeMounts:
+        - name: config-volume
+          mountPath: /Data/cpd-config/config
+        - name: status-volume
+          mountPath: /Data/cpd-status
+        command: ["/bin/sh","-xc"]
+        args: 
+          - sleep infinity
+      restartPolicy: Never
+      securityContext:
+        runAsUser: 0
+      serviceAccountName: cloud-pak-deployer-sa
+      volumes:
+      - name: config-volume
+        configMap:
+          name: cloud-pak-deployer-config
+      - name: status-volume
+        persistentVolumeClaim:
+          claimName: cloud-pak-deployer-status        
+EOF
+
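
Once the debug pod is running, you can open a shell in it to inspect the status directory, for example:

oc exec -it -n cloud-pak-deployer \
+  $(oc get pod -n cloud-pak-deployer -l app=cloud-pak-deployer-debug -o name) -- /bin/bash
+# Inside the pod, the status directory is mounted at /Data/cpd-status:
+#   ls -l /Data/cpd-status
+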

Follow the logs of the deployment🔗

oc logs -f -n cloud-pak-deployer job/cloud-pak-deployer
+

In some cases, especially if the OpenShift cluster is remote from where the oc command is running, the oc logs -f command may terminate abruptly.
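
If that happens, you can re-attach to the log stream and continue from the most recent output:

oc logs -f --since=1m -n cloud-pak-deployer job/cloud-pak-deployer
+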

\ No newline at end of file diff --git a/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html b/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html new file mode 100644 index 000000000..14c5e2d61 --- /dev/null +++ b/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html @@ -0,0 +1 @@ + Running deployer on OpenShift using console - Cloud Pak Deployer
\ No newline at end of file diff --git a/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html b/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html new file mode 100644 index 000000000..413923e05 --- /dev/null +++ b/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html @@ -0,0 +1,141 @@ + Run deployer wizard on OpenShift - Cloud Pak Deployer

Run deployer wizard on OpenShift🔗

Log in to the OpenShift cluster🔗

Log in as a cluster administrator to be able to run the deployer with the correct permissions.

Prepare the deployer project and the storage🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block exactly into the window:
    ---
    +apiVersion: v1
    +kind: Namespace
    +metadata:
    +  creationTimestamp: null
    +  name: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: ServiceAccount
    +metadata:
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: RoleBinding
    +metadata:
    +  name: system:openshift:scc:privileged
    +  namespace: cloud-pak-deployer
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: system:openshift:scc:privileged
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: ClusterRoleBinding
    +metadata:
    +  name: cloud-pak-deployer-cluster-admin
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: cluster-admin
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: PersistentVolumeClaim
    +metadata:
    +  name: cloud-pak-deployer-config
    +  namespace: cloud-pak-deployer
    +spec:
    +  accessModes:
    +  - ReadWriteMany
    +  resources:
    +    requests:
    +      storage: 1Gi
    +---
    +apiVersion: v1
    +kind: PersistentVolumeClaim
    +metadata:
    +  name: cloud-pak-deployer-status
    +  namespace: cloud-pak-deployer
    +spec:
    +  accessModes:
    +  - ReadWriteMany
    +  resources:
    +    requests:
    +      storage: 10Gi
    +

Run the deployer wizard and expose route🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block exactly into the window:
    apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: cloud-pak-deployer-wizard
    +  namespace: cloud-pak-deployer
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: cloud-pak-deployer-wizard
    +  template:
    +    metadata:
    +      name: cloud-pak-deployer-wizard
    +      labels:
    +        app: cloud-pak-deployer-wizard
    +    spec:
    +      containers:
    +      - name: cloud-pak-deployer
    +        image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest
    +        imagePullPolicy: Always
    +        terminationMessagePath: /dev/termination-log
    +        terminationMessagePolicy: File
    +        ports:
    +        - containerPort: 8080
    +          protocol: TCP
    +        env:
    +        - name: CONFIG_DIR
    +          value: /Data/cpd-config
    +        - name: STATUS_DIR
    +          value: /Data/cpd-status
    +        - name: CPD_WIZARD_PAGE_TITLE
    +          value: "Cloud Pak Deployer"
    +#        - name: CPD_WIZARD_MODE
    +#          value: existing-ocp
    +        volumeMounts:
    +        - name: config-volume
    +          mountPath: /Data/cpd-config
    +        - name: status-volume
    +          mountPath: /Data/cpd-status
    +        command: ["/bin/sh","-xc"]
    +        args: 
    +          - mkdir -p /Data/cpd-config/config && /cloud-pak-deployer/cp-deploy.sh env wizard -v
    +      securityContext:
    +        runAsUser: 0
    +      serviceAccountName: cloud-pak-deployer-sa
    +      volumes:
    +      - name: config-volume
    +        persistentVolumeClaim:
    +          claimName: cloud-pak-deployer-config   
    +      - name: status-volume
    +        persistentVolumeClaim:
    +          claimName: cloud-pak-deployer-status        
    +---
    +apiVersion: v1
    +kind: Service
    +metadata:
    +  name: cloud-pak-deployer-wizard-svc
    +  namespace: cloud-pak-deployer    
    +spec:
    +  selector:                  
    +    app: cloud-pak-deployer-wizard
    +  ports:
    +  - nodePort: 0
    +    port: 8080            
    +    protocol: TCP
    +---
    +apiVersion: route.openshift.io/v1
    +kind: Route
    +metadata:
    +  name: cloud-pak-deployer-wizard
    +spec:
    +  tls:
    +    termination: edge
    +  to:
    +    kind: Service
    +    name: cloud-pak-deployer-wizard-svc
    +    weight: null
    +

Open the wizard🔗

Now you can access the deployer wizard using the route created in the cloud-pak-deployer project.

  • Open the OpenShift console
  • Go to Networking → Routes
  • Click the Cloud Pak Deployer wizard route
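
Alternatively, you can retrieve the route host from the command line and open it in your browser:

oc get route -n cloud-pak-deployer cloud-pak-deployer-wizard -o jsonpath='{.spec.host}'
+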

\ No newline at end of file diff --git a/80-development/deployer-development-setup/index.html b/80-development/deployer-development-setup/index.html new file mode 100644 index 000000000..39ff59409 --- /dev/null +++ b/80-development/deployer-development-setup/index.html @@ -0,0 +1,56 @@ + Deployer development setup - Cloud Pak Deployer

Deployer Development Setup🔗

This page describes how to set up a virtual machine or server for developing the Cloud Pak Deployer code. It focuses on the initial setup of a server to run the deployer container, setting up Visual Studio Code, issuing GPG keys and running the deployer in development mode.

Set up a server for development🔗

We recommend using a Red Hat Linux server for development of the Cloud Pak Deployer, either a virtual server in the cloud or a virtual machine on your workstation. Ideally, you run Visual Studio Code on your workstation and connect it to the remote Red Hat Linux server, updating the code and running it immediately from that server.

Install required packages🔗

To allow for remote development, a number of packages need to be installed on the Linux server. Without them, VSCode will not work and the resulting error messages are difficult to debug. To install these packages, run the following as the root user:

yum install -y git podman wget unzip tar gpg pinentry
+

Additionally, you can install EPEL and screen to make it easier to keep your session alive if it gets disconnected.

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
+yum install -y screen
+

Set up development user🔗

It is recommended to use a dedicated development user (your user name) on the Linux server rather than root. Not only is this more secure; it also prevents destructive mistakes. In the steps below, we create a user fk-dev and give it sudo permissions.

useradd -G wheel fk-dev
+

To give the fk-dev permissions to run commands as root, change the sudo settings.

visudo
+

Scroll down until you see the following line:

# %wheel        ALL=(ALL)       NOPASSWD: ALL
+

Change the line to look like this:

%wheel        ALL=(ALL)       NOPASSWD: ALL
+

Now, save the file by pressing Esc, followed by : and x.

Configure password-less SSH for development user🔗

Especially when running the virtual server in the cloud, users typically log on using their SSH key. This requires the public key of the workstation to be added to the development user's SSH configuration.

Make sure you run the following commands as the development user (fk-dev):

mkdir -p ~/.ssh
+chmod 700 ~/.ssh
+touch ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+

Then, add the public key of your workstation to the authorized_keys file.

vi ~/.ssh/authorized_keys
+

Press i to enter insert mode in vi. Then paste the public SSH key, for example:

ssh-rsa AAAAB3NzaC1yc2EAAAADAXABAAABAQEGUeXJr0ZHy1SPGOntmr/7ixmK3KV8N3q/+0eSfKVTyGbhUO9lC1+oYcDvwMrizAXBJYWkIIwx4WgC77a78....fP3S5WYgqL fk-dev
+

Finally save the file by pressing Esc, followed by : and x.

Configure Git for the development user🔗

Run the following commands as the development user (fk-dev):

git config --global user.name "Your full name"
+git config --global user.email "your_email_address"
+git config --global credential.helper "cache --timeout=86400"
+

Set up GPG for the development user🔗

We also want to ensure that commits are verified (trusted) by signing them with a GPG key. This requires setup on the development server and in your GitHub account.

First, set up a new GPG key:

gpg --default-new-key-algo rsa4096 --gen-key
+

You will be prompted to specify your user information:

  • Real name: Enter your full name
  • Email address: Your e-mail address that will be used to sign the commits

Press o at the following prompt:

Change (N)ame, (E)mail, or (O)kay/(Q)uit?
+

Then, you will be prompted for a passphrase. You cannot use a passphrase for your GPG key if you want to use it for automatic signing of commits, so just press Enter multiple times until the GPG key has been generated.

List the signatures of the known keys. You will use the signature to sign the commits and to retrieve the public key.

gpg --list-signatures
+

Output will look something like this:

/home/fk-dev/.gnupg/pubring.kbx
+-----------------------------------
+pub   rsa4096 2022-10-30 [SC] [expires: 2024-10-29]
+      BC83E8A97538EDD4E01DC05EA83C67A6D7F71756
+uid           [ultimate] FK Developer <fk-dev@ibm.com>
+sig 3        A83C67A6D7F71756 2022-10-30  FK Developer <fk-dev@ibm.com>
+

You will use the signature to retrieve the public key:

gpg --armor --export A83C67A6D7F71756
+

The public key will look something like below:

-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBGNeGNQBEAC/y2tovX5s0Z+onUpisnMMleG94nqOtajXG1N0UbHAUQyKfirt
+O8t91ek+e5PEsVkR/RLIM1M1YkiSV4irxW/uFPucXHZDVH8azfnJjf6j6cXWt/ra
+1I2vGV3dIIQ6aJIBEEXC+u+N6rWpCOF5ERVrumGFlDhL/PY8Y9NM0cNQCbOcciTV
+5a5DrqyHC3RD5Bcn5EA0/5ISTCGQyEbJe45G8L+a5yRchn4ACVEztR2B/O5iOZbM
+.
+.
+.
+4ojOJPu0n5QLA5cI3RyZFw==
+=sx91
+-----END PGP PUBLIC KEY BLOCK-----
+

Now that you have the signature, you can configure Git to sign commits:

git config --global user.signingkey A83C67A6D7F71756
+
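
Optionally, you can also tell Git to sign every commit automatically, so you do not have to pass -S on each commit:

git config --global commit.gpgsign true
+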

Next, add your GPG key to your Git user.

  • Go to IBM/cloud-pak-deployer.git
  • Log in using your public GitHub user
  • Click on your user avatar at the top right of the page
  • Click Settings
  • In the left menu, select SSH and GPG keys
  • Click New GPG key
  • Enter a meaningful title for your GPG key, for example: FK Development Server
  • Paste the public GPG key
  • Confirm by pushing the Add GPG key button

Commits done on your development server will now be signed with your user name and e-mail address and will show as Verified when listing the commits.

Clone the repository🔗

Clone the repository using a git command. The command below clones the main Cloud Pak Deployer repository. If you have forked the repository to develop features, use the URL of your own fork.

git clone https://github.com/IBM/cloud-pak-deployer.git
+

Connect VSCode to the development server🔗

  • Install the Remote - SSH extension in VSCode
  • Click on the green icon in the lower left of VSCode
  • Open SSH Config file, choose the one in your home directory
  • Add the following lines:
    Host nickname_of_your_server
    +   HostName ip_address_of_your_server
    +   User fk-dev
    +

Once you have set up this server in the SSH config file, you can connect to it and start remote development.

  • Connect to the development server and choose Open Folder
  • Select the cloud-pak-deployer directory (this is the cloned repository)
  • As the directory is a cloned Git repo, VSCode will automatically open the default branch

From that point forward you can use VSCode as if you were working on your laptop, make changes and use a separate terminal to test your changes.

Cloud Pak Deployer developer command line option🔗

The Cloud Pak Deployer runs as a container on the server. When you're developing new features, having to rebuild the image after every change is a bit of a pain, hence we've introduced a special command line parameter.

./cp-deploy.sh env apply .... --cpd-develop [--accept-all-licenses]
+

When adding the --cpd-develop parameter to the command line, the current directory is mapped as a volume to the /cloud-pak-deployer directory within the container. This means that any latest changes you've done to the Ansible playbooks or other commands will take effect immediately.

Warning

Even though it is possible to run the deployer multiple times in parallel for different environments, please be aware that this is NOT possible when you use the --cpd-develop parameter. If you run two deploy processes with this parameter, you will see permission errors.

Cloud Pak Deployer developer container image tag🔗

When working on multiple changes concurrently, you may have to switch between branches or tags. By default, the Cloud Pak Deployer image is built with the latest tag, but you can override this by setting the CPD_IMAGE_TAG environment variable in your session.

export CPD_IMAGE_TAG=cp4d-460
+./cp-deploy.sh build
+

When building the deployer, the image is now tagged:

podman image ls
+

REPOSITORY                           TAG         IMAGE ID      CREATED        SIZE
+localhost/cloud-pak-deployer         cp4d-460    8b08cb2f9a2e  8 minutes ago  1.92 GB
+

When running the deployer with the same environment variable set, you will see an additional message in the output.

./cp-deploy.sh env apply
+

Cloud Pak Deployer image tag cp4d-460 will be used.
+...
+

Cloud Pak Deployer podman or docker command🔗

By default, the cp-deploy.sh command detects whether podman (preferred) or docker is available on the system; if both are present, podman is used. You can override this behaviour by setting the CPD_CONTAINER_ENGINE environment variable.

export CPD_CONTAINER_ENGINE=docker
+./cp-deploy.sh build
+
Container engine docker will be used.
+
\ No newline at end of file diff --git a/80-development/doc-development-setup/index.html b/80-development/doc-development-setup/index.html new file mode 100644 index 000000000..d3128610e --- /dev/null +++ b/80-development/doc-development-setup/index.html @@ -0,0 +1,11 @@ + Deployer documentation development setup - Cloud Pak Deployer

Documentation Development setup🔗

MkDocs themes encapsulate all of the configuration and implementation details of static documentation sites. This GitHub repository has been built with a dependency on the MkDocs tool. The repository is connected to GitHub Actions; any commit to the main branch triggers a build of the GitHub pages. The preferred way of working while developing documentation is to use the tooling from a local system.

Local tooling installation🔗

If you want to test the documentation pages you're developing, it is best to run Mkdocs in a container and map your local docs folder to a folder inside the container. This avoids having to install nvm and many modules on your workstation.

Do the following:

  • Make sure you have cloned this repository to your development server
  • Start from the main directory of the cloud-pak-deployer repository
    cd docs
    +./dev-doc-build.sh
    +

This will build a Red Hat UBI image with all requirements pre-installed. This step takes ~2-10 minutes to complete, depending on your network bandwidth.

Running the documentation image🔗

./dev-doc-run.sh
+

This will start the container as a daemon and tail the logs. Once running, you will see the following message:

...
+INFO     -  Documentation built in 3.32 seconds
+INFO     -  [11:55:49] Watching paths for changes: 'src', 'mkdocs.yml'
+INFO     -  [11:55:49] Serving on http://0.0.0.0:8000/cloud-pak-deployer/...
+

Starting the browser🔗

Now that the container has fully started, it automatically tracks all changes under the docs folder and updates the pages site. You can view the site by opening a browser at the following URL:

http://localhost:8000

Stopping the documentation container🔗

If you don't want to test your changes locally anymore, stop the container.

podman kill cpd-doc
+

Next time you want to test your changes, re-run ./dev-doc-run.sh, which will delete the container, clear the cache and rebuild the documentation.

Removing the container and image🔗

If you want to remove everything from your development server, do the following:

podman rm -f cpd-doc
+podman rmi -f cpd-doc:latest
+

Note that after merging your updated documentation with the main branch, the pages site will be rendered by a GitHub action. Go to GitHub Actions if you want to monitor the build process.

\ No newline at end of file diff --git a/80-development/doc-guidelines/index.html b/80-development/doc-guidelines/index.html new file mode 100644 index 000000000..067c991d8 --- /dev/null +++ b/80-development/doc-guidelines/index.html @@ -0,0 +1,37 @@ + Deployer documentation guidelines - Cloud Pak Deployer

Deployer documentation guidelines

Documentation guidelines🔗

This document contains a few formatting rules/requirements to maintain uniformity and structure across our documentation.

Formatting🔗

Code block input🔗

Code block inputs should be created by surrounding the code text with three backticks (```). For example, to create the following code block:

oc get nodes
+

Your markdown input would look like:

``` { .bash .copy }
+oc get nodes
+```
+

Code block output🔗

Code block outputs should specify the output language. This can be done by putting the language after the opening tick marks. For example, to create the following code block:

{
+    "cloudName": "AzureCloud",
+    "homeTenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
+    "id": "d604759d-4ce2-4dbc-b012-b9d7f1d0c185",
+    "isDefault": true,
+    "managedByTenants": [],
+    "name": "Microsoft Azure Enterprise",
+    "state": "Enabled",
+    "tenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
+    "user": {
+    "name": "example@example.com",
+    "type": "user"
+    }
+}
+

Your markdown input would look like:

```output
+{
+    "cloudName": "AzureCloud",
+    "homeTenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
+    "id": "d604759d-4ce2-4dbc-b012-b9d7f1d0c185",
+    "isDefault": true,
+    "managedByTenants": [],
+    "name": "Microsoft Azure Enterprise",
+    "state": "Enabled",
+    "tenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
+    "user": {
+    "name": "example@example.com",
+    "type": "user"
+    }
+}
+```
+

Information block (inline notifications)🔗

If you want to highlight something to the reader using an information or a warning block, use the following code:

!!! warning
+    Warning: please do not shut down the cluster at this stage.
+

This will show up as:

Warning

Warning: please do not shut down the cluster at this stage.

You can also use info and error blocks.
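
For example, an information block uses the same syntax with the info keyword (the text shown is an arbitrary example):

!!! info
+    The deployer keeps all of its state in the status directory.
+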

\ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 000000000..1cf13b9f9 Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.83f73b43.min.js b/assets/javascripts/bundle.83f73b43.min.js new file mode 100644 index 000000000..43d8b70f6 (minified JavaScript bundle omitted)
x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("a",{href:r,class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}else return x("aside",{class:"md-annotation",tabIndex:0},Rt(t),x("span",{class:"md-annotation__index",tabIndex:-1},x("span",{"data-md-annotation-id":e})))}function Sn(e){return x("button",{class:"md-clipboard md-icon",title:Ee("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}var Ln=Mt(qr());function Qr(e,t){let r=t&2,o=t&1,n=Object.keys(e.terms).filter(p=>!e.terms[p]).reduce((p,c)=>[...p,x("del",null,(0,Ln.default)(c))," "],[]).slice(0,-1),i=xe(),a=new URL(e.location,i.base);B("search.highlight")&&a.searchParams.set("h",Object.entries(e.terms).filter(([,p])=>p).reduce((p,[c])=>`${p} ${c}`.trim(),""));let{tags:s}=xe();return x("a",{href:`${a}`,class:"md-search-result__link",tabIndex:-1},x("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&x("div",{class:"md-search-result__icon md-icon"}),r>0&&x("h1",null,e.title),r<=0&&x("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&x("nav",{class:"md-tags"},e.tags.map(p=>{let c=s?p in s?`md-tag-icon md-tag--${s[p]}`:"md-tag-icon":"";return x("span",{class:`md-tag ${c}`},p)})),o>0&&n.length>0&&x("p",{class:"md-search-result__terms"},Ee("search.result.term.missing"),": ",...n)))}function Mn(e){let t=e[0].score,r=[...e],o=xe(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),a=r.findIndex(l=>l.scoreQr(l,1)),...p.length?[x("details",{class:"md-search-result__more"},x("summary",{tabIndex:-1},x("div",null,p.length>0&&p.length===1?Ee("search.result.more.one"):Ee("search.result.more.other",p.length))),...p.map(l=>Qr(l,1)))]:[]];return x("li",{class:"md-search-result__item"},c)}function _n(e){return x("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>x("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?sr(r):r)))}function Kr(e){let t=`tabbed-control tabbed-control--${e}`;return x("div",{class:t,hidden:!0},x("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function An(e){return x("div",{class:"md-typeset__scrollwrap"},x("div",{class:"md-typeset__table"},e))}function Ra(e){var o;let t=xe(),r=new URL(`../${e.version}/`,t.base);return x("li",{class:"md-version__item"},x("a",{href:`${r}`,class:"md-version__link"},e.title,((o=t.version)==null?void 0:o.alias)&&e.aliases.length>0&&x("span",{class:"md-version__alias"},e.aliases[0])))}function Cn(e,t){var o;let r=xe();return e=e.filter(n=>{var i;return!((i=n.properties)!=null&&i.hidden)}),x("div",{class:"md-version"},x("button",{class:"md-version__current","aria-label":Ee("select.version")},t.title,((o=r.version)==null?void 0:o.alias)&&t.aliases.length>0&&x("span",{class:"md-version__alias"},t.aliases[0])),x("ul",{class:"md-version__list"},e.map(Ra)))}var Ia=0;function ja(e){let t=z([et(e),$t(e)]).pipe(m(([o,n])=>o||n),K()),r=C(()=>Zo(e)).pipe(ne(Ne),pt(1),He(t),m(()=>en(e)));return t.pipe(Ae(o=>o),v(()=>z([t,r])),m(([o,n])=>({active:o,offset:n})),pe())}function Fa(e,t){let{content$:r,viewport$:o}=t,n=`__tooltip2_${Ia++}`;return C(()=>{let i=new g,a=new _r(!1);i.pipe(Z(),ie(!1)).subscribe(a);let s=a.pipe(Ht(c=>Le(+!c*250,kr)),K(),v(c=>c?r:S),w(c=>c.id=n),pe());z([i.pipe(m(({active:c})=>c)),s.pipe(v(c=>$t(c,250)),Q(!1))]).pipe(m(c=>c.some(l=>l))).subscribe(a);let p=a.pipe(b(c=>c),re(s,o),m(([c,l,{size:f}])=>{let 
u=e.getBoundingClientRect(),d=u.width/2;if(l.role==="tooltip")return{x:d,y:8+u.height};if(u.y>=f.height/2){let{height:y}=ce(l);return{x:d,y:-16-y}}else return{x:d,y:16+u.height}}));return z([s,i,p]).subscribe(([c,{offset:l},f])=>{c.style.setProperty("--md-tooltip-host-x",`${l.x}px`),c.style.setProperty("--md-tooltip-host-y",`${l.y}px`),c.style.setProperty("--md-tooltip-x",`${f.x}px`),c.style.setProperty("--md-tooltip-y",`${f.y}px`),c.classList.toggle("md-tooltip2--top",f.y<0),c.classList.toggle("md-tooltip2--bottom",f.y>=0)}),a.pipe(b(c=>c),re(s,(c,l)=>l),b(c=>c.role==="tooltip")).subscribe(c=>{let l=ce(R(":scope > *",c));c.style.setProperty("--md-tooltip-width",`${l.width}px`),c.style.setProperty("--md-tooltip-tail","0px")}),a.pipe(K(),ve(me),re(s)).subscribe(([c,l])=>{l.classList.toggle("md-tooltip2--active",c)}),z([a.pipe(b(c=>c)),s]).subscribe(([c,l])=>{l.role==="dialog"?(e.setAttribute("aria-controls",n),e.setAttribute("aria-haspopup","dialog")):e.setAttribute("aria-describedby",n)}),a.pipe(b(c=>!c)).subscribe(()=>{e.removeAttribute("aria-controls"),e.removeAttribute("aria-describedby"),e.removeAttribute("aria-haspopup")}),ja(e).pipe(w(c=>i.next(c)),_(()=>i.complete()),m(c=>$({ref:e},c)))})}function mt(e,{viewport$:t},r=document.body){return Fa(e,{content$:new j(o=>{let n=e.title,i=wn(n);return o.next(i),e.removeAttribute("title"),r.append(i),()=>{i.remove(),e.setAttribute("title",n)}}),viewport$:t})}function Ua(e,t){let r=C(()=>z([tn(e),Ne(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:a,height:s}=ce(e);return{x:o-i.x+a/2,y:n-i.y+s/2}}));return et(e).pipe(v(o=>r.pipe(m(n=>({active:o,offset:n})),Te(+!o||1/0))))}function kn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return C(()=>{let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({offset:s}){e.style.setProperty("--md-tooltip-x",`${s.x}px`),e.style.setProperty("--md-tooltip-y",`${s.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),tt(e).pipe(W(a)).subscribe(s=>{e.toggleAttribute("data-md-visible",s)}),O(i.pipe(b(({active:s})=>s)),i.pipe(_e(250),b(({active:s})=>!s))).subscribe({next({active:s}){s?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Me(16,me)).subscribe(({active:s})=>{o.classList.toggle("md-tooltip--active",s)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:s})=>s)).subscribe({next(s){s?e.style.setProperty("--md-tooltip-0",`${-s}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(W(a),b(s=>!(s.metaKey||s.ctrlKey))).subscribe(s=>{s.stopPropagation(),s.preventDefault()}),h(n,"mousedown").pipe(W(a),re(i)).subscribe(([s,{active:p}])=>{var c;if(s.button!==0||s.metaKey||s.ctrlKey)s.preventDefault();else if(p){s.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(c=Ie())==null||c.blur()}}),r.pipe(W(a),b(s=>s===o),Ge(125)).subscribe(()=>e.focus()),Ua(e,t).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function Wa(e){return e.tagName==="CODE"?P(".c, .c1, .cm",e):[e]}function Da(e){let t=[];for(let r of Wa(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let a;for(;a=/(\(\d+\))(!)?/.exec(i.textContent);){let[,s,p]=a;if(typeof p=="undefined"){let c=i.splitText(a.index);i=c.splitText(s.length),t.push(c)}else{i.textContent=s,t.push(i);break}}}}return t}function 
Hn(e,t){t.append(...Array.from(e.childNodes))}function fr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,a=new Map;for(let s of Da(t)){let[,p]=s.textContent.match(/\((\d+)\)/);fe(`:scope > li:nth-child(${p})`,e)&&(a.set(p,Tn(p,i)),s.replaceWith(a.get(p)))}return a.size===0?S:C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=[];for(let[l,f]of a)c.push([R(".md-typeset",f),R(`:scope > li:nth-child(${l})`,e)]);return o.pipe(W(p)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of c)l?Hn(f,u):Hn(u,f)}),O(...[...a].map(([,l])=>kn(l,t,{target$:r}))).pipe(_(()=>s.complete()),pe())})}function $n(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return $n(t)}}function Pn(e,t){return C(()=>{let r=$n(e);return typeof r!="undefined"?fr(r,e,t):S})}var Rn=Mt(Br());var Va=0;function In(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return In(t)}}function Na(e){return ge(e).pipe(m(({width:t})=>({scrollable:St(e).width>t})),ee("scrollable"))}function jn(e,t){let{matches:r}=matchMedia("(hover)"),o=C(()=>{let n=new g,i=n.pipe(jr(1));n.subscribe(({scrollable:c})=>{c&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")});let a=[];if(Rn.default.isSupported()&&(e.closest(".copy")||B("content.code.copy")&&!e.closest(".no-copy"))){let c=e.closest("pre");c.id=`__code_${Va++}`;let l=Sn(c.id);c.insertBefore(l,e),B("content.tooltips")&&a.push(mt(l,{viewport$}))}let s=e.closest(".highlight");if(s instanceof HTMLElement){let c=In(s);if(typeof c!="undefined"&&(s.classList.contains("annotate")||B("content.code.annotate"))){let l=fr(c,e,t);a.push(ge(s).pipe(W(i),m(({width:f,height:u})=>f&&u),K(),v(f=>f?l:S)))}}return P(":scope > span[id]",e).length&&e.classList.add("md-code__content"),Na(e).pipe(w(c=>n.next(c)),_(()=>n.complete()),m(c=>$({ref:e},c)),Re(...a))});return B("content.lazy")?tt(e).pipe(b(n=>n),Te(1),v(()=>o)):o}function za(e,{target$:t,print$:r}){let o=!0;return O(t.pipe(m(n=>n.closest("details:not([open])")),b(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(b(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Fn(e,t){return C(()=>{let r=new g;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),za(e,t).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}var Un=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel p,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel p{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g 
#flowchart-pointEnd,g #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel,.nodeLabel p{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}a .nodeLabel{text-decoration:underline}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Gr,Qa=0;function Ka(){return typeof mermaid=="undefined"||mermaid instanceof Element?Tt("https://unpkg.com/mermaid@11/dist/mermaid.min.js"):I(void 0)}function Wn(e){return e.classList.remove("mermaid"),Gr||(Gr=Ka().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Un,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),G(1))),Gr.subscribe(()=>co(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Qa++}`,r=x("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),Gr.pipe(m(()=>({ref:e})))}var Dn=x("table");function Vn(e){return e.replaceWith(Dn),Dn.replaceWith(An(e)),I({ref:e})}function Ya(e){let t=e.find(r=>r.checked)||e[0];return O(...e.map(r=>h(r,"change").pipe(m(()=>R(`label[for="${r.id}"]`))))).pipe(Q(R(`label[for="${t.id}"]`)),m(r=>({active:r})))}function Nn(e,{viewport$:t,target$:r}){let o=R(".tabbed-labels",e),n=P(":scope > input",e),i=Kr("prev");e.append(i);let a=Kr("next");return e.append(a),C(()=>{let s=new g,p=s.pipe(Z(),ie(!0));z([s,ge(e),tt(e)]).pipe(W(p),Me(1,me)).subscribe({next([{active:c},l]){let f=Ve(c),{width:u}=ce(c);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let d=pr(o);(f.xd.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),z([Ne(o),ge(o)]).pipe(W(p)).subscribe(([c,l])=>{let f=St(o);i.hidden=c.x<16,a.hidden=c.x>f.width-l.width-16}),O(h(i,"click").pipe(m(()=>-1)),h(a,"click").pipe(m(()=>1))).pipe(W(p)).subscribe(c=>{let{width:l}=ce(o);o.scrollBy({left:l*c,behavior:"smooth"})}),r.pipe(W(p),b(c=>n.includes(c))).subscribe(c=>c.click()),o.classList.add("tabbed-labels--linked");for(let c of n){let l=R(`label[for="${c.id}"]`);l.replaceChildren(x("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),h(l.firstElementChild,"click").pipe(W(p),b(f=>!(f.metaKey||f.ctrlKey)),w(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return B("content.tabs.link")&&s.pipe(Ce(1),re(t)).subscribe(([{active:c},{offset:l}])=>{let 
f=c.innerText.trim();if(c.hasAttribute("data-md-switching"))c.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let y of P("[data-tabs]"))for(let L of P(":scope > input",y)){let X=R(`label[for="${L.id}"]`);if(X!==c&&X.innerText.trim()===f){X.setAttribute("data-md-switching",""),L.click();break}}window.scrollTo({top:e.offsetTop-u});let d=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...d])])}}),s.pipe(W(p)).subscribe(()=>{for(let c of P("audio, video",e))c.pause()}),Ya(n).pipe(w(c=>s.next(c)),_(()=>s.complete()),m(c=>$({ref:e},c)))}).pipe(Ke(se))}function zn(e,{viewport$:t,target$:r,print$:o}){return O(...P(".annotate:not(.highlight)",e).map(n=>Pn(n,{target$:r,print$:o})),...P("pre:not(.mermaid) > code",e).map(n=>jn(n,{target$:r,print$:o})),...P("pre.mermaid",e).map(n=>Wn(n)),...P("table:not([class])",e).map(n=>Vn(n)),...P("details",e).map(n=>Fn(n,{target$:r,print$:o})),...P("[data-tabs]",e).map(n=>Nn(n,{viewport$:t,target$:r})),...P("[title]",e).filter(()=>B("content.tooltips")).map(n=>mt(n,{viewport$:t})))}function Ba(e,{alert$:t}){return t.pipe(v(r=>O(I(!0),I(!1).pipe(Ge(2e3))).pipe(m(o=>({message:r,active:o})))))}function qn(e,t){let r=R(".md-typeset",e);return C(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ba(e,t).pipe(w(n=>o.next(n)),_(()=>o.complete()),m(n=>$({ref:e},n)))})}var Ga=0;function Ja(e,t){document.body.append(e);let{width:r}=ce(e);e.style.setProperty("--md-tooltip-width",`${r}px`),e.remove();let o=cr(t),n=typeof o!="undefined"?Ne(o):I({x:0,y:0}),i=O(et(t),$t(t)).pipe(K());return z([i,n]).pipe(m(([a,s])=>{let{x:p,y:c}=Ve(t),l=ce(t),f=t.closest("table");return f&&t.parentElement&&(p+=f.offsetLeft+t.parentElement.offsetLeft,c+=f.offsetTop+t.parentElement.offsetTop),{active:a,offset:{x:p-s.x+l.width/2-r/2,y:c-s.y+l.height+8}}}))}function Qn(e){let t=e.title;if(!t.length)return S;let r=`__tooltip_${Ga++}`,o=Rt(r,"inline"),n=R(".md-typeset",o);return n.innerHTML=t,C(()=>{let i=new g;return i.subscribe({next({offset:a}){o.style.setProperty("--md-tooltip-x",`${a.x}px`),o.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){o.style.removeProperty("--md-tooltip-x"),o.style.removeProperty("--md-tooltip-y")}}),O(i.pipe(b(({active:a})=>a)),i.pipe(_e(250),b(({active:a})=>!a))).subscribe({next({active:a}){a?(e.insertAdjacentElement("afterend",o),e.setAttribute("aria-describedby",r),e.removeAttribute("title")):(o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t))},complete(){o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t)}}),i.pipe(Me(16,me)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(pt(125,me),b(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?o.style.setProperty("--md-tooltip-0",`${-a}px`):o.style.removeProperty("--md-tooltip-0")},complete(){o.style.removeProperty("--md-tooltip-0")}}),Ja(o,e).pipe(w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))}).pipe(Ke(se))}function Xa({viewport$:e}){if(!B("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Be(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),K()),o=ze("search");return z([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),K(),v(n=>n?r:I(!1)),Q(!1))}function Kn(e,t){return C(()=>z([ge(e),Xa(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),K((r,o)=>r.height===o.height&&r.hidden===o.hidden),G(1))}function Yn(e,{header$:t,main$:r}){return C(()=>{let o=new 
g,n=o.pipe(Z(),ie(!0));o.pipe(ee("active"),He(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=ue(P("[title]",e)).pipe(b(()=>B("content.tooltips")),ne(a=>Qn(a)));return r.subscribe(o),t.pipe(W(n),m(a=>$({ref:e},a)),Re(i.pipe(W(n))))})}function Za(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=ce(e);return{active:o>=n}}),ee("active"))}function Bn(e,t){return C(()=>{let r=new g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=fe(".md-content h1");return typeof o=="undefined"?S:Za(o,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))})}function Gn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),K()),n=o.pipe(v(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),ee("bottom"))));return z([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:p},size:{height:c}}])=>(c=Math.max(0,c-Math.max(0,a-p,i)-Math.max(0,c+p-s)),{offset:a-i,height:c,active:a-i<=p})),K((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function es(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return I(...e).pipe(ne(o=>h(o,"change").pipe(m(()=>o))),Q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),G(1))}function Jn(e){let t=P("input",e),r=x("meta",{name:"theme-color"});document.head.appendChild(r);let o=x("meta",{name:"color-scheme"});document.head.appendChild(o);let n=Pt("(prefers-color-scheme: light)");return C(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),p=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=p.getAttribute("data-md-color-scheme"),a.color.primary=p.getAttribute("data-md-color-primary"),a.color.accent=p.getAttribute("data-md-color-accent")}for(let[s,p]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,p);for(let s=0;sa.key==="Enter"),re(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Se("header"),s=window.getComputedStyle(a);return o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(p=>(+p).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(ve(se)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),es(t).pipe(W(n.pipe(Ce(1))),ct(),w(a=>i.next(a)),_(()=>i.complete()),m(a=>$({ref:e},a)))})}function Xn(e,{progress$:t}){return C(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),_(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Jr=Mt(Br());function ts(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Zn({alert$:e}){Jr.default.isSupported()&&new j(t=>{new Jr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||ts(R(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>Ee("clipboard.copied"))).subscribe(e)}function ei(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function rs(e,t){let r=new Map;for(let o of P("url",e)){let n=R("loc",o),i=[ei(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of P("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(ei(new URL(s),t))}}return r}function ur(e){return un(new URL("sitemap.xml",e)).pipe(m(t=>rs(t,new URL(e))),de(()=>I(new Map)))}function os(e,t){if(!(e.target instanceof Element))return S;let r=e.target.closest("a");if(r===null)return S;if(r.target||e.metaKey||e.ctrlKey)return S;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),I(new URL(r.href))):S}function ti(e){let t=new Map;for(let r of P(":scope > *",e.head))t.set(r.outerHTML,r);return t}function ri(e){for(let t of P("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return I(e)}function ns(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...B("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=fe(o),i=fe(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=ti(document);for(let[o,n]of ti(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Se("container");return We(P("script",r)).pipe(v(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new j(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),S}),Z(),ie(document))}function oi({location$:e,viewport$:t,progress$:r}){let o=xe();if(location.protocol==="file:")return S;let n=ur(o.base);I(document).subscribe(ri);let i=h(document.body,"click").pipe(He(n),v(([p,c])=>os(p,c)),pe()),a=h(window,"popstate").pipe(m(ye),pe());i.pipe(re(t)).subscribe(([p,{offset:c}])=>{history.replaceState(c,""),history.pushState(null,"",p)}),O(i,a).subscribe(e);let s=e.pipe(ee("pathname"),v(p=>fn(p,{progress$:r}).pipe(de(()=>(lt(p,!0),S)))),v(ri),v(ns),pe());return O(s.pipe(re(e,(p,c)=>c)),s.pipe(v(()=>e),ee("pathname"),v(()=>e),ee("hash")),e.pipe(K((p,c)=>p.pathname===c.pathname&&p.hash===c.hash),v(()=>i),w(()=>history.back()))).subscribe(p=>{var c,l;history.state!==null||!p.hash?window.scrollTo(0,(l=(c=history.state)==null?void 0:c.y)!=null?l:0):(history.scrollRestoration="auto",pn(p.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(ee("offset"),_e(100)).subscribe(({offset:p})=>{history.replaceState(p,"")}),s}var ni=Mt(qr());function ii(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,ni.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function jt(e){return e.type===1}function dr(e){return 
e.type===3}function ai(e,t){let r=yn(e);return O(I(location.protocol!=="file:"),ze("search")).pipe(Ae(o=>o),v(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:B("search.suggest")}}})),r}function si(e){var l;let{selectedVersionSitemap:t,selectedVersionBaseURL:r,currentLocation:o,currentBaseURL:n}=e,i=(l=Xr(n))==null?void 0:l.pathname;if(i===void 0)return;let a=ss(o.pathname,i);if(a===void 0)return;let s=ps(t.keys());if(!t.has(s))return;let p=Xr(a,s);if(!p||!t.has(p.href))return;let c=Xr(a,r);if(c)return c.hash=o.hash,c.search=o.search,c}function Xr(e,t){try{return new URL(e,t)}catch(r){return}}function ss(e,t){if(e.startsWith(t))return e.slice(t.length)}function cs(e,t){let r=Math.min(e.length,t.length),o;for(o=0;oS)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),v(n=>h(document.body,"click").pipe(b(i=>!i.metaKey&&!i.ctrlKey),re(o),v(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let p=s.href;return!i.target.closest(".md-version")&&n.get(p)===a?S:(i.preventDefault(),I(new URL(p)))}}return S}),v(i=>ur(i).pipe(m(a=>{var s;return(s=si({selectedVersionSitemap:a,selectedVersionBaseURL:i,currentLocation:ye(),currentBaseURL:t.base}))!=null?s:i})))))).subscribe(n=>lt(n,!0)),z([r,o]).subscribe(([n,i])=>{R(".md-header__topic").appendChild(Cn(n,i))}),e.pipe(v(()=>o)).subscribe(n=>{var a;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let s=((a=t.version)==null?void 0:a.default)||"latest";Array.isArray(s)||(s=[s]);e:for(let p of s)for(let c of n.aliases.concat(n.version))if(new RegExp(p,"i").test(c)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let s of ae("outdated"))s.hidden=!1})}function ls(e,{worker$:t}){let{searchParams:r}=ye();r.has("q")&&(Je("search",!0),e.value=r.get("q"),e.focus(),ze("search").pipe(Ae(i=>!i)).subscribe(()=>{let i=ye();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=et(e),n=O(t.pipe(Ae(jt)),h(e,"keyup"),o).pipe(m(()=>e.value),K());return z([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),G(1))}function pi(e,{worker$:t}){let r=new g,o=r.pipe(Z(),ie(!0));z([t.pipe(Ae(jt)),r],(i,a)=>a).pipe(ee("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(ee("focus")).subscribe(({focus:i})=>{i&&Je("search",i)}),h(e.form,"reset").pipe(W(o)).subscribe(()=>e.focus());let n=R("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),ls(e,{worker$:t}).pipe(w(i=>r.next(i)),_(()=>r.complete()),m(i=>$({ref:e},i)),G(1))}function li(e,{worker$:t,query$:r}){let o=new g,n=on(e.parentElement).pipe(b(Boolean)),i=e.parentElement,a=R(":scope > :first-child",e),s=R(":scope > :last-child",e);ze("search").subscribe(l=>s.setAttribute("role",l?"list":"presentation")),o.pipe(re(r),Wr(t.pipe(Ae(jt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?Ee("search.result.none"):Ee("search.result.placeholder");break;case 1:a.textContent=Ee("search.result.one");break;default:let u=sr(l.length);a.textContent=Ee("search.result.other",u)}});let p=o.pipe(w(()=>s.innerHTML=""),v(({items:l})=>O(I(...l.slice(0,10)),I(...l.slice(10)).pipe(Be(4),Vr(n),v(([f])=>f)))),m(Mn),pe());return p.subscribe(l=>s.appendChild(l)),p.pipe(ne(l=>{let f=fe("details",l);return typeof 
f=="undefined"?S:h(f,"toggle").pipe(W(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(b(dr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),_(()=>o.complete()),m(l=>$({ref:e},l)))}function ms(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=ye();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function mi(e,t){let r=new g,o=r.pipe(Z(),ie(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(W(o)).subscribe(n=>n.preventDefault()),ms(e,t).pipe(w(n=>r.next(n)),_(()=>r.complete()),m(n=>$({ref:e},n)))}function fi(e,{worker$:t,keyboard$:r}){let o=new g,n=Se("search-query"),i=O(h(n,"keydown"),h(n,"focus")).pipe(ve(se),m(()=>n.value),K());return o.pipe(He(i),m(([{suggest:s},p])=>{let c=p.split(/([\s-]+)/);if(s!=null&&s.length&&c[c.length-1]){let l=s[s.length-1];l.startsWith(c[c.length-1])&&(c[c.length-1]=l)}else c.length=0;return c})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(b(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(b(dr),m(({data:s})=>s)).pipe(w(s=>o.next(s)),_(()=>o.complete()),m(()=>({ref:e})))}function ui(e,{index$:t,keyboard$:r}){let o=xe();try{let n=ai(o.search,t),i=Se("search-query",e),a=Se("search-result",e);h(e,"click").pipe(b(({target:p})=>p instanceof Element&&!!p.closest("a"))).subscribe(()=>Je("search",!1)),r.pipe(b(({mode:p})=>p==="search")).subscribe(p=>{let c=Ie();switch(p.type){case"Enter":if(c===i){let l=new Map;for(let f of P(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}p.claim()}break;case"Escape":case"Tab":Je("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof c=="undefined")i.focus();else{let l=[i,...P(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(c))+l.length+(p.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}p.claim();break;default:i!==Ie()&&i.focus()}}),r.pipe(b(({mode:p})=>p==="global")).subscribe(p=>{switch(p.type){case"f":case"s":case"/":i.focus(),i.select(),p.claim();break}});let s=pi(i,{worker$:n});return O(s,li(a,{worker$:n,query$:s})).pipe(Re(...ae("search-share",e).map(p=>mi(p,{query$:s})),...ae("search-suggest",e).map(p=>fi(p,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ye}}function di(e,{index$:t,location$:r}){return z([t,r.pipe(Q(ye()),b(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>ii(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let p=s.textContent,c=o(p);c.length>p.length&&n.set(s,c)}for(let[s,p]of n){let{childNodes:c}=x("span",null,p);s.replaceWith(...Array.from(c))}return{ref:e,nodes:n}}))}function fs(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return z([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),K((i,a)=>i.height===a.height&&i.locked===a.locked))}function Zr(e,o){var n=o,{header$:t}=n,r=so(n,["header$"]);let i=R(".md-sidebar__scrollwrap",e),{y:a}=Ve(i);return C(()=>{let s=new g,p=s.pipe(Z(),ie(!0)),c=s.pipe(Me(0,me));return 
c.pipe(re(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),c.pipe(Ae()).subscribe(()=>{for(let l of P(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2})}}}),ue(P("label[tabindex]",e)).pipe(ne(l=>h(l,"click").pipe(ve(se),m(()=>l),W(p)))).subscribe(l=>{let f=R(`[id="${l.htmlFor}"]`);R(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),fs(e,r).pipe(w(l=>s.next(l)),_(()=>s.complete()),m(l=>$({ref:e},l)))})}function hi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return st(je(`${r}/releases/latest`).pipe(de(()=>S),m(o=>({version:o.tag_name})),De({})),je(r).pipe(de(()=>S),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),De({}))).pipe(m(([o,n])=>$($({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(o=>({repositories:o.public_repos})),De({}))}}function bi(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return st(je(`${r}/releases/permalink/latest`).pipe(de(()=>S),m(({tag_name:o})=>({version:o})),De({})),je(r).pipe(de(()=>S),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),De({}))).pipe(m(([o,n])=>$($({},o),n)))}function vi(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return hi(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return bi(r,o)}return S}var us;function ds(e){return us||(us=C(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(ae("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return S}return vi(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>S),b(t=>Object.keys(t).length>0),m(t=>({facts:t})),G(1)))}function gi(e){let t=R(":scope > :last-child",e);return C(()=>{let r=new g;return r.subscribe(({facts:o})=>{t.appendChild(_n(o)),t.classList.add("md-source__repository--active")}),ds(e).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function hs(e,{viewport$:t,header$:r}){return ge(document.body).pipe(v(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),ee("hidden"))}function yi(e,t){return C(()=>{let r=new g;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(B("navigation.tabs.sticky")?I({hidden:!1}):hs(e,t)).pipe(w(o=>r.next(o)),_(()=>r.complete()),m(o=>$({ref:e},o)))})}function bs(e,{viewport$:t,header$:r}){let o=new Map,n=P(".md-nav__link",e);for(let s of n){let p=decodeURIComponent(s.hash.substring(1)),c=fe(`[id="${p}"]`);typeof c!="undefined"&&o.set(s,c)}let i=r.pipe(ee("height"),m(({height:s})=>{let p=Se("main"),c=R(":scope > :first-child",p);return s+.8*(c.offsetTop-p.offsetTop)}),pe());return ge(document.body).pipe(ee("height"),v(s=>C(()=>{let p=[];return I([...o].reduce((c,[l,f])=>{for(;p.length&&o.get(p[p.length-1]).tagName>=f.tagName;)p.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return c.set([...p=[...p,l]].reverse(),u)},new Map))}).pipe(m(p=>new Map([...p].sort(([,c],[,l])=>c-l))),He(i),v(([p,c])=>t.pipe(Fr(([l,f],{offset:{y:u},size:d})=>{let y=u+d.height>=Math.floor(s.height);for(;f.length;){let[,L]=f[0];if(L-c=u&&!y)f=[l.pop(),...f];else 
break}return[l,f]},[[],[...p]]),K((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([s,p])=>({prev:s.map(([c])=>c),next:p.map(([c])=>c)})),Q({prev:[],next:[]}),Be(2,1),m(([s,p])=>s.prev.length{let i=new g,a=i.pipe(Z(),ie(!0));if(i.subscribe(({prev:s,next:p})=>{for(let[c]of p)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[l]]of s.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",c===s.length-1)}),B("toc.follow")){let s=O(t.pipe(_e(1),m(()=>{})),t.pipe(_e(250),m(()=>"smooth")));i.pipe(b(({prev:p})=>p.length>0),He(o.pipe(ve(se))),re(s)).subscribe(([[{prev:p}],c])=>{let[l]=p[p.length-1];if(l.offsetHeight){let f=cr(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=ce(f);f.scrollTo({top:u-d/2,behavior:c})}}})}return B("navigation.tracking")&&t.pipe(W(a),ee("offset"),_e(250),Ce(1),W(n.pipe(Ce(1))),ct({delay:250}),re(i)).subscribe(([,{prev:s}])=>{let p=ye(),c=s[s.length-1];if(c&&c.length){let[l]=c,{hash:f}=new URL(l.href);p.hash!==f&&(p.hash=f,history.replaceState({},"",`${p}`))}else p.hash="",history.replaceState({},"",`${p}`)}),bs(e,{viewport$:t,header$:r}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))})}function vs(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:a}})=>a),Be(2,1),m(([a,s])=>a>s&&s>0),K()),i=r.pipe(m(({active:a})=>a));return z([i,n]).pipe(m(([a,s])=>!(a&&s)),K(),W(o.pipe(Ce(1))),ie(!0),ct({delay:250}),m(a=>({hidden:a})))}function Ei(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new g,a=i.pipe(Z(),ie(!0));return i.subscribe({next({hidden:s}){e.hidden=s,s?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(W(a),ee("height")).subscribe(({height:s})=>{e.style.top=`${s+16}px`}),h(e,"click").subscribe(s=>{s.preventDefault(),window.scrollTo({top:0})}),vs(e,{viewport$:t,main$:o,target$:n}).pipe(w(s=>i.next(s)),_(()=>i.complete()),m(s=>$({ref:e},s)))}function wi({document$:e,viewport$:t}){e.pipe(v(()=>P(".md-ellipsis")),ne(r=>tt(r).pipe(W(e.pipe(Ce(1))),b(o=>o),m(()=>r),Te(1))),b(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,B("content.tooltips")?mt(n,{viewport$:t}).pipe(W(e.pipe(Ce(1))),_(()=>n.removeAttribute("title"))):S})).subscribe(),B("content.tooltips")&&e.pipe(v(()=>P(".md-status")),ne(r=>mt(r,{viewport$:t}))).subscribe()}function Ti({document$:e,tablet$:t}){e.pipe(v(()=>P(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),ne(r=>h(r,"change").pipe(Dr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),re(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function gs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Si({document$:e}){e.pipe(v(()=>P("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),b(gs),ne(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Oi({viewport$:e,tablet$:t}){z([ze("search"),t]).pipe(m(([r,o])=>r&&!o),v(r=>I(r).pipe(Ge(r?400:100))),re(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of 
})(); +//# sourceMappingURL=bundle.83f73b43.min.js.map + diff --git a/assets/javascripts/bundle.83f73b43.min.js.map b/assets/javascripts/bundle.83f73b43.min.js.map new file mode 100644 index 000000000..fe920b7d6 --- /dev/null +++ b/assets/javascripts/bundle.83f73b43.min.js.map @@ -0,0 +1,7 @@ +{
+ "version": 3,
+ "sources": ["… machine-generated list of bundled source paths omitted: node_modules (focus-visible, escape-html, clipboard, tslib, rxjs) and src/templates/assets/javascripts (browser, components, templates, integrations, patches, polyfills) …"],
+ "sourcesContent": ["… machine-generated embedded source text omitted; begins with the focus-visible polyfill …
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? 
html + str.substring(lastIndex, index)\n : html;\n}\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*\n * Copyright (c) 2016-2024 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF 
ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
(DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected _checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return {Observable} Observable that the Subject casts to\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\n/**\n * @class AnonymousSubject\n */\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { Subject } from './Subject';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\n\n/**\n * A variant of Subject that requires an initial value and emits its current\n * value whenever it is subscribed to.\n *\n * @class BehaviorSubject\n */\nexport class BehaviorSubject extends Subject {\n constructor(private _value: T) {\n super();\n }\n\n get value(): T {\n return this.getValue();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n const subscription = super._subscribe(subscriber);\n !subscription.closed && subscriber.next(this._value);\n return subscription;\n }\n\n getValue(): T {\n const { hasError, thrownError, _value } = this;\n if (hasError) {\n throw thrownError;\n }\n this._throwIfClosed();\n return _value;\n }\n\n next(value: T): void {\n super.next((this._value = value));\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. 
`ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param bufferSize The size of the buffer to replay on subscription\n * @param windowTime The amount of time the buffered items will stay buffered\n * @param timestampProvider An object with a `now()` method that provides the current timestamp. This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. 
An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n *\n * @class Action\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler.\n * @return {void}\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n },\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. 
This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. `setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action has already an async Id, don't request a new one.\n this.id = this.id ?? this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n * @return {any}\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false && this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... */\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? 
e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @class Scheduler\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return {number} A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. 
May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. 
It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { Subscription } from '../Subscription';\nimport { QueueScheduler } from './QueueScheduler';\nimport { SchedulerAction } from '../types';\nimport { TimerHandle } from './timerHandle';\n\nexport class QueueAction extends AsyncAction {\n constructor(protected scheduler: QueueScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (delay > 0) {\n return super.schedule(state, delay);\n }\n this.delay = delay;\n this.state = state;\n this.scheduler.flush(this);\n return this;\n }\n\n public execute(state: T, delay: number): any {\n return delay > 0 || this.closed ? super.execute(state, delay) : this._execute(state, delay);\n }\n\n protected requestAsyncId(scheduler: QueueScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n\n if ((delay != null && delay > 0) || (delay == null && this.delay > 0)) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n\n // Otherwise flush the scheduler starting with this action.\n scheduler.flush(this);\n\n // HACK: In the past, this was returning `void`. However, `void` isn't a valid\n // `TimerHandle`, and generally the return value here isn't really used. So the\n // compromise is to return `0` which is both \"falsy\" and a valid `TimerHandle`,\n // as opposed to refactoring every other instanceo of `requestAsyncId`.\n return 0;\n }\n}\n", "import { AsyncScheduler } from './AsyncScheduler';\n\nexport class QueueScheduler extends AsyncScheduler {\n}\n", "import { QueueAction } from './QueueAction';\nimport { QueueScheduler } from './QueueScheduler';\n\n/**\n *\n * Queue Scheduler\n *\n * Put every next task on a queue, instead of executing it immediately\n *\n * `queue` scheduler, when used with delay, behaves the same as {@link asyncScheduler} scheduler.\n *\n * When used without delay, it schedules given task synchronously - executes it right when\n * it is scheduled. 
However when called recursively, that is when inside the scheduled task,\n * another task is scheduled with queue scheduler, instead of executing immediately as well,\n * that task will be put on a queue and wait for current one to finish.\n *\n * This means that when you execute task with `queue` scheduler, you are sure it will end\n * before any other task scheduled with that scheduler will start.\n *\n * ## Examples\n * Schedule recursively first, then do something\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(() => {\n * queueScheduler.schedule(() => console.log('second')); // will not happen now, but will be put on a queue\n *\n * console.log('first');\n * });\n *\n * // Logs:\n * // \"first\"\n * // \"second\"\n * ```\n *\n * Reschedule itself recursively\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(function(state) {\n * if (state !== 0) {\n * console.log('before', state);\n * this.schedule(state - 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * console.log('after', state);\n * }\n * }, 0, 3);\n *\n * // In scheduler that runs recursively, you would expect:\n * // \"before\", 3\n * // \"before\", 2\n * // \"before\", 1\n * // \"after\", 1\n * // \"after\", 2\n * // \"after\", 3\n *\n * // But with queue it logs:\n * // \"before\", 3\n * // \"after\", 3\n * // \"before\", 2\n * // \"after\", 2\n * // \"before\", 1\n * // \"after\", 1\n * ```\n */\n\nexport const queueScheduler = new QueueScheduler(QueueAction);\n\n/**\n * @deprecated Renamed to {@link queueScheduler}. Will be removed in v8.\n */\nexport const queue = queueScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n const flushId = this._scheduled;\n this._scheduled = undefined;\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? 
args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! : defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an