
Commit

Merge pull request #1048 from EnterpriseDB/release/2021-03-09
Production Release 2021-03-09

Former-commit-id: 251f073
epbarger committed Mar 9, 2021
2 parents f140662 + 0229dba commit 0326062
Showing 355 changed files with 6,447 additions and 26,593 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/deploy-develop.yml
@@ -47,7 +47,7 @@ jobs:
NODE_OPTIONS: --max-old-space-size=4096
ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
ALGOLIA_APP_ID: ${{ secrets.ALGOLIA_APP_ID }}
ALGOLIA_INDEX_NAME: edb-staging
ALGOLIA_INDEX_NAME: edb-docs-staging
INDEX_ON_BUILD: true

- name: Netlify deploy
2 changes: 1 addition & 1 deletion .github/workflows/deploy-main.yml
@@ -47,7 +47,7 @@ jobs:
NODE_OPTIONS: --max-old-space-size=4096
ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
ALGOLIA_APP_ID: ${{ secrets.ALGOLIA_APP_ID }}
ALGOLIA_INDEX_NAME: edb
ALGOLIA_INDEX_NAME: edb-docs
GTM_ID: GTM-5W8M67
INDEX_ON_BUILD: true

1 change: 0 additions & 1 deletion .github/workflows/update-pdfs-on-develop.yml
@@ -13,7 +13,6 @@ jobs:
- uses: actions/checkout@v2
with:
ref: develop
fetch-depth: 0 # fetch whole repo so git-restore-mtime can work
ssh-key: ${{ secrets.ADMIN_SECRET_SSH_KEY }}
- name: Update submodules
run: git submodule update --init --remote
4 changes: 1 addition & 3 deletions README.md
@@ -36,8 +36,6 @@ We recommend using MacOS to work with the EDB Docs application.

1. Pull the shared icon files down with `git submodule update --init`.

1. Now select which sources you want with `yarn config-sources`.

1. And finally, you can start up the site locally with `yarn develop`, which should make it live at `http://localhost:8000/`. Huzzah!

### Installation of PDF / Doc Conversion Tools (optional)
@@ -64,7 +62,7 @@ If you are a Windows user, you can work with Docs without installing it locally

### Configuring Which Sources are Loaded

When doing local development of the site or advocacy content, you may want to load other sources to experience the full site. The more sources you load, the slower the site will build, so it's recommended to typically only load the content you'll be working with the most.
By default, all document sources will be loaded into the app during development. It's possible to set up a configuration file, `dev-sources.json`, to only load specific sources, but this is not required.

#### `yarn config-sources`

@@ -13,6 +13,7 @@ Cloud Native PostgreSQL currently supports clusters based on asynchronous and synchronous
* One primary, with optional multiple hot standby replicas for High Availability
* Available services for applications:
* `-rw`: applications connect to the only primary instance of the cluster
* `-ro`: applications connect only to hot standby replicas for read-only workloads
* `-r`: applications connect to any of the instances for read-only workloads
* Shared-nothing architecture recommended for better resilience of the PostgreSQL cluster:
* PostgreSQL instances should reside on different Kubernetes worker nodes and share only the network
@@ -45,12 +46,17 @@ The following diagram shows the architecture:

![Applications reading from any instance in round robin](./images/architecture-r.png)

Applications can also access hot standby replicas through the `-ro` service made available
by the operator. This service enables the application to offload read-only queries from the
primary node.
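For instance, the read-only service can be targeted from inside the cluster via its DNS name (the cluster name `cluster-example` and namespace `default` below are hypothetical, following the QuickStart conventions):

```shell
# Derive the in-cluster DNS name of the read-only service.
# "cluster-example" and "default" are hypothetical cluster/namespace names.
CLUSTER=cluster-example
NAMESPACE=default
RO_SERVICE="${CLUSTER}-ro.${NAMESPACE}.svc.cluster.local"
echo "${RO_SERVICE}"

# From a pod inside the cluster, a read-only session could then be opened with
# something like (database and user names are placeholders):
#   psql "host=${RO_SERVICE} port=5432 dbname=app user=app"
```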

## Application deployments

Applications are expected to connect to the services created by Cloud Native PostgreSQL
in the same Kubernetes cluster:

* `[cluster name]-rw`
* `[cluster name]-ro`
* `[cluster name]-r`

Those services are entirely managed by the Kubernetes cluster and
@@ -97,6 +103,9 @@ you can use the following environment variables in your applications:
* `PG_DATABASE_R_SERVICE_HOST`: the IP address of the service
pointing to all the PostgreSQL instances for read-only workloads

* `PG_DATABASE_RO_SERVICE_HOST`: the IP address of the
service pointing to all hot-standby replicas of the cluster

* `PG_DATABASE_RW_SERVICE_HOST`: the IP address of the
service pointing to the *primary* instance of the cluster
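As a sketch, assuming a `Cluster` named `pg-database` (a hypothetical name, from which Kubernetes derives the `PG_DATABASE_*` variable prefix), an application entrypoint might read these variables with safe fallbacks:

```shell
# Read the service-discovery variables injected by Kubernetes; fall back to
# "unset" when running outside the cluster (e.g. during local testing).
RO_HOST="${PG_DATABASE_RO_SERVICE_HOST:-unset}"
RW_HOST="${PG_DATABASE_RW_SERVICE_HOST:-unset}"
echo "read-only host:  ${RO_HOST}"
echo "read-write host: ${RW_HOST}"
```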

@@ -11,7 +11,7 @@ specific to Kubernetes and PostgreSQL.

| Resource | Description |
|-------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) | A *node* is a worker machine in Kubernetes, either virtual or physical, where all services necessary to run pods are managed by the master(s). |
| [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) | A *node* is a worker machine in Kubernetes, either virtual or physical, where all services necessary to run pods are managed by the control plane node(s). |
| [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) | A *pod* is the smallest computing unit that can be deployed in a Kubernetes cluster and is composed of one or more containers that share network and storage. |
| [Service](https://kubernetes.io/docs/concepts/services-networking/service/) | A *service* is an abstraction that exposes as a network service an application that runs on a group of pods and standardizes important features such as service discovery across applications, load balancing, failover, and so on. |
| [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) | A *secret* is an object that is designed to store small amounts of sensitive data such as passwords, access keys, or tokens, and use them in pods. |
144 changes: 144 additions & 0 deletions advocacy_docs/kubernetes/cloud_native_operator/cnp-plugin.mdx
@@ -0,0 +1,144 @@
---
title: 'Cloud Native PostgreSQL Plugin'
originalFilePath: 'src/cnp-plugin.md'
product: 'Cloud Native Operator'
---

Cloud Native PostgreSQL provides a plugin for `kubectl` to manage a cluster in Kubernetes.
The plugin also works with `oc` in an OpenShift environment.

## Install

You can install the plugin in your system with:

```sh
curl -sSfL \
https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
sudo sh -s -- -b /usr/local/bin
```

## Use

Once the plugin is installed, you can start using it like this:

```shell
kubectl cnp <command> <args...>
```

### Status

The `status` command provides a brief overview of the current status of your cluster.

```shell
kubectl cnp status cluster-example
```

```shell
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:13
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3

Instances status
Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart
-------- ----------- ------------ ---------- --------- ------- ----------- ------------- ---------------
cluster-example-1 0/6000060 6927251808674721812 ✓ ✗ ✗ ✗
cluster-example-2 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
cluster-example-3 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗

```

You can also get a more verbose version of the status by adding `--verbose` or just `-v`:

```shell
kubectl cnp status cluster-example --verbose
```

```shell
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:13
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3

PostgreSQL Configuration
archive_command = '/controller/manager wal-archive %p'
archive_mode = 'on'
archive_timeout = '5min'
full_page_writes = 'on'
hot_standby = 'true'
listen_addresses = '*'
logging_collector = 'off'
max_parallel_workers = '32'
max_replication_slots = '32'
max_worker_processes = '32'
port = '5432'
ssl = 'on'
ssl_ca_file = '/tmp/ca.crt'
ssl_cert_file = '/tmp/server.crt'
ssl_key_file = '/tmp/server.key'
unix_socket_directories = '/var/run/postgresql'
wal_keep_size = '512MB'
wal_level = 'logical'
wal_log_hints = 'on'


PostgreSQL HBA Rules
# Grant local access
local all all peer

# Require client certificate authentication for the streaming_replica user
hostssl postgres streaming_replica all cert clientcert=1
hostssl replication streaming_replica all cert clientcert=1

# Otherwise use md5 authentication
host all all all md5


Instances status
Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart
-------- ----------- ------------ ---------- --------- ------- ----------- ------------- ---------------
cluster-example-1 0/6000060 6927251808674721812 ✓ ✗ ✗ ✗
cluster-example-2 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
cluster-example-3 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
```

The command also supports output in `yaml` and `json` formats.

### Promote

The `promote` command promotes a pod in the cluster to primary, so you
can carry out maintenance work or test a switchover scenario in your cluster:

```shell
kubectl cnp promote cluster-example cluster-example-2
```

### Certificates

Clusters created using the Cloud Native PostgreSQL operator use a CA to sign
TLS authentication certificates.

To get a certificate, you need to provide a name for the secret that will store
the credentials, the cluster name, and a user for this certificate:

```shell
kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
```

After the secret is created, you can retrieve it using `kubectl`:

```shell
kubectl get secret cluster-cert
```

You can view its content in plain text using the following command:

```shell
kubectl get secret cluster-cert -o json | jq -r '.data | map(@base64d) | .[]'
```
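The `@base64d` filter above reverses the base64 encoding that Kubernetes applies to secret values; the round trip can be demonstrated in isolation (the sample string is arbitrary):

```shell
# Encode a sample value the way Kubernetes stores secret data, then decode it
# back. On BSD/macOS, "base64 -d" may need to be "base64 -D" or "--decode".
ENCODED=$(printf '%s' "app-certificate-data" | base64)
printf '%s' "$ENCODED" | base64 -d
```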
14 changes: 8 additions & 6 deletions advocacy_docs/kubernetes/cloud_native_operator/credits.mdx
@@ -7,13 +7,15 @@ product: 'Cloud Native Operator'
Cloud Native PostgreSQL (Operator for Kubernetes/OpenShift) has been designed,
developed, and tested by the EnterpriseDB Cloud Native team:

- Leonardo Cecchi
- Marco Nenciarini
- Jonathan Gonzalez
- Francesco Canovai
- Gabriele Bartolini
- Jonathan Battiato
- Francesco Canovai
- Leonardo Cecchi
- Valerio Del Sarto
- Niccolò Fei
- Devin Nemec
- Jonathan Gonzalez
- Danish Khan
- Marco Nenciarini
- Jitendra Wadle
- Adam Wright
- Gabriele Bartolini

2 changes: 1 addition & 1 deletion advocacy_docs/kubernetes/cloud_native_operator/e2e.mdx
@@ -32,7 +32,7 @@ and the following suite of E2E tests are performed on that cluster:
* Installation of the operator;
* Creation of a `Cluster`;
* Usage of a persistent volume for data storage;
* Connection via services;
* Connection via services, including read-only;
* Scale-up of a `Cluster`;
* Scale-down of a `Cluster`;
* Failover;
@@ -8,7 +8,7 @@ This section explains how to expose a PostgreSQL service externally, allowing access
to your PostgreSQL database **from outside your Kubernetes cluster** using
NGINX Ingress Controller.

If you followed the [QuickStart](/quickstart), you should have by now
If you followed the [QuickStart](./quickstart.md), you should have by now
a database that can be accessed inside the cluster via the
`cluster-example-rw` (primary) and `cluster-example-r` (read-only)
services in the `default` namespace. Both services use port `5432`.
14 changes: 5 additions & 9 deletions advocacy_docs/kubernetes/cloud_native_operator/failure_modes.mdx
@@ -131,25 +131,21 @@ Self-healing will happen after `tolerationSeconds`.

## Self-healing

If the failed pod is a standby, the pod is removed from the `-r` service.
If the failed pod is a standby, the pod is removed from the `-r` service
and from the `-ro` service.
The pod is then restarted using its PVC if available; otherwise, a new
pod will be created from a backup of the current primary. The pod
will be added again to the `-r` service when ready.
will be added again to the `-r` service and to the `-ro` service when ready.

If the failed pod is the primary, the operator will promote the active pod
with status ready and the lowest replication lag, then point the `-rw` service
to it. The failed pod will be removed from the `-r` service.
to it. The failed pod will be removed from the `-r` service and from the
`-ro` service.
Other standbys will start replicating from the new primary. The former
primary will use `pg_rewind` to synchronize itself with the new one if its
PVC is available; otherwise, a new standby will be created from a backup of the
current primary.

!!! Important
Due to a [bug in PostgreSQL 13 streaming replication](https://www.postgresql.org/message-id/flat/20201209.174314.282492377848029776.horikyota.ntt%40gmail.com)
it is not guaranteed that an existing standby is able to follow a promoted
primary, even if the new primary contains all the required WALs. Standbys
will be able to follow a primary if WAL archiving is configured.

## Manual intervention

In the case of undocumented failure, it might be necessary to intervene
1 change: 1 addition & 0 deletions advocacy_docs/kubernetes/cloud_native_operator/index.mdx
@@ -28,6 +28,7 @@ navigation:
- ssl_connections
- kubernetes_upgrade
- e2e
- cnp-plugin
- license_keys
- container_images
- operator_capability_levels
@@ -9,12 +9,12 @@ product: 'Cloud Native Operator'
The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.

You can install the [latest operator manifest](../samples/postgresql-operator-1.0.0.yaml)
You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml)
as follows:

```sh
kubectl apply -f \
https://docs.enterprisedb.io/cloud-native-postgresql/latest/samples/postgresql-operator-1.0.0.yaml
https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml
```

Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
@@ -80,7 +80,7 @@ When **disabled**, Kubernetes forces the recreation of the
Pod on a different node with a new PVC by relying on
PostgreSQL's physical streaming replication, then destroys
the old PVC together with the Pod. This scenario is generally
not recommended unless the database's size is small, and recloning
not recommended unless the database's size is small, and re-cloning
the new PostgreSQL instance takes less time than waiting.

!!! Note
@@ -70,7 +70,7 @@ PostgreSQL instance and to reconcile the pod status with the instance itself
based on the PostgreSQL cluster topology. The instance manager also starts a
web server that is invoked by the `kubelet` for probes. Unix signals invoked
by the `kubelet` are filtered by the instance manager and, where appropriate,
forwarded to the `postmaster` process for fast and controlled reactions to
forwarded to the `postgres` process for fast and controlled reactions to
external events. The instance manager is written in Go and has no external
dependencies.

@@ -374,7 +374,7 @@ for PostgreSQL have been implemented.
### Kubernetes events

Record major events as expected by the Kubernetes API, such as creating resources,
removing nodes, upgrading, and so on. Events can be displayed throught
removing nodes, upgrading, and so on. Events can be displayed through
the `kubectl describe` and `kubectl get events` commands.

## Level 5 - Auto Pilot
@@ -39,7 +39,7 @@ managed by the `primaryUpdateStrategy` option, accepting these two values:
The default and recommended value is `switchover`.

The upgrade keeps the Cloud Native PostgreSQL identity and does not
reclone the data. Pods will be deleted and created again with the same PVCs.
re-clone the data. Pods will be deleted and created again with the same PVCs.

During the rolling update procedure, the services endpoints move to reflect
the cluster's status, so the applications ignore the node that
Empty file.