diff --git a/advocacy_docs/dev-guides/deploy/images/pgadmin-connected.png b/advocacy_docs/dev-guides/deploy/images/pgadmin-connected.png
new file mode 100644
index 00000000000..ca1e4f3ae30
--- /dev/null
+++ b/advocacy_docs/dev-guides/deploy/images/pgadmin-connected.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37702b1cca7fdd62d1a07a5ed69a4865553baac29eb0fcd4550ae7610d18033a
+size 67078
diff --git a/advocacy_docs/dev-guides/deploy/images/pgadmin-default.png b/advocacy_docs/dev-guides/deploy/images/pgadmin-default.png
new file mode 100644
index 00000000000..4ed6e83d24d
--- /dev/null
+++ b/advocacy_docs/dev-guides/deploy/images/pgadmin-default.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fd7b3208d3df9041586538b807d3d81652364a7ba89851993e08f6c31107275
+size 91668
diff --git a/advocacy_docs/dev-guides/deploy/images/pgadmin-register-server.png b/advocacy_docs/dev-guides/deploy/images/pgadmin-register-server.png
new file mode 100644
index 00000000000..891367fbdac
--- /dev/null
+++ b/advocacy_docs/dev-guides/deploy/images/pgadmin-register-server.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:343759308119cecd59452f921b84ebee744b2457aecced1e9e3615a8ab44c265
+size 19729
diff --git a/advocacy_docs/dev-guides/deploy/windows.mdx b/advocacy_docs/dev-guides/deploy/windows.mdx
new file mode 100644
index 00000000000..63fb10026fa
--- /dev/null
+++ b/advocacy_docs/dev-guides/deploy/windows.mdx
@@ -0,0 +1,163 @@
+---
+title: Installing PostgreSQL for development and testing on Microsoft Windows
+navTitle: On Windows
+description: Install PostgreSQL on your local Windows machine for development purposes.
+product: postgresql
+iconName: logos/Windows
+---
+
+## Prerequisites
+
+- Windows 10 build 16299 or later
+- An account with Administrator rights
+
+## Installing
+
+For a development machine, we'll use [WinGet](https://learn.microsoft.com/en-us/windows/package-manager/winget/) to download and install PostgreSQL:
+
+```
+winget install PostgreSQL.PostgreSQL.16
+__OUTPUT__
+Found PostgreSQL 16 [PostgreSQL.PostgreSQL.16] Version 16.4-1
+This application is licensed to you by its owner.
+Microsoft is not responsible for, nor does it grant any licenses to, third-party packages.
+Downloading https://get.enterprisedb.com/postgresql/postgresql-16.4-1-windows-x64.exe
+ ██████████████████████████████ 356 MB / 356 MB
+Successfully verified installer hash
+Starting package install...
+Successfully installed
+```
+
+...where `16` is the major version of PostgreSQL we wish to install. This installs PostgreSQL in unattended mode: it won't ask what components to install, where to install them, or what the initial configuration should look like (although Windows may prompt you to approve admin access for the installer). The installer uses the default location (`C:\Program Files\PostgreSQL\`) and the default password (`postgres`), and also installs both [StackBuilder](/supported-open-source/postgresql/installing/using_stackbuilder/) and [pgAdmin](https://www.pgadmin.org/).
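+
+If you want an unattended installation but different defaults, WinGet can pass options through to the installer. Here's a sketch, not a definitive recipe: the `--override` flag belongs to WinGet, while the `--mode`, `--superpassword`, and `--serverport` options belong to the EDB installer and may vary by installer version:
+
+```
+winget install PostgreSQL.PostgreSQL.16 --override "--mode unattended --superpassword mypassword --serverport 5433"
+```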
+
+!!! Tip
+
+You can find a list of available PostgreSQL versions by using WinGet's `search` command:
+
+```
+winget search PostgreSQL.PostgreSQL
+__OUTPUT__
+Name Id Version Source
+------------------------------------------------------
+PostgreSQL 10 PostgreSQL.PostgreSQL.10 10 winget
+PostgreSQL 11 PostgreSQL.PostgreSQL.11 11 winget
+PostgreSQL 12 PostgreSQL.PostgreSQL.12 12.20-1 winget
+PostgreSQL 13 PostgreSQL.PostgreSQL.13 13.16-1 winget
+PostgreSQL 14 PostgreSQL.PostgreSQL.14 14.13-1 winget
+PostgreSQL 15 PostgreSQL.PostgreSQL.15 15.8-1 winget
+PostgreSQL 16 PostgreSQL.PostgreSQL.16 16.4-1 winget
+PostgreSQL 9 PostgreSQL.PostgreSQL.9 9 winget
+```
+
+Installers for release candidate versions of PostgreSQL are also available on EDB's website at https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
+
+!!!
+
+!!! SeeAlso "Further reading"
+
+For more control over installation (specifying components, location, port, superuser password...), you can also download and run the installer interactively by following the instructions in [Installing PostgreSQL on Windows](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/windows/).
+
+!!!
+
+## Add PostgreSQL commands to your path
+
+PostgreSQL comes with [several useful command-line tools](https://www.postgresql.org/docs/current/reference-client.html) for working with your databases. For convenience, you'll probably want to add their location to your path.
+
+1. Open the System Properties control panel and select the Advanced tab (or run `SystemPropertiesAdvanced.exe`)
+2. Activate the "Environment Variables..." button to open the environment variables editor
+3. Find the `Path` variable under the "System variables" heading, and click "Edit..."
+4. Add the path to the bin directory of the PostgreSQL version you installed, e.g. `C:\Program Files\PostgreSQL\16\bin\` (where *16* is the version of PostgreSQL that you installed). Alternatively, you can make the same change from a terminal, as shown below.
+
+!!! Info "This only affects new command prompts"
+
+If you have a command prompt open already, you'll need to close and reopen it for your changes to the system path to take effect.
+
+!!!
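+
+If you'd rather make the change from a terminal, here's a minimal PowerShell sketch (run as Administrator; it assumes the default install location shown above):
+
+```
+$pgbin = 'C:\Program Files\PostgreSQL\16\bin'
+$path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
+[Environment]::SetEnvironmentVariable('Path', "$path;$pgbin", 'Machine')
+```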
+
+## Verifying your installation
+
+If the steps above completed successfully, you'll now have a PostgreSQL service running with the default database and superuser account. Let's verify this by connecting, creating a new user and database, and then connecting using that user.
+
+### Connect with psql
+
+Open a new command prompt, and run
+
+```
+psql -U postgres
+__OUTPUT__
+Password for user postgres:
+
+psql (16.4)
+WARNING: Console code page (437) differs from Windows code page (1252)
+ 8-bit characters might not work correctly. See psql reference
+ page "Notes for Windows users" for details.
+Type "help" for help.
+
+postgres=#
+```
+
+When prompted, enter the default password (`postgres`).
+
+!!! Warning
+
+We're setting up an environment for local development, and have no intention of enabling remote connections or working with sensitive data - so leaving the default password is fine. Never leave the default password set on a production or multi-user system!
+
+!!!
+
+It's good practice to develop with a user and database other than the defaults (the `postgres` database and `postgres` superuser) - this applies the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) and helps avoid issues later on when you go to deploy. Since the superuser is allowed to modify *anything*, you'll never encounter permissions errors even if your app's configuration has you reading or writing to the wrong table, schema, or database... Yet that's certainly not the sort of bug you'd wish to ship! So let's start off right by creating an app user and giving it its own database to operate on:
+
+```
+create user myapp with password 'app-password';
+create database appdb with owner myapp;
+__OUTPUT__
+CREATE ROLE
+CREATE DATABASE
+```
+
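+You can confirm both from within psql - `\du` lists roles and `\l` lists databases:
+
+```
+\du myapp
+\l appdb
+```
+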
+!!! SeeAlso "Further reading"
+
+- [Database Roles](https://www.postgresql.org/docs/current/user-manag.html)
+- [Privileges](https://www.postgresql.org/docs/current/ddl-priv.html)
+
+!!!
+
+Now let's quit psql and connect with pgAdmin as our new user. Run:
+
+```
+\q
+```
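+
+If you'd like to double-check the new credentials from the command line first, reconnect as the new user (enter `app-password` when prompted):
+
+```
+psql -U myapp appdb
+```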
+
+### Connect with pgAdmin
+
+Launch pgAdmin via the Start menu (you can find it under "PostgreSQL 16", or just type "pgAdmin").
+
+![pgAdmin 4 default view - server groups on left, welcome message on right](images/pgadmin-default.png)
+
+If you poke around a bit (by expanding the Servers group on the left), you'll see that pgAdmin comes with a default connection configured for the `postgres` user to the `postgres` database in our local PostgreSQL server.
+
+Let's add a new connection for our app user to our app database.
+
+1. Right-click the Servers entry on the left, select Register and click Server:
+
+ ![pgadmin Servers group menu with register item and server submenu item selected](images/pgadmin-register-server.png)
+
+2. On the General tab, give it a descriptive name like "myapp db"
+
+3. Switch to the Connection tab and enter connection information, using the new role and database we created in psql above:
+
+ - `localhost` for Host name/address
+ - `appdb` for Maintenance database
+ - `myapp` for Username
+ - `app-password` for Password
+
+ ...and check the *Save password?* option and click the Save button.
+
+ ![pgadmin screen showing two server connections configured under Servers group on left, with PostgreSQL 16 disconnected and myapp db connected and selected, with activity charts for myapp db shown on right](images/pgadmin-connected.png)
+
+Note that some areas that require system-level permissions (e.g., logs) are unavailable, as you're not connected as a superuser. For these, you can use the default superuser connection.
+
+## Conclusion
+
+By following these steps, you've created a local environment for developing against PostgreSQL. You've created your own database and limited-access user to own it, and can proceed to create a schema, connect application frameworks, run tests, etc.
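+
+As a final smoke test that ties everything together (the path setup, the service, and the new role and database), you can run a one-off query from a new command prompt:
+
+```
+psql -U myapp -d appdb -c "select current_user, current_database();"
+```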
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
index e9c04317473..db5ad8ec548 100644
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/index.mdx
@@ -3,7 +3,7 @@ title: EDB Data Migration Service
indexCards: simple
deepToC: true
directoryDefaults:
- description: "EDB Data Migration Service is a PG AI integrated migration solution that enables secure, fault-tolerant, and performant migrations to EDB Postgres AI Cloud Service."
+ description: "EDB Data Migration Service is a solution that enables secure, fault-tolerant, and performant migrations to EDB Postgres AI Cloud Service."
product: "data migration service"
iconName: EdbTransporter
navigation:
@@ -14,8 +14,6 @@ navigation:
- limitations
- "#Get started"
- getting_started
- - "#Reference"
- - known_issues
---
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx
deleted file mode 100644
index 43548b9ae90..00000000000
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/known_issues.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: "Known issues"
-description: Review the currently known issues.
----
-
-There are currently no known issues for EDB Data Migration Service.
diff --git a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx
index e915c1ac2c7..affe8ca3b2e 100644
--- a/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx
+++ b/advocacy_docs/edb-postgres-ai/migration-etl/data-migration-service/limitations.mdx
@@ -1,9 +1,17 @@
---
-title: "Limitations"
+title: "Known issues, limitations, and notes"
description: Review any unsupported data types and features.
---
-## General limitations
+This section covers the known issues, limitations, and notes for:
+
+- [General EDB Data Migration Service limitations](#general-edb-data-migration-service-limitations)
+
+- [Oracle limitations](#oracle-limitations)
+
+- [Postgres limitations](#postgres-limitations)
+
+## General EDB Data Migration Service limitations
EDB DMS doesn’t currently support migrating schemas, tables, and columns that have case-sensitive names.
diff --git a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx
index b02c54af85f..1f0bdef4c0a 100644
--- a/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx
+++ b/advocacy_docs/supported-open-source/postgresql/installing/windows.mdx
@@ -31,7 +31,7 @@ To perform an installation using the graphical installation wizard, you need sup
!!!note Notes
- EDB doesn't support all non-ASCII, multi-byte characters in user or machine names. Use ASCII characters only to avoid installation errors related to unsupported path names.
- - If you're using the graphical installation wizard to perform a system ***upgrade***, the installer preserves the configuration options specified during the previous installation.
+ - If you're using the graphical installation wizard to perform a system upgrade, the installer preserves the configuration options specified during the previous installation.
!!!
1. To start the installation wizard, assume sufficient privileges, and double-click the installer icon. If prompted, provide a password. (In some versions of Windows, to invoke the installer with administrator privileges, you must select **Run as Administrator** from the installer icon's context menu.)
diff --git a/product_docs/docs/epas/12/epas_compat_sql/21_create_public_database_link.mdx b/product_docs/docs/epas/12/epas_compat_sql/21_create_public_database_link.mdx
index 6b6a8ffb920..b449ef0c169 100644
--- a/product_docs/docs/epas/12/epas_compat_sql/21_create_public_database_link.mdx
+++ b/product_docs/docs/epas/12/epas_compat_sql/21_create_public_database_link.mdx
@@ -33,7 +33,7 @@ When the `CREATE DATABASE LINK` command is given, the database link name and the
A SQL command containing a reference to a database link must be issued while connected to the local database. When the SQL command is executed, the appropriate authentication and connection is made to the remote database to access the table or view to which the `@dblink` reference is appended.
!!!note "Oracle compatibility"
-- For EDB Postgres Advanced Server 12, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, and 23.
+- For EDB Postgres Advanced Server 12, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, 21c, and 23.
!!!
!!!note
diff --git a/product_docs/docs/epas/13/epas_compat_sql/21_create_public_database_link.mdx b/product_docs/docs/epas/13/epas_compat_sql/21_create_public_database_link.mdx
index da362cebf0c..9ca483f8fe0 100644
--- a/product_docs/docs/epas/13/epas_compat_sql/21_create_public_database_link.mdx
+++ b/product_docs/docs/epas/13/epas_compat_sql/21_create_public_database_link.mdx
@@ -32,7 +32,7 @@ When the `CREATE DATABASE LINK` command is given, the database link name and the
A SQL command containing a reference to a database link must be issued while connected to the local database. When the SQL command is executed, the appropriate authentication and connection is made to the remote database to access the table or view to which the `@dblink` reference is appended.
!!!note "Oracle compatibility"
-- For EDB Postgres Advanced Server 13, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, and 23.
+- For EDB Postgres Advanced Server 13, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, 21c, and 23.
!!!
!!!note
diff --git a/product_docs/docs/epas/14/epas_compat_sql/21_create_public_database_link.mdx b/product_docs/docs/epas/14/epas_compat_sql/21_create_public_database_link.mdx
index cff48bfaf81..547518c688f 100644
--- a/product_docs/docs/epas/14/epas_compat_sql/21_create_public_database_link.mdx
+++ b/product_docs/docs/epas/14/epas_compat_sql/21_create_public_database_link.mdx
@@ -33,7 +33,7 @@ When you use the `CREATE DATABASE LINK` command, the database link name and the
You must be connected to the local database when you issue a SQL command containing a reference to a database link. When the SQL command executes, the appropriate authentication and connection is made to the remote database to access the table or view to which the `@dblink` reference is appended.
!!!note "Oracle compatibility"
-- For EDB Postgres Advanced Server 14, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, and 23.
+- For EDB Postgres Advanced Server 14, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, 21c, and 23.
!!!
!!!note
diff --git a/product_docs/docs/epas/14/epas_guide/03_database_administration/04_pgsnmpd.mdx b/product_docs/docs/epas/14/epas_guide/03_database_administration/04_pgsnmpd.mdx
index a2acc959b50..ba5230339d2 100644
--- a/product_docs/docs/epas/14/epas_guide/03_database_administration/04_pgsnmpd.mdx
+++ b/product_docs/docs/epas/14/epas_guide/03_database_administration/04_pgsnmpd.mdx
@@ -5,7 +5,7 @@ title: "pgsnmpd"
!!! Note
- 'pgsnnmpd' is deprecated as of EDB Postgres Advanced Server v14. It isn't included in EDB Postgres Advanced Server v15 and later.
+ `pgsnmpd` is deprecated as of EDB Postgres Advanced Server v14. It isn't included in EDB Postgres Advanced Server v15 and later. We recommend using Postgres Enterprise Manager and its [built-in interfaces for SNMP](/pem/latest/monitoring_performance/notifications/) for monitoring needs.
`pgsnmpd` is an SNMP agent that can return hierarchical information about the current state of EDB Postgres Advanced Server on a Linux system. `pgsnmpd` is distributed and installed using the `edb-asxx-pgsnmpd` RPM package, where `xx` is the EDB Postgres Advanced Server version number. The `pgsnmpd` agent can operate as a standalone SNMP agent, as a pass-through subagent, or as an AgentX subagent.
diff --git a/product_docs/docs/epas/15/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx b/product_docs/docs/epas/15/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
index 32e14da500e..b36909e28f4 100644
--- a/product_docs/docs/epas/15/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
+++ b/product_docs/docs/epas/15/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
@@ -35,7 +35,7 @@ When you use the `CREATE DATABASE LINK` command, the database link name and the
You must be connected to the local database when you issue a SQL command containing a reference to a database link. When the SQL command executes, the appropriate authentication and connection is made to the remote database to access the table or view to which the `@dblink` reference is appended.
!!!note "Oracle compatibility"
-- For EDB Postgres Advanced Server 15, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, and 23.
+- For EDB Postgres Advanced Server 15, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, 21c, and 23.
!!!
!!!note
diff --git a/product_docs/docs/epas/16/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx b/product_docs/docs/epas/16/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
index 06904c64e16..e1db1a6e95f 100644
--- a/product_docs/docs/epas/16/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
+++ b/product_docs/docs/epas/16/reference/oracle_compatibility_reference/epas_compat_sql/21_create_public_database_link.mdx
@@ -35,7 +35,7 @@ When you use the `CREATE DATABASE LINK` command, the database link name and the
You must be connected to the local database when you issue a SQL command containing a reference to a database link. When the SQL command executes, the appropriate authentication and connection is made to the remote database to access the table or view to which the `@dblink` reference is appended.
!!!note "Oracle compatibility"
-- For EDB Postgres Advanced Server 16, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, and 23.
+- For EDB Postgres Advanced Server 16, the CREATE DATABASE LINK command has been tested and certified with all the minor versions for use with Oracle versions 10g Release 2, 11g Release 2, 12c Release 1, 18c Release 1, 19c, 21c, and 23.
!!!
!!!note
diff --git a/product_docs/docs/eprs/7/07_common_operations/09_offline_snapshot.mdx b/product_docs/docs/eprs/7/07_common_operations/09_offline_snapshot.mdx
index 62cde3723dc..4666eab1384 100644
--- a/product_docs/docs/eprs/7/07_common_operations/09_offline_snapshot.mdx
+++ b/product_docs/docs/eprs/7/07_common_operations/09_offline_snapshot.mdx
@@ -88,9 +88,9 @@ The default value is `true`.
You can use an offline snapshot to first load the subscription tables of a single-master replication system. For a publication that's intended to have multiple subscriptions, you can create some of the subscriptions using the default Replication Server snapshot replication process as described in [Performing snapshot replication](../05_smr_operation/04_on_demand_replication/01_perform_replication/#perform_replication). You can create other subscriptions from an offline snapshot.
-### Preparing the publication and subscription server configuration:
+### Preparing the publication and subscription server configuration
-Perform these steps before creating any subscriptions:
+Before creating any subscriptions:
1. Register the publication server, add the publication database definition, and create the publication as described in [Creating a publication](../05_smr_operation/02_creating_publication/#creating_publication).
@@ -101,18 +101,21 @@ Perform these steps before creating any subscriptions:
- Change the `offlineSnapshot` option to `true`. When you restart the publication server or reload the publication server's configuration via reloadconf, `offlineSnapshot` set to `true` has two effects. One is that creating a subscription doesn't create the schema and subscription table definitions in the subscription database as is done with the default setting. The other is that creating a subscription sets a column in the control schema indicating an offline snapshot is used to load this subscription.
- Set the `batchInitialSync` option to the appropriate setting for your situation as discussed at the end of [Non-batch mode synchronization](#non_batch_mode_sync).
-1. If you modified the publication server configuration, reload the configuration. See [Reloading the Publication or Subscription Server Configuration File (reloadconf)](../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) for directions on reloading the publication server's configuration.
+1. If you modified the publication server configuration, reload the configuration. See [Reloading the publication or subscription server configuration file (reloadconf)](../08_xdb_cli/03_xdb_cli_commands/52_reload_conf_file/).
### Creating subscription servers
-Execute these steps to create a subscription from an offline snapshot. Repeat them for each additional subscription.
+Perform these steps for each subscription you want to create from an offline snapshot.
-1. Add the subscription as described in [Adding a subscription](../05_smr_operation/03_creating_subscription/03_adding_subscription/#adding_subscription).
+!!! Note
+ See [Adding a subscription database](../05_smr_operation/03_creating_subscription/02_adding_subscription_database) for the rules for how Replication Server creates the subscription definitions from the publication for each database type. Follow these conventions when you create the target definitions manually.
+
+1. Add the subscription as described in [Adding a subscription](../05_smr_operation/03_creating_subscription/03_adding_subscription). The subscription database user name you use when you add the subscription must have full privileges over the database objects you plan to create in the subscription database.
-1. In the subscription database, create the schema and the subscription table definitions, and load the subscription tables from your offline data source. The subscription database user name used in [Adding a subscription database](../05_smr_operation/03_creating_subscription/02_adding_subscription_database/#adding_subscription_database) must have full privileges over the database objects created in this step. Also review the beginning of [Adding a subscription database](../05_smr_operation/03_creating_subscription/02_adding_subscription_database/#adding_subscription_database) regarding the rules as to how Replication Server creates the subscription definitions from the publication for each database type. You must follow these same conventions when you create the target definitions manually.
+1. In the subscription database, create the schema and the subscription table definitions, and load the subscription tables from your offline data source.
!!!note
- Ensure you don't load the offline data source from the source Publication database until after you complete the creation of a subscription. Otherwise, certain changes from the source database won't be replicated.
+ Ensure you don't load the offline data source from the source publication database until after you finish creating a subscription. Otherwise, certain changes from the source database won't be replicated.
!!!
1. Perform an on-demand synchronization replication. See [Performing synchronization replication](../05_smr_operation/04_on_demand_replication/02_perform_sync_replication/#perform_sync_replication) to learn how to perform an on-demand synchronization replication.
@@ -128,11 +131,11 @@ You can use an offline snapshot to first load the primary nodes of a multi-maste
!!! Note
Offline snapshots aren't supported for a multi-master replication system that's actively in use. Any changes on an active primary node are lost during the offline snapshot process of dumping or restoring the data of another node.
-### Preparing the publication and subscription server configurations:
+### Preparing the publication and subscription server configurations
-Perform these steps before adding primary nodes:
+Before adding primary nodes:
-1. Register the publication server, add the primary definition node, and create the publication as described in [Creating a publication](../06_mmr_operation/02_creating_publication_mmr/#creating_publication_mmr).
+1. Register the publication server, add the primary definition node, and create the publication as described in [Creating a publication](../06_mmr_operation/02_creating_publication_mmr).
1. Be sure there's no schedule defined on the replication system. If there is, remove the schedule until you complete this process. See [Removing a schedule](03_managing_schedule/#remove_schedule) for details.
@@ -141,21 +144,21 @@ Perform these steps before adding primary nodes:
- Set the `offlineSnapshot` option to `true`. When you restart the publication server or reload the publication server's configuration via reloadconf, this setting has the effect that adding a primary node sets a column in the control schema indicating an offline snapshot is used to load this primary node.
- Set the `batchInitialSync` option to the appropriate setting for your situation as discussed at the end of [Non-batch mode synchronization](#non_batch_mode_sync).
-1. If you modified the publication server configuration file, reload the configuration. See [Reloading the Publication or Subscription Server Configuration File (reloadconf)](../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) for directions to reload the publication server's configuration.
+1. If you modified the publication server configuration file, reload the configuration. See [Reloading the publication or subscription server configuration file (reloadconf)](../08_xdb_cli/03_xdb_cli_commands/52_reload_conf_file/).
-### Adding primary nodes
+### Adding primary nodes
-Execute these steps to add a primary node from an offline snapshot. Repeat them for each additional primary node.
+Perform these steps for each primary node you want to add from an offline snapshot.
-1. Add the primary node as described in [Creating more primary nodes](../06_mmr_operation/03_creating_primary_nodes/#creating_primary_nodes) with the options **Replicate Publication Schema** and **Perform Initial Snapshot** cleared.
+1. Add the primary node as described in [Creating more primary nodes](../06_mmr_operation/03_creating_primary_nodes) with the options **Replicate Publication Schema** and **Perform Initial Snapshot** cleared.
1. In the database to use as the new primary node, create the schema and the table definitions, and load the tables from your offline data source.
!!!note
- Ensure you don't load the offline data source from the source Publication database until after you add the target node in the MMR cluster. Otherwise, certain changes from the source database won't be replicated.
+ Ensure you don't load the offline data source from the source publication database until after you add the target node in the MMR cluster. Otherwise, certain changes from the source database won't be replicated.
!!!
-1. Perform an initial on-demand synchronization. See [Performing synchronization replication](../06_mmr_operation/05_on_demand_replication_mmr/#perform_synchronization_replication_mmr) to learn how to perform an on demand-synchronization.
+1. Perform an initial on-demand synchronization. See [Performing synchronization replication](../06_mmr_operation/05_on_demand_replication_mmr/#perform_synchronization_replication_mmr).
1. If you aren't planning to load any other primary nodes using an offline snapshot at this time, change the `offlineSnapshot` option back to `false` and the `batchInitialSync` option to `true` in the publication server configuration file.
diff --git a/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx b/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx
index 9b43e9f2357..96dfc139cf2 100644
--- a/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx
+++ b/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx
@@ -28,7 +28,7 @@ Before executing Migration Toolkit commands, modify the `toolkit.properties` fil
!!! Note
- Unless specified in the command line, Migration Toolkit expects the source database to be Oracle and the target database to be EDB Postgres Advanced Server. For any other source or target database, specify the `-sourcedbtype` or `-targetdbtype` options as described in [Migrating from a non-Oracle source database](/migration_toolkit/latest/07_invoking_mtk/#migrating-from-a-non-oracle-source-database).
+ Unless specified in the command line, Migration Toolkit expects the source database to be Oracle and the target database to be EDB Postgres Advanced Server. For any other source or target database, specify the `-sourcedbtype` or `-targetdbtype` options as described in [Migrating from a non-Oracle source database](/migration_toolkit/latest/07_invoking_mtk/#migrating-schemas-from-a-non-oracle-source-database).
For specifying a target database on BigAnimal, see [Defining a BigAnimal URL](#defining-a-biganimal-url).
diff --git a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/08_mtk_command_options.mdx b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/08_mtk_command_options.mdx
index bd0f32dbe2a..fbb5225b197 100644
--- a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/08_mtk_command_options.mdx
+++ b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/08_mtk_command_options.mdx
@@ -43,40 +43,40 @@ If you specify the `-offlineMigration` option in the command line, Migration Too
!!! Note
The following examples invoke Migration Toolkit in Linux. To invoke Migration Toolkit in Windows, use the `runMTK.bat` command instead of the `runMTK.sh` command.
-To perform an offline migration of both schema and data, specify the `‑offlineMigration` keyword, followed by the schema name:
+To perform an offline migration of both schema and data, specify the `‑offlineMigration` keyword, followed by the [schema scope](../07_invoking_mtk/#migrating-schemas):
```shell
-./runMTK.sh -offlineMigration <schema_name>
+./runMTK.sh -offlineMigration <schema_scope>
```
-Each database object definition is saved in a separate file with a name derived from the schema name and object type in your home folder. To specify an alternative file destination, include a directory name after the `‑offlineMigration` option:
+Each database object definition is saved in your home folder, in a separate file with a name derived from each schema name and object type. To specify an alternative file destination, include a directory name after the `‑offlineMigration` option:
```shell
-./runMTK.sh -offlineMigration <file_dest> <schema_name>
+./runMTK.sh -offlineMigration <file_dest> <schema_scope>
```
To perform an offline migration of only schema objects (creating empty tables), specify the `‑schemaOnly` keyword in addition to the `‑offlineMigration` keyword when invoking Migration Toolkit:
```shell
-./runMTK.sh -offlineMigration -schemaOnly <schema_name>
+./runMTK.sh -offlineMigration -schemaOnly <schema_scope>
```
To perform an offline migration of only data, omitting any schema object definitions, specify the `‑dataOnly` keyword and the `‑offlineMigration` keyword when invoking Migration Toolkit:
```shell
-./runMTK.sh -offlineMigration -dataOnly <schema_name>
+./runMTK.sh -offlineMigration -dataOnly <schema_scope>
```
By default, data is written in COPY format. To write the data in a plain SQL format, include the `‑safeMode` keyword:
```shell
-./runMTK.sh -offlineMigration -dataOnly -safeMode <schema_name>
+./runMTK.sh -offlineMigration -dataOnly -safeMode <schema_scope>
```
By default, when you perform an offline migration that contains table data, a separate file is created for each table. To create a single file that contains the data from multiple tables, specify the `‑singleDataFile` keyword:
```shell
-./runMTK.sh -offlineMigration -dataOnly -singleDataFile -safeMode <schema_name>
+./runMTK.sh -offlineMigration -dataOnly -singleDataFile -safeMode <schema_scope>
```
!!! Note
diff --git a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/index.mdx b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/index.mdx
index 95abd4d0fb0..7f4bdfae2b1 100644
--- a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/index.mdx
+++ b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/index.mdx
@@ -1,12 +1,17 @@
---
title: "Invoking Migration Toolkit"
-
+description: "Learn how to perform a migration with the Migration Toolkit."
+deepToC: true
---
After installing Migration Toolkit and specifying connection properties for the source and target databases in the [toolkit.properties file](../06_building_toolkit.properties_file), Migration Toolkit is ready to perform migrations.
+!!!note
+ Ensure the operating system user account running the Migration Toolkit is the owner of the `toolkit.properties` file with a minimum of read permission on the file. See [Invalid file permissions](../09_mtk_errors/#invalid-file-permissions) for more information.
+!!!
+
The Migration Toolkit executable is named `runMTK.sh` on Linux systems and `runMTK.bat` on Windows systems. On a Linux system, the executable is located in:
`/usr/edb/migrationtoolkit/bin`
@@ -17,68 +22,123 @@ On Windows, the executable is located in:
See [Migration Toolkit command options](08_mtk_command_options) for information on controlling details of the migration.
+## Importing character data with embedded binary zeros (NULL characters)
+
+Migration Toolkit supports importing a column with a value of NULL. However, Migration Toolkit doesn't support importing NULL character values (embedded binary zeros 0x00) with the JDBC connection protocol. If you're importing data that includes the NULL character, use the `-replaceNullChar` option to replace the NULL character with a single, non-NULL, replacement character.
!!! Note
- If the following error appears upon invoking the Migration Toolkit, check the file permissions of the `toolkit.properties` file.
+ - MTK implicitly replaces NULL characters with an empty string.
+ - The `-replaceNullChar` option doesn't work with the `-copyViaDBLinkOra` option.
+
+Once the data is migrated, use a SQL statement to replace the character specified by `-replaceNullChar` with binary zeros.
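+
+For example, here's a sketch that substitutes a space for embedded NULL characters while migrating a hypothetical `HR` schema (the replacement character is passed as the option's argument):
+
+```shell
+$ ./runMTK.sh -replaceNullChar ' ' HR
+```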
+
+## Migrating schemas
+
+When migrating schemas from the source database specified in the [toolkit.properties file](../06_building_toolkit.properties_file), you must provide the `<schema_scope>` for the migration. You have these options:
+
+- To migrate a single schema, specify the schema name.
+- To migrate multiple schemas, provide a comma-delimited list of schema names.
+- To migrate all the schemas, use the `-allSchemas` option.
+
+These examples show how to run the command depending on your operating system and schema scope.
+
+| | On Linux | On Windows |
+|------------------|--------------------------------------------------------------|---------------------------------------------------------------|
+| Single schema    | `$ ./runMTK.sh <schema_name>`                                | `> .\runMTK.bat <schema_name>`                                |
+| Multiple schemas | `$ ./runMTK.sh <schema_name1>,<schema_name2>,<schema_name3>` | `> .\runMTK.bat <schema_name1>,<schema_name2>,<schema_name3>` |
+| All schemas      | `$ ./runMTK.sh -allSchemas`                                  | `> .\runMTK.bat -allSchemas`                                  |
+
+### Migrating schemas from Oracle
-```text
-MTK-11015: The connection credentials file ../etc/toolkit.properties is
-not secure and accessible to group/others users. This file contains
-plain passwords and should be restricted to Migration Toolkit owner user
-only.
+Unless specified in the command line, Migration Toolkit expects the source database to be Oracle and the target database to be EDB Postgres Advanced Server. To migrate schemas, navigate to the executable and invoke the following command.
+
+On Linux:
+
+```shell
+$ ./runMTK.sh <schema_scope>
```
-The operating system user account running the Migration Toolkit must be the owner of the `toolkit.properties` file with a minimum of read permission on the file. In addition, there must be no permissions of any kind for group and other users. The following is an example of the recommended file permissions, where user enterprisedb is running the Migration Toolkit.
+On Windows:
-`-rw------- 1 enterprisedb enterprisedb 191 Aug 1 09:59 toolkit.properties`
+```shell
+> .\runMTK.bat <schema_scope>
+```
-## Importing character data with embedded binary zeros (NULL characters)
+Where `<schema_scope>` is the [scope of schemas](#migrating-schemas) to migrate, which can be a single schema, multiple schemas, or all schemas from the specified database.
-Migration Toolkit supports importing a column with a value of NULL. However, Migration Toolkit doesn't support importing NULL character values (embedded binary zeros 0x00) with the JDBC connection protocol. If you're importing data that includes the NULL character, use the `-replaceNullChar` option to replace the NULL character with a single, non-NULL, replacement character.
+**Single schema example**
-!!! Note
- - MTK implicitly replaces NULL characters with an empty string.
- - The `-replaceNullChar` option doesn't work with the `-copyViaDBLinkOra` option.
+The following example migrates a schema (table definitions and table content) named `HR` from an Oracle database to an EDB Postgres Advanced Server database.
-Once the data is migrated, use a SQL statement to replace the character specified by `-replaceNullChar` with binary zeros.
+On Linux:
-## Migrating a schema from Oracle
+```shell
+$ ./runMTK.sh HR
+```
-Unless specified in the command line, Migration Toolkit expects the source database to be Oracle and the target database to be EDB Postgres Advanced Server. To migrate a complete schema on Linux, navigate to the executable and invoke the following command:
+On Windows:
-`$ ./runMTK.sh `
+```shell
+> .\runMTK.bat HR
+```
-To migrate a complete schema on Windows, navigate to the executable and invoke the following command:
+
+
-`> .\runMTK.bat `
+**Multiple schemas example**
-Where:
+The following example migrates multiple schemas (named `HR` and `ACCTG`) from an Oracle database to a PostgreSQL database.
-`` is the name of the schema in the source database specified in the `toolkit.properties` file that you want to migrate. You must include at least one `schema_name` value.
+On Linux:
-!!! Note
- - When the default database user of a migrated schema is automatically migrated, the custom profile of the default database user is also migrated if a custom profile exists. A custom profile is a user-created profile. For example, custom profiles exclude Oracle profiles `DEFAULT` and `MONITORING_PROFILE`.
- - PostgreSQL default rows are limited to 8 KB in size. This means that each table row must fit into a single 8 KB block, Otherwise, an error occurs indicating, for example, that we can create a table with 1600 columns of INT and insert data for all the columns. However, we can't do the same with BIGINT columns because INT is stored as 4 bytes, but each BIGINT requires more space (8 bytes). For more information, see [PostgreSQL Limits](https://www.postgresql.org/docs/14/limits.html) in the PostgreSQL documentation.
+```shell
+$ ./runMTK.sh -targetdbtype postgres HR,ACCTG
+```
-You can migrate multiple schemas by following the command name with a comma-delimited list of schema names.
+On Windows:
-On Linux, execute the following command:
+```shell
+> .\runMTK.bat -targetdbtype postgres HR,ACCTG
+```
-`$ ./runMTK.sh ,,`
+
+
-On Windows, execute the following command:
+**All schemas example**
-`> .\runMTK.bat ,,`
+The following example migrates all schemas from the Oracle database specified in the [toolkit.properties file](../06_building_toolkit.properties_file) to an EDB Postgres Advanced Server database.
+
+On Linux:
+
+```shell
+$ ./runMTK.sh -allSchemas
+```
+
+On Windows:
+
+```shell
+> .\runMTK.bat -allSchemas
+```
+
+
+
+
+!!!note Notes
+ - When the default database user of a migrated schema is automatically migrated, the custom profile of the default database user is also migrated if a custom profile exists. A custom profile is a user-created profile. For example, custom profiles exclude Oracle profiles `DEFAULT` and `MONITORING_PROFILE`.
+ - PostgreSQL default rows are limited to 8 KB in size. This means that each table row must fit into a single 8 KB block, Otherwise, an error occurs indicating, for example, that we can create a table with 1600 columns of INT and insert data for all the columns. However, we can't do the same with BIGINT columns because INT is stored as 4 bytes, but each BIGINT requires more space (8 bytes). For more information, see [PostgreSQL Limits](https://www.postgresql.org/docs/14/limits.html) in the PostgreSQL documentation.
+!!!
-## Migrating from a non-Oracle source database
+### Migrating schemas from a non-Oracle source database
If you don't specify a source database type and a target database type, Postgres assumes the source database is Oracle and the target database is EDB Postgres Advanced Server.
To invoke Migration Toolkit, open a command window, navigate to the executable, and invoke the following command:
-`$ ./runMTK.sh -sourcedbtype <source_db_type> -targetdbtype <target_db_type> [options, …] <schema_name>;`
+```shell
+$ ./runMTK.sh -sourcedbtype <source_db_type> -targetdbtype <target_db_type> [options, …] <schema_scope>;
+```
+`-sourcedbtype <source_db_type>`
@@ -102,26 +162,66 @@ To invoke Migration Toolkit, open a command window, navigate to the executable,
| EDB Postgres Advanced Server | enterprisedb |
| PostgreSQL | postgres or postgresql |
-`<schema_name>`
+`<schema_scope>`
-`<schema_name>` is the name of the schema in the source database specified in the `toolkit.properties` file that you want to migrate. You must include at least one `<schema_name>` value.
+Where `<schema_scope>` is the [scope of schemas](#migrating-schemas) to migrate, which can be a single schema, multiple schemas, or all schemas from the specified database.
+
+**Single schema example**
The following example migrates a schema (table definitions and table content) named `HR` from a MySQL database on a Linux system to an EDB Postgres Advanced Server host. The command includes the `‑sourcedbtype` and `-targetdbtype` options.
-On Linux, use the following command:
+On Linux:
+
+```shell
+$ ./runMTK.sh -sourcedbtype mysql -targetdbtype enterprisedb HR
+```
+
+On Windows:
+
+```shell
+> .\runMTK.bat -sourcedbtype mysql -targetdbtype enterprisedb HR
+```
+
+
+
+
+**Multiple schemas example**
+
+The following example migrates multiple schemas (named `HR` and `ACCTG`) from a MySQL database to a PostgreSQL database.
+
+On Linux:
+
+```shell
+$ ./runMTK.sh -sourcedbtype mysql -targetdbtype postgres HR,ACCTG
+```
-`$ ./runMTK.sh -sourcedbtype mysql -targetdbtype enterprisedb HR`
+On Windows:
-On Windows, use the following command:
+```shell
+> .\runMTK.bat -sourcedbtype mysql -targetdbtype postgres HR,ACCTG
+```
+
+
+
-`> .\runMTK.bat -sourcedbtype mysql -targetdbtype enterprisedb HR`
+**All schemas example**
-You can migrate multiple schemas from a source database by including a comma-delimited list of schemas at the end of the Migration Toolkit command. The following example migrates multiple schemas (named HR and ACCTG) from a MySQL database to a PostgreSQL database.
+The following example migrates all schemas from a PostgreSQL database specified in the [toolkit.properties file](../06_building_toolkit.properties_file) to an EDB Postgres Advanced Server database.
-On Linux, use the following command:
+!!! Note
+ The `-allSchemas` parameter is supported only for Oracle, EDB Postgres Advanced Server, and PostgreSQL source databases.
-`$ ./runMTK.sh -sourcedbtype mysql -targetdbtype postgres HR,ACCTG`
+On Linux:
-On Windows, use the following command:
+```shell
+$ ./runMTK.sh -sourcedbtype postgres -allSchemas
+```
+
+On Windows:
+
+```shell
+> .\runMTK.bat -sourcedbtype postgres -allSchemas
+```
-`> .\runMTK.bat -sourcedbtype mysql -targetdbtype postgres HR,ACCTG`
+
+
\ No newline at end of file
diff --git a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/creating_txt_file.mdx b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/creating_txt_file.mdx
index 669c2ef1bfc..e207fc54dd3 100644
--- a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/creating_txt_file.mdx
+++ b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/creating_txt_file.mdx
@@ -66,12 +66,12 @@ Specifying an option both at the command line and in the text file causes the mi
## Order of processing
-Migration Toolkit reads command line options and option files in the order you provide them when running the command. Ensure you add the [schema scope](executing_migration_with_txt/#provide-the-scope-for-the-schema-migration) (`schema_name` or `-allSchemas`) as the last parameter at the command line.
+Migration Toolkit reads command line options and option files in the order you provide them when running the command. Ensure you add the [schema scope](../../07_invoking_mtk/#migrating-schemas) as the last parameter at the command line.
-For example, if you run the following command, Migration Toolkit first recognizes the `-sourcedbtype oracle` option, and then reads the contents of `options_textfile` in order from top to bottom. The last parameter is the schema scope (`` or `-allSchemas`).
+For example, if you run the following command, Migration Toolkit first recognizes the `-sourcedbtype oracle` option, and then reads the contents of `options_textfile` in order from top to bottom. The last parameter is the `<schema_scope>`.
```shell
-runMTK.sh -sourcedbtype oracle -optionsFile options_textfile schema_name
+runMTK.sh -sourcedbtype oracle -optionsFile options_textfile <schema_scope>
```
Using an options file means that you can employ different syntaxes to perform a migration where parameters are executed in the same way. The following alternatives perform the same migration.
@@ -79,7 +79,7 @@ Using an options file means that you can employ different syntaxes to perform a
**Alternative 1**
```shell
-runMTK.sh -sourcedbtype oracle -optionsFile <options_file> <schema_name>
+runMTK.sh -sourcedbtype oracle -optionsFile <options_file> <schema_scope>
```
Where the content of the `<options_file>` file is:
@@ -92,7 +92,7 @@ dataOnly
**Alternative 2**
```shell
-runMTK.sh -sourcedbtype oracle -optionsFile <options_file> -dataOnly <schema_name>
+runMTK.sh -sourcedbtype oracle -optionsFile <options_file> -dataOnly <schema_scope>
```
Where the content of the `<options_file>` file is:
diff --git a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/executing_migration_with_txt.mdx b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/executing_migration_with_txt.mdx
index 0c9d94eb0b2..55d4fe39199 100644
--- a/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/executing_migration_with_txt.mdx
+++ b/product_docs/docs/migration_toolkit/55/07_invoking_mtk/mtk_command_options_in_file/executing_migration_with_txt.mdx
@@ -7,7 +7,7 @@ deepToC: true
After you create the options file, reference it when executing the migration command:
```shell
-./runMTK.sh -optionsFile <options_file> <schema_name>
+./runMTK.sh -optionsFile <options_file> <schema_scope>
```
!!!note
@@ -24,6 +24,8 @@ Specify the scope of the schemas to migrate:
- If you want to specify a subset of schemas, specify the schemas you want to migrate at the command line with no preceding option and as a comma-separated list. Schema specifications must be the last parameter at the command line.
+See [Migrating schemas](../../07_invoking_mtk/#migrating-schemas) for more information on `<schema_scope>`.
+
Here are some examples for specifying all options in the file.
## Migrate a schema with specific tables
@@ -37,13 +39,13 @@ tables comp_schema.emp,comp_schema.dept,finance_schema.acctg
Syntax of the migration command:
```shell
-./runMTK.sh -optionsFile options_textfile schema_name
+./runMTK.sh -optionsFile options_textfile schema1
```
Command line equivalent:
```shell
-./runMTK.sh -tables comp_schema.emp,comp_schema.dept,finance_schema.acctg schema_name
+./runMTK.sh -tables comp_schema.emp,comp_schema.dept,finance_schema.acctg comp_schema,finance_schema
```
## Use options file to exclude tables and include functions
@@ -71,13 +73,13 @@ funcs finance_schema.add_two_numbers
Syntax of the migration command:
```shell
-./runMTK.sh -allTables -optionsFile excludeInclude.options -safeMode -connRetryCount 7 schema_name
+./runMTK.sh -allTables -optionsFile excludeInclude.options -safeMode -connRetryCount 7 schema1
```
Command line equivalent:
```shell
-./runMTK.sh -allTables -excludeTables comp_schema.emp,finance_schema.jobhist,temp_schema.temp_table,more_schema.more_tables -funcs finance_schema.add_two_numbers -safeMode -connRetryCount 7 schema_name
+./runMTK.sh -allTables -excludeTables comp_schema.emp,finance_schema.jobhist,temp_schema.temp_table,more_schema.more_tables -funcs finance_schema.add_two_numbers -safeMode -connRetryCount 7 schema1
```
## Offline migration
diff --git a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx
index 5067097113e..29cd9cb5629 100644
--- a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx
+++ b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx
@@ -37,6 +37,23 @@ not valid to use to connect to the Oracle source database.
To resolve this error, edit the `toolkit.properties` file, specifying the name and password of a valid user with privileges to perform the migration in the `SRC_DB_USER` and `SRC_DB_PASSWORD` properties.
+### Invalid file permissions
+
+*When I try to perform a migration with Migration Toolkit, I get the following error:*
+
+```text
+MTK-11015: The connection credentials file ../etc/toolkit.properties is
+not secure and accessible to group/others users. This file contains
+plain passwords and should be restricted to Migration Toolkit owner user
+only.
+```
+
+To resolve this error, ensure the operating system user account running the Migration Toolkit is the owner of the `toolkit.properties` file with a minimum of read permission on the file. In addition, there must be no permissions of any kind for group and other users.
+
+The following is an example of the recommended file permissions, where user enterprisedb is running the Migration Toolkit.
+
+`-rw------- 1 enterprisedb enterprisedb 191 Aug 1 09:59 toolkit.properties`
+
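+On Linux, you can set the recommended ownership and permissions with the following commands. This is a sketch that assumes the default installation path and the enterprisedb user; adjust both for your environment:
+
+```shell
+chown enterprisedb:enterprisedb /usr/edb/migrationtoolkit/etc/toolkit.properties
+chmod 600 /usr/edb/migrationtoolkit/etc/toolkit.properties
+```
+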
### Connection rejected: FATAL: password
*When I try to perform a migration with Migration Toolkit, I get the following error:*
diff --git a/product_docs/docs/pgd/5/appusage/extensions.mdx b/product_docs/docs/pgd/5/appusage/extensions.mdx
new file mode 100644
index 00000000000..026af5c3a0e
--- /dev/null
+++ b/product_docs/docs/pgd/5/appusage/extensions.mdx
@@ -0,0 +1,77 @@
+---
+title: Using extensions with PGD
+navTitle: Extension usage
+deepToC: true
+---
+
+## PGD and other PostgreSQL extensions
+
+PGD is implemented as a PostgreSQL extension (see [Supported Postgres database servers](../overview/#supported-postgres-database-servers)), taking advantage of PostgreSQL's expandability and flexibility to modify low-level system behavior to provide multi-master replication.
+
+In principle, other extensions (those provided by community PostgreSQL and EPAS, as well as third-party extensions) can be used together with PGD. However, the distributed nature of PGD means that the selection and installation of extensions should be carefully considered and planned.
+
+### Extensions providing logical decoding
+
+Extensions providing logical decoding, such as [wal2json](https://github.com/eulerto/wal2json), may in theory work with PGD, but there is no support for failover, meaning any WAL stream being read from such an extension may be interrupted.
+
+### Extensions providing replication and/or HA functionality
+
+Any extension that extends PostgreSQL with replication or HA/failover functionality is unlikely to work well with PGD, and may even be detrimental to the health of the PGD cluster, so it should be avoided.
+
+## Supported extensions
+
+This section lists extensions that are explicitly supported by PGD.
+
+### EDB Advanced Server Table Access methods
+
+The [EDB Advanced Storage Pack](/pg_extensions/advanced_storage_pack/) provides a selection of table access methods (TAMs) implemented as extensions, of which the following are certified for use with PGD:
+
+ - [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster)
+ - [Refdata](/pg_extensions/advanced_storage_pack/#refdata)
+
+For more details, see [Table access methods](table-access-methods).
+
+### pgaudit
+
+PGD has been modified to ensure compatibility with the [pgaudit](https://www.pgaudit.org/) extension.
+See [Postgres settings](../postgres-configuration/#postgres-settings) for configuration information.
+
+
+## Installing extensions
+
+PostgreSQL extensions provide SQL objects such as functions and datatypes, and optionally also one or more shared libraries, which must be loaded into the PostgreSQL backend before the extension can be installed and used.
+
+!!! Warning
+
+ The relevant extension packages must be available on all nodes in the cluster, otherwise extension installation may fail and impact cluster stability.
+
+If PGD is deployed using [Trusted Postgres Architect](/tpa/latest/), configure extensions through TPA.
+For more details see [Adding Postgres extensions](/tpa/latest/reference/postgres_extension_configuration).
+
+The following sections are relevant for manually configured PGD installations.
+
+### Configuring `shared_preload_libraries`
+
+If an extension provides a shared library, this library must be included in the [shared_preload_libraries](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES) configuration parameter before the extension itself is installed.
+
+`shared_preload_libraries` consists of a comma-separated list of extension names.
+It must include the `bdr` extension.
+The order in which other extensions are specified is generally unimportant. However, if you're using the `pgaudit` extension, `pgaudit` **must** appear in the list before `bdr`.
+
+`shared_preload_libraries` should be configured on all nodes in the cluster before installing the extension with `CREATE EXTENSION`.
+Note that a PostgreSQL restart is required to activate the new configuration.
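+
+For example, a minimal `postgresql.conf` entry for a PGD node that also uses `pgaudit` might look like this sketch (your own list may include additional libraries):
+
+```
+shared_preload_libraries = 'pgaudit, bdr'
+```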
+
+See also [Postgres settings](../postgres-configuration/#postgres-settings).
+
+
+### Installing the extension
+
+The extension itself is installed with the `CREATE EXTENSION` command.
+This needs to be carried out on only one node in the cluster. PGD's DDL replication ensures that the extension propagates to all other nodes.
+
+!!! Warning
+
+    Do not attempt to install extensions manually on each node by, for example, disabling DDL replication before executing `CREATE EXTENSION`.
+
+ Do not use a command such as `bdr.replicate_ddl_command()` to execute `CREATE EXTENSION`.
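+
+For example, a minimal sketch using pgaudit (any supported extension is installed the same way):
+
+```sql
+-- Run on any single node. PGD's DDL replication
+-- propagates the extension to all other nodes.
+CREATE EXTENSION pgaudit;
+```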
+
diff --git a/product_docs/docs/pgd/5/appusage/index.mdx b/product_docs/docs/pgd/5/appusage/index.mdx
index 3b805f76e82..86d5c463b1e 100644
--- a/product_docs/docs/pgd/5/appusage/index.mdx
+++ b/product_docs/docs/pgd/5/appusage/index.mdx
@@ -9,6 +9,7 @@ navigation:
- nodes-with-differences
- rules
- timing
+- extensions
- table-access-methods
- feature-compatibility
---
@@ -32,6 +33,8 @@ Developing an application with PGD is mostly the same as working with any Postgr
* [Timing considerations](timing) shows how the asynchronous/synchronous replication might affect an application's view of data and notes functions to mitigate stale reads.
+* [Extension usage](extensions) explains how to select, install, and configure extensions on PGD.
+
* [Table access methods](table-access-methods) (TAMs) notes the TAMs available with PGD and how to enable them.
* [Feature compatibility](feature-compatibility) shows which server features work with which commit scopes and which commit scopes can be daisy chained together.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx b/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx
index 1c94d45854a..0a2ceb59aef 100644
--- a/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx
+++ b/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx
@@ -56,6 +56,9 @@ The node that's joining the cluster must not contain any schema or data
that already exists on databases in the PGD group. We recommend that the
newly joining database be empty except for the BDR extension. However,
it's important that all required database users and roles are created.
+Additionally, if the join operation is to be carried out by a non-superuser, any
+extensions requiring superuser permission must be created manually beforehand.
+For more details, see [Connections and roles](../security/role-management#connections-and-roles).
Optionally, you can skip the schema synchronization using the
`synchronize_structure` parameter of the
diff --git a/product_docs/docs/pgd/5/security/role-management.mdx b/product_docs/docs/pgd/5/security/role-management.mdx
index 8df0e7bf0eb..6960ae18f8a 100644
--- a/product_docs/docs/pgd/5/security/role-management.mdx
+++ b/product_docs/docs/pgd/5/security/role-management.mdx
@@ -53,6 +53,21 @@ nodes, such that following stipulations are satisfied:
- It owns all database objects to replicate, either directly or from
permissions from the owner roles.
+Additionally, if any non-default extensions (excluding the `bdr` extension
+itself) are present on the source node, and any of these can be installed only
+by a superuser, those extensions must be created manually (by a superuser) on
+the join target node; otherwise, the join process will fail.
+
+In PostgreSQL 13 and later, extensions that require superuser permission, and
+that therefore need to be installed manually, can be identified by executing
+the following on the source node:
+
+```sql
+ SELECT name, (trusted IS FALSE AND superuser) AS superuser_only
+ FROM pg_available_extension_versions
+ WHERE installed AND name != 'bdr';
+```
+
Once all nodes are joined, to continue to allow DML and DDL replication, you can reduce the permissions further to the following:
- The user has the `REPLICATION` attribute.
diff --git a/tools/user/mergeq/.gitignore b/tools/user/mergeq/.gitignore
new file mode 100644
index 00000000000..e120fec43e8
--- /dev/null
+++ b/tools/user/mergeq/.gitignore
@@ -0,0 +1,2 @@
+.envrc
+node_modules
diff --git a/tools/user/mergeq/README.md b/tools/user/mergeq/README.md
new file mode 100644
index 00000000000..b8b33f7c8e6
--- /dev/null
+++ b/tools/user/mergeq/README.md
@@ -0,0 +1,15 @@
+# Mergeq
+
+Utility that reads the docs GitHub repo and finds any PRs that have been merged but not yet published in a production release. It then generates a list of those PRs with their titles, to assist in preparing the PR for a production release.
+
+## Usage
+
+First, install the dependencies by running `npm install` in `tools/user/mergeq`. Then, from the docs root:
+
+```bash
+$ tools/user/mergeq/mergeq.js
+```
+
+This outputs a list of PRs that have been merged but not yet published in a production release, followed by a count.
+
diff --git a/tools/user/mergeq/mergeq.js b/tools/user/mergeq/mergeq.js
new file mode 100755
index 00000000000..619bb92ab72
--- /dev/null
+++ b/tools/user/mergeq/mergeq.js
@@ -0,0 +1,62 @@
+#! /usr/bin/env node
+import fetch from "node-fetch";
+
+// Repository to query; replace with your own details if needed
+const owner = "enterprisedb";
+const repo = "docs";
+// Optional: set GITHUB_TOKEN in the environment for authenticated requests
+// (useful to avoid GitHub's unauthenticated rate limits)
+const token = process.env.GITHUB_TOKEN || "";
+
+// Returns the publication date of the repo's most recent GitHub release
+async function getLatestReleaseDate(owner, repo, token) {
+ const url = `https://api.github.com/repos/${owner}/${repo}/releases/latest`;
+ const headers = token ? { Authorization: `token ${token}` } : {};
+ const response = await fetch(url, { headers });
+ if (!response.ok) {
+ throw new Error(`Error fetching latest release: ${response.statusText}`);
+ }
+ const data = await response.json();
+ return new Date(data.published_at);
+}
+
+// Lists PRs against the develop branch that were merged after sinceDate
+async function getMergedPullRequestsSince(owner, repo, sinceDate, token) {
+ const url = `https://api.github.com/repos/${owner}/${repo}/pulls`;
+ const headers = token ? { Authorization: `token ${token}` } : {};
+  const params = new URLSearchParams({
+    state: "closed",
+    base: "develop",
+    sort: "updated",
+    direction: "desc",
+    per_page: "100", // GitHub defaults to 30, which can miss recently merged PRs
+  });
+ const response = await fetch(`${url}?${params.toString()}`, { headers });
+ if (!response.ok) {
+ throw new Error(`Error fetching pull requests: ${response.statusText}`);
+ }
+ const pulls = await response.json();
+  // Keep only PRs that were actually merged (not just closed) after sinceDate
+  const mergedPulls = pulls.filter(
+    (pr) => pr.merged_at && new Date(pr.merged_at) > sinceDate,
+  );
+  return mergedPulls.map((pr) => `#${pr.number} : ${pr.title}`);
+}
+
+async function main() {
+ try {
+ const latestReleaseDate = await getLatestReleaseDate(owner, repo, token);
+ const pulls = await getMergedPullRequestsSince(
+ owner,
+ repo,
+ latestReleaseDate,
+ token,
+ );
+ for (const pull of pulls) {
+ console.log(pull);
+ }
+ console.log();
+ console.log(
+ `Number of merged pull requests since the last release: ${pulls.length}`,
+ );
+ } catch (error) {
+ console.error("Error fetching data from GitHub API:", error);
+ }
+}
+
+main();
diff --git a/tools/user/mergeq/package-lock.json b/tools/user/mergeq/package-lock.json
new file mode 100644
index 00000000000..e9870f74b03
--- /dev/null
+++ b/tools/user/mergeq/package-lock.json
@@ -0,0 +1,100 @@
+{
+ "name": "mergeq",
+ "version": "1.0.0",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {
+ "": {
+ "name": "mergeq",
+ "version": "1.0.0",
+ "license": "ISC",
+ "dependencies": {
+ "node-fetch": "^3.3.2"
+ }
+ },
+ "node_modules/data-uri-to-buffer": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz",
+ "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==",
+ "engines": {
+ "node": ">= 12"
+ }
+ },
+ "node_modules/fetch-blob": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz",
+ "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "paypal",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "dependencies": {
+ "node-domexception": "^1.0.0",
+ "web-streams-polyfill": "^3.0.3"
+ },
+ "engines": {
+ "node": "^12.20 || >= 14.13"
+ }
+ },
+ "node_modules/formdata-polyfill": {
+ "version": "4.0.10",
+ "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
+ "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
+ "dependencies": {
+ "fetch-blob": "^3.1.2"
+ },
+ "engines": {
+ "node": ">=12.20.0"
+ }
+ },
+ "node_modules/node-domexception": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
+ "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "github",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "engines": {
+ "node": ">=10.5.0"
+ }
+ },
+ "node_modules/node-fetch": {
+ "version": "3.3.2",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz",
+ "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==",
+ "dependencies": {
+ "data-uri-to-buffer": "^4.0.0",
+ "fetch-blob": "^3.1.4",
+ "formdata-polyfill": "^4.0.10"
+ },
+ "engines": {
+ "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/node-fetch"
+ }
+ },
+ "node_modules/web-streams-polyfill": {
+ "version": "3.3.3",
+ "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
+ "integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
+ "engines": {
+ "node": ">= 8"
+ }
+ }
+ }
+}
diff --git a/tools/user/mergeq/package.json b/tools/user/mergeq/package.json
new file mode 100644
index 00000000000..b664db45360
--- /dev/null
+++ b/tools/user/mergeq/package.json
@@ -0,0 +1,15 @@
+{
+ "name": "mergeq",
+ "version": "1.0.0",
+ "description": "Calculate the backlog of Merged PRs since last production builds",
+ "main": "mergeq.js",
+ "type": "module",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "author": "",
+ "license": "ISC",
+ "dependencies": {
+ "node-fetch": "^3.3.2"
+ }
+}