Release 2021-05-14 (#1374)
* adding files before rename

* removing versions that will be replaced

* renaming directories

* normalize markdown / links

* New PDFs generated by Github Actions

* add content for BDR 3.7 per issue #1348 (#1359)

* add content for BDR 3.7 per issue #1348

* address @daltjoh's feedback

* single-asterisk "extended" indicator

* New PDFs generated by Github Actions

* Removing join push down from MySQL FDW 2.5.5 release docset

* New PDFs generated by Github Actions

Co-authored-by: Robert Stringer <[email protected]>
Co-authored-by: Robert Stringer <[email protected]>
Co-authored-by: Robert Stringer <[email protected]>
Co-authored-by: Abhilasha Narendra <[email protected]>
Former-commit-id: d3309d0
5 people authored May 14, 2021
1 parent 8e7a1a1 commit af04fa8
Showing 39 changed files with 1,318 additions and 687 deletions.
68 changes: 68 additions & 0 deletions product_docs/docs/bdr/3.7/index.mdx
@@ -0,0 +1,68 @@
---
navTitle: BDR
title: "BDR (Bi-Directional Replication)"
directoryDefaults:
description: "BDR (Bi-Directional Replication) is a ground-breaking multi-master replication capability for PostgreSQL clusters that has been in full production status since 2014."
---

**BDR (Bi-Directional Replication)** is a ground-breaking multi-master replication capability for PostgreSQL clusters that has been in full production status since 2014. In the complex environment of replication, this third generation of BDR achieves efficiency and accuracy, enabling very high availability of all nodes in a geographically distributed cluster. This solution is for top-tier enterprise applications that require near-zero downtime and near-zero data loss.

As a standard PostgreSQL extension, BDR does this through logical replication of data and schema, along with a robust set of features and tooling to manage conflicts and monitor performance. This means applications with the most stringent demands can run with confidence on PostgreSQL.

BDR was built from the start to allow for rolling upgrades and developed in conjunction with partners who were replacing costly legacy solutions.

Two editions are available. BDR Standard provides essential multi-master replication capabilities, delivering row-level consistency to address high-availability and/or geographically distributed workloads. BDR Enterprise adds advanced conflict-handling and data-loss protection capabilities.

## BDR Enterprise

To provide very high availability, avoid data conflicts, and cope with more advanced usage scenarios, the Enterprise edition includes the following features not found in BDR Standard:

* Eager replication provides conflict-free replication by synchronizing across cluster nodes before committing a transaction **\***
* Commit-at-most-once consistency guards application transactions even in the presence of node failures **\***
* Parallel apply allows multiple writer processes to apply transactions on the downstream node, improving throughput up to 2X
* Single decoding worker improves performance on the upstream node by doing logical decoding of WAL once instead of once per downstream node **\***
* Conflict-free replicated data types (CRDTs) provide mathematically proven consistency in asynchronous multi-master update scenarios
* Column-level conflict resolution enables per-column last-update-wins resolution to merge updates
* Transform triggers execute on incoming data for modification or advanced programmatic filtering
* Conflict triggers provide custom resolution techniques when a conflict is detected
* Tooling to assess applications for distributed database suitability **\***

!!! Important **\*** Indicates a feature that is only available with EDB Postgres Extended at this time; it is expected to be available with EDB Postgres Advanced 14.
!!!
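As an illustration of the CRDT feature listed above, here is a minimal sketch using a BDR CRDT column type (the `bdr.crdt_gcounter` type name follows the BDR 3.x documentation; treat the exact DDL as an assumption):

```sql
-- A grow-only counter column: concurrent increments on different
-- nodes merge deterministically instead of conflicting.
CREATE TABLE page_hits (
    page_id integer PRIMARY KEY,
    hits    bdr.crdt_gcounter NOT NULL DEFAULT 0
);

-- Each node can run this concurrently; the replicated result
-- converges to the sum of all increments across the cluster.
UPDATE page_hits SET hits = hits + 1 WHERE page_id = 42;
```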

BDR Enterprise requires EDB Postgres Extended 11, 12, or 13 (formerly known as 2ndQuadrant Postgres), which is SQL compatible with PostgreSQL. For applications requiring Oracle compatibility, BDR Enterprise requires EDB Postgres Advanced 11, 12, or 13.

!!!note
The documentation for the new release 3.7 is available here:

[BDR 3.7 Enterprise Edition](https://documentation.2ndquadrant.com/bdr3-enterprise/release/latest/)

**This is a protected area of our website. If you need access, please [contact us](https://www.enterprisedb.com/contact).**
!!!

## BDR Standard

The Standard edition provides loosely coupled multi-master logical replication using a mesh topology. This means that you can write to any node, and the changes are sent directly, row by row, to all the other nodes that are part of the BDR cluster.

By default, BDR uses asynchronous replication to provide row-level eventual consistency, applying changes on the peer nodes only after the local commit.
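As a sketch of what assembling such a mesh looks like in practice, the following hedged example uses the BDR SQL-level API (function names such as `bdr.create_node` follow the BDR 3.x interface; the exact signatures, node names, and DSNs here are illustrative assumptions):

```sql
-- On the first node: register this Postgres instance as a BDR node
-- and create the top-level replication group.
SELECT bdr.create_node(
    node_name := 'node1',
    local_dsn := 'host=node1 port=5432 dbname=appdb'
);
SELECT bdr.create_node_group(node_group_name := 'mygroup');

-- On the second node: register it, then join the existing group.
-- Changes then flow directly between the nodes in a mesh.
SELECT bdr.create_node(
    node_name := 'node2',
    local_dsn := 'host=node2 port=5432 dbname=appdb'
);
SELECT bdr.join_node_group(
    join_target_dsn := 'host=node1 port=5432 dbname=appdb'
);
```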

The following are included to support very high availability and geographically distributed workloads:

* Rolling application and database upgrades to address the largest source of downtime
* Origin-based conflict detection and row-level last-update-wins conflict resolution
* DDL replication with granular locking supports changes to application schema, ideal for use in continuous release environments
* Sub-groups with subscribe-only nodes enable data distribution use cases for applications with very high read scaling requirements
* Sequence handling provides applications with different options for generating unique, multi-node-aware surrogate IDs
* Tools to monitor operation and verify data consistency
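For instance, a hedged sketch of the multi-node-aware sequence handling mentioned above (the `bdr.alter_sequence_set_kind` call and the `timeshard` kind follow the BDR 3.x docs; treat the names as assumptions):

```sql
-- Create an ordinary sequence, then hand it to BDR so that values
-- generated on different nodes cannot collide.
CREATE SEQUENCE order_id_seq;
SELECT bdr.alter_sequence_set_kind('public.order_id_seq', 'timeshard');

-- nextval() now returns cluster-unique IDs on every node.
CREATE TABLE orders (
    order_id bigint NOT NULL DEFAULT nextval('order_id_seq') PRIMARY KEY,
    details  text
);
```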

BDR Standard requires PostgreSQL 11, 12, or 13, or EDB Postgres Advanced 11, 12, or 13 for applications requiring Oracle compatibility.

!!!note
The documentation for the new release 3.7 is available here:

[BDR 3.7 Standard Edition](https://documentation.2ndquadrant.com/bdr3/release/latest/)

**This is a protected area of our website. If you need access, please [contact us](https://www.enterprisedb.com/contact).**
!!!


4 changes: 0 additions & 4 deletions product_docs/docs/hadoop_data_adapter/2.0.7/01_whats_new.mdx
@@ -1,9 +1,5 @@
---
title: "What’s New"

legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/whats_new.html"
---

<div id="whats_new" class="registered_link"></div>
@@ -1,29 +1,25 @@
---
title: "Requirements Overview"

legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/requirements_overview.html"
---

## Supported Versions

The Hadoop Foreign Data Wrapper is certified with EDB Postgres Advanced Server 9.5 and above.
The Hadoop Foreign Data Wrapper is certified with EDB Postgres Advanced Server 9.6 and above.

## Supported Platforms

The Hadoop Foreign Data Wrapper is supported on the following platforms:

**Linux x86-64**

- RHEL 8.x and 7.x
- CentOS 8.x and 7.x
- OEL 8.x and 7.x
- Ubuntu 20.04 and 18.04 LTS
- Debian 10.x and 9.x
> - RHEL 8.x and 7.x
> - CentOS 8.x and 7.x
> - OEL 8.x and 7.x
> - Ubuntu 20.04 and 18.04 LTS
> - Debian 10.x and 9.x
**Linux on IBM Power8/9 (LE)**

- RHEL 7.x
> - RHEL 7.x
The Hadoop Foreign Data Wrapper supports access to the Hadoop file system through a HiveServer2 interface, or through Apache Spark using the Spark Thrift Server.
@@ -1,9 +1,5 @@
---
title: "Architecture Overview"

legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/architecture_overview.html"
---

<div id="architecture_overview" class="registered_link"></div>
@@ -14,6 +10,4 @@ The Hadoop data wrapper provides an interface between a Hadoop file system and a

![Using a Hadoop distributed file system with Postgres](images/hadoop_distributed_file_system_with_postgres.png)

Using a Hadoop Distributed file system with Postgres

When possible, the Foreign Data Wrapper asks the Hive or Spark server to perform the actions associated with the `WHERE` clause of a `SELECT` statement. Pushing down the `WHERE` clause improves performance by decreasing the amount of data moving across the network.
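As a hedged sketch of this pushdown in use (the server options such as `client_type` follow the Hadoop FDW documentation, but the server names, credentials, and table definitions here are illustrative assumptions):

```sql
-- Define a foreign table backed by a Hive table via the Hadoop FDW.
CREATE EXTENSION hdfs_fdw;
CREATE SERVER hdfs_server
    FOREIGN DATA WRAPPER hdfs_fdw
    OPTIONS (host 'hive-host', port '10000', client_type 'hiveserver2');
CREATE USER MAPPING FOR CURRENT_USER SERVER hdfs_server
    OPTIONS (username 'hive_user', password 'secret');
CREATE FOREIGN TABLE weblogs (
    client_ip text,
    bytes     integer
) SERVER hdfs_server OPTIONS (dbname 'default', table_name 'weblogs');

-- The WHERE clause is pushed down to the Hive/Spark server, so only
-- matching rows cross the network.
SELECT client_ip, bytes FROM weblogs WHERE bytes > 1048576;
```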
@@ -1,9 +1,5 @@
---
title: "Supported Authentication Methods"

legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/supported_authentication_methods.html"
---

<div id="supported_authentication_methods" class="registered_link"></div>
@@ -50,7 +46,7 @@ Then, when starting the hive server, include the path to the `hive-site.xml` fil

Where *path_to_hive-site.xml_file* specifies the complete path to the `hive‑site.xml` file.

When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create User Mapping](07_configuring_the_hadoop_data_adapter/#create-user-mapping).
When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create User Mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping).
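For example, a minimal hedged sketch of such a user mapping (the `username`/`password` option names follow the Hadoop FDW docs; the server name and credentials are illustrative assumptions):

```sql
-- Map a Postgres role to a registered LDAP user; the FDW passes
-- these credentials to HiveServer2 when it connects.
CREATE USER MAPPING FOR CURRENT_USER SERVER hdfs_server
    OPTIONS (username 'ldap_user', password 'ldap_password');
```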

<div id="using_nosasl_authentication" class="registered_link"></div>

