Merge pull request #85 from kaleido-io/psql
Persistence enhancements - including adding PostgreSQL support
peterbroadhurst authored Jun 30, 2023
2 parents be4cafd + de4c9b7 commit ec94e7b
Showing 114 changed files with 9,273 additions and 2,258 deletions.
25 changes: 21 additions & 4 deletions .github/workflows/go.yml
@@ -10,19 +10,36 @@ on:
jobs:
  build:
    runs-on: ubuntu-latest
    container: golang:1.19-bullseye
    defaults:
      run:
        shell: bash # needed for codecov
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: f1refly
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      - name: Fetch history # See https://github.com/actions/checkout/issues/477
        run: |-
          git config --global --add safe.directory $PWD
          git fetch origin
      - name: Build and Test
        env:
          TEST_FLAGS: -v
          POSTGRES_HOSTNAME: postgres
          POSTGRES_PASSWORD: f1refly
          POSTGRES_PORT: 5432
        run: make

      - name: Upload coverage
1 change: 0 additions & 1 deletion .golangci.yml
@@ -35,7 +35,6 @@ linters:
  disable-all: false
  enable:
    - bodyclose
    - depguard
    - dogsled
    - errcheck
    - goconst
3 changes: 2 additions & 1 deletion .vscode/settings.json
@@ -89,6 +89,7 @@
"txid",
"txns",
"txtype",
"txwriter",
"unflushed",
"unmarshalled",
"unmarshalling",
@@ -102,5 +103,5 @@
"wsconfig",
"wsmocks"
],
"go.testTimeout": "10s"
"go.testTimeout": "20s"
}
2 changes: 1 addition & 1 deletion Makefile
@@ -35,10 +35,10 @@ $(eval $(call makemock, pkg/ffcapi, API, ffc
$(eval $(call makemock, pkg/txhandler, TransactionHandler, txhandlermocks))
$(eval $(call makemock, pkg/txhandler, ManagedTxEventHandler, txhandlermocks))
$(eval $(call makemock, internal/metrics, TransactionHandlerMetrics, metricsmocks))
$(eval $(call makemock, pkg/txhistory, Manager, txhistorymocks))
$(eval $(call makemock, internal/confirmations, Manager, confirmationsmocks))
$(eval $(call makemock, internal/persistence, Persistence, persistencemocks))
$(eval $(call makemock, internal/persistence, TransactionPersistence, persistencemocks))
$(eval $(call makemock, internal/persistence, RichQuery, persistencemocks))
$(eval $(call makemock, internal/ws, WebSocketChannels, wsmocks))
$(eval $(call makemock, internal/ws, WebSocketServer, wsmocks))
$(eval $(call makemock, internal/events, Stream, eventsmocks))
16 changes: 16 additions & 0 deletions README.md
@@ -46,6 +46,14 @@ The framework is currently constrained to blockchains that adhere to certain bas
4. Has finality for transactions & events that can be expressed as a level of confidence over time
- Confirmations: A number of sequential blocks in the canonical chain that contain the transaction

## Developer setup

To run the PostgreSQL tests, you need a local database, which can be started as follows:

```bash
docker run -d --name postgres -e POSTGRES_PASSWORD=f1refly -p 5432:5432 postgres
```
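
With the database running, the build and tests can be pointed at it using the same `POSTGRES_*` environment variables that the CI workflow (`.github/workflows/go.yml`) sets. This is a minimal sketch, assuming the test suite reads these variables when they are set locally; the host is `localhost` here rather than the `postgres` service hostname used in CI:

```bash
# Run the build and tests against the local PostgreSQL container started above
export POSTGRES_HOSTNAME=localhost
export POSTGRES_PASSWORD=f1refly
export POSTGRES_PORT=5432
make
```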

## Nonce management in the simple transaction handler

The nonce for a transaction is assigned as early as possible in the flow:
@@ -110,6 +118,14 @@ One of the most sophisticated parts of the FireFly Connector Framework is the ha
[![Event Streams](./images/fftm_event_streams_architecture.jpg)](./images/fftm_event_streams_architecture.jpg)

# Persistence

Simple filesystem (LevelDB) or remote database (PostgreSQL) persistence is supported.

The SQL-based persistence implementation provides some additional features (see the configuration sketch after this list), including:
- Flush-writers for transaction persistence, to optimize database commits when writing new transactions in parallel
- Rich query support on the API
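
As an illustration, a PostgreSQL persistence configuration might look like the sketch below. The keys under `persistence.postgres` follow the new sections documented in [config.md](./config.md); the `type` selector and the connection URL are placeholder assumptions for this example rather than definitive values:

```yaml
persistence:
  type: postgres  # assumed selector for the PostgreSQL implementation
  postgres:
    url: postgres://user:password@localhost:5432/fftm?sslmode=disable  # placeholder connection string
    maxConns: 50
    migrations:
      auto: true  # apply the DDL files under db/migrations/postgres at startup
      directory: ./db/migrations/postgres
```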

# Configuration

See [config.md](./config.md)
41 changes: 39 additions & 2 deletions config.md
@@ -12,6 +12,7 @@
|publicURL|External address callers should access API over|`string`|`<nil>`
|readTimeout|The maximum time to wait when reading from an HTTP connection|[`time.Duration`](https://pkg.go.dev/time#Duration)|`15s`
|shutdownTimeout|The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server|[`time.Duration`](https://pkg.go.dev/time#Duration)|`10s`
|simpleQuery|Force use of original limited API query syntax, even if rich query is supported in the database|`boolean`|`<nil>`
|writeTimeout|The maximum time to wait when writing to an HTTP connection|[`time.Duration`](https://pkg.go.dev/time#Duration)|`15s`

## api.auth
@@ -43,9 +44,18 @@
|---|-----------|----|-------------|
|blockQueueLength|Internal queue length for notifying the confirmations manager of new blocks|`int`|`50`
|notificationQueueLength|Internal queue length for notifying the confirmations manager of new transactions/events|`int`|`50`
|receiptWorkers|Number of workers to use for querying receipts in parallel|`int`|`10`
|required|Number of confirmations required to consider a transaction/event final|`int`|`20`
|staleReceiptTimeout|Duration after which to force a receipt check for a pending transaction|[`time.Duration`](https://pkg.go.dev/time#Duration)|`1m`

## confirmations.retry

|Key|Description|Type|Default Value|
|---|-----------|----|-------------|
|factor|The retry backoff factor|`float32`|`2`
|initialDelay|The initial retry delay|[`time.Duration`](https://pkg.go.dev/time#Duration)|`100ms`
|maxDelay|The maximum retry delay|[`time.Duration`](https://pkg.go.dev/time#Duration)|`15s`

## cors

|Key|Description|Type|Default Value|
@@ -172,6 +182,34 @@
|path|The path for the LevelDB persistence directory|`string`|`<nil>`
|syncWrites|Whether to synchronously perform writes to the storage|`boolean`|`false`

## persistence.postgres

|Key|Description|Type|Default Value|
|---|-----------|----|-------------|
|maxConnIdleTime|The maximum amount of time a database connection can be idle|[`time.Duration`](https://pkg.go.dev/time#Duration)|`1m`
|maxConnLifetime|The maximum amount of time to keep a database connection open|[`time.Duration`](https://pkg.go.dev/time#Duration)|`<nil>`
|maxConns|Maximum connections to the database|`int`|`50`
|maxIdleConns|The maximum number of idle connections to the database|`int`|`<nil>`
|url|The PostgreSQL connection string for the database|`string`|`<nil>`

## persistence.postgres.migrations

|Key|Description|Type|Default Value|
|---|-----------|----|-------------|
|auto|Enables automatic database migrations|`boolean`|`false`
|directory|The directory containing the numerically ordered migration DDL files to apply to the database|`string`|`./db/migrations/postgres`

## persistence.postgres.txwriter

|Key|Description|Type|Default Value|
|---|-----------|----|-------------|
|batchSize|Number of persistence operations on transactions to attempt to group into a DB transaction|`int`|`100`
|batchTimeout|Duration to hold the batch open for new transaction operations before flushing to the DB|[`time.Duration`](https://pkg.go.dev/time#Duration)|`10ms`
|cacheSlots|Number of transactions for which to cache metadata, avoiding DB read operations when calculating history|`int`|`1000`
|count|Number of transaction-writing routines to start|`int`|`5`
|historyCompactionInterval|Duration between cleanup activities on the DB for a transaction with a large history|[`time.Duration`](https://pkg.go.dev/time#Duration)|`5m`
|historySummaryLimit|Maximum number of action entries to return embedded in the JSON response object when querying a transaction summary|`int`|`50`
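
For illustration only, tuning the transaction writer might look like the following sketch. The YAML nesting is assumed to follow the section path above, and the values are arbitrary examples rather than recommendations:

```yaml
persistence:
  postgres:
    txwriter:
      count: 10          # more parallel transaction-writing routines
      batchSize: 200     # group more operations into each DB transaction
      batchTimeout: 20ms # hold batches open slightly longer before flushing
      cacheSlots: 2000   # cache metadata for more transactions
```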

## policyengine

|Key|Description|Type|Default Value|
@@ -256,7 +294,7 @@
|---|-----------|----|-------------|
|maxHistoryCount|The number of historical status updates to retain in the operation|`int`|`50`
|maxInFlight|Deprecated: Please use 'transactions.handler.simple.maxInFlight' instead|`int`|`100`
|nonceStateTimeout|Deprecated: Please use 'transactions.handler.simple.nonceStateTimeout' instead|[`time.Duration`](https://pkg.go.dev/time#Duration)|`1h`
|nonceStateTimeout|How old the most recently submitted transaction record in our local state needs to be, before we make a request to the node to query the next nonce for a signing address|[`time.Duration`](https://pkg.go.dev/time#Duration)|`1h`

## transactions.handler

@@ -271,7 +309,6 @@
|fixedGasPrice|A fixed gasPrice value/structure to pass to the connector|Raw JSON|`<nil>`
|interval|Interval at which to invoke the transaction handler loop to evaluate outstanding transactions|[`time.Duration`](https://pkg.go.dev/time#Duration)|`<nil>`
|maxInFlight|The maximum number of transactions to have in-flight with the transaction handler / blockchain transaction pool|`int`|`<nil>`
|nonceStateTimeout|How old the most recently submitted transaction record in our local state needs to be, before we make a request to the node to query the next nonce for a signing address|[`time.Duration`](https://pkg.go.dev/time#Duration)|`<nil>`
|resubmitInterval|The time between warning and re-sending a transaction (same nonce) when a blockchain transaction has not been allocated a receipt|[`time.Duration`](https://pkg.go.dev/time#Duration)|`<nil>`

## transactions.handler.simple.gasOracle
5 changes: 5 additions & 0 deletions db/migrations/postgres/000001_create_transactions_table.down.sql
@@ -0,0 +1,5 @@
BEGIN;
DROP INDEX transactions_id;
DROP INDEX transactions_nonce;
DROP TABLE transactions;
COMMIT;
25 changes: 25 additions & 0 deletions db/migrations/postgres/000001_create_transactions_table.up.sql
@@ -0,0 +1,25 @@
BEGIN;
CREATE TABLE transactions (
  seq SERIAL PRIMARY KEY,
  id TEXT NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  status VARCHAR(65) NOT NULL,
  delete BIGINT,
  tx_from TEXT,
  tx_to TEXT,
  tx_nonce VARCHAR(65),
  tx_gas VARCHAR(65),
  tx_value VARCHAR(65),
  tx_gasprice TEXT,
  tx_data TEXT NOT NULL,
  tx_hash TEXT NOT NULL,
  policy_info TEXT,
  first_submit BIGINT,
  last_submit BIGINT,
  error_message TEXT NOT NULL
);
CREATE UNIQUE INDEX transactions_id ON transactions(id);
CREATE UNIQUE INDEX transactions_nonce ON transactions(tx_from, tx_nonce);
CREATE INDEX transactions_hash ON transactions(tx_hash);
COMMIT;
4 changes: 4 additions & 0 deletions db/migrations/postgres/000002_create_receipts_table.down.sql
@@ -0,0 +1,4 @@
BEGIN;
DROP INDEX receipts_id;
DROP TABLE receipts;
COMMIT;
16 changes: 16 additions & 0 deletions db/migrations/postgres/000002_create_receipts_table.up.sql
@@ -0,0 +1,16 @@
BEGIN;
CREATE TABLE receipts (
  seq SERIAL PRIMARY KEY,
  id TEXT NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  block_number VARCHAR(65),
  tx_index VARCHAR(65),
  block_hash TEXT NOT NULL,
  success BOOLEAN NOT NULL,
  protocol_id TEXT NOT NULL,
  extra_info TEXT,
  contract_loc TEXT
);
CREATE UNIQUE INDEX receipts_id ON receipts(id);
COMMIT;
5 changes: 5 additions & 0 deletions db/migrations/postgres/000003_create_confirmations_table.down.sql
@@ -0,0 +1,5 @@
BEGIN;
DROP INDEX confirmations_id;
DROP INDEX confirmations_txid;
DROP TABLE confirmations;
COMMIT;
14 changes: 14 additions & 0 deletions db/migrations/postgres/000003_create_confirmations_table.up.sql
@@ -0,0 +1,14 @@
BEGIN;
CREATE TABLE confirmations (
  seq SERIAL PRIMARY KEY,
  id UUID NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  tx_id TEXT NOT NULL,
  block_number BIGINT NOT NULL,
  block_hash TEXT NOT NULL,
  parent_hash TEXT NOT NULL
);
CREATE UNIQUE INDEX confirmations_id ON confirmations(id);
CREATE INDEX confirmations_txid ON confirmations(tx_id);
COMMIT;
5 changes: 5 additions & 0 deletions db/migrations/postgres/000004_create_txhistory_table.down.sql
@@ -0,0 +1,5 @@
BEGIN;
DROP INDEX IF EXISTS txhistory_id;
DROP INDEX IF EXISTS txhistory_txid;
DROP TABLE txhistory;
COMMIT;
15 changes: 15 additions & 0 deletions db/migrations/postgres/000004_create_txhistory_table.up.sql
@@ -0,0 +1,15 @@
BEGIN;
CREATE TABLE txhistory (
  seq SERIAL PRIMARY KEY,
  id UUID NOT NULL,
  time BIGINT NOT NULL,
  last_occurrence BIGINT NOT NULL,
  tx_id TEXT NOT NULL,
  status TEXT NOT NULL,
  action TEXT NOT NULL,
  count INT NOT NULL,
  error TEXT,
  error_time BIGINT,
  info TEXT
);
COMMIT;
4 changes: 4 additions & 0 deletions db/migrations/postgres/000005_create_checkpoints_table.down.sql
@@ -0,0 +1,4 @@
BEGIN;
DROP INDEX checkpoints_id;
DROP TABLE checkpoints;
COMMIT;
10 changes: 10 additions & 0 deletions db/migrations/postgres/000005_create_checkpoints_table.up.sql
@@ -0,0 +1,10 @@
BEGIN;
CREATE TABLE checkpoints (
  seq SERIAL PRIMARY KEY,
  id UUID NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  listeners JSON
);
CREATE UNIQUE INDEX checkpoints_id ON checkpoints(id);
COMMIT;
5 changes: 5 additions & 0 deletions db/migrations/postgres/000006_create_eventstreams_table.down.sql
@@ -0,0 +1,5 @@
BEGIN;
DROP INDEX eventstreams_id;
DROP INDEX eventstreams_name;
DROP TABLE eventstreams;
COMMIT;
20 changes: 20 additions & 0 deletions db/migrations/postgres/000006_create_eventstreams_table.up.sql
@@ -0,0 +1,20 @@
BEGIN;
CREATE TABLE eventstreams (
  seq SERIAL PRIMARY KEY,
  id UUID NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  name TEXT,
  suspended BOOLEAN,
  stream_type TEXT,
  error_handling TEXT,
  batch_size BIGINT,
  batch_timeout TEXT NOT NULL,
  retry_timeout TEXT NOT NULL,
  blocked_retry_timeout TEXT NOT NULL,
  webhook_config TEXT,
  websocket_config TEXT
);
CREATE UNIQUE INDEX eventstreams_id ON eventstreams(id);
CREATE UNIQUE INDEX eventstreams_name ON eventstreams(name);
COMMIT;
6 changes: 6 additions & 0 deletions db/migrations/postgres/000007_create_listeners_table.down.sql
@@ -0,0 +1,6 @@
BEGIN;
DROP INDEX listeners_id;
DROP INDEX listeners_name;
DROP INDEX listeners_stream;
DROP TABLE listeners;
COMMIT;
17 changes: 17 additions & 0 deletions db/migrations/postgres/000007_create_listeners_table.up.sql
Original file line number Diff line number Diff line change
@@ -0,0 +1,17 @@
BEGIN;
CREATE TABLE listeners (
  seq SERIAL PRIMARY KEY,
  id UUID NOT NULL,
  created BIGINT NOT NULL,
  updated BIGINT NOT NULL,
  name TEXT,
  stream_id UUID NOT NULL,
  filters TEXT,
  options TEXT,
  signature TEXT,
  from_block TEXT
);
CREATE UNIQUE INDEX listeners_id ON listeners(id);
CREATE UNIQUE INDEX listeners_name ON listeners(name); -- global uniqueness on names
CREATE INDEX listeners_stream ON listeners(stream_id);
COMMIT;
4 changes: 4 additions & 0 deletions db/migrations/postgres/000008_create_txhistory_idx.down.sql
@@ -0,0 +1,4 @@
BEGIN;
DROP INDEX IF EXISTS txhistory_id;
DROP INDEX IF EXISTS txhistory_txid;
COMMIT;
12 changes: 12 additions & 0 deletions db/migrations/postgres/000008_create_txhistory_idx.up.sql
@@ -0,0 +1,12 @@
BEGIN;
-- Allow nil data on transactions, for example for a simple transfer operation
ALTER TABLE transactions ALTER COLUMN tx_data DROP NOT NULL;

-- Correct TXHistory indexes if created by an invalid 000004 migration (no longer in codebase).
DROP INDEX IF EXISTS txhistory_id;
DROP INDEX IF EXISTS txhistory_txid;

-- Create corrected TXHistory indexes
CREATE UNIQUE INDEX txhistory_id ON txhistory(id);
CREATE INDEX txhistory_txid ON txhistory(tx_id);
COMMIT;