diff --git a/native/04_web-applications/02_javascript-sdk.md b/native/04_web-applications/02_javascript-sdk.md index 9faf099c..9e550eb2 100644 --- a/native/04_web-applications/02_javascript-sdk.md +++ b/native/04_web-applications/02_javascript-sdk.md @@ -4,5 +4,5 @@ title: JavaScript SDK You can use [WharfKit](https://wharfkit.com/guides) to interact with the EOS Network from a web browser or Node.js application. -Check out their excellent [Getting Started Guide](https://wharfkit.com/guides/sessionkit/getting-started-web-app) to learn how to use the SDK to +Check out their excellent [Getting Started Guide](https://wharfkit.com/guides/session-kit/getting-started-web-app) to learn how to use the SDK to log in with a wallet, and make a transaction. diff --git a/native/07_node-operation/100_migration-guides/04_upgrade-guide-spring-1-0.md b/native/07_node-operation/100_migration-guides/04_upgrade-guide-spring-1-0.md new file mode 100644 index 00000000..8b56b4b8 --- /dev/null +++ b/native/07_node-operation/100_migration-guides/04_upgrade-guide-spring-1-0.md @@ -0,0 +1,141 @@ +--- +title: Spring 1.0 Upgrade Guide +--- + +## Purpose +This upgrade guide covers the steps for upgrading a node to the Spring binary from a Leap v5 binary. The Node Operator's guide [Switching Over To Savanna Consensus Algorithm](switch-to-savanna) covers the steps needed to upgrade the consensus algorithm. 
Block Producers will be interested in the [Guide to Managing Finalizer Keys](../../advanced-topics/managing-finalizer-keys).
+
+### Summary of Changes
+- [Exceeding the number of in-flight requests or MB in flight now returns HTTP 503](#updated-error-codes)
+- [`v1/chain/get_block_header_state` has been updated](#get-block-header-state-changed)
+- [`producer-threads` is no longer a supported option](#upgrade-steps-for-producer-nodes)
+- [New v7 snapshot log format](#snapshot-format)
+- [State logs no longer compressed](#state-log-history-compression-disabled)
+- [BLS finalizer keys added to support the new consensus algorithm](#upgrade-steps-for-producer-nodes)
+- [New Finalizer Configuration Options](#new-finalizer-configuration-options)
+- [New State History Configuration Options](#new-state-history-configuration-options)
+- [New Vote-Threads Configuration Option](#new-vote-threads-option)
+
+## Upgrade Steps
+
+### Upgrade Steps for Non-Producer Nodes
+Spring v1 must be restarted from a snapshot, or recovered via a full transaction sync, due to structural changes to the state memory storage. Snapshots from nodeos version 3.1 and higher may be used to start a Spring node.
+
+Below are example steps for restarting from a snapshot on Ubuntu:
+- Download the latest release
+  - Head to the [Spring Releases](https://github.com/AntelopeIO/spring/releases) page to download the latest version
+- Create a new snapshot
+  - `curl -X POST http://127.0.0.1:8888/v1/producer/create_snapshot`
+  Wait until curl returns a JSON response containing the filename of the newly created snapshot file.
+- Stop nodeos
+- Remove the old package
+  - `sudo apt-get remove -y leap`
+- Remove the `shared_memory.bin` file located in nodeos' data directory. This is the only file that needs to be removed. The data directory is the path passed to nodeos' `--data-dir` argument, or `$HOME/.local/share/eosio/nodeos/data/state` by default.
+- Install the new package
+  - `sudo apt-get install -y ./antelope-spring_1.0.0_amd64.deb`
+- Restart nodeos with the snapshot file returned from the `create_snapshot` request above, adding the `--snapshot` argument along with any other existing arguments.
+  - `nodeos --snapshot snapshot-1323.....83c5.bin ...`
+  The `--snapshot` argument only needs to be given once, on the first launch of the new Spring nodeos.
+
+### Upgrade Steps for Producer Nodes
+For producer nodes there are a few steps in addition to the [steps above](#upgrade-steps-for-non-producer-nodes). Additional documentation on BLS finalizer keys may be found in the [Guide to Managing Finalizer Keys](../../advanced-topics/managing-finalizer-keys).
+
+- Remove the unsupported `producer-threads` option from your configuration.
+- Generate your key(s) using `spring-util`
+  - `spring-util bls create key --to-console > producer-name.finalizer.key`
+- Add `signature-provider` to your configuration with the generated public and private keys.
+  - You may configure multiple `signature-provider` entries, one name/value pair per finalizer key. For example, in your configuration file:
+  ```
+  signature-provider = PUB_BLS_S7aaZZ7ZdvnZ7Z7Za7SV7ZZZ-ZZtaa7ZaiLaaSPp7aZnaa7aZZnZd77BuS7ZZa7Zra7SU7ZZZZnaZaZZreZZZ7rraaZZZs7-i7Z7ive7aZZLZTas77VZtZL7a7aaZZaZL7sauZ=ZZZ:PVT_BLS_Z7tZLZZaZZ7o7LZ7aaa7uaBe7rLdPVZBpsZLrUaZZBUt-a7Z
+  signature-provider = PUB_BLS_7ZLauuZ777ZvSa_Z7ZTrZaZ7_eraa7a7aUanv7aZZ7ZaaZdZaaadaZr-agi7_aoZa77aZZZZZaZU7aB7a7TZ-ZZu777777gaSaarZ7udZs7S-aZ-ZZZ_SBa-iZZPaZZZ7Za7rg=ZZZ:PVT_BLS_Znsaa7uZ7iZZ7uZ7aZZe7raTaaZaauZZa7aapUtuaZB7saLS
+  signature-provider = PUB_BLS_ZZa-PZZZZaZZZZZZ7oae7_Z7a_UZsaZaLaaaSrZ7-Zaa7ada-ZaZZaZvppoSapgZd7aaouaZZZaZZZP7ZaavZdPaeZ7Zio77ZZaZLZZaZa7ZguaZpZ7raaPgZ77ZZUoZZ7Zeva=ZZZ:PVT_BLS_-oZ_ZZZPaae7TaaaZ7aZ7Zt7aaLZZat_7ZaVaraZLaaaaiga
+  ```
+- For your producer account, register at least one key on chain with the `regfinkey` action.
When there are no registered BLS keys, calling `regfinkey` will activate the provided key.
+  - Here is an example for the producer account `NewBlockProducer`:
+  ```
+  cleos push action eosio regfinkey '{"finalizer_name":"NewBlockProducer", \
+  "finalizer_key":"PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z", \
+  "proof_of_possession":"SIG_BLS_ZPZZbZIZukZksBbZ9Z9Zfysz9zZsy9z9S9V99Z-9rZZe99vZUzZPZZlzZszZiiZVzT9ZZZZBi99Z9kZzZ9zZPzzbZ99ZZzZP9zZrU-ZZuiZZzZUvZ9ZPzZbZ_yZi9ZZZ-yZPcZZe9SZZPz9Tc9ZaZ999voB99L9PzZ99I9Zu9Zo9ZZZzTtVZbcZ-Zck_ZZUZZtfTZGszUzzBTZZGrnIZ9Z9Z9zPznyZLZIavGzZunreVZ9zZZt_ZlZS9ZZIz9yUZa9Z9-Z"}' \
+  -p NewBlockProducer
+  ```
+
+## HTTP Protocol Changes
+
+### Updated Error Codes
+The HTTP error code returned for exceeding `http-max-bytes-in-flight-mb` or `http-max-in-flight-requests` is now `503`; in Leap 5.0.2 it was `429`.
+
+### Get Block Header State Changed
+In the past, `get_block_header_state` returned the pre-Savanna (legacy) block header state. Parts of this response are incompatible with the internals of the new Savanna block state. Originally the plan was to remove the `get_block_header_state` endpoint, but some versions of eosjs, including the latest one, 22.1.0, require this endpoint for certain values of `blocksBehind`; without it, eosjs is unable to push a transaction.
+
+Instead of being deprecated, the endpoint has been updated with behavioral differences. Whether or not Savanna has been activated, `get_block_header_state` in Spring 1.0 returns a response containing all fields, but with only `block_num`, `id`, `header`, and `additional_signatures` filled out. The other fields still exist but are empty or zero. Additionally, the endpoint now considers both reversible and irreversible blocks.
This latter tweak helps guard against a race condition in eosjs between calling `get_info` and then `get_block_header_state` when `blocksBehind` is a low number such as 2 or 3.
+
+An example response with only the limited set of fields filled out looks like this:
+```
+{
+  "block_num": 40660,
+  "dpos_proposed_irreversible_blocknum": 0,
+  "dpos_irreversible_blocknum": 0,
+  "active_schedule": {
+    "version": 0,
+    "producers": []
+  },
+  "blockroot_merkle": {
+    "_active_nodes": [],
+    "_node_count": 0
+  },
+  "producer_to_last_produced": [],
+  "producer_to_last_implied_irb": [],
+  "valid_block_signing_authority": [
+    0,
+    {
+      "threshold": 0,
+      "keys": []
+    }
+  ],
+  "confirm_count": [],
+  "id": "00009ed4aa662f3d7e96d88061e4741692b433fe783befc6c3cb6c6a40c5955a",
+  "header": {
+    "timestamp": "2024-03-19T02:39:40.000",
+    "producer": "inita",
+    "confirmed": 0,
+    "previous": "00009ed38d8d4c103ce5ced75558d6ac0f4d2ac4b63cf964feeb6d8bce600ade",
+    "transaction_mroot": "0000000000000000000000000000000000000000000000000000000000000000",
+    "action_mroot": "b179283d2264aa663cb669924bbff2f33e81a7ed71dcb465342673602605a1f1",
+    "schedule_version": 2,
+    "header_extensions": [
+      [
+        2,
+        "d39e0000010000"
+      ]
+    ],
+    "producer_signature": "SIG_K1_KgufADyFuHdBwT6VGBVdrnhVs6dakZdXp4qr5NgJFU7orfXbFi9eVc7NvjrjvUyL79SXfMjgyzW7cVGfQW8iy1CbjStENZ"
+  },
+  "pending_schedule": {
+    "schedule_lib_num": 0,
+    "schedule_hash": "0000000000000000000000000000000000000000000000000000000000000000",
+    "schedule": {
+      "version": 0,
+      "producers": []
+    }
+  },
+  "activated_protocol_features": null,
+  "additional_signatures": []
+}
+```
+
+### Snapshot Format
+Spring v1 uses a new v7 snapshot format, which is safe to use before, during, and after the switch to the Savanna consensus algorithm. Previous versions of Leap are not able to use the v7 snapshot format.
+
+### State Log History Compression Disabled
+State history log file compression has been disabled.
Consumers of state history will need to implement their own compression.
+
+### New State History Configuration Options
+Most node operators will never need to set these configuration options. If you are running State History, you will need to set `finality-data-history`.
+- `finality-data-history` - When running SHiP to support Inter-Blockchain Communication (IBC), set `finality-data-history = true`. This enables the new `get_blocks_request_v1` field, which defaults to `null` before Savanna consensus is activated.
+
+### New Finalizer Configuration Options
+Scripts that move or delete the `data` directory need to protect the finalizer safety file (`safety.dat`), or use this option to store it in another location.
+- `finalizers-dir` - Specifies the directory path for storing voting history. Node operators may want to specify a directory outside of their nodeos data directory and manage it as a distinct file. More information is in the [Guide to Managing Finalizer Keys](../../advanced-topics/managing-finalizer-keys).
+
+### New Vote-Threads Option
+When a block producing node connects to its peers through an intermediate nodeos, that intermediate nodeos must have `vote-threads` set to an integer greater than zero; when it is not, votes are not propagated. The default value for `vote-threads` is 4.
+- `vote-threads` - Sets the number of threads used to handle voting. The default is sufficient for all known production setups, and the recommendation is to leave this value unchanged.
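Taken together, a minimal `config.ini` fragment for a Spring 1.0 producer node using the options discussed above might look like the following sketch. The key strings and the `finalizers-dir` path are illustrative placeholders, not working values:

```
# BLS finalizer key pair generated with spring-util (placeholder values)
signature-provider = PUB_BLS_yourPublicKeyHere:PVT_BLS_yourPrivateKeyHere

# Keep the finalizer safety file (safety.dat) outside the data directory (illustrative path)
finalizers-dir = /var/lib/nodeos/finalizers

# Only needed when running SHiP to support IBC
finality-data-history = true

# Default value; leave unchanged for known production setups
vote-threads = 4
```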
diff --git a/native/07_node-operation/100_migration-guides/10_switch-to-savanna.md b/native/07_node-operation/100_migration-guides/10_switch-to-savanna.md
new file mode 100644
index 00000000..bcf05b49
--- /dev/null
+++ b/native/07_node-operation/100_migration-guides/10_switch-to-savanna.md
@@ -0,0 +1,44 @@
+---
+title: Switching Over To Savanna Consensus Algorithm
+---
+
+## Switching Over To Savanna Consensus Algorithm
+Switching over to the Savanna Consensus Algorithm is a multi-step process.
+
+### Overview of Upgrade Process
+There are four steps:
+1. Upgrade Antelope software: Spring and the EOS System Contracts
+2. Block Producers generate and register finalizer keys
+   - First activate the protocol feature `BLS_PRIMITIVES2`
+   - See the section below on [Generate and Registering Finalizer Keys](#generate-and-registering-finalizer-keys)
+3. Activate required protocol features
+   - Activate the `SAVANNA` protocol feature
+4. The `eosio` user calls the `switchtosvnn` action
+
+### Antelope Software Requirements
+Switching to Savanna requires the latest versions of the Spring software and the EOS System Contracts.
+- [Spring v1.0.0](https://github.com/AntelopeIO/spring/releases)
+- [EOS System Contracts v3.6.0](https://github.com/eosnetworkfoundation/eos-system-contracts/releases)
+
+**Note:** [CDT v4.1.0](https://github.com/AntelopeIO/cdt/releases) is needed to compile the latest EOS System Contracts. This version of CDT contains both the host functions and the cryptography support needed to manage finalizer keys.
+
+### Protocol Features Dependencies
+The reference for protocol features and their corresponding hashes may be found in the [bios-boot-tutorial](https://github.com/AntelopeIO/spring/blob/main/tutorials/bios-boot-tutorial/bios-boot-tutorial.py).
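Before activating `SAVANNA`, you can confirm which protocol features are already active by querying a node's chain API. This is a sketch against a local node; the exact response shape may vary by version:

```
curl -s -X POST http://127.0.0.1:8888/v1/chain/get_activated_protocol_features -d '{"limit": 50}'
```

The response lists each activated protocol feature along with its digest, which can be compared against the hashes in the bios-boot-tutorial reference above.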
The protocol feature `SAVANNA` depends on the following protocol features being active:
+- `WTMSIG_BLOCK_SIGNATURES`
+- `BLS_PRIMITIVES2`
+- `DISALLOW_EMPTY_PRODUCER_SCHEDULE`
+- `ACTION_RETURN_VALUE`
+
+### Generate and Registering Finalizer Keys
+The Savanna consensus algorithm used by Spring v1 separates the role of publishing blocks from that of signing and finalizing blocks. Finalizer keys are needed to sign and finalize blocks. In Spring v1, all block producers are expected to be finalizers. There are four steps to creating and registering finalizer keys:
+- generate your key(s) using `spring-util`
+- add `signature-provider` to your configuration with the generated key(s)
+- restart nodeos with the new `signature-provider` config
+- register a single key on chain with the `regfinkey` action
+
+Additional information on finalizer keys may be found in the [Guide to Managing Finalizer Keys](../../advanced-topics/managing-finalizer-keys) and [Introduction to Finalizers and Voting](../../advanced-topics/introduction-finalizers-voting).
+
+### Confirmation of Consensus Algorithm
+The action `switchtosvnn` initiates the change to the Savanna Consensus Algorithm and must be called by the owner of the system contracts; on EOS Mainnet this is the `eosio` user. This action is called only once per chain.
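As an illustrative sketch (not runnable without a fully prepared chain, and assuming the software upgrades, key registrations, and protocol feature activations described above are complete), the final switch might look like this cleos session:

```
# Called once per chain by the system contract owner (eosio on EOS Mainnet)
cleos push action eosio switchtosvnn '{}' -p eosio@active

# Afterwards, blocks are finalized under Savanna; check that head and
# last-irreversible block numbers continue to advance
cleos get info
```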
diff --git a/native/07_node-operation/100_migration-guides/index.md b/native/07_node-operation/100_migration-guides/index.md index d9c10aaf..17ce5f9c 100644 --- a/native/07_node-operation/100_migration-guides/index.md +++ b/native/07_node-operation/100_migration-guides/index.md @@ -7,3 +7,5 @@ Learn about EOS History Alternatives: - [General Upgrade Guide](01_general-upgrade-guide.md) - [V1 History Alternatives](02_v1-history-alternatives.md) - [5.0 Upgrade Guide](03_upgrade-guide-5-0.md) +- [Spring 1.0 Upgrade Guide](04_upgrade-guide-spring-1-0.md) +- [Switching To Savanna Consensus](10_switch-to-savanna.md) diff --git a/native/60_advanced-topics/01_consensus-protocol.md b/native/60_advanced-topics/01_consensus-protocol.md deleted file mode 100644 index 2cc3c056..00000000 --- a/native/60_advanced-topics/01_consensus-protocol.md +++ /dev/null @@ -1,224 +0,0 @@ ---- -title: Consensus Protocol ---- - -## 1. Overview - -The EOS blockchain is a highly efficient, deterministic, distributed state machine that can operate in a decentralized fashion. The blockchain keeps track of transactions within a sequence of interchanged blocks. Each block cryptographically commits to the previous blocks along the same chain. It is therefore intractable to modify a transaction recorded on a given block without breaking the cryptographic checks of successive blocks. This simple fact makes blockchain transactions immutable and secure. - -### 1.1. Block Producers - -In the EOS ecosystem, block production and block validation are performed by special nodes called "block producers". Producers are elected by EOS stakeholders (see [4. Producer Voting/Scheduling](#4-producer-votingscheduling)). Each producer runs an instance of an EOS node through the `nodeos` service. For this reason, producers that are on the active schedule to produce blocks are also called "active" or "producing" nodes. - -### 1.2. 
The Need for Consensus - -Block validation presents a challenge among any group of distributed nodes. A consensus model must be in place to validate such blocks in a fault tolerant way within the decentralized system. Consensus is the way for such distributed nodes and users to agree upon the current state of the blockchain (see [3. EOS Consensus (DPoS + aBFT)](#3-eosio-consensus-dpos--abft)). - -## 2. Consensus Models - -There are various ways to reach consensus among a group of distributed parties in a decentralized system. Most consensus models reach agreement through some proof. Two of the most popular ones are Proof of Work (PoW) and Proof of Stake (PoS), although other types of proof-based schemes exist, such as Proof of Activity (a hybrid between PoW and PoS), Proof of Burn, Proof of Capacity, Proof of Elapsed Time, etc. Other consensus schemes also exist, such as Paxos and Raft. This document focuses mainly on the EOS consensus model. - -### 2.1. Proof of Work (PoW) - -Two of the most common consensus models used in blockchains are Proof of Work and Proof of Stake. In Proof of Work, miner nodes compete to find a nonce added to the header of a block which causes the block to have some desired property (typically a certain number of zeros in the most significant bits of the cryptographic hash of the block header). By making it computationally expensive to find such nonces that make the blocks valid, it becomes difficult for attackers to create an alternative fork of the blockchain that would be accepted by the rest of the network as the best chain. The main disadvantage of Proof of Work is that the security of the network depends on spending a lot of resources on computing power to find the nonces. - -### 2.2. Proof of Stake (PoS) - -In Proof-of-Stake, nodes that own the largest stake or percentage of some asset have equivalent decision power. In other words, voting power is proportional to the stake held. 
One interesting variant is Delegated Proof-of-Stake (DPoS) in which a large number of participants or stakeholders elect a smaller number of delegates, which in turn make decisions for them. - -## 3. EOS Consensus (DPoS + aBFT) - -The EOS blockchain uses delegated proof of stake (DPoS) to elect the active producers who will be authorized to sign valid blocks in the network. However, this is only one half of the EOS consensus process. The other half is involved in the actual process of confirming each block until it becomes final (irreversible), which is performed in an asynchronous byzantine fault tolerant (aBFT) way. Therefore, there are two layers involved in the EOS consensus model: - -* Layer 1 - The Native Consensus Model (aBFT). -* Layer 2 - Delegated Proof of Stake (DPoS). - -The actual native consensus model used in EOS has no concept of delegations/voting, stake, or even tokens. These are used by the DPoS layer to generate the first schedule of block producers and, if applicable, update the set at most every schedule round after each producer has cycled through. These two layers are functionally separate in the EOS software. - -### 3.1. Layer 1: Native Consensus (aBFT) - -This layer ultimately decides which blocks, received and synced among the elected producers, eventually become final, and hence permanently recorded in the blockchain. It gets a schedule of producers proposed by the second layer (see [3.2. Layer 2: Delegated PoS](#32-layer-2-delegated-pos-dpos)) and uses that schedule to determine which blocks are correctly signed by the appropriate producer. For byzantine fault tolerance, the layer uses a two-stage block confirmation process by which a two-thirds supermajority of producers from the current scheduled set confirm each block twice. The first confirmation stage proposes a last irreversible block (LIB). The second stage confirms the proposed LIB as final. At this point, the block becomes irreversible. 
This layer is also used to signal producer schedule changes, if any, at the beginning of every schedule round. - -#### 3.1.1. EOS Algorithmic Finality -The EOS consensus model achieves algorithmic finality (differing from the merely probabilistic finality that at best can be achieved in Proof of Work models) through the signatures from the chosen set of special participants (active producers) that are arranged in a schedule to determine which party is authorized to sign the block at a particular time slot. Changes to this schedule can be initiated by privileged smart contracts running on the EOS blockchain, but any initiated changes to the schedule do not take effect until after the block that initiated the schedule change has been finalized by two stages of confirmations. Each stage of confirmations is performed by a supermajority of producers from the current scheduled set of active producers. - -### 3.2. Layer 2: Delegated PoS (DPoS) - -The Delegated PoS layer introduces the concepts of tokens, staking, voting/proxying, vote decay, vote tallying, producer ranking, and inflation pay. This layer is also in charge of generating new producer schedules from the rankings generated from producer voting. This occurs in schedule rounds of approximately two minutes (126 seconds) which is the period it takes for a block producer to be assigned a timeslot to produce and sign blocks. The timeslot lasts a total of 6 seconds per producer, which is the producer round, where a maximum of 12 blocks can be produced and signed. The DPoS layer is enabled by WASM smart contracts. - -#### 3.2.1. Stakeholders and Delegates - -The actual selection of the active producers (the producer schedule) is open for voting every schedule round and it involves all EOS stakeholders who exercise their right to participate. In practice, the rankings of the active producers do not change often, though. 
The stakeholders are regular EOS account holders who vote for their block producers of preference to act on their behalf as DPoS delegates. A major departure from regular DPoS, however, is that once elected, all block producers have equal power regardless of the ranking of votes obtained. In other DPoS models, voting power is proportional to the number of votes obtained by each delegate. - -### 3.3. The Consensus Process - -The EOS consensus process consists of two parts: - -* Producer voting/scheduling - performed by the the DPoS layer 2 -* Block production/validation - performed by the native consensus layer 1 - -These two processes are independent and can be executed in parallel, except for the very first schedule round after the boot sequence when the blockchain’s first genesis block is created. - -## 4. Producer Voting/Scheduling - -The voting of the active producers to be included in the next schedule is implemented by the DPoS layer. Strictly speaking, a token holder must first stake some tokens to become a stakeholder and thus be able to vote with a given staking power. - -### 4.1. Voting Process - -Each EOS stakeholder can vote for up to 30 block producers in one voting action. The top 21 elected producers will then act as DPoS delegates to produce and sign blocks on behalf of the stakeholders. The remaining producers are placed in a standby list in the order of votes obtained. The voting process repeats every schedule round by adding up the number of votes obtained by each producer. Producers not voted on get to keep their old votes, albeit depreciated due to vote decay. Producers voted on also get to keep their old votes, except for the contribution of the last voting weight for each voter, which gets replaced by their new voting weight. - -#### 4.1.1. Voting Weight - -The voting weight of each stakeholder is computed as a function of the number of tokens staked and the time elapsed since the EOS block timestamp epoch, defined as January 1, 2000. 
In the current implementation, the voting weight is directly proportional to the number of tokens staked and base-2 exponentially proportional to the time elapsed in years since the year 2000. The actual weight increases at a rate of $2^{1/52} = 1.013419$ per week. This means that the voting weight changes weekly and doubles each year for the same amount of tokens staked. - -#### 4.1.2. Vote Decay - -Increasing the voting weight produces depreciation of the current votes held by each producer. Such vote decay is intentional and its reason is twofold: - -* Encourage participation by allowing newer votes to have more weight than older votes. -* Give more voice to those users actively involved on important governance matters. - -### 4.2. Producers schedule - -After the producers are voted on and selected for the next schedule, they are simply sorted alphabetically by producer name. This determines the production order. Each producer receives the proposed set of producers for the next schedule round within the very first block to be validated from the current schedule round that is about to start. When the first block that contains the proposed schedule is deemed irreversible by a supermajority of producers plus one, the proposed schedule becomes active for the next schedule round. - -#### 4.2.1. Production Parameters - -The EOS block production schedule is divided equally among the elected producers. The producers are scheduled to produce an expected number of blocks each schedule round, based on the following parameters (per schedule round): - -Parameter | Description | Default | Layer --|-|-|- -**P** (producers) | number of active producers | 21 | 2 -**Bp** (blocks/producer) | number of contiguous blocks per producer | 12 | 1 -**Tb** (s/block) | Production time per block (s: seconds) | 0.5 | 1 - -It is important to mention that Bp (number of contiguous blocks per producer), and Tb (production time per block) are layer 1 consensus constants. 
In contrast, P (number of active producers) is a layer 2 constant configured by the DPoS layer, which is enabled by WASM contracts. - -The following variables can be defined from the above parameters (per schedule round): - -Variable | Description | Equation --|-|- -**B** (blocks) | Total number of blocks | Bp (blocks/producer) x P (producers) -**Tp** (s/producer) | Production time per producer | Tb (s/block) x Bp (blocks/producer) -**T** (s) | Total production time | Tp (s/producer) x P (producers) - -Therefore, the value of P, being defined at layer 2, can change dynamically in the EOS blockchain. In practice, however, N is strategically set to 21 producers, which means that 15 producers are required for a two-thirds supermajority of producers plus one to reach consensus. - -#### 4.2.2. Production Default Values - -With the current defaults: P=21 elected producers, Bp=12 blocks created per producer, and a block produced every T=0.5 seconds, current production times are as follows (per schedule round): - -Variable | Value --|- -**Tp**: Production time per producer | Tp = 0.5 (s/block) x 12 (blocks/producer) ⇒ Tp = 6 (s/producer) -**T**: Total production time | T = 6 (s/producer) x 21 (producers) ⇒ T = 126 (s) - -When a block is not produced by a given producer during its assigned time slot, a gap results in the blockchain. - -## 5. Block Lifecycle - -Blocks are created by the active producer on schedule during its assigned timeslot, then relayed to other producer nodes for syncing and validation. This process continues from producer to producer until a new schedule of producers is approved at a later schedule round. When a valid block meets the consensus requirements (see [3. EOS Consensus](#3-eosio-consensus-dpos--abft)), the block becomes final and is considered irreversible. Therefore, blocks undergo three major phases during their lifespan: production, validation, and finality. Each phase goes through various stages as well. - -### 5.1. 
Block Structure - -As an inter-chained sequence of blocks, the fundamental unit within the blockchain is the block. A block contains records of pre-validated transactions and additional cryptographic overhead such as hashes and signatures necessary for block confirmation, re-execution of transactions during validation, blockchain replays, protection against replay attacks, etc. (see `block` schema below). - -#### block schema - -Name | Type | Description --|-|- -`timestamp` | `block_timestamp_type` | expected time slot this block is produced (ends in .000 or .500 seconds) -`producer` | `name` | account name for producer of this block -`confirmed` | `uint16_t` | number of prior blocks confirmed by the producer of this block in current producer schedule -`previous` | `block_id_type` | block ID for previous block -`transaction_mroot` | `checksum256_type` | merkle tree root hash of transaction receipts included in block -`action_mroot` | `checksum256_type` | merkle tree root hash of action receipts included in block -`schedule_version` | `uint32_t` | number of times producer schedule has changed since genesis -`new_producers` | `producer_schedule_type` | holds producer names and keys for new proposed producer schedule; null if no change -`header_extensions` | `extensions_type` | extends block fields to support additional features (included in block ID calculation) -`producer_signature` | `signature_type` | digital signature by producer that created and signed block -`transactions` | array of `transaction_receipt` | list of valid transaction receipts included in block -`block_extensions` | `extension_type` | extends block fields to support additional features (NOT included in block ID calculation) -`id` | `block_id_type` | UUID of this block ID (a function of block header and block number); can be used to query block for validation/retrieval purposes -`block_num` | `uint32_t` | block number (sequential counter value since genesis block 0); can be used to query block for 
validation/retrieval purposes -`ref_block_prefix` | `uint32_t` | lower 32 bits of block ID; used to prevent replay attacks - -Some of the block fields are known in advance when the block is created, so they are added during block initialization. Others are computed and added during block finalization, such as the merkle root hashes for transactions and actions, the block number and block ID, the signature of the producer that created and signed the block, etc. (see [Network Peer Protocol: 3.1. Block ID](/docs/60_advanced-topics/03_network-peer-protocol.md#31-block-id)) - -### 5.2. Block Production - -During each schedule round of block production, the producer on schedule must create Bp=12 contiguous blocks containing as many validated transactions as possible. Each block is currently produced within a span of Tb=500 ms (0.5 s). To guarantee sufficient time to produce each block and transmit to other nodes for validation, the block production time is further divided into two configurable parameters: - -* **maximum processing interval**: time window to push transactions into the block (currently set at 200 ms). -* **minimum propagation time**: time window to propagate blocks to other nodes (currently set at 300 ms). - -All loose transactions that have not expired yet, or dropped as a result of a previous failed validation, are kept in a local queue for both block inclusion and syncing with other nodes. During block production, the scheduled transactions are applied and validated by the producer on schedule, and if valid, pushed to the pending block within the processing interval. If the transaction falls outside this window, it is unapplied and rescheduled for inclusion in the next block. If there are no more block slots available for the current producer, the transaction is picked up eventually by another producing node (via the peer-to-peer protocol) and pushed to another block. 
The maximum processing interval is slightly less for the last block (from the producer round of Bp blocks) to compensate for network latencies during handoff to the next producer. By the end of the processing interval, no more transactions are allowed in the pending block, and the block goes through a finalization step before it gets broadcasted to other block producers for validation. - -Blocks go through various stages during production: apply, finalize, sign, and commit. - -#### 5.2.1. Apply Block - -Apply block essentially pushes the transactions received and validated by the producing node into a block. Internally, this step involves the creation and initialization of the block header and the signed block instance. The signed block instance simply extends the block header with a signature field. This field eventually holds the signature of the producer that signs the block. Furthermore, recent changes in EOS allow multiple signatures to be included, which are stored in a header extensions field. - -#### 5.2.2. Finalize Block - -Produced blocks need to be finalized before they can be signed, committed, relayed, and validated. During finalization, any field in the block header that is necessary for cryptographic validation is computed and stored in the block. This includes generating both merkle tree root hashes for the list of action receipts and the list of transaction receipts pushed to the block. - -#### 5.2.3. Sign Block - -After the transactions have been pushed into the block and the block is finalized, the block is ready to be signed by the producer. This involves computing a signature digest from the serialized contents of the block header, which includes the transaction receipts included in the block. After the block is signed with the producer’s private key, the signature digest is added to the signed block instance. This completes the block signing. - -#### 5.2.4. Commit Block - -After the block is signed, it is committed to the local chain. 
This pushes the block to the reversible block database (see [Network Peer Protocol: 2.2.3. Fork Database](/docs/60_advanced-topics/03_network-peer-protocol.md#223-fork-database)). This makes the block available for syncing with other nodes for validation (see the [Network Peer Protocol](/docs/60_advanced-topics/03_network-peer-protocol.md) for more information about block syncing). - -### 5.3. Block Validation - -Block validation is a fundamental operation necessary to reach consensus within the EOS blockchain. During block validation, producers receive incoming blocks from other peers and confirm the transactions included within each block. Block validation is about reaching a sufficient quorum among active producers to agree upon: - -* The integrity of the block and the transactions it contains. -* The deterministic, chronological order of transactions within each block. - -The first step towards validating a block begins when a block is received by a node. At this point, some safety checks are performed on the block. If the block does not link to an already known block or it matches the block ID of any block already received and processed by the node, the block is discarded. If the block is new, it is pushed to the chain controller for processing. - -#### 5.3.1. Push Block - -When the block is received by the chain controller, the software must determine where to add the block within the local chain. The fork database, or Fork DB for short, is used for this purpose. The fork database holds all the branches with reversible blocks that have been received but are not yet finalized. To that end, the following steps are performed: - -1. Add block to the fork database. -2. If block is added to the main branch that contains the current head block, apply block (see [5.2.1. Apply Block](#521-apply-block)); or -3. If block must be added to a different branch, then: - 1.
if that branch now becomes the preferred branch compared to the current main branch: rewind all blocks up to the nearest common ancestor (rolling back the database state in the process), re-apply all blocks in the different branch, add the new block and apply it. That branch now becomes the new main branch. - 2. otherwise: add the new block to that branch in the fork database but do nothing else. - -In order for the block to be added to the fork database, some block validation must occur. Block header validation must always be done before adding a block to the fork database. And if the block must be applied, some validation of the transactions within the block must occur. The degree to which transactions are validated depends on the validation mode that nodeos is configured with. Two block validation modes are supported: full validation (the default mode) and light validation. - -#### 5.3.2. Full Validation - -In full validation mode, every transaction that is applied is fully validated. This includes verifying the signatures on the transaction and checking authorizations. - -#### 5.3.3. Light Validation - -In light validation mode, blocks signed by trusted producers (which can be configured locally per node) can skip some of the transaction validation done during full validation. For example, signature verification is skipped and all claimed authorizations on actions are assumed to be valid. - -### 5.4. Block Finality - -Block finality is the final outcome of EOS consensus. It is achieved after a supermajority of active producers have validated the block according to the consensus rules (see [3.1. Layer 1: Native Consensus (aBFT)](#31-layer-1-native-consensus-abft)). Blocks that reach finality are permanently recorded in the blockchain and cannot be undone. In this regard, the last irreversible block (LIB) in the chain refers to the most recent block that has become final.
Therefore, from that point backwards the transactions that have been recorded on the blockchain cannot be reversed, tampered with, or erased. - -#### 5.4.1. Goal of Finality - -The main point of finality is to give users confidence that transactions that were applied prior to and up to the LIB block cannot be modified, rolled back, or dropped. The LIB block can also be useful for active nodes to determine quickly and efficiently which branch to build off from, regardless of which is the longest one. This is because a given branch might be longer without containing the most recent LIB, in which case a shorter branch with the most recent LIB must be selected. - -#### 5.4.2. EOS Finality - -Currently, according to the above EOS consensus rules (see [3.1. Layer 1: Native Consensus (aBFT)](#31-layer-1-native-consensus-abft)), each proposed LIB block requires two schedule rounds of BP validations to become final. Since a supermajority of 2/3+1 BPs are required to reach consensus within the EOS mainnet (which accounts for 15 BPs from a total of 21 BPs), it follows that each proposed LIB block becomes final in at least `3` minutes (`180` seconds), according to the calculations below: - -Variable | Value --|- -**Tp**: Production time per producer | Tp = 0.5 (s/block) x 12 (blocks/producer) ⇒ Tp = 6 (s/producer) (**\***) -**SP**: Supermajority of producers | SP = int[ 2/3 * P (producers) ] + 1 ⇒ [P=21] SP = int[ 2/3 * 21 (producers) ] + 1 = 14 + 1 (producers) ⇒ SP = 15 (producers) -**CR**: Confirmation Rounds | CR = 2 (rounds) (**\*\***) -**FT**: Finality time | FT = SP x Tp (per round) x CR (rounds) ⇒ [SP=15, Tp=6, CR=2] FT = 15 (producers) x 6 (s/producer) (per round) x 2 (rounds) ⇒ **FT = 180 (s) = 3 (mins)** - -(**\***): from section [4.2.2. Production Default Values](#422-production-default-values). -(**\*\***): number of schedule rounds required to validate a proposed LIB block.
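The finality-time calculation in the table above can be reproduced in a few lines. This is a sketch of the arithmetic only; the variable names mirror the table and none of this code is part of nodeos.

```python
# Finality-time calculation (values from the table above).
BLOCK_INTERVAL_S = 0.5      # seconds per block
BLOCKS_PER_PRODUCER = 12    # Bp: blocks per producer per schedule round
PRODUCERS = 21              # P: active producers on the EOS mainnet
CONFIRMATION_ROUNDS = 2     # CR: schedule rounds to validate a proposed LIB

Tp = BLOCK_INTERVAL_S * BLOCKS_PER_PRODUCER   # production time per producer: 6 s
SP = (2 * PRODUCERS) // 3 + 1                 # supermajority: int[2/3 * P] + 1 = 15
FT = SP * Tp * CONFIRMATION_ROUNDS            # finality time in seconds

print(f"Tp={Tp}s SP={SP} FT={FT}s")  # Tp=6.0s SP=15 FT=180.0s
```

With 21 producers, 12 blocks per slot at 0.5 s each, and 2 confirmation rounds, the same 180-second (3-minute) lower bound on finality falls out of the arithmetic.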
diff --git a/native/60_advanced-topics/03_network-peer-protocol.md b/native/60_advanced-topics/03_network-peer-protocol.md deleted file mode 100644 index bb3f482b..00000000 --- a/native/60_advanced-topics/03_network-peer-protocol.md +++ /dev/null @@ -1,616 +0,0 @@ ---- -title: Network Peer Protocol ---- - - -## 1. Overview - -Nodes on the EOS blockchain must be able to communicate with each other for relaying transactions, pushing blocks, and syncing state between peers. The peer-to-peer (p2p) protocol, part of the `nodeos` service that runs on every node, serves this purpose. The ability to sync state is crucial for each block to eventually reach finality within the global state of the blockchain and allow each node to advance the last irreversible block (LIB). In this regard, the fundamental goal of the p2p protocol is to sync blocks and propagate transactions between nodes to reach consensus and advance the blockchain state. - - -### 1.1. Goals - -In order to add multiple transactions into a block and fit them within the specified production time of 0.5 seconds, the p2p protocol must be designed with speed and efficiency in mind. These two goals translate into maximizing transaction throughput within the effective bandwidth and reducing both network and operational latency. Some strategies to achieve this include: - -* Fit more transactions within a block for better economy of scale. -* Minimize redundant information among blocks and transactions. -* Allow more efficient broadcasting and syncing of node states. -* Minimize payload footprint with data compression and binary encoding. - -Most of these strategies are fully or partially implemented in the EOS software. Data compression, which is optional, is implemented at the transaction level. Binary encoding is implemented by the net serializer when sending object instances and protocol messages over the network. - - -## 2. 
Architecture - -The main goal of the p2p protocol is to synchronize nodes securely and efficiently. To achieve this overarching goal, the system delegates functionality to four main components: - -* **Net Plugin**: defines the protocol to sync blocks and forward transactions between peers. -* **Chain Controller**: dispatches/manages blocks and transactions received within the node. -* **Net Serializer**: serializes messages, blocks, and transactions for network transmission. -* **Local Chain**: holds the node’s local copy of the blockchain, including reversible blocks. - -The interaction between the above components is depicted in the diagram below: - -![](/images/protocol-p2p_system_arch.png "Peer-to-peer Architecture") - -At the highest level sits the Net Plugin, which exchanges messages between the node and its peers to sync blocks and transactions. A typical message flow goes as follows: - -1. Node A sends a message to Node B through the Net Plugin (refer to diagram above). - 1. Node A’s Net Serializer packs the message and sends it to Node B. - 2. Node B’s Net Serializer unpacks the message and relays it to its Net Plugin. -2. The message is processed by Node B’s Net Plugin, dispatching the proper actions. -3. The Net Plugin accesses the local chain via the Chain Controller if necessary to push or retrieve blocks. - - -### 2.1. Local Chain - -The local chain is the node’s local copy of the blockchain. It consists of both irreversible and reversible blocks received by the node, each block being cryptographically linked to the previous one. The list of irreversible blocks contains the actual copy of the immutable blockchain. The list of reversible blocks is typically shorter in length and it is managed by the Fork Database as the Chain Controller pushes blocks to it. The local chain is depicted below.
- -![](/images/protocol-p2p_local_chain.png "Local Chain (before pruning)") - -Each node constructs its own local copy of the blockchain as it receives blocks and transactions and syncs their state with other peers. The reversible blocks are those new blocks received that have not yet reached finality. As such, they are likely to form branches that stem from a main common ancestor, which is the LIB (last irreversible block). Other common ancestors different from the LIB are also possible for reversible blocks. In fact, any two sibling branches always have a nearest common ancestor. For instance, in the diagram above, block 52b is the nearest common ancestor for the branches starting at block 53a and 53b that is different from the LIB. Every active branch in the local chain has the potential to become part of the blockchain. - -#### 2.1.1. LIB Block - -All irreversible blocks constructed in a node are expected to match those from other nodes up to the last irreversible block (LIB) of each node. This is the distributed nature of the blockchain. Eventually, as the blocks that follow the LIB block reach finality, the LIB block moves up the chain through one of the branches as it catches up with the head block (HB). When the LIB block advances, the immutable blockchain effectively grows. In this process, the head block might switch branches multiple times depending on the potential head block numbers received and their timestamps, which is ultimately used as a tiebreaker. - -### 2.2. Chain Controller - -The Chain Controller manages the basic operations on blocks and transactions that change the local chain state, such as validating and executing transactions, pushing blocks, etc. The Chain Controller receives commands from the Net Plugin and dispatches the proper operation on a block or a transaction based on the network message received by the Net Plugin.
The network messages are exchanged continuously between the EOS nodes as they communicate with each other to sync the state of blocks and transactions. - -#### 2.2.1. Signals' Producer and Consumer - -The producer and consumer of the signals defined in the controller and their life cycle during normal operation, fork, and replay are as follows: - -##### pre_accepted_block (carry signed_block_ptr) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller | push_block | before the block is added to the fork db | -| | replay_push_block | before the replayed block is added to the fork db (only if the replayed block is not irreversible since irreversible block is not added to fork db during replay) | - -- Consumed by - -| Module | Usage | -| --- | --- | -| chain_plugin | checkpoint validation | -| | forward data to pre_accepted_block_channel | - -##### accepted_block_header (carry block_state_ptr) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller | push_block | after the block is added to fork db | -| | commit_block | after the block is added to fork db (only if you are the one who produced the block; in other words, this is not applicable to blocks received from others) | -| | replay_push_block | after the replayed block is added to fork db (only if the replayed block is not irreversible since irreversible block is not added to fork db during replay) | - -- Consumed by - -| Module | Usage | -| --- | --- | -| chain_plugin | forward data to accepted_block_header_channel | - -##### accepted_block (carry block_state_ptr) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller | commit_block | when the block is finalized | - -- Consumed by - -| Module | Usage | -| --- | --- | -| net_plugin | broadcast block to other peers | - -##### irreversible_block (carry block_state_ptr) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller |
log_irreversible | before it's appended to the block log and before the chainbase db is committed | -| | replay_push_block | when replaying an irreversible block | - -- Consumed by - -| Module | Usage | -| --- | --- | -| controller | setting the current lib of wasm_interface | -| chain_plugin | forward data to irreversible_block_channel | - -##### accepted_transaction (carry transaction_metadata_ptr) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller | push_transaction | when the transaction executes successfully (only once, i.e. when it's unapplied and reapplied the signal won't be emitted) | -| | push_scheduled_transaction | when the scheduled transaction executes successfully | -| | | when the scheduled transaction fails (subjective/soft/hard) | -| | | when the scheduled transaction expires | -| | | after applying onerror | - -- Consumed by - -| Module | Usage | -| --- | --- | -| chain_plugin | forward data to accepted_transaction_channel | - -##### applied_transaction (carry std::tuple) - -- Produced by - -| Module | Function | Condition | -| --- | --- | --- | -| controller | push_transaction | when the transaction executes successfully | -| | push_scheduled_transaction | when the scheduled transaction executes successfully | -| | | when the scheduled transaction fails (subjective/soft/hard) | -| | | when the scheduled transaction expires | -| | | after applying onerror | - -- Consumed by - -| Module | Usage | -| --- | --- | -| chain_plugin | forward data to applied_transaction_channel | - -##### bad_alloc -Not used. - -#### 2.2.2. Signals' Life Cycle - -##### A. normal operation where blocks and transactions are input - -1. When a transaction is pushed to the blockchain (through RPC or broadcast by a peer) - 1. Transaction either executes successfully or fails validation -> `accepted_transaction` is emitted by the controller - 2.
chain_plugin will react to the signal to forward the transaction_metadata to accepted_transaction_channel -2. When a scheduled transaction is pushed to the blockchain - 1. Transaction either executes successfully, fails subjectively, soft fails, or hard fails -> `accepted_transaction` is emitted by the controller - 2. chain_plugin will react to the signal to forward the transaction_metadata to accepted_transaction_channel -3. When a block is pushed to the blockchain (through RPC or broadcast by a peer) - 1. Before the block is added to fork db -> `pre_accepted_block` will be emitted by the controller - 2. chain_plugin will react to the signal to validate the block against the checkpoints and forward the data to pre_accepted_block_channel - 3. After the block is added to fork db -> `accepted_block_header` will be emitted by the controller - 4. chain_plugin will react to the signal to forward the block_state to accepted_block_header_channel - 5. Then the block will be applied, at this time all the transactions and scheduled_transactions inside the block will be pushed. All signals related to push_transaction and push_scheduled_transaction (see point A.1 and A.2) will be emitted. - 6. When committing the block -> `accepted_block` will be emitted by the controller - 7. net_plugin will react to the signal and broadcast the block to the peers - 8. If a new block becomes irreversible, signals related to irreversible block will be emitted (see point A.5) -4. When a block is produced - 1. For the block that is produced by you, the block will be added to the fork_db when it is committed -> `accepted_block_header` will be emitted by the controller - 2. chain_plugin will react to the signal to forward the block_state to accepted_block_header_channel and validate it with the checkpoint - 3. Immediately after that (while committing the block) -> `accepted_block` will be emitted by the controller - 4.
net_plugin will react to the signal and broadcast the block to the peers - 5. If a new block becomes irreversible, signals related to irreversible block will be emitted (see point A.5) -5. When a block becomes irreversible - 1. Once a block is deemed irreversible -> `irreversible_block` will be emitted by the controller before the block is appended to the block log and the chainbase db is committed - 2. chain_plugin will react to the signal to forward the block_state to irreversible_block_channel and also set the lib of wasm_interface - -##### B. operation where forks are presented and resolved - -1. When forks are presented, the blockchain will pop all existing blocks up to the forking point and then apply all new blocks in the fork. -2. When applying the new block, all the transactions and scheduled_transactions inside the block will be pushed. All signals related to push_transaction and push_scheduled_transaction (see point A.1 and A.2) will be emitted. -3. And then when committing the new block -> `accepted_block` will be emitted by the controller -4. net_plugin will react to the signal and broadcast the block to the peers -5. If a new block becomes irreversible, signals related to irreversible block will be emitted (see point A.5) - -##### C. normal replay (with or without replay optimization) - -1. When replaying an irreversible block -> `irreversible_block` will be emitted by the controller -2. Refer to A.5 to see how the `irreversible_block` signal is handled -3. When replaying a reversible block, before the block is added to fork_db -> `pre_accepted_block` will be emitted by the controller -4. When replaying a reversible block, after the block is added to fork db -> `accepted_block_header` will be emitted by the controller -5. When replaying a reversible block, when the block is committed -> `accepted_block` will be emitted by the controller -6.
Refer to A.3 to see how the `pre_accepted_block`, `accepted_block_header` and `accepted_block` signals are handled - -#### 2.2.3. Fork Database - -The Fork Database (Fork DB) provides an internal interface for the Chain Controller to perform operations on the node’s local chain. As new blocks are received from other peers, the Chain Controller pushes these blocks to the Fork DB. Each block is then cryptographically linked to a previous block. Since more than one block might link to the same previous block, the process is likely to produce temporary branches called mini-forks. Thus, the Fork DB serves three main purposes: - -* Resolve which branch the pushed block (new head block) will build off from. -* Advance the head block, the root block, and the LIB block. -* Trim off invalid branches and purge orphaned blocks. - -In essence, the Fork DB contains all the candidate block branches within a node that may become the actual branch that continues to grow the blockchain. The root block always marks the beginning of the reversible block tree, and will match the LIB block, except when the LIB advances, in which case the root block must catch up. The calculation of the LIB block as it advances through the new blocks within the Fork DB will ultimately decide which branch gets selected. As the LIB block advances, the root block catches up with the new LIB, and any candidate branch whose ancestor node is behind the LIB gets pruned. This is depicted below. - -![](/images/protocol-p2p_local_chain_prunning.png "Local Chain (after pruning)") - -In the diagram above, the branch starting at block 52b gets pruned (blocks 52b, 53a, 53b are invalid) after the LIB advances from block 51 to block 52c then 53c. As the LIB moves through the reversible blocks, they are moved from the Fork DB to the local chain as they now become part of the immutable blockchain. Finally, block 54d is kept in the Fork DB since new blocks might still be built off from it. - - -### 2.3.
Net Plugin - -The Net Plugin defines the actual peer-to-peer communication messages between the EOS nodes. The main goal of the Net Plugin is to sync valid blocks upon request and to forward valid transactions invariably. To that end, the Net Plugin delegates functionality to the following components: - -* **Sync Manager**: maintains the block syncing state of the node with respect to its peers. -* **Dispatch Manager**: maintains the list of blocks and transactions sent by the node. -* **Connection List**: list of active peers the node is currently connected to. -* **Message Handler**: dispatches protocol messages to the corresponding handler (see [4.2. Protocol Messages](#42-protocol-messages)). - - -#### 2.3.1. Sync Manager - -The Sync Manager implements the functionality for syncing block state between the node and its peers. It processes the messages sent by each peer and performs the actual syncing of the blocks based on the status of the node’s LIB or head block with respect to that peer. At any point, the node can be in any of the following sync states: - -* **LIB Catch-Up**: node is about to sync with another peer's LIB block. -* **Head Catch-Up**: node is about to sync with another peer's HEAD block. -* **In-Sync**: both LIB and HEAD blocks are in sync with the other peers. - -If the node’s LIB or head block is behind, the node will generate sync request messages to retrieve the missing blocks from the connected peer. Similarly, if a connected peer’s LIB or head block is behind, the node will send notice messages to notify the peer about which blocks it needs to sync with. For more information about sync modes see [3. Operation Modes](#3-operation-modes). - -#### 2.3.2. Dispatch Manager - -The Dispatch Manager maintains the state of blocks and loose transactions received by the node.
The state contains basic information to identify a block or a transaction and it is maintained within two indexed lists of block states and transaction states: - -* **Block State List**: list of block states managed by the node for all blocks received. -* **Transaction State List**: list of transaction states managed by the node for all transactions received. - -This makes it possible to locate very quickly which peer has a given block or transaction. - - -##### 2.3.2.1. Block State - -The block state identifies a block and the peer it came from. It is transient in nature, so it is only valid while the node is active. The block state contains the following fields: - -Block State Fields | Description --|- -`id` | 256-bit block identifier. A function of the block contents and the block number. -`block_num` | 32-bit unsigned counter value that identifies the block sequentially since genesis. -`connection_id` | 32-bit unsigned integer that identifies the connected peer the block came from. -`have_block` | boolean value indicating whether the actual block has been received by the node. - -The list of block states is indexed by block ID, block number, and connection ID for faster lookup. This allows querying the list for any block given one or more of the indexed attributes. - - -##### 2.3.2.2. Transaction State - -The transaction state identifies a loose transaction and the peer it came from. It is also transient in nature, so it is only valid while the node is active. The transaction state contains the following fields: - -Transaction State Fields | Description --|- -`id` | 256-bit hash of the transaction instance, used as transaction identifier. -`expires` | expiration time since EOS block timestamp epoch (January 1, 2000). -`block_num` | current head block number. Transaction drops when LIB catches up to it. -`connection_id` | 32-bit integer that identifies the connected peer the transaction came from.
- -The `block_num` stores the node's head block number when the transaction is received. It is used as a backup mechanism to drop the transaction when the LIB block number catches up with the head block number, regardless of expiration. - -The list of transaction states is indexed by transaction ID, expiration time, block number, and connection ID for faster lookup. This allows querying the list for any transaction given one or more of the indexed attributes. - -##### 2.3.2.3. State Recycling - -As the LIB block advances (see [3.3.1. LIB Catch-Up Mode](#331-lib-catch-up-mode)), all blocks prior to the new LIB block are considered finalized, so their state is removed from the local list of block states, including the list of block states owned by each peer in the list of connections maintained by the node. Likewise, transaction states are removed from the list of transactions based on expiration time. Therefore, after a transaction expires, its state is removed from all lists of transaction states. - -The lists of block states and transaction states have a light footprint and feature high rotation, so they are maintained in memory for faster access. The actual contents of the blocks and transactions received by a node are stored temporarily in the fork database and the various incoming queues for applied and unapplied transactions, respectively. - -#### 2.3.3. Connection List - -The Connection List contains the connection state of each peer. It keeps information about the p2p protocol version, the state of the blocks and transactions from the peer that the node knows about, whether it is currently syncing with that peer, the last handshake message sent and received, whether the peer has requested information from the node, the socket state, the node ID, etc. The connection state includes the following relevant fields: - -* **Info requested**: whether the peer has requested information from the node.
-* **Socket state**: a pointer to the socket structure holding the TCP connection state. -* **Node ID**: the actual node ID that distinguishes the peer’s node from the other peers. -* **Last Handshake Received**: last handshake message instance received from the peer. -* **Last Handshake Sent**: the last handshake message instance sent to the peer. -* **Handshake Sent Count**: the number of handshake messages sent to the peer. -* **Syncing**: whether or not the node is syncing with the peer. -* **Protocol Version**: the internal protocol version implemented by the peer’s Net Plugin. - -The block state consists of the following fields: - -* **Block ID**: a hash of the serialized contents of the block. -* **Block number**: the actual block number since genesis. - -The transaction state consists of the following fields: - -* **Transaction ID**: a hash of the serialized contents of the transaction. -* **Block number**: the actual block number the transaction was included in. -* **Expiration time**: the time in seconds for the transaction to expire. - - -### 2.4. Net Serializer - -The Net Serializer has two main roles: - -* Serialize objects and messages that need to be transmitted over the network. -* Serialize objects and messages that need to be cryptographically hashed. - -In the first case, each serialized object or message needs to get deserialized at the other end upon receipt from the network for further processing. In the latter case, serialization of specific fields within an object instance is needed to generate cryptographic hashes of its contents. Most IDs generated for a given object type (action, transaction, block, etc.) consist of a cryptographic hash of the relevant fields from the object instance. - - -## 3. Operation Modes - -From an operational standpoint, a node can be in either one of three states with respect to a connected peer: - -* **In-Sync mode**: node is in sync with peer, so no blocks are required from that peer. 
-* **LIB Catch-Up mode**: node requires blocks since LIB block is behind that peer’s LIB. -* **HEAD Catch-Up mode**: node requires blocks since HEAD block is behind that peer’s Head. - -The operation mode for each node is stored in a sync manager context within the Net Plugin of the nodeos service. Therefore, a node is always in either in-sync mode or some variant of catchup mode with respect to its connected peers. This allows the node to switch back and forth between catchup mode and in-sync mode as the LIB and head blocks are updated and new blocks are received from other peers. - -### 3.1. Block ID - -The EOS software checks whether two blocks match or hold the same content by comparing their block IDs. A block ID is a function that depends on the contents of the block header and the block number (see [Consensus Protocol: 5.1. Block Structure](01_consensus-protocol.md#51-block-structure)). Checking whether two blocks are equal is crucial for syncing a node’s local chain with that of its peers. To generate the block ID from the block contents, the block header is serialized and a SHA-256 digest is created. The most significant 32 bits are assigned the block number while the least significant 224 bits of the hash are retained. Note that the block header includes the root hash of both the transaction merkle tree and the action merkle tree. Therefore, the block ID depends on all transactions included in the block as well as all actions included in each transaction. - -### 3.2. In-Sync Mode - -During in-sync mode, the node's head block is caught up with the peer's head block, which means the node is in sync block-wise. When in in-sync mode, the node does not request further blocks from peers, but continues to perform the other functions: - -* **Validate transactions**, drop them if invalid; forward them to other peers if valid. -* **Validate blocks**, drop them if invalid; forward them to other peers upon request if valid.
- -Therefore, this mode trades bandwidth in favor of latency, being particularly useful for validating transactions that rely on TaPoS (transaction as proof of stake) due to lower processing overhead. - -Note that loose transactions are always forwarded if valid and not expired. Blocks, on the other hand, are only forwarded if valid and if explicitly requested by a peer. This reduces network overhead. - -### 3.3. Catch-Up Mode - -A node is in catchup mode when its head block is behind the peer’s LIB or the peer’s head block. If syncing is needed, it is performed in two sequential steps: - -1. Sync the node’s LIB from the nearest common ancestor + 1 up to the peer’s LIB. -2. Sync the node’s head from the nearest common ancestor + 1 up to the peer’s head. - -Therefore, the node’s LIB block is updated first, followed by the node’s head block. - -#### 3.3.1. LIB Catch-Up Mode - -Case 1 above, where the node’s LIB block needs to catch up with the peer’s LIB block, is depicted in the below diagram, before and after the sync (Note: inapplicable branches have been removed for clarity): - -![](/images/protocol-p2p_lib_catchup.png "LIB Catch-Up Mode") - -In the above diagram, the node’s local chain syncs up with the peer’s local chain by appending finalized blocks 91 and 92 (the peer’s LIB) to the node’s LIB (block 90). Note that this discards the temporary fork consisting of blocks 91n, 92n, 93n. Also note that these blocks have an “n” suffix (short for node) to indicate that they are not finalized, and therefore, might be different from the peer’s. The same applies to unfinalized blocks on the peer; they end in “p” (short for peer). After syncing, note that both the LIB (lib) and the head block (hb) have the same block number on the node. - -#### 3.3.2. Head Catch-Up Mode - -After the node’s LIB block is synced with the peer’s, there will be new blocks pushed to either chain. Case 2 above covers the case where the peer’s chain is longer than the node’s chain.
This is depicted in the following diagram, which shows the node and the peer’s local chains before and after the sync: - -![](/images/protocol-p2p_head_catchup.png "Head Catch-Up Mode") - -In either case 1 or 2 above, the syncing process in the node involves locating the first common ancestor block starting from the node’s head block, traversing the chains back, and ending in the LIB blocks, which are now in sync (see [3.3.1. LIB Catch-Up Mode](#331-lib-catch-up-mode)). In the worst case scenario, the synced LIBs are the nearest common ancestor. In the above diagram, the node’s chain is traversed from head block 94n, 93n, etc. trying to match blocks 94p, 93p, etc. in the peer’s chain. The first block that matches is the nearest common ancestor (block 93n and 93p in the diagram). Therefore, the following blocks 94p and 95p are retrieved and appended to the node’s chain right after the nearest common ancestor, now re-labeled 93n,p (see [3.3.3. Block Retrieval](#333-block-retrieval) process). Finally, block 95p becomes the node’s head block and, since the node is fully synced with the peer, the node switches to in-sync mode. - - -#### 3.3.3. Block Retrieval - -After the common ancestor is found, a sync request message is sent to retrieve the blocks needed by the node, starting from the next block after the nearest common ancestor and ending in the peer’s head block. - -To make effective use of bandwidth, the required blocks are obtained from various peers, rather than just one, if necessary. Depending on the number of blocks needed, the blocks are requested in chunks by specifying the start block number and the end block number to download from a given peer. The node uses the list of block states to keep track of which blocks each peer has, so this information is used to determine which connected peers to request block chunks from. 
This process is depicted in the diagram below: - -![](/images/protocol-p2p-node-peer-sync.png "Node-peer syncing") - -When both LIB and head blocks are caught up with respect to the peer, the operation mode in the Sync Manager is switched from catch-up mode to in-sync mode. - - -### 3.4. Mode Switching - -Eventually, both the node and its peer receive new fresh blocks from other peers, which in turn push the blocks to their respective local chains. This causes the head blocks on each chain to advance. Depending on which chain grows first, one of the following actions occur: - -* The node sends a catch up request message to the peer with its head block info. -* The node sends a catch up notice message to inform the peer it needs to sync. - -In the first case, the node switches the mode from in-sync to head catchup mode. In the second case, the peer switches to head catchup mode after receiving the notice message from the node. In practice, in-sync mode is short-lived. When the EOS blockchain is very busy, nodes spend most of their time in catchup mode validating transactions and syncing their chains after catchup messages are received. - - -## 4. Protocol Algorithm - -The p2p protocol algorithm runs on every node, forwarding validated transactions and validated blocks. Starting EOS v2.0, a node also forwards block IDs of unvalidated blocks it has received. In general, the simplified process is as follows: - -1. A node requests data or sends a control message to a peer. -2. If the request can be fulfilled, the peer executes the request; repeat 1. - -The data messages contain the block contents or the transaction contents. The control messages make possible the syncing of blocks and transactions between the node and its peers (see [Protocol Messages](#42-protocol-messages)). In order to allow such synchronization, each node must be able to retrieve information about its own state of blocks and transactions as well as that of its peers. - - -### 4.1. 
Node/Peers Status - -Before attempting to sync state, each node needs to know the current status of its own blocks and transactions. It must also be able to query other peers to obtain the same information. In particular, nodes must be able to obtain the following on demand: - -* Each node can find out which blocks and transactions it currently has. -* All nodes can find out which blocks and transactions their peers have. -* Each node can find out which blocks and transactions it has requested. -* All nodes can find out when each node has received a given transaction. - -To perform these queries, and thereafter when syncing state, the Net Plugin defines specific communication messages to be exchanged between the nodes. These messages are sent by the Net Plugin when transmitted and received over a TCP connection. - - -### 4.2. Protocol Messages - -The p2p protocol defines the following control messages for peer to peer node communication: - -Control Message | Description --|- -`handshake_message` | initiates a connection to another peer and sends LIB/head status. -`chain_size_message` | requests LIB/head status from peer. Not currently implemented. -`go_away_message` | sends disconnection notification to a connecting or connected peer. -`time_message` | transmits timestamps for peer synchronization and error detection. -`notice_message` | informs peer which blocks and transactions node currently has. -`request_message` | informs peer which blocks and transaction node currently needs. -`sync_request_message` | requests peer a range of blocks given their start/end block numbers. - -The protocol also defines the following data messages for exchanging the actual contents of a block or a loose transaction between peers on the p2p network: - -Data Message | Description --|- -`signed_block` | serialized contents of a signed block. -`packed_transaction` | serialized contents of a packed transaction. - - -#### 4.2.1. 
Handshake Message - -The handshake message is sent by a node when connecting to another peer. It is used by the connecting node to pass its chain state (LIB number/ID and head block number/ID) to the peer. It is also used by the peer to perform basic validation on the node the first time it connects, such as whether it belongs to the same blockchain, validating that fields are within range, detecting inconsistent block states on the node, such as whether its LIB is ahead of the head block, etc. The handshake message consists of the following fields: - -Message Field | Description --|- -`network_version` | internal net plugin version to keep track of protocol updates. -`chain_id` | hash value of the genesis state and config options. Used to identify chain. -`node_id` | the actual node ID that distinguishes the peer’s node from the other peers. -`key` | public key for peer to validate node; may be a producer or peer key, or empty. -`time` | timestamp the handshake message was created since epoch (Jan 1, 2000). -`token` | SHA-256 digest of timestamp to prove node owns private key of the key above. -`sig` | signature for the digest above after node signs it with private key of the key above. -`p2p_address` | IP address of node. -`last_irreversible_block_num` | the actual block count of the LIB block since genesis. -`last_irreversible_block_id` | a hash of the serialized contents of the LIB block. -`head_num` | the actual block count of the head block since genesis. -`head_id` | a hash of the serialized contents of the head block. -`os` | operating system where node runs. This is detected automatically. -`agent` | the name supplied by node to identify itself among its peers. -`generation` | counts `handshake_message` invocations; detects first call for validation. 
- -If all checks succeed, the peer proceeds to authenticate the connecting node based on the `--allowed-connection` setting specified for that peer's net plugin when `nodeos` started: - -* **Any**: connections are allowed without authentication. -* **Producers**: peer key is obtained via p2p protocol. -* **Specified**: peer key is provided via settings. -* **None**: the node does not allow connection requests. - -The peer key corresponds to the public key of the node attempting to connect to the peer. If authentication succeeds, the receiving node acknowledges the connecting node by sending a handshake message back, which the connecting node validates in the same way as above. Finally, the receiving node checks whether the peer’s head block or its own needs syncing. This is done by checking the state of the head block and the LIB of the connecting node with respect to its own. From these checks, the receiving node determines which chain needs syncing. - - -#### 4.2.2. Chain Size Message - -The chain size message was defined for future use, but it is currently not implemented. The idea was to send ad-hoc status notifications of the node’s chain state after a successful connection to another peer. The chain size message consists of the following fields: - -Message Field | Description --|- -`last_irreversible_block_num` | the actual block count of the LIB block since genesis. -`last_irreversible_block_id` | a hash of the serialized contents of the LIB block. -`head_num` | the actual block count of the head block since genesis. -`head_id` | a hash of the serialized contents of the head block. - -The chain size message is superseded by the handshake message, which also sends the status of the LIB and head blocks, but includes additional information so it is preferred. - - -#### 4.2.3. Go Away Message - -The go away message is sent to a peer before closing the connection. 
It is usually the result of an error that prevents the node from continuing the p2p protocol further. The go away message consists of the following fields: - -Message Field | Description --|- -`reason` | an error code signifying the reason to disconnect from peer. -`node_id` | the node ID for the disconnecting node; used for duplicate notification. - -The current reason codes are defined as follows: - -* **No reason**: indicate no error actually; the default value. -* **Self**: node was attempting to self connect. -* **Duplicate**: redundant connection detected from peer. -* **Wrong chain**: the peer's chain ID does not match. -* **Wrong version**: the peer's network version does not match. -* **Forked**: the peer's irreversible blocks are different -* **Unlinkable**: the peer sent a block we couldn't use -* **Bad transaction**: the peer sent a transaction that failed verification. -* **Validation**: the peer sent a block that failed validation. -* **Benign other**: reasons such as a timeout. not fatal but warrant resetting. -* **Fatal other**: a catch all for fatal errors that have not been isolated yet. -* **Authentication**: peer failed authentication. - -After the peer receives the go away message, the peer should also close the connection. - - -#### 4.2.4. Time Message - -The time message is used to synchronize events among peers, measure time intervals, and detect network anomalies such as duplicate messages, invalid timestamps, broken nodes, etc. The time message consists of the following fields: - -Message Field | Description --|- -`org` | origin timestamp; set when marking the beginning of a time interval. -`rec` | receive timestamp; set when a message arrives from the network. -`xmt` | transmit timestamp; set when a message is placed on the send queue. -`dst` | destination timestamp; set when marking the end of a time interval. - - -#### 4.2.5. 
Notice Message - -The notice message is sent to notify a peer which blocks and loose transactions the node currently has. The notice message consists of the following fields : - -Message Field | Description --|- -`known_trx` | sorted list of known transaction IDs node has available. -`known_blocks` | sorted list of known block IDs node has available. - -Notice messages are lightweight since they only contain block IDs and transaction IDs, not the actual block or transaction. - - -#### 4.2.6. Request Message - -The request message is sent to notify a peer which blocks and loose transactions the node currently needs. The request message consists of the following fields: - -Message Field | Description --|- -`req_trx` | sorted list of requested transaction IDs required by node. -`req_blocks` | sorted list of requested block IDs required by node. - - -#### 4.2.7. Sync Request Message - -The sync request message requests a range of blocks from peer. The sync request message consists of the following fields: - -Message Field | Description --|- -`start_block` | start block number for the range of blocks to receive from peer. -`end_block` | end block number for the range of blocks to receive from peer. - -Upon receipt of the sync request message, the peer sends back the actual blocks for the range of block numbers specified. - - -### 4.3. Message Handler - -The p2p protocol uses an event-driven model to process messages, so no polling or looping is involved when a message is received. Internally, each message is placed in a queue and the next message in line is dispatched to the corresponding message handler for processing. 
At a high level, the message handler can be defined as follows: - -```console - receiver/read handler: - if handshake message: - verify that peer's network protocol is valid - if node's LIB < peer's LIB: - sync LIB with peer's; continue - if node's LIB > peer's LIB: - send LIB catchup notice message; continue - if notice message: - update list of blocks/transactions known by remote peer - if trx message: - insert into global state as unvalidated - validate transaction; drop if invalid, forward if valid - else - close the connection -``` - - -### 4.4. Send Queue - -Protocol messages are placed in a buffer queue and sent to the appropriate connected peer. At a higher level, a node performs the following operations with each connected peer in a round-robin fashion: - -```console - send/write loop: - if peer knows the LIB: - if peer does not know we have a block or transaction: - next iteration - if peer does not know about a block: - send transactions for block that peer does not know - next iteration - if peer does not know about transactions: - sends oldest transactions unknown to remote peer - next iteration - wait for new validated block, transaction, or peer signal - else: - assume peer is in catchup mode (operating on request/response) - wait for notice of sync from the read loop -``` - - -## 5. Protocol Improvements - -Any software updates to the p2p protocol must also scale progressively and consistently across all nodes. This translates into installing updates that reduce operation downtime and potentially minimize it altogether while deploying new functionality in a backward compatible manner, if possible. On the other hand, data throughput can be increased by taking measures that minimize message footprint, such as using data compression and binary encoding of the protocol messages. 
diff --git a/native/60_advanced-topics/20_introduction-finalizers-voting.md b/native/60_advanced-topics/20_introduction-finalizers-voting.md new file mode 100644 index 00000000..777e6273 --- /dev/null +++ b/native/60_advanced-topics/20_introduction-finalizers-voting.md @@ -0,0 +1,50 @@ +--- +title: Introduction to Finalizers and Voting +--- + +## Takeaways +- Finality is a separate role tied to the authority of the top 21 block producers. +- Voting is continuous; try to keep your producer active between rounds. +- There is a voting history file. +- It is always safe to activate new, never-used BLS keys. +- Be aware of vote routing topology and activate vote threads to relay votes. + +## Introduction to Finalizers and Voting +The EOS blockchain bundles transactions into blocks and, working across the top 21 producers, comes to a consensus on those blocks before marking them as irreversible. EOS continues to advance its blockchain technology and has added Finalizers in Spring v1.0. Finalizers enable the blockchain to mark blocks as irreversible seconds after they are published. This improvement in time to finality does not come at the cost of safety. Marking a block as irreversible continues to require agreement from 15 out of the top 21 producers. + +## Finalizer +In Spring v1.0, Finalizers are tightly coupled to the role of the block producer and publisher. Only the top 21 block producers may vote to advance finality. The top block producers are determined by the votes they receive from community members who stake their EOS. This delegated proof of stake remains unchanged and continues the distributed governance model that has worked so well for EOS. Starting with Spring v1.0, block producers have the authority to run two separate functions. +- Block Publisher: Bundles transactions into a block, and links that block and its transactions to previous blocks and transactions.
+- Finalizer: Cryptographic verification of the on-chain transactions. + +To lower the time to irreversible blocks, also known as time to finality, producers must exchange information about the state of the chain more frequently. Starting in Spring v1.0, the top 21 producers vote on every block, and successful votes are included with every block that is produced. In Spring v1.0, blocks are produced every 500ms. These votes are later aggregated and verified. If there is agreement about the state of the blockchain, finality is advanced. The Finalizer is the software responsible for aggregating votes, performing cryptographic verification, and determining if finality may be advanced. + +### Block Publisher Rounds +The top block producers alternate publishing blocks on a schedule determined by their accumulated votes. This publishing period is called a round. If a publisher is unavailable during its round, that publisher is skipped over and another publisher takes its place. + +### Voting Overview +Unlike the Block Publishing process, voting does not have a schedule. There are no rounds. The top 21 block producers submit votes on every block. Agreement on the state of the chain from 15 of the top 21 is required before a block may be marked as irreversible. Currently, there is no penalty if a block producer's vote is not included as part of the finality calculation. If the EOS blockchain fails to get votes, or those votes disagree on the state of the chain, finality will not advance, and the last irreversible block will remain unchanged. + +Votes include a cryptographically signed digest for the Merkle tree representing the current state of the blockchain. As the chain adds transactions and changes, those digests will change. To vote quickly and efficiently, each Finalizer has a `safety.dat` file which stores this history of votes and the digests that have been voted on.
The digests stored in the history file are used as reference points for calculating future digests from incoming blocks and transactions. + +### BLS Keys and Signatures +Voting creates signed digests using BLS Keys. BLS signatures are very cheap to aggregate into a single message, and this property makes them a good choice for aggregating votes from many different finalizers. This property also makes it cheap to support a large set of BLS signatures. + +### Keys and Voting History +There is an implicit link between the BLS Key used to sign votes and the voting history stored in `safety.dat`. + +The voting history stored in `safety.dat` includes signed digests using the BLS key registered during voting. As we mentioned previously, this history is used to generate new votes. If `safety.dat` does not contain the full voting history for a BLS Key, the votes will not be correct. When a partial voting history is included in `safety.dat`, that block producer will vote for a different branch of the blockchain. This will create a vote that does not contribute to finality. + +Activating a new, never-used BLS Key is always safe. There is no voting history for a new, never-used key. + +Please take care when managing the `safety.dat` file. Do not share BLS keys across hosts, or reuse the same BLS key when moving from host to host. Sharing and reuse of BLS Keys may result in a corrupted or partial voting history. + +### Continuous Voting +Unlike block publishing, for the top 21 block producers, voting is continuous. Taking a producer offline would prevent that producer from voting to advance finality. To support continuous voting and manage various support scenarios, the EOS blockchain provides on-chain actions to register, activate, and delete BLS Keys. Using these actions, a producer can quickly rotate to a new BLS Key.
+ +For this reason, it is recommended that each producer instance uses its own unique BLS Key, and activates the BLS Key when going online. There are many strategies for [managing BLS Keys](../managing-finalizer-keys). + +### Voting and Peering +All the nodeos instances, from the source of the votes to the receiver of the votes, along with any intermediate nodes, must be configured to send, receive, and propagate votes. This is accomplished by enabling vote threading, i.e., configuring `vote-threads` to a value greater than zero. By default, `vote-threads` is greater than zero on all block production nodes. Therefore, when two finalizers are directly peered, votes are sent and received with no additional configuration changes needed. + +When nodeos instances are not directly connected, and an intermediate nodeos instance is present, the intermediate nodes must update their configuration to enable vote-threading. Failure to enable vote-threading on intermediate nodes will prevent the finalizer votes associated with your producer from reaching peers. diff --git a/native/60_advanced-topics/20_linking-custom-permission.md b/native/60_advanced-topics/20_linking-custom-permission.md deleted file mode 100644 index 485a38a7..00000000 --- a/native/60_advanced-topics/20_linking-custom-permission.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: "Creating and Linking Custom Permissions" ---- - -## Introduction - -On the EOS blockchain, you can create various custom permissions for accounts. A custom permission can later be linked to an action of a contract. This permission system enables smart contracts to have a flexible authorization scheme. - -This tutorial illustrates the creation of a custom permission, and subsequently, how to link the permission to an action. Upon completion of the steps, the contract's action will be prohibited from executing unless the authorization of the newly linked permission is provided.
This allows you to have greater granularity of control over an account and its various actions. - -With great power comes great responsibility. This functionality poses some challenges to the security of your contract and its users. Ensure you understand the concepts and steps prior to putting them to use. - -[[info |Parent permission ]] -| When you create a custom permission, the permission will always be created under a parent permission. - -If you have the authority of a parent permission which a custom permission was created under, you can always execute an action which requires that custom permission. - -## Step 1. Create a Custom Permission - -Firstly, let's create a new permission level on the `alice` account: - -```shell -dune -- cleos set account permission alice upsert YOUR_PUBLIC_KEY owner -p alice@owner -``` - -A few things to note: - -1. A new permission called **upsert** was created -2. The **upsert** permission uses the development public key as the proof of authority -3. This permission was created on the `alice` account - -You can also specify authorities other than a public key for this permission, for example, a set of other accounts. - -## Step 2. Link Authorization to Your Custom Permission - -Link the authorization to invoke the `upsert` action with the newly created permission: - -```shell -dune -- cleos set action permission alice addressbook upsert upsert -``` - -In this example, we link the authorization to the `upsert` action created earlier in the addressbook contract. - -## Step 3. Test it - -Let's try to invoke the action with an `active` permission: - -```shell -dune -- cleos push action addressbook upsert '["alice", "alice", "liddel", 21, "Herengracht", "land", "dam"]' -p alice@active -``` - -You should see an error like the one below: - -```text -Error 3090005: Irrelevant authority included -Please remove the unnecessary authority from your action! 
-Error Details: -action declares irrelevant authority '{"actor":"alice","permission":"active"}'; minimum authority is {"actor":"alice","permission":"upsert"} - -Now, try the **upsert** permission, this time, explicitly declaring the **upsert** permission we just created: (e.g. `-p alice@upsert`) - -```text -dune -- cleos push action addressbook upsert '["alice", "alice", "liddel", 21, "Herengracht", "land", "dam"]' -p alice@upsert -``` - -Now it works: - -```text -dune -- cleos push action addressbook upsert '["alice", "alice", "liddel", 21, "Herengracht", "land", "dam"] -p alice@upsert -executed transaction: - -2fe21b1a86ca2a1a72b48cee6bebce9a2c83d30b6c48b16352c70999e4c20983 144 bytes 9489 us -# addressbook <= addressbook::upsert {"user":"alice","first_name":"alice","last_name":"liddel","age":21,"street":"Herengracht","city":"land",... -# addressbook <= addressbook::notify {"user":"alice","msg":"alice successfully modified record to addressbook"} -# eosio <= addressbook::notify {"user":"alice","msg":"alice successfully modified record to addressbook"} -# abcounter <= abcounter::count {"user":"alice","type":"modify"} -``` diff --git a/native/60_advanced-topics/21_managing-finalizer-keys.md b/native/60_advanced-topics/21_managing-finalizer-keys.md new file mode 100644 index 00000000..bd6bc18f --- /dev/null +++ b/native/60_advanced-topics/21_managing-finalizer-keys.md @@ -0,0 +1,150 @@ +--- +title: Managing Finalizer Keys +--- + +Review [Introduction to Finalizers and Voting](../introduction-finalizers-voting) for additional background. The Savanna Consensus algorithm utilized by Spring v1 separates the roles of publishing blocks from signing and finalizing blocks. Finalizer Keys are needed to sign and finalize blocks. In Spring v1, all block producers are expected to be finalizers. + +## Recommended Setup +The recommendation is to generate and register several finalizer keys. It is recommended to have one finalizer key for each instance of a producer node.
A producer may have only one active finalizer key. When the keys are generated ahead of time and included in the configuration, only an on-chain action is needed to use a new finalizer key. + +### Takeaways +- It is always safe to activate a new BLS finalizer key. +- Do not reuse BLS finalizer keys between hosts. +- Generate unique BLS finalizer keys for each nodeos instance. +- Keys are activated with an on-chain action; *keys must be pre-generated and registered*. +- `safety.dat` must contain the full voting history for the BLS finalizer key in use. + +### Multiple Instances of Nodeos +Consider the scenario where there are two hosts running nodeos. One host contains the primary block producer; the other host has the backup block producer. The recommendation is that each instance of nodeos (primary producer and backup producer) has its own distinct BLS finalizer key. This requires creating and registering two BLS finalizer key pairs. The signature provider for each nodeos will reference one and only one BLS key. It is recommended that the signature provider in the primary producer node reference a different BLS finalizer key from the signature provider on the backup producer node. + +When switching to the backup node, run the usual scripts, and in addition activate the BLS finalizer key referenced in the signature provider configuration for the backup producer node. When switching back from the backup producer node to the primary producer node, in addition to running the usual scripts, you must activate the BLS finalizer key associated with the primary block producer. + +If the same BLS finalizer key is used when switching between producers, the voting history will not be complete. As a result, the associated producer will vote for a different branch of the blockchain, out of sync with the correct state of the blockchain. + +### Multiple Producers on a Single Instance +Consider the scenario where there are multiple producers on a single instance of nodeos.
The recommendation is to create, register, and activate a single finalizer key for the nodeos instance. Only one producer needs to register the finalizer key. + +### Intermediate Relay Nodes +Setting `vote-threads` on a nodeos instance is expensive, consuming CPU and adding to network traffic. For this reason, `vote-threads` is set to a non-zero value on producer node instances, but set to zero on all other instances. + +Block Producers may have an intermediate nodeos instance sitting in between their block producing nodeos and peer finalizers. In this setup, each nodeos instance (local finalizer, intermediate node, and peer finalizer) must be able to send and forward votes. With this setup there would be a BLS finalizer key for the block producing nodes. The intermediate node would not need a BLS finalizer key. The intermediate node must set `vote-threads` > 0 (*the default in Spring v1 is 4 threads*). + +### Rotating Keys +The EOS blockchain allows producers to activate new BLS finalizer keys at any time with the on-chain action `actfinkey`. Before calling the `actfinkey` action, the key must already exist in the `signature-provider` configuration when the instance of nodeos is started, and the key must be registered using the `regfinkey` action. Activating a new BLS finalizer key is always safe, and may be performed at any time. + +## Recovery +If the database is dirty or corrupted, the fastest way to restart is from a snapshot with the same finalizer key and the same `finalizers/safety.dat`. This fast restart method is similar to previous versions of Leap software. +- remove state `state/shared_memory.bin` and perhaps `blocks/blocks.log` +- restart from snapshot +- wait for node to catch up + +A full recovery is more involved and includes using a different finalizer key. Use these steps when the `finalizers/safety.dat` is corrupted or missing.
+ +- remove state `state/shared_memory.bin` and perhaps `blocks/blocks.log` +- remove `finalizers/safety.dat` +- restart from snapshot +- use a new finalizer key: call `actfinkey` to activate a previously registered finalizer key +- wait for node to catch up + +## Generating and Registering Finalizer Keys + +### Importance of Finalizer Safety +Savanna consensus introduces a new file that captures the history of finalizer voting. See [Introduction to Finalizers and Voting](../introduction-finalizers-voting) for more background on voting history and the role of the `finalizers/safety.dat` file. By default, the file `finalizers/safety.dat` is found under the data directory. `finalizers/safety.dat` must have the full voting history for the BLS finalizer key that is in use. If `finalizers/safety.dat` is corrupted, removed, or lacks the full voting history for the BLS finalizer key in use, a new BLS finalizer key must be used. + +Spring v1 introduces a new configuration option `finalizers-dir` that can change the location of the `safety.dat` file. A node operator may want to change the location of `safety.dat` to move this important file out of the nodeos default data directory. + +### Pre-Generating Finalizer Keys +It is safe to generate many BLS finalizer keys and register them ahead of their use and activation. Registration includes: +- Creating the BLS key pair +- Adding the key pair to configuration via `signature-provider`; this requires a restart of nodeos +- Calling the on-chain `regfinkey` action + +### Generate Finalizer Keys +`spring-util` is the utility designated for node operators. Only node operators need to generate a BLS finalizer key, and for that reason we use `spring-util` to generate the finalizer keys. Keys may be output to console (`--to-console`) or to file (`--file`).
+
+```
+spring-util bls create key --to-console > producer-name.finalizer.key
+```
+The output will look like this:
+```
+Private key: PVT_BLS_9-9ziZZzZcZZoiz-ZZzUtz9ZZ9u9Zo9aS9BZ-o9iznZfzUZU
+Public key: PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z
+Proof of Possession: SIG_BLS_ZPZZbZIZukZksBbZ9Z9Zfysz9zZsy9z9S9V99Z-9rZZe99vZUzZPZZlzZszZiiZVzT9ZZZZBi99Z9kZzZ9zZPzzbZ99ZZzZP9zZrU-ZZuiZZzZUvZ9ZPzZbZ_yZi9ZZZ-yZPcZZe9SZZPz9Tc9ZaZ999voB99L9PzZ99I9Zu9Zo9ZZZzTtVZbcZ-Zck_ZZUZZtfTZGszUzzBTZZGrnIZ9Z9Z9zPznyZLZIavGzZunreVZ9zZZt_ZlZS9ZZIz9yUZa9Z9-Z
+```
+
+### Add Finalizer Keys to Config
+You may add several finalizer keys to configuration. **NOTE**: Instances of nodeos must be restarted to pick up new configuration options. Keys are added to configuration with the `signature-provider` option, either via the command line or a configuration file. Placing a finalizer key into a configuration file looks like this:
+`signature-provider = PUBLIC_KEY=KEY:PRIVATE_KEY`
+For example:
+`signature-provider = PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z=KEY:PVT_BLS_9-9ziZZzZcZZoiz-ZZzUtz9ZZ9u9Zo9aS9BZ-o9iznZfzUZU`
+
+### Register Finalizer Key
+Each producer should register a finalizer key. This is done with the `regfinkey` action. No other actions are needed when registering your first key.
+- **Note** the authority used is the block producer's.
+- `finalizer_name` must be a registered producer.
+- `finalizer_key` must be in base64url format.
+- `proof_of_possession` must be a valid proof of possession signature for the `finalizer_name` being registered.
+- `linkauth` may be used to allow a lower authority to execute this action.
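+
+The `linkauth` option above can be sketched with two `cleos` commands. This is an illustrative sketch, not part of the upgrade procedure: the permission name `regfinkeys` and the public key (the well-known Antelope development key) are placeholder values.
+
+```
+# Create a custom permission named "regfinkeys" under the producer's active permission.
+cleos set account permission NewBlockProducer regfinkeys '{"threshold":1,"keys":[{"key":"EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","weight":1}]}' active
+
+# Link the custom permission to the regfinkey action (this pushes eosio::linkauth).
+cleos set action permission NewBlockProducer eosio regfinkey regfinkeys
+```
+
+Once linked, `regfinkey` can be authorized with `-p NewBlockProducer@regfinkeys` instead of the producer's active authority.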
+
+Here is a `regfinkey` example:
+```
+cleos push action eosio regfinkey '{"finalizer_name":"NewBlockProducer", \
+    "finalizer_key":"PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z", \
+    "proof_of_possession":"SIG_BLS_ZPZZbZIZukZksBbZ9Z9Zfysz9zZsy9z9S9V99Z-9rZZe99vZUzZPZZlzZszZiiZVzT9ZZZZBi99Z9kZzZ9zZPzzbZ99ZZzZP9zZrU-ZZuiZZzZUvZ9ZPzZbZ_yZi9ZZZ-yZPcZZe9SZZPz9Tc9ZaZ999voB99L9PzZ99I9Zu9Zo9ZZZzTtVZbcZ-Zck_ZZUZZtfTZGszUzzBTZZGrnIZ9Z9Z9zPznyZLZIavGzZunreVZ9zZZt_ZlZS9ZZIz9yUZa9Z9-Z"}' \
+    -p NewBlockProducer
+```
+
+### Changing Finalizer Key
+To activate a different BLS finalizer key, call the `actfinkey` action.
+- The provided `finalizer_key` must be a registered finalizer key in base64url format.
+- The authority is the authority of `finalizer_name`.
+
+First register your new key with `cleos push action eosio regfinkey ...`. Then call `actfinkey` with the public key of the registered, not-yet-active key.
+
+Example:
+```
+cleos push action eosio actfinkey '{"finalizer_name":"NewBlockProducer", \
+    "finalizer_key":"PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z"}' \
+    -p NewBlockProducer
+```
+
+### Removing Finalizer Key
+To remove a registered finalizer key that you no longer plan on using, call the `delfinkey` action.
+- `finalizer_key` must be a registered finalizer key in base64url format.
+- `finalizer_key` must not be active, unless it is the last registered finalizer key.
+- The authority is the authority of `finalizer_name`.
+
+Example:
+```
+cleos push action eosio delfinkey '{"finalizer_name":"NewBlockProducer", \
+    "finalizer_key":"PUB_BLS_SvLa9z9kZoT9bzZZZ-Zezlrst9Zb-Z9zZV9olZazZbZvzZzk9r9ZZZzzarUVzbZZ9Z9ZUzf9iZZ9P_kzZZzGLtezL-Z9zZ9zzZb9ZitZctzvSZ9G9SUszzcZzlZu-GsZnZ9I9Z"}' \
+    -p NewBlockProducer
+```
+
+### Verifying Finalizer Keys
+Active finalizer keys are stored in the `finkeys` table. This table can be accessed via cleos. The following request will show the active public BLS key for each producer:
+`cleos get table eosio eosio finkeys`
+
+To see all finalizer keys, including non-active keys, check the `finalizers` table:
+`cleos get table eosio eosio finalizers`
+
+## New Configuration Options
+These configuration options are specific to managing finality. It is recommended to use the default values and not set custom configurations.
+
+- `finalizers-dir` - Specifies the directory path for storing voting history. Node Operators may want to specify a directory outside of their nodeos data directory and manage this as a distinct file.
+- `finality-data-history` - When running SHiP to support Inter-Blockchain Communication (IBC), set `finality-data-history = true`. This enables the new `get_blocks_request_v1` field, which defaults to `null` before Savanna consensus is activated.
+- `vote-threads` - Sets the number of threads to handle voting. The default is sufficient for all known production setups, and the recommendation is to leave this value unchanged.
+
+## Avoid
+Review [Introduction to Finalizers and Voting](../introduction-finalizers-voting) for additional background. Each of the following is likely to lead to voting on a different branch of the blockchain, and therefore votes will not contribute to finality. For best results, do **NOT** do the following:
+- share `safety.dat` between hosts or producers
+- reuse BLS finalizer keys between hosts
+- back up and restore `safety.dat`
+
+## FAQ
+- Q: Should I back up and restore `safety.dat`?
+
+- A: No. You should switch to a new BLS finalizer key; that is a much easier and safer way to continue voting. Otherwise you run the risk of restoring a partial voting history and voting on the incorrect branch of the chain.
+
+- Q: When I want to switch producer hosts, can I keep using the same BLS finalizer key and copy over my `safety.dat` to the new host?
+- A: Yes, this would work, but it is not recommended. Voting is continuous, and switching to a new BLS key takes only an on-chain action. Therefore, it is best to switch over to a new BLS key, assuming that results in less voting downtime.
+
+- Q: Why use BLS keys instead of re-using the existing producer keys?
+- A: BLS signatures are very cheap to aggregate into a single message, and this property makes them a good choice for aggregating votes from many different finalizers.
diff --git a/native/60_advanced-topics/999_dune-guide.md b/native/60_advanced-topics/999_dune-guide.md
deleted file mode 100644
index c18bdfb3..00000000
--- a/native/60_advanced-topics/999_dune-guide.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: Local Development
----
-
-[Docker Utilities for Node Execution (DUNE)](https://github.com/AntelopeIO/DUNE) is a client tool that allows blockchain developers and node operators to perform boilerplate tasks related to smart contract development and node management functions.
-
-Before getting started with smart contract development, you need to learn about DUNE and how to install it on your platform.
-
-### Installation
-
-DUNE supports the following platforms:
-* Linux
-* Windows
-* MacOS
-
-Installation instructions for each supported platform are available on the [DUNE GitHub project](https://github.com/AntelopeIO/DUNE) page.
-
-Run `dune --help` to see a list of all supported commands.
-
-## Wallets
-
-DUNE handles wallet management for you.
-
-If you need to import a new key into your wallet:
-
-```shell
-dune --import-dev-key <PRIVATE_KEY>
-```
-
-## Node management
-
-Use DUNE to easily create a new local EOS blockchain.
-
-```shell
-dune --start <NODE_NAME>
-```
-
-The command above creates a new node called `NODE_NAME` with default settings. The default settings configure the new node to serve as an API/producer node. You can deploy smart contracts to this node and perform tests on it.
-
-> ❔ **Errors**
->
-> You may see errors at the end of the node setup process.
-> If you do, you can refer to this guide to troubleshoot common errors, or reach out to us on our
-> [Telegram channel](https://t.me/antelopedevs) for help.
-
-You can see a list of EOS nodes on your system:
-
-```shell
-dune --list
-```
-
-You can check if your active node's RPC API is live:
-
-```shell
-dune -- cleos get info
-```
-
-To shut down your node:
-
-```shell
-dune --stop <NODE_NAME>
-```
-
-To remove a node entirely:
-
-```shell
-dune --remove <NODE_NAME>
-```
-
-
-### Bootstrapping your environment
-
-Your development environment may need to rely on a few system contracts, such as:
-
-- `eosio.token` for **EOS** token transfers
-- `eosio.msig` for multisig transactions
-- `eosio.system` for system level actions such as resource management
-
-Bootstrapping your local node is easy. Once you have an active node running, you can bootstrap it with:
-
-```shell
-dune --bootstrap-system-full
-```
-
-
-## Account management
-
-You use accounts to interact with smart contracts, and also deploy contracts on top of accounts.
-
-To create a new account:
-
-```shell
-dune --create-account <ACCOUNT_NAME>
-```
-
-To get account info:
-
-```shell
-dune -- cleos get account <ACCOUNT_NAME>
-```
-
-## Smart contract development
-
-Create a sample project so you can learn how to compile, deploy, and interact with smart contracts using DUNE.
-
-Navigate to the directory in which you want to create your project, and then run the following command:
-
-```shell
-dune --create-cmake-app hello .
-``` - -This will create a `hello` directory with a cmake style EOS smart contract project. - -Replace the contents of `src/hello.cpp` with the following code: - -```cpp -#include -using namespace eosio; - -CONTRACT hello : public contract { - public: - using contract::contract; - - TABLE user_record { - name user; - uint64_t primary_key() const { return user.value; } - }; - typedef eosio::multi_index< name("users"), user_record> user_index; - - ACTION test( name user ) { - print_f("Hello World from %!\n", user); - user_index users( get_self(), get_self().value ); - users.emplace( get_self(), [&]( auto& new_record ) { - new_record.user = user; - }); - } -}; -``` - -### Compile your contract - -From the root of your project, run the following command to compile your contract: - -```shell -dune --cmake-build . -``` -Your contract compiles. Any errors display in the output. - -### Deploy your contract - -Create an account for your contract and then deploy it. - -```shell -dune --create-account hello -dune --deploy ./build/hello hello -``` - -> 👀 **Code Permission** -> -> By default, DUNE adds the `eosio.code` permission to an account when you deploy a contract to it. This allows the contract to trigger inline actions on other smart contracts. - -### Interacting with your contract - -Send a transaction on your local EOS. node to the blockchain to interact with your smart contract. A transaction contains multiple actions. You can send a transaction with a single action using the --send-action command. - -You must also create a test account from which to send the action. - -```shell -dune --create-account testaccount - -# format: dune --send-action -dune --send-action hello test '[bob]' testaccount -``` - -You should see a transaction executed successfully with a row added to the contract's database. If you repeat this command it will fail because that row already exists in the contract's database. 
- -### Get data from your contract - -You just added a row to the contract's database. You can fetch that data from the chain: - -```shell -# format: dune --get-table -dune --get-table hello hello users -``` - -You get a table result with one or more rows. If you did not receive a table with one or more rows, make sure the interaction above was successful. - -## Using raw programs with DUNE - -If you want to tap into the raw EOS stack, you can use the `DUNE -- ` format to access data, applications, and everything else within the container. - -Examples: - -```shell -dune -- cleos get info -dune -- nodeos --help -``` diff --git a/native/999_miscellaneous/10_helpful-links.md b/native/999_miscellaneous/10_helpful-links.md index 7aebcdb2..9795263d 100644 --- a/native/999_miscellaneous/10_helpful-links.md +++ b/native/999_miscellaneous/10_helpful-links.md @@ -4,7 +4,6 @@ title: Helpful Links - [Developers Telegram](https://t.me/antelopedevs) - [Blog](https://eosnetwork.com/blog/) -- [Get funding](https://learn.eosnetwork.com/earn) - [Github: EOS Foundation](https://github.com/eosnetworkfoundation) - [Github: Antelope Framework](https://github.com/antelopeio) - [Twitter](https://twitter.com/EOSnFoundation)