From 6732f818c1de8e4716820d713f0034e2bf199781 Mon Sep 17 00:00:00 2001 From: Steven Watanabe Date: Fri, 10 Jan 2020 17:44:11 -0500 Subject: [PATCH 01/25] Update fc --- libraries/fc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/libraries/fc b/libraries/fc index e95a03eed17..8d53b92f02b 160000 --- a/libraries/fc +++ b/libraries/fc @@ -1 +1 @@ -Subproject commit e95a03eed1796a3054e02e67f1171f8c9fdb57e5 +Subproject commit 8d53b92f02b0cb189a5c28b31a895401daf5dc0b From 6fd2b9e50935dc99f8a50b746f86e6c6bf0fce35 Mon Sep 17 00:00:00 2001 From: dskvr Date: Sat, 11 Jan 2020 16:43:24 +0100 Subject: [PATCH 02/25] update README.md for 2.0 release branch including broken links hotfix --- README.md | 83 ++++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 58 insertions(+), 25 deletions(-) diff --git a/README.md b/README.md index 783f8d21300..a3a858b5e2e 100644 --- a/README.md +++ b/README.md @@ -20,18 +20,46 @@ Some of the groundbreaking features of EOSIO include: 1. Designed for Parallel Execution of Context Free Validation Logic 1. Designed for Inter Blockchain Communication -EOSIO is released under the open source MIT license and is offered “AS IS” without warranty of any kind, express or implied. Any security provided by the EOSIO software depends in part on how it is used, configured, and deployed. EOSIO is built upon many third-party libraries such as WABT (Apache License) and WAVM (BSD 3-clause) which are also provided “AS IS” without warranty of any kind. Without limiting the generality of the foregoing, Block.one makes no representation or guarantee that EOSIO or any third-party libraries will perform as intended or will be free of errors, bugs or faulty code. Both may fail in large or small ways that could completely or partially limit functionality or compromise computer systems. If you use or implement EOSIO, you do so at your own risk. 
In no event will Block.one be liable to any party for any damages whatsoever, even if it had been advised of the possibility of damage. +## Disclaimer Block.one is neither launching nor operating any initial public blockchains based upon the EOSIO software. This release refers only to version 1.0 of our open source software. We caution those who wish to use blockchains built on EOSIO to carefully vet the companies and organizations launching blockchains based on EOSIO before disclosing any private keys to their derivative software. +## Testnets + There is no public testnet running currently. +## Supported Operating Systems + +EOSIO currently supports the following operating systems: + +1. Amazon Linux 2 +2. CentOS 7 +3. Ubuntu 16.04 +4. Ubuntu 18.04 +5. MacOS 10.14 (Mojave) + +--- + +**Note: It may be possible to install EOSIO on other Unix-based operating systems. This is not officially supported, though.** + +--- + +## Software Installation + +If you are new to EOSIO, it is recommended that you install the [EOSIO Prebuilt Binaries](#prebuilt-binaries), then proceed to the [Getting Started](https://developers.eos.io/eosio-home/docs) walkthrough. If you are an advanced developer, a block producer, or no binaries are available for your platform, you may need to [Build EOSIO from source](https://eosio.github.io/eos/latest/install/build-from-source). + --- -**If you used our build scripts to install eosio, [please be sure to uninstall](#build-script-uninstall) before using our packages.** +**Note: If you used our scripts to build/install EOSIO, please run the [Uninstall Script](#uninstall-script) before using our prebuilt binary packages.** --- +## Prebuilt Binaries + +Prebuilt EOSIO software packages are available for the operating systems below. 
Find and follow the instructions for your OS: + +### Mac OS X: + #### Mac OS X Brew Install ```sh $ brew tap eosio/eosio @@ -42,44 +70,49 @@ $ brew install eosio $ brew remove eosio ``` +### Ubuntu Linux: + #### Ubuntu 18.04 Package Install ```sh -$ wget https://github.com/eosio/eos/releases/download/v2.0.0-rc3/eosio_2.0.0-rc3-ubuntu-18.04_amd64.deb -$ sudo apt install ./eosio_2.0.0-rc3-ubuntu-18.04_amd64.deb +$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-18.04_amd64.deb +$ sudo apt install ./eosio_2.0.0-1-ubuntu-18.04_amd64.deb ``` #### Ubuntu 16.04 Package Install ```sh -$ wget https://github.com/eosio/eos/releases/download/v2.0.0-rc3/eosio_2.0.0-rc3-ubuntu-16.04_amd64.deb -$ sudo apt install ./eosio_2.0.0-rc3-ubuntu-16.04_amd64.deb +$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-16.04_amd64.deb +$ sudo apt install ./eosio_2.0.0-1-ubuntu-16.04_amd64.deb ``` #### Ubuntu Package Uninstall ```sh $ sudo apt remove eosio ``` -#### Centos RPM Package Install + +### RPM-based (CentOS, Amazon Linux, etc.): + +#### RPM Package Install ```sh -$ wget https://github.com/eosio/eos/releases/download/v2.0.0-rc3/eosio-2.0.0-rc3.el7.x86_64.rpm -$ sudo yum install ./eosio-2.0.0-rc3.el7.x86_64.rpm +$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio-2.0.0-1.el7.x86_64.rpm +$ sudo yum install ./eosio-2.0.0-1.el7.x86_64.rpm ``` -#### Centos RPM Package Uninstall +#### RPM Package Uninstall ```sh $ sudo yum remove eosio ``` -#### Build Script Uninstall - -If you have previously installed EOSIO using build scripts, you can execute `eosio_uninstall.sh` to uninstall. 
-- Passing `-y` will answer yes to all prompts (does not remove data directories) -- Passing `-f` will remove data directories (be very careful with this) -- Passing in `-i` allows you to specify where your eosio installation is located +## Uninstall Script +To uninstall the EOSIO built/installed binaries and dependencies, run: +```sh +./scripts/eosio_uninstall.sh +``` -## Supported Operating Systems -EOSIO currently supports the following operating systems: -1. Amazon Linux 2 -2. CentOS 7 -3. Ubuntu 16.04 -4. Ubuntu 18.04 -5. MacOS 10.14 (Mojave) +## Documentation +1. [Nodeos](http://eosio.github.io/eos/latest/nodeos/) + - [Usage](http://eosio.github.io/eos/latest/nodeos/usage/index) + - [Replays](http://eosio.github.io/eos/latest/nodeos/replays/index) + - [Chain API Reference](http://eosio.github.io/eos/latest/nodeos/plugins/chain_api_plugin/api-reference/index) + - [Troubleshooting](http://eosio.github.io/eos/latest/nodeos/troubleshooting/index) +1. [Cleos](http://eosio.github.io/eos/latest/cleos/) +1. [Keosd](http://eosio.github.io/eos/latest/keosd/) ## Resources 1. [Website](https://eos.io) @@ -93,7 +126,7 @@ EOSIO currently supports the following operating systems: ## Getting Started -Instructions detailing the process of getting the software, building it, running a simple test network that produces blocks, account creation and uploading a sample contract to the blockchain can be found in [Getting Started](https://developers.eos.io/eosio-home/docs) on the [EOSIO Developer Portal](https://developers.eos.io). +Instructions detailing the process of getting the software, building it, running a simple test network that produces blocks, account creation and uploading a sample contract to the blockchain can be found in the [Getting Started](https://developers.eos.io/eosio-home/docs) walkthrough. 
## Contributing @@ -103,7 +136,7 @@ Instructions detailing the process of getting the software, building it, running ## License -[MIT](./LICENSE) +EOSIO is released under the open source [MIT](./LICENSE) license and is offered “AS IS” without warranty of any kind, express or implied. Any security provided by the EOSIO software depends in part on how it is used, configured, and deployed. EOSIO is built upon many third-party libraries such as WABT (Apache License) and WAVM (BSD 3-clause) which are also provided “AS IS” without warranty of any kind. Without limiting the generality of the foregoing, Block.one makes no representation or guarantee that EOSIO or any third-party libraries will perform as intended or will be free of errors, bugs or faulty code. Both may fail in large or small ways that could completely or partially limit functionality or compromise computer systems. If you use or implement EOSIO, you do so at your own risk. In no event will Block.one be liable to any party for any damages whatsoever, even if it had been advised of the possibility of damage. ## Important From bab04a601f88b1d805d3a3dcb1937b843f61c3b4 Mon Sep 17 00:00:00 2001 From: Nathan Pierce Date: Thu, 16 Jan 2020 01:58:21 -0500 Subject: [PATCH 03/25] added SDKROOT --- .cicd/platforms/pinned/macos-10.14-pinned.sh | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/.cicd/platforms/pinned/macos-10.14-pinned.sh b/.cicd/platforms/pinned/macos-10.14-pinned.sh index 5c38259f7c5..99af86b61d0 100755 --- a/.cicd/platforms/pinned/macos-10.14-pinned.sh +++ b/.cicd/platforms/pinned/macos-10.14-pinned.sh @@ -49,11 +49,13 @@ sudo make install cd ../.. 
rm -rf clang8 # install boost from source +# Boost Fix: eosio/install/bin/../include/c++/v1/stdlib.h:94:15: fatal error: 'stdlib.h' file not found +export SDKROOT="$(xcrun --sdk macosx --show-sdk-path)" curl -LO https://dl.bintray.com/boostorg/release/1.71.0/source/boost_1_71_0.tar.bz2 tar -xjf boost_1_71_0.tar.bz2 cd boost_1_71_0 ./bootstrap.sh --prefix=/usr/local -sudo ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(getconf _NPROCESSORS_ONLN) install +sudo SDKROOT="$SDKROOT" ./b2 --with-iostreams --with-date_time --with-filesystem --with-system --with-program_options --with-chrono --with-test -q -j$(getconf _NPROCESSORS_ONLN) install cd .. sudo rm -rf boost_1_71_0.tar.bz2 boost_1_71_0 # install mongoDB From e4c8f26d3aa3ad4a4bd9419a6bf6f0d64cc58c95 Mon Sep 17 00:00:00 2001 From: Nathan Pierce Date: Thu, 16 Jan 2020 03:33:14 -0500 Subject: [PATCH 04/25] enabled ping sleep --- .cicd/generate-pipeline.sh | 12 ++++++------ libraries/fc | 2 +- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh index 22df0570fcf..657202b8c74 100755 --- a/.cicd/generate-pipeline.sh +++ b/.cicd/generate-pipeline.sh @@ -132,7 +132,7 @@ EOF failover-registries: - 'registry_1' - 'registry_2' - pre-execute-sleep: 10 + pre-execute-ping-sleep: "8.8.8.8" pre-commands: - "git clone git@github.com:EOSIO/mac-anka-fleet.git && cd mac-anka-fleet && . 
./ensure-tag.bash -u 12 -r 25G -a '-n'" env: @@ -220,7 +220,7 @@ EOF failover-registries: - 'registry_1' - 'registry_2' - pre-execute-sleep: 10 + pre-execute-ping-sleep: "8.8.8.8" agents: "queue=mac-anka-node-fleet" retry: manual: @@ -282,7 +282,7 @@ EOF failover-registries: - 'registry_1' - 'registry_2' - pre-execute-sleep: 10 + pre-execute-ping-sleep: "8.8.8.8" agents: "queue=mac-anka-node-fleet" retry: manual: @@ -347,7 +347,7 @@ EOF failover-registries: - 'registry_1' - 'registry_2' - pre-execute-sleep: 10 + pre-execute-ping-sleep: "8.8.8.8" agents: "queue=mac-anka-node-fleet" retry: manual: @@ -413,7 +413,7 @@ EOF failover-registries: - 'registry_1' - 'registry_2' - pre-execute-sleep: 10 + pre-execute-ping-sleep: "8.8.8.8" agents: "queue=mac-anka-node-fleet" retry: manual: @@ -597,7 +597,7 @@ cat < Date: Thu, 16 Jan 2020 03:33:44 -0500 Subject: [PATCH 05/25] anka buildkite plugin version bump for ping sleep --- .cicd/generate-pipeline.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh index 657202b8c74..f5df577834a 100755 --- a/.cicd/generate-pipeline.sh +++ b/.cicd/generate-pipeline.sh @@ -119,7 +119,7 @@ EOF - "cd eos && ./.cicd/build.sh" - "cd eos && tar -pczf build.tar.gz build && buildkite-agent artifact upload build.tar.gz" plugins: - - chef/anka#v0.5.5: + - NorseGaud/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} From 1f9a406d5fa7d373487ac7d5bb17d1f22757da50 Mon Sep 17 00:00:00 2001 From: Nathan Pierce Date: Thu, 16 Jan 2020 03:37:39 -0500 Subject: [PATCH 06/25] anka plugin version bump --- .cicd/generate-pipeline.sh | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh index f5df577834a..85136c80018 100755 --- a/.cicd/generate-pipeline.sh +++ b/.cicd/generate-pipeline.sh @@ -209,7 +209,7 @@ EOF - "cd eos && buildkite-agent artifact download 
build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/parallel-test.sh" plugins: - - chef/anka#v0.5.4: + - NorseGaud/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -271,7 +271,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/wasm-spec-test.sh" plugins: - - chef/anka#v0.5.4: + - NorseGaud/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -336,7 +336,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/serial-test.sh $TEST_NAME" plugins: - - chef/anka#v0.5.4: + - NorseGaud/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -402,7 +402,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . 
--step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' ${BUILD_SOURCE} && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/long-running-test.sh $TEST_NAME" plugins: - - chef/anka#v0.5.4: + - NorseGaud/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -586,7 +586,7 @@ cat < Date: Thu, 16 Jan 2020 10:18:04 -0500 Subject: [PATCH 07/25] reverted fc --- libraries/fc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/libraries/fc b/libraries/fc index e95a03eed17..8d53b92f02b 160000 --- a/libraries/fc +++ b/libraries/fc @@ -1 +1 @@ -Subproject commit e95a03eed1796a3054e02e67f1171f8c9fdb57e5 +Subproject commit 8d53b92f02b0cb189a5c28b31a895401daf5dc0b From 2326948e09518dcaecc3d803eee0d2e1ac629f45 Mon Sep 17 00:00:00 2001 From: Nathan Pierce Date: Thu, 16 Jan 2020 10:56:07 -0500 Subject: [PATCH 08/25] build-scripts pipeline file and fix for centos llvm --- .cicd/build-scripts.yml | 137 +++++++++++++++++++++++++++++++++++++++ scripts/helpers/eosio.sh | 2 +- 2 files changed, 138 insertions(+), 1 deletion(-) create mode 100644 .cicd/build-scripts.yml diff --git a/.cicd/build-scripts.yml b/.cicd/build-scripts.yml new file mode 100644 index 00000000000..991fbe3b189 --- /dev/null +++ b/.cicd/build-scripts.yml @@ -0,0 +1,137 @@ +steps: + + - label: ":aws: Amazon_Linux 2 - Build Pinned" + plugins: + - docker#v3.3.0: + image: "amazonlinux:2.0.20190508" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "./scripts/eosio_build.sh -P -y" + timeout: 180 + + - label: ":centos: CentOS 7.7 - Build Pinned" + plugins: + - docker#v3.3.0: + image: "centos:7.7.1908" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "./scripts/eosio_build.sh -P -y" + timeout: 180 + + - label: ":darwin: macOS 10.14 - Build Pinned" + env: + REPO: "git@github.com:EOSIO/eos.git" + TEMPLATE: "10.14.6_6C_14G_40G" + 
TEMPLATE_TAG: "clean::cicd::git-ssh::nas::brew::buildkite-agent" + agents: "queue=mac-anka-large-node-fleet" + command: + - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive" + - "cd eos && ./scripts/eosio_build.sh -P -y" + plugins: + - chef/anka#v0.5.5: + debug: true + vm-name: "10.14.6_6C_14G_40G" + no-volume: true + modify-cpu: 12 + modify-ram: 24 + always-pull: true + wait-network: true + vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent" + pre-execute-sleep: 10 + failover-registries: + - "registry_1" + - "registry_2" + inherit-environment-vars: true + - thedyrt/skip-checkout#v0.1.1: + cd: ~ + timeout: 180 + + - label: ":ubuntu: Ubuntu 16.04 - Build Pinned" + plugins: + - docker#v3.3.0: + image: "ubuntu:16.04" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "apt update && apt upgrade -y && apt install -y git" + - "./scripts/eosio_build.sh -P -y" + timeout: 180 + + - label: ":ubuntu: Ubuntu 18.04 - Build Pinned" + plugins: + - docker#v3.3.0: + image: "ubuntu:18.04" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "apt update && apt upgrade -y && apt install -y git" + - "./scripts/eosio_build.sh -P -y" + timeout: 180 + + - label: ":aws: Amazon_Linux 2 - Build UnPinned" + plugins: + - docker#v3.3.0: + image: "amazonlinux:2.0.20190508" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "./scripts/eosio_build.sh -y" + timeout: 180 + + - label: ":centos: CentOS 7.7 - Build UnPinned" + plugins: + - docker#v3.3.0: + image: "centos:7.7.1908" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "./scripts/eosio_build.sh -y" + timeout: 180 + + - label: ":darwin: macOS 10.14 - Build UnPinned" + env: + REPO: "git@github.com:EOSIO/eos.git" + TEMPLATE: "10.14.6_6C_14G_40G" + TEMPLATE_TAG: 
"clean::cicd::git-ssh::nas::brew::buildkite-agent" + agents: "queue=mac-anka-large-node-fleet" + command: + - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive" + - "cd eos && ./scripts/eosio_build.sh -y" + plugins: + - chef/anka#v0.5.5: + debug: true + vm-name: "10.14.6_6C_14G_40G" + no-volume: true + modify-cpu: 12 + modify-ram: 24 + always-pull: true + wait-network: true + vm-registry-tag: "clean::cicd::git-ssh::nas::brew::buildkite-agent" + pre-execute-sleep: 10 + failover-registries: + - "registry_1" + - "registry_2" + inherit-environment-vars: true + - thedyrt/skip-checkout#v0.1.1: + cd: ~ + timeout: 180 + + - label: ":ubuntu: Ubuntu 18.04 - Build UnPinned" + plugins: + - docker#v3.3.0: + image: "ubuntu:18.04" + always-pull: true + agents: + queue: "automation-eks-eos-builder-fleet" + command: + - "apt update && apt upgrade -y && apt install -y git" + - "./scripts/eosio_build.sh -y" + timeout: 180 \ No newline at end of file diff --git a/scripts/helpers/eosio.sh b/scripts/helpers/eosio.sh index 5e92d19513f..a7ace5f53f9 100755 --- a/scripts/helpers/eosio.sh +++ b/scripts/helpers/eosio.sh @@ -291,7 +291,7 @@ function ensure-llvm() { elif [[ $NAME == "Amazon Linux" ]]; then execute unlink $LLVM_ROOT || true elif [[ $NAME == "CentOS Linux" ]]; then - execute ln -snf /opt/rh/llvm-toolset-7.0/root $LLVM_ROOT + export LOCAL_CMAKE_FLAGS="${LOCAL_CMAKE_FLAGS} -DLLVM_DIR='/opt/rh/llvm-toolset-7.0/root/usr/lib64/cmake/llvm'" fi } From 09e365e64aa50090429996830ac401c7b4eb9b91 Mon Sep 17 00:00:00 2001 From: Nathan Pierce Date: Fri, 17 Jan 2020 10:32:38 -0500 Subject: [PATCH 09/25] NorseGaud -> EOSIO for anka plugin now that we have approval to manage a fork in EOSIO --- .cicd/build-scripts.yml | 4 ++-- .cicd/generate-pipeline.sh | 12 ++++++------ 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/.cicd/build-scripts.yml b/.cicd/build-scripts.yml index 991fbe3b189..fb124275f88 
100644 --- a/.cicd/build-scripts.yml +++ b/.cicd/build-scripts.yml @@ -32,7 +32,7 @@ steps: - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive" - "cd eos && ./scripts/eosio_build.sh -P -y" plugins: - - chef/anka#v0.5.5: + - EOSIO/anka#v0.5.7: debug: true vm-name: "10.14.6_6C_14G_40G" no-volume: true @@ -106,7 +106,7 @@ steps: - "git clone git@github.com:EOSIO/eos.git eos && cd eos && git checkout -f $BUILDKITE_BRANCH && git submodule update --init --recursive" - "cd eos && ./scripts/eosio_build.sh -y" plugins: - - chef/anka#v0.5.5: + - EOSIO/anka#v0.5.7: debug: true vm-name: "10.14.6_6C_14G_40G" no-volume: true diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh index 85136c80018..e2037c9f46b 100755 --- a/.cicd/generate-pipeline.sh +++ b/.cicd/generate-pipeline.sh @@ -119,7 +119,7 @@ EOF - "cd eos && ./.cicd/build.sh" - "cd eos && tar -pczf build.tar.gz build && buildkite-agent artifact upload build.tar.gz" plugins: - - NorseGaud/anka#v0.5.7: + - EOSIO/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -209,7 +209,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/parallel-test.sh" plugins: - - NorseGaud/anka#v0.5.7: + - EOSIO/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -271,7 +271,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . 
--step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/wasm-spec-test.sh" plugins: - - NorseGaud/anka#v0.5.7: + - EOSIO/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -336,7 +336,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/serial-test.sh $TEST_NAME" plugins: - - NorseGaud/anka#v0.5.7: + - EOSIO/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -402,7 +402,7 @@ EOF - "cd eos && buildkite-agent artifact download build.tar.gz . --step '$(echo "$PLATFORM_JSON" | jq -r .ICON) $(echo "$PLATFORM_JSON" | jq -r .PLATFORM_NAME_FULL) - Build' ${BUILD_SOURCE} && tar -xzf build.tar.gz" - "cd eos && ./.cicd/test.sh scripts/long-running-test.sh $TEST_NAME" plugins: - - NorseGaud/anka#v0.5.7: + - EOSIO/anka#v0.5.7: no-volume: true inherit-environment-vars: true vm-name: ${MOJAVE_ANKA_TEMPLATE_NAME} @@ -586,7 +586,7 @@ cat < Date: Sat, 18 Jan 2020 18:21:16 -0600 Subject: [PATCH 10/25] Remove new block id notify feature as it actually degrades performance rather than improve it. 
--- plugins/net_plugin/net_plugin.cpp | 83 +++++-------------------------- 1 file changed, 13 insertions(+), 70 deletions(-) diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp index d8c475dbc39..281c85ddb2a 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -191,10 +191,8 @@ namespace eosio { void retry_fetch(const connection_ptr& conn); bool add_peer_block( const block_id_type& blkid, uint32_t connection_id ); - bool add_peer_block_id( const block_id_type& blkid, uint32_t connection_id ); bool peer_has_block(const block_id_type& blkid, uint32_t connection_id) const; bool have_block(const block_id_type& blkid) const; - size_t num_entries( uint32_t connection_id ) const; bool add_peer_txn( const node_transaction_state& nts ); void update_txns_block_num( const signed_block_ptr& sb ); @@ -384,7 +382,6 @@ namespace eosio { constexpr auto def_max_trx_in_progress_size = 100*1024*1024; // 100 MB constexpr auto def_max_consecutive_rejected_blocks = 3; // num of rejected blocks before disconnect constexpr auto def_max_consecutive_immediate_connection_close = 9; // back off if client keeps closing - constexpr auto def_max_peer_block_ids_per_connection = 100*1024; // if we reach this many then the connection is spaming us, disconnect constexpr auto def_max_clients = 25; // 0 for unlimited clients constexpr auto def_max_nodes_per_host = 1; constexpr auto def_conn_retry_wait = 30; @@ -416,9 +413,9 @@ namespace eosio { */ constexpr uint16_t proto_base = 0; constexpr uint16_t proto_explicit_sync = 1; - constexpr uint16_t block_id_notify = 2; + constexpr uint16_t block_id_notify = 2; // reserved. feature was removed. 
next net_version should be 3 - constexpr uint16_t net_version = block_id_notify; + constexpr uint16_t net_version = proto_explicit_sync; /** * Index by start_block_num @@ -1831,16 +1828,6 @@ namespace eosio { return added; } - bool dispatch_manager::add_peer_block_id( const block_id_type& blkid, uint32_t connection_id) { - std::lock_guard g( blk_state_mtx ); - auto bptr = blk_state.get().find( std::make_tuple( connection_id, std::ref( blkid ))); - bool added = (bptr == blk_state.end()); - if( added ) { - blk_state.insert( {blkid, block_header::num_from_id( blkid ), connection_id, false} ); - } - return added; - } - bool dispatch_manager::peer_has_block( const block_id_type& blkid, uint32_t connection_id ) const { std::lock_guard g(blk_state_mtx); const auto blk_itr = blk_state.get().find( std::make_tuple( connection_id, std::ref( blkid ))); @@ -1858,11 +1845,6 @@ namespace eosio { return false; } - size_t dispatch_manager::num_entries( uint32_t connection_id ) const { - std::lock_guard g(blk_state_mtx); - return blk_state.get().count( connection_id ); - } - bool dispatch_manager::add_peer_txn( const node_transaction_state& nts ) { std::lock_guard g( local_txns_mtx ); auto tptr = local_txns.get().find( std::make_tuple( std::ref( nts.id ), nts.connection_id ) ); @@ -1976,33 +1958,6 @@ namespace eosio { } ); } - void dispatch_manager::bcast_notice( const block_id_type& id ) { - if( my_impl->sync_master->syncing_with_peer() ) return; - - fc_dlog( logger, "bcast notice ${b}", ("b", block_header::num_from_id( id )) ); - notice_message note; - note.known_blocks.mode = normal; - note.known_blocks.pending = 1; // 1 indicates this is a block id notice - note.known_blocks.ids.emplace_back( id ); - - for_each_block_connection( [this, note]( auto& cp ) { - if( !cp->current() ) { - return true; - } - cp->strand.post( [this, cp, note]() { - // check protocol_version here since only accessed from strand - if( cp->protocol_version < block_id_notify ) return; - const block_id_type& 
id = note.known_blocks.ids.back(); - if( peer_has_block( id, cp->connection_id ) ) { - return; - } - fc_dlog( logger, "bcast block id ${b} to ${p}", ("b", block_header::num_from_id( id ))("p", cp->peer_name()) ); - cp->enqueue( note ); - } ); - return true; - } ); - } - // called from connection strand void dispatch_manager::recv_block(const connection_ptr& c, const block_id_type& id, uint32_t bnum) { std::unique_lock g( c->conn_mtx ); @@ -2069,18 +2024,7 @@ namespace eosio { if (msg.known_blocks.mode == normal) { // known_blocks.ids is never > 1 if( !msg.known_blocks.ids.empty() ) { - if( num_entries( c->connection_id ) > def_max_peer_block_ids_per_connection ) { - fc_elog( logger, "received too many notice_messages, disconnecting" ); - c->close( false ); - } - const block_id_type& blkid = msg.known_blocks.ids.back(); - if( have_block( blkid )) { - add_peer_block( blkid, c->connection_id ); - return; - } else { - add_peer_block_id( blkid, c->connection_id ); - } - if( msg.known_blocks.pending == 1 ) { // block id notify + if( msg.known_blocks.pending == 1 ) { // block id notify of 2.0.0, ignore return; } } @@ -2434,8 +2378,9 @@ namespace eosio { pending_message_buffer.advance_read_ptr( message_length ); return true; } - fc_dlog( logger, "${p} received block ${num}, id ${id}...", - ("p", peer_name())("num", bh.block_num())("id", blk_id.str().substr(8,16)) ); + fc_dlog( logger, "${p} received block ${num}, id ${id}..., latency: ${latency}", + ("p", peer_name())("num", bh.block_num())("id", blk_id.str().substr(8,16)) + ("latency", (fc::time_point::now() - bh.timestamp).count()/1000) ); if( !my_impl->sync_master->syncing_with_peer() ) { // guard against peer thinking it needs to send us old blocks uint32_t lib = 0; std::tie( lib, std::ignore, std::ignore, std::ignore, std::ignore, std::ignore ) = my_impl->get_chain_info(); @@ -2744,8 +2689,12 @@ namespace eosio { return; } if( msg.known_trx.mode != none ) { - fc_dlog( logger, "this is a ${m} notice with ${n} 
transactions", - ("m", modes_str( msg.known_trx.mode ))( "n", msg.known_trx.pending ) ); + if( logger.is_enabled( fc::log_level::debug ) ) { + const block_id_type& blkid = msg.known_blocks.ids.empty() ? block_id_type{} : msg.known_blocks.ids.back(); + fc_dlog( logger, "this is a ${m} notice with ${n} pending blocks: ${num} ${id}...", + ("m", modes_str( msg.known_blocks.mode ))("n", msg.known_blocks.pending) + ("num", block_header::num_from_id( blkid ))("id", blkid.str().substr( 8, 16 )) ); + } } switch (msg.known_trx.mode) { case none: @@ -2902,7 +2851,6 @@ namespace eosio { app().post(priority::high, [ptr{std::move(ptr)}, id, c = shared_from_this()]() mutable { c->process_signed_block( id, std::move( ptr ) ); }); - my_impl->dispatcher->bcast_notice( id ); } // called from application thread @@ -3207,12 +3155,7 @@ namespace eosio { bool connection::populate_handshake( handshake_message& hello ) { namespace sc = std::chrono; bool send = false; - if( no_retry == wrong_version ) { - hello.network_version = net_version_base + proto_explicit_sync; // try previous version - send = true; - } else { - hello.network_version = net_version_base + net_version; - } + hello.network_version = net_version_base + net_version; const auto prev_head_id = hello.head_id; uint32_t lib, head; std::tie( lib, std::ignore, head, From 5a03c8b3da907cbd6068e4c3983241ec8ecddb55 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Sat, 18 Jan 2020 23:29:10 -0600 Subject: [PATCH 11/25] Report block header diff when digests do not match --- libraries/chain/controller.cpp | 30 +++++++++++++++++++++++++++--- 1 file changed, 27 insertions(+), 3 deletions(-) diff --git a/libraries/chain/controller.cpp b/libraries/chain/controller.cpp index 480ea01cbd1..3b0b59bf0cd 100644 --- a/libraries/chain/controller.cpp +++ b/libraries/chain/controller.cpp @@ -1820,6 +1820,27 @@ struct controller_impl { } } + void report_block_header_diff( const block_header& b, const block_header& ab ) { + +#define 
EOS_REPORT(DESC,A,B) \ + if( A != B ) { \ + elog("${desc}: ${bv} != ${abv}", ("desc", DESC)("bv", A)("abv", B)); \ + } + + EOS_REPORT( "timestamp", b.timestamp, ab.timestamp ) + EOS_REPORT( "producer", b.producer, ab.producer ) + EOS_REPORT( "confirmed", b.confirmed, ab.confirmed ) + EOS_REPORT( "previous", b.previous, ab.previous ) + EOS_REPORT( "transaction_mroot", b.transaction_mroot, ab.transaction_mroot ) + EOS_REPORT( "action_mroot", b.action_mroot, ab.action_mroot ) + EOS_REPORT( "schedule_version", b.schedule_version, ab.schedule_version ) + EOS_REPORT( "new_producers", b.new_producers, ab.new_producers ) + EOS_REPORT( "header_extensions", b.header_extensions, ab.header_extensions ) + +#undef EOS_REPORT + } + + void apply_block( const block_state_ptr& bsp, controller::block_status s, const trx_meta_cache_lookup& trx_lookup ) { try { try { @@ -1902,9 +1923,12 @@ struct controller_impl { auto& ab = pending->_block_stage.get(); - // this implicitly asserts that all header fields (less the signature) are identical - EOS_ASSERT( producer_block_id == ab._id, block_validate_exception, "Block ID does not match", - ("producer_block_id",producer_block_id)("validator_block_id",ab._id) ); + if( producer_block_id != ab._id ) { + report_block_header_diff( *b, *ab._unsigned_block ); + // this implicitly asserts that all header fields (less the signature) are identical + EOS_ASSERT( producer_block_id == ab._id, block_validate_exception, "Block ID does not match", + ("producer_block_id", producer_block_id)("validator_block_id", ab._id) ); + } if( !use_bsp_cached ) { bsp->set_trxs_metas( std::move( ab._trx_metas ), !skip_auth_checks ); From 72c1e0d949985b8653cfd3a851ebab55f9b1f80a Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Mon, 20 Jan 2020 07:42:13 -0600 Subject: [PATCH 12/25] Do not broadcast block if syncing with peer --- plugins/net_plugin/net_plugin.cpp | 2 ++ 1 file changed, 2 insertions(+) diff --git a/plugins/net_plugin/net_plugin.cpp 
b/plugins/net_plugin/net_plugin.cpp index 281c85ddb2a..59b49d80c00 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -1921,6 +1921,8 @@ namespace eosio { void dispatch_manager::bcast_block(const block_state_ptr& bs) { fc_dlog( logger, "bcast block ${b}", ("b", bs->block_num) ); + if( my_impl->sync_master->syncing_with_peer() ) return; + bool have_connection = false; for_each_block_connection( [&have_connection]( auto& cp ) { peer_dlog( cp, "socket_is_open ${s}, connecting ${c}, syncing ${ss}", From 12c4a37d2fd70d6909784c67b038025873432c72 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Tue, 21 Jan 2020 09:46:20 -0600 Subject: [PATCH 13/25] Syncing can start without an updated sync_next_expected_num, so make sure it is at least lib --- plugins/net_plugin/net_plugin.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp index 59b49d80c00..219796429f5 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -1520,8 +1520,8 @@ namespace eosio { if( sync_state == in_sync ) { set_state( lib_catchup ); - sync_next_expected_num = std::max( lib_num + 1, sync_next_expected_num ); } + sync_next_expected_num = std::max( lib_num + 1, sync_next_expected_num ); fc_ilog( logger, "Catching up with chain, our last req is ${cc}, theirs is ${t} peer ${p}", ("cc", sync_last_requested_num)( "t", target )( "p", c->peer_name() ) ); From 272cf974996200fc7799327ea02256dd8898df94 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Tue, 21 Jan 2020 09:50:22 -0600 Subject: [PATCH 14/25] Add elog for context of block header diff --- libraries/chain/controller.cpp | 1 + 1 file changed, 1 insertion(+) diff --git a/libraries/chain/controller.cpp b/libraries/chain/controller.cpp index 3b0b59bf0cd..0f86f19a6e0 100644 --- a/libraries/chain/controller.cpp +++ b/libraries/chain/controller.cpp @@ -1924,6 +1924,7 @@ struct controller_impl { auto& 
ab = pending->_block_stage.get(); if( producer_block_id != ab._id ) { + elog( "Validation block id does not match producer block id" ); report_block_header_diff( *b, *ab._unsigned_block ); // this implicitly asserts that all header fields (less the signature) are identical EOS_ASSERT( producer_block_id == ab._id, block_validate_exception, "Block ID does not match", From 6ed50e1747aa724853d6be47f34b3d0c9e586834 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Tue, 21 Jan 2020 12:49:14 -0600 Subject: [PATCH 15/25] Always send a handshake on unlinkable block --- plugins/net_plugin/net_plugin.cpp | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp index 219796429f5..a856720c434 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -619,7 +619,7 @@ namespace eosio { static void _close( connection* self, bool reconnect, bool shutdown ); // for easy capture public: - bool populate_handshake( handshake_message& hello ); + bool populate_handshake( handshake_message& hello, bool force ); bool resolve_and_connect(); void connect( const std::shared_ptr& resolver, tcp::resolver::results_type endpoints ); @@ -635,7 +635,7 @@ namespace eosio { */ bool process_next_message(uint32_t message_length); - void send_handshake(); + void send_handshake( bool force = false ); /** \name Peer Timestamps * Time message handling @@ -1041,10 +1041,10 @@ namespace eosio { syncing = false; } - void connection::send_handshake() { - strand.dispatch( [c = shared_from_this()]() { + void connection::send_handshake( bool force ) { + strand.dispatch( [force, c = shared_from_this()]() { std::unique_lock g_conn( c->conn_mtx ); - if( c->populate_handshake( c->last_handshake_sent ) ) { + if( c->populate_handshake( c->last_handshake_sent, force ) ) { static_assert( std::is_same_vsent_handshake_count ), int16_t>, "INT16_MAX based on int16_t" ); if( c->sent_handshake_count 
== INT16_MAX ) c->sent_handshake_count = 1; // do not wrap c->last_handshake_sent.generation = ++c->sent_handshake_count; @@ -1732,7 +1732,7 @@ namespace eosio { g.unlock(); c->close(); } else { - c->send_handshake(); + c->send_handshake( true ); } } @@ -3154,9 +3154,9 @@ namespace eosio { } // call from connection strand - bool connection::populate_handshake( handshake_message& hello ) { + bool connection::populate_handshake( handshake_message& hello, bool force ) { namespace sc = std::chrono; - bool send = false; + bool send = force; hello.network_version = net_version_base + net_version; const auto prev_head_id = hello.head_id; uint32_t lib, head; From 9f558be349a92cbc213a1d104600b7363f4c5ad1 Mon Sep 17 00:00:00 2001 From: Scott Arnette Date: Wed, 22 Jan 2020 11:15:45 -0500 Subject: [PATCH 16/25] Port multiversion test into main EOS pipeline. --- .cicd/generate-pipeline.sh | 37 +++-- .cicd/helpers/multi_eos_docker.py | 138 ++++++++++++++++++ .cicd/multiversion.sh | 61 ++++++++ .../pinned/ubuntu-18.04-pinned.dockerfile | 3 +- .cicd/test.sh | 4 +- 5 files changed, 220 insertions(+), 23 deletions(-) create mode 100755 .cicd/helpers/multi_eos_docker.py create mode 100755 .cicd/multiversion.sh diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh index e2037c9f46b..fd9d3ab6376 100755 --- a/.cicd/generate-pipeline.sh +++ b/.cicd/generate-pipeline.sh @@ -438,6 +438,23 @@ EOF echo '' fi done +# Execute multiversion test +if ( [[ ! $PINNED == false ]] ); then + cat < "$PIPELINE_CONFIG" +if [[ -f "$PIPELINE_CONFIG" ]]; then + [[ "$DEBUG" == 'true' ]] && cat "$PIPELINE_CONFIG" | jq . + # export environment + if [[ "$(cat "$PIPELINE_CONFIG" | jq -r '.environment')" != 'null' ]]; then + for OBJECT in $(cat "$PIPELINE_CONFIG" | jq -r '.environment | to_entries | .[] | @base64'); do + KEY="$(echo $OBJECT | base64 --decode | jq -r .key)" + VALUE="$(echo $OBJECT | base64 --decode | jq -r .value)" + [[ ! 
-v $KEY ]] && export $KEY="$VALUE" + done + fi + # export multiversion.conf + echo '[eosio]' > multiversion.conf + for OBJECT in $(cat "$PIPELINE_CONFIG" | jq -r '.configuration | .[] | @base64'); do + echo "$(echo $OBJECT | base64 --decode)" >> multiversion.conf # outer echo adds '\n' + done + mv -f $GIT_ROOT/multiversion.conf $GIT_ROOT/tests +elif [[ "$DEBUG" == 'true' ]]; then + echo 'Pipeline configuration file not found!' + echo "PIPELINE_CONFIG = \"$PIPELINE_CONFIG\"" + echo "RAW_PIPELINE_CONFIG = \"$RAW_PIPELINE_CONFIG\"" + echo '$ pwd' + pwd + echo '$ ls' + ls + echo 'Skipping that step...' +fi +# multiversion +cd $GIT_ROOT/eos_multiversion_builder +echo 'Downloading other versions of nodeos...' +python2.7 $GIT_ROOT/.cicd/helpers/multi_eos_docker.py +cd $GIT_ROOT +cp $GIT_ROOT/tests/multiversion_paths.conf $GIT_ROOT/build/tests +cd $GIT_ROOT/build +# count tests +echo "+++ $([[ "$BUILDKITE" == 'true' ]] && echo ':microscope: ')Running Multiversion Test" +TEST_COUNT=$(ctest -N -L mixed_version_tests | grep -i 'Total Tests: ' | cut -d ':' -f 2 | awk '{print $1}') +if [[ $TEST_COUNT > 0 ]]; then + echo "$TEST_COUNT tests found." +else + echo "+++ $([[ "$BUILDKITE" == 'true' ]] && echo ':no_entry: ')ERROR: No tests registered with ctest! Exiting..." + exit 1 +fi +# run tests +set +e # defer ctest error handling to end +echo "$ ctest -L mixed_version_tests --output-on-failure -T Test" +ctest -L mixed_version_tests --output-on-failure -T Test +EXIT_STATUS=$? +echo 'Done running multiversion test.' 
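The tail of `multiversion.sh` above defers ctest's error handling: `set +e` keeps a failing suite from killing the script, the status is captured in `EXIT_STATUS`, and only the final `exit` propagates it, so the closing log line always runs. A minimal Python sketch of the same pattern (the commands passed in below are illustrative, not part of the patch):

```python
import subprocess
import sys

def run_tests_deferred(cmd):
    # check=False mirrors the script's `set +e`: a failing test run does not
    # raise here, so the trailing log line (and any cleanup) always executes.
    result = subprocess.run(cmd, check=False)
    print('Done running multiversion test.')
    # The caller decides what to do with the status, exactly like the
    # script's final `exit $EXIT_STATUS`.
    return result.returncode
```

The point of the idiom is that teardown and logging still happen before the CI job reports the original failure status.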
+exit $EXIT_STATUS \ No newline at end of file diff --git a/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile b/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile index 7815e219c8f..2712342e28d 100644 --- a/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile +++ b/.cicd/platforms/pinned/ubuntu-18.04-pinned.dockerfile @@ -5,7 +5,8 @@ RUN apt-get update && \ apt-get upgrade -y && \ DEBIAN_FRONTEND=noninteractive apt-get install -y git make \ bzip2 automake libbz2-dev libssl-dev doxygen graphviz libgmp3-dev \ - autotools-dev libicu-dev python2.7 python2.7-dev python3 python3-dev \ + autotools-dev libicu-dev python2.7 python2.7-dev python3 \ + python3-dev python-configparser python-requests python-pip \ autoconf libtool g++ gcc curl zlib1g-dev sudo ruby libusb-1.0-0-dev \ libcurl4-gnutls-dev pkg-config patch ccache vim-common jq # build cmake. diff --git a/.cicd/test.sh b/.cicd/test.sh index 632e714d82e..88b09b8f28e 100755 --- a/.cicd/test.sh +++ b/.cicd/test.sh @@ -11,9 +11,9 @@ if [[ $(uname) == 'Darwin' ]]; then # macOS else # Linux COMMANDS="$MOUNTED_DIR/$@" . $HELPERS_DIR/file-hash.sh $CICD_DIR/platforms/$PLATFORM_TYPE/$IMAGE_TAG.dockerfile - echo "$ docker run --rm --init -v $(pwd):$MOUNTED_DIR $(buildkite-intrinsics) -e JOBS $FULL_TAG bash -c \"$COMMANDS\"" + echo "$ docker run --rm --init -v $(pwd):$MOUNTED_DIR $(buildkite-intrinsics) -e JOBS -e BUILDKITE_API_KEY $FULL_TAG bash -c \"$COMMANDS\"" set +e # defer error handling to end - eval docker run --rm --init -v $(pwd):$MOUNTED_DIR $(buildkite-intrinsics) -e JOBS $FULL_TAG bash -c \"$COMMANDS\" + eval docker run --rm --init -v $(pwd):$MOUNTED_DIR $(buildkite-intrinsics) -e JOBS -e BUILDKITE_API_KEY $FULL_TAG bash -c \"$COMMANDS\" EXIT_STATUS=$? 
fi # buildkite From 47b3c95b713fa22443428c12f1aafac44f095ce6 Mon Sep 17 00:00:00 2001 From: arhag Date: Tue, 21 Jan 2020 17:01:35 -0500 Subject: [PATCH 17/25] avoid passing http-max-response-time-ms to old nodeos version in multiversion test --- tests/Cluster.py | 4 ++-- tests/nodeos_multiple_version_protocol_feature_test.py | 4 ++++ tests/nodeos_protocol_feature_test.py | 2 +- tests/nodeos_under_min_avail_ram.py | 2 +- tests/prod_preactivation_test.py | 2 +- 5 files changed, 9 insertions(+), 5 deletions(-) diff --git a/tests/Cluster.py b/tests/Cluster.py index 4102e3343f9..cb5d1d3ec2f 100644 --- a/tests/Cluster.py +++ b/tests/Cluster.py @@ -144,7 +144,7 @@ def setAlternateVersionLabels(self, file): # pylint: disable=too-many-branches # pylint: disable=too-many-statements def launch(self, pnodes=1, unstartedNodes=0, totalNodes=1, prodCount=1, topo="mesh", delay=1, onlyBios=False, dontBootstrap=False, - totalProducers=None, sharedProducers=0, extraNodeosArgs=None, useBiosBootFile=True, specificExtraNodeosArgs=None, onlySetProds=False, + totalProducers=None, sharedProducers=0, extraNodeosArgs=" --http-max-response-time-ms 990000 ", useBiosBootFile=True, specificExtraNodeosArgs=None, onlySetProds=False, pfSetupPolicy=PFSetupPolicy.FULL, alternateVersionLabelsFile=None, associatedNodeLabels=None, loadSystemContract=True): """Launch cluster. 
pnodes: producer nodes count @@ -220,7 +220,7 @@ def launch(self, pnodes=1, unstartedNodes=0, totalNodes=1, prodCount=1, topo="me if self.staging: cmdArr.append("--nogen") - nodeosArgs="--max-transaction-time -1 --http-max-response-time-ms 9999 --abi-serializer-max-time-ms 990000 --filter-on \"*\" --p2p-max-nodes-per-host %d" % (totalNodes) + nodeosArgs="--max-transaction-time -1 --abi-serializer-max-time-ms 990000 --filter-on \"*\" --p2p-max-nodes-per-host %d" % (totalNodes) if not self.walletd: nodeosArgs += " --plugin eosio::wallet_api_plugin" if self.enableMongo: diff --git a/tests/nodeos_multiple_version_protocol_feature_test.py b/tests/nodeos_multiple_version_protocol_feature_test.py index fdc0c3785bc..02c4bf11ea9 100755 --- a/tests/nodeos_multiple_version_protocol_feature_test.py +++ b/tests/nodeos_multiple_version_protocol_feature_test.py @@ -94,6 +94,10 @@ def hasBlockBecomeIrr(): assert cluster.launch(pnodes=4, totalNodes=4, prodCount=1, totalProducers=4, extraNodeosArgs=" --plugin eosio::producer_api_plugin ", useBiosBootFile=False, + specificExtraNodeosArgs={ + 0:"--http-max-response-time-ms 990000", + 1:"--http-max-response-time-ms 990000", + 2:"--http-max-response-time-ms 990000"}, onlySetProds=True, pfSetupPolicy=PFSetupPolicy.NONE, alternateVersionLabelsFile=alternateVersionLabelsFile, diff --git a/tests/nodeos_protocol_feature_test.py b/tests/nodeos_protocol_feature_test.py index 369068494ef..df416da1c29 100755 --- a/tests/nodeos_protocol_feature_test.py +++ b/tests/nodeos_protocol_feature_test.py @@ -45,7 +45,7 @@ def restartNode(node: Node, nodeId, chainArg=None, addSwapFlags=None): TestHelper.printSystemInfo("BEGIN") cluster.killall(allInstances=killAll) cluster.cleanup() - cluster.launch(extraNodeosArgs=" --plugin eosio::producer_api_plugin ", + cluster.launch(extraNodeosArgs=" --plugin eosio::producer_api_plugin --http-max-response-time-ms 990000 ", dontBootstrap=True, pfSetupPolicy=PFSetupPolicy.NONE) biosNode = cluster.biosNode diff --git 
a/tests/nodeos_under_min_avail_ram.py b/tests/nodeos_under_min_avail_ram.py index 6c9d6c7fc00..d48944344e4 100755 --- a/tests/nodeos_under_min_avail_ram.py +++ b/tests/nodeos_under_min_avail_ram.py @@ -88,7 +88,7 @@ def setName(self, num): minRAMValue=1002 maxRAMFlag="--chain-state-db-size-mb" maxRAMValue=1010 - extraNodeosArgs=" %s %d %s %d " % (minRAMFlag, minRAMValue, maxRAMFlag, maxRAMValue) + extraNodeosArgs=" %s %d %s %d --http-max-response-time-ms 990000 " % (minRAMFlag, minRAMValue, maxRAMFlag, maxRAMValue) if cluster.launch(onlyBios=False, pnodes=totalNodes, totalNodes=totalNodes, totalProducers=totalNodes, extraNodeosArgs=extraNodeosArgs, useBiosBootFile=False) is False: Utils.cmdError("launcher") errorExit("Failed to stand up eos cluster.") diff --git a/tests/prod_preactivation_test.py b/tests/prod_preactivation_test.py index dbc96c2c457..6543be1c288 100755 --- a/tests/prod_preactivation_test.py +++ b/tests/prod_preactivation_test.py @@ -68,7 +68,7 @@ Print("Stand up cluster") if cluster.launch(pnodes=prodCount, totalNodes=prodCount, prodCount=1, onlyBios=onlyBios, dontBootstrap=dontBootstrap, useBiosBootFile=False, - pfSetupPolicy=PFSetupPolicy.NONE, extraNodeosArgs=" --plugin eosio::producer_api_plugin") is False: + pfSetupPolicy=PFSetupPolicy.NONE, extraNodeosArgs=" --plugin eosio::producer_api_plugin --http-max-response-time-ms 990000 ") is False: cmdError("launcher") errorExit("Failed to stand up eos cluster.") From 3159fdcbee00bfb0f7f31f1b48f9942581e68bc1 Mon Sep 17 00:00:00 2001 From: Scott Arnette Date: Wed, 22 Jan 2020 16:34:20 -0500 Subject: [PATCH 18/25] Fixing docker name collision issues during build. 
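This patch renames the throwaway contracts-builder container from a commit-keyed name to one keyed on pipeline slug and build number, since two concurrent builds of the same commit would otherwise collide on `docker run --name`, and adds a platform-suffixed image tag. A sketch of the resulting naming scheme (environment variable names are taken from the patch; the helper functions themselves are illustrative):

```python
def container_name(pipeline_slug, build_number):
    # Keying on pipeline + build number (not the commit SHA) guarantees a
    # unique container name even when two builds of the same commit run
    # concurrently.
    return f"ci-contracts-builder-{pipeline_slug}-{build_number}"

def image_tags(repo, prefix, commit, platform_type, branch):
    # The plain commit tag is kept for existing consumers; the new
    # platform-suffixed tag stops different platform builds of the same
    # commit from overwriting each other in the registry.
    base = f"{repo}:{prefix}-{commit}"
    return [base,
            f"{base}-{platform_type}",
            f"{repo}:{prefix}-{branch}-{commit}"]
```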
--- .cicd/docker-tag.sh | 2 +- .cicd/installation-build.sh | 12 +++++++----- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/.cicd/docker-tag.sh b/.cicd/docker-tag.sh index 18ef347a0a3..22da25dff95 100755 --- a/.cicd/docker-tag.sh +++ b/.cicd/docker-tag.sh @@ -3,7 +3,7 @@ set -eo pipefail echo '+++ :evergreen_tree: Configuring Environment' REPO='eosio/ci-contracts-builder' PREFIX='base-ubuntu-18.04' -IMAGE="$REPO:$PREFIX-$BUILDKITE_COMMIT" +IMAGE="$REPO:$PREFIX-$BUILDKITE_COMMIT-$PLATFORM_TYPE" SANITIZED_BRANCH=$(echo "$BUILDKITE_BRANCH" | tr '/' '_') SANITIZED_TAG=$(echo "$BUILDKITE_TAG" | tr '/' '_') echo '+++ :arrow_down: Pulling Container' diff --git a/.cicd/installation-build.sh b/.cicd/installation-build.sh index cfcd3bc4da9..f787afb567b 100755 --- a/.cicd/installation-build.sh +++ b/.cicd/installation-build.sh @@ -4,11 +4,13 @@ set -eo pipefail export ENABLE_INSTALL=true export BRANCH=$(echo $BUILDKITE_BRANCH | sed 's/\//\_/') export CONTRACTS_BUILDER_TAG="eosio/ci-contracts-builder:base-ubuntu-18.04" -export ARGS="--name ci-contracts-builder-$BUILDKITE_COMMIT --init -v $(pwd):$MOUNTED_DIR" +export ARGS="--name ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER --init -v $(pwd):$MOUNTED_DIR" $CICD_DIR/build.sh -docker commit ci-contracts-builder-$BUILDKITE_COMMIT $CONTRACTS_BUILDER_TAG-$BUILDKITE_COMMIT -docker commit ci-contracts-builder-$BUILDKITE_COMMIT $CONTRACTS_BUILDER_TAG-$BRANCH-$BUILDKITE_COMMIT +docker commit ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER $CONTRACTS_BUILDER_TAG-$BUILDKITE_COMMIT +docker commit ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER $CONTRACTS_BUILDER_TAG-$BUILDKITE_COMMIT-$PLATFORM_TYPE +docker commit ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER $CONTRACTS_BUILDER_TAG-$BRANCH-$BUILDKITE_COMMIT docker push $CONTRACTS_BUILDER_TAG-$BUILDKITE_COMMIT +docker push $CONTRACTS_BUILDER_TAG-$BUILDKITE_COMMIT-$PLATFORM_TYPE docker 
push $CONTRACTS_BUILDER_TAG-$BRANCH-$BUILDKITE_COMMIT -docker stop ci-contracts-builder-$BUILDKITE_COMMIT -docker rm ci-contracts-builder-$BUILDKITE_COMMIT \ No newline at end of file +docker stop ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER +docker rm ci-contracts-builder-$BUILDKITE_PIPELINE_SLUG-$BUILDKITE_BUILD_NUMBER \ No newline at end of file From 38ec36f8e939f1446bacca4d9918818d5bea34a4 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Thu, 23 Jan 2020 11:44:21 -0600 Subject: [PATCH 19/25] Consolidated Security Fixes for 2.0.1 - Reduce net plugin logging and handshake size limits. - Improved handling of deferred transactions during block production. - Earlier block validation for greater security. Co-Authored-By: Kevin Heifner Co-Authored-By: Kayan --- libraries/chain/controller.cpp | 16 ++++++-- .../include/eosio/net_plugin/protocol.hpp | 7 ++++ plugins/net_plugin/net_plugin.cpp | 40 +++++++++++++++---- unittests/block_tests.cpp | 7 ++++ unittests/forked_tests.cpp | 2 +- 5 files changed, 59 insertions(+), 13 deletions(-) diff --git a/libraries/chain/controller.cpp b/libraries/chain/controller.cpp index 480ea01cbd1..6eac8f10361 100644 --- a/libraries/chain/controller.cpp +++ b/libraries/chain/controller.cpp @@ -123,6 +123,7 @@ struct building_block { vector _pending_trx_metas; vector _pending_trx_receipts; vector _actions; + optional _transaction_mroot; }; struct assembled_block { @@ -1308,7 +1309,7 @@ struct controller_impl { // Only subjective OR soft OR hard failure logic below: - if( gtrx.sender != account_name() && !failure_is_subjective(*trace->except)) { + if( gtrx.sender != account_name() && !(explicit_billed_cpu_time ? failure_is_subjective(*trace->except) : scheduled_failure_is_subjective(*trace->except))) { // Attempt error handling for the generated transaction. 
auto error_trace = apply_onerror( gtrx, deadline, trx_context.pseudo_start, @@ -1677,7 +1678,7 @@ struct controller_impl { // Create (unsigned) block: auto block_ptr = std::make_shared( pbhs.make_block_header( - calculate_trx_merkle(), + bb._transaction_mroot ? *bb._transaction_mroot : calculate_trx_merkle( bb._pending_trx_receipts ), calculate_action_merkle(), bb._new_pending_producer_schedule, std::move( bb._new_protocol_feature_activations ), @@ -1898,6 +1899,9 @@ struct controller_impl { ("producer_receipt", receipt)("validator_receipt", trx_receipts.back()) ); } + // validated in create_block_state_future() + pending->_block_stage.get()._transaction_mroot = b->transaction_mroot; + finalize_block(); auto& ab = pending->_block_stage.get(); @@ -1936,6 +1940,11 @@ struct controller_impl { return async_thread_pool( thread_pool.get_executor(), [b, prev, control=this]() { const bool skip_validate_signee = false; + + auto trx_mroot = calculate_trx_merkle( b->transactions ); + EOS_ASSERT( b->transaction_mroot == trx_mroot, block_validate_exception, + "invalid block transaction merkle root ${b} != ${c}", ("b", b->transaction_mroot)("c", trx_mroot) ); + return std::make_shared( *prev, move( b ), @@ -2126,9 +2135,8 @@ struct controller_impl { return merkle( move(action_digests) ); } - checksum256_type calculate_trx_merkle() { + static checksum256_type calculate_trx_merkle( const vector& trxs ) { vector trx_digests; - const auto& trxs = pending->_block_stage.get()._pending_trx_receipts; trx_digests.reserve( trxs.size() ); for( const auto& a : trxs ) trx_digests.emplace_back( a.digest() ); diff --git a/plugins/net_plugin/include/eosio/net_plugin/protocol.hpp b/plugins/net_plugin/include/eosio/net_plugin/protocol.hpp index a806486b50a..8ce781cefd5 100644 --- a/plugins/net_plugin/include/eosio/net_plugin/protocol.hpp +++ b/plugins/net_plugin/include/eosio/net_plugin/protocol.hpp @@ -17,6 +17,13 @@ namespace eosio { block_id_type head_id; }; + // Longest domain name is 253 
characters according to wikipedia. + // Addresses include ":port" where max port is 65535, which adds 6 chars. + // We also add our own extentions of "[:trx|:blk] - xxxxxxx", which adds 14 chars, total= 273. + // Allow for future extentions as well, hence 384. + constexpr size_t max_p2p_address_length = 253 + 6; + constexpr size_t max_handshake_str_length = 384; + struct handshake_message { uint16_t network_version = 0; ///< incremental value above a computed base chain_id_type chain_id; ///< used to identify chain diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp index d8c475dbc39..b50abce4f5c 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -828,7 +828,7 @@ namespace eosio { last_handshake_recv(), last_handshake_sent() { - fc_ilog( logger, "accepted network connection" ); + fc_dlog( logger, "new connection object created" ); } void connection::update_endpoints() { @@ -855,13 +855,13 @@ namespace eosio { peer_add.substr( colon2 + 1 ) : peer_add.substr( colon2 + 1, end - (colon2 + 1) ); if( type.empty() ) { - fc_ilog( logger, "Setting connection type for: ${peer} to both transactions and blocks", ("peer", peer_add) ); + fc_dlog( logger, "Setting connection type for: ${peer} to both transactions and blocks", ("peer", peer_add) ); connection_type = both; } else if( type == "trx" ) { - fc_ilog( logger, "Setting connection type for: ${peer} to transactions only", ("peer", peer_add) ); + fc_dlog( logger, "Setting connection type for: ${peer} to transactions only", ("peer", peer_add) ); connection_type = transactions_only; } else if( type == "blk" ) { - fc_ilog( logger, "Setting connection type for: ${peer} to blocks only", ("peer", peer_add) ); + fc_dlog( logger, "Setting connection type for: ${peer} to blocks only", ("peer", peer_add) ); connection_type = blocks_only; } else { fc_wlog( logger, "Unknown connection type: ${t}", ("t", type) ); @@ -2190,6 +2190,7 @@ namespace eosio { c->connect( 
resolver, endpoints ); } else { fc_elog( logger, "Unable to resolve ${add}: ${error}", ("add", c->peer_name())( "error", err.message() ) ); + c->connecting = false; ++c->consecutive_immediate_connection_close; } } ) ); @@ -2261,10 +2262,10 @@ namespace eosio { } else { if( from_addr >= max_nodes_per_host ) { - fc_elog( logger, "Number of connections (${n}) from ${ra} exceeds limit ${l}", + fc_dlog( logger, "Number of connections (${n}) from ${ra} exceeds limit ${l}", ("n", from_addr + 1)( "ra", paddr_str )( "l", max_nodes_per_host )); } else { - fc_elog( logger, "Error max_client_count ${m} exceeded", ("m", max_client_count)); + fc_dlog( logger, "max_client_count ${m} exceeded", ("m", max_client_count)); } // new_connection never added to connections and start_session not called, lifetime will end boost::system::error_code ec; @@ -2524,10 +2525,21 @@ namespace eosio { if (msg.p2p_address.empty()) { fc_wlog( logger, "Handshake message validation: p2p_address is null string" ); valid = false; + } else if( msg.p2p_address.length() > max_handshake_str_length ) { + // see max_handshake_str_length comment in protocol.hpp + fc_wlog( logger, "Handshake message validation: p2p_address to large: ${p}", ("p", msg.p2p_address.substr(0, max_handshake_str_length) + "...") ); + valid = false; } if (msg.os.empty()) { fc_wlog( logger, "Handshake message validation: os field is null string" ); valid = false; + } else if( msg.os.length() > max_handshake_str_length ) { + fc_wlog( logger, "Handshake message validation: os field to large: ${p}", ("p", msg.os.substr(0, max_handshake_str_length) + "...") ); + valid = false; + } + if( msg.agent.length() > max_handshake_str_length ) { + fc_wlog( logger, "Handshake message validation: agent field to large: ${p}", ("p", msg.agent.substr(0, max_handshake_str_length) + "...") ); + valid = false; } if ((msg.sig != chain::signature_type() || msg.token != sha256()) && (msg.token != fc::sha256::hash(msg.time))) { fc_wlog( logger, "Handshake message 
validation: token field invalid" ); @@ -3066,7 +3078,7 @@ namespace eosio { std::unique_lock g( connections_mtx ); auto it = (from ? connections.find(from) : connections.begin()); if (it == connections.end()) it = connections.begin(); - size_t num_rm = 0; + size_t num_rm = 0, num_clients = 0, num_peers = 0; while (it != connections.end()) { if (fc::time_point::now() >= max_time) { connection_wptr wit = *it; @@ -3077,13 +3089,16 @@ namespace eosio { } return; } + (*it)->peer_address().empty() ? ++num_clients : ++num_peers; if( !(*it)->socket_is_open() && !(*it)->connecting) { - if( (*it)->peer_address().length() > 0) { + if( !(*it)->peer_address().empty() ) { if( !(*it)->resolve_and_connect() ) { it = connections.erase(it); + --num_peers; ++num_rm; continue; } } else { + --num_clients; ++num_rm; it = connections.erase(it); continue; } @@ -3091,6 +3106,9 @@ namespace eosio { ++it; } g.unlock(); + if( num_clients > 0 || num_peers > 0 ) + fc_ilog( logger, "p2p client connections: ${num}/${max}, peer connections: ${pnum}/${pmax}", + ("num", num_clients)("max", max_client_count)("pnum", num_peers)("pmax", supplied_peers.size()) ); fc_dlog( logger, "connection monitor, removed ${n} connections", ("n", num_rm) ); if( reschedule ) { start_conn_timer( connector_period, std::weak_ptr()); @@ -3321,9 +3339,13 @@ namespace eosio { if( options.count( "p2p-listen-endpoint" ) && options.at("p2p-listen-endpoint").as().length()) { my->p2p_address = options.at( "p2p-listen-endpoint" ).as(); + EOS_ASSERT( my->p2p_address.length() <= max_p2p_address_length, chain::plugin_config_exception, + "p2p-listen-endpoint to long, must be less than ${m}", ("m", max_p2p_address_length) ); } if( options.count( "p2p-server-address" ) ) { my->p2p_server_address = options.at( "p2p-server-address" ).as(); + EOS_ASSERT( my->p2p_server_address.length() <= max_p2p_address_length, chain::plugin_config_exception, + "p2p_server_address to long, must be less than ${m}", ("m", max_p2p_address_length) ); } 
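The length limits applied above bound every free-form string carried in a `handshake_message`, with the listen/server addresses bounded further by the longest possible host:port. A rough Python sketch of the validation logic (the two constants are copied from `protocol.hpp` in this patch; the function shape and return convention are illustrative, and the real code logs each failure via `fc_wlog` instead of returning a reason):

```python
MAX_P2P_ADDRESS_LENGTH = 253 + 6    # longest DNS name + ":65535"
MAX_HANDSHAKE_STR_LENGTH = 384      # p2p_address/os/agent, with headroom

def validate_handshake_strings(p2p_address, os_field, agent):
    # Mirrors the new checks: required fields may not be empty, and no
    # string field may exceed the handshake limit.
    if not p2p_address:
        return False, 'p2p_address is null string'
    if len(p2p_address) > MAX_HANDSHAKE_STR_LENGTH:
        return False, 'p2p_address too large'
    if not os_field:
        return False, 'os field is null string'
    if len(os_field) > MAX_HANDSHAKE_STR_LENGTH:
        return False, 'os field too large'
    if len(agent) > MAX_HANDSHAKE_STR_LENGTH:
        return False, 'agent field too large'
    return True, ''
```

Capping these fields keeps a malicious peer from inflating handshake messages (and the logs that echo them) with arbitrarily long strings.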
my->thread_pool_size = options.at( "net-threads" ).as(); @@ -3335,6 +3357,8 @@ namespace eosio { } if( options.count( "agent-name" )) { my->user_agent_name = options.at( "agent-name" ).as(); + EOS_ASSERT( my->user_agent_name.length() <= max_handshake_str_length, chain::plugin_config_exception, + "agent-name to long, must be less than ${m}", ("m", max_handshake_str_length) ); } if( options.count( "allowed-connection" )) { diff --git a/unittests/block_tests.cpp b/unittests/block_tests.cpp index 023907ce3e6..3b98e4082d3 100644 --- a/unittests/block_tests.cpp +++ b/unittests/block_tests.cpp @@ -30,6 +30,13 @@ BOOST_AUTO_TEST_CASE(block_with_invalid_tx_test) auto invalid_packed_tx = packed_transaction(signed_tx); copy_b->transactions.back().trx = invalid_packed_tx; + // Re-calculate the transaction merkle + vector trx_digests; + const auto& trxs = copy_b->transactions; + for( const auto& a : trxs ) + trx_digests.emplace_back( a.digest() ); + copy_b->transaction_mroot = merkle( move(trx_digests) ); + // Re-sign the block auto header_bmroot = digest_type::hash( std::make_pair( copy_b->digest(), main.control->head_block_state()->blockroot_merkle.get_root() ) ); auto sig_digest = digest_type::hash( std::make_pair(header_bmroot, main.control->head_block_state()->pending_schedule.schedule_hash) ); diff --git a/unittests/forked_tests.cpp b/unittests/forked_tests.cpp index 7268a6d78c6..4ff7b07d350 100644 --- a/unittests/forked_tests.cpp +++ b/unittests/forked_tests.cpp @@ -265,7 +265,7 @@ BOOST_AUTO_TEST_CASE( forking ) try { wlog( "end push c2 blocks to c1" ); wlog( "now push dan's block to c1 but first corrupt it so it is a bad block" ); signed_block bad_block = std::move(*b); - bad_block.transaction_mroot = bad_block.previous; + bad_block.action_mroot = bad_block.previous; auto bad_block_bs = c.control->create_block_state_future( std::make_shared(std::move(bad_block)) ); c.control->abort_block(); BOOST_REQUIRE_EXCEPTION(c.control->push_block( bad_block_bs, 
forked_branch_callback{}, trx_meta_cache_lookup{} ), fc::exception, From d922c085eb57fb9f3011d3d8d8feea3b95a09e71 Mon Sep 17 00:00:00 2001 From: Kevin Heifner Date: Thu, 23 Jan 2020 12:51:36 -0600 Subject: [PATCH 20/25] New policy: drop all incoming blocks while producing our own blocks --- .../include/eosio/chain/plugin_interface.hpp | 2 +- plugins/chain_plugin/chain_plugin.cpp | 6 ++--- .../eosio/chain_plugin/chain_plugin.hpp | 2 +- plugins/net_plugin/net_plugin.cpp | 2 +- plugins/producer_plugin/producer_plugin.cpp | 22 ++++++++++++------- 5 files changed, 20 insertions(+), 14 deletions(-) diff --git a/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp b/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp index 44ef1860c60..29bbfe710b6 100644 --- a/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp +++ b/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp @@ -44,7 +44,7 @@ namespace eosio { namespace chain { namespace plugin_interface { namespace methods { // synchronously push a block/trx to a single provider - using block_sync = method_decl; + using block_sync = method_decl&), first_provider_policy>; using transaction_async = method_decl), first_provider_policy>; } } diff --git a/plugins/chain_plugin/chain_plugin.cpp b/plugins/chain_plugin/chain_plugin.cpp index a854095fe3f..d5585fa34a7 100644 --- a/plugins/chain_plugin/chain_plugin.cpp +++ b/plugins/chain_plugin/chain_plugin.cpp @@ -1133,8 +1133,8 @@ void chain_apis::read_write::validate() const { EOS_ASSERT( db.get_read_mode() != chain::db_read_mode::READ_ONLY, missing_chain_api_plugin_exception, "Not allowed, node in read-only mode" ); } -void chain_plugin::accept_block(const signed_block_ptr& block ) { - my->incoming_block_sync_method(block); +void chain_plugin::accept_block(const signed_block_ptr& block, const block_id_type& id ) { + my->incoming_block_sync_method(block, id); } void chain_plugin::accept_transaction(const chain::packed_transaction_ptr& 
trx, next_function next) { @@ -2066,7 +2066,7 @@ fc::variant read_only::get_block_header_state(const get_block_header_state_param void read_write::push_block(read_write::push_block_params&& params, next_function next) { try { - app().get_method()(std::make_shared(std::move(params))); + app().get_method()(std::make_shared(std::move(params)), {}); next(read_write::push_block_results{}); } catch ( boost::interprocess::bad_alloc& ) { chain_plugin::handle_db_exhaustion(); diff --git a/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp b/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp index e00deb709e2..9aabc1f16ce 100644 --- a/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp +++ b/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp @@ -706,7 +706,7 @@ class chain_plugin : public plugin { chain_apis::read_only get_read_only_api() const { return chain_apis::read_only(chain(), get_abi_serializer_max_time()); } chain_apis::read_write get_read_write_api() { return chain_apis::read_write(chain(), get_abi_serializer_max_time()); } - void accept_block( const chain::signed_block_ptr& block ); + void accept_block( const chain::signed_block_ptr& block, const chain::block_id_type& id ); void accept_transaction(const chain::packed_transaction_ptr& trx, chain::plugin_interface::next_function next); bool block_is_on_preferred_chain(const chain::block_id_type& block_id); diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp index d8c475dbc39..555bedb7eae 100644 --- a/plugins/net_plugin/net_plugin.cpp +++ b/plugins/net_plugin/net_plugin.cpp @@ -2936,7 +2936,7 @@ namespace eosio { go_away_reason reason = fatal_other; try { - my_impl->chain_plug->accept_block(msg); + my_impl->chain_plug->accept_block(msg, blk_id); my_impl->update_chain_info(); reason = no_reason; } catch( const unlinkable_block_exception &ex) { diff --git a/plugins/producer_plugin/producer_plugin.cpp 
b/plugins/producer_plugin/producer_plugin.cpp
index 506050155a6..f914715fb56 100644
--- a/plugins/producer_plugin/producer_plugin.cpp
+++ b/plugins/producer_plugin/producer_plugin.cpp
@@ -332,8 +332,16 @@ class producer_plugin_impl : public std::enable_shared_from_this
-      void on_incoming_block(const signed_block_ptr& block) {
-         auto id = block->id();
+      void on_incoming_block(const signed_block_ptr& block, const std::optional& block_id) {
+         auto& chain = chain_plug->chain();
+         if ( chain.is_building_block() && _pending_block_mode == pending_block_mode::producing ) {
+            fc_wlog( _log, "dropped incoming block #${num} while producing #${pbn} for ${bt}, id: ${id}",
+                     ("num", block->block_num())("pbn", chain.head_block_num() + 1)
+                     ("bt", chain.pending_block_time())("id", block_id ? (*block_id).str() : "UNKNOWN") );
+            return;
+         }
+
+         const auto& id = block_id ? *block_id : block->id();
          auto blk_num = block->block_num();
          fc_dlog(_log, "received incoming block ${n} ${id}", ("n", blk_num)("id", id));
@@ -341,8 +349,6 @@ class producer_plugin_impl : public std::enable_shared_from_this
          EOS_ASSERT( block->timestamp < (fc::time_point::now() + fc::seconds( 7 )), block_from_the_future,
                      "received a block from the future, ignoring it: ${id}", ("id", id) );
 
-         chain::controller& chain = chain_plug->chain();
-
          /* de-dupe here... no point in aborting block if we already know the block */
          auto existing = chain.fetch_block_by_id( id );
          if( existing ) { return; }
@@ -651,7 +657,7 @@ void producer_plugin::set_program_options(
          "Limit (between 1 and 1000) on the multiple that CPU/NET virtual resources can extend during low usage (only enforced subjectively; use 1000 to not enforce any limit)")
         ("produce-time-offset-us", boost::program_options::value()->default_value(0),
          "offset of non last block producing time in microseconds. Negative number results in blocks to go out sooner, and positive number results in blocks to go out later")
-        ("last-block-time-offset-us", boost::program_options::value()->default_value(0),
+        ("last-block-time-offset-us", boost::program_options::value()->default_value(-200000),
          "offset of last block producing time in microseconds. Negative number results in blocks to go out sooner, and positive number results in blocks to go out later")
         ("max-scheduled-transaction-time-per-block-ms", boost::program_options::value()->default_value(100),
          "Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing.")
@@ -840,7 +846,7 @@ void producer_plugin::plugin_initialize(const boost::program_options::variables_
   my->_incoming_block_subscription = app().get_channel().subscribe(
         [this](const signed_block_ptr& block) {
      try {
-        my->on_incoming_block(block);
+        my->on_incoming_block(block, {});
      } LOG_AND_DROP();
   });
@@ -852,8 +858,8 @@ void producer_plugin::plugin_initialize(const boost::program_options::variables_
   });
 
   my->_incoming_block_sync_provider = app().get_method().register_provider(
-        [this](const signed_block_ptr& block) {
-           my->on_incoming_block(block);
+        [this](const signed_block_ptr& block, const std::optional& block_id) {
+           my->on_incoming_block(block, block_id);
         });
 
   my->_incoming_transaction_async_provider = app().get_method().register_provider(

From fffe6f320768e58b04d758d7c8bf81b25a162e8b Mon Sep 17 00:00:00 2001
From: Kevin Heifner
Date: Thu, 23 Jan 2020 13:48:45 -0600
Subject: [PATCH 21/25] Inform net_plugin if block was accepted

---
 .../include/eosio/chain/plugin_interface.hpp    |  2 +-
 plugins/chain_plugin/chain_plugin.cpp           |  4 ++--
 .../include/eosio/chain_plugin/chain_plugin.hpp |  2 +-
 plugins/net_plugin/net_plugin.cpp               |  4 +++-
 plugins/producer_plugin/producer_plugin.cpp     | 12 +++++++-----
 5 files changed, 14 insertions(+), 10 deletions(-)

diff --git
a/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp b/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp
index 29bbfe710b6..9c1186d4eb7 100644
--- a/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp
+++ b/plugins/chain_interface/include/eosio/chain/plugin_interface.hpp
@@ -44,7 +44,7 @@ namespace eosio { namespace chain { namespace plugin_interface {
       namespace methods {
          // synchronously push a block/trx to a single provider
-         using block_sync = method_decl&), first_provider_policy>;
+         using block_sync = method_decl&), first_provider_policy>;
          using transaction_async = method_decl), first_provider_policy>;
       }
 }
diff --git a/plugins/chain_plugin/chain_plugin.cpp b/plugins/chain_plugin/chain_plugin.cpp
index d5585fa34a7..722dcee81e5 100644
--- a/plugins/chain_plugin/chain_plugin.cpp
+++ b/plugins/chain_plugin/chain_plugin.cpp
@@ -1133,8 +1133,8 @@ void chain_apis::read_write::validate() const {
    EOS_ASSERT( db.get_read_mode() != chain::db_read_mode::READ_ONLY, missing_chain_api_plugin_exception, "Not allowed, node in read-only mode" );
 }
 
-void chain_plugin::accept_block(const signed_block_ptr& block, const block_id_type& id ) {
-   my->incoming_block_sync_method(block, id);
+bool chain_plugin::accept_block(const signed_block_ptr& block, const block_id_type& id ) {
+   return my->incoming_block_sync_method(block, id);
 }
 
 void chain_plugin::accept_transaction(const chain::packed_transaction_ptr& trx, next_function next) {
diff --git a/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp b/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp
index 9aabc1f16ce..2b608c4e4a8 100644
--- a/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp
+++ b/plugins/chain_plugin/include/eosio/chain_plugin/chain_plugin.hpp
@@ -706,7 +706,7 @@ class chain_plugin : public plugin {
    chain_apis::read_only get_read_only_api() const { return chain_apis::read_only(chain(), get_abi_serializer_max_time()); }
    chain_apis::read_write get_read_write_api() { return chain_apis::read_write(chain(), get_abi_serializer_max_time()); }
 
-   void accept_block( const chain::signed_block_ptr& block, const chain::block_id_type& id );
+   bool accept_block( const chain::signed_block_ptr& block, const chain::block_id_type& id );
    void accept_transaction(const chain::packed_transaction_ptr& trx, chain::plugin_interface::next_function next);
 
    bool block_is_on_preferred_chain(const chain::block_id_type& block_id);
diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp
index 555bedb7eae..41df0e16334 100644
--- a/plugins/net_plugin/net_plugin.cpp
+++ b/plugins/net_plugin/net_plugin.cpp
@@ -2936,8 +2936,9 @@ namespace eosio {
       go_away_reason reason = fatal_other;
       try {
-         my_impl->chain_plug->accept_block(msg, blk_id);
+         bool accepted = my_impl->chain_plug->accept_block(msg, blk_id);
          my_impl->update_chain_info();
+         if( !accepted ) return;
          reason = no_reason;
       } catch( const unlinkable_block_exception &ex) {
          peer_elog(c, "bad signed_block ${n} ${id}...: ${m}", ("n", blk_num)("id", blk_id.str().substr(8,16))("m",ex.what()));
@@ -2959,6 +2960,7 @@ namespace eosio {
       if( reason == no_reason ) {
          boost::asio::post( my_impl->thread_pool->get_executor(), [dispatcher = my_impl->dispatcher.get(), cid=c->connection_id, blk_id, msg]() {
+            fc_elog( logger, "accepted signed_block : #${n} ${id}...", ("n", msg->block_num())("id", blk_id.str().substr(8,16)) );
             dispatcher->add_peer_block( blk_id, cid );
             dispatcher->update_txns_block_num( msg );
          });
diff --git a/plugins/producer_plugin/producer_plugin.cpp b/plugins/producer_plugin/producer_plugin.cpp
index f914715fb56..0ce8dfe10ea 100644
--- a/plugins/producer_plugin/producer_plugin.cpp
+++ b/plugins/producer_plugin/producer_plugin.cpp
@@ -332,13 +332,13 @@ class producer_plugin_impl : public std::enable_shared_from_this
-      void on_incoming_block(const signed_block_ptr& block, const std::optional& block_id) {
+      bool on_incoming_block(const signed_block_ptr& block, const std::optional& block_id) {
         auto& chain = chain_plug->chain();
         if ( chain.is_building_block() && _pending_block_mode == pending_block_mode::producing ) {
            fc_wlog( _log, "dropped incoming block #${num} while producing #${pbn} for ${bt}, id: ${id}",
                     ("num", block->block_num())("pbn", chain.head_block_num() + 1)
                     ("bt", chain.pending_block_time())("id", block_id ? (*block_id).str() : "UNKNOWN") );
-           return;
+           return false;
        }
 
        const auto& id = block_id ? *block_id : block->id();
@@ -351,7 +351,7 @@ class producer_plugin_impl : public std::enable_shared_from_this
        auto existing = chain.fetch_block_by_id( id );
-       if( existing ) { return; }
+       if( existing ) { return true; }
        app().get_channel().publish( priority::medium, block );
@@ -401,6 +401,8 @@ class producer_plugin_impl : public std::enable_shared_from_this
                 ("confs", hbs->block->confirmed)("latency", (fc::time_point::now() - hbs->block->timestamp).count()/1000 ) );
        }
     }
+
+     return true;
   }
 
   class incoming_transaction_queue {
@@ -859,7 +861,7 @@ void producer_plugin::plugin_initialize(const boost::program_options::variables_
 
   my->_incoming_block_sync_provider = app().get_method().register_provider(
        [this](const signed_block_ptr& block, const std::optional& block_id) {
-          my->on_incoming_block(block, block_id);
+          return my->on_incoming_block(block, block_id);
        });
 
   my->_incoming_transaction_async_provider = app().get_method().register_provider(

From 63f21b155f0b8a56c5a27859f685dcce3e208aa1 Mon Sep 17 00:00:00 2001
From: Kevin Heifner
Date: Thu, 23 Jan 2020 15:28:50 -0600
Subject: [PATCH 22/25] Should have been debug log not error

---
 plugins/net_plugin/net_plugin.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/plugins/net_plugin/net_plugin.cpp b/plugins/net_plugin/net_plugin.cpp
index 41df0e16334..76dd0d196a8 100644
--- a/plugins/net_plugin/net_plugin.cpp
+++ b/plugins/net_plugin/net_plugin.cpp
@@ -2960,7 +2960,7 @@ namespace eosio {
       if( reason == no_reason ) {
          boost::asio::post( my_impl->thread_pool->get_executor(), [dispatcher = my_impl->dispatcher.get(), cid=c->connection_id, blk_id, msg]() {
-            fc_elog( logger, "accepted signed_block : #${n} ${id}...", ("n", msg->block_num())("id", blk_id.str().substr(8,16)) );
+            fc_dlog( logger, "accepted signed_block : #${n} ${id}...", ("n", msg->block_num())("id", blk_id.str().substr(8,16)) );
             dispatcher->add_peer_block( blk_id, cid );
             dispatcher->update_txns_block_num( msg );
          });

From 84a0198ec9f6782a828e4a6b3c1df9838d577522 Mon Sep 17 00:00:00 2001
From: Kevin Heifner
Date: Mon, 27 Jan 2020 08:27:44 -0600
Subject: [PATCH 23/25] Keep http_plugin_impl alive until all posted jobs finish

---
 plugins/http_plugin/http_plugin.cpp                           | 2 ++
 plugins/http_plugin/include/eosio/http_plugin/http_plugin.hpp | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/plugins/http_plugin/http_plugin.cpp b/plugins/http_plugin/http_plugin.cpp
index 7a720fa7564..887a2958c92 100644
--- a/plugins/http_plugin/http_plugin.cpp
+++ b/plugins/http_plugin/http_plugin.cpp
@@ -691,6 +691,8 @@ namespace eosio {
       if( my->thread_pool ) {
          my->thread_pool->stop();
       }
+
+      app().post( 0, [me = my](){} ); // keep my pointer alive until queue is drained
    }
 
    void http_plugin::add_handler(const string& url, const url_handler& handler) {
diff --git a/plugins/http_plugin/include/eosio/http_plugin/http_plugin.hpp b/plugins/http_plugin/include/eosio/http_plugin/http_plugin.hpp
index 5c81279fe62..29c31474fec 100644
--- a/plugins/http_plugin/include/eosio/http_plugin/http_plugin.hpp
+++ b/plugins/http_plugin/include/eosio/http_plugin/http_plugin.hpp
@@ -98,7 +98,7 @@ namespace eosio {
         get_supported_apis_result get_supported_apis()const;
 
       private:
-        std::unique_ptr my;
+        std::shared_ptr my;
      };

From f3beefe6d41497800c8e72525bce2fdf40df2d77 Mon Sep 17 00:00:00 2001
From: Nathan Pierce
Date: Mon, 27 Jan 2020 13:10:21 -0500
Subject: [PATCH 24/25] added logic to prevent lrt pipeline from triggering
 itself and fixed scheduled SOURCE

---
 .cicd/generate-pipeline.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.cicd/generate-pipeline.sh b/.cicd/generate-pipeline.sh
index fd9d3ab6376..3072f0d4b34 100755
---
a/.cicd/generate-pipeline.sh
+++ b/.cicd/generate-pipeline.sh
@@ -80,8 +80,8 @@ if [[ ! -z ${BUILDKITE_TRIGGERED_FROM_BUILD_ID} ]]; then
 fi
 export BUILD_SOURCE=${BUILD_SOURCE:---build \$BUILDKITE_BUILD_ID}
 # set trigger_job if master/release/develop branch and webhook
-if [[ $BUILDKITE_BRANCH =~ ^release/[0-9]+\.[0-9]+\.x$ || $BUILDKITE_BRANCH =~ ^master$ || $BUILDKITE_BRANCH =~ ^develop$ ]]; then
-    [[ $BUILDKITE_SOURCE != 'scheduled' ]] && export TRIGGER_JOB=true
+if [[ ! $BUILDKITE_PIPELINE_SLUG =~ 'lrt' ]] && [[ $BUILDKITE_BRANCH =~ ^release/[0-9]+\.[0-9]+\.x$ || $BUILDKITE_BRANCH =~ ^master$ || $BUILDKITE_BRANCH =~ ^develop$ ]]; then
+    [[ $BUILDKITE_SOURCE != 'schedule' ]] && export TRIGGER_JOB=true
 fi
 oIFS="$IFS"
 IFS=$''

From f697019e5439ad5cad0a385eb016ed9af0309988 Mon Sep 17 00:00:00 2001
From: Kevin Heifner
Date: Mon, 27 Jan 2020 12:51:23 -0600
Subject: [PATCH 25/25] Bump version to 2.0.1

---
 CMakeLists.txt                                  |  2 +-
 README.md                                       | 12 ++++++------
 docs/00_install/00_install-prebuilt-binaries.md | 14 +++++++-------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 9a490f97988..630e7670d49 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -25,7 +25,7 @@ set( CXX_STANDARD_REQUIRED ON)
 
 set(VERSION_MAJOR 2)
 set(VERSION_MINOR 0)
-set(VERSION_PATCH 0)
+set(VERSION_PATCH 1)
 
 #set(VERSION_SUFFIX rc3)
 
 if(VERSION_SUFFIX)
diff --git a/README.md b/README.md
index 936253da56f..6cdf605f4fc 100644
--- a/README.md
+++ b/README.md
@@ -74,13 +74,13 @@ $ brew remove eosio
 
 #### Ubuntu 18.04 Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-18.04_amd64.deb
-$ sudo apt install ./eosio_2.0.0-1-ubuntu-18.04_amd64.deb
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio_2.0.1-1-ubuntu-18.04_amd64.deb
+$ sudo apt install ./eosio_2.0.1-1-ubuntu-18.04_amd64.deb
 ```
 #### Ubuntu 16.04 Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-16.04_amd64.deb
-$ sudo apt install ./eosio_2.0.0-1-ubuntu-16.04_amd64.deb
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio_2.0.1-1-ubuntu-16.04_amd64.deb
+$ sudo apt install ./eosio_2.0.1-1-ubuntu-16.04_amd64.deb
 ```
 #### Ubuntu Package Uninstall
 ```sh
@@ -91,8 +91,8 @@ $ sudo apt remove eosio
 
 #### RPM Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio-2.0.0-1.el7.x86_64.rpm
-$ sudo yum install ./eosio-2.0.0-1.el7.x86_64.rpm
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio-2.0.1-1.el7.x86_64.rpm
+$ sudo yum install ./eosio-2.0.1-1.el7.x86_64.rpm
 ```
 #### RPM Package Uninstall
 ```sh
diff --git a/docs/00_install/00_install-prebuilt-binaries.md b/docs/00_install/00_install-prebuilt-binaries.md
index d2c9beb9208..b5988fa7cbd 100644
--- a/docs/00_install/00_install-prebuilt-binaries.md
+++ b/docs/00_install/00_install-prebuilt-binaries.md
@@ -25,13 +25,13 @@ $ brew remove eosio
 
 #### Ubuntu 18.04 Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-18.04_amd64.deb
-$ sudo apt install ./eosio_2.0.0-1-ubuntu-18.04_amd64.deb
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio_2.0.1-1-ubuntu-18.04_amd64.deb
+$ sudo apt install ./eosio_2.0.1-1-ubuntu-18.04_amd64.deb
 ```
 #### Ubuntu 16.04 Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio_2.0.0-1-ubuntu-16.04_amd64.deb
-$ sudo apt install ./eosio_2.0.0-1-ubuntu-16.04_amd64.deb
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio_2.0.1-1-ubuntu-16.04_amd64.deb
+$ sudo apt install ./eosio_2.0.1-1-ubuntu-16.04_amd64.deb
 ```
 #### Ubuntu Package Uninstall
 ```sh
@@ -42,8 +42,8 @@ $ sudo apt remove eosio
 
 #### RPM Package Install
 ```sh
-$ wget https://github.com/eosio/eos/releases/download/v2.0.0/eosio-2.0.0-1.el7.x86_64.rpm
-$ sudo yum install ./eosio-2.0.0-1.el7.x86_64.rpm
+$ wget https://github.com/eosio/eos/releases/download/v2.0.1/eosio-2.0.1-1.el7.x86_64.rpm
+$ sudo yum install ./eosio-2.0.1-1.el7.x86_64.rpm
 ```
 #### RPM Package Uninstall
 ```sh
@@ -56,7 +56,7 @@ After installing the prebuilt packages, the actual EOSIO binaries will be located under:
 
 * `/usr/opt/eosio/<version-string>/bin` (Linux-based); or
 * `/usr/local/Cellar/eosio/<version-string>/bin` (MacOS)
 
-where `version-string` is the EOSIO version that was installed; e.g. `2.0.0-rc2`.
+where `version-string` is the EOSIO version that was installed; e.g. `2.0.1`.
 
 Also, soft links for each EOSIO program (`nodeos`, `cleos`, `keosd`, etc.) will be created under `usr/bin` or `usr/local/bin` to allow them to be executed from any directory.