diff --git a/images/hyperion.png b/images/hyperion.png
new file mode 100644
index 00000000..7359f85b
Binary files /dev/null and b/images/hyperion.png differ
diff --git a/images/hyperion_kibana.png b/images/hyperion_kibana.png
new file mode 100644
index 00000000..d7778ed2
Binary files /dev/null and b/images/hyperion_kibana.png differ
diff --git a/images/hyperion_rabbitmq.png b/images/hyperion_rabbitmq.png
new file mode 100644
index 00000000..f00030c7
Binary files /dev/null and b/images/hyperion_rabbitmq.png differ
diff --git a/native/07_node-operation/30_history/01_intro-to-hyperion-full-history.md b/native/07_node-operation/30_history/01_intro-to-hyperion-full-history.md
new file mode 100644
index 00000000..ea4548d4
--- /dev/null
+++ b/native/07_node-operation/30_history/01_intro-to-hyperion-full-history.md
@@ -0,0 +1,120 @@
+---
+title: Introduction to Hyperion
+contributors:
+  - { name: Ross Dold (EOSphere), github: eosphere }
+---
+
+Hyperion is a full history solution for indexing, storing and retrieving an Antelope blockchain's historical data.
+It was built by EOS RIO to be an enterprise-grade, performant and highly scalable Antelope history solution. Their [documentation](https://hyperion.docs.eosrio.io/) is excellent and certainly a worthwhile starting point. This Technical How To series covers some of the same content and adds operational nuances from a practical standpoint, based on EOSphere's experience.
+
+[Learn more about EOS RIO Hyperion](https://eosrio.io/hyperion/)
+
+![image](/images/hyperion.png)
+
+## Components
+
+The Hyperion Full History service is a collection of purpose-built EOS RIO software and industry-standard applications. The eight primary building blocks are the following:
+
+#### EOS RIO Hyperion Indexer and API
+
+The **Indexer** processes data sourced from an Antelope Leap software State-History (SHIP) node and enables it to be indexed in Elasticsearch. The Hyperion Indexer also makes use of the Antelope binary-to-JSON conversion functionality, driven by contract ABIs, provided by [abieos](https://github.com/AntelopeIO/abieos). Deserialisation performance is greatly improved by using the abieos C++ code through EOS RIO's own NPM package [**node-abieos**](https://github.com/eosrio/node-abieos), which provides a Node.js native binding.
+
+The **API** is the front end for client queries; it responds to V2 or legacy V1 requests and finds data for these responses by directly querying the Elasticsearch cluster.
+
+#### Antelope Leap Software State-History (SHIP) Node
+
+The State-History plugin is used by nodeos to capture historical data about the blockchain state and store this data in an externally readable flat file format. This readable file is accessed by the Hyperion Indexer.
+
+#### RabbitMQ
+
+[RabbitMQ](https://www.rabbitmq.com/) is an open-source message broker that is used by Hyperion to queue messages and transport data during the multiple stages of indexing to Elasticsearch.
+
+#### Redis
+
+[Redis](https://redis.io/) is an in-memory data structure store and is used by Hyperion as a predictive temporary database cache for HTTP API client queries and as an Indexer transaction cache.
+
+#### Node.js
+
+The Hyperion Indexer and API are [Node.js](https://nodejs.org/en/) applications and use Node.js as their open-source back-end JavaScript runtime environment.
+
+#### PM2
+
+[PM2](https://pm2.keymetrics.io/) is a process manager for Node.js, used to launch and run the Hyperion Indexer and API.
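+
+For a sense of how PM2 is used in practice, the sketch below shows the day-to-day commands used later in this series to launch and inspect the Hyperion processes (the `eos-indexer` and `eos-api` process names come from the chain configuration covered in the later guides):
+
+```bash
+#Start the Hyperion Indexer and API#
+> pm2 start --only eos-indexer --update-env
+> pm2 start --only eos-api --update-env
+
+#Inspect the running processes and their logs#
+> pm2 status
+> pm2 logs
+```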
+
+#### Elasticsearch Cluster
+
+[Elasticsearch](https://www.elastic.co/) is a search engine based on the Lucene library. It is used by Hyperion to store and retrieve all indexed data in a highly performant, schema-free JSON document format.
+
+#### Kibana
+
+[Kibana](https://www.elastic.co/kibana/) is a component of the Elastic Stack, a dashboard that enables visualising data and simplifies the operation and monitoring of an Elasticsearch cluster. All Hyperion indexed data resides in the Elasticsearch database; Kibana gives a direct view of this data and of the health of the Elasticsearch cluster.
+
+## Hyperion Topology
+
+The topology of your Hyperion deployment depends on your history service requirements and the network you intend to index, whether that is Public/Private, Mainnet/Testnet or Full/Partial History.
+
+This guide will discuss EOS Mainnet with Full History. Testnets and private networks generally have a much simpler topology and far lower resource requirements.
+
+**EOS Mainnet**
+
+EOSphere originally started with a single server running all Hyperion Software Components except for the EOS State-History Node. However, a challenge was discovered in relation to the Elasticsearch JVM heap size as EOS network utilisation grew and our API became well used.
+
+JVM heap size is the amount of memory allocated to the Java Virtual Machine of an Elasticsearch node; the more heap available, the more cache memory is available for indexing and search operations. If it's too low, Hyperion indexing will be slow and search queries will be very latent. If the JVM heap size on an Elasticsearch node is more than 32GB (the practical threshold is usually lower than this), the threshold for compressed ordinary object pointers (OOPs) will be exceeded and the JVM will stop using compression. This is exceptionally inefficient in regard to memory management, and the node will consume vastly more memory.
+
+The result of the above is the need to create a cluster of more than one Elasticsearch node, as the limit applies per Elasticsearch node instance. For example, two nodes each with a 25GB JVM heap provide 50GB of cluster-wide heap.
+
+Other benefits of clustering more than one Elasticsearch node are of course more CPU cores for processing and more DISK for the ever-expanding Full History storage requirements. Elasticsearch stores indexed data in documents; these documents are allocated to shards, and shards are automatically balanced between the nodes in a cluster. As well as distributing DISK utilisation across nodes, each shard is its own Lucene index, and as such CPU utilisation is also distributed across the cluster.
+
+I recommend reading [Elasticsearch: The Definitive Guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html) as an excellent book to help you understand Elasticsearch concepts.
+
+Taking the above into account, our current recommended topology for the EOS Mainnet is to run the following nodes, either logically or physically:
+
+* **Load Balancer**
+  * SSL Offload
+  * Usage Policies
+* **Hyperion Server 1**
+  * Hyperion API
+  * Hyperion Indexer
+  * RabbitMQ
+  * Redis
+  * Node.js
+  * PM2
+  * Kibana
+* **Hyperion Server 2**
+  * Elasticsearch I (25GB JVM Heap)
+* **Hyperion Server 3**
+  * Elasticsearch II (25GB JVM Heap)
+* **Hyperion Server 4**
+  * Elasticsearch III (25GB JVM Heap)
+* **State-History**
+  * Network sync'd nodeos with state_history plugin enabled
+
+## Hyperion Hardware
+
+As with Hyperion topology, hardware choice will vary with your history service requirements.
+
+The recommendations below are for EOS Mainnet with Full History and reflect what currently works in the EOSphere Public Service Offerings.
+
+**EOS Mainnet**
+
+* **Load Balancer**
+  * Dealer's choice, however HAProxy is a great option
+  * High Speed Internet 100Mb/s+
+* **Hyperion Server 1**
+  * Modern CPU, 3GHz+, 8 Cores+
+  * 64GB RAM
+  * 128GB DISK _(Enterprise Grade SSD/NVMe)_
+  * 1Gb/s+ LAN
+* **Hyperion Server 2–4**
+  * Modern CPU, 3GHz+, 8 Cores+
+  * 64GB RAM
+  * Enterprise Grade SSD/NVMe
+  _The current (February 2024) Elasticsearch database is 24TB; I suggest provisioning 35–40TB across the cluster for Full History service longevity_
+  * 1Gb/s+ LAN
+* **State-History**
+  * Modern CPU, 4GHz+, 4 Cores
+  * 128GB RAM
+  * 256GB DISK 1 _(Enterprise Grade SSD/NVMe)_
+  * 16TB DISK 2 _(SAS or SATA are OK)_
+
+With that introduction, you should now have an informed starting point for your Hyperion services journey.
diff --git a/native/07_node-operation/30_history/02_build-hyperion-software-components.md b/native/07_node-operation/30_history/02_build-hyperion-software-components.md
new file mode 100644
index 00000000..aa18e316
--- /dev/null
+++ b/native/07_node-operation/30_history/02_build-hyperion-software-components.md
@@ -0,0 +1,192 @@
+---
+title: Build Hyperion Components
+contributors:
+  - { name: Ross Dold (EOSphere), github: eosphere }
+---
+
+The Hyperion Full History service is a collection of **eight** purpose-built EOS RIO software and industry-standard applications.
+
+This walkthrough will install all components excluding the SHIP node on a single Ubuntu 22.04 server; please reference [Introduction to Hyperion Full History](./01_intro-to-hyperion-full-history.md) for infrastructure suggestions.
+
+The process for building each of these primary building blocks is covered below:
+
+## State-History (SHIP) Node
+
+The Hyperion deployment requires access to a fully sync'd State-History Node; the currently recommended SHIP version is Leap `v5.0.2`.
+
+## RabbitMQ
+
+To install the latest RabbitMQ (currently `3.13.0`), be sure to check their latest [Cloudsmith Quick Start Script](https://www.rabbitmq.com/install-debian.html); in our experience this is the simplest way to ensure you are current and correctly built.
+
+The summary process is below:
+
+```bash
+> sudo apt update
+
+> sudo apt-get install curl gnupg apt-transport-https -y
+
+#Team RabbitMQ's main signing key#
+> curl -1sLf "https://keys.openpgp.org/vks/v1/by-fingerprint/0A9AF2115F4687BD29803A206B73A36E6026DFCA" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/com.rabbitmq.team.gpg > /dev/null
+
+#Cloudsmith: modern Erlang repository#
+> curl -1sLf https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/rabbitmq.E495BB49CC4BBE5B.gpg > /dev/null
+
+#Cloudsmith: RabbitMQ repository#
+> curl -1sLf https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/rabbitmq.9F4587F226208342.gpg > /dev/null
+
+--------------------------------------------------------------------
+#Add the apt repositories maintained by Team RabbitMQ - paste the repository definitions from the Cloudsmith Quick Start Script linked above into the heredoc#
+> sudo tee /etc/apt/sources.list.d/rabbitmq.list <<EOF
+## Erlang and RabbitMQ repository definitions from the Quick Start Script go here
+EOF
+
+#Update the package indices#
+> sudo apt-get update -y
+
+#Install Erlang packages#
+> sudo apt-get install -y erlang-base \
+    erlang-asn1 erlang-crypto erlang-eldap erlang-ftp erlang-inets \
+    erlang-mnesia erlang-os-mon erlang-parsetools erlang-public-key \
+    erlang-runtime-tools erlang-snmp erlang-ssl \
+    erlang-syntax-tools erlang-tftp erlang-tools erlang-xmerl
+
+#Install rabbitmq-server and its dependencies#
+> sudo apt-get install rabbitmq-server -y --fix-missing
+
+**Check Version**
+> sudo rabbitmqctl version
+```
+## Redis
+
+Our current Hyperion deployments are running on the latest Redis stable version `v7.2.4`, which is installed as below:
+
+```bash
+> sudo apt install lsb-release curl gpg
+
+#Redis Signing Key#
+> curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
+
+#Latest Redis repository#
+> echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
+
+> sudo apt update
+
+#Install Redis#
+> sudo apt install redis
+
+**Check Version**
+> redis-server --version
+```
+
+## Node.js
+
+Hyperion requires Node.js v18; our current Hyperion deployments are running the current LTS `v18.19.1`, which is installed as below:
+
+```bash
+#Download and import the Nodesource GPG key#
+> sudo apt update
+
+> sudo apt install -y ca-certificates curl gnupg
+
+> sudo mkdir -p /etc/apt/keyrings
+
+> curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
+
+#Create .deb repository#
+> NODE_MAJOR=18
+
+> echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
+
+#Install Node.js#
+> sudo apt update
+
+> sudo apt-get install -y nodejs
+
+**Check Version**
+> node -v
+```
+
+## PM2
+
+The latest public version is `5.3.1` and is installed as below:
+
+```bash
+> sudo apt update
+
+#Install PM2#
+> sudo npm install pm2@latest -g
+
+**Check Version**
+> pm2 -v
+```
+
+## Elasticsearch
+
+Currently most of our Hyperion deployments are using Elasticsearch `8.5.x`–`8.12.x` with great results; however, the current recommended Elasticsearch version is `8.12.2`, which I expect will work just as well or better.
+Build the latest Elasticsearch `8.x` as below:
+
+```bash
+> sudo apt update
+
+> sudo apt install apt-transport-https
+
+> sudo apt install gpg
+
+#Elasticsearch signing key#
+> wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
+
+#Latest Elasticsearch 8.x repository#
+> echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
+
+#Install Elasticsearch#
+> sudo apt update && sudo apt install elasticsearch
+
+**Take note of the super-user password**
+```
+
+## Kibana
+
+The Kibana version should be paired with the installed Elasticsearch version; the process below will install the current version:
+
+```bash
+> sudo apt update
+
+> sudo apt-get install apt-transport-https
+
+> sudo apt install gpg
+
+#Elasticsearch signing key - Not needed if already added#
+> wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
+
+#Latest Elasticsearch 8.x repository - Not needed if already added#
+> echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
+
+#Install Kibana#
+> sudo apt update && sudo apt install kibana
+```
+
+## **EOS RIO Hyperion Indexer and API**
+
+Currently (March 2024) the most robust and production-ready version of Hyperion in our experience is `3.3.9-8`, and it is used across all our Hyperion Full History Services. The EOS RIO team is constantly developing and improving their code; the best way to stay on top of the currently recommended version is to join the [Hyperion Telegram Group](https://t.me/EOSHyperion).
+
+Build Hyperion from `main` as below:
+
+```bash
+> git clone https://github.com/eosrio/hyperion-history-api.git
+
+> cd hyperion-history-api
+
+> git checkout main
+
+> npm install
+
+> npm audit fix
+```
+
+After all Hyperion Software Components are built and provisioned, you can proceed to configuration.
+
+The next guide will walk through the technical configuration of each component.
diff --git a/native/07_node-operation/30_history/03_configure-hyperion-software-components.md b/native/07_node-operation/30_history/03_configure-hyperion-software-components.md
new file mode 100644
index 00000000..89692ada
--- /dev/null
+++ b/native/07_node-operation/30_history/03_configure-hyperion-software-components.md
@@ -0,0 +1,514 @@
+---
+title: Configure Hyperion Components
+contributors:
+  - { name: Ross Dold (EOSphere), github: eosphere }
+---
+
+This configuration walkthrough example has all software components excluding the SHIP node installed on a single Ubuntu 22.04 server, as per "Build Hyperion Software Components".
+This example server is also equipped with 16GB of RAM and a modern 8 Core 4GHz CPU, suitable for running Full History on a **Testnet**. It is also recommended to disable system swap space.
+
+Please reference [Introduction to Hyperion](01_intro-to-hyperion-full-history.md) for infrastructure suggestions, especially when considering providing the service on an Antelope Mainnet.
+
+## State-History (SHIP) Node
+
+The Hyperion deployment requires access to a fully sync'd State-History Node.
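+
+For reference, a minimal sketch of the `config.ini` entries that enable State-History on that nodeos instance is shown below; the endpoint address, directory and retention flags are example values to adapt to your own deployment, and the endpoint should match the `ship` address used in `connections.json` later in this guide.
+
+```bash
+#Example State-History settings in the SHIP node's config.ini#
+plugin = eosio::state_history_plugin
+state-history-dir = state-history
+trace-history = true
+chain-state-history = true
+state-history-endpoint = 0.0.0.0:8080
+```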
+
+## RabbitMQ
+
+Follow the configuration process below:
+
+```bash
+#Enable the Web User Interface
+> sudo rabbitmq-plugins enable rabbitmq_management
+
+#Add Hyperion as a Virtual Host
+> sudo rabbitmqctl add_vhost hyperion
+
+#Create a USER and PASSWORD (Supplement with your own)
+> sudo rabbitmqctl add_user <USER> <PASSWORD>
+
+#Set your USER as Administrator
+> sudo rabbitmqctl set_user_tags <USER> administrator
+
+#Set your USER permissions to the Virtual Host
+> sudo rabbitmqctl set_permissions -p hyperion <USER> ".*" ".*" ".*"
+```
+
+Check access to the Web User Interface using a browser:
+
+```
+http://<SERVER_IP>:15672
+```
+
+## Redis
+
+Redis version 7.x has a tricky default memory policy of _use all available memory with no eviction_, meaning it will use all available memory until it runs out and crashes.
+
+To handle this challenge, it is important to set the memory policy to **allkeys-lru** (keeps the most recently used keys; removes the least recently used keys). We have been assigning 25% of the Hyperion node's memory to Redis with good results.
+
+Configure as below:
+
+```bash
+> sudo nano /etc/redis/redis.conf
+
+###GENERAL###
+daemonize yes
+supervised systemd
+
+###MEMORY MANAGEMENT###
+maxmemory 4gb
+maxmemory-policy allkeys-lru
+
+> sudo systemctl restart redis-server
+
+> sudo systemctl enable redis-server
+```
+
+To view Redis memory config and statistics:
+
+```bash
+> redis-cli
+-> info memory
+```
+
+Debug Redis issues in the logs:
+
+```bash
+> sudo tail -f /var/log/redis/redis-server.log
+```
+
+## Node.js
+
+Nothing to configure here; however, ensure you are running Node.js v18:
+
+```bash
+> node -v
+v18.x.x
+```
+
+## PM2
+
+Nothing to configure here; check that you are running the latest PM2 version `5.3.1`:
+
+```bash
+> pm2 --version
+5.3.1
+```
+
+## Elasticsearch
+
+Configure Elasticsearch 8.x as below:
+
+```bash
+> sudo nano /etc/elasticsearch/elasticsearch.yml
+
+cluster.name: <CLUSTER_NAME>
+node.name: <NODE_NAME>
+bootstrap.memory_lock: true
+network.host: <SERVER_IP>
+cluster.initial_master_nodes: ["<NODE_NAME>"]
+```
+
+In addition to the above configuration, it can also be advantageous to change the Elasticsearch disk usage watermarks (90% is the default high watermark) and the maximum number of shards per node (1000 is the default shard maximum), depending on your deployment.
+
+```bash
+> sudo nano /etc/elasticsearch/elasticsearch.yml
+
+cluster.routing.allocation.disk.threshold_enabled: true
+cluster.routing.allocation.disk.watermark.low: 93%
+cluster.routing.allocation.disk.watermark.high: 95%
+
+cluster.max_shards_per_node: 3000
+```
+
+Configure the Java Virtual Machine settings as below. Don't exceed the compressed OOP threshold of 31GB; this example uses 8GB, which is 50% of the example server's memory and suitable for Hyperion on an Antelope Testnet:
+
+```bash
+> sudo nano /etc/elasticsearch/jvm.options
+
+-Xms8g
+-Xmx8g
+```
+
+Allow the Elasticsearch service to lock the required JVM memory on the server:
+
+```bash
+> sudo systemctl edit elasticsearch
+
+[Service]
+LimitMEMLOCK=infinity
+```
+
+`systemctl` configuration below:
+
+```bash
+#Reload Units
+> sudo systemctl daemon-reload
+
+#Start Elasticsearch
+> sudo systemctl start elasticsearch.service
+
+#Start Elasticsearch automatically on boot
+> sudo systemctl enable elasticsearch.service
+```
+
+Check that Elasticsearch is running:
+
+```bash
+> sudo systemctl status elasticsearch.service
+```
+
+Debug Elasticsearch issues in the logs:
+
+```bash
+> sudo tail -f /var/log/elasticsearch/gc.log
+
+> sudo tail -f /var/log/elasticsearch/<CLUSTER_NAME>.log
+```
+
+Test the REST API, which by default requires access to the Elasticsearch cluster certificate; run as root for access to the cert:
+
+```bash
+> sudo su -
+
+root@> curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://<SERVER_IP>:9200
+
+**Enter the "elastic" superuser password**
+Enter host password for user 'elastic':
+{
+  "name" : "<NODE_NAME>",
+  "cluster_name" : "<CLUSTER_NAME>",
+  "cluster_uuid" : "exucKwVpRJubHGq5Jwu1_Q",
+  "version" : {
+    "number" : "8.12.2",
+    "build_flavor" : "default",
+    "build_type" : "deb",
+    "build_hash" : "a94744f97522b2b7ee8b5dc13be7ee11082b8d6b",
+    "build_date" : "2024-03-13T20:16:27.027355296Z",
+    "build_snapshot" : false,
+    "lucene_version" : "9.9.2",
+    "minimum_wire_compatibility_version" : "7.17.0",
+    "minimum_index_compatibility_version" : "7.0.0"
+  },
+  "tagline" : "You Know, for Search"
+}
+```
+
+## Kibana
+
+Elasticsearch 8.x brings a few efficiencies with regard to connecting additional nodes and stack services such as Kibana, by automatically using certificates. This is great of course; however, the Hyperion Indexer and API don't currently have a mechanism to utilise a certificate to connect to the Elasticsearch REST API, and this function appears to be bound to SSL.
+
+_Your results may vary, but it has been my experience that the surest way to ensure the Hyperion software is able to connect to Elasticsearch 8.x without a certificate is to disable encryption for HTTP API client connections. As all this inter-node communication happens privately in your DC, it shouldn't be an issue._
+
+**UPDATE:** Utilising a fresh install of the latest Elasticsearch, I was able to connect Hyperion with encrypted SSL (HTTPS), leaving the normal auto-magic certificate config working fine with Kibana. I will leave both options in this guide for completeness.
+
+Configure Kibana as below:
+
+```bash
+> sudo nano /etc/kibana/kibana.yml
+
+server.host: "0.0.0.0"
+```
+
+`systemctl` configuration below:
+
+```bash
+#Reload Units
+> sudo systemctl daemon-reload
+
+#Start Kibana
+> sudo systemctl start kibana
+
+#Start Kibana automatically on boot
+> sudo systemctl enable kibana.service
+```
+
+Check that Kibana is running:
+
+```bash
+> sudo systemctl status kibana.service
+```
+
+Below I will demonstrate both ways to connect Kibana to Elasticsearch 8.x: using the auto-magic certificate method, and then using a non-encrypted local password method.
+
+When xpack encryption for HTTP is disabled, it becomes necessary to set a password for Kibana on the Elasticsearch server, as it won't be using a cert.
+
+**Certificate Method:**
+
+Generate and copy an enrolment token on the **Elasticsearch Server** to be used for Kibana:
+
+```bash
+> sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
+```
+
+Connect to the Kibana Web User Interface using a browser and paste the access token:
+
+```
+http://<SERVER_IP>:5601/
+```
+
+![image](/images/hyperion_kibana.png)
+
+Enter Kibana Enrollment Token
+
+Obtain the Kibana verification code from the Kibana server command line and enter it in the Kibana GUI:
+
+```bash
+> sudo /usr/share/kibana/bin/kibana-verification-code
+Your verification code is: XXX XXX
+```
+
+Kibana is now connected to Elasticsearch; you are able to log in with the username "elastic" and the elastic "superuser" password.
+
+**Password Method:**
+
+The kibana_system account will need to be enabled with a password in Elasticsearch; copy the password output:
+
+```bash
+> sudo su -
+
+root@> /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
+```
+
+Then add the credentials and host details to the Kibana config and, if necessary, comment (#) out the automatically generated SSL config at the bottom of the .yml file:
+
+```bash
+> sudo nano /etc/kibana/kibana.yml
+
+elasticsearch.hosts: ["http://<SERVER_IP>:9200"]
+
+elasticsearch.username: "kibana_system"
+elasticsearch.password: "<KIBANA_SYSTEM_PASSWORD>"
+```
+
+Disable xpack security for HTTP API clients in Elasticsearch:
+
+```bash
+#Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
+
+xpack.security.http.ssl:
+  enabled: false
+```
+
+Finally, restart Elasticsearch and Kibana:
+
+```bash
+> sudo systemctl restart elasticsearch.service
+
+> sudo systemctl restart kibana.service
+```
+
+Debug Kibana issues in the system logs:
+
+```bash
+> tail -f /var/log/syslog
+```
+
+**_Ideally it would be best if certificates could be used for all REST API access; I will update this guide when I'm aware of a suitable solution._**
+
+## EOS RIO Hyperion Indexer and API
+
+There are two .json files necessary to run the Hyperion Indexer and API: `connections.json` and `<chain>.config.json`.
+
+**connections.json**
+
+The example `connections.json` below is configured for an Antelope Testnet; amend the config and network for your own deployment. This config uses a user and password to connect to Elasticsearch over HTTP.
+
+```bash
+> cd ~/hyperion-history-api
+
+> cp example-connections.json connections.json
+
+> sudo nano connections.json
+
+{
+  "amqp": {
+    "host": "127.0.0.1:5672",
+    "api": "127.0.0.1:15672",
+    "protocol": "http",
+    "user": "<USER>",
+    "pass": "<PASSWORD>",
+    "vhost": "hyperion",
+    "frameMax": "0x10000"
+  },
+  "elasticsearch": {
+    "protocol": "http",
+    "host": "<SERVER_IP>:9200",
+    "ingest_nodes": [
+      "<SERVER_IP>:9200"
+    ],
+    "user": "elastic",
+    "pass": "<ELASTIC_PASSWORD>"
+  },
+  "redis": {
+    "host": "127.0.0.1",
+    "port": "6379"
+  },
+  "chains": {
+    "eos": {
+      "name": "EOS Testnet",
+      "chain_id": "g17b1833c747c43682f4386fca9cbb327929334a762755ebec17f6f23c9b8a14",
+      "http": "http://<SHIP_NODE_IP>:8888",
+      "ship": "ws://<SHIP_NODE_IP>:8080",
+      "WS_ROUTER_HOST": "127.0.0.1",
+      "WS_ROUTER_PORT": 7001,
+      "control_port": 7002
+    }
+  }
+}
+```
+
+**`<chain>.config.json`**
+
+The `<chain>.config.json` file is named to reflect the chain's name, in this case `eos.config.json`. The configuration, as before, needs to be adjusted to suit your own config and deployment, using the provided example as a base.
+
+There are three phases when starting a new Hyperion Indexer; the first phase is what is called the "ABI Scan", which is the default mode in the software-provided `example.config.json`.
+
+The below config (an EOSphere server) will prepare this example to be ready to run the ABI Scan phase of indexing, which will be covered in the next guide.
+
+Configure as below, taking note of the **#UPDATE#** parameters:
+
+```bash
+> cd ~/hyperion-history-api/chains
+
+> cp example.config.json eos.config.json
+
+> nano eos.config.json
+
+{
+  "api": {
+    "enabled": true,
+    "pm2_scaling": 1,
+    "node_max_old_space_size": 1024,
+    "chain_name": "EOS Testnet", #UPDATE#
+    "server_addr": "", #UPDATE#
+    "server_port": 7000,
+    "stream_port": 1234,
+    "stream_scroll_limit": -1,
+    "stream_scroll_batch": 500,
+    "server_name": "", #UPDATE#
+    "provider_name": "", #UPDATE#
+    "provider_url": "", #UPDATE#
+    "chain_api": "",
+    "push_api": "",
+    "chain_logo_url": "", #UPDATE#
+    "enable_caching": false, #DISABLED FOR BULK INDEXING#
+    "cache_life": 1,
+    "limits": {
+      "get_actions": 1000,
+      "get_voters": 100,
+      "get_links": 1000,
+      "get_deltas": 1000,
+      "get_trx_actions": 200
+    },
+    "access_log": false,
+    "chain_api_error_log": false,
+    "custom_core_token": "",
+    "enable_export_action": false,
+    "disable_rate_limit": false,
+    "rate_limit_rpm": 1000,
+    "rate_limit_allow": [],
+    "disable_tx_cache": true, #DISABLED FOR BULK INDEXING#
+    "tx_cache_expiration_sec": 3600,
+    "v1_chain_cache": [
+      {
+        "path": "get_block",
+        "ttl": 3000
+      },
+      {
+        "path": "get_info",
+        "ttl": 500
+      }
+    ]
+  },
+  "indexer": {
+    "enabled": true,
+    "node_max_old_space_size": 4096,
+    "start_on": 0,
+    "stop_on": 0,
+    "rewrite": false,
+    "purge_queues": false,
+    "live_reader": false, #DISABLED FOR BULK INDEXING#
+    "live_only_mode": false,
+    "abi_scan_mode": true, #SET TO ABI_SCAN_MODE#
+    "fetch_block": true,
+    "fetch_traces": true,
+    "disable_reading": false,
+    "disable_indexing": false,
+    "process_deltas": true,
+    "disable_delta_rm": true
+  },
+  "settings": {
+    "preview": false,
+    "chain": "eos", #SET CHAINS ID#
+    "eosio_alias": "eosio",
+    "parser": "3.2", #SET TO 1.8 for < 3.1 SHIP#
+    "auto_stop": 0,
+    "index_version": "v1",
+    "debug": false,
+    "bp_logs": false,
+    "bp_monitoring": false,
+    "ipc_debug_rate": 60000,
+    "allow_custom_abi": false,
+    "rate_monitoring": true,
+    "max_ws_payload_mb": 256,
+    "ds_profiling": false,
+    "auto_mode_switch": false,
+    "hot_warm_policy": false,
+    "custom_policy": "",
+    "index_partition_size": 10000000,
+    "es_replicas": 0
+  },
"blacklists": { + "actions": [], + "deltas": [] + }, + "whitelists": { + "actions": [], + "deltas": [], + "max_depth": 10, + "root_only": false + }, + "scaling": { + "readers": 2, #INCREASE READERS# + "ds_queues": 1, + "ds_threads": 1, + "ds_pool_size": 1, + "indexing_queues": 1, + "ad_idx_queues": 1, + "dyn_idx_queues": 1, + "max_autoscale": 4, + "batch_size": 5000, + "resume_trigger": 5000, + "auto_scale_trigger": 20000, + "block_queue_limit": 10000, + "max_queue_limit": 100000, + "routing_mode": "round_robin", + "polling_interval": 10000 + }, + "features": { + "streaming": { + "enable": false, + "traces": false, + "deltas": false + }, + "tables": { + "proposals": true, + "accounts": true, + "voters": true + }, + "index_deltas": true, + "index_transfer_memo": true + "index_all_deltas": true, + "deferred_trx": false, + "failed_trx": false, + "resource_limits": false, + "resource_usage": false + }, + "prefetch": { + "read": 50, + "block": 100, + "index": 500 + }, + "plugins": {} +} +``` + +All configuration is now ready to move onto running the Hyperion Indexer and API for the first time, this will be covered in the next guide. diff --git a/native/07_node-operation/30_history/04_running-hyperion.md b/native/07_node-operation/30_history/04_running-hyperion.md new file mode 100644 index 00000000..c40e23ac --- /dev/null +++ b/native/07_node-operation/30_history/04_running-hyperion.md @@ -0,0 +1,366 @@ +--- +title: Running Hyperion +contributors: + - { name: Ross Dold (EOSphere), github: eosphere } +--- + +Following on from our Configure Hyperion Software Components guide, this next guide in the series will walk through actually running a Hyperion Full History Service. + + +# Running Hyperion Full History + +There are three **Indexing** phases and an **API** enable phase when preparing your Hyperion Full History service for production. + +**ABI Scan Phase:** + +In this **abi_scan_mode** phase Hyperion indexes all contract Application Binary Interfaces (ABI)’s. This happens across the entire blockchain so that the indexer is aware of what ABI to use for deserialisation at any one time in the life of the onchain contract. + +**Indexing Phase:** + +This phase actually indexes data into Elasticsearch, however to ensure complete data is systematically ingested **live reader is disabled** and blocks are configured to be **ingested in batches**. + +**Indexing Catchup Phase:** + +This phase transitions the Indexer to an operational state, **live reader is enabled,** block ingestion is **configured to infinity** and **caching is enabled**. + +**API Enable Phase** + +This phase enables Hyperion **API queries**. + +In the previous how to, the example configuration left the guide ready to start the ABI Scan Phase, which is where this guide picks up. + +I recommend running the Hyperion PM2 commands using `screen` in two windows for PM2 logs and Commands, this gives good visibility of the phases. 
+
+```bash
+#Create a new screen session
+> screen -US Hyperion
+
+#Display pm2 logging
+> pm2 logs
+
+#Create a new screen
+ctrl-a+c
+
+#Check pm2 status
+> pm2 status
+
+#Go forward a screen
+ctrl-a+n
+
+#Go back a screen
+ctrl-a+p
+
+#Disconnect screen session
+ctrl-a+d
+
+#Reconnect screen session
+> screen -r Hyperion
+```
+
+## Running the ABI Scan Phase
+
+Below is the initial configuration used for `eos.config.json`:
+
+```bash
+> cd ~/hyperion-history-api/chains
+
+> cp example.config.json eos.config.json
+
+> nano eos.config.json
+
+{
+  "api": {
+    "enabled": true,
+    "pm2_scaling": 1,
+    "node_max_old_space_size": 1024,
+    "chain_name": "EOS Testnet",
+    "server_addr": "",
+    "server_port": 7000,
+    "stream_port": 1234,
+    "stream_scroll_limit": -1,
+    "stream_scroll_batch": 500,
+    "server_name": "",
+    "provider_name": "",
+    "provider_url": "",
+    "chain_api": "",
+    "push_api": "",
+    "chain_logo_url": "",
+    "enable_caching": false, #DISABLED FOR BULK INDEXING#
+    "cache_life": 1,
+    "limits": {
+      "get_actions": 1000,
+      "get_voters": 100,
+      "get_links": 1000,
+      "get_deltas": 1000,
+      "get_trx_actions": 200
+    },
+    "access_log": false,
+    "chain_api_error_log": false,
+    "custom_core_token": "",
+    "enable_export_action": false,
+    "disable_rate_limit": false,
+    "rate_limit_rpm": 1000,
+    "rate_limit_allow": [],
+    "disable_tx_cache": true, #DISABLED FOR BULK INDEXING#
+    "tx_cache_expiration_sec": 3600,
+    "v1_chain_cache": [
+      {
+        "path": "get_block",
+        "ttl": 3000
+      },
+      {
+        "path": "get_info",
+        "ttl": 500
+      }
+    ]
+  },
+  "indexer": {
+    "enabled": true,
+    "node_max_old_space_size": 4096,
+    "start_on": 0,
+    "stop_on": 0,
+    "rewrite": false,
+    "purge_queues": false,
+    "live_reader": false, #DISABLED FOR BULK INDEXING#
+    "live_only_mode": false,
+    "abi_scan_mode": true, #SET TO ABI_SCAN_MODE#
+    "fetch_block": true,
+    "fetch_traces": true,
+    "disable_reading": false,
+    "disable_indexing": false,
+    "process_deltas": true,
+    "disable_delta_rm": true
+  },
+  "settings": {
+    "preview": false,
+    "chain": "eos", #SET CHAINS ID#
+    "eosio_alias": "eosio",
+    "parser": "3.2", #SET TO 1.8 for < 3.1 SHIP#
+    "auto_stop": 0,
+    "index_version": "v1",
+    "debug": false,
+    "bp_logs": false,
+    "bp_monitoring": false,
+    "ipc_debug_rate": 60000,
+    "allow_custom_abi": false,
+    "rate_monitoring": true,
+    "max_ws_payload_mb": 256,
+    "ds_profiling": false,
+    "auto_mode_switch": false,
+    "hot_warm_policy": false,
+    "custom_policy": "",
+    "index_partition_size": 10000000,
+    "es_replicas": 0
+  },
+  "blacklists": {
+    "actions": [],
+    "deltas": []
+  },
+  "whitelists": {
+    "actions": [],
+    "deltas": [],
+    "max_depth": 10,
+    "root_only": false
+  },
+  "scaling": {
+    "readers": 2, #INCREASE READERS#
+    "ds_queues": 1,
+    "ds_threads": 1,
+    "ds_pool_size": 1,
+    "indexing_queues": 1,
+    "ad_idx_queues": 1,
+    "dyn_idx_queues": 1,
+    "max_autoscale": 4,
+    "batch_size": 5000,
+    "resume_trigger": 5000,
+    "auto_scale_trigger": 20000,
+    "block_queue_limit": 10000,
+    "max_queue_limit": 100000,
+    "routing_mode": "round_robin",
+    "polling_interval": 10000
+  },
+  "features": {
+    "streaming": {
+      "enable": false,
+      "traces": false,
+      "deltas": false
+    },
+    "tables": {
+      "proposals": true,
+      "accounts": true,
+      "voters": true
+    },
+    "index_deltas": true,
+    "index_transfer_memo": true,
+    "index_all_deltas": true,
+    "deferred_trx": false,
+    "failed_trx": false,
+    "resource_limits": false,
+    "resource_usage": false
+  },
+  "prefetch": {
+    "read": 50,
+    "block": 100,
+    "index": 500
+  },
+  "plugins": {}
+}
+```
+It is highly recommended that the SHIP node is
+connected via the local LAN.
+
+Initiate the ABI Scan as below:
+
+```bash
+> cd ~/hyperion-history-api
+
+> pm2 start --only eos-indexer --update-env
+```
+
+Check that the ABI Scan has started and that all software components are reachable and working as intended by observing the pm2 logs screen. Remediate any connectivity or configuration issues.
+
+Below is the legend for the Indexer logs output:
+
+```text
+W (Workers) - Number of workers
+R (Read) - Blocks read from the SHIP node and pushed to the queue
+C (Consumed) - Blocks consumed from the blocks queue
+A (Actions) - Actions being read from processed blocks
+D (Deserialized) - Actions being deserialised
+I (Indexed) - Documents being indexed in Elasticsearch
+```
+This phase may take many hours depending on your hardware and network connectivity; when finished, the indexer will stop and `ABI SCAN COMPLETE` will be displayed. You may now confidently move on to the Indexing Phase.
+
+## Running the Indexing Phase
+
+In this phase the `eos.config.json` needs to be amended to disable abi_scan_mode and to configure the block batches to be ingested. My recommendation is to start with batches of 5000000 blocks to ensure all is being ingested smoothly.
+
+Make sure the following is configured or amended in the `eos.config.json`:
+
+```bash
+"enable_caching": false, #DISABLED FOR BULK INDEXING#
+"disable_tx_cache": true, #DISABLED FOR BULK INDEXING#
+
+"start_on": 0,
+"stop_on": 5000000, #FIRST BLOCK INDEX BATCH#
+"live_reader": false, #DISABLED FOR BULK INDEXING#
+"abi_scan_mode": false, #SET FOR INDEXING PHASE#
+"scaling": { #CONSERVATIVE SETTINGS#
+  "readers": 2,
+  "ds_queues": 1,
+  "ds_threads": 2,
+  "ds_pool_size": 2,
+  "indexing_queues": 1,
+  "ad_idx_queues": 1,
+  "dyn_idx_queues": 1,
+  "max_autoscale": 4,
+```
+
+Start the Indexer as below:
+
+```bash
+> cd ~/hyperion-history-api
+
+> pm2 start --only eos-indexer --update-env
+```
+
+Observe the pm2 logs to ensure documents are being indexed. Queues can be monitored in the RabbitMQ Web User Interface:
+
+```
+http://<SERVER_IP>:15672
+```
+
+![image](/images/hyperion_rabbitmq.png)
+
+
+When the first 5000000 blocks are successfully indexed, the indexer will stop and a message will be displayed in the pm2 logs advising `BLOCK RANGE COMPLETED`.
+
+The indexer block range can now be adjusted in the `eos.config.json` for the next batch, and then the indexer can be started as before. Depending on how your deployment has managed so far, you may want to increase or decrease this range.
+
+```bash
+"start_on": 5000001, #CONTINUE FROM FIRST BATCH#
+"stop_on": 10000000, #SECOND BLOCK INDEX BATCH#
+```
+
+Continue this process until you are almost at the current chain headblock.
+
+Bulk indexing can be very heavy on hardware resources and can take days. You will notice that quite conservative settings have been used for the index scaling in this example. My advice is that less can often be more; start with these example settings and adjust incrementally if required by observing the pm2 logs and the RabbitMQ Web UI.
+
+## Running the Indexing Catchup Phase
+
+When indexing has been completed to as close to the current chain headblock as possible, you can transition to a normal mode of index operation. This phase will enable the live reader, normal block ingestion and caching.
+
+Make sure the following is configured or amended in the `eos.config.json`:
+
+```bash
+"enable_caching": true,
+"disable_tx_cache": false,
+
+"start_on": 0,
+"stop_on": 0,
+"live_reader": true,
+```
+
+Start the Indexer as below:
+
+```bash
+> cd ~/hyperion-history-api
+
+> pm2 start --only eos-indexer --update-env
+```
+
+If your Hyperion indexer was near the headblock, this phase shouldn't take long; observe the pm2 logs to check when you have successfully caught up and then move on to starting the API.
+
+If for any reason you need to stop the indexer, use the `pm2 trigger` option to ensure the current queues are completed before stopping:
+
+```bash
+> pm2 trigger eos-indexer stop
+```
+
+## **The API Enable Phase**
+
+This final phase is running the Hyperion API, which has already been configured in this example's previous configuration files:
+
+```bash
+"api": {
+  "enabled": true,
+  "pm2_scaling": 1,
+  "node_max_old_space_size": 1024,
+  "chain_name": "EOS Testnet",
+  "server_addr": "",
+  "server_port": 7000,
+  "stream_port": 1234,
+  "stream_scroll_limit": -1,
+  "stream_scroll_batch": 500,
+  "server_name": "",
+  "provider_name": "",
+  "provider_url": "",
+  "chain_api": "",
+  "push_api": "",
+  "chain_logo_url": "",
+  "enable_caching": true,
+  "cache_life": 1,
+  "limits": {
+    "get_actions": 1000,
+    "get_voters": 100,
+    "get_links": 1000,
+    "get_deltas": 1000,
+    "get_trx_actions": 200
+```
+
+Start the Hyperion API as below, allowing queries on port 7000:
+
+```bash
+> pm2 start --only eos-api --update-env
+```
+
+Observe the pm2 logs to check for successful API startup. The API can then be queried for its health info, leaving you with a sense of satisfaction that all components are operating as expected.
+
+In particular, make sure `last_indexed_block` is equal to `total_indexed_blocks`, showing that all blocks have been indexed up to the current headblock.
+
+```bash
+> curl http://<SERVER_IP>:7000/v2/health
+
+{"version":"3.3.9-8","version_hash":"b94f99d552a8fe85a3ab2c1cb5b84ccd6ded6af4","host":"eos-testnet.eosphere.io","health":[{"service":"RabbitMq","status":"OK","time":1695700845755},{"service":"NodeosRPC","status":"OK","service_data":{"head_block_num":268459315,"head_block_time":"2023-09-26T04:00:45.500","time_offset":210,"last_irreversible_block":268458983,"chain_id":"1064487b3cd1a897ce03ae5b6a865651747e2e152090f99c1d19d44e01aea5a4"},"time":1695700845710},{"service":"Elasticsearch","status":"OK","service_data":{"active_shards":"100.0%","head_offset":2,"first_indexed_block":2,"last_indexed_block":268459313,"total_indexed_blocks":268459311,"missing_blocks":0,"missing_pct":"0.00%"},"time":1695700845712}],"features":{"streaming":{"enable":true,"traces":true,"deltas":true},"tables":{"proposals":true,"accounts":true,"voters":true},"index_deltas":true,"index_transfer_memo":true,"index_all_deltas":true,"deferred_trx":false,"failed_trx":false,"resource_limits":false,"resource_usage":false},"cached":true,"query_time_ms":0.158,"last_indexed_block":268459318,"last_indexed_block_time":"2023-09-26T04:00:47.000"}
+```
+
+Congratulations, you have now successfully built, configured and are running a **Hyperion Full History Service**, ready to be made publicly available from behind an SSL-offloading Load Balancer such as HAProxy.
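+
+As a final pointer, below is a minimal HAProxy sketch of that SSL offload step; the certificate path, bind address and backend address are assumptions to substitute with your own, and a production deployment would add a `defaults` section with timeouts, health checks and usage policies.
+
+```bash
+#Minimal HAProxy example: terminate SSL and forward plain HTTP to the Hyperion API
+frontend hyperion_https
+    bind *:443 ssl crt /etc/haproxy/certs/hyperion.example.com.pem
+    mode http
+    default_backend hyperion_api
+
+backend hyperion_api
+    mode http
+    #The Hyperion API from this guide listens on port 7000
+    server hyperion1 127.0.0.1:7000 check
+```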
diff --git a/native/07_node-operation/30_history/index.md b/native/07_node-operation/30_history/index.md
new file mode 100644
index 00000000..4037f0ec
--- /dev/null
+++ b/native/07_node-operation/30_history/index.md
@@ -0,0 +1,8 @@
+---
+title: Run a History Node
+---
+
+- [Intro to Hyperion](./01_intro-to-hyperion-full-history.md) - Learn about Hyperion, a full history solution for Antelope chains.
+- [Building Hyperion](./02_build-hyperion-software-components.md) - Learn how to build Hyperion.
+- [Configure Hyperion](./03_configure-hyperion-software-components.md) - Learn how to configure Hyperion.
+- [Run Hyperion](./04_running-hyperion.md) - Learn how to run Hyperion.