Staging to production #236

Merged
merged 9 commits into from
Mar 26, 2024
Binary file added images/hyperion.png
Binary file added images/hyperion_kibana.png
Binary file added images/hyperion_rabbitmq.png
8 changes: 8 additions & 0 deletions native/01_quick-start/04_endpoints.md
@@ -0,0 +1,8 @@
---
title: Endpoints
---

<head><title>EOS Native Endpoints</title></head>

To find a list of all the available endpoints for the EOS Network,
please visit [Antelope Tools](https://eos.antelope.tools/endpoints).
@@ -0,0 +1,120 @@
---
title: Introduction to Hyperion
contributors:
- { name: Ross Dold (EOSphere), github: eosphere }
---

Hyperion is a full history solution for indexing, storing and retrieving Antelope blockchain’s historical data.
It was built by EOS RIO to be an enterprise-grade, performant and highly scalable Antelope history solution. Their [documentation](https://hyperion.docs.eosrio.io/) is excellent and certainly a worthwhile starting point; this Technical How To series will cover some of the same content and add operational nuances from a practical standpoint, drawing on EOSphere's experience.

[Learn more about EOS RIO Hyperion](https://eosrio.io/hyperion/)

![image](/images/hyperion.png)

## Components

The Hyperion Full History service is a collection of purpose-built EOS RIO software and industry-standard applications. The eight primary building blocks are the following:

#### EOS RIO Hyperion Indexer and API

The **Indexer** processes data sourced from an Antelope Leap software State-History (SHIP) node and enables it to be indexed in Elasticsearch. The Hyperion Indexer also makes use of [abieos](https://github.com/AntelopeIO/abieos), the Antelope binary-to-JSON conversion functionality driven by ABIs. Deserialisation performance is greatly improved by using the abieos C++ code through EOS RIO's own NPM package [**node-abieos**](https://github.com/eosrio/node-abieos), which provides a Node.js native binding.

The **API** is the front end for client queries; it responds to V2 or legacy V1 requests and finds data for these responses by directly querying the Elasticsearch cluster.
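
To illustrate, a running Hyperion API can be queried directly over HTTP; the hostname below is a placeholder and `eosio` is just an example account:

```bash
#Example host only - substitute your own Hyperion API endpoint#
> curl -s "https://your-hyperion-host/v2/health"

#Fetch the most recent action for an account via the V2 history API#
> curl -s "https://your-hyperion-host/v2/history/get_actions?account=eosio&limit=1"
```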

#### Antelope Leap Software State-History (SHIP) Node

The State-History plugin is used by nodeos to capture historical data about the blockchain state and store this data in an externally readable flat file format. This readable file is accessed by the Hyperion Indexer.
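
As a rough sketch of what enabling SHIP looks like on the nodeos side (the path and endpoint below are placeholders, not part of this guide's configuration; consult the Leap documentation for your version):

```bash
#Sketch only: example State-History settings appended to a nodeos config.ini#
> cat <<'EOF' >> ~/nodeos/config.ini
plugin = eosio::state_history_plugin
state-history-dir = "state-history"
trace-history = true
chain-state-history = true
state-history-endpoint = 127.0.0.1:8080
EOF
```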

#### RabbitMQ

[RabbitMQ](https://www.rabbitmq.com/) is an open source message broker that is used by Hyperion to queue messages and transport data during the multiple stages of indexing to Elasticsearch.

#### Redis

[Redis](https://redis.io/) is an in-memory data structure store and is used by Hyperion as a predictive temporary database cache for HTTP API client queries and as an Indexer transaction cache.

#### Node.js

The Hyperion Indexer and API are [Node.js](https://nodejs.org/en/) applications and therefore use Node.js as their open-source back-end JavaScript runtime environment.

#### PM2

[PM2](https://pm2.keymetrics.io/) is a process manager for Node.js and is used to launch and run the Hyperion Indexer and API.

#### Elasticsearch Cluster

[Elasticsearch](https://www.elastic.co/) is a search engine based on the Lucene library; Hyperion uses it to store and retrieve all indexed data in a highly performant, schema-free JSON document format.

#### Kibana

[Kibana](https://www.elastic.co/kibana/) is a component of the Elastic Stack, a dashboard that enables data visualisation and simplifies operation of and insight into an Elasticsearch cluster. All Hyperion indexed data resides in the Elasticsearch database; Kibana gives a direct view of this data and of the health of the Elasticsearch cluster.

## Hyperion Topology

The topology of your Hyperion deployment depends on your history service requirements and the network you intend to index, whether it's Public/Private, Mainnet/Testnet or Full/Partial History.

This guide will discuss EOS Mainnet with Full History. Testnets and Private networks generally have far lower topology and resource requirements.

**EOS Mainnet**

EOSphere originally started with a single server running all Hyperion Software Components except for the EOS State-History Node. However, a challenge was discovered in relation to Elasticsearch JVM heap size as EOS network utilisation grew and our API became heavily used.

JVM heap size is the amount of memory allocated to the Java Virtual Machine of an Elasticsearch node; the more heap available, the more cache memory is available for indexing and search operations. If it's too low, Hyperion indexing will be slow and search queries will be very latent. If the JVM heap size on an Elasticsearch node exceeds the threshold for compressed ordinary object pointers (OOPs), which is around 32GB and usually somewhat lower, the JVM will stop using compressed pointers. This is exceptionally inefficient with regard to memory management and the node will consume vastly more memory.

The result of the above is the need to create a cluster of more than one Elasticsearch node, as the limit applies per Elasticsearch node instance. Two nodes with a JVM heap of 25GB each provide 50GB of cluster-wide heap.
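
For illustration, on a Debian/Ubuntu package install of Elasticsearch a 25GB per-node heap could be pinned with a `jvm.options.d` override; this is a sketch only, and configuration is covered in a later guide:

```bash
#Sketch only: fix each Elasticsearch node's JVM heap at 25GB#
> sudo tee /etc/elasticsearch/jvm.options.d/heap.options <<EOF
-Xms25g
-Xmx25g
EOF

> sudo systemctl restart elasticsearch
```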

Other benefits of clustering more than one Elasticsearch node are of course more CPU cores for processing and more DISK for the ever-expanding Full History storage requirements. Elasticsearch stores indexed data in documents; these documents are allocated to shards, and shards are automatically balanced between nodes in a cluster. Besides distributing DISK utilisation across nodes, each shard is its own Lucene index and as such distributes CPU bandwidth utilisation across the cluster as well.

I recommend [Elasticsearch: The Definitive Guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html), an excellent book to help you understand Elasticsearch concepts.

Taking the above into account, our current recommended topology for the EOS Mainnet is to logically or physically run the following nodes:

* **Load Balancer**
  * SSL Offload
  * Usage Policies
* **Hyperion Server 1**
  * Hyperion API
  * Hyperion Indexer
  * RabbitMQ
  * Redis
  * Node.js
  * PM2
  * Kibana
* **Hyperion Server 2**
  * Elasticsearch I (25GB JVM Heap)
* **Hyperion Server 3**
  * Elasticsearch II (25GB JVM Heap)
* **Hyperion Server 4**
  * Elasticsearch III (25GB JVM Heap)
* **State-History**
  * Network sync’d nodeos with state_history plugin enabled

## Hyperion Hardware

Similar to Hyperion topology, hardware choice will vary with your history service requirements.

The recommendations below are for EOS Mainnet with Full History, in relation to what currently works in the EOSphere Public Service Offerings.

**EOS Mainnet**

* **Load Balancer**
  * Dealer's choice, however HAProxy is a great option
  * High Speed Internet 100Mb/s+
* **Hyperion Server 1**
  * Modern CPU, 3GHz+, 8 Cores+
  * 64GB RAM
  * 128GB DISK _(Enterprise Grade SSD/NVMe)_
  * 1Gb/s+ LAN
* **Hyperion Server 2–4**
  * Modern CPU, 3GHz+, 8 Cores+
  * 64GB RAM
  * Enterprise Grade SSD/NVMe
    _The current (February 2024) Elasticsearch Database is 24TB; I suggest provisioning 35–40TB across the cluster for Full History service longevity_
  * 1Gb/s+ LAN
* **State-History**
  * Modern CPU, 4GHz+, 4 Cores
  * 128GB RAM
  * 256GB DISK 1 _(Enterprise Grade SSD/NVMe)_
  * 16TB DISK 2 _(SAS or SATA are OK)_

With that introduction you should now have an informed starting point for your Hyperion services journey.
@@ -0,0 +1,192 @@
---
title: Build Hyperion Components
contributors:
- { name: Ross Dold (EOSphere), github: eosphere }
---

The Hyperion Full History service is a collection of **eight** purpose-built EOS RIO software and industry-standard applications.

This walkthrough will install all components excluding the SHIP node on a single Ubuntu 22.04 server; please reference [Introduction to Hyperion Full History](./01_intro-to-hyperion-full-history.md) for infrastructure suggestions.

The process for building each of these primary building blocks is covered below:

## State-History (SHIP) Node

The Hyperion deployment requires access to a fully sync’d State-History Node; the current recommended SHIP version is Leap `v5.0.2`.

## RabbitMQ

To install the latest RabbitMQ, currently `3.13.0`, be sure to check their latest [Cloudsmith Quick Start Script](https://www.rabbitmq.com/install-debian.html); in our experience this is the simplest way to ensure you are current and correctly built.

The summary process is below:

```bash
> sudo apt update

> sudo apt-get install curl gnupg apt-transport-https -y

#Team RabbitMQ's main signing key#
> curl -1sLf "https://keys.openpgp.org/vks/v1/by-fingerprint/0A9AF2115F4687BD29803A206B73A36E6026DFCA" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/com.rabbitmq.team.gpg > /dev/null

#Cloudsmith: modern Erlang repository#
> curl -1sLf https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-erlang.E495BB49CC4BBE5B.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/rabbitmq.E495BB49CC4BBE5B.gpg > /dev/null

#Cloudsmith: RabbitMQ repository#
> curl -1sLf https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/rabbitmq.9F4587F226208342.gpg > /dev/null

#--------------------------------------------------------------------#
#Add apt repositories maintained by Team RabbitMQ#
> sudo tee /etc/apt/sources.list.d/rabbitmq.list <<EOF

## Provides modern Erlang/OTP releases ##
deb [signed-by=/usr/share/keyrings/rabbitmq.E495BB49CC4BBE5B.gpg] https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy main
deb-src [signed-by=/usr/share/keyrings/rabbitmq.E495BB49CC4BBE5B.gpg] https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy main

## Provides RabbitMQ ##
deb [signed-by=/usr/share/keyrings/rabbitmq.9F4587F226208342.gpg] https://ppa1.novemberain.com/rabbitmq/rabbitmq-server/deb/ubuntu jammy main
deb-src [signed-by=/usr/share/keyrings/rabbitmq.9F4587F226208342.gpg] https://ppa1.novemberain.com/rabbitmq/rabbitmq-server/deb/ubuntu jammy main

EOF
#--------------------------------------------------------------------#
> sudo apt-get update -y

#Install Erlang packages#
> sudo apt-get install -y erlang-base \
erlang-asn1 erlang-crypto erlang-eldap erlang-ftp erlang-inets \
erlang-mnesia erlang-os-mon erlang-parsetools erlang-public-key \
erlang-runtime-tools erlang-snmp erlang-ssl \
erlang-syntax-tools erlang-tftp erlang-tools erlang-xmerl

#Install rabbitmq-server and its dependencies#
> sudo apt-get install rabbitmq-server -y --fix-missing

#Check Version#
> sudo rabbitmqctl version
```
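
Hyperion will later need a RabbitMQ vhost and user; as a preview of that configuration (the user, vhost and password below are placeholders), they can be created with `rabbitmqctl` and the management UI enabled as follows:

```bash
#Enable the RabbitMQ management plugin (web UI on port 15672)#
> sudo rabbitmq-plugins enable rabbitmq_management

#Create a vhost and user for Hyperion - names and password are placeholders#
> sudo rabbitmqctl add_vhost hyperion
> sudo rabbitmqctl add_user hyperion_user changeme
> sudo rabbitmqctl set_user_tags hyperion_user administrator
> sudo rabbitmqctl set_permissions -p hyperion hyperion_user ".*" ".*" ".*"
```
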
## Redis

Our current Hyperion deployments are running on the latest Redis stable version `v7.2.4`, which is built as below:

```bash
> sudo apt install lsb-release curl gpg

#Redis Signing Key#
> curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg

#Latest Redis repository#
> echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list> sudo apt update

#Install Redis#
> sudo apt install redis

#Check Version#
> redis-server --version
```
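
The packaged Redis runs as a systemd service; a quick sanity check that it is up and reachable (Hyperion-specific tuning is left to the configuration guide):

```bash
#Ensure the Redis service is enabled and running#
> sudo systemctl enable --now redis-server

#A PONG response confirms Redis is reachable#
> redis-cli ping
```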

## Node.js

Hyperion requires Node.js v18; our current Hyperion deployments are running the current LTS `v18.19.1`, which is built as below:

```bash
#Download and import the Nodesource GPG key#
> sudo apt update

> sudo apt install -y ca-certificates curl gnupg

> sudo mkdir -p /etc/apt/keyrings

> curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

#Create .deb repository#
> NODE_MAJOR=18

> echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list

#Install Node.js#
> sudo apt update

> sudo apt-get install -y nodejs

#Check Version#
> node -v
```

## PM2

The latest public version is `5.3.1` and is built as below:

```bash
> sudo apt update

#Install PM2#
> sudo npm install pm2@latest -g

#Check Version#
> pm2 -v
```
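
Optionally, PM2 can be registered with systemd so that the Hyperion Indexer and API processes it manages are restored after a reboot; `pm2 startup` prints a command to run with sudo, and `pm2 save` persists the process list once those services are running:

```bash
#Generate the systemd startup hook - run the command PM2 prints with sudo#
> pm2 startup

#Persist the current PM2 process list so it is restored on boot#
> pm2 save
```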

## Elasticsearch

Currently most of our Hyperion deployments are using Elasticsearch `8.5.x` to `8.12.x` with great results; however, the current recommended Elasticsearch version is `8.12.2`, which I expect will work just as well or better. Build the latest Elasticsearch `8.x` as below:

```bash
> sudo apt update

> sudo apt install apt-transport-https

> sudo apt install gpg

#Elasticsearch signing key#
> wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

#Latest Elasticsearch 8.x repository#
> echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

#Install Elasticsearch#
> sudo apt update && sudo apt install elasticsearch

#Take note of the super-user password#
```
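
A brief sketch of bringing the Elasticsearch service up and confirming it responds; security is enabled by default in 8.x, hence HTTPS and the `-k` flag for the self-signed certificate:

```bash
#Enable and start the Elasticsearch service#
> sudo systemctl daemon-reload
> sudo systemctl enable --now elasticsearch

#If the generated elastic super-user password was missed, it can be reset#
> sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

#Confirm the node responds - you will be prompted for the elastic password#
> curl -k -u elastic https://localhost:9200
```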

## Kibana

The Kibana version used should be paired with the installed Elasticsearch version; the process below will install the current version:

```bash
> sudo apt update

> sudo apt-get install apt-transport-https

> sudo apt install gpg

#Elasticsearch signing key - Not needed if already added#
> wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

#Latest Elasticsearch 8.x repository - Not needed if already added#
> echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

#Install Kibana#
> sudo apt update && sudo apt install kibana
```
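
After installation, Kibana can be started and enrolled against the local Elasticsearch node; the enrollment token is generated on the Elasticsearch side and pasted into Kibana's first-run prompt (Kibana listens on port 5601 by default):

```bash
#Enable and start the Kibana service#
> sudo systemctl enable --now kibana

#Generate an enrollment token to pair Kibana with the Elasticsearch cluster#
> sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
```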

## **EOS RIO Hyperion Indexer and API**

Currently (March 2024) the most robust and production-ready version of Hyperion from our experience is `3.3.9-8`, and it is used across all our Hyperion Full History Services. The EOS RIO team are constantly developing and improving their code; the best way to stay on top of the current recommended version is to join the [Hyperion Telegram Group](https://t.me/EOSHyperion).

Build Hyperion from `main` as below:

```bash
> git clone https://github.com/eosrio/hyperion-history-api.git

> cd hyperion-history-api

> git checkout main

> npm install

> npm audit fix
```

After all Hyperion software components are built and provisioned, you can proceed to configuration.

The next guide will walk through the technical configuration of each component.