From 059ffbc7e645948226d7a254eff603973b2293d5 Mon Sep 17 00:00:00 2001
From: Morgan Mccauley
Date: Wed, 9 Aug 2023 17:17:36 +1200
Subject: [PATCH] chore: Add starting notes to `README`

---
 README.md | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 56 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index ef04df972..c41c066fe 100644
--- a/README.md
+++ b/README.md
@@ -5,12 +5,12 @@ With QueryApi you can
 * Specify the schema for your own custom hosted database and write to it with your indexer function;
 * Retrieve that data through a GraphQL API.
 
-# Table of Contents / Applications
+## Table of Contents / Applications
 1. [QueryApi Coordinator](./indexer)
-An Indexer that tracks changes to the QueryApi registry contract. It triggers the execution of those IndexerFunctions
-when they match new blocks by placing messages on an SQS queue. Spawns historical processing threads when needed.
- 1.a. Subfolders provide crates for the different components of the Indexer: indexer_rule_type (shared with registry contract),
-indexer_rules_engine, storage.
+   An Indexer that tracks changes to the QueryApi registry contract. It triggers the execution of those IndexerFunctions
+   when they match new blocks by placing messages on an SQS queue. Spawns historical processing threads when needed.
+   1.a. Subfolders provide crates for the different components of the Indexer: indexer_rule_type (shared with registry contract),
+   indexer_rules_engine, storage.
 2. [Indexer Runner](.indexer-js-queue-handler)
 Retrieves messages from the SQS queue, fetches the matching block and executes the IndexerFunction.
 3. [IndexerFunction Editor UI](./frontend)
@@ -21,3 +21,54 @@ indexer_rules_engine, storage.
 Stores IndexerFunctions, their schemas and execution parameters like start block height.
 6. [Lake Block server](./block-server)
 Serves blocks from the S3 lake for in browser testing of IndexerFunctions.
+
+## Getting Started
+
+### Prerequisites
+- [Docker](https://docs.docker.com/engine/install/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
+- [Hasura CLI](https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/)
+- AWS Access Keys
+
+### Configuration
+- Pointing to mainnet and dev registry
+- Publishes to mock queue
+- throttles?
+
+### AWS Access
+A [Docker Compose file](./docker-compose.yml) has been created containing all the components of QueryApi, but it lacks some (secret) environment variables. The following fields need to be populated before running:
+
+Runner:
+- `AWS_ACCESS_KEY_ID`
+- `AWS_SECRET_ACCESS_KEY`
+
+Coordinator:
+- `LAKE_AWS_ACCESS_KEY`
+- `LAKE_AWS_SECRET_ACCESS_KEY`
+- `QUEUE_AWS_ACCESS_KEY`
+- `QUEUE_AWS_SECRET_ACCESS_KEY`
+
+The same access key pair can be used for all three sets of credentials. The important part is that the keys have access to S3, as reads from the [Requester Pays](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html) configured NEAR Lake bucket are charged to your AWS account.
+
+### Hasura Configuration
+Before starting the other components of QueryApi, Hasura must be correctly configured. To achieve this, first start it:
+```sh
+docker compose up hasura-graphql --detach
+```
+This starts the Hasura GraphQL engine, as well as its dependent components: Hasura Auth and Postgres.
+
+Then, to configure Hasura, run:
+```sh
+cd ./hasura && hasura deploy
+```
+
+This creates the required tables and adds the necessary metadata for them.
+
+### Running
+With everything configured correctly, we can now start the remaining components: Coordinator, Runner, and Redis.
+```sh
+docker compose up
+```
+
+### Initial Provisioning
+It is expected to see some errors when starting the QueryApi Runner for the first time. Before an indexer is executed, it is first provisioned, and the current registry most likely contains many accounts, each with many indexers under them. On first start-up, all indexers will attempt to provision themselves, and most will fail due to duplicate attempts to create shared resources. These indexers will eventually retry, and will skip provisioning since it has already been set up.
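
As a reference, the secret credentials described under AWS Access could also be supplied through a [Compose override file](https://docs.docker.com/compose/extends/) rather than editing `docker-compose.yml` directly. The sketch below is illustrative only: the `runner` and `coordinator` service names are assumptions and must match the service names actually defined in the Compose file.

```yaml
# docker-compose.override.yml -- illustrative sketch; the service names
# are assumptions and must match those in ./docker-compose.yml
services:
  runner:
    environment:
      AWS_ACCESS_KEY_ID: "<your access key id>"
      AWS_SECRET_ACCESS_KEY: "<your secret access key>"
  coordinator:
    environment:
      LAKE_AWS_ACCESS_KEY: "<your access key id>"
      LAKE_AWS_SECRET_ACCESS_KEY: "<your secret access key>"
      QUEUE_AWS_ACCESS_KEY: "<your access key id>"
      QUEUE_AWS_SECRET_ACCESS_KEY: "<your secret access key>"
```

Docker Compose merges `docker-compose.override.yml` into `docker-compose.yml` automatically when both are present, which keeps the secrets out of the tracked file.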