Commit: readme written for v2
okedeji committed Apr 25, 2024
1 parent 4a9167d commit 9131d28
Showing 2 changed files with 31 additions and 11 deletions.

README.md (30 additions, 10 deletions)

![Python!](https://img.shields.io/badge/Python-FFD43B?style=for-the-badge&logo=python&logoColor=blue)
![Apache License](https://img.shields.io/badge/Apache%20License-D22128?style=for-the-badge&logo=Apache&logoColor=white)

`allocmd` is a CLI tool that handles the seamless creation of Allora external resources built to integrate with the Allora chain. With this tool, you do not need to write a worker node, a reputer node, or even a validator from scratch: the CLI will help you bootstrap all the components needed to get each resource working.

The tool currently supports the following:

1. Generating worker node files
2. Generating reputer node files
3. Generating validator node files
4. Funding testnet account addresses

For each of the file-generation commands, the tool creates the needed files and their respective Dockerfiles, so you can spin them up as usual Docker containers with docker-compose.

## Install `allocmd` CLI

You will begin by installing the tool on your machine.

```shell
pip install allocmd
```

> Use `allocmd` version 1.0.0 for Allora Chain v1 and version 2.0.0 for Allora Chain v2. Run `allocmd --help` for general help, or `allocmd [command] --help` for help on a particular command.
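
For example, to pin the CLI to the release that matches your target chain (standard pip version pinning; the exact patch versions published on PyPI may differ):

```shell
# Allora Chain v2
pip install allocmd==2.0.0

# Allora Chain v1
pip install allocmd==1.0.0
```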

## Initializing resources

### Initialize the worker/reputer for development
> Note that all commands in this section apply to both worker and reputer nodes.

The next step is initializing the CLI to bootstrap all the components needed to get your worker or reputer running. The following command handles the initialization process: it creates all the files in the appropriate directories and generates the identities your node will use for local development.

```shell
allocmd generate worker --name <preferred name> --topic <topic id> --env dev
```
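
Since the same commands apply to reputer nodes (see the note above), the reputer bootstrap presumably mirrors the worker one; for example:

```shell
allocmd generate reputer --name <preferred name> --topic <topic id> --env dev
```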

Before running this command, you will have to [pick the topic ID](https://docs.allora.network/docs/existing-allora-appchain-topics) you wish to generate inference for, then run the command with that topic ID. The command auto-creates several files, the most important being `dev-docker-compose.yaml`, an already complete docker-compose file that you can run immediately to see your worker/reputer and head nodes running on your local machine. You can edit the generated files as you wish; for instance, `main.py` is meant to call your inference server, so you will need to replace the sample code with your actual URLs and logic.

When you run the docker-compose file (`docker-compose -f dev-docker-compose.yaml up --build`), ideally after you have written and tested your logic in `main.py`, you should see logs from the nodes, and you should be able to make a request to your head node and see it get a response from the worker/reputer node. Note that in production you won't be the one making the inference request; the Allora chain will do this at the cadence provided by the topic creator.
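
In practice, the local development loop looks like the sketch below; `docker-compose logs -f` simply tails every service defined in the generated compose file:

```shell
# build and start the worker/reputer and head nodes defined in dev-docker-compose.yaml
docker-compose -f dev-docker-compose.yaml up --build

# in another terminal, follow the logs of all running services
docker-compose -f dev-docker-compose.yaml logs -f
```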

You can test your node by running the following curl command:

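The full example request is collapsed in this diff, so the block below is only a sketch of its general shape: the endpoint (`http://localhost:6000/api/v1/functions/execute`) and the `Accept` header come from the visible fragment, while every value in the JSON body (`<FUNCTION_ID>`, `<FUNCTION_METHOD>`, the `config` structure) is an illustrative placeholder to be replaced with the values generated for your own node.

```shell
curl --location 'http://localhost:6000/api/v1/functions/execute' \
  --header 'Accept: application/json, text/plain, */*' \
  --header 'Content-Type: application/json' \
  --data '{
    "function_id": "<FUNCTION_ID>",
    "method": "<FUNCTION_METHOD>",
    "topic": "<TOPIC_ID>",
    "config": {
      "env_vars": [
        { "name": "ALLORA_ARG_PARAMS", "value": "<argument>" }
      ]
    }
  }'
```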

The `<TOPIC_ID>` needs to be [an existing topic on the chain](https://docs.allora.network/docs/existing-allora-appchain-topics). The `<argument>` is what the topic expects to receive to perform the inference (as a starting point for testing, you can use the `DefaultArg` value from the topic on-chain, e.g. for the ETH prediction topic it would be `"ETH"`).

### Initialize the worker/reputer for production

Your worker/reputer node is now ready to be deployed: `main.py` has been modified, all environment variables have been passed, and the node is running locally, so you are ready to deploy it to the production environment. The following command generates the `prod-docker-compose.yaml` file, which contains all the keys and parameters your worker/reputer needs to run in production.

```shell
allocmd generate worker --env prod
```

By running this command, `prod-docker-compose.yaml` will be generated with the appropriate keys and parameters. You can now run that docker-compose file or deploy the whole codebase to your preferred cloud instance. At this stage, your worker/reputer should be responding to inference requests from the Allora Chain.
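
For example, one way to bring the production stack up with Docker Compose (the `-d` flag runs it detached; this assumes you run the command from the generated project root):

```shell
docker-compose -f prod-docker-compose.yaml up --build -d
```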

### Initialize a validator for production
```shell
allocmd generate validator --name <validator-name> --network <testnet or mainnet>
```
The above command generates the validator files; you can then deploy them with docker-compose.
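
A minimal deployment sketch, assuming the command above places a docker-compose file inside the generated validator directory (the exact directory and file names may differ in your output):

```shell
cd <validator-name>
docker-compose -f <generated compose file> up --build -d
```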

### Fund account address
```shell
allocmd fund <address>
```
The above command takes an address and funds the account from the Allora faucet.

allocmd/utilities/constants.py (1 addition, 1 deletion)

@@ -1 +1 @@
-cliVersion = "0.3.20"
+cliVersion = "2.0.0"
