
A simple way to customise your worker node on the Allora network


Following the successful release of Testnet V2, the upgrade that brought a wave of new interest to the Allora network, many crypto and AI enthusiasts now run worker nodes to collect Allora points in the Points Campaign.

Read about the points campaign here: https://www.allora.network/blog/launching-phase-2-of-the-allora-points-program-30832

The best way to maximize points from your worker nodes is to provide unique inferences to the network and contribute to its collective intelligence. Using the same model as everyone else (such as the default LinearRegression model from basic-coin-prediction in the official docs) provides little to no benefit to the network and will earn your worker nodes fewer points. Read more about the concept of collective intelligence here: https://www.allora.network/blog/how-allora-s-collective-intelligence-network-echoes-the-principles-of-evolutionary-biology-599c4

How to easily change your model!

I will show you an easy way to change the model of your worker node so it can generate more points for you by providing unique inferences and contributing to the collective intelligence of the network.

This guide is based on the Allora-Basic-Coin-Prediction node.

You will need a VPS to run the Allora worker. It should meet all of these requirements:

· Operating System: Ubuntu 22.04

· CPU: minimum of 1/2 core

· Memory: 2 to 4 GB

· Storage: SSD or NVMe with at least 5 GB of space

Clean up Docker first

docker system prune

Automatic installation:

cd $HOME
rm -rf basicinstall.sh
wget https://raw.githubusercontent.com/0xtnpxsgt/Allora-Basic-Coin-Prediction/main/basicinstall.sh
chmod +x basicinstall.sh
./basicinstall.sh

Check logs

docker logs -f worker

Before reading further, please make sure that:

· Your base node is the basic-coin-prediction-node

· You have every dependency for Allora’s worker nodes installed

· Your node has been configured correctly and is running perfectly

· You know how to open and edit a file in the Linux CLI using nano / vim / vi, whichever you prefer

Here are the easy steps:

1️⃣

After you have configured everything and confirmed that the basic-coin-prediction-node runs perfectly, move into its directory:

cd $HOME/basic-coin-prediction-node


2️⃣

We will open the model.py file. I use vim, but you can use whatever editor you prefer.

vim model.py

3️⃣

At the top, you will want to add numpy as np, just in case you need it later.
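
The added import would look like this:

import numpy as np  # optional helper for array work; not required by every model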


4️⃣

Next, scroll down to the def train_model section and locate the line where the model is defined.
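
In the stock repo the relevant lines look roughly like this (variable names are taken from the upstream code and may differ in your version):

# inside train_model() in model.py
model = LinearRegression()  # the default model we are about to replace
model.fit(X_train, y_train)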


5️⃣

Go to the scikit-learn documentation and choose a regression model that you would like to use: https://scikit-learn.org/stable/supervised_learning.html

You can use any supervised learning regression model, and each one performs the prediction differently. DO NOT pick a classification model.


In this example, we will use the Lasso regression model.

6️⃣

Click the link for your preferred model and find the usage example.
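
For Lasso, the scikit-learn docs give a usage example along these lines:

from sklearn import linear_model
reg = linear_model.Lasso(alpha=0.1)
reg.fit([[0, 0], [1, 1]], [0, 1])
print(reg.predict([[1, 1]]))  # roughly 0.8 for this toy data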


7️⃣

Once found, edit the model line in your model.py accordingly.

I also added two print(...) calls for easier debugging, just in case we need them, as sketched below.
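
A sketch of the edited section, assuming the same variable names as the stock repo:

print("Begin training the model")
model = linear_model.Lasso(alpha=0.1)  # was: model = LinearRegression()
model.fit(X_train, y_train)
print("Training completed")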


8️⃣

Edit the library imports at the top of model.py too.

I commented out the LinearRegression import and added the line below to the list:

# from sklearn.linear_model import LinearRegression
from sklearn import linear_model


9️⃣

Save and exit model.py (in vim, :wq).

1️⃣0️⃣

Check requirements.txt in case we need to add any dependencies.


vim requirements.txt


In this case, Lasso comes from scikit-learn, which the default model already requires, so we don't have to add anything.
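
For reference, a hypothetical excerpt of what you are checking for (exact contents vary by repo version):

numpy
pandas
scikit-learn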

1️⃣1️⃣

Rebuild the Docker images and restart the containers with

docker compose build
docker compose up -d 

1️⃣2️⃣

If all went well, then check your inferences

curl http://localhost:8000/inference/<token> 

If the response contains a numeric prediction, congrats! You have levelled up to a level-2 builder.

Debugging

Error starting the inference container


1️⃣

Check the logs to see what is going on, then fix the bug accordingly or switch to another model.

docker compose logs -f 

2️⃣

If there is no error and you only see the printed message Begin training the model, nothing is wrong; the training process just takes longer than expected.

3️⃣

Wait until you see Training completed.

4️⃣

Restart the other containers

docker compose up -d 

5️⃣

If all went well, then check your inferences

curl http://localhost:8000/inference/<token>

Getting code 500 instead of 200 in the logs


1️⃣

Check your inferences

curl http://localhost:8000/inference/<token>

2️⃣

A common bug: the model now returns a plain value (or a 1-d array), but your app.py still tries to extract an element from it by index, which results in an error.


3️⃣

To fix this, we will have to get inside app.py

vim app.py 

4️⃣

Look for def get_eth_inference() and delete the [0] in the return statement.

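A minimal before/after sketch, assuming the return variable is named current_price_pred as in the stock repo (yours may differ):

# before: indexes into the prediction array (some versions index twice, e.g. [0][0])
return str(current_price_pred[0])

# after: returns the plain value
return str(current_price_pred)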

Depending on your chosen model, you may have to delete both [0] indices.

5️⃣ Rebuild and restart the containers

docker compose build
docker compose down
docker compose up -d

6️⃣ If all went well, then check your inferences

curl http://localhost:8000/inference/<token> 

Other things you could do to further improve the model

· Change the train_dataset

· Change how the final output is aggregated

· Edit the input (X_train and X_test)

· Hyperparameter editing / tuning (see the sketch below)
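
For example, here is a minimal hyperparameter-tuning sketch using scikit-learn's GridSearchCV; the alpha grid is illustrative, and X_train / y_train are the training data from model.py:

from sklearn import linear_model
from sklearn.model_selection import GridSearchCV

# search a few regularisation strengths for Lasso and keep the best model
search = GridSearchCV(
    linear_model.Lasso(),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X_train, y_train)
model = search.best_estimator_
print("Best alpha:", search.best_params_["alpha"])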

Please see our docs for additional advice: https://docs.allora.network/devs/workers/walkthroughs/walkthrough-price-prediction-worker/modelpy.

About the Allora Network

Allora is a self-improving decentralized AI network.

Allora enables applications to leverage smarter, more secure AI through a self-improving network of ML models. By combining innovations in crowdsourced intelligence, reinforcement learning, and regret minimization, Allora unlocks a vast new design space of applications at the intersection of crypto and AI.