Push Protocol is evolving into Push Chain, a shared-state L1 designed to deliver universal app experiences (Any Chain. Any User. Any App). 🚀
Push Storage Node is part of Push's Proof of Stake (PoS) chain and is responsible for storing transactions along with their payload data in a sharded fashion. After transactions are validated by Validator Nodes of the PoS chain and a new block is produced, the block is picked up by Storage Nodes for storage and indexing.
Push Storage Nodes ensure reliable and efficient storage of transaction data by distributing the data across multiple nodes (sharding). This allows for enhanced data redundancy, fault tolerance, scalability and also ensures running a storage node is cost-effective since only a fraction of the data is stored on each node.
To operate a storage node, participants must stake a certain amount of tokens. This staking process serves as a security deposit, ensuring that storage nodes act in the network's best interest.
In the network, once blocks are processed by validator nodes, they are picked up by storage nodes for further handling. The following diagram illustrates the interaction between the network and the storage nodes:
Sharding is a key component of the Push Storage Node architecture. Each node is responsible for a specific list of shards, ensuring a distributed and balanced load across the network. This approach enhances fault tolerance and scalability.
- **Shard Responsibility:** Each node handles a set of shards defined by the `storage.sol` smart contract, storing the associated transaction data.
- **Replication Factor:** To ensure data redundancy and reliability, each shard is replicated across multiple nodes. The replication factor can be adjusted based on network requirements and is managed by the `storage.sol` smart contract.
- **Resharding:** As more nodes join the network, resharding redistributes the shards to maintain balance. This ensures that the system can scale efficiently with the addition of new nodes.
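As a rough illustration of the shard-to-node mapping described above, the TypeScript sketch below assigns shards to nodes with a fixed replication factor. The shard count, replication factor, hash function, and round-robin placement are all assumptions made for this example; the authoritative mapping is managed by the `storage.sol` smart contract.

```typescript
// Illustrative sketch only: the real shard map lives in the storage.sol
// smart contract. All constants and placement rules here are hypothetical.

const SHARD_COUNT = 32;        // total shards in the network (assumed)
const REPLICATION_FACTOR = 3;  // copies of each shard (assumed)

// Derive a shard id for a transaction from its wallet key.
// A trivial byte-sum hash is used purely for illustration.
function shardForWallet(wallet: string, shardCount: number): number {
  let acc = 0;
  for (const ch of wallet) {
    acc = (acc + ch.charCodeAt(0)) % shardCount;
  }
  return acc;
}

// Assign each shard to REPLICATION_FACTOR nodes in round-robin fashion,
// so every shard is stored on several distinct nodes.
function buildShardMap(nodeIds: string[]): Map<number, string[]> {
  const map = new Map<number, string[]>();
  for (let shard = 0; shard < SHARD_COUNT; shard++) {
    const replicas: string[] = [];
    for (let r = 0; r < REPLICATION_FACTOR; r++) {
      replicas.push(nodeIds[(shard + r) % nodeIds.length]);
    }
    map.set(shard, replicas);
  }
  return map;
}

const nodes = ["snode1", "snode2", "snode3", "snode4"];
const shardMap = buildShardMap(nodes);
console.log(shardForWallet("eip155:1:0xAA", SHARD_COUNT)); // shard id in [0, SHARD_COUNT)
console.log(shardMap.get(0)); // replica set for shard 0
```

When a node joins or leaves, resharding amounts to recomputing this map and moving only the shards whose replica sets changed.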
Indexing transaction data is crucial for quick retrieval and efficient querying. The table below represents a proposed structure for transaction indexation:
| Wallet | Tx Hash | Block Hash | Category | Tx Data | Timestamp |
|---|---|---|---|---|---|
| eip155:1:0xAA | b0249fbb-a03d-4292-9599-042c6993958e | 2608d687-fe55-4fe9-9fa5-1f782dcebb34 | | protobuf_serialized_data | epoch |
Note: The above table example is a simplified representation of the transaction indexation structure. The actual implementation may include additional fields based on the requirements of the network.
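To make the proposed schema concrete, the TypeScript sketch below models one index row and a simple per-wallet lookup sorted by timestamp. The field names follow the table above, but the record type, sample category value, and in-memory lookup are illustrative assumptions; the actual node indexes rows in Postgres.

```typescript
// Hypothetical in-memory model of the transaction index.
// The real node persists these rows in Postgres, sharded across nodes.
interface TxIndexRow {
  wallet: string;     // CAIP-10 style id, e.g. "eip155:1:0xAA"
  txHash: string;
  blockHash: string;
  category: string;
  txData: Uint8Array; // protobuf-serialized payload
  timestamp: number;  // epoch seconds
}

// Return a wallet's transactions, newest first.
function txsForWallet(rows: TxIndexRow[], wallet: string): TxIndexRow[] {
  return rows
    .filter((r) => r.wallet === wallet)
    .sort((a, b) => b.timestamp - a.timestamp);
}

const rows: TxIndexRow[] = [
  {
    wallet: "eip155:1:0xAA",
    txHash: "b0249fbb-a03d-4292-9599-042c6993958e",
    blockHash: "2608d687-fe55-4fe9-9fa5-1f782dcebb34",
    category: "example_category", // placeholder value
    txData: new Uint8Array(),
    timestamp: 1700000001,
  },
  {
    wallet: "eip155:1:0xBB",
    txHash: "example-tx-hash",
    blockHash: "example-block-hash",
    category: "example_category",
    txData: new Uint8Array(),
    timestamp: 1700000002,
  },
];
console.log(txsForWallet(rows, "eip155:1:0xAA").length); // matching rows for the wallet
```

Indexing by wallet first reflects the expected query pattern: a client asks a storage node for the transaction history of one address.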
This project is currently a work in progress. Please be aware that things might break, and the installation process might change as we improve and dockerize it completely for public running of the node. Proceed with caution and check back frequently for updates.
The `do.sh` script is included in the `zips` folder. It provides shortcuts for running various commands, including publishing a default test key and executing Hardhat with arguments. Review the code before executing any commands.
- **Setting up `do.sh`**

  - Place `do.sh` in a directory accessible by your environment (e.g., your home directory).
  - Grant execute privileges to the script:

    ```shell
    chmod +x do.sh
    ```
- **Running `do.sh`**

  There are multiple ways to execute the `do.sh` script:

  - **Full Path Execution:** Navigate to the project directory and call the script by its full path:

    ```shell
    cd /path/to/push-storage-node-project-dir
    /home/user/do.sh command1 command2 command3
    ```

  - **Add `do.sh` to Your Path:** Follow the instructions in this Apple discussion to add `do.sh` to your system path. Then navigate to the project directory:

    ```shell
    cd /path/to/push-storage-node-project-dir
    ./do.sh command1 command2 command3
    ```

  - **Create an Alias for `do.sh` (Recommended):** Add an alias to your shell configuration:

    ```shell
    # Open .zshrc file
    nano $HOME/.zshrc

    # Add this line to the file
    alias do='/Users/your-username/Documents/projects/do.sh'

    # Save and close the file
    ```

    Restart your shell to apply the changes. Now you can use `do` to run commands:

    ```shell
    cd /path/to/push-storage-node-project-dir
    do command1 command2 command3
    ```
- Clone the repository:

  ```shell
  git clone https://github.com/push-protocol/push-snode.git
  cd push-snode
  ```

- Install dependencies:

  ```shell
  yarn install
  ```
- **Configure docker directories:** To set up the storage nodes, you'll need to configure specific directories for each node. This setup ensures that each node runs independently with its own environment and key files.

  - **Download and Unpack Docker Directory:** Get the `docker-dir-for-snodes.zip` file from the `zips` folder and extract it into your project's root directory. After extraction, you'll find a `/docker` directory containing subdirectories for each node: `/docker/01`, `/docker/02`. Each node directory (e.g., `docker/01`, `docker/02`) contains the necessary configuration files and scripts to run the node.
  - **Key Files within Each Node Directory:** These files contain environment-specific properties, such as database credentials, node identifiers, and other configuration settings that the node requires to operate.
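As a purely hypothetical illustration of what such an environment file might contain, the sketch below shows the kinds of settings described above. Every variable name here is an assumption for illustration only; the authoritative files ship inside the extracted docker directory and should not be replaced with this sketch.

```shell
# Hypothetical example of a per-node environment file.
# Variable names are illustrative, not the project's actual keys.

# Database credentials (each node points at its own Postgres database)
DB_HOST=localhost
DB_USER=postgres
DB_NAME=snode1

# Node identifier distinguishing this storage node from its peers
NODE_ID=01
```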
- **Start the docker container:**

  ```shell
  docker-compose up
  ```

  Note: After this command you should have the following containers running: `mysql`, `postgres`, `redis`.
- **Postgres Database Setup:** For the nodes to function correctly, you need to set up two separate Postgres databases, one for each node. These databases store the data related to each storage node.

  - Access the psql command-line interface by running the following command in your terminal:

    ```shell
    psql -U postgres -d postgres
    ```

  - Once you're in the psql CLI, create each of the databases by running the following commands:

    ```sql
    create database snode1;
    create database snode2;
    ```
- **Run the nodes in separate terminals:**

  ```shell
  # Run Storage Node 1
  do debug.s1

  # Run Storage Node 2
  do debug.s2
  ```
We welcome contributions from the community! To contribute, please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature/your-feature-name`).
- Make your changes and commit them (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature/your-feature-name`).
- Open a pull request.
Please ensure your code adheres to our coding standards and includes appropriate tests.
This repository is licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.