[FIX] Improve writing problems + fix several typos #332

Open · wants to merge 2 commits into master
26 changes: 12 additions & 14 deletions history/history-network.md
@@ -16,9 +16,9 @@ In addition, the chain history network provides individual epoch records for the

- Block headers
- Block bodies
- Transactions
- Ommers
- Withdrawals
  - Transactions
  - Ommers
  - Withdrawals
- Receipts
- Header epoch records (pre-merge only)

@@ -31,7 +31,7 @@ The network supports the following mechanisms for data retrieval:
- Block receipts by block header hash
- Header epoch record by epoch record hash

> The presence of the pre-merge header records provides an indirect way to look up blocks by their number, but it is restricted to pre-merge blocks. Retrieval of post-merge blocks by their number is not intrinsically supported within this network.
>
> This sub-protocol does **not** support retrieval of transactions by hash, only the full set of transactions for a given block. See the "Canonical Transaction Index" sub-protocol of the Portal Network for more information on how the portal network implements lookup of transactions by their individual hashes.

@@ -79,7 +79,7 @@ The history network uses the standard routing table structure from the Portal Wi

#### Data Radius

The history network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius` from the Ping and Pong messages for other nodes in the network. This value is a 256-bit integer and represents the data that a node is "interested" in. We define the following function to determine whether a node in the network should be interested in a piece of content.

```python
interested(node, content) = distance(node.id, content.id) <= node.radius
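# (Illustrative only, not part of the spec text.) With 256-bit node and content
# IDs, the check above could be implemented roughly as:
#
#     def interested(node_id: int, content_id: int, radius: int) -> bool:
#         return distance(node_id, content_id) <= radius
#
# where `distance` is this sub-protocol's distance function and `radius` is the
# peer's advertised data_radius.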
@@ -112,7 +112,7 @@ MAX_RECEIPT_LENGTH = 2**27 # ~= 134 million
# Maximum receipt length is logging a bunch of data out, currently at a cost of
# 8 gas per byte. Since that is double the cost of 0 calldata bytes, the
# maximum size is roughly half that of the transaction: 3.75 million bytes.
# But there is more reason for protocol devs to constrain the transaction length,
# But there is more reason for protocol developers to constrain the transaction length,
# and it's not clear what the practical limits for receipts are, so we should add more buffer room.
# Imagine the cost drops by 2x and the block gas limit goes up by 8x. So we add 2**4 = 16x buffer.

@@ -181,10 +181,10 @@ content_key = selector + SSZ.serialize(block_header_key)
```

> **_Note:_** The `BlockHeaderProof` allows headers to be provided without a proof (`None`).
For pre-merge headers, clients SHOULD NOT accept headers without a proof
as there is the `HistoricalHashesAccumulatorProof` solution available.
For post-merge headers, there is currently no proof solution and clients MAY
accept headers without a proof.
> For pre-merge headers, clients SHOULD NOT accept headers without a proof
> as there is the `HistoricalHashesAccumulatorProof` solution available.
> For post-merge headers, there is currently no proof solution and clients MAY
> accept headers without a proof.
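
As a non-normative sketch of the header content-key construction referenced in the hunk above (`content_key = selector + SSZ.serialize(block_header_key)`), where the `0x00` selector value and the single 32-byte `block_hash` field are assumptions rather than text from this diff:

```python
def block_header_content_key(block_hash: bytes) -> bytes:
    """Sketch: 1-byte selector followed by the SSZ-serialized block header key.

    For a fixed-size SSZ container whose only field is a Bytes32, the
    serialization is simply the 32 raw bytes, so the key is 33 bytes long.
    """
    assert len(block_hash) == 32
    selector = b"\x00"  # assumed selector for "block header by hash"
    return selector + block_hash
```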

#### Block Body

@@ -276,7 +276,7 @@ content_key = selector + SSZ.serialize(epoch_record_key)

#### The "Historical Hashes Accumulator"

The "Historical Hashes Accumulator" is based on the [double-batched merkle log accumulator](https://ethresear.ch/t/double-batched-merkle-log-accumulator/571) that is currently used in the beacon chain. This data structure is designed to allow nodes in the network to "forget" the deeper history of the chain, while still being able to reliably receive historical headers with a proof that the received header is indeed from the canonical chain (as opposed to an uncle mined at the same block height). This data structure is only used for pre-merge blocks.
The "Historical Hashes Accumulator" is based on the [double-batched merkle log accumulator](https://ethresear.ch/t/double-batched-merkle-log-accumulator/571) that is currently used in the beacon chain. This data structure is designed to allow nodes in the network to "forget" the deeper history of the chain, while still being able to reliably receive historical headers with a proof that the received header is indeed from the canonical chain (as opposed to an uncle mined at the same block height). This data structure is only used for pre-merge blocks.

The accumulator is defined as an [SSZ](https://ssz.dev/) data structure with the following schema:

@@ -298,7 +298,6 @@ HistoricalHashesAccumulator = Container[

The algorithm for building the accumulator is as follows.


```python
def update_accumulator(accumulator: HistoricalHashesAccumulator, new_block_header: BlockHeader) -> None:
# get the previous total difficulty
@@ -322,9 +321,8 @@ def update_accumulator(accumulator: HistoricalHashesAccumulator, new_block_heade
accumulator.current_epoch.append(header_record)
```


The `HistoricalHashesAccumulator` is fully built and frozen when the last block before TheMerge/Paris fork is added and the `hash_tree_root` of the last, incomplete `EpochRecord` is added to the `historical_epochs`.
The network provides no mechanism for acquiring the fully built `HistoricalHashesAccumulator`. Clients are encouraged to solve this however they choose, with the suggestion that they include a frozen copy of the accumulator at the point of TheMerge within their client code, and provide a mechanism for users to override this value if they so choose. The `hash_tree_root` of the `HistoricalHashesAccumulator` is
defined in [EIP-7643](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7643.md).
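
A minimal, non-normative usage sketch of the accumulator lifecycle described above, assuming the spec's `HistoricalHashesAccumulator` container and `update_accumulator` function plus a hypothetical `iter_pre_merge_headers()` source of canonical headers:

```python
# Build the accumulator over every pre-merge header, in order.
accumulator = HistoricalHashesAccumulator(historical_epochs=[], current_epoch=[])
for header in iter_pre_merge_headers():  # hypothetical helper, not part of the spec
    update_accumulator(accumulator, header)

# After the last pre-merge (Paris) block, the accumulator is frozen; clients
# embed its hash_tree_root (see EIP-7643) as a trusted constant.
frozen_root = hash_tree_root(accumulator)
```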

#### HistoricalHashesAccumulatorProof
18 changes: 7 additions & 11 deletions implementation-details-overlay.md
@@ -198,7 +198,7 @@ Each bucket is limited to `K` total members

### D.3.d - Replacement cache

Each bucket maintains a set of additional nodes known to be at the appropriate distance. When a node is removed from the routing table it is replaced by a node from the replacement cache when one is available. The cache is managed such that it remains disjoint from the nodes in the corresponding bucket.
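
A minimal sketch of one way to keep the replacement cache disjoint from its bucket; the class shape, `K` capacity handling, and method names are assumptions for illustration, not part of this document:

```python
class Bucket:
    def __init__(self, k: int) -> None:
        self.k = k
        self.members: list[bytes] = []       # active node IDs, at most k entries
        self.replacements: list[bytes] = []  # spare nodes, kept disjoint from members

    def add_replacement(self, node_id: bytes) -> None:
        # Only cache nodes that are not already active members of the bucket.
        if node_id not in self.members and node_id not in self.replacements:
            self.replacements.append(node_id)

    def remove_member(self, node_id: bytes) -> None:
        # When an active node is removed, promote a replacement if one is available.
        self.members.remove(node_id)
        if self.replacements and len(self.members) < self.k:
            self.members.append(self.replacements.pop())
```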

## D.4 - Retrieve nodes at specified log-distance

@@ -218,15 +218,15 @@ The client uses a set of bootnodes to acquire an initial view of the network.

### E.1.a - Bootnodes

Each supported sub protocol can have its own set of bootnodes. These records can be either hard coded into the client or provided via client configuration.

## E.2 - Population of routing table

The client actively seeks to populate its routing table by performing [RFN](#TODO) lookups to discover new nodes.

## E.3 - Liveliness checks

The client tracks *liveliness* of nodes in its routing table and periodically checks the liveliness of the node in its routing table which was least recently checked.
The client tracks _liveliness_ of nodes in its routing table and periodically checks the liveliness of the node in its routing table which was least recently checked.
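
A rough sketch of the least-recently-checked selection; the `last_checked_at` field and `ping` helper are assumptions for illustration:

```python
import time

def next_liveliness_check(routing_table) -> bool:
    # Pick the entry whose liveliness was checked longest ago and ping it.
    node = min(routing_table.nodes(), key=lambda n: n.last_checked_at)
    node.last_checked_at = time.time()
    return ping(node)  # a failed ping would trigger removal/replacement (see D.3.d)
```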

### E.3.a - Rate Limiting Liveliness Checks

@@ -238,7 +238,7 @@ Management of stored content.

## F.1 - Content can be stored

Content can be stored in a persistent database. Databases are segmented by sub protocol.

## F.2 - Content can be retrieved by `content_id`

@@ -248,12 +248,10 @@ Given a known `content_id` the corresponding content payload can be retrieved.

Content can be removed.


## F.4 - Query furthest by distance

Retrieval of the content in the database that is furthest from a provided `node_id`, using the custom distance function.


## F.5 - Total size of stored content

Retrieval of the total number of bytes stored.
@@ -274,7 +272,7 @@ The ability to listening for an inbound connection from another node with a `con

## G.2 - Enforcement of maximum stored content size

When the total size of stored content exceeds the configured maximum content storage size, the content which is furthest from the local `node_id` is evicted in a timely manner. This should also result in any "data radius" values relevant to this network being adjusted.
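
A hedged sketch of the pruning behaviour described above; the database and distance interfaces shown are assumptions for illustration only:

```python
MAX_UINT256 = 2**256 - 1

def enforce_storage_limit(db, local_node_id: int, max_bytes: int) -> int:
    """Evict the furthest content until usage fits the cap, then return the new radius."""
    while db.total_size() > max_bytes:
        # F.4: find the stored content furthest from the local node and remove it.
        db.delete(db.furthest_content_id(local_node_id))
    # The new data radius covers the furthest content still stored (or everything if empty).
    furthest = db.furthest_content_id(local_node_id)
    return distance(local_node_id, furthest) if furthest is not None else MAX_UINT256
```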

## G.3 - Retrieval via FINDCONTENT/FOUNDCONTENT & uTP

@@ -298,28 +296,26 @@ Support for receipt of content using the OFFER/ACCEPT messages and uTP sub proto

### G.4.a - Handle incoming gossip

The client can listen for incoming OFFER messages, responding with an ACCEPT message for any offered content which is of interest to the client.

#### G.4.a.1 - Receipt via uTP

After sending an ACCEPT response to an OFFER request, the client listens for an inbound uTP stream with the `connection-id` that was sent with the ACCEPT response.

### G.4.b - Neighborhood Gossip Propogation
### G.4.b - Neighborhood Gossip Propagation

Upon receiving and validating gossip content, the content should then be gossiped to some set of interested nearby peers.

#### G.4.b.1 - Sending content via uTP

Upon receiving an ACCEPT message in response to our own OFFER message, the client can initiate a uTP stream with the other node and send the content payload across the stream.


## G.5 - Serving Content

The client should listen for FINDCONTENT messages.

When a FINDCONTENT message is received, either the requested content or the nodes known to be closest to the content are returned via a FOUNDCONTENT message.
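
As a non-normative sketch of that behaviour (field names, the `content_id_for` derivation, and the node limit are assumptions; the actual wire encoding is defined in the wire protocol spec):

```python
def handle_find_content(db, routing_table, content_key: bytes) -> dict:
    content_id = content_id_for(content_key)  # assumed content_key -> content_id derivation
    payload = db.get(content_id)
    if payload is not None:
        # Small payloads fit directly in FOUNDCONTENT; larger ones are served over uTP.
        return {"type": "FOUNDCONTENT", "content": payload}
    # Otherwise return the ENRs of the nodes we know to be closest to the content.
    closest = routing_table.closest_nodes(content_id, limit=16)
    return {"type": "FOUNDCONTENT", "enrs": [node.enr for node in closest]}
```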


# H - JSON-RPC

Endpoints that are required for the portal network wire protocol.