From 57c02f77451a20ed487af2789e2ad4228c84a886 Mon Sep 17 00:00:00 2001 From: Alessandro Mazza <121622391+alessandromazza98@users.noreply.github.com> Date: Fri, 29 Sep 2023 11:37:13 +0200 Subject: [PATCH] Markdown linting support (#159) --- .github/workflows/hygeine.yml | 18 ++++ .markdownlint.json | 4 + CONTRIBUTING.md | 14 ++-- README.md | 75 ++++++++++++++--- SECURITY.md | 3 +- docs/README.md | 47 +++++++---- docs/alert-routing.md | 36 ++++---- docs/architecture/alerting.markdown | 19 +++-- docs/architecture/api.markdown | 18 ++-- docs/architecture/architecture.markdown | 5 ++ docs/architecture/engine.markdown | 55 ++++++++---- docs/architecture/etl.markdown | 106 +++++++++++++++--------- docs/heuristics.markdown | 52 ++++++------ docs/index.markdown | 3 + docs/telemetry.markdown | 9 +- pull_request_template.md | 7 +- 16 files changed, 323 insertions(+), 148 deletions(-) create mode 100644 .markdownlint.json diff --git a/.github/workflows/hygeine.yml b/.github/workflows/hygeine.yml index 84d76d0b..b6d099c6 100644 --- a/.github/workflows/hygeine.yml +++ b/.github/workflows/hygeine.yml @@ -37,3 +37,21 @@ jobs: with: # Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version version: v1.52.1 + + markdown-linting: + runs-on: ubuntu-latest + + steps: + - name: Check out code + uses: actions/checkout@v3 + + - name: Set up Node.js + uses: actions/setup-node@v2 + with: + node-version: '14' + + - name: Install markdownlint CLI + run: npm install -g markdownlint-cli + + - name: Run markdownlint + run: markdownlint '**/*.md' diff --git a/.markdownlint.json b/.markdownlint.json new file mode 100644 index 00000000..84371ba5 --- /dev/null +++ b/.markdownlint.json @@ -0,0 +1,4 @@ +{ + "MD013": false +} + \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index eeec9908..715b3e2d 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -11,12 +11,12 @@ Coinbase projects. ## Before You Start -Ensure that you have read and understand the project's README file and the +Ensure that you have read and understand the project's README file and the contribution guidelines. Search the issues tracker to see if the issue you want to work on has already been reported or if someone is already working on it. If you find an existing issue that you would like to work on, request to be assigned to the issue. If you cannot find an existing issue that matches -what you want to work on, create a new issue and wait for it to be assigned to +what you want to work on, create a new issue and wait for it to be assigned to you before starting work on it. ## Bug Reports @@ -51,19 +51,19 @@ The best way to see a feature added, however, is to submit a pull request. * Ensure that your code is well-documented and meets the project's coding standards. -* Provide a reference to the issue you worked on and provide a brief description of the - changes you made. +* Provide a reference to the issue you worked on and provide a brief description +of the changes you made. * Submit your pull request! ## Contributing to an Existing Issue -If you have been assigned an issue, please confirm that the issue is still open +If you have been assigned an issue, please confirm that the issue is still open and has not already been resolved. If you have any questions about the issue, please ask on the issue thread before starting work on it. Once you are assigned to an issue, you can start working on a solution for it. 
Please note that it is important to communicate regularly with the project maintainers and update
-them on your progress. If you are no longer able to work on an issue, please 
+them on your progress. If you are no longer able to work on an issue, please
 let us know as soon as possible so we can reassign it.
 
 ## Support Requests
 
@@ -76,4 +76,4 @@ All support requests must be made via [our support team][3].
 
 [1]: https://github.com/base-org/pessimism/issues
 [2]: https://medium.com/brigade-engineering/the-secrets-to-great-commit-messages-106fc0a92a25
-[3]: https://support.coinbase.com/customer/en/portal/articles/2288496-how-can-i-contact-coinbase-support- \ No newline at end of file
+[3]: https://support.coinbase.com/customer/en/portal/articles/2288496-how-can-i-contact-coinbase-support-
diff --git a/README.md b/README.md
index daee0dc0..d0d401e9 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,12 @@
 # pessimism
-__Because you can't always be optimistic__
-_Pessimism_ is a public good monitoring service that allows for [OP Stack](https://stack.optimism.io/) and EVM compatible blockchains to be continuously assessed for real-time threats using custom defined user heuristic rule sets. To learn about Pessimism's architecture, please advise the documentation.
+
+## Because you can't always be optimistic
+
+_Pessimism_ is a public good monitoring service that allows [OP Stack](https://stack.optimism.io/)
+and EVM compatible blockchains to be
+continuously assessed for real-time threats using custom defined
+user heuristic rule sets. To learn about Pessimism's architecture,
+please consult the documentation.
 
@@ -17,17 +22,23 @@
 [![GitHub Issues](https://img.shields.io/github/issues-raw/base-org/pessimism.svg)](https://github.com/base-org/pessimism/issues)
 
 **Warning:**
-Pessimism is currently experimental and very much in development. It means Pessimism is currently unstable, so code will change and builds can break over the coming months. If you come across problems, it would help greatly to open issues so that we can fix them as quickly as possible.
+Pessimism is currently experimental and very much in development. This means
+Pessimism is currently unstable, so code will change and builds can break
+over the coming months. If you come across problems, it would help greatly
+to open issues so that we can fix them as quickly as possible.
 
 ## Setup
+
 To use the template, run the following command(s):
+
 1. Create local config file (`config.env`) to store all necessary environment variables. There's already an example `config.env.template` in the repo that stores default env vars.
 
 2. [Download](https://go.dev/doc/install) or upgrade to `golang 1.19`.
 
 3. Install all project golang dependencies by running `go mod download`.
 
-# To Run
+## To Run
+
 1. Compile pessimism to machine binary by running the following project level command(s):
     * Using Make: `make build-app`
 
 2. To run the compiled binary, you can use the following project level command(s):
     * Using Make: `make run-app`
     * Direct Call: `./bin/pessimism`
 
-
 ## Docker
+
 1. Ensure [docker](https://docs.docker.com/engine/install/) is installed on your machine
 
 2. Pull the latest image from GitHub container registry (ghcr) via `docker pull ghcr.io/base-org/pessimism:latest`
 
 3. Make sure you have followed the above instructions to create a local config file (config.env) using the config.env.template
 
 4.
Run the following:
-   * Without genesis.json: 
+   * Without genesis.json:
+
    ```bash
    docker run -p 8080:8080 -p 7300:7300 --env-file=config.env -it ghcr.io/base-org/pessimism:latest
    ```
 
-   * With genesis.json: 
+
+   * With genesis.json:
+
    ```bash
    docker run -p 8080:8080 -p 7300:7300 --env-file=config.env -it -v ${PWD}/genesis.json:/app/genesis.json ghcr.io/base-org/pessimism:latest
    ```
 
 **Note**: If you want to bootstrap the application and run specific heuristics/pipelines upon start, update the config.env `BOOTSTRAP_PATH` value to the location of your genesis.json file, then run the Docker command above that mounts genesis.json
 
 ### Building and Running New Images
-- Run `make docker-build` at the root of the repository to build a new docker image.
-
-- Run `make docker-run` at the root of the repository to run the new docker image.
+* Run `make docker-build` at the root of the repository to build a new docker image.
+* Run `make docker-run` at the root of the repository to run the new docker image.
 
 ## Linting
-[golangci-lint](https://golangci-lint.run/) is used to perform code linting. Configurations are defined in [.golangci.yml](./.golangci.yml)
+
+[golangci-lint](https://golangci-lint.run/) is used to perform code linting.
+Configurations are defined in [.golangci.yml](./.golangci.yml)
 
 It can be run using the following project level command(s):
+
 * Using Make: `make lint`
 * Direct Call: `golangci-lint run`
 
+## Linting Markdown Files
+
+To ensure consistent formatting and avoid common mistakes in our Markdown documents,
+we use markdownlint. Before submitting a pull request, you can check your Markdown
+files for compliance.
+
+### Installation
+
+1. **Install Node.js**:
+If you haven't already, install [Node.js](https://nodejs.org/en).
+
+2. **Install markdownlint CLI globally**:
+
+```bash
+npm install -g markdownlint-cli
+```
+
+### Linting with markdownlint
+
+To lint your Markdown files, navigate to the root directory of the project and run:
+
+```bash
+markdownlint '**/*.md'
+```
+
+If markdownlint reports any issues, please fix them before submitting your pull request.
+
 ## Testing
 
 ### Unit Tests
+
 Unit tests are written using the native [go test](https://pkg.go.dev/testing) library with test mocks generated using the golang native [mock](https://github.com/golang/mock) library. These tests live throughout the project's `/internal` directory and are named with the suffix `_test.go`.
 
 Unit tests can be run using the following project level command(s):
+
 * Using Make: `make test`
 * Direct Call: `go test ./...`
 
 ### Integration Tests
+
 Integration tests are written leveraging the existing [op-e2e](https://github.com/ethereum-optimism/optimism/tree/develop/op-e2e) testing framework for spinning up pieces of the bedrock system. Additionally, the [httptest](https://pkg.go.dev/net/http/httptest) library is used to mock downstream alerting services (e.g. Slack's webhook API). These tests live in the project's `/e2e` directory.
 
 Integration tests can be run using the following project level command(s):
+
 * Using Make: `make e2e-test`
 * Direct Call: `go test ./e2e/...`
 
 ## Bootstrap Config
+
 A bootstrap config file is used to define the initial state of the pessimism service. The file must be `json` formatted with its directive defined in the `BOOTSTRAP_PATH` env var. (e.g.
`BOOTSTRAP_PATH=./genesis.json`)
 
 ### Example File
-```
+
+```json
 [
     {
         "network": "layer1",
@@ -122,6 +171,6 @@ A bootstrap config file is used to define the initial state of the pessimism ser
 ]
 ```
 
-
 ## Spawning a heuristic session
+
 To learn about the currently supported heuristics and how to spawn them, please consult the [heuristics' documentation](./docs/heuristics.markdown).
diff --git a/SECURITY.md b/SECURITY.md
index 98bc1eba..681c8183 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -1,6 +1,7 @@
 # Pessimism Security Policy
 
 ## Reporting a Security Bug
+
 If you think you have discovered a security issue within any part of this codebase, please let us know. We take security bugs seriously; upon investigating and confirming one, we will patch it within a reasonable amount of time, and ultimately release a public security bulletin, in which we discuss the impact and credit the discoverer.
 
-Report your findings to our H1 program: https://hackerone.com/coinbase
+Report your findings to our H1 program: <https://hackerone.com/coinbase>
diff --git a/docs/README.md b/docs/README.md
index 42d8e78f..56e7d698 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,8 +1,9 @@
 # Pessimism Documentation
 
-This directory contains the english specs for the Pessimism application. 
+This directory contains the English specs for the Pessimism application.
 
 ## Contents
+
 - [Architecture](architecture/architecture.markdown)
 - [JSON-RPC API](swaggerdoc.html)
 - [ETL Subsystem](architecture/etl.markdown)
@@ -12,41 +13,59 @@ This directory contains the english specs for the Pessimism application.
 - [Telemetry](telemetry.markdown)
 
 ## GitHub Pages
-The Pessimism documentation is hosted on GitHub Pages. To view the documentation, please visit [https://base-org.github.io/pessimism](https://base-org.github.io/pessimism/architecture).
-
+
+The Pessimism documentation is hosted on GitHub Pages. To view the documentation,
+please visit [https://base-org.github.io/pessimism](https://base-org.github.io/pessimism/architecture).
 
 ## Contributing
 
-If you would like to contribute to the Pessimism documentation, please advise the guidelines stipulated in the [CONTRIBUTING.md](../CONTRIBUTING.md) file __before__ submitting a pull request.
+If you would like to contribute to the Pessimism documentation, please consult
+the guidelines stipulated in the [CONTRIBUTING.md](../CONTRIBUTING.md)
+file __before__ submitting a pull request.
 
 ## Running Docs Website Locally
 
 ### Prerequisites
-- Ensure that you have installed the latest version of ruby on your machine following steps located [here](https://www.ruby-lang.org/en/documentation/installation/).
-- Installing ruby should also install the ruby bundler which is used to install dependencies located in the [Gemfile](Gemfile)
+
+- Ensure that you have installed the latest version of Ruby on your machine
+following the steps located [here](https://www.ruby-lang.org/en/documentation/installation/).
+- Installing Ruby should also install the Ruby bundler, which is used to
+install dependencies located in the [Gemfile](Gemfile)
 
 ### Local Testing
-To run the documentation website locally, ensure you have followed the prerequisite steps, then do the following
+
+To run the documentation website locally, ensure you have followed the prerequisite
+steps, then do the following:
+
 1. Install dependencies via `bundle install`
 2. Run `bundle exec jekyll serve`
 3. You should now see a localhost version of documentation for the website!
## Creating Diagrams in GitHub Pages
-It is important to note that you cannot simply write a mermaid diagram as you normally would with markdown and expect the diagram to be properly rendered via GitHub pages. To enable proper GitHub pages rendering, follow the recommended steps below:
+
+It is important to note that you cannot simply write a mermaid diagram as you
+normally would with markdown and expect the diagram to be properly rendered
+via GitHub pages. To enable proper GitHub pages rendering, follow the
+recommended steps below:
+
 1. Implement your diagram in markdown using the ` ```mermaid\n` key
-2. Once done with implementing the diagram, ff you have not already, import the mermaid.js library via the following
-   ```
+2. Once done with implementing the diagram, if you have not already,
+import the mermaid.js library via the following
+
+   ```html
    {% raw %}
 
    {% endraw %}
    ```
+
-3. Delete the ` ```mermaid ` key and replace it with
-   ```
+3. Delete the ` ```mermaid ` key and replace it with
+
+   ```text
    {% raw %}
--- diagram implementation here ---
{% endraw %} - -4. Done! To make sure this renders correctly, you can run `bundle exec jekyll serve` to view your changes. \ No newline at end of file + +4. Done! To make sure this renders correctly, you can run +`bundle exec jekyll serve` to view your changes. diff --git a/docs/alert-routing.md b/docs/alert-routing.md index 869f6d34..10917adb 100644 --- a/docs/alert-routing.md +++ b/docs/alert-routing.md @@ -6,25 +6,30 @@ permalink: /alert-routing ## Overview -The alert routing feature enables users to define a number of alert destinations and then route alerts to those -destinations based on the alert's severity. For example, a user may want to send all alerts to Slack but only send high -severity alerts to PagerDuty. +The alert routing feature enables users to define a number of alert destinations +and then route alerts to those destinations based on the alert's severity. +For example, a user may want to send all alerts to Slack but only send high +severity alerts to PagerDuty. ## How it works + Alerts are routed to destinations based on the severity of the given heuristic. -When a heuristic is deployed, the user must specify the severity of the alert that the heuristic will produce. -When the heuristic is run, the alert is routed to the configured destinations based on the severity of the alert. -For example, if a heuristic is configured to produce a high severity alert, the alert will be routed to all configured -destinations that support high severity alerts. +When a heuristic is deployed, the user must specify the severity of the alert +that the heuristic will produce. When the heuristic is run, the alert is routed +to the configured destinations based on the severity of the alert. For example, +if a heuristic is configured to produce a high severity alert, the alert will be +routed to all configured destinations that support high severity alerts. -Each severity level is configured independently for each alert destination. A user can add any number of alert -configurations per severity. +Each severity level is configured independently for each alert destination. +A user can add any number of alert configurations per severity. -Located in the root directory you'll find a file named `alerts-template.yaml`. This file contains a template for -configuring alert routing. The template contains a few examples on how you might want to configure your alert routing. +Located in the root directory you'll find a file named `alerts-template.yaml`. +This file contains a template for configuring alert routing. The template contains +a few examples on how you might want to configure your alert routing. ## Supported Alert Destinations + Pessimism currently supports the following alert destinations: | Name | Description | @@ -33,6 +38,7 @@ Pessimism currently supports the following alert destinations: | pagerduty | Sends alerts to a PagerDuty service | ## Alert Severity + Pessimism currently defines the following severities for alerts: | Severity | Description | @@ -41,10 +47,12 @@ Pessimism currently defines the following severities for alerts: | medium | Alerts that could be hazardous, but may not be completely destructive | | high | Alerts that require immediate attention and could result in a loss of funds | - ## PagerDuty Severity Mapping -PagerDuty supports the following severities: `critical`, `error`, `warning`, and `info`. 
-Pessimism maps the Pessimism severities to [PagerDuty severities](https://developer.pagerduty.com/docs/ZG9jOjExMDI5NTgx-send-an-alert-event) as follows ([ref](../internal/core/alert.go)):
+
+PagerDuty supports the following severities: `critical`, `error`, `warning`,
+and `info`. Pessimism maps the Pessimism severities to
+[PagerDuty severities](https://developer.pagerduty.com/docs/ZG9jOjExMDI5NTgx-send-an-alert-event)
+as follows ([ref](../internal/core/alert.go)):
 
 | Pessimism | PagerDuty |
 |-----------|-----------|
diff --git a/docs/architecture/alerting.markdown b/docs/architecture/alerting.markdown
index b88c6aa6..dc1b856a 100644
--- a/docs/architecture/alerting.markdown
+++ b/docs/architecture/alerting.markdown
@@ -9,6 +9,7 @@ permalink: /architecture/alerting
 {% endraw %}
 
 ## Overview
+
 The alerting subsystem will receive alerts from the `EngineManager` and publish them to the appropriate alerting destinations. The alerting subsystem will also be responsible for managing the lifecycle of alerts. This includes creating, updating, and removing alerting entries for heuristic sessions.
 
 ## Diagram
@@ -39,28 +40,35 @@ PH --> |"HTTP POST"|PagerDutyAPI("PagerDuty API")
 {% endraw %}
 
 ### Alert
-An `Alert` type stores all necessary metadata for external consumption by a downstream entity. 
+
+An `Alert` type stores all necessary metadata for external consumption by a downstream entity.
+
 ### Alert Store
+
 The alert store is a persistent storage layer that is used to store alerting entries. As of now, the alert store only supports configurable alert destinations for each alerting entry, i.e.:
+
 ```text
 (SUUID) --> (AlertDestination)
 ```
 
 ### Alert Destinations
+
 An alert destination is a configurable destination that an alert can be sent to. As of now this only includes _Slack_. In the future however, this will include other third party integrations.
 
 #### Slack
-The Slack alert destination is a configurable destination that allows alerts to be sent to a specific Slack channel. The Slack alert destination will be configured with a Slack webhook URL. The Slack alert destination will then use this URL to send alerts to the specified Slack channel. 
+
+The Slack alert destination is a configurable destination that allows alerts to be sent to a specific Slack channel. The Slack alert destination will be configured with a Slack webhook URL. The Slack alert destination will then use this URL to send alerts to the specified Slack channel.
 
 #### PagerDuty
-The PagerDuty alert destination is a configurable destination that allows alerts to be sent to a specific PagerDuty services via the use of integration keys. Pessimism also uses the SUUID associated with an alert as a deduplication key for PagerDuty. This is done to ensure that PagerDuty will not be spammed with duplicate or incidents. 
+
+The PagerDuty alert destination is a configurable destination that allows alerts to be sent to a specific PagerDuty service via the use of integration keys. Pessimism also uses the SUUID associated with an alert as a deduplication key for PagerDuty. This is done to ensure that PagerDuty will not be spammed with duplicate incidents.
 
 ### Alert CoolDowns
-To ensure that alerts aren't spammed to destinations once invoked, a time based cooldown value (`cooldown_time`) can be defined within the `alert_params` of a heuristic session config. This time value determines how long a heuristic session must wait before being allowed to alert again. 
+
+To ensure that alerts aren't spammed to destinations once invoked, a time based cooldown value (`cooldown_time`) can be defined within the `alert_params` of a heuristic session config. This time value determines how long a heuristic session must wait before being allowed to alert again.
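+
+As a rough sketch of the gating logic this implies (illustrative Go, not
+Pessimism's actual implementation; the type and field names here are
+hypothetical), the check amounts to a simple timestamp comparison:
+
+```go
+package main
+
+import (
+    "fmt"
+    "time"
+)
+
+// cooldownGate suppresses repeat alerts for a heuristic session until the
+// configured cooldown window has elapsed.
+type cooldownGate struct {
+    lastAlert time.Time
+    cooldown  time.Duration // derived from alert_params.cooldown_time
+}
+
+// canAlert reports whether a new alert may fire, recording the alert
+// time when it does.
+func (g *cooldownGate) canAlert(now time.Time) bool {
+    if !g.lastAlert.IsZero() && now.Sub(g.lastAlert) < g.cooldown {
+        return false // still cooling down; suppress the alert
+    }
+    g.lastAlert = now
+    return true
+}
+
+func main() {
+    g := cooldownGate{cooldown: 10 * time.Second}
+    now := time.Now()
+    fmt.Println(g.canAlert(now))                      // true: first alert fires
+    fmt.Println(g.canAlert(now.Add(5 * time.Second))) // false: within cooldown
+}
+```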
+ +To ensure that alerts aren't spammed to destinations once invoked, a time based cooldown value (`cooldown_time`) can be defined within the `alert_params` of a heuristic session config. This time value determines how long a heuristic session must wait before being allowed to alert again. An example of this is shown below: + ```json { "network": "layer1", @@ -81,4 +89,5 @@ An example of this is shown below: ``` ### Alert Messages -Pessimism allows for the arbitrary customization of alert messages. This is done by defining an `message` value string within the `alerting_params` of a heuristic session bootstrap config or session creation request. This is critical for providing additional context on alerts that allow for easier ingestion by downstream consumers (i.e, alert responders). \ No newline at end of file + +Pessimism allows for the arbitrary customization of alert messages. This is done by defining an `message` value string within the `alerting_params` of a heuristic session bootstrap config or session creation request. This is critical for providing additional context on alerts that allow for easier ingestion by downstream consumers (i.e, alert responders). diff --git a/docs/architecture/api.markdown b/docs/architecture/api.markdown index 9bcc2a30..e5f0eb09 100644 --- a/docs/architecture/api.markdown +++ b/docs/architecture/api.markdown @@ -5,12 +5,15 @@ permalink: /api --- ### Overview + The Pessimism API is a RESTful HTTP API that allows users to interact with the Pessimism application. The API is built using the [go-chi](https://github.com/go-chi/chi) framework and is served using the native [http package](https://pkg.go.dev/net/http). The API is designed to be modular and extensible, allowing for the addition of new endpoints and functionality with relative ease. -Currently, interactive endpoint documentation is hosted via [Swagger UI](https://swagger.io/tools/swagger-ui/) at [https://base-org.github.io/pessimism/](https://base-org.github.io/pessimism/). +Currently, interactive endpoint documentation is hosted via [Swagger UI](https://swagger.io/tools/swagger-ui/) at [https://base-org.github.io/pessimism/](https://base-org.github.io/pessimism/). ### Configuration + The API can be customly configured using environment variables stored in a `config.env` file. The following environment variables are used to configure the API: + - `SERVER_HOST`: The host address to serve the API on (eg. `localhost`) - `SERVER_PORT`: The port to serve the API on (eg. `8080`) - `SERVER_KEEP_ALIVE`: The keep alive second duration for the server (eg. `10`) @@ -18,11 +21,14 @@ The API can be customly configured using environment variables stored in a `conf - `SERVER_WRITE_TIMEOUT`: The write timeout second duration for the server (eg. `10`) ### Components + The Pessimism API is broken down into the following constituent components: -* `handlers`: The handlers package contains the HTTP handlers for the API. Each handler is responsible for handling a specific endpoint and is responsible for parsing the request, calling the appropriate service method, and renders a response. -* `service`: The service package contains the business logic for the API. The service is responsible for handling calls to the core Pessimism subsystems and is responsible for validating incoming requests. -* `models`: The models package contains the data models used by the API. Each model is responsible for representing a specific data type and is used to parse and validate incoming requests. 
-* `server`: The server package contains the HTTP server for the API. The server is responsible for serving the API and is responsible for handling incoming requests and dispatching them to the appropriate handler function.
+
+- `handlers`: The handlers package contains the HTTP handlers for the API. Each handler is responsible for handling a specific endpoint: parsing the request, calling the appropriate service method, and rendering a response.
+- `service`: The service package contains the business logic for the API. The service is responsible for handling calls to the core Pessimism subsystems and is responsible for validating incoming requests.
+- `models`: The models package contains the data models used by the API. Each model is responsible for representing a specific data type and is used to parse and validate incoming requests.
+- `server`: The server package contains the HTTP server for the API. The server is responsible for serving the API, handling incoming requests, and dispatching them to the appropriate handler function.
 
 ### Authorization and Authentication
+
-TBD \ No newline at end of file
+TBD
diff --git a/docs/architecture/architecture.markdown b/docs/architecture/architecture.markdown
index 860040df..d017867c 100644
--- a/docs/architecture/architecture.markdown
+++ b/docs/architecture/architecture.markdown
@@ -5,7 +5,9 @@ permalink: /architecture
 ---
 
 ## Overview
+
 There are *three subsystems* that drive Pessimism’s architecture:
+
 1. [ETL](./etl.markdown) - Modularized data extraction system for retrieving and processing external chain data in the form of a DAG known as the Pipeline DAG
 2. [Risk Engine](./engine.markdown) - Logical execution platform that runs a set of heuristics on the data funneled from the Pipeline DAG
 3. [Alerting](./alerting.markdown) - Alerting system that is used to notify users of heuristic failures
 
 These systems will be accessible by a client through the use of a JSON-RPC API that has unilateral access to all three primary subsystems.
 
 The API will be supported to allow Pessimism users, via a client, to:
+
 1. Start heuristic sessions
 2. Update existing heuristic sessions
 3. Remove heuristic sessions
 
 ## Diagram
+
 The following diagram illustrates the core interaction flow between the three primary subsystems, API, and external data sources:
 
 ![high level component diagram](../assets/images/high_level_diagram.png)
 
 ## Shared State
+
 To provide context about specific data values (i.e. addresses to monitor) between subsystems, Pessimism uses a shared state store. The shared state store will be a non-persistent storage layer. This means that the data will not be persisted to disk and will be lost upon restart of the Pessimism service.
 
 **NOTE: As of now, the shared state store only supports an in-memory representation and does not yet leverage a dedicated cache solution like Redis**
diff --git a/docs/architecture/engine.markdown b/docs/architecture/engine.markdown
index 5a0a2bfa..ebed2bde 100644
--- a/docs/architecture/engine.markdown
+++ b/docs/architecture/engine.markdown
@@ -8,8 +8,8 @@ permalink: /architecture/risk-engine
 
 {% endraw %}
 
-
 ## Overview
+
 The Risk Engine is responsible for handling and executing active heuristics. It is the primary downstream consumer of ETL output. The Risk Engine will receive data from the ETL and execute the heuristics associated with the data.
If an invalidation occurs, the Risk Engine will return an `InvalidationOutcome` to the `EngineManager`. The `EngineManager` will then create an `Alert` using the `InvalidationOutcome` and publish it to the Alerting system.
@@ -40,17 +40,21 @@ graph LR;
 {% endraw %}
 
 ## Inter-Connectivity
+
 The ETL publishes `Heuristic Input` to the Risk Engine using a relay channel. The Risk Engine will subscribe to this channel to receive and process updates as they are published by the ETL.
 
 The Risk Engine will also publish events to the Alerting system using a separate downstream relay channel. The Alerting system will subscribe to this channel to receive and process updates as they are published by the Risk Engine.
 
 ## Heuristic Session
+
 A heuristic session refers to the execution and representation of a single heuristic. A heuristic session is uniquely identified by a `SUUID` and is associated with a single `PUUID`. A heuristic session is created by the `EngineManager` when a user requests to run an active session. The `EngineManager` will create a new `HeuristicSession` and pass it to the `RiskEngine` to be executed. The `RiskEngine` will then execute the heuristic session and return an `InvalidationOutcome` to the `EngineManager`. The `EngineManager` will then create an `Alert` using the `InvalidationOutcome` and publish it to the Alerting system.
 
 ## Session UUID (SUUID)
-The SUUID is a unique identifier that is used to identify a specific heuristic session. The SUUID is generated by the `EngineManager` when a user requests to run a new heuristic session. The SUUID is used to uniquely identify a specific heuristic session. This allows the `EngineManager` to perform operations on a specific heuristic session such as removing it or updating it.
-A `SUUID` constitutes of both a unique `UUID` and a `PID`.
+
+The SUUID is a unique identifier for a specific heuristic session, generated by the `EngineManager` when a user requests to run a new heuristic session. It allows the `EngineManager` to perform operations on a specific heuristic session, such as removing it or updating it.
+
+A `SUUID` consists of both a unique `UUID` and a `PID`.
 
 A `SessionPID` is encoded using the following 3 byte array sequence:
+
 ```
 0           1           2           3
 |-----------|-----------|-----------|
 ```
 
 ## Heuristic Input
+
 The heuristic input is a struct that contains the following fields:
+
 * `PUUID` - The ID of the heuristic that the input data is intended for
 * `Input` - Transit data that was generated by the ETL
 
 ## Heuristic
+
 A heuristic is a logical execution module that defines some set of invalidation criteria. The heuristic is responsible for processing the input data and determining if an invalidation has occurred. If an invalidation has occurred, the heuristic will return an `InvalidationOutcome` that contains relevant metadata necessary for the `EngineManager` to create an `Alert`.
 
 ### Hardcoded Base Heuristic
+
 Every hardcoded heuristic must implement a `Heuristic` interface to be compatible for invalidation by the `Hardcoded` Risk Engine type.
Currently the interface is as follows: + ``` type Heuristic interface { - Addressing() bool - InputType() core.RegisterType - Invalidate(core.TransitData) (*core.InvalOutcome, bool, error) - SUUID() core.SUUID - SetSUUID(core.SUUID) + Addressing() bool + InputType() core.RegisterType + Invalidate(core.TransitData) (*core.InvalOutcome, bool, error) + SUUID() core.SUUID + SetSUUID(core.SUUID) } -``` +``` ### Heuristic Input Type + The heuristic input type is a `RegisterType` that defines the type of data that the heuristic will receive as input. The heuristic input type is defined by the `InputType()` method of the `Heuristic` interface. The heuristic input type is used by the `RiskEngine` to determine if the input data is compatible with the heuristic. If the input data is not compatible with the heuristic, the `RiskEngine` will not execute the heuristic and will return an error. ### Addressing + All heuristics have a boolean property `Addressing` which determines if the heuristic is addressable. To be addressable, a heuristic must only execute under the context of a single address. -For example, a `balance_enforcement` heuristic session will be addressable because it only executes invalidation logic for the native ETH balance of a single address. +For example, a `balance_enforcement` heuristic session will be addressable because it only executes invalidation logic for the native ETH balance of a single address. ### Heuristic States + State is used to represent the current state of a heuristic. The state of a heuristic is represented by a `HeuristicState` type. The following states are supported: -- `Running` - The heuristic is currently running and is being executed by the `RiskEngine` -- `Inactive` - The heuristic is currently inactive and is not being executed by the `RiskEngine` -- `Paused` - The heuristic is currently paused and is not being executed by the `RiskEngine` + +* `Running` - The heuristic is currently running and is being executed by the `RiskEngine` +* `Inactive` - The heuristic is currently inactive and is not being executed by the `RiskEngine` +* `Paused` - The heuristic is currently paused and is not being executed by the `RiskEngine` ### Execution Type + A risk engine has an associated execution type that defines how the risk engine will execute the heuristic. There are two types of execution: + 1. `Hardcoded` - The heuristic invalidation logic is hardcoded directly into the risk engine registry using native Go code. These heuristics can only be changed by modifying the application source code of the engine registry. 2. `Dynamic` - The heuristic invalidation logic is dynamically loaded and executed by a risk engine. These heuristics can be changed without modifying the application source code of the engine registry. **As of now, this is not supported.** ## Hardcoded Heuristic Types + As of now, there are two types of hardcoded heuristics that a user can deploy active sessions for: -- `invocation` - Heuristic that is triggered when a specific smart contract function is invoked **Not currently supported** -- `balance_enforcement` - Heuristic that checks an address's balance changes and ensures that the balance does not exceed a certain threshold + +* `invocation` - Heuristic that is triggered when a specific smart contract function is invoked **Not currently supported** +* `balance_enforcement` - Heuristic that checks an address's balance changes and ensures that the balance does not exceed a certain threshold ### How to add a new heuristic -1. 
Create a new file in the `internal/engine/registry` directory that stores the heuristic implementation. The implementation must adhere to the interface specified for the `BaseHeuristic` type. -3. Add a new entry to the `HeuristicType` enum in `internal/core/constants.go` -2. Add a new entry to the registry in `internal/engine/registry/registry.go` +1. Create a new file in the `internal/engine/registry` directory that stores the heuristic implementation. The implementation must adhere to the interface specified for the `BaseHeuristic` type. + +2. Add a new entry to the `HeuristicType` enum in `internal/core/constants.go` +3. Add a new entry to the registry in `internal/engine/registry/registry.go` ## Dynamic Heuristic Types + **Not currently supported** Dynamic heuristics are programmable entities that can be deployed as arbitrary code by a user. They are represented via some code standard that is dynamically executable by a Risk Engine. Unlike `Hardcoded` heuristics, dynamic heuristics can be deployed and executed without modifying the source code of the Pessimism application. diff --git a/docs/architecture/etl.markdown b/docs/architecture/etl.markdown index 99afe7e8..17ba0ce3 100644 --- a/docs/architecture/etl.markdown +++ b/docs/architecture/etl.markdown @@ -8,24 +8,26 @@ permalink: /architecture/etl {% endraw %} - -The Pessimism ETL is a generalized abstraction for a DAG-based component system that continuously transforms chain data into inputs for consumption by a Risk Engine in the form of intertwined data “pipelines”. This DAG based representation of ETL operations is done to ensure that the application can optimally scale to support many active heuristics. This design allows for the reuse of modularized ETL components and de-duplication of conflicting pipelines under certain key logical circumstances. +The Pessimism ETL is a generalized abstraction for a DAG-based component system that continuously transforms chain data into inputs for consumption by a Risk Engine in the form of intertwined data “pipelines”. This DAG based representation of ETL operations is done to ensure that the application can optimally scale to support many active heuristics. This design allows for the reuse of modularized ETL components and de-duplication of conflicting pipelines under certain key logical circumstances. ## Component -A component refers to a graph node within the ETL system. Every component performs some operation for transforming data from any data source into a consumable input for the Risk Engine to ingest. + +A component refers to a graph node within the ETL system. Every component performs some operation for transforming data from any data source into a consumable input for the Risk Engine to ingest. Currently, there are three total component types: + 1. `Pipe` - Used to perform local arbitrary computations _(e.g. Extracting L1Withdrawal transactions from a block)_ 2. `Oracle` - Used to poll and collect data from some third-party source _(e.g. Querying real-time account balance amounts from an op-geth execution client)_ 3. `Aggregator` - Used to synchronize events between asynchronous data sources _(e.g. Synchronizing L1/L2 blocks to understand real-time changes in bridging TVL)_ - -### Inter-Connectivity + +### Inter-Connectivity + The diagram below showcases how interactivity between components occurs: {% raw %}
graph LR; A((Component0)) -->|dataX| C[Ingress]; - + subgraph B["Component1"] C --> D[ingressHandler]; D --> |dataX| E(eventLoop); @@ -39,8 +41,8 @@ graph LR;
{% endraw %} - #### Egress Handler + All component types use an `egressHandler` struct for routing transit data to actively subscribed downstream ETL components. {% raw %} @@ -51,18 +53,21 @@ flowchart TD; {% endraw %} - #### Ingress Handler + All component types also use an `ingressHandler` struct for ingesting active transit data from upstream ETL components. ### Component UUID (CUUID) + All components have a UUID that stores critical identification data. Component IDs are used by higher order abstractions to: -* Represent a component DAG + +* Represent a component DAG * Understand when component duplicates occur in the system Component UUID's constitute of both a randomly generated `UUID` and a deterministic `PID`. This is done to ensure uniqueness of each component instance while also ensuring collision based properties so that components can be reused when viable. A `ComponentPID` is encoded using the following four byte sequence: + ``` 0 1 2 3 4 |--------|--------|--------|--------| @@ -71,13 +76,16 @@ A `ComponentPID` is encoded using the following four byte sequence: ``` ### State Update Handling + **NOTE - State handling policies by management abstractions has yet to be properly fleshed out** ### Pipe -Pipes are used to perform arbitrary transformations on some provided upstream input data. -Once input data processing has been completed, the output data is then submitted to its respective destination(s). + +Pipes are used to perform arbitrary transformations on some provided upstream input data. +Once input data processing has been completed, the output data is then submitted to its respective destination(s). #### Attributes + * An `ActivityState` channel with a pipeline manager * Ingress handler that other components can write to * `TransformFunc` - A processing function that performs some data translation/transformation on respective inputs @@ -85,14 +93,17 @@ Once input data processing has been completed, the output data is then submitted * A specified output data type #### Example Use Case(s) + * Generating opcode traces for some EVM transaction -* Parsing emitted events from a transaction +* Parsing emitted events from a transaction -### Oracle -Oracles are responsible for collecting data from some external third party _(e.g. L1 geth node, L2 rollup node, etc.)_. As of now, oracle's are configurable through the use of a standard `OracleDefinition` interface that allows developers to write arbitrary oracle logic. +### Oracle + +Oracles are responsible for collecting data from some external third party _(e.g. L1 geth node, L2 rollup node, etc.)_. As of now, oracle's are configurable through the use of a standard `OracleDefinition` interface that allows developers to write arbitrary oracle logic. The following key interface functions are supported/enforced: + * `ReadRoutine` - Routine used for reading/polling real-time data for some arbitrarily configured data source -* `BackTestRoutine` - _Optional_ routine used for sequentially backtesting from some starting to ending block heights. +* `BackTestRoutine` - _Optional_ routine used for sequentially backtesting from some starting to ending block heights. Unlike other components, `Oracles` actually employ _2 go routines_ to safely operate. This is because the definition routines are run as a separate go routine with a communication channel to the actual `Oracle` event loop. 
This is visualized below: @@ -107,8 +118,8 @@ graph LR; {% endraw %} - #### Attributes + * A communication channel with the pipeline manager * Poller/subscription logic that performs real-time data reads on some third-party source * An `egressHandler` that stores dependencies to write to (i.e. Other pipeline components, heuristic engine) @@ -118,21 +129,24 @@ graph LR; * _(Optional)_ Backtest support for polling some data between some starting and ending block heights * _(Optional)_ Use of an application state cache to understand which parameter sets to sequentially feed to an endpoint - #### Example Use Case(s) + * Polling layer-2 block data in real-time for state updates * Interval polling user provided chain addresses for native ETH amounts ### (TBD) Aggregator + **NOTE - This component type is still in-development** Aggregators are used to solve the problem where a pipe or a heuristic input will require multiple sources of data to perform an execution sequence. Since aggregators are subscribing to more than one data stream with different output frequencies, they must employ a synchronization policy for collecting and propagating multi-data inputs within a highly asynchronous environment. #### Attributes + * Able to read heterogenous transit data from an arbitrary number of component ingresses * A synchronization policy that defines how different transit data from multiple ingress streams will be aggregated into a collectively bound single piece of data * EgressHandler to handle downstream transit data routing to other components or destinations #### Single Value Subscription + _Only send output at the update of a single ingress stream_ Single Value Subscription refers to a synchronization policy where a bucketed multi-data tuple is submitted every time there’s an update to a single input data queue. @@ -140,6 +154,7 @@ Single Value Subscription refers to a synchronization policy where a bucketed mu For example we can have a heuristic that subscribes to blocks from two heterogenous chains (layer1, layer2) or `{ChainA, ChainB}`, let's assume `BLOCK_TIME(ChainA) > BLOCK_TIME(ChainB)`. We can either specify that the heuristic will run every time there's an update or a new block from `ChainA`: + ``` { "A:latest_blocks": [xi] where cardinality = 1, @@ -149,6 +164,7 @@ We can either specify that the heuristic will run every time there's an update o ``` Or we can specify the inverse, every-time there's a new block from `ChainB`: + ``` { "A:latest_blocks": [NULL OR xi] where cardinality <= 1, @@ -158,15 +174,17 @@ We can either specify that the heuristic will run every time there's an update o This should be extendable to any number of heterogenous data sources. - ## Registry + A registry submodule is used to store all ETL data register definitions that provide the blueprint for a unique ETL component type. A register definition consists of: -- `DataType` - The output data type of the component node. This is used for data serialization/deserialization by both the ETL and Risk Engine subsystems. -- `ComponentType` - The type of component being invoked (_e.g. Oracle_). -- `ComponentConstructor` - Constructor function used to create unique component instances. All components must implement the `Component` interface. -- `Dependencies` - Ordered slice of data register dependencies that are necessary for the component to operate. For example, a component that requires a geth block would have a dependency list of `[geth.block]`. 
This dependency list is used to ensure that the ETL can properly construct a component graph that satisfies all component dependencies. + +* `DataType` - The output data type of the component node. This is used for data serialization/deserialization by both the ETL and Risk Engine subsystems. +* `ComponentType` - The type of component being invoked (_e.g. Oracle_). +* `ComponentConstructor` - Constructor function used to create unique component instances. All components must implement the `Component` interface. +* `Dependencies` - Ordered slice of data register dependencies that are necessary for the component to operate. For example, a component that requires a geth block would have a dependency list of `[geth.block]`. This dependency list is used to ensure that the ETL can properly construct a component graph that satisfies all component dependencies. ## Addressing + Some component's require knowledge of a specific address to properly function. For example, an oracle that polls a geth node for native ETH balance amounts would need knowledge of the address to poll. To support this, the ETL leverages a shared state store between the ETL and Risk Engine subsystems. Shown below is how the ETL and Risk Engine interact with the shared state store using a `BalanceOracle` component as an example: @@ -200,18 +218,19 @@ graph LR; {% endraw %} - - ### Geth Block Oracle Register + A `GethBlock` register refers to a block output extracted from a go-ethereum node. This register is used for creating `Oracle` components that poll and extract block data from a go-ethereum node in real-time. ### Geth Account Balance Oracle Register + An `AccountBalance` register refers to a native ETH balance output extracted from a go-ethereum node. This register is used for creating `Oracle` components that poll and extract native ETH balance data for some state persisted addresses from a go-ethereum node in real-time. Unlike, the `GethBlock` register, this register requires knowledge of an address set that's shared with the risk engine to properly function and is therefore addressable. Because of this, any heuristic that uses this register must also be addressable. ## Managed ETL ### Component Graph + The ETL uses a `ComponentGraph` construct to represent and store critical component inter-connectivity data _(ie. component node entries and graph edges)_. A graph edge is represented as a binded communication path between two arbitrary component nodes (`c1`, `c2`). Adding an edge from some component (`c1`) to some downstream component (`c2`) results in `c1` having a path to the ingress of `c2` in its [egress handler](#egress-handler). This would look something like: @@ -237,38 +256,41 @@ graph TB; {% endraw %} -**NOTE:** The component graph used in the ETL is represented as a _DAG_ (Directed Acyclic Graph), meaning that no bipartite edge relationships should exist between two components (`c1`, `c2`) where `c1-->c2` && `c2-->c1`. While there are no explicit checks for this in the code software, it should be impossible given that all components declare entrypoint register dependencies within their metadata, meaning that a component could only be susceptible to bipartite connectivity in the circumstance where a component registry definition declares inversal input->output of an existing component. - +**NOTE:** The component graph used in the ETL is represented as a _DAG_ (Directed Acyclic Graph), meaning that no bipartite edge relationships should exist between two components (`c1`, `c2`) where `c1-->c2` && `c2-->c1`. 
While there are no explicit checks for this in the code software, it should be impossible given that all components declare entrypoint register dependencies within their metadata, meaning that a component could only be susceptible to bipartite connectivity in the circumstance where a component registry definition declares inversal input->output of an existing component. ### Pipeline + Pipelines are used to represent some full component path in a DAG based `ComponentGraph`. A pipeline is a sequence of components that are connected together in a way to express meaningful ETL operations for extracting some heuristic input for consumption by the Risk Engine. ### Pipeline States -- `Backfill` - Backfill denotes that the pipeline is currently performing a backfill operation. This means the pipeline is sequentially reading data from some starting height to the most recent block height. This is useful for building state dependent pipelines that require some knowledge of prior history to make live assessments. For example, detecting imbalances between the native ETH deposit supply on the L1 portal contract and the TVL unlocked on the L2 chain would require indexing the prior history of L1 deposits to construct correct supply values. -- `Live` - Live denotes that the pipeline is currently performing live operations. This means the pipeline is reading data from the most recent block height. -- `Stopped` - Stopped denotes that the pipeline is currently not performing any operations. This means the pipeline is neither reading nor processing any data. -- `Paused` - Paused denotes that the pipeline is currently not performing any operations. This means the pipeline is neither reading nor processing any data. The difference between `Stopped` and `Paused` is that a `Paused` pipeline can be resumed at any time while a `Stopped` pipeline must be restarted. -- `Error` - Error denotes that the pipeline is currently in an error state. This means the pipeline is neither reading nor processing any data. The difference between `Stopped` and `Error` is that an `Error` pipeline can be resumed at any time while a `Stopped` pipeline must be restarted. + +* `Backfill` - Backfill denotes that the pipeline is currently performing a backfill operation. This means the pipeline is sequentially reading data from some starting height to the most recent block height. This is useful for building state dependent pipelines that require some knowledge of prior history to make live assessments. For example, detecting imbalances between the native ETH deposit supply on the L1 portal contract and the TVL unlocked on the L2 chain would require indexing the prior history of L1 deposits to construct correct supply values. +* `Live` - Live denotes that the pipeline is currently performing live operations. This means the pipeline is reading data from the most recent block height. +* `Stopped` - Stopped denotes that the pipeline is currently not performing any operations. This means the pipeline is neither reading nor processing any data. +* `Paused` - Paused denotes that the pipeline is currently not performing any operations. This means the pipeline is neither reading nor processing any data. The difference between `Stopped` and `Paused` is that a `Paused` pipeline can be resumed at any time while a `Stopped` pipeline must be restarted. +* `Error` - Error denotes that the pipeline is currently in an error state. This means the pipeline is neither reading nor processing any data. 
The difference between `Stopped` and `Error` is that an `Error` pipeline halted because of a runtime failure rather than an explicit stop; it can be resumed once the fault is addressed, while a `Stopped` pipeline must be restarted.
 
 ### Pipeline Types
+
 There are two types of pipelines:
 
 **Live**
 A live pipeline is a pipeline that is actively running and performing ETL operations on some data fetched in real-time. For example, a live pipeline could be used to extract newly curated block data from a go-ethereum node.
 
-
 **Backtest**
-A backtest pipeline is a pipeline that is used to sequentially backtest some component sequence from some starting to ending block height. For example, a backtest pipeline could be used to backtest a _balance_enforcement_ heuristic between L1 block heights `0` to `1000`. 
-
+A backtest pipeline is a pipeline that is used to sequentially backtest some component sequence from some starting to ending block height. For example, a backtest pipeline could be used to backtest a _balance_enforcement_ heuristic between L1 block heights `0` to `1000`.
 
 ### Pipeline UUID (PUUID)
+
 All pipelines have a PUUID that stores critical identification data. Pipeline UUIDs are used by higher order abstractions to:
+
 * Route heuristic inputs between the ETL and Risk Engine
 * Understand when pipeline collisions between `PIDs` occur
 
-Pipeline UUID's constitute of both a randomly generated `UUID` and a deterministic `PID`. This is done to ensure uniqueness of each component instance while also ensuring collision based properties so that overlapping components can be deduplicated when viable. 
+Pipeline UUIDs consist of both a randomly generated `UUID` and a deterministic `PID`. This is done to ensure uniqueness of each component instance while also ensuring collision based properties so that overlapping components can be deduplicated when viable.
 
 A `PipelinePID` is encoded using the following 9 byte array sequence:
+
 ```
 0        1                                        5                                        9
 |--------|----------------------------------------|----------------------------------------|
 ```
 
 ### Collision Analysis
+
 **NOTE - This section is still in-development**
 
 Pipeline collisions occur when two pipelines with the same `PID` are generated. This can occur when two pipelines have identical component sequences and valid stateful properties.
 
 For some pipeline collision to occur between two pipelines (`P0`, `P1`), the following properties must hold true:
+
 1. `P0` must have the same `PID` as `P1`
 2. `P0` and `P1` must be live pipelines that aren't performing backtests or backfilling operations
 
 Once a collision is detected, the ETL will attempt to deduplicate the pipeline by:
+
 1. Stopping the event loop of `P1`
 2. Removing the `PID` of `P1` from the pipeline manager
 3.
Merging shared state from `P1` to `P0`
 
-
 ## ETL Manager
+
 `EtlManager` is used for connecting lower-level objects (_Component Graph, Pipeline_) together in a way to express meaningful ETL administration logic; i.e.:
-- Creating a new pipeline
-- Removing a pipeline
-- Merging some pipelines
-- Updating a pipeline
+
+* Creating a new pipeline
+* Removing a pipeline
+* Merging some pipelines
+* Updating a pipeline
diff --git a/docs/heuristics.markdown b/docs/heuristics.markdown
index a06fe42f..48035f85 100644
--- a/docs/heuristics.markdown
+++ b/docs/heuristics.markdown
@@ -6,8 +6,8 @@ permalink: /heuristics
 
 # Heuristics
 
-
 ## Balance Enforcement
+
 The hardcoded `balance_enforcement` heuristic checks the native ETH balance of some address every `n` milliseconds and alerts to Slack if the account's balance is ever less than the `lower` or greater than the `upper` value. This heuristic is useful for monitoring hot wallets and other accounts that should always have a balance above a certain threshold.
 
 ### Parameters
@@ -19,7 +19,8 @@ The hardcoded `balance_enforcement` heuristic checks the native ETH balance of s
 | upper | float | The ETH upper bound of the balance |
 
 ### Example Deploy Request
-```
+
+```bash
 curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 --header 'Content-Type: text/plain' \
 --data-raw '{
@@ -40,6 +41,7 @@ curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 ```
 
 ## Contract Event
+
 The hardcoded `contract_event` heuristic scans newly produced blocks for a specific contract event and alerts to Slack if the event is found. This heuristic is useful for monitoring specific contract events that should never occur.
 
 ### Parameters
@@ -49,11 +51,10 @@ The hardcoded `contract_event` heuristic scans newly produced blocks for a speci
 | address | string | The address of the contract to scan for the events |
 | args | []string | The event signatures to scan for |
 
-
-**NOTE:** The `args` field is an array of string event declarations (eg. `Transfer(address,address,uint256)`). Currently Pessimism makes no use of contract ABIs so the manually specified event declarations are not validated for correctness. If the event declaration is incorrect, the heuristic session will never alert but will continue to scan. 
-
+**NOTE:** The `args` field is an array of string event declarations (e.g. `Transfer(address,address,uint256)`). Currently Pessimism makes no use of contract ABIs, so the manually specified event declarations are not validated for correctness. If the event declaration is incorrect, the heuristic session will never alert but will continue to scan.
 
 ### Example Deploy Request
+
 ```bash
 curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 --header 'Content-Type: text/plain' \
@@ -74,8 +75,9 @@ curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 ```
 
 ## Withdrawal Enforcement
+
 **NOTE:** This heuristic requires an active RPC connection to both L1 and L2 networks.
-
+
 The hardcoded `withdrawal_enforcement` heuristic scans for active `WithdrawalProven` events on an L1Portal contract. Once an event is detected, the heuristic proceeds to scan for the corresponding `withdrawalHash` entry in the L2ToL1MessagePasser contract's internal state. If the `withdrawalHash` is not found, the heuristic alerts to Slack.
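
In rough terms, the check this heuristic performs amounts to the following Go
sketch (the client interface and type names here are illustrative, not
Pessimism's actual types):

```go
package main

import "fmt"

// withdrawalProven mirrors the L1 event the heuristic watches for.
type withdrawalProven struct {
    withdrawalHash [32]byte
}

// l2Client abstracts the single L2 read the check needs: whether the
// L2ToL1MessagePasser contract has recorded the withdrawal hash.
type l2Client interface {
    hasWithdrawalHash(hash [32]byte) (bool, error)
}

// checkWithdrawal returns an alert message when a withdrawal proven on L1
// has no corresponding entry in the L2ToL1MessagePasser state.
func checkWithdrawal(l2 l2Client, ev withdrawalProven) (string, error) {
    exists, err := l2.hasWithdrawalHash(ev.withdrawalHash)
    if err != nil {
        return "", err
    }
    if !exists {
        return fmt.Sprintf("withdrawal %x proven on L1 but missing on L2", ev.withdrawalHash), nil
    }
    return "", nil // hash found on L2; nothing to report
}

// stubL2 is a toy in-memory client used only to exercise the check.
type stubL2 struct{ known map[[32]byte]bool }

func (s stubL2) hasWithdrawalHash(h [32]byte) (bool, error) { return s.known[h], nil }

func main() {
    ev := withdrawalProven{withdrawalHash: [32]byte{0x01}}
    msg, _ := checkWithdrawal(stubL2{known: map[[32]byte]bool{}}, ev)
    fmt.Println(msg) // alerts: the hash was never recorded on L2
}
```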
 ### Parameters
@@ -85,19 +87,19 @@ The hardcoded `withdrawal_enforcement` heuristic scans for active `WithdrawalPro
 | l1_portal_address | string | The address of the L1Portal contract |
 | l2_to_l1_address | string | The address of the L2ToL1MessagePasser contract |

-
 ### Example Deploy Request
-```
+
+```bash
 curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 --header 'Content-Type: text/plain' \
 --data-raw '{
     "method": "run",
-    "params": {
-        "network": "layer1",
-        "pipeline_type": "live",
-        "type": "withdrawal_enforcement",
-        "start_height": null,
-        "alert_destination": "slack",
+  "params": {
+    "network": "layer1",
+    "pipeline_type": "live",
+    "type": "withdrawal_enforcement",
+    "start_height": null,
+    "alert_destination": "slack",
     "heuristic_params": {
       "l1_portal_address": "0x111",
       "l2_to_l1_address": "0x333"
     }
   }
 }'
 ```

 ## Fault Detection
+
 **NOTE:** This heuristic requires an active RPC connection to both L1 and L2 networks. Furthermore, the Pessimism implementation of the fault detector assumes that a submitted L2 output on L1 will correspond to a canonical block on L2.

 The hardcoded `fault_detector` heuristic scans for active `OutputProposed` events on an L1 Output Oracle contract. Once an event is detected, the heuristic implementation proceeds to reconstruct a local state output for the corresponding L2 block. If there is a mismatch between the L1 output and the local state output, the heuristic alerts.

 ### Parameters
@@ -118,22 +120,22 @@ The hardcoded `fault_detector` heuristic scans for active `OutputProposed` event
 | l2_output_address | string | The address of the L1 output oracle |
 | l2_to_l1_address | string | The address of the L2ToL1MessagePasser contract |

-
 ### Example Deploy Request
-```
+
+```bash
 curl --location --request POST 'http://localhost:8080/v0/heuristic' \
 --header 'Content-Type: text/plain' \
 --data-raw '{
     "method": "run",
-    "params": {
-        "network": "layer1",
-        "pipeline_type": "live",
-        "type": "fault_detector",
-        "start_height": null,
-        "alert_destination": "slack",
+  "params": {
+    "network": "layer1",
+    "pipeline_type": "live",
+    "type": "fault_detector",
+    "start_height": null,
+    "alert_destination": "slack",
     "heuristic_params": {
       "l2_output_address": "0x111",
       "l2_to_l1_address": "0x333"
     }
   }
 }'
-```
\ No newline at end of file
+```
diff --git a/docs/index.markdown b/docs/index.markdown
index ad979f5a..15222a75 100644
--- a/docs/index.markdown
+++ b/docs/index.markdown
@@ -5,6 +5,7 @@ layout: home
 Detect real-time threats on OP Stack-compatible chains

 ## Contents
+
 - [Architecture](../pessimism/architecture)
 - [REST API](../pessimism/api)
 - [ETL Subsystem](../pessimism/architecture/etl)
@@ -15,7 +16,9 @@ Detect real-time threats on Op-stack compatible chains
 - [Telemetry Documentation](../pessimism/telemetry)

 ## GitHub Pages
+
 The Pessimism documentation is hosted on GitHub Pages. To view the documentation, please visit [https://base-org.github.io/pessimism/architecture](https://base-org.github.io/pessimism/architecture).
 ## Contributing
+
 If you would like to contribute to the Pessimism documentation, please consult the guidelines stipulated in the [CONTRIBUTING.md](../CONTRIBUTING.md) file __before__ submitting a pull request.
diff --git a/docs/telemetry.markdown b/docs/telemetry.markdown
index f86452fa..4e9bcac0 100644
--- a/docs/telemetry.markdown
+++ b/docs/telemetry.markdown
@@ -4,16 +4,19 @@ title: Telemetry
 permalink: /telemetry
 ---

-Pessimism uses [Prometheus](https://prometheus.io/docs/introduction/overview/) for telemetry. The application spins up a metrics server on a specified port (default 7300) and exposes the `/metrics` endpoint.
+Pessimism uses [Prometheus](https://prometheus.io/docs/introduction/overview/) for telemetry. The application spins up a metrics server on a specified port (default 7300) and exposes the `/metrics` endpoint.

 ### Local Testing
+
 To verify that metrics are being collected locally, curl the metrics endpoint via `curl localhost:7300/metrics`. The response should display all custom and system metrics.

 ### Server Configuration
-The default configuration within `config.env.template` should be suitable in most cases, however if you do not want to run the metrics server, set `METRICS_ENABLED=0` and the metrics server will not be started. This is useful mainly for testing purposes.
+
+The default configuration within `config.env.template` should be suitable in most cases; however, if you do not want to run the metrics server, set `METRICS_ENABLED=0` and it will not be started. This is useful mainly for testing purposes.

 ## Generating Documentation
-To generate documentation for metrics, run `make docs` from the root of the repository. This will generate markdown
+
+To generate documentation for metrics, run `make docs` from the root of the repository. This will generate markdown
 that can be pasted directly into the Current Metrics section below to keep the system metric documentation up to date.

 ## Current Metrics
diff --git a/pull_request_template.md b/pull_request_template.md
index 1b82cd6f..778b9381 100644
--- a/pull_request_template.md
+++ b/pull_request_template.md
@@ -1,15 +1,18 @@
-
+# Pull Request Template
+
+
 ## Fixes Issue

 Fixes #

+
 ## Changes proposed

-
 ### Screenshots (Optional)

 ## Note to reviewers
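Finally, circling back to the telemetry documentation above: a minimal Go sketch of the local metrics check. This is hypothetical; it assumes the documented default port 7300 and a running Pessimism instance.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hit the documented default metrics endpoint (port 7300).
	resp, err := http.Get("http://localhost:7300/metrics")
	if err != nil {
		log.Fatalf("metrics server unreachable: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The body should list all custom and system metrics in the
	// Prometheus exposition format, mirroring `curl localhost:7300/metrics`.
	fmt.Println(string(body))
}
```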