docs: improve basic, transfer, streaming and advanced documentation #160

Closed · wants to merge 4 commits
144 changes: 105 additions & 39 deletions advanced/advanced-01-open-telemetry/README.md

This sample will show you how you can:

- generate traces with [OpenTelemetry](https://opentelemetry.io) and collect and visualize them with [Jaeger](https://www.jaegertracing.io/)
- automatically collect metrics from infrastructure, server endpoints and client libraries with [Micrometer](https://micrometer.io) and visualize them with [Prometheus](https://prometheus.io)

For this, the sample uses the OpenTelemetry Java Agent, which dynamically injects bytecode to capture telemetry from several popular [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation).

In order to visualize and analyze the traces and metrics, we use
[OpenTelemetry exporters](https://opentelemetry.io/docs/instrumentation/js/exporters/) to export data into the Jaeger
tracing backend and a Prometheus endpoint.
We will use a single docker-compose to run the consumer, the provider, and a Jaeger backend.
Let's have a look at the [docker-compose.yaml](docker-compose.yaml). We created a consumer and a provider service with
entry points specifying the OpenTelemetry Java Agent as a JVM parameter.
In addition, the [Jaeger exporter](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#jaeger-exporter)
is configured using environment variables as required by OpenTelemetry. The
[Prometheus exporter](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#prometheus-exporter)
is configured to expose a Prometheus metrics endpoint.

To run the consumer, the provider, and Jaeger execute the following commands in the project root folder:

```shell
docker compose -f advanced/advanced-01-open-telemetry/docker-compose.yaml up --abort-on-container-exit
```

Open a new terminal and register the data planes for the provider and consumer:

```shell
curl -H 'Content-Type: application/json' \
-H "X-Api-Key: password" \
-d @transfer/transfer-00-prerequisites/resources/dataplane/register-data-plane-provider.json \
-X POST "http://localhost:19193/management/v2/dataplanes" \
-s | jq
```

```shell
curl -H 'Content-Type: application/json' \
-H "X-Api-Key: password" \
-d @transfer/transfer-00-prerequisites/resources/dataplane/register-data-plane-consumer.json \
-X POST "http://localhost:29193/management/v2/dataplanes" \
-s | jq
```

Then use these three calls to create the Asset, the Policy Definition and the Contract Definition:

```shell
curl -H "X-Api-Key: password" \
-d @transfer/transfer-01-negotiation/resources/create-asset.json \
-H 'content-type: application/json' http://localhost:19193/management/v2/assets \
-s | jq
```

Create a Policy on the provider connector:

```shell
curl -H "X-Api-Key: password" \
-d @transfer/transfer-01-negotiation/resources/create-policy.json \
-H 'content-type: application/json' http://localhost:19193/management/v2/policydefinitions \
-s | jq
```

Follow up with the creation of a contract definition:

```shell
curl -H "X-Api-Key: password" \
-d @transfer/transfer-01-negotiation/resources/create-contract-definition.json \
-H 'content-type: application/json' http://localhost:19193/management/v2/contractdefinitions \
-s | jq
```

### Negotiate the contract

The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a
contract. However, in this sample we already have the provider asset `assetId`, so we can get the related dataset
directly with this call:


```shell
curl -H "X-Api-Key: password" \
-H "Content-Type: application/json" \
-d @advanced/advanced-01-open-telemetry/resources/get-dataset.json \
-X POST "http://localhost:29193/management/v2/catalog/dataset/request" \
-s | jq
```

The output will be something like:

```json
{
  "@id": "assetId",
  "@type": "dcat:Dataset",
  "odrl:hasPolicy": {
    "@id": "MQ==:YXNzZXRJZA==:YjI5ZDVkZDUtZWU0Mi00NWRiLWE2OTktYjNmMjlmMWNjODk3",
    "@type": "odrl:Set",
    "odrl:permission": [],
    "odrl:prohibition": [],
    "odrl:obligation": [],
    "odrl:target": "assetId"
  },
  "dcat:distribution": [
    {
      "@type": "dcat:Distribution",
      "dct:format": {
        "@id": "HttpProxy"
      },
      "dcat:accessService": "06348bca-6bf0-47fe-8bb5-6741cff7a955"
    },
    {
      "@type": "dcat:Distribution",
      "dct:format": {
        "@id": "HttpData"
      },
      "dcat:accessService": "06348bca-6bf0-47fe-8bb5-6741cff7a955"
    }
  ],
  "edc:name": "product description",
  "edc:id": "assetId",
  "edc:contenttype": "application/json",
  "@context": {
    "dct": "https://purl.org/dc/terms/",
    "edc": "https://w3id.org/edc/v0.0.1/ns/",
    "dcat": "https://www.w3.org/ns/dcat/",
    "odrl": "http://www.w3.org/ns/odrl/2/",
    "dspace": "https://w3id.org/dspace/v0.8/"
  }
}
```
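Rather than copying the id by hand, the offer id can be extracted from the response with `jq` (a sketch; the filter assumes the response shape shown above):

```shell
# Request the dataset and pull the offer id out of odrl:hasPolicy/@id.
OFFER_ID=$(curl -H "X-Api-Key: password" \
  -H "Content-Type: application/json" \
  -d @advanced/advanced-01-open-telemetry/resources/get-dataset.json \
  -X POST "http://localhost:29193/management/v2/catalog/dataset/request" \
  -s | jq -r '."odrl:hasPolicy"."@id"')
echo "$OFFER_ID"
```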

Copy the `odrl:hasPolicy/@id` value into the [negotiate-contract.json](resources/negotiate-contract.json) file and
request the contract negotiation:

```shell
curl -H "X-Api-Key: password" \
-H "Content-Type: application/json" \
-d @advanced/advanced-01-open-telemetry/resources/negotiate-contract.json \
-X POST "http://localhost:29193/management/v2/contractnegotiations" \
-s | jq
```

At this point the contract agreement should already have been issued. To verify it, check the contract negotiation
state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiation call.

```shell
curl -H 'X-Api-Key: password' \
-X GET "http://localhost:29193/management/v2/contractnegotiations/{{contract-negotiation-id}}" \
-s | jq
```
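Instead of re-running the call manually, the state can be polled until the negotiation is finalized. This is a sketch: whether the state is returned as plain `state` or namespaced `edc:state` depends on the JSON-LD serialization, so the filter tries both:

```shell
# Poll the negotiation state a few times, stopping once it is FINALIZED.
NEGOTIATION_ID="{{contract-negotiation-id}}"  # id returned by the negotiation call
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  STATE=$(curl -H 'X-Api-Key: password' \
    -s "http://localhost:29193/management/v2/contractnegotiations/$NEGOTIATION_ID" \
    | jq -r '."edc:state" // .state')
  echo "Negotiation state: $STATE"
  [ "$STATE" = "FINALIZED" ] && break
  sleep 1
done
```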
Once the negotiation reaches the `FINALIZED` state, the response will contain the contract agreement id.

Finally, update the contract agreement id in the [start-transfer.json](resources/start-transfer.json) file and execute a
file transfer with the following command:

```shell
curl -H "X-Api-Key: password" \
-H "Content-Type: application/json" \
-d @advanced/advanced-01-open-telemetry/resources/start-transfer.json \
-X POST "http://localhost:29193/management/v2/transferprocesses" \
-s | jq
```
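The transfer process id can be captured from the response and used to check the transfer state afterwards (assuming the Management API returns the id as `@id`, as in the responses above):

```shell
# Start the transfer, keep its id, then fetch the transfer process state.
TRANSFER_ID=$(curl -H "X-Api-Key: password" \
  -H "Content-Type: application/json" \
  -d @advanced/advanced-01-open-telemetry/resources/start-transfer.json \
  -X POST "http://localhost:29193/management/v2/transferprocesses" \
  -s | jq -r '."@id"')
curl -H 'X-Api-Key: password' \
  -s "http://localhost:29193/management/v2/transferprocesses/$TRANSFER_ID" | jq
```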

You can access the Jaeger UI in your browser at `http://localhost:16686`. In the search tool, we can select the service
Example contract negotiation trace:
Example file transfer trace:
![File transfer](attachments/file-transfer-trace.png)
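Besides the UI, recent traces can also be fetched from Jaeger's HTTP query API (a sketch; this API is internal and unversioned, and the service name `consumer` is an assumption based on the docker-compose setup):

```shell
# Count how many recent traces Jaeger has recorded for the consumer service.
curl -s "http://localhost:16686/api/traces?service=consumer&limit=5" \
  | jq '.data | length'
```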

OkHttp and Jetty are part of the [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation)
that OpenTelemetry can capture telemetry from. We can observe spans related to OkHttp and Jetty as EDC uses both
frameworks internally. The `otel.library.name` tag of each span indicates the framework it comes from.

You can access the Prometheus UI in your browser at `http://localhost:9090`. Click the globe icon near the top right
corner (Metrics Explorer) and select a metric to display. Metrics include System (e.g. CPU usage), JVM (e.g. memory
usage), Executor service (call timings and thread pools), and the instrumented OkHttp, Jetty and Jersey libraries
(HTTP client and server).
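Metrics can also be queried programmatically through the Prometheus HTTP API (the metric name `jvm_memory_used_bytes` is an assumption; pick any name visible in the Metrics Explorer):

```shell
# Run an instant query against Prometheus and print the response status.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=jvm_memory_used_bytes' \
  | jq -r '.status'
```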

## Using another monitoring backend

Other monitoring backends can be plugged in easily with OpenTelemetry. For instance, if you want to use Azure
Application Insights instead of Jaeger, you can replace the OpenTelemetry Java Agent with the
[Application Insights Java Agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent#download-the-jar-file),
which has to be stored in the root folder of this sample as well. The only additional configuration required is to set
the `APPLICATIONINSIGHTS_CONNECTION_STRING` and `APPLICATIONINSIGHTS_ROLE_NAME` environment variables on the connector
services.
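As an illustration, the corresponding service entry in the compose file could look roughly like this. This is a sketch only: the image name, agent jar path and connection string are placeholders, not the sample's actual values:

```yaml
services:
  consumer:
    image: eclipse-temurin:17-jre
    environment:
      APPLICATIONINSIGHTS_CONNECTION_STRING: "InstrumentationKey=<your-key>"
      APPLICATIONINSIGHTS_ROLE_NAME: "consumer"
    entrypoint: [ "java",
                  "-javaagent:/app/applicationinsights-agent.jar",
                  "-jar", "/app/connector.jar" ]
```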

The Application Insights Java agent will automatically collect metrics from Micrometer, without any configuration
needed.

## Provide your own OpenTelemetry implementation

In order to provide your own OpenTelemetry implementation, you have to "deploy an OpenTelemetry service provider on the
class path":

- Create a module containing your OpenTelemetry implementation.
- Add a file in the resource directory META-INF/services. The file should be called `io.opentelemetry.api.OpenTelemetry`.
- Add to the file the fully qualified name of your custom OpenTelemetry implementation class.
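The steps above can be sketched as shell commands; the class name `org.example.telemetry.CustomOpenTelemetry` is a hypothetical placeholder for your own implementation:

```shell
# Create the ServiceLoader provider-configuration file in the module's resources.
mkdir -p src/main/resources/META-INF/services
echo "org.example.telemetry.CustomOpenTelemetry" \
  > src/main/resources/META-INF/services/io.opentelemetry.api.OpenTelemetry
cat src/main/resources/META-INF/services/io.opentelemetry.api.OpenTelemetry
```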

EDC uses a [ServiceLoader](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html)
to load your implementation of OpenTelemetry if you have provided it; otherwise it will use the registered global
OpenTelemetry. You can look at the section
`Deploying service providers on the class path` of the
[ServiceLoader documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html)
to have more information about service providers.

---
7 changes: 7 additions & 0 deletions advanced/advanced-01-open-telemetry/resources/get-dataset.json
```json
{
  "@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "DatasetRequest",
  "@id": "assetId",
  "counterPartyAddress": "http://provider:19194/protocol",
  "protocol": "dataspace-protocol-http"
}
```
4 changes: 2 additions & 2 deletions advanced/advanced-01-open-telemetry/resources/negotiate-contract.json
```json
  "providerId": "provider",
  "protocol": "dataspace-protocol-http",
  "offer": {
    "offerId": "{{offerId}}",
    "assetId": "assetId",
    "policy": {
      "@id": "{{offerId}}",
      "@type": "Set",
      "odrl:permission": [],
      "odrl:prohibition": [],
```
2 changes: 1 addition & 1 deletion advanced/advanced-01-open-telemetry/resources/start-transfer.json
```json
  "@type": "TransferRequestDto",
  "connectorId": "provider",
  "connectorAddress": "http://provider:19194/protocol",
  "contractId": "{{contract-agreement-id}}",
  "assetId": "assetId",
  "protocol": "dataspace-protocol-http",
  "dataDestination": {
```
2 changes: 1 addition & 1 deletion basic/basic-01-basic-connector/README.md
installation._

If everything works as intended you should see command-line output similar to this:

```shell
INFO 2022-01-13T13:43:57.677973407 Secrets vault not configured. Defaulting to null vault.
INFO 2022-01-13T13:43:57.680158117 Initialized Null Vault
INFO 2022-01-13T13:43:57.851181615 Initialized Core Services
10 changes: 5 additions & 5 deletions basic/basic-02-health-endpoint/README.md
An _extension_ typically consists of two things:

1. a class implementing the `ServiceExtension` interface.
2. a plugin file in the `src/main/resources/META-INF/services` directory. This file **must** be named exactly as the
interface's fully qualified class-name, and it **must** contain the fully-qualified name of the implementing class
(=plugin class).

Therefore, we require an extension class, which we'll name `HealthEndpointExtension`:

```java
public class HealthEndpointExtension implements ServiceExtension {
}
```

The `@Inject` annotation indicates that the extension needs a service that is registered by another extension, in
this case an implementation of `WebService.class`.

For that, we can use Jakarta REST annotations to implement a simple REST API:

```java
@Consumes({MediaType.APPLICATION_JSON})
@Produces({MediaType.APPLICATION_JSON})
@Path("/")
public class HealthApiController {
    // ...
}
```

Once we compile and run the application with

```shell
./gradlew clean basic:basic-02-health-endpoint:build
java -jar basic/basic-02-health-endpoint/build/libs/connector-health.jar
```
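Once the connector is up, the endpoint can be exercised with `curl`. This is a sketch: the default port `8181` and the `/health` path are assumptions about the sample's defaults and controller; adjust them if your setup differs.

```shell
# Default port 8181 and base path /api are assumptions; see the notes on
# web.http.port below. "|| true" keeps the sketch from aborting when the
# connector is not running.
HEALTH_URL="http://localhost:8181/api/health"
curl -s "$HEALTH_URL" || true
```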
and can be configured using the `web.http.port` property (more on that in the next chapter). Change
this whenever you have two connectors running on the same machine.

Also, the default path is `/api/*`, which is defined in
[`JettyConfiguration.java`](https://github.com/eclipse-edc/Connector/blob/releases/extensions/common/http/jetty-core/src/main/java/org/eclipse/edc/web/jetty/JettyConfiguration.java).

---

15 changes: 7 additions & 8 deletions basic/basic-03-configuration/README.md
So far we have not had any way to configure our system other than directly modifying code, which is generally not an
elegant approach.

The Eclipse Dataspace Connector exposes configuration through its `ConfigurationExtension` interface. That is a special
extension in the sense that it gets loaded at a very early stage. There is also a default implementation
named [`FsConfigurationExtension.java`](https://github.com/eclipse-edc/Connector/blob/releases/extensions/common/configuration/configuration-filesystem/src/main/java/org/eclipse/edc/configuration/filesystem/FsConfigurationExtension.java)
which uses a standard Java properties file to store configuration entries.


We compile and run the application with:

```shell
./gradlew clean basic:basic-03-configuration:build
java -jar basic/basic-03-configuration/build/libs/filesystem-config-connector.jar
```

you will notice an additional log line stating that the "configuration file does not exist":

```shell
INFO 2021-09-07T08:26:08.282159 Configuration file does not exist: dataspaceconnector-configuration.properties. Ignoring.
```

file is configurable using the `edc.fs.config` property, so we can customize this.
First, create a properties file in a location of your convenience,
e.g. `/etc/eclipse/dataspaceconnector/config.properties`.

```shell
mkdir -p /etc/eclipse/dataspaceconnector
touch /etc/eclipse/dataspaceconnector/config.properties
```
```

```properties
web.http.port=9191
```
An example file can be found [here](config.properties). Clean, rebuild and run the connector again, but this time
passing the path to the config file:

```shell
java -Dedc.fs.config=/etc/eclipse/dataspaceconnector/config.properties -jar basic/basic-03-configuration/build/libs/filesystem-config-connector.jar
```

Observing the log output we now see that the connector's REST API is exposed on port `9191` instead:

```shell
INFO 2022-04-27T14:09:10.547662345 HTTP context 'default' listening on port 9191 <-- this is the relevant line
DEBUG 2022-04-27T14:09:10.589738491 Port mappings: {alias='default', port=9191, path='/api'}
INFO 2022-04-27T14:09:10.589846121 Started Jetty Service
```
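As a quick sanity check, the configured port can be read back from the properties file with standard shell tools (a sketch; the `grep`/`cut` parsing assumes the simple `key=value` format shown above):

```shell
# Read web.http.port back from the config file (path as used above).
CONFIG=/etc/eclipse/dataspaceconnector/config.properties
PORT=$(grep '^web.http.port=' "$CONFIG" | cut -d= -f2)
echo "Connector REST API on port $PORT"
```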

## Add your own configuration value