From c64ae5b25546dbcd7cd098a55165fbe4c64dc7c3 Mon Sep 17 00:00:00 2001 From: Michael Steinert Date: Tue, 21 Nov 2023 11:07:51 +0100 Subject: [PATCH 1/4] docs: improved streaming documentation --- .../streaming-01-http-to-http/README.md | 111 +++++++++++------ .../streaming-02-kafka-to-http/1-asset.json | 1 - .../streaming-02-kafka-to-http/README.md | 112 ++++++++++------- .../streaming-03-kafka-broker/README.md | 115 +++++++++++------- 4 files changed, 213 insertions(+), 126 deletions(-) diff --git a/transfer/streaming/streaming-01-http-to-http/README.md b/transfer/streaming/streaming-01-http-to-http/README.md index 9beee036..e54c2030 100644 --- a/transfer/streaming/streaming-01-http-to-http/README.md +++ b/transfer/streaming/streaming-01-http-to-http/README.md @@ -1,70 +1,92 @@ # Streaming HTTP to HTTP -This sample will show how you can set up the EDC to stream messages from HTTP to HTTP. +This sample will show how you can set up the Eclipse Dataspace Connector to stream messages from HTTP to HTTP. This code is only for demonstration purposes and should not be used in production. ## Concept -We will build a data-plane `DataSource` extension that will retrieve new data from a disk folder and push it + +We will build a `Dataplane DataSource` extension that will retrieve new data from a disk folder and push it to every consumer that has started a `TransferProcess` for a related asset. ### Run Build the connector runtime, which will be used both for the provider and consumer: + ```shell ./gradlew :transfer:streaming:streaming-01-http-to-http:streaming-01-runtime:build ``` -Run the provider and the consumer, which must be started from different terminal shells: +Run the provider and the consumer with their own configuration, which will need to be started from different terminals: + ```shell -# provider export EDC_FS_CONFIG=transfer/streaming/streaming-01-http-to-http/streaming-01-runtime/provider.properties java -jar transfer/streaming/streaming-01-http-to-http/streaming-01-runtime/build/libs/connector.jar +``` -#consumer +```shell export EDC_FS_CONFIG=transfer/streaming/streaming-01-http-to-http/streaming-01-runtime/consumer.properties java -jar transfer/streaming/streaming-01-http-to-http/streaming-01-runtime/build/libs/connector.jar ``` -#### Register Data Plane on provider -The provider connector needs to be aware of the streaming capabilities of the embedded dataplane, which can be registered with -this call: -```js -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/dataplane.json -X POST "http://localhost:18181/management/v2/dataplanes" +#### Register Dataplane on provider + +The provider connector needs to be aware of the streaming capabilities of the embedded dataplane, which can be +registered with this call: + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/dataplane.json -X POST "http://localhost:18181/management/v2/dataplanes" -s | jq ``` -If you look at the `dataplane.json` you'll notice that the supported source is `HttpStreaming` and the supported sink is `HttpData`. +If you look at the [dataplane.json](dataplane.json) you'll notice that the supported source is `HttpStreaming` and the +supported sink is `HttpData`. #### Register Asset, Policy Definition and Contract Definition on provider -A "source" folder must first be created where the data plane will get the messages to be sent to the consumers. 
-To do this, create a temp folder: + +A "source" folder must first be created where the data plane will get the messages to be sent to the consumers. To do +this, create a temporary folder: + ```shell mkdir /tmp/source ``` Then put the path in the [asset.json](asset.json) file replacing the `{{sourceFolder}}` placeholder. + ```json +{ "dataAddress": { "type": "HttpStreaming", "sourceFolder": "{{sourceFolder}}" } +} +``` + +Then use these three calls to create the Asset, the Policy Definition and the Contract Definition: + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/asset.json -X POST "http://localhost:18181/management/v3/assets" -s | jq ``` -Then create the Asset, the Policy Definition and the Contract Definition with these three calls: ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/asset.json -X POST "http://localhost:18181/management/v3/assets" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/policy-definition.json -X POST "http://localhost:18181/management/v2/policydefinitions" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/policy-definition.json -X POST "http://localhost:18181/management/v2/policydefinitions" -s | jq +``` + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" -s | jq ``` #### Negotiate the contract -The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a contract. -However, in this sample case, we already have the provider asset (`"stream-asset"`) so we can get the related dataset + +The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a +contract. +However, in this sample case, we already have the provider asset `stream-asset` so we can get the related dataset directly with this call: + ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq ``` The output will be something like: + ```json { "@id": "stream-asset", @@ -97,54 +119,64 @@ The output will be something like: With the `odrl:hasPolicy/@id` we can now replace it in the [negotiate-contract.json](negotiate-contract.json) file and request the contract negotiation: + ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq ``` ### Start the transfer -First we need to set up the receiver server on the consumer side that will receive a call for every message. 
For this
-you'll need to open another terminal shell and run:
+First we need to set up the logging webserver on the consumer side, which will receive a call for each message. For
+this you'll need to open another terminal and run:
+
 ```shell
-./gradlew util:http-request-logger:build
-HTTP_SERVER_PORT=4000 java -jar util/http-request-logger/build/libs/http-request-logger.jar
+docker build -t http-request-logger util/http-request-logger
+docker run -p 4000:4000 http-request-logger
 ```
+
 It will run on port 4000.
 
-At this point the contract agreement should already been issued, to verify that, please check the contract negotiation state with
-this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
+At this point the contract agreement should already have been issued. To verify that, check the contract negotiation
+state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
+
 ```shell
 curl "http://localhost:28181/management/v2/contractnegotiations/{{contract-negotiation-id}}" -s | jq
 ```
 
-If the `edc:contractAgreementId` is valued, it can be used to start the transfer, replacing it in the [transfer.json](transfer.json)
-file to `{{contract-agreement-id}}` and then calling the connector with this command:
+If the `edc:contractAgreementId` has a value, use it to replace the `{{contract-agreement-id}}` placeholder in
+the [transfer.json](transfer.json) file, then start the transfer by calling the connector with this command:
+
 ```shell
-curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
+curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-01-http-to-http/transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
 ```
-> Note that the destination address is `localhost:4000`, this because is where our http server is listening.
+
+> Note that the destination address is `localhost:4000`, because that is where our logging webserver is listening.
 
-Let's wait until the transfer state is `STARTED` state executing this call, replacing to `{{transfer-process-id}}` the id returned
-by the start transfer call:
+Let's wait until the transfer state is `STARTED` by executing this call, replacing `{{transfer-process-id}}` with the
+id returned by the start transfer call:
+
 ```shell
 curl "http://localhost:28181/management/v2/transferprocesses/{{transfer-process-id}}" -s | jq
 ```
 
-Here we can test the transfer creating a file into the `source` folder that we configured before, e.g. copying the `README.md`
-into the `source` folder:
+Here we can test the transfer by creating a file in the `source` folder that we configured before, e.g. by copying
+the `README.md` into the `source` folder:
+
 ```shell
 cp README.md /tmp/source
 ```
 
-we should see the content logged into the received server:
+We should see the content logged by the logging webserver:
+
 ```
 Incoming request
 Method: POST
 Path: /
 Body:
-# EDC Samples
-...
+
 ```
+
 ### Up to you: second connector
 
 As a challenge, try starting another consumer connector, negotiating a contract, and starting the transfer.
Every message pushed by the provider will be sent to all the consumers.
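+
+To see that fan-out in action, a small producer loop keeps every consumer's logging webserver busy. The snippet below
+is only an illustration and not part of the sample; it assumes the `/tmp/source` folder created earlier:
+
+```shell
+# Illustrative helper: drop a new file into the watched source folder every
+# second; the HttpStreaming data source pushes each one to all consumers.
+while true; do
+  echo "message $(date +%s)" > "/tmp/source/msg-$(date +%s).txt"
+  sleep 1
+done
+```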
## Technical insight -The required code is contained in the [`streaming-01-runtime` source folder](transfer/streaming/streaming-01-http-to-http/streaming-01-runtime/src/main/java/org/eclipse/edc/samples/transfer/streaming/http). +The required code can be found in the source folder of +the [streaming-01-runtime](streaming-01-runtime/src/main/java/org/eclipse/edc/samples/transfer/streaming/http). diff --git a/transfer/streaming/streaming-02-kafka-to-http/1-asset.json b/transfer/streaming/streaming-02-kafka-to-http/1-asset.json index 675614c9..a4b7d626 100644 --- a/transfer/streaming/streaming-02-kafka-to-http/1-asset.json +++ b/transfer/streaming/streaming-02-kafka-to-http/1-asset.json @@ -6,7 +6,6 @@ "dataAddress": { "type": "Kafka", "kafka.bootstrap.servers": "{{bootstrap.servers}}", - "maxDuration": "{{max.duration}}", "topic": "{{topic}}" } } diff --git a/transfer/streaming/streaming-02-kafka-to-http/README.md b/transfer/streaming/streaming-02-kafka-to-http/README.md index aeac12c7..401f6958 100644 --- a/transfer/streaming/streaming-02-kafka-to-http/README.md +++ b/transfer/streaming/streaming-02-kafka-to-http/README.md @@ -1,76 +1,93 @@ -# Streaming KAFKA to HTTP +# Streaming Kafka to HTTP -This sample demonstrates how to set up the EDC to stream messages from Kafka to HTTP. +This sample demonstrates how to set up the Eclipse Dataspace Connector to stream messages from Kafka to HTTP. This code is only for demonstration purposes and should not be used in production. ## Concept -We will use the data-plane kafka `DataSource` extension that will pull event records from a kafka topic and push it +We will use the `Dataplane Kafka DataSource` extension, which pulls event records from a Kafka topic and pushes them to every consumer that has started a `TransferProcess` for a related asset. ### Run Build the connector runtime, which will be used both for the provider and consumer: + ```shell ./gradlew :transfer:streaming:streaming-02-kafka-to-http:streaming-02-runtime:build ``` -Run the provider and the consumer, which must be started from different terminal shells: +Run the provider and the consumer with their own configuration, which will need to be started from different terminals: + ```shell -# provider export EDC_FS_CONFIG=transfer/streaming/streaming-02-kafka-to-http/streaming-02-runtime/provider.properties java -jar transfer/streaming/streaming-02-kafka-to-http/streaming-02-runtime/build/libs/connector.jar +``` -#consumer +```shell export EDC_FS_CONFIG=transfer/streaming/streaming-02-kafka-to-http/streaming-02-runtime/consumer.properties java -jar transfer/streaming/streaming-02-kafka-to-http/streaming-02-runtime/build/libs/connector.jar ``` -### Register Data Plane on provider +### Register Dataplane on provider + +The provider connector needs to be aware of the Kafka streaming capabilities of the embedded dataplane, which can be +registered with this call: -The provider connector needs to be aware of the kafka streaming capabilities of the embedded dataplane, which can be registered with -this call: ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/0-dataplane.json -X POST "http://localhost:18181/management/v2/dataplanes" +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/0-dataplane.json -X POST "http://localhost:18181/management/v2/dataplanes" -s | jq ``` -If you look at the `0-dataplane.json` you'll notice that the supported source is `Kafka` and the supported sink is `HttpData`. 
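+
+To double-check the registration, you can list the registered dataplanes. This call is a sketch: it assumes your EDC
+version also exposes a GET on the same management path used for registration above, which can vary between releases:
+
+```shell
+# List the registered dataplanes and confirm that "Kafka" appears among the
+# allowed source types of the embedded dataplane.
+curl -X GET "http://localhost:18181/management/v2/dataplanes" -s | jq
+```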
+If you look at the `0-dataplane.json` you'll notice that the supported source is `Kafka` and the supported sink +is `HttpData`. ### Register Asset, Policy Definition and Contract Definition on provider -A "source" kafka topic must first be created where the data plane will get the event records to be sent to the consumers. -To do this, initiate a Kafka server with the source topic: +A "source" Kafka topic must first be created where the data plane will get the event records to be sent to the +consumers. To do this, initiate a Kafka server with the source topic: + ```shell -docker run -e "KAFKA_CREATE_TOPICS={{topic}}:1:1" -p 9092:9092 -d bashj79/kafka-kraft:3.0.0 +docker run --rm --name=kafka-kraft -e "KAFKA_CREATE_TOPICS={{topic}}:1:1" -p 9092:9092 -d bashj79/kafka-kraft:3.0.0 ``` -Then put values of `kafka.bootstrap.servers`, `maxDuration` and `topic` in the [1-asset.json](1-asset.json) file replacing their placeholders. +Then put values of `kafka.bootstrap.servers` and `topic` in the [1-asset.json](1-asset.json) file replacing their +placeholders. + ```json +{ "dataAddress": { "type": "Kafka", - "kafka.bootstrap.servers": "{{bootstrap.servers}}", - "maxDuration": "{{max.duration}}" - "topic": "{{topic}}" + "kafka.bootstrap.servers": "localhost:9092", + "topic": "kafka-stream-topic" } +} ``` -Then create the Asset, the Policy Definition and the Contract Definition with these three calls: +Then use these three calls to create the Asset, the Policy Definition and the Contract Definition: + ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/1-asset.json -X POST "http://localhost:18181/management/v3/assets" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/2-policy-definition.json -X POST "http://localhost:18181/management/v2/policydefinitions" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/3-contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/1-asset.json -X POST "http://localhost:18181/management/v3/assets" -s | jq +``` + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/2-policy-definition.json -X POST "http://localhost:18181/management/v2/policydefinitions" -s | jq +``` + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/3-contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" -s | jq ``` ### Negotiate the contract -The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a contract. -However, in this sample case, we already have the provider asset (`"kafka-stream-asset"`) so we can get the related dataset -directly with this call: +The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a +contract. 
However, in this sample case, we already have the provider asset `kafka-stream-asset` so we can get the
+related dataset directly with this call:
+
 ```shell
-curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/4-get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq
+curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/4-get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq
 ```
 
 The output will be something like:
+
 ```json
 {
   "@id": "kafka-stream-asset",
@@ -103,46 +120,60 @@ The output will be something like:
 
 With the `odrl:hasPolicy/@id` we can now replace it in the [negotiate-contract.json](5-negotiate-contract.json) file
 and request the contract negotiation:
+
 ```shell
-curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/5-negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq
+curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/5-negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq
 ```
 
 ### Start the transfer
 
-First we need to set up the receiver server on the consumer side that will receive a call for every new event. For this
-you'll need to open another terminal shell and run:
+First we need to set up the logging webserver on the consumer side that will receive a call for every new event. For
+this you'll need to open another terminal and run:
+
 ```shell
-./gradlew util:http-request-logger:build
-HTTP_SERVER_PORT=4000 java -jar util/http-request-logger/build/libs/http-request-logger.jar
+docker build -t http-request-logger util/http-request-logger
+docker run -p 4000:4000 http-request-logger
 ```
+
 It will run on port 4000.
 
-At this point the contract agreement should already been issued, to verify that, please check the contract negotiation state with
+At this point the contract agreement should already have been issued. To verify that, check the contract negotiation
+state with
 this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
+
 ```shell
 curl "http://localhost:28181/management/v2/contractnegotiations/{{contract-negotiation-id}}" -s | jq
 ```
 
-If the `edc:contractAgreementId` is valued, it can be used to start the transfer, replacing it in the [6-transfer.json](6-transfer.json)
+If the `edc:contractAgreementId` has a value, it can be used to start the transfer, replacing it in
+the [transfer.json](6-transfer.json)
 file to `{{contract-agreement-id}}` and then calling the connector with this command:
+
 ```shell
-curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/6-transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
+curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-02-kafka-to-http/6-transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
 ```
-> Note that the destination address is `localhost:4000`, this because is where our http server is listening.
-Let's wait until the transfer state is `STARTED` state executing this call, replacing to `{{transfer-process-id}}` the id returned
-by the start transfer call:
+
+> Note that the destination address is `localhost:4000`, because that is where our logging webserver is listening.
+
+Let's wait until the transfer state is `STARTED` by executing this call, replacing `{{transfer-process-id}}` with the
+id returned by the start transfer call:
+
 ```shell
 curl "http://localhost:28181/management/v2/transferprocesses/{{transfer-process-id}}" -s | jq
 ```
 
 ### Produce events
 
-With the Kafka server running in Docker, you can use the Kafka command-line producer `kafka-console-producer.sh` to produce a message. In a new terminal shell, you'll need to execute:
+With the Kafka server running in Docker, you can use the Kafka command-line producer `kafka-console-producer.sh` to
+produce a message. In a new terminal shell, you'll need to execute:
+
 ```shell
-docker exec -it {{docker-container-id}} /opt/kafka/bin/kafka-console-producer.sh --topic kafka-stream-topic --bootstrap-server localhost:9092
+docker exec -it kafka-kraft /opt/kafka/bin/kafka-console-producer.sh --topic kafka-stream-topic --bootstrap-server localhost:9092
 ```
-This command will open an interactive prompt for you to input your message. Once you've typed your message and pressed Enter, it will be produced, consumed and pushed to the receiver server. You should observe the content being logged on its terminal shell:
+
+This command will open an interactive prompt for you to input your message. Once you've typed your message and pressed
+Enter, it will be produced, consumed and pushed to the logging webserver. You should observe the content being logged
+in its terminal:
 
 ```
 Incoming request
@@ -150,5 +181,4 @@ Method: POST
 Path: /
 Body:
-...
 ```
\ No newline at end of file
diff --git a/transfer/streaming/streaming-03-kafka-broker/README.md b/transfer/streaming/streaming-03-kafka-broker/README.md
index 046c6c85..bdbcdd52 100644
--- a/transfer/streaming/streaming-03-kafka-broker/README.md
+++ b/transfer/streaming/streaming-03-kafka-broker/README.md
@@ -1,42 +1,47 @@
-# Streaming KAFKA to KAFKA
+# Streaming Kafka to Kafka
 
-This sample demonstrates how to set up the EDC to stream messages through Kafka.
+This sample demonstrates how to set up the Eclipse Dataspace Connector to stream messages through Kafka.
 This code is only for demonstration purposes and should not be used in production.
 
 ## Concept
 
-In this sample the Data-Plane is not used, the consumer will set up a kafka client to poll the messages from the broker
-using some credentials obtained from the transfer process.
+In this sample the dataplane is not used; the consumer connector will set up a Kafka client to poll the messages from
+the broker using credentials obtained from the transfer process.
 
-The DataFlow is managed by the [KafkaToKafkaDataFlowController](streaming-03-runtime/src/main/java/org/eclipse/edc/samples/streaming/KafkaToKafkaDataFlowController.java),
-that on flow initialization creates an `EndpointDataReference` containing the credentials that the consumer would then use
-to poll the messages.
+The data flow is managed by
+the [KafkaToKafkaDataFlowController](streaming-03-runtime/src/main/java/org/eclipse/edc/samples/streaming/KafkaToKafkaDataFlowController.java),
+which on flow initialization creates an `EndpointDataReference` containing the credentials that the consumer then
+uses to poll the messages.
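+
+As a rough preview of that handover (values taken from the `EndpointDataReference` shown later in this sample under
+"Consume events"; the block is illustrative only, the real commands follow below):
+
+```shell
+# The EDR fields map directly onto plain Kafka client settings:
+#   endpoint           -> --bootstrap-server (localhost:9093)
+#   authKey / authCode -> SASL PLAIN username / password (alice / alice-secret)
+#   topic property     -> --topic (kafka-stream-topic)
+docker exec -it kafka-kraft /bin/kafka-console-consumer --topic kafka-stream-topic \
+  --bootstrap-server localhost:9093 \
+  --consumer-property security.protocol=SASL_PLAINTEXT \
+  --consumer-property sasl.mechanism=PLAIN \
+  --consumer-property 'sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";'
+```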
### Run Build the connector runtime, which will be used both for the provider and consumer: + ```shell ./gradlew :transfer:streaming:streaming-03-kafka-broker:streaming-03-runtime:build ``` -Run the provider and the consumer, which must be started from different terminal shells: +Run the provider and the consumer with their own configuration, which will need to be started from different terminals: + ```shell -# provider export EDC_FS_CONFIG=transfer/streaming/streaming-03-kafka-broker/streaming-03-runtime/provider.properties java -jar transfer/streaming/streaming-03-kafka-broker/streaming-03-runtime/build/libs/connector.jar +``` -#consumer +```shell export EDC_FS_CONFIG=transfer/streaming/streaming-03-kafka-broker/streaming-03-runtime/consumer.properties java -jar transfer/streaming/streaming-03-kafka-broker/streaming-03-runtime/build/libs/connector.jar ``` ### Start Kafka and configure ACLs -Kafka will be started in [KRaft mode](https://developer.confluent.io/learn/kraft/), a single broker with `SASL_PLAINTEXT` -as security protocol ([see config](kafka.env)), there will be an `admin` user, responsible for setting up ACLs and producing -messages, and `alice`, that will be used by the consumer to consume the messages. +Kafka will be started in [KRaft mode](https://developer.confluent.io/learn/kraft/), a single broker +with `SASL_PLAINTEXT` +as security protocol ([see config](kafka.env)), there will be an `admin` user, responsible for setting up ACLs and +producing messages, and `alice`, that will be used by the consumer to consume the messages. Run the Kafka container: + ```shell docker run --rm --name=kafka-kraft -h kafka-kraft -p 9093:9093 \ -v "$PWD/transfer/streaming/streaming-03-kafka-broker/kafka-config":/config \ @@ -56,6 +61,7 @@ docker run --rm --name=kafka-kraft -h kafka-kraft -p 9093:9093 \ ``` Create the topic `kafka-stream-topic` + ```shell docker exec -it kafka-kraft /bin/kafka-topics \ --topic kafka-stream-topic --create --partitions 1 --replication-factor 1 \ @@ -64,6 +70,7 @@ docker exec -it kafka-kraft /bin/kafka-topics \ ``` To give `alice` read permissions on the topic we need to set up ACLs: + ```shell docker exec -it kafka-kraft /bin/kafka-acls --command-config /config/admin.properties \ --bootstrap-server localhost:9093 \ @@ -75,33 +82,45 @@ docker exec -it kafka-kraft /bin/kafka-acls --command-config /config/admin.prope ### Register Asset, Policy Definition and Contract Definition on provider -Then put values of `kafka.bootstrap.servers`, `maxDuration` and `topic` in the [1-asset.json](1-asset.json) file replacing -their placeholders this way: +Then put values of `kafka.bootstrap.servers` and `topic` in the [1-asset.json](1-asset.json) file +replacing their placeholders this way: + ```json +{ "dataAddress": { "type": "Kafka", "kafka.bootstrap.servers": "localhost:9093", "topic": "kafka-stream-topic" } +} +``` + +Then use these three calls to create the Asset, the Policy Definition and the Contract Definition: + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/1-asset.json -X POST "http://localhost:18181/management/v3/assets" -s | jq ``` -Then create the Asset, the Policy Definition and the Contract Definition with these three calls: ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/1-asset.json -X POST "http://localhost:18181/management/v3/assets" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/2-policy-definition.json -X 
POST "http://localhost:18181/management/v2/policydefinitions" -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/3-contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/2-policy-definition.json -X POST "http://localhost:18181/management/v2/policydefinitions" -s | jq +``` + +```shell +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/3-contract-definition.json -X POST "http://localhost:18181/management/v2/contractdefinitions" -s | jq ``` ### Negotiate the contract -The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a contract. -However, in this sample case, we already have the provider asset (`"kafka-stream-asset"`) so we can get the related dataset -directly with this call: +The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a +contract. However, in this sample case, we already have the provider asset `kafka-stream-asset` so we can get the +related dataset directly with this call: + ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/4-get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/4-get-dataset.json -X POST "http://localhost:28181/management/v2/catalog/dataset/request" -s | jq ``` The output will be something like: + ```json { "@id": "kafka-stream-asset", @@ -114,13 +133,7 @@ The output will be something like: "odrl:obligation": [], "odrl:target": "kafka-stream-asset" }, - "dcat:distribution": { - "@type": "dcat:Distribution", - "dct:format": { - "@id": "HttpData" - }, - "dcat:accessService": "b24dfdbc-d17f-4d6e-9b5c-8fa71dacecfc" - }, + "dcat:distribution": [], "edc:id": "kafka-stream-asset", "@context": { "dct": "https://purl.org/dc/terms/", @@ -134,8 +147,9 @@ The output will be something like: With the `odrl:hasPolicy/@id` we can now replace it in the [negotiate-contract.json](5-negotiate-contract.json) file and negotiate the contract: + ```shell -curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/5-negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq +curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/5-negotiate-contract.json -X POST "http://localhost:28181/management/v2/contractnegotiations" -s | jq ``` ### Start the transfer @@ -143,47 +157,57 @@ curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kaf First we need to set up the receiver server on the consumer side that will receive the EndpointDataReference containing the address and credentials to connect to the broker and poll the messages from the topic. For this you'll need to open another terminal shell and run: + ```shell -./gradlew util:http-request-logger:build -HTTP_SERVER_PORT=4000 java -jar util/http-request-logger/build/libs/http-request-logger.jar +docker build -t http-request-logger util/http-request-logger +docker run -p 4000:4000 http-request-logger ``` + It will run on port 4000. 
-At this point the contract agreement should already been issued, to verify that, please check the contract negotiation
-state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
+At this point the contract agreement should already have been issued. To verify that, check the contract negotiation
+state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
+
 ```shell
 curl "http://localhost:28181/management/v2/contractnegotiations/{{contract-negotiation-id}}" -s | jq
 ```
 
-If the `edc:contractAgreementId` is valued, it can be used to start the transfer, replacing it in the [6-transfer.json](6-transfer.json)
+If the `edc:contractAgreementId` has a value, it can be used to start the transfer, replacing it in
+the [6-transfer.json](6-transfer.json)
 file to `{{contract-agreement-id}}` and then calling the connector with this command:
+
 ```shell
-curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/6-transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
+curl -H 'Content-Type: application/json' -d @transfer/streaming/streaming-03-kafka-broker/6-transfer.json -X POST "http://localhost:28181/management/v2/transferprocesses" -s | jq
 ```
-> Note that the destination address is `localhost:4000`, this because is where our http server is listening.
-Let's wait until the transfer state is `STARTED` state executing this call, replacing to `{{transfer-process-id}}` the id returned
-by the start transfer call:
+
+> Note that the destination address is `localhost:4000`, because that is where our logging webserver is listening.
+
+Let's wait until the transfer state is `STARTED` by executing this call, replacing `{{transfer-process-id}}` with the
+id returned by the start transfer call:
+
 ```shell
 curl "http://localhost:28181/management/v2/transferprocesses/{{transfer-process-id}}" -s | jq
 ```
 
 ### Consume events
+
 Now in the console of the `http-request-logger` we started before, the `EndpointDataReference` should have appeared:
+
 ```json
 {
-  "id":"8c52a781-2588-4c9b-8c70-4e5ad428eea9",
-  "endpoint":"localhost:9093",
-  "authKey":"alice",
-  "authCode":"alice-secret",
+  "id": "8c52a781-2588-4c9b-8c70-4e5ad428eea9",
+  "endpoint": "localhost:9093",
+  "authKey": "alice",
+  "authCode": "alice-secret",
   "properties": {
-    "https://w3id.org/edc/v0.0.1/ns/topic":"kafka-stream-topic"
+    "https://w3id.org/edc/v0.0.1/ns/topic": "kafka-stream-topic"
   }
 }
 ```
 
-Using these information on the consumer side we can run a `kafka-console-consumer` with the data received to consume
+Using this information on the consumer side, we can run a `kafka-console-consumer` with the data received to consume
 messages from the topic:
+
 ```shell
 docker exec -it kafka-kraft /bin/kafka-console-consumer --topic kafka-stream-topic \
   --bootstrap-server localhost:9093 \
@@ -196,6 +220,7 @@ docker exec -it kafka-kraft /bin/kafka-console-consumer --topic kafka-stream-top
 ### Produce events
 
-In another shell we can put ourselves in the provider shoes and create messages from the producer shell:
+In another shell we can put ourselves in the provider's shoes and create messages from the producer shell:
+
 ```shell
 docker exec -it kafka-kraft /bin/kafka-console-producer --topic kafka-stream-topic \
   --producer.config=/config/admin.properties \
From a3cc8d70919498f285e7855217a705e26e972abc Mon Sep 17 00:00:00 2001
From: Michael Steinert
Date: Tue, 21 Nov 2023 11:16:44 +0100
Subject: [PATCH 2/4] docs: use of the same placeholder

---
 .../advanced-01-open-telemetry/resources/start-transfer.json | 2 +-
 .../transfer-01-negotiation/resources/negotiate-contract.json | 4 ++--
.../transfer-02-consumer-pull/resources/start-transfer.json | 2 +- .../transfer-03-provider-push/resources/start-transfer.json | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/advanced/advanced-01-open-telemetry/resources/start-transfer.json b/advanced/advanced-01-open-telemetry/resources/start-transfer.json index 7547b17f..20a3d7c5 100644 --- a/advanced/advanced-01-open-telemetry/resources/start-transfer.json +++ b/advanced/advanced-01-open-telemetry/resources/start-transfer.json @@ -5,7 +5,7 @@ "@type": "TransferRequestDto", "connectorId": "provider", "connectorAddress": "http://provider:19194/protocol", - "contractId": "", + "contractId": "{{contract-agreement-id}}", "assetId": "assetId", "protocol": "dataspace-protocol-http", "dataDestination": { diff --git a/transfer/transfer-01-negotiation/resources/negotiate-contract.json b/transfer/transfer-01-negotiation/resources/negotiate-contract.json index 33a46d2a..8b4974f5 100644 --- a/transfer/transfer-01-negotiation/resources/negotiate-contract.json +++ b/transfer/transfer-01-negotiation/resources/negotiate-contract.json @@ -10,10 +10,10 @@ "providerId": "provider", "protocol": "dataspace-protocol-http", "offer": { - "offerId": "MQ==:YXNzZXRJZA==:YTc4OGEwYjMtODRlZi00NWYwLTgwOWQtMGZjZTMwMGM3Y2Ey", + "offerId": "{{offerId}}", "assetId": "assetId", "policy": { - "@id": "MQ==:YXNzZXRJZA==:YTc4OGEwYjMtODRlZi00NWYwLTgwOWQtMGZjZTMwMGM3Y2Ey", + "@id": "{{offerId}}", "@type": "Set", "odrl:permission": [], "odrl:prohibition": [], diff --git a/transfer/transfer-02-consumer-pull/resources/start-transfer.json b/transfer/transfer-02-consumer-pull/resources/start-transfer.json index e28fc9c5..6f10d25f 100644 --- a/transfer/transfer-02-consumer-pull/resources/start-transfer.json +++ b/transfer/transfer-02-consumer-pull/resources/start-transfer.json @@ -5,7 +5,7 @@ "@type": "TransferRequestDto", "connectorId": "provider", "connectorAddress": "http://localhost:19194/protocol", - "contractId": "", + "contractId": "{{contract-agreement-id}}", "assetId": "assetId", "protocol": "dataspace-protocol-http", "dataDestination": { diff --git a/transfer/transfer-03-provider-push/resources/start-transfer.json b/transfer/transfer-03-provider-push/resources/start-transfer.json index c4636720..47646f89 100644 --- a/transfer/transfer-03-provider-push/resources/start-transfer.json +++ b/transfer/transfer-03-provider-push/resources/start-transfer.json @@ -5,7 +5,7 @@ "@type": "TransferRequestDto", "connectorId": "provider", "connectorAddress": "http://localhost:19194/protocol", - "contractId": "", + "contractId": "{{contract-agreement-id}}", "assetId": "assetId", "protocol": "dataspace-protocol-http", "dataDestination": { From cd65d0dc6dc7cf4fa98d19854bc9b433f6300f56 Mon Sep 17 00:00:00 2001 From: Michael Steinert Date: Tue, 21 Nov 2023 13:30:27 +0100 Subject: [PATCH 3/4] docs: improved basic, transfer and advanced documentation --- advanced/advanced-01-open-telemetry/README.md | 144 +++++++++++++----- .../resources/get-dataset.json | 7 + .../resources/negotiate-contract.json | 4 +- basic/basic-01-basic-connector/README.md | 2 +- basic/basic-02-health-endpoint/README.md | 10 +- basic/basic-03-configuration/README.md | 15 +- .../streaming-01-http-to-http/README.md | 2 +- .../streaming-02-kafka-to-http/README.md | 5 +- transfer/transfer-00-prerequisites/README.md | 8 +- transfer/transfer-01-negotiation/README.md | 27 ++-- transfer/transfer-02-consumer-pull/README.md | 34 ++--- transfer/transfer-03-provider-push/README.md | 28 ++-- 
transfer/transfer-04-event-consumer/README.md | 51 +++---- 13 files changed, 192 insertions(+), 145 deletions(-) create mode 100644 advanced/advanced-01-open-telemetry/resources/get-dataset.json diff --git a/advanced/advanced-01-open-telemetry/README.md b/advanced/advanced-01-open-telemetry/README.md index a88d08fa..c043f4e9 100644 --- a/advanced/advanced-01-open-telemetry/README.md +++ b/advanced/advanced-01-open-telemetry/README.md @@ -2,12 +2,15 @@ This sample will show you how you can: -- generate traces with [OpenTelemetry](https://opentelemetry.io) and collect and visualize them with [Jaeger](https://www.jaegertracing.io/) -- automatically collect metrics from infrastructure, server endpoints and client libraries with [Micrometer](https://micrometer.io) +- generate traces with [OpenTelemetry](https://opentelemetry.io) and collect and visualize them + with [Jaeger](https://www.jaegertracing.io/) +- automatically collect metrics from infrastructure, server endpoints and client libraries + with [Micrometer](https://micrometer.io) and visualize them with [Prometheus](https://prometheus.io) For this, this sample uses the Open Telemetry Java Agent, which dynamically injects bytecode to capture telemetry from -several popular [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation). +several +popular [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation). In order to visualize and analyze the traces and metrics, we use [OpenTelemetry exporters](https://opentelemetry.io/docs/instrumentation/js/exporters/) to export data into the Jaeger @@ -18,22 +21,21 @@ tracing backend and a Prometheus endpoint. We will use a single docker-compose to run the consumer, the provider, and a Jaeger backend. Let's have a look to the [docker-compose.yaml](docker-compose.yaml). We created a consumer and a provider service with entry points specifying the OpenTelemetry Java Agent as a JVM parameter. -In addition, the [Jaeger exporter](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#jaeger-exporter) +In addition, +the [Jaeger exporter](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#jaeger-exporter) is configured using environmental variables as required by OpenTelemetry. The [Prometheus exporter](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#prometheus-exporter) is configured to expose a Prometheus metrics endpoint. To run the consumer, the provider, and Jaeger execute the following commands in the project root folder: -```bash -docker-compose -f advanced/advanced-01-open-telemetry/docker-compose.yaml up --abort-on-container-exit +```shell +docker compose -f advanced/advanced-01-open-telemetry/docker-compose.yaml up --abort-on-container-exit ``` -Open a new terminal. 
+Open a new terminal and register the dataplane for the provider and consumer: -Register data planes for provider and consumer: - -```bash +```shell curl -H 'Content-Type: application/json' \ -H "X-Api-Key: password" \ -d @transfer/transfer-00-prerequisites/resources/dataplane/register-data-plane-provider.json \ @@ -41,7 +43,7 @@ curl -H 'Content-Type: application/json' \ -s | jq ``` -```bash +```shell curl -H 'Content-Type: application/json' \ -H "X-Api-Key: password" \ -d @transfer/transfer-00-prerequisites/resources/dataplane/register-data-plane-consumer.json \ @@ -49,36 +51,91 @@ curl -H 'Content-Type: application/json' \ -s | jq ``` -Create an asset: +Then use these three calls to create the Asset, the Policy Definition and the Contract Definition: -```bash +```shell curl -H "X-Api-Key: password" \ -d @transfer/transfer-01-negotiation/resources/create-asset.json \ -H 'content-type: application/json' http://localhost:19193/management/v2/assets \ -s | jq ``` -Create a Policy on the provider connector: - -```bash +```shell curl -H "X-Api-Key: password" \ -d @transfer/transfer-01-negotiation/resources/create-policy.json \ -H 'content-type: application/json' http://localhost:19193/management/v2/policydefinitions \ -s | jq ``` -Follow up with the creation of a contract definition: - -```bash +```shell curl -H "X-Api-Key: password" \ -d @transfer/transfer-01-negotiation/resources/create-contract-definition.json \ -H 'content-type: application/json' http://localhost:19193/management/v2/contractdefinitions \ -s | jq ``` -Start a contract negotiation: +### Negotiate the contract + +The typical flow requires fetching the catalog from the consumer side and using the contract offer to negotiate a +contract. However, in this sample case, we already have the provider asset `assetId` so we can get the related dataset +directly with this call: + -```bash +```shell +curl -H "X-Api-Key: password" \ + -H "Content-Type: application/json" \ + -d @advanced/advanced-01-open-telemetry/resources/get-dataset.json \ + -X POST "http://localhost:29193/management/v2/catalog/dataset/request" \ + -s | jq +``` + +The output will be something like: + +```json +{ + "@id": "assetId", + "@type": "dcat:Dataset", + "odrl:hasPolicy": { + "@id": "MQ==:YXNzZXRJZA==:YjI5ZDVkZDUtZWU0Mi00NWRiLWE2OTktYjNmMjlmMWNjODk3", + "@type": "odrl:Set", + "odrl:permission": [], + "odrl:prohibition": [], + "odrl:obligation": [], + "odrl:target": "assetId" + }, + "dcat:distribution": [ + { + "@type": "dcat:Distribution", + "dct:format": { + "@id": "HttpProxy" + }, + "dcat:accessService": "06348bca-6bf0-47fe-8bb5-6741cff7a955" + }, + { + "@type": "dcat:Distribution", + "dct:format": { + "@id": "HttpData" + }, + "dcat:accessService": "06348bca-6bf0-47fe-8bb5-6741cff7a955" + } + ], + "edc:name": "product description", + "edc:id": "assetId", + "edc:contenttype": "application/json", + "@context": { + "dct": "https://purl.org/dc/terms/", + "edc": "https://w3id.org/edc/v0.0.1/ns/", + "dcat": "https://www.w3.org/ns/dcat/", + "odrl": "http://www.w3.org/ns/odrl/2/", + "dspace": "https://w3id.org/dspace/v0.8/" + } +} +``` + +With the `odrl:hasPolicy/@id` we can now replace it in the [negotiate-contract.json](resources/negotiate-contract.json) file +and request the contract negotiation: + +```shell curl -H "X-Api-Key: password" \ -H "Content-Type: application/json" \ -d @advanced/advanced-01-open-telemetry/resources/negotiate-contract.json \ @@ -86,20 +143,25 @@ curl -H "X-Api-Key: password" \ -s | jq ``` -Wait until the negotiation is in `FINALIZED` state 
and call
+At this point the contract agreement should already have been issued. To verify that, check the contract negotiation
+state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call.
 
-```bash
-curl -X GET -H 'X-Api-Key: password' "http://localhost:29193/management/v2/contractnegotiations/{UUID}"
+```shell
+curl -H 'X-Api-Key: password' \
+  -X GET "http://localhost:29193/management/v2/contractnegotiations/{{contract-negotiation-id}}" \
+  -s | jq
 ```
-to get the contract agreement id.
 
-Finally, update the contract agreement id in the [request body](resources/start-transfer.json) and execute a file transfer with the following command:
+Finally, update the contract agreement id in the [start-transfer.json](resources/start-transfer.json) and execute a file
+transfer with the following command:
 
-```bash
+```shell
 curl -H "X-Api-Key: password" \
  -H "Content-Type: application/json" \
  -d @advanced/advanced-01-open-telemetry/resources/start-transfer.json \
- -X POST "http://localhost:29193/management/v2/transferprocesses"
+ -X POST "http://localhost:29193/management/v2/transferprocesses" \
+ -s | jq
 ```
 
 You can access the Jaeger UI on your browser at `http://localhost:16686`. In the search tool, we can select the service
@@ -112,18 +174,21 @@ Example contract negotiation trace:
 Example file transfer trace:
 ![File transfer](attachments/file-transfer-trace.png)
 
-OkHttp and Jetty are part of the [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation)
+OkHttp and Jetty are part of
+the [libraries and frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation)
 that OpenTelemetry can capture telemetry from. We can observe spans related to OkHttp and Jetty as EDC uses both
-frameworks internally. The `otel.library.name` tag of the different spans indicates the framework each span is coming from.
+frameworks internally. The `otel.library.name` tag of the different spans indicates the framework each span is coming
+from.
 
 You can access the Prometheus UI on your browser at `http://localhost:9090`. Click the globe icon near the top right
-corner (Metrics Explorer) and select a metric to display. Metrics include System (e.g. CPU usage), JVM (e.g. memory usage),
-Executor service (call timings and thread pools), and the instrumented OkHttp, Jetty and Jersey libraries (HTTP client and server).
+corner (Metrics Explorer) and select a metric to display. Metrics include System (e.g. CPU usage), JVM (e.g. memory
+usage), Executor service (call timings and thread pools), and the instrumented OkHttp, Jetty and Jersey libraries (HTTP
+client and server).
 
 ## Using another monitoring backend
 
-Other monitoring backends can be plugged in easily with OpenTelemetry. For instance, if you want to use Azure Application
-Insights instead of Jaeger, you can replace the OpenTelemetry Java Agent with the
+Other monitoring backends can be plugged in easily with OpenTelemetry. For instance, if you want to use Azure
+Application Insights instead of Jaeger, you can replace the OpenTelemetry Java Agent with the
 [Application Insights Java Agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent#download-the-jar-file),
-which has to be stored in the root folder of this sample as well.
The only additional configuration required are the `APPLICATIONINSIGHTS_CONNECTION_STRING` and `APPLICATIONINSIGHTS_ROLE_NAME` env variables: @@ -163,14 +228,17 @@ which has to be stored in the root folder of this sample as well. The only addit -jar /app/connector.jar ``` -The Application Insights Java agent will automatically collect metrics from Micrometer, without any configuration needed. +The Application Insights Java agent will automatically collect metrics from Micrometer, without any configuration +needed. ## Provide your own OpenTelemetry implementation -In order to provide your own OpenTelemetry implementation, you have to "deploy an OpenTelemetry service provider on the class path": +In order to provide your own OpenTelemetry implementation, you have to "deploy an OpenTelemetry service provider on the +class path": - Create a module containing your OpenTelemetry implementation. -- Add a file in the resource directory META-INF/services. The file should be called `io.opentelemetry.api.OpenTelemetry`. +- Add a file in the resource directory META-INF/services. The file should be + called `io.opentelemetry.api.OpenTelemetry`. - Add to the file the fully qualified name of your custom OpenTelemetry implementation class. EDC uses a [ServiceLoader](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html) @@ -179,5 +247,3 @@ it, otherwise it will use the registered global OpenTelemetry. You can look at t `Deploying service providers on the class path` of the [ServiceLoader documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/ServiceLoader.html) to have more information about service providers. - ---- \ No newline at end of file diff --git a/advanced/advanced-01-open-telemetry/resources/get-dataset.json b/advanced/advanced-01-open-telemetry/resources/get-dataset.json new file mode 100644 index 00000000..ba8a0670 --- /dev/null +++ b/advanced/advanced-01-open-telemetry/resources/get-dataset.json @@ -0,0 +1,7 @@ +{ + "@context": { "@vocab": "https://w3id.org/edc/v0.0.1/ns/" }, + "@type": "DatasetRequest", + "@id": "assetId", + "counterPartyAddress": "http://provider:19194/protocol", + "protocol": "dataspace-protocol-http" +} diff --git a/advanced/advanced-01-open-telemetry/resources/negotiate-contract.json b/advanced/advanced-01-open-telemetry/resources/negotiate-contract.json index 5497bab7..6e8eb10c 100644 --- a/advanced/advanced-01-open-telemetry/resources/negotiate-contract.json +++ b/advanced/advanced-01-open-telemetry/resources/negotiate-contract.json @@ -10,10 +10,10 @@ "providerId": "provider", "protocol": "dataspace-protocol-http", "offer": { - "offerId": "MQ==:YXNzZXRJZA==:YTc4OGEwYjMtODRlZi00NWYwLTgwOWQtMGZjZTMwMGM3Y2Ey", + "offerId": "{{offerId}}", "assetId": "assetId", "policy": { - "@id": "MQ==:YXNzZXRJZA==:YTc4OGEwYjMtODRlZi00NWYwLTgwOWQtMGZjZTMwMGM3Y2Ey", + "@id": "{{offerId}}", "@type": "Set", "odrl:permission": [], "odrl:prohibition": [], diff --git a/basic/basic-01-basic-connector/README.md b/basic/basic-01-basic-connector/README.md index 03b9cb46..81e8e726 100644 --- a/basic/basic-01-basic-connector/README.md +++ b/basic/basic-01-basic-connector/README.md @@ -31,7 +31,7 @@ installation._ If everything works as intended you should see command-line output similar to this: -```bash +```shell INFO 2022-01-13T13:43:57.677973407 Secrets vault not configured. Defaulting to null vault. 
INFO 2022-01-13T13:43:57.680158117 Initialized Null Vault INFO 2022-01-13T13:43:57.851181615 Initialized Core Services diff --git a/basic/basic-02-health-endpoint/README.md b/basic/basic-02-health-endpoint/README.md index 2eb8c71d..9c83b556 100644 --- a/basic/basic-02-health-endpoint/README.md +++ b/basic/basic-02-health-endpoint/README.md @@ -7,7 +7,7 @@ An _extension_ typically consists of two things: 1. a class implementing the `ServiceExtension` interface. 2. a plugin file in the `src/main/resources/META-INF/services` directory. This file **must** be named exactly as the - interface's fully qualified class-name and it **must** contain the fully-qualified name of the implementing class ( + interface's fully qualified class-name, and it **must** contain the fully-qualified name of the implementing class ( =plugin class). Therefore, we require an extension class, which we'll name `HealthEndpointExtension`: @@ -25,12 +25,13 @@ public class HealthEndpointExtension implements ServiceExtension { } ``` -The `@Inject` annotation indicates that the extension needs a service that is registered by another extension, in +The `@Inject` annotation indicates that the extension needs a service that is registered by another extension, in this case an implementation of `WebService.class`. For that, we can use Jakarta REST annotations to implement a simple REST API: ```java + @Consumes({MediaType.APPLICATION_JSON}) @Produces({MediaType.APPLICATION_JSON}) @Path("/") @@ -53,7 +54,7 @@ public class HealthApiController { Once we compile and run the application with -```bash +```shell ./gradlew clean basic:basic-02-health-endpoint:build java -jar basic/basic-02-health-endpoint/build/libs/connector-health.jar ``` @@ -66,8 +67,7 @@ and can be configured using the `web.http.port` property (more on that in the ne this whenever you have two connectors running on the same machine. Also, the default path is `/api/*`, which is defined in -[`JettyConfiguration.java`](https://github.com/eclipse-edc/Connector/blob/releases/extensions/common/http/jetty-core/src/main/java/org/eclipse/edc/web/jetty/JettyConfiguration.java) -. +[`JettyConfiguration.java`](https://github.com/eclipse-edc/Connector/blob/releases/extensions/common/http/jetty-core/src/main/java/org/eclipse/edc/web/jetty/JettyConfiguration.java). --- diff --git a/basic/basic-03-configuration/README.md b/basic/basic-03-configuration/README.md index 1ddf965d..dc7de47a 100644 --- a/basic/basic-03-configuration/README.md +++ b/basic/basic-03-configuration/README.md @@ -3,8 +3,8 @@ So far we have not had any way to configure our system other than directly modifying code, which generally is not an elegant way. -The Eclipse Dataspace Connector exposes configuration through its `ConfigurationExtension` interface. That is a " -special" extension in that sense that it gets loaded at a very early stage. There is also a default implementation +The Eclipse Dataspace Connector exposes configuration through its `ConfigurationExtension` interface. That is a +special extension in that sense that it gets loaded at a very early stage. There is also a default implementation named [`FsConfigurationExtension.java`](https://github.com/eclipse-edc/Connector/blob/releases/extensions/common/configuration/configuration-filesystem/src/main/java/org/eclipse/edc/configuration/filesystem/FsConfigurationExtension.java) which uses a standard Java properties file to store configuration entries. 
@@ -21,14 +21,14 @@ dependencies { We compile and run the application with: -```bash +```shell ./gradlew clean basic:basic-03-configuration:build java -jar basic/basic-03-configuration/build/libs/filesystem-config-connector.jar ``` you will notice an additional log line stating that the "configuration file does not exist": -```bash +```shell INFO 2021-09-07T08:26:08.282159 Configuration file does not exist: dataspaceconnector-configuration.properties. Ignoring. ``` @@ -41,7 +41,7 @@ file is configurable using the `edc.fs.config` property, so we can customize thi First, create a properties file in a location of your convenience, e.g. `/etc/eclipse/dataspaceconnector/config.properties`. -```bash +```shell mkdir -p /etc/eclipse/dataspaceconnector touch /etc/eclipse/dataspaceconnector/config.properties ``` @@ -56,17 +56,16 @@ web.http.port=9191 An example file can be found [here](config.properties). Clean, rebuild and run the connector again, but this time passing the path to the config file: -```bash +```shell java -Dedc.fs.config=/etc/eclipse/dataspaceconnector/config.properties -jar basic/basic-03-configuration/build/libs/filesystem-config-connector.jar ``` Observing the log output we now see that the connector's REST API is exposed on port `9191` instead: -```bash +```shell INFO 2022-04-27T14:09:10.547662345 HTTP context 'default' listening on port 9191 <-- this is the relevant line DEBUG 2022-04-27T14:09:10.589738491 Port mappings: {alias='default', port=9191, path='/api'} INFO 2022-04-27T14:09:10.589846121 Started Jetty Service - ``` ## Add your own configuration value diff --git a/transfer/streaming/streaming-01-http-to-http/README.md b/transfer/streaming/streaming-01-http-to-http/README.md index e54c2030..203e62d8 100644 --- a/transfer/streaming/streaming-01-http-to-http/README.md +++ b/transfer/streaming/streaming-01-http-to-http/README.md @@ -55,7 +55,7 @@ Then put the path in the [asset.json](asset.json) file replacing the `{{sourceFo { "dataAddress": { "type": "HttpStreaming", - "sourceFolder": "{{sourceFolder}}" + "sourceFolder": "/tmp/source" } } ``` diff --git a/transfer/streaming/streaming-02-kafka-to-http/README.md b/transfer/streaming/streaming-02-kafka-to-http/README.md index 401f6958..d6a06907 100644 --- a/transfer/streaming/streaming-02-kafka-to-http/README.md +++ b/transfer/streaming/streaming-02-kafka-to-http/README.md @@ -138,15 +138,14 @@ docker run -p 4000:4000 http-request-logger It will run on port 4000. At this point the contract agreement should already been issued, to verify that, please check the contract negotiation -state with -this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call. +state with this call, replacing `{{contract-negotiation-id}}` with the id returned by the negotiate contract call. ```shell curl "http://localhost:28181/management/v2/contractnegotiations/{{contract-negotiation-id}}" -s | jq ``` If the `edc:contractAgreementId` is valued, it can be used to start the transfer, replacing it in -the [6-transfer.json](6-transfer.json) +the [transfer.json](6-transfer.json) file to `{{contract-agreement-id}}` and then calling the connector with this command: ```shell diff --git a/transfer/transfer-00-prerequisites/README.md b/transfer/transfer-00-prerequisites/README.md index b79ddad0..e84520fc 100644 --- a/transfer/transfer-00-prerequisites/README.md +++ b/transfer/transfer-00-prerequisites/README.md @@ -26,7 +26,7 @@ Before we can run a connector, we need to build the JAR file. 
-If the `edc:contractAgreementId` is valued, it can be used to start the transfer, replacing it in
-the [6-transfer.json](6-transfer.json)
-file to `{{contract-agreement-id}}` and then calling the connector with this command:
+If the `edc:contractAgreementId` has a value, it can be used to start the transfer: replace the
+`{{contract-agreement-id}}` placeholder in the [transfer.json](6-transfer.json) file with it, and then call the
+connector with this command:
 
 ```shell
 
diff --git a/transfer/transfer-00-prerequisites/README.md b/transfer/transfer-00-prerequisites/README.md
index b79ddad0..e84520fc 100644
--- a/transfer/transfer-00-prerequisites/README.md
+++ b/transfer/transfer-00-prerequisites/README.md
@@ -26,7 +26,7 @@ Before we can run a connector, we need to build the JAR file.
 
 Execute this command in project root:
 
-```bash
+```shell
 ./gradlew transfer:transfer-00-prerequisites:connector:build
 ```
 
@@ -56,13 +56,13 @@ This property is used to define the endpoint exposed by the control plane to val
 
 To run the provider, just run the following command
 
-```bash
+```shell
 java -Dedc.keystore=transfer/transfer-00-prerequisites/resources/certs/cert.pfx -Dedc.keystore.password=123456 -Dedc.vault=transfer/transfer-00-prerequisites/resources/configuration/provider-vault.properties -Dedc.fs.config=transfer/transfer-00-prerequisites/resources/configuration/provider-configuration.properties -jar transfer/transfer-00-prerequisites/connector/build/libs/connector.jar
 ```
 
 To run the consumer, just run the following command (different terminal)
 
-```bash
+```shell
 java -Dedc.keystore=transfer/transfer-00-prerequisites/resources/certs/cert.pfx -Dedc.keystore.password=123456 -Dedc.vault=transfer/transfer-00-prerequisites/resources/configuration/consumer-vault.properties -Dedc.fs.config=transfer/transfer-00-prerequisites/resources/configuration/consumer-configuration.properties -jar transfer/transfer-00-prerequisites/connector/build/libs/connector.jar
 ```
 
@@ -85,7 +85,7 @@ request to the management API of the connector.
 
 Open a new terminal and execute:
 
-```bash
+```shell
 curl -H 'Content-Type: application/json' \
      -d @transfer/transfer-00-prerequisites/resources/dataplane/register-data-plane-provider.json \
      -X POST "http://localhost:19193/management/v2/dataplanes" -s | jq
 ```
 
diff --git a/transfer/transfer-01-negotiation/README.md b/transfer/transfer-01-negotiation/README.md
index 8ee5cd4a..d344b7f5 100644
--- a/transfer/transfer-01-negotiation/README.md
+++ b/transfer/transfer-01-negotiation/README.md
@@ -41,7 +41,7 @@ of resources offered, through a contract offer, the so-called "catalog".
 
 The following [request](resources/create-asset.json) creates an asset on the provider connector.
 
-```bash
+```shell
 curl -d @transfer/transfer-01-negotiation/resources/create-asset.json \
   -H 'content-type: application/json' http://localhost:19193/management/v2/assets \
   -s | jq
 ```
 
@@ -65,7 +65,7 @@ to keep things simple, we will choose a policy that gives direct access to all t
 associated within the contract definitions. This means that the consumer connector can request any asset from the
 provider connector.
 
-```bash
+```shell
 curl -d @transfer/transfer-01-negotiation/resources/create-policy.json \
   -H 'content-type: application/json' http://localhost:19193/management/v2/policydefinitions \
   -s | jq
 ```
 
@@ -78,21 +78,18 @@ the asset, on the basis of which a contract agreement can be negotiated. The con
 associates policies to a selection of assets to generate the contract offers that will be put in the catalog. In this
 case, the selection is empty, so every asset is attached to these policies.
 
-```bash
+```shell
 curl -d @transfer/transfer-01-negotiation/resources/create-contract-definition.json \
   -H 'content-type: application/json' http://localhost:19193/management/v2/contractdefinitions \
   -s | jq
-
 ```
 
-Sample output:
+Sample output (excerpt):
 
 ```json
 {
-  ...
   "@id": "1",
-  "edc:createdAt": 1674578184023,
-  ...
+  "edc:createdAt": 1674578184023
 }
 ```
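+
+If you want to double-check what was created, one way (a sketch; this assumes the standard management-API query
+endpoints, which are not otherwise used in this sample) is to list the stored contract definitions:
+
+```shell
+# Sketch: query all contract definitions currently stored on the provider.
+curl -X POST "http://localhost:19193/management/v2/contractdefinitions/request" \
+  -H 'content-type: application/json' \
+  -s | jq
+```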
@@ -103,7 +100,7 @@ all the contract offers available for negotiation. In our case, it will contain
 offer, the so-called "catalog".
 
 To get the catalog from the consumer side, you can use the following request:
 
-```bash
+```shell
 curl -X POST "http://localhost:29193/management/v2/catalog/request" \
   -H 'Content-Type: application/json' \
   -d @transfer/transfer-01-negotiation/resources/fetch-catalog.json -s | jq
 ```
 
@@ -181,20 +178,18 @@ looks as follows:
 
 Of course, this is the simplest possible negotiation sequence. Later on, both connectors can also send counteroffers in
 addition to just confirming or declining an offer.
 
-```bash
+```shell
 curl -d @transfer/transfer-01-negotiation/resources/negotiate-contract.json \
   -X POST -H 'content-type: application/json' http://localhost:29193/management/v2/contractnegotiations \
   -s | jq
 ```
 
-Sample output:
+Sample output (excerpt):
 
 ```json
 {
-  ...
   "@id": "254015f3-5f1e-4a59-9ad9-bf0e42d4819e",
-  "edc:createdAt": 1685525281848,
-  ...
+  "edc:createdAt": 1685525281848
 }
 ```
 
@@ -207,7 +202,7 @@ state machine. Once both provider and consumer either reach the `confirmed` or t
 state, the negotiation is finished. We can now use the UUID to check the current status of the negotiation using an
 endpoint on the consumer side.
 
-```bash
+```shell
 curl -X GET "http://localhost:29193/management/v2/contractnegotiations/{{contract-negotiation-id}}" \
 --header 'Content-Type: application/json' \
 -s | jq
 ```
 
@@ -224,7 +219,7 @@ Sample output:
   "edc:state": "FINALIZED",
   "edc:counterPartyAddress": "http://localhost:19194/protocol",
   "edc:callbackAddresses": [],
-  "edc:contractAgreementId": "0b3150be-feaf-43bc-91e1-90f050de28bd", <---------
+  "edc:contractAgreementId": "0b3150be-feaf-43bc-91e1-90f050de28bd",
   "@context": {
     "dct": "https://purl.org/dc/terms/",
     "edc": "https://w3id.org/edc/v0.0.1/ns/",
diff --git a/transfer/transfer-02-consumer-pull/README.md b/transfer/transfer-02-consumer-pull/README.md
index 4de7b639..a234cdca 100644
--- a/transfer/transfer-02-consumer-pull/README.md
+++ b/transfer/transfer-02-consumer-pull/README.md
@@ -28,7 +28,7 @@ order.
 
 As a pre-requisite, you need to have a logging webserver that runs on port 4000 and logs all the incoming
 requests, it will be mandatory to get the EndpointDataReference that will be used to get the data.
 
-```bash
+```shell
 docker build -t http-request-logger util/http-request-logger
 docker run -p 4000:4000 http-request-logger
 ```
 
@@ -39,12 +39,11 @@ In the [request body](resources/start-transfer.json), we need to specify which a
 provider connector and where we want the file transferred. Before executing the request, insert the
 `contractAgreementId` from the previous chapter. Then run:
 
-```bash
+```shell
 curl -X POST "http://localhost:29193/management/v2/transferprocesses" \
   -H "Content-Type: application/json" \
   -d @transfer/transfer-02-consumer-pull/resources/start-transfer.json \
   -s | jq
-
 ```
 
 > the "HttpProxy" method is used for the consumer pull method, and it means that it will be up to
 
@@ -56,14 +55,12 @@ process id) created on the consumer side, because like the contract negotiation,
 the data transfer is handled in a state machine and performed asynchronously.
 
-Sample output:
+Sample output (excerpt):
 
 ```json
 {
-  ...
   "@id": "591bb609-1edb-4a6b-babe-50f1eca3e1e9",
-  "edc:createdAt": 1674078357807,
-  ...
+  "edc:createdAt": 1674078357807
 }
 ```
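+
+As a convenience, the id can also be captured directly when starting the transfer. A sketch (not part of the sample;
+it assumes `jq` is available):
+
+```shell
+# Sketch: start the transfer and keep the returned transfer-process id.
+TRANSFER_PROCESS_ID=$(curl -X POST "http://localhost:29193/management/v2/transferprocesses" \
+  -H "Content-Type: application/json" \
+  -d @transfer/transfer-02-consumer-pull/resources/start-transfer.json \
+  -s | jq -r '."@id"')
+echo "$TRANSFER_PROCESS_ID"
+```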
@@ -72,20 +69,17 @@ Sample output:
 
 ### 2. Check the transfer status
 
 Due to the nature of the transfer, it will be very fast and most likely already done by the time you
 read the UUID.
 
-```bash
-curl http://localhost:29193/management/v2/transferprocesses/
+```shell
+curl http://localhost:29193/management/v2/transferprocesses/{{transfer-process-id}} -s | jq
 ```
 
 You should see the Transfer Process in `STARTED` state:
 
 ```json
 {
-  ...
   "@id": "591bb609-1edb-4a6b-babe-50f1eca3e1e9",
-  "edc:state": "STARTED",
-  ...
+  "edc:state": "STARTED"
 }
-
 ```
 
 > Note that for the consumer pull scenario the TP will stay in STARTED state after the data has been transferred successfully.
 
@@ -101,28 +95,26 @@ to get the data from the provider:
 
   "id": "591bb609-1edb-4a6b-babe-50f1eca3e1e9",
   "endpoint": "http://localhost:29291/public/",
   "authKey": "Authorization",
-  "authCode": "eyJhbGciOiJSUzI1NiJ9.eyJkYWQiOiJ7XCJwcm9wZXJ0aWVzXCI6e1wiYXV0aEtleVwiOlwiQXV0aG9yaXphdGlvblwiLFwiYmFzZVVybFwiOlwiaHR0cDpcL1wvbG9jYWxob3N0OjE5MjkxXC9wdWJsaWNcL1wiLFwiYXV0aENvZGVcIjpcImV5SmhiR2NpT2lKU1V6STFOaUo5LmV5SmtZV1FpT2lKN1hDSndjbTl3WlhKMGFXVnpYQ0k2ZTF3aVltRnpaVlZ5YkZ3aU9sd2lhSFIwY0hNNlhDOWNMMnB6YjI1d2JHRmpaV2h2YkdSbGNpNTBlWEJwWTI5a1pTNWpiMjFjTDNWelpYSnpYQ0lzWENKdVlXMWxYQ0k2WENKVVpYTjBJR0Z6YzJWMFhDSXNYQ0owZVhCbFhDSTZYQ0pJZEhSd1JHRjBZVndpZlgwaUxDSmxlSEFpT2pFMk56UTFPRGcwTWprc0ltTnBaQ0k2SWpFNk1XVTBOemc1TldZdE9UQXlOUzAwT1dVeExUazNNV1F0WldJNE5qVmpNemhrTlRRd0luMC5ITFJ6SFBkT2IxTVdWeWdYZi15a0NEMHZkU3NwUXlMclFOelFZckw5eU1tQjBzQThwMHFGYWV0ZjBYZHNHMG1HOFFNNUl5NlFtNVU3QnJFOUwxSE5UMktoaHFJZ1U2d3JuMVhGVUhtOERyb2dSemxuUkRlTU9ZMXowcDB6T2MwNGNDeFJWOEZoemo4UnVRVXVFODYwUzhqbU4wZk5sZHZWNlFpUVFYdy00QmRTQjNGYWJ1TmFUcFh6bDU1QV9SR2hNUGphS2w3RGsycXpJZ0ozMkhIdGIyQzhhZGJCY1pmRk12aEM2anZ2U1FieTRlZXU0OU1hclEydElJVmFRS1B4ajhYVnI3ZFFkYV95MUE4anNpekNjeWxyU3ljRklYRUV3eHh6Rm5XWmczV2htSUxPUFJmTzhna2RtemlnaXRlRjVEcmhnNjZJZzJPR0Eza2dBTUxtc3dcIixcInByb3h5TWV0aG9kXCI6XCJ0cnVlXCIsXCJwcm94eVF1ZXJ5UGFyYW1zXCI6XCJ0cnVlXCIsXCJwcm94eUJvZHlcIjpcInRydWVcIixcInR5cGVcIjpcIkh0dHBEYXRhXCIsXCJwcm94eVBhdGhcIjpcInRydWVcIn19IiwiZXhwIjoxNjc0NTg4NDI5LCJjaWQiOiIxOjFlNDc4OTVmLTkwMjUtNDllMS05NzFkLWViODY1YzM4ZDU0MCJ9.WhbTzERmM75mNMUG2Sh-8ZW6uDQCus_5uJPvGjAX16Ucc-2rDcOhAxrHjR_AAV4zWjKBHxQhYk2o9jD-9OiYb8Urv8vN4WtYFhxJ09A0V2c6lB1ouuPyCA_qKqJEWryTbturht4vf7W72P37ERo_HwlObOuJMq9CS4swA0GBqWupZHAnF-uPIQckaS9vLybJ-gqEhGxSnY4QAZ9-iwSUhkrH8zY2GCDkzAWIPmvtvRhAs9NqVkoUswG-ez1SUw5bKF0hn2OXv_KhfR8VsKKYUbKDQf5Wagk7rumlYbXMPNAEEagI4R0xiwKWVTfwwZPy_pYnHE7b4GQECz3NjhgdIw",
+  "authCode": "ey..",
   "properties": {
     "cid": "1:1e47895f-9025-49e1-971d-eb865c38d540"
   }
 }
 ```
 
-Once this json is read, use a tool like postman or curl to execute the following query, to read the
-data
+Once this JSON is read, use a tool like Postman or curl to execute the following query to read the data:
 
-```bash
-curl --location --request GET 'http://localhost:29291/public/' --header 'Authorization: '
+```shell
+curl --location --request GET 'http://localhost:29291/public/' --header 'Authorization: {{auth-code}}' -s | jq
 ```
 
 At the end, and to be sure that you correctly achieved the pull, you can check if the data you get is the same as the
 one you can get at https://jsonplaceholder.typicode.com/users
-
 Since we configured the `HttpData` with `proxyPath`, we could also ask for a specific user with:
 
-```bash
-curl --location --request GET 'http://localhost:29291/public/1' --header 'Authorization: '
+```shell
+curl --location --request GET 'http://localhost:29291/public/1' --header 'Authorization: {{auth-code}}' -s | jq
 ```
 
 And the data returned will be the same as in https://jsonplaceholder.typicode.com/users/1
 
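+
+To make that comparison concrete, a small sketch (assuming `curl` and `diff` are available; the file paths are
+arbitrary examples):
+
+```shell
+# Sketch: the proxied payload should match the upstream source byte for byte.
+curl -s 'http://localhost:29291/public/1' --header 'Authorization: {{auth-code}}' > /tmp/edc-pull.json
+curl -s 'https://jsonplaceholder.typicode.com/users/1' > /tmp/upstream.json
+diff /tmp/edc-pull.json /tmp/upstream.json && echo "payloads match"
+```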
diff --git a/transfer/transfer-03-provider-push/README.md b/transfer/transfer-03-provider-push/README.md
index 07277efa..057596dd 100644
--- a/transfer/transfer-03-provider-push/README.md
+++ b/transfer/transfer-03-provider-push/README.md
@@ -16,47 +16,45 @@ This samples consists of:
 
 The following steps assume your provider and consumer connectors are still up and running and contract negotiation has
 taken place successfully. Furthermore, the http server should be up as well.
 
-If not, re-visit the [Prerequisites](../transfer-00-prerequisites/README.md)
-, [Negotiation](../transfer-01-negotiation/README.md) and [Consumer Pull](../transfer-02-consumer-pull/README.md) chapters.
+If not, re-visit
+the [Prerequisites](../transfer-00-prerequisites/README.md), [Negotiation](../transfer-01-negotiation/README.md)
+and [Consumer Pull](../transfer-02-consumer-pull/README.md) chapters.
 
 # Run the sample
 
-Running this sample consists of multiple steps, that are executed one by one and following the same
-order.
+Running this sample consists of multiple steps that are executed one by one, in the order given.
 
 ### 1. Start the transfer
 
-Before executing the request, modify the [request body](resources/start-transfer.json) by inserting the contract agreement ID
-from the [Negotiation](../transfer-01-negotiation/README.md) chapter.
+Before executing the request, modify the [request body](resources/start-transfer.json) by inserting the contract
+agreement ID from the [Negotiation](../transfer-01-negotiation/README.md) chapter.
 You can re-use the same asset, policies and contract negotiation from before.
 
-```bash
+```shell
 curl -X POST "http://localhost:29193/management/v2/transferprocesses" \
   -H "Content-Type: application/json" \
   -d @transfer/transfer-03-provider-push/resources/start-transfer.json \
   -s | jq
 ```
+
 > keep in mind that, to make a transfer with a provider push method, the dataDestination type should
 > be any value different from the "HttpProxy".
 
-Sample output:
+Sample output (excerpt):
 
 ```json
 {
-  ...
   "@id": "591bb609-1edb-4a6b-babe-50f1eca3e1e9",
-  "edc:createdAt": 1674078357807,
-  ...
+  "edc:createdAt": 1674078357807
 }
 ```
 
 ### 2. Check the transfer status
 
-Due to the nature of the transfer, it will be very fast and most likely already done by the time you
-read the UUID.
+Due to the nature of the transfer, it will be very fast and most likely already done by the time you read the UUID.
 
-```bash
-curl http://localhost:29193/management/v2/transferprocesses/
+```shell
+curl http://localhost:29193/management/v2/transferprocesses/{{transfer-process-id}} -s | jq
 ```
 
 Notice the transfer COMPLETED state
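+
+If you want to wait for completion programmatically instead of re-running the call by hand, a rough sketch (assuming
+`jq`, and reusing the placeholder above) could look like this:
+
+```shell
+# Sketch: poll the transfer process until it reports COMPLETED.
+while [ "$(curl -s http://localhost:29193/management/v2/transferprocesses/{{transfer-process-id}} \
+  | jq -r '."edc:state"')" != "COMPLETED" ]; do
+  sleep 1
+done
+echo "transfer completed"
+```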
diff --git a/transfer/transfer-04-event-consumer/README.md b/transfer/transfer-04-event-consumer/README.md
index 9780c9a4..de2f03c8 100644
--- a/transfer/transfer-04-event-consumer/README.md
+++ b/transfer/transfer-04-event-consumer/README.md
@@ -10,21 +10,12 @@ Also, in order to keep things organized, the code in this example has been separ
 
 ## Inspect the listener
 
-A `TransferProcessListener` may define methods that are invoked after a transfer changes state, for example, to notify an
-external application on the consumer side after data has been produced (i.e. the transfer moves to the completed state).
+A `TransferProcessListener` may define methods that are invoked after a transfer changes state, for example, to notify
+an external application on the consumer side after data has been produced (i.e. the transfer moves to the completed
+state).
 
-```java
-// in TransferListenerExtension.java
-    @Override
-    public void initialize(ServiceExtensionContext context) {
-        // ...
-        var transferProcessObservable = context.getService(TransferProcessObservable.class);
-        transferProcessObservable.registerListener(new MarkerFileCreator(monitor));
-    }
-```
-
-The `TransferProcessStartedListener` implements the `TransferProcessListener` interface.
-It will consume the transfer `STARTED` event and write a log message.
+The `TransferProcessStartedListener` implements the `TransferProcessListener` interface. It will consume the
+transfer `STARTED` event and write a log message.
 
 ```java
 public class TransferProcessStartedListener implements TransferProcessListener {
@@ -50,26 +41,26 @@ public class TransferProcessStartedListener implements TransferProcessListener {
 
 ## Run the sample
 
-Assuming your provider connector and logging webserver are still running, we can re-use the existing assets and contract definitions stored on
-provider side. If not, set up your assets and contract definitions as described in the [Negotiation](../transfer-01-negotiation/README.md)
-chapter.
+Assuming your provider connector and logging webserver are still running, we can re-use the existing assets and contract
+definitions stored on the provider side. If not, set up your assets and contract definitions as described in
+the [Negotiation](../transfer-01-negotiation/README.md) chapter.
 
 ### 1. Build & launch the consumer with listener extension
 
-This consumer connector is based on a different build file, hence a new JAR file will be built.
-Make sure to terminate your current consumer connector from the previous chapters.
+This consumer connector is based on a different build file, hence a new JAR file will be built.
+Make sure to terminate your current consumer connector from the previous chapters.
 That way we unblock the ports and can reuse the known configuration files and API calls.
 
 Run this to build and launch the consumer with listener extension:
 
-```bash
+```shell
 ./gradlew transfer:transfer-04-event-consumer:consumer-with-listener:build
 java -Dedc.keystore=transfer/transfer-00-prerequisites/resources/certs/cert.pfx -Dedc.keystore.password=123456 -Dedc.vault=transfer/transfer-00-prerequisites/resources/configuration/consumer-vault.properties -Dedc.fs.config=transfer/transfer-00-prerequisites/resources/configuration/consumer-configuration.properties -jar transfer/transfer-04-event-consumer/consumer-with-listener/build/libs/connector.jar
-````
+```
 
 ### 2. Negotiate a new contract
 
-```bash
+```shell
 curl -d @transfer/transfer-01-negotiation/resources/negotiate-contract.json \
 -X POST -H 'content-type: application/json' http://localhost:29193/management/v2/contractnegotiations \
 -s | jq
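+
+As before, a small sketch (assuming `jq` is available) to capture the id of the new negotiation for the next step:
+
+```shell
+# Sketch: store the id of the freshly started contract negotiation.
+NEGOTIATION_ID=$(curl -d @transfer/transfer-01-negotiation/resources/negotiate-contract.json \
+  -X POST -H 'content-type: application/json' http://localhost:29193/management/v2/contractnegotiations \
+  -s | jq -r '."@id"')
+echo "$NEGOTIATION_ID"
+```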
@@ -77,7 +68,7 @@ curl -d @transfer/transfer-01-negotiation/resources/negotiate-contract.json \
 
 ### 3. Get the contract agreement id
 
-```bash
+```shell
 curl -X GET "http://localhost:29193/management/v2/contractnegotiations/{{contract-negotiation-id}}" \
 --header 'Content-Type: application/json' \
 -s | jq
@@ -85,10 +76,10 @@ curl -X GET "http://localhost:29193/management/v2/contractnegotiations/

Date: Tue, 21 Nov 2023 13:33:14 +0100
Subject: [PATCH 4/4] docs: use placeholder for auth code

---
 transfer/transfer-02-consumer-pull/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/transfer/transfer-02-consumer-pull/README.md b/transfer/transfer-02-consumer-pull/README.md
index a234cdca..cabbfd45 100644
--- a/transfer/transfer-02-consumer-pull/README.md
+++ b/transfer/transfer-02-consumer-pull/README.md
@@ -95,7 +95,7 @@ to get the data from the provider:
   "id": "591bb609-1edb-4a6b-babe-50f1eca3e1e9",
   "endpoint": "http://localhost:29291/public/",
   "authKey": "Authorization",
-  "authCode": "ey..",
+  "authCode": "{{auth-code}}",
   "properties": {
     "cid": "1:1e47895f-9025-49e1-971d-eb865c38d540"
   }