From 6592f51a77b074e61316648ad5e5d84adc801d17 Mon Sep 17 00:00:00 2001 From: DinithiDiaz Date: Tue, 5 Mar 2024 07:49:55 +0530 Subject: [PATCH 01/23] Remove MI pages from Get Started section --- .../integration-quick-start-guide.md | 389 ------------------ 1 file changed, 389 deletions(-) delete mode 100644 en/docs/get-started/integration-quick-start-guide.md diff --git a/en/docs/get-started/integration-quick-start-guide.md b/en/docs/get-started/integration-quick-start-guide.md deleted file mode 100644 index af8bad91f8..0000000000 --- a/en/docs/get-started/integration-quick-start-guide.md +++ /dev/null @@ -1,389 +0,0 @@ -# Quick Start Guide - Integration - -Let's get started with WSO2 Micro Integrator by running a simple integration use case in your local environment. - -## Before you begin... - -1. Install Java SE Development Kit (JDK) version 11 and set the `JAVA_HOME` environment variable. - - !!! Info - For information on the compatible JDK types and setting the `JAVA_HOME` environment variable for different operating systems, see [Setup and Install]({{base_path}}/install-and-setup/install/installing-the-product/installing-api-m-runtime/). - -2. Go to the [WSO2 Micro Integrator web page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Zip Archive** to download the Micro Integrator distribution as a ZIP file. -3. Optionally, navigate to the [API Manager Tooling web page](https://wso2.com/api-management/tooling/), and download WSO2 Integration Studio. - - !!! Info - For more information, see the [installation instructions]({{base_path}}/install-and-setup/install-and-setup-overview/#installing_1). - -4. Download the [sample files]({{base_path}}/assets/attachments/quick-start-guide/mi-qsg-home.zip). From this point onwards, let's refer to this directory as ``. -5. Download [curl](https://curl.haxx.se/) or a similar tool that can call an HTTP endpoint. -6. Optionally, go to the [WSO2 API Manager website](https://wso2.com/api-management/), click **TRY IT NOW**, and then click **Zip Archive** to download the API Manager distribution as a ZIP file. - -## What you'll build - -This is a simple service orchestration scenario. The scenario is about a basic healthcare system where the Micro Integrator is used to integrate two back-end hospital services to provide information to the client. - -Most healthcare centers have a system that is used to make doctor appointments. To check the availability of the doctors for a particular time, users typically need to visit the hospitals or use each and every online system that is dedicated to a particular healthcare center. Here, we are making it easier for patients by orchestrating those isolated systems for each healthcare provider and exposing a single interface to the users. - - - - -!!! Tip - You may export` /HealthcareIntegrationProject` to Integration Studio to view the project structure. - -In the above scenario, the following takes place: - -1. The client makes a call to the Healthcare API created using Micro Integrator. - -2. The Healthcare API calls the Pine Valley Hospital back-end service and gets the queried information. - -3. The Healthcare API calls the Grand Oak Hospital back-end service and gets the queried information. - -4. The response is returned to the client with the required information. - -Both Grand Oak Hospital and Pine Valley Hospital have services exposed over the HTTP protocol. - -The Pine Valley Hospital service accepts a POST request in the following service endpoint URL. 
- -```bash -http://:/pineValley/doctors -``` - -The Grand Oak Hospital service accepts a GET request in the following service endpoint URL. - -```bash -http://:/grandOak/doctors/ -``` - -The expected payload should be in the following JSON format: - -```bash -{ - "doctorType": "" -} -``` - -Let’s implement a simple integration solution that can be used to query the availability of doctors for a particular category from all the available healthcare centers. - - -### Step 1 - Set up the workspace - -To set up the integration workspace for this quick start guide, we will use an integration project that was built using WSO2 Integration Studio: - -1. Extract the downloaded WSO2 Micro Integrator and sample files into the same directory location. - -2. Navigate to the `` directory. -The following project files and executable back-end services are available in the ``. - -- **HealthcareIntegrationProject/HealthcareIntegrationProjectConfigs**: This is the ESB Config module with the integration artifacts for the healthcare service. This service consists of the following REST API: - - - -
**HealthcareAPI.xml**

```xml
<?xml version="1.0" encoding="UTF-8"?>
<api context="/healthcare" name="HealthcareAPI" xmlns="http://ws.apache.org/ns/synapse">
    <resource methods="GET" uri-template="/doctor/{doctorType}">
        <inSequence>
            <!-- Clone the request and call both hospital back-end services. -->
            <clone>
                <target>
                    <sequence>
                        <call>
                            <endpoint>
                                <!-- Host and port correspond to the bundled Grand Oak mock service. -->
                                <http method="GET" uri-template="http://localhost:9090/grandOak/doctors/{uri.var.doctorType}"/>
                            </endpoint>
                        </call>
                    </sequence>
                </target>
                <target>
                    <sequence>
                        <!-- Pine Valley expects a POST request with a JSON payload. -->
                        <payloadFactory media-type="json">
                            <format>
                                {
                                "doctorType": "$1"
                                }
                            </format>
                            <args>
                                <arg evaluator="xml" expression="$ctx:uri.var.doctorType"/>
                            </args>
                        </payloadFactory>
                        <call>
                            <endpoint>
                                <!-- Host and port correspond to the bundled Pine Valley mock service. -->
                                <http method="POST" uri-template="http://localhost:9091/pineValley/doctors"/>
                            </endpoint>
                        </call>
                    </sequence>
                </target>
            </clone>
            <!-- Aggregate both responses into a single payload for the client. -->
            <aggregate>
                <completeCondition>
                    <messageCount max="-1" min="-1"/>
                </completeCondition>
                <onComplete expression="json-eval($)">
                    <send/>
                </onComplete>
            </aggregate>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
</api>
```
- - - It also contains the following two files in the metadata folder. - - - !!! Tip - This data is used later in this guide by the API management runtime to generate the managed API proxy. - - - - - - - - - - - -
- **HealthcareAPI_metadata.yaml**: This file contains the metadata of the integration service. The default **serviceUrl** is configured as `http://localhost:8290/healthcare`. If you are running the Micro Integrator on a different host and port, you may have to change these values.
- **HealthcareAPI_swagger.yaml**: This Swagger file contains the OpenAPI definition of the integration service.
- -- **HealthcareIntegrationProject/HealthcareIntegrationProjectCompositeExporter**: This is the Composite Application Project folder, which contains the packaged CAR file of the healthcare service. - -- **Backend**: This contains an executable .jar file that contains mock back-end service implementations for the Pine Valley Hospital and Grand Oak Hospital. - -- **bin**: This contains a script to copy artifacts and run the backend service. - -### Step 2 - Running the integration artifacts - -Follow the steps given below to run the integration artifacts we developed on a Micro Integrator instance that is installed on a VM. - -1. Run `run.sh/run.bat` script in `/bin` based on your operating system to start up the workspace. - 1. Open a terminal and navigate to the `/bin` folder. - 2. Execute the relevant OS specific command: - - === "On MacOS/Linux/CentOS" - ```bash - sh run.sh - ``` - - === "On Windows" - ```bash - run.bat - ``` - - !!! Tip - The script assumes `MI_HOME` and `` are located in the same directory. It carries out the following steps. - - - Start the back-end services. - - Two mock hospital information services are available in the `DoctorInfo.jar` file located in the `/Backend/` directory. - - To manually start the service, open a terminal window, navigate to the `/Backend/` folder, and use the following command to start the services: - - ``` bash - java -jar DoctorInfo.jar - ``` - - - Deploy the Healthcare service. - - Copy the CAR file of the Healthcare service (HealthcareIntegrationProjectCompositeExporter_1.0.0-SNAPSHOT.car) from the `/HealthcareIntegrationProject/HealthcareIntegrationProjectCompositeExporter/target/` directory to the `/repository/deployment/server/carbonapps` directory. - -2. Start the Micro Integrator. - - 1. Execute the relevant command in a terminal based on the OS: - - === "On MacOS/Linux/CentOS" - ```bash - sh micro-integrator.sh - ``` - === "On Windows" - ```bash - micro-integrator.bat - ``` - -4. (Optional) Start the Dashboard. - - If you want to view the integration artifacts deployed in the Micro Integrator, you can start the dashboard. The instructions on running the MI dashboard is given in the installation guide: - - 1. [Install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi-dashboard) the MI dashboard. - 2. [Start]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi-dashboard) the MI dashboard. - - You can now test the **HealthcareIntegrationService** that you just generated. - -### Step 3 - Testing the integration service - -1. Invoke the healthcare service. - - Open a terminal and execute the following curl command to invoke the service: - - ```bash - curl -v http://localhost:8290/healthcare/doctor/Ophthalmologist - ``` - - Upon invocation, you should be able to observe the following response: - - ```bash - [ - [ - { - "name":"John Mathew", - "time":"03:30 PM", - "hospital":"Grand Oak" - }, - { - "name":"Allan Silvester", - "time":"04:30 PM", - "hospital":"Grand Oak" - } - ], - [ - { - "name":"John Mathew", - "time":"07:30 AM", - "hospital":"pineValley" - }, - { - "name":"Roma Katherine", - "time":"04:30 PM", - "hospital":"pineValley" - } - ] - ] - ``` - **Congratulations!** - Now you have created your first integration service. Optionally, you can follow the steps given below to expose the service as a Managed API in API Manager. 
- -## Exposing an Integration Service as a Managed API - -The REST API you deployed in the Micro Integrator is an **integration service** for the API Manager. Now, let's look at how you can expose the integration service to the API Management layer and generate a managed API by using the service. - -### Step 1 - Expose your integration as a service - -1. Start the API Manager runtime: - - 1. Extract the API Manager ZIP file. - 3. Start WSO2 API Manager: - - Open a terminal, navigate to the `/bin` directory, and execute the relevant command. - - - === "On MacOS/Linux" - ```bash - ./api-manager.sh - ``` - - === "On Windows" - ```bash - api-manager.bat --run - ``` - -2. Update and start the Micro Integrator runtime: - - 1. Stop the Micro Integrator. - - 2. Uncomment the following configuration from the `/conf/deployment.toml` file of the Micro Integrator. - - !!! Tip - The default username and password for connecting to the API gateway is `admin:admin`. - - - ```toml - [[service_catalog]] - apim_host = "https://localhost:9443" - enable = true - username = "admin" - password = "admin" - ``` - - 3. Start the Micro Integrator again. - - You will see the following in the server start-up log. - - ```bash - Successfully updated the service catalog - ``` - -3. Access the integration service from the **API Publisher**: - - 1. Sign in to the **API Publisher**: `https://localhost:9443/publisher` - - !!! Tip - Use `admin` as the user name and password. - - 2. Select the **Services** from the menu. - - - - 3. See that the `HealthcareAPI` is listed as a service. -` ` -### Step 2 - Create a managed API using the Integration Service - -1. Click on the `HealthcareAPI` that is in the service catalog. - -2. Click **Create API**. - - This opens the **Create API** dialog box with the API details that are generated based on the service. - - create api dialog box - -3. Update the API name, context, and version if required, and click **Create API**. - - The overview page of the API that you just created appears. - - apis list - -4. Navigate to **Develop -> API Configurations -> Endpoints** from the left menu. You will see that **Service Endpoint** is already selected and the production endpoint is already provided. - - Select the `Sandbox Endpoint`, add the endpoint `http://localhost:8290/healthcare`, and **Save**. - -5. Update the portal configurations and API configurations as required. - - Now, you have successfully created an API using the service. - -### Step 3 - Publish the managed API - -1. Navigate to **Deployments** and click **Deploy** to create a revision to deploy in the default Gateway environment. - -2. Navigate to **Lifecycle** and click **Publish** to publish the API in the Gateway environment. - - - - If the API is published successfully, the lifecycle state will shift to **PUBLISHED**. - -### Step 4 - Invoke the Managed `HealthcareAPI` via Developer Portal - -1. Navigate to the **Developer Portal** by clicking on the `View In Dev Portal` at the top menu. - - - -2. Sign in using the default username/password `admin/admin`. You will be redirected to the **APIs**. - -3. Under **APIs**, you will see the published `HealthcareAPI`. Click on it to navigate to the Overview of the API. - -4. Click `Try Out`. This will create a subscription to the API using `Default Application`. - - - -5. Click `GET TEST KEY` to get a test token to invoke the API. - - - -6. Click **GET** resource `/doctor​/{doctorType}`. Click on **Try It Out**. Enter `Ophthalmologist` in the doctorType field and click **Execute**. 
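As an alternative to the portal's **Try Out** console, you can invoke the published API from the command line. The following is a minimal sketch, assuming the API was created with the `/healthcare` context and version `1.0.0`, that the gateway is listening on the default HTTPS port 8243, and that `$TOKEN` holds the test key you generated in the previous step:

```bash
# Invoke the managed HealthcareAPI through the API gateway.
# -k is used because the default gateway certificate is self-signed.
curl -k -X GET "https://localhost:8243/healthcare/1.0.0/doctor/Ophthalmologist" \
  -H "Authorization: Bearer $TOKEN"
```

If the subscription and token are valid, you should receive the same aggregated doctor list that the direct Micro Integrator invocation returned earlier.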
- - - - -## What's next? - -- [Develop your first integration solution]({{base_path}}/integrate/develop/integration-development-kickstart). -- Try out the **examples** available in the [Integrate section of our documentation]({{base_path}}/integrate/integration-overview/). -- Try out the entire developer guide on [Exposing an Integration Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial/). -- Try out the entire developer guide on [Exposing an Integration SOAP Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial-for-proxy-services/). \ No newline at end of file From 3221cb9d2f8140a278e428e35478ea3c2f6311dc Mon Sep 17 00:00:00 2001 From: DinithiDiaz Date: Tue, 5 Mar 2024 09:00:32 +0530 Subject: [PATCH 02/23] Remove Integrate section --- en/docs/integrate/api-led-integration.md | 56 - .../applying-security-to-a-proxy-service.md | 181 --- .../applying-security-to-an-api.md | 165 --- ...anging-the-endpoint-of-deployed-service.md | 227 ---- .../dynamic-user-authentication.md | 143 -- .../extend-role-based-filtering-for-ds.md | 144 -- .../using-swagger-for-apis.md | 70 - .../develop/create-data-services-configs.md | 18 - .../integrate/develop/create-datasources.md | 10 - .../develop/create-docker-project.md | 227 ---- .../develop/create-integration-project.md | 178 --- .../develop/create-kubernetes-project.md | 308 ----- .../creating-artifacts/adding-connectors.md | 51 - .../creating-a-message-processor.md | 67 - .../creating-a-message-store.md | 72 - .../creating-a-proxy-service.md | 134 -- .../creating-artifacts/creating-an-api.md | 412 ------ .../creating-an-inbound-endpoint.md | 105 -- .../creating-endpoint-templates.md | 75 - .../creating-artifacts/creating-endpoints.md | 82 -- .../creating-registry-resources.md | 101 -- .../creating-reusable-sequences.md | 93 -- .../creating-scheduled-task.md | 54 - .../creating-sequence-templates.md | 63 - .../data-services/creating-data-services.md | 402 ------ .../data-services/creating-datasources.md | 48 - .../creating-input-validators.md | 49 - .../data-services/securing-data-services.md | 125 -- .../creating-local-registry-entries.md | 94 -- .../using_docker_secrets.md | 147 -- .../creating-artifacts/using_k8s_secrets.md | 82 -- .../develop/creating-unit-test-suite.md | 211 --- .../creating-custom-inbound-endpoint.md | 73 - .../creating-custom-mediators.md | 40 - .../creating-custom-task-scheduling.md | 443 ------ .../customizations/creating-new-connector.md | 162 --- .../creating-synapse-handlers.md | 97 -- .../integrate/develop/debugging-mediation.md | 156 --- en/docs/integrate/develop/deploy-artifacts.md | 27 - .../integrate/develop/endpoint-trace-logs.md | 4 - en/docs/integrate/develop/export_project.md | 48 - .../integrate/develop/exporting-artifacts.md | 11 - .../develop/generate-docker-image.md | 107 -- .../generate-service-catalog-metadata.md | 34 - en/docs/integrate/develop/hot-deployment.md | 15 - .../integrate/develop/importing-artifacts.md | 42 - .../integrate/develop/importing-projects.md | 15 - .../integrate/develop/injecting-parameters.md | 575 -------- .../installing-wso2-integration-studio.md | 71 - .../integration-development-kickstart.md | 504 ------- .../develop/intro-integration-development.md | 354 ----- .../develop/monitoring-api-level-logs.md | 69 - .../develop/monitoring-service-level-logs.md | 67 - .../integrate/develop/packaging-artifacts.md | 44 - ...troubleshooting-wso2-integration-studio.md | 138 -- 
.../using-embedded-micro-integrator.md | 115 -- .../develop/using-remote-micro-integrator.md | 78 -- en/docs/integrate/develop/using-wire-logs.md | 103 -- .../develop/working-with-service-catalog.md | 96 -- .../working-with-wso2-integration-studio.md | 17 - .../develop/wso2-integration-studio.md | 133 -- .../data_integration/batch-requesting.md | 134 -- .../data_integration/carbon-data-service.md | 90 -- .../data_integration/csv-data-service.md | 122 -- .../data_integration/data-input-validator.md | 110 -- .../distributed-trans-data-service.md | 134 -- .../json-with-data-service.md | 435 ------ .../data_integration/mongo-data-service.md | 97 -- .../nested-queries-in-data-service.md | 290 ---- .../data_integration/odata-service.md | 112 -- .../data_integration/rdbms-data-service.md | 187 --- .../examples/data_integration/request-box.md | 168 --- .../data_integration/swagger-data-services.md | 108 -- .../endpoint-error-handling.md | 275 ---- .../mtom-swa-with-endpoints.md | 186 --- .../endpoint_examples/reusing-endpoints.md | 30 - .../using-address-endpoints.md | 76 -- ...sing-dynamic-recepient-list-endpoints-1.md | 66 - ...sing-dynamic-recepient-list-endpoints-2.md | 82 -- .../using-failover-endpoints.md | 204 --- .../endpoint_examples/using-http-endpoints.md | 48 - .../using-loadbalancing-endpoints.md | 133 -- .../using-static-recepient-list-endpoints.md | 76 -- .../using-websocket-endpoints.md | 114 -- .../endpoint_examples/using-wsdl-endpoints.md | 122 -- ...ssing_windows_share_using_vfs_transport.md | 139 -- .../mailto-transport-examples.md | 179 --- .../file-processing/vfs-transport-examples.md | 106 -- .../hl7-examples/acknowledge_hl7_messages.md | 153 --- .../hl7-examples/file_transfer_using_hl7.md | 177 --- .../hl7-examples/hl7_proxy_service.md | 42 - .../file-inbound-endpoint.md | 106 -- .../inbound-endpoint-hl7-protocol-auto-ack.md | 64 - .../inbound-endpoint-http-protocol.md | 93 -- .../inbound-endpoint-https-protocol.md | 114 -- .../inbound-endpoint-jms-protocol.md | 69 - .../inbound-endpoint-kafka.md | 108 -- .../inbound-endpoint-mqtt-protocol.md | 59 - .../inbound-endpoint-rabbitmq-protocol.md | 84 -- .../inbound-endpoint-secured-websocket.md | 140 -- .../inbound-endpoint-with-registry.md | 55 - .../examples/integrating-mi-with-si.md | 103 -- .../jms_examples/consume-produce-jms.md | 120 -- .../examples/jms_examples/consuming-jms.md | 314 ----- ...tecting-repeatedly-redelivered-messages.md | 236 ---- .../jms_examples/dual-channel-http-to-jms.md | 242 ---- .../guaranteed-delivery-with-failover.md | 202 --- .../examples/jms_examples/producing-jms.md | 119 -- .../publish-subscribe-with-jms.md | 166 --- .../jms_examples/quad-channel-jms-to-jms.md | 126 -- .../jms_examples/shared-topic-subscription.md | 316 ----- ...specifying-a-delivery-delay-on-messages.md | 286 ---- .../examples/json_examples/json-examples.md | 1205 ----------------- .../intro-message-stores-processors.md | 97 -- .../loadbalancing-with-message-processor.md | 113 -- .../securing-message-processor.md | 67 - .../using-jdbc-message-store.md | 139 -- .../using-jms-message-stores.md | 267 ---- .../using-message-forwarding-processor.md | 105 -- .../using-message-sampling-processor.md | 110 -- .../using-rabbitmq-message-stores.md | 145 -- .../json-to-soap-conversion.md | 267 ---- .../pox-to-json-conversion.md | 200 --- .../switching_between_fix_versions.md | 81 -- .../switching_between_http_and_msmq.md | 101 -- .../switching_from_fix_to_amqp.md | 47 - .../switching_from_fix_to_http.md | 100 -- 
...tching_from_ftp_listener_to_mail_sender.md | 81 -- .../switching_from_http_to_fix.md | 85 -- .../switching_from_https_to_jms.md | 84 -- .../switching_from_jms_to_http.md | 74 - .../switching_from_tcp_to_https.md | 92 -- .../switching_from_udp_to_https.md | 85 -- .../exposing-proxy-via-inbound.md | 119 -- .../introduction-to-proxy-services.md | 128 -- .../publishing-a-custom-wsdl.md | 122 -- .../securing-proxy-services.md | 145 -- .../move-msgs-to-dlq-rabbitmq.md | 66 - .../point-to-point-rabbitmq.md | 74 - .../rabbitmq_examples/pub-sub-rabbitmq.md | 95 -- .../request-response-rabbitmq.md | 91 -- .../requeue-msgs-with-errors-rabbitmq.md | 59 - .../retry-delay-failed-msgs-rabbitmq.md | 99 -- .../store-forward-rabbitmq.md | 110 -- .../local-registry-entries.md | 77 -- .../configuring-non-http-endpoints.md | 59 - .../enabling-rest-to-soap.md | 127 -- .../handling-non-matching-resources.md | 91 -- .../introduction-rest-api.md | 137 -- .../publishing-a-swagger-api.md | 68 - .../rest_api_examples/securing-rest-apis.md | 129 -- .../setting-https-status-codes.md | 103 -- .../setting-query-params-outgoing-messages.md | 133 -- .../rest_api_examples/special-cases.md | 8 - .../transforming-content-type.md | 210 --- .../routing_based_on_headers.md | 203 --- .../routing_based_on_payloads.md | 191 --- .../splitting_aggregating_messages.md | 141 -- .../injecting-messages-to-rest-endpoint.md | 64 - .../task-scheduling-simple-trigger.md | 75 - .../custom-sequences-with-proxy-services.md | 127 -- .../using-fault-sequences.md | 171 --- .../using-multiple-sequences.md | 259 ---- .../using-endpoint-templates.md | 151 --- .../using-sequence-templates.md | 246 ---- .../fix-transport-examples.md | 90 -- .../transport_examples/pub-sub-using-mqtt.md | 72 - .../tcp-transport-examples.md | 373 ----- .../examples/working-with-transactions.md | 509 ------- en/docs/integrate/integration-key-concepts.md | 140 -- en/docs/integrate/integration-overview.md | 601 -------- .../asynchronous-message-overview.md | 81 -- .../integration-use-case/connectors.md | 112 -- .../data-integration-overview.md | 47 - .../file-processing-overview.md | 42 - .../message-routing-overview.md | 90 -- .../protocol-switching-overview.md | 48 - .../scheduled-task-overview.md | 23 - .../service-orchestration-overview.md | 34 - 179 files changed, 24834 deletions(-) delete mode 100644 en/docs/integrate/api-led-integration.md delete mode 100644 en/docs/integrate/develop/advanced-development/applying-security-to-a-proxy-service.md delete mode 100644 en/docs/integrate/develop/advanced-development/applying-security-to-an-api.md delete mode 100644 en/docs/integrate/develop/advanced-development/changing-the-endpoint-of-deployed-service.md delete mode 100644 en/docs/integrate/develop/advanced-development/dynamic-user-authentication.md delete mode 100644 en/docs/integrate/develop/advanced-development/extend-role-based-filtering-for-ds.md delete mode 100644 en/docs/integrate/develop/advanced-development/using-swagger-for-apis.md delete mode 100644 en/docs/integrate/develop/create-data-services-configs.md delete mode 100644 en/docs/integrate/develop/create-datasources.md delete mode 100644 en/docs/integrate/develop/create-docker-project.md delete mode 100644 en/docs/integrate/develop/create-integration-project.md delete mode 100644 en/docs/integrate/develop/create-kubernetes-project.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/adding-connectors.md delete mode 100644 
en/docs/integrate/develop/creating-artifacts/creating-a-message-processor.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-a-message-store.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-a-proxy-service.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-an-api.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-an-inbound-endpoint.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-endpoint-templates.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-endpoints.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-registry-resources.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-reusable-sequences.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-scheduled-task.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/creating-sequence-templates.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/data-services/creating-data-services.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/data-services/creating-datasources.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/data-services/creating-input-validators.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/registry/creating-local-registry-entries.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/using_docker_secrets.md delete mode 100644 en/docs/integrate/develop/creating-artifacts/using_k8s_secrets.md delete mode 100644 en/docs/integrate/develop/creating-unit-test-suite.md delete mode 100644 en/docs/integrate/develop/customizations/creating-custom-inbound-endpoint.md delete mode 100644 en/docs/integrate/develop/customizations/creating-custom-mediators.md delete mode 100644 en/docs/integrate/develop/customizations/creating-custom-task-scheduling.md delete mode 100644 en/docs/integrate/develop/customizations/creating-new-connector.md delete mode 100644 en/docs/integrate/develop/customizations/creating-synapse-handlers.md delete mode 100644 en/docs/integrate/develop/debugging-mediation.md delete mode 100644 en/docs/integrate/develop/deploy-artifacts.md delete mode 100644 en/docs/integrate/develop/endpoint-trace-logs.md delete mode 100644 en/docs/integrate/develop/export_project.md delete mode 100644 en/docs/integrate/develop/exporting-artifacts.md delete mode 100644 en/docs/integrate/develop/generate-docker-image.md delete mode 100644 en/docs/integrate/develop/generate-service-catalog-metadata.md delete mode 100644 en/docs/integrate/develop/hot-deployment.md delete mode 100644 en/docs/integrate/develop/importing-artifacts.md delete mode 100644 en/docs/integrate/develop/importing-projects.md delete mode 100644 en/docs/integrate/develop/injecting-parameters.md delete mode 100644 en/docs/integrate/develop/installing-wso2-integration-studio.md delete mode 100644 en/docs/integrate/develop/integration-development-kickstart.md delete mode 100644 en/docs/integrate/develop/intro-integration-development.md delete mode 100644 en/docs/integrate/develop/monitoring-api-level-logs.md delete mode 100644 en/docs/integrate/develop/monitoring-service-level-logs.md delete mode 100644 en/docs/integrate/develop/packaging-artifacts.md delete mode 100644 en/docs/integrate/develop/troubleshooting-wso2-integration-studio.md delete mode 100644 
en/docs/integrate/develop/using-embedded-micro-integrator.md delete mode 100644 en/docs/integrate/develop/using-remote-micro-integrator.md delete mode 100644 en/docs/integrate/develop/using-wire-logs.md delete mode 100644 en/docs/integrate/develop/working-with-service-catalog.md delete mode 100644 en/docs/integrate/develop/working-with-wso2-integration-studio.md delete mode 100644 en/docs/integrate/develop/wso2-integration-studio.md delete mode 100644 en/docs/integrate/examples/data_integration/batch-requesting.md delete mode 100644 en/docs/integrate/examples/data_integration/carbon-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/csv-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/data-input-validator.md delete mode 100644 en/docs/integrate/examples/data_integration/distributed-trans-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/json-with-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/mongo-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/nested-queries-in-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/odata-service.md delete mode 100644 en/docs/integrate/examples/data_integration/rdbms-data-service.md delete mode 100644 en/docs/integrate/examples/data_integration/request-box.md delete mode 100644 en/docs/integrate/examples/data_integration/swagger-data-services.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/endpoint-error-handling.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/mtom-swa-with-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/reusing-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-address-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-1.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-2.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-failover-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-http-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-loadbalancing-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-static-recepient-list-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-websocket-endpoints.md delete mode 100644 en/docs/integrate/examples/endpoint_examples/using-wsdl-endpoints.md delete mode 100644 en/docs/integrate/examples/file-processing/accessing_windows_share_using_vfs_transport.md delete mode 100644 en/docs/integrate/examples/file-processing/mailto-transport-examples.md delete mode 100644 en/docs/integrate/examples/file-processing/vfs-transport-examples.md delete mode 100644 en/docs/integrate/examples/hl7-examples/acknowledge_hl7_messages.md delete mode 100644 en/docs/integrate/examples/hl7-examples/file_transfer_using_hl7.md delete mode 100644 en/docs/integrate/examples/hl7-examples/hl7_proxy_service.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/file-inbound-endpoint.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-hl7-protocol-auto-ack.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-http-protocol.md delete mode 100644 
en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-https-protocol.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-jms-protocol.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-kafka.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-mqtt-protocol.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-rabbitmq-protocol.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket.md delete mode 100644 en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-with-registry.md delete mode 100644 en/docs/integrate/examples/integrating-mi-with-si.md delete mode 100644 en/docs/integrate/examples/jms_examples/consume-produce-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/consuming-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/detecting-repeatedly-redelivered-messages.md delete mode 100644 en/docs/integrate/examples/jms_examples/dual-channel-http-to-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/guaranteed-delivery-with-failover.md delete mode 100644 en/docs/integrate/examples/jms_examples/producing-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/publish-subscribe-with-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/quad-channel-jms-to-jms.md delete mode 100644 en/docs/integrate/examples/jms_examples/shared-topic-subscription.md delete mode 100644 en/docs/integrate/examples/jms_examples/specifying-a-delivery-delay-on-messages.md delete mode 100644 en/docs/integrate/examples/json_examples/json-examples.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/intro-message-stores-processors.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/loadbalancing-with-message-processor.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/securing-message-processor.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/using-jdbc-message-store.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/using-jms-message-stores.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/using-message-forwarding-processor.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/using-message-sampling-processor.md delete mode 100644 en/docs/integrate/examples/message_store_processor_examples/using-rabbitmq-message-stores.md delete mode 100644 en/docs/integrate/examples/message_transformation_examples/json-to-soap-conversion.md delete mode 100644 en/docs/integrate/examples/message_transformation_examples/pox-to-json-conversion.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_between_fix_versions.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_between_http_and_msmq.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_fix_to_amqp.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_fix_to_http.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_ftp_listener_to_mail_sender.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_http_to_fix.md delete mode 100644 
en/docs/integrate/examples/protocol-switching/switching_from_https_to_jms.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_jms_to_http.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_tcp_to_https.md delete mode 100644 en/docs/integrate/examples/protocol-switching/switching_from_udp_to_https.md delete mode 100644 en/docs/integrate/examples/proxy_service_examples/exposing-proxy-via-inbound.md delete mode 100644 en/docs/integrate/examples/proxy_service_examples/introduction-to-proxy-services.md delete mode 100644 en/docs/integrate/examples/proxy_service_examples/publishing-a-custom-wsdl.md delete mode 100644 en/docs/integrate/examples/proxy_service_examples/securing-proxy-services.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/move-msgs-to-dlq-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/point-to-point-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/pub-sub-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/request-response-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/requeue-msgs-with-errors-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/retry-delay-failed-msgs-rabbitmq.md delete mode 100644 en/docs/integrate/examples/rabbitmq_examples/store-forward-rabbitmq.md delete mode 100644 en/docs/integrate/examples/registry_examples/local-registry-entries.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/configuring-non-http-endpoints.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/enabling-rest-to-soap.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/handling-non-matching-resources.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/introduction-rest-api.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/publishing-a-swagger-api.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/securing-rest-apis.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/setting-https-status-codes.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/setting-query-params-outgoing-messages.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/special-cases.md delete mode 100644 en/docs/integrate/examples/rest_api_examples/transforming-content-type.md delete mode 100644 en/docs/integrate/examples/routing_examples/routing_based_on_headers.md delete mode 100644 en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md delete mode 100644 en/docs/integrate/examples/routing_examples/splitting_aggregating_messages.md delete mode 100644 en/docs/integrate/examples/scheduled-tasks/injecting-messages-to-rest-endpoint.md delete mode 100644 en/docs/integrate/examples/scheduled-tasks/task-scheduling-simple-trigger.md delete mode 100644 en/docs/integrate/examples/sequence_examples/custom-sequences-with-proxy-services.md delete mode 100644 en/docs/integrate/examples/sequence_examples/using-fault-sequences.md delete mode 100644 en/docs/integrate/examples/sequence_examples/using-multiple-sequences.md delete mode 100644 en/docs/integrate/examples/template_examples/using-endpoint-templates.md delete mode 100644 en/docs/integrate/examples/template_examples/using-sequence-templates.md delete mode 100644 en/docs/integrate/examples/transport_examples/fix-transport-examples.md delete mode 100644 en/docs/integrate/examples/transport_examples/pub-sub-using-mqtt.md 
delete mode 100644 en/docs/integrate/examples/transport_examples/tcp-transport-examples.md delete mode 100644 en/docs/integrate/examples/working-with-transactions.md delete mode 100644 en/docs/integrate/integration-key-concepts.md delete mode 100644 en/docs/integrate/integration-overview.md delete mode 100644 en/docs/integrate/integration-use-case/asynchronous-message-overview.md delete mode 100644 en/docs/integrate/integration-use-case/connectors.md delete mode 100644 en/docs/integrate/integration-use-case/data-integration-overview.md delete mode 100644 en/docs/integrate/integration-use-case/file-processing-overview.md delete mode 100644 en/docs/integrate/integration-use-case/message-routing-overview.md delete mode 100644 en/docs/integrate/integration-use-case/protocol-switching-overview.md delete mode 100644 en/docs/integrate/integration-use-case/scheduled-task-overview.md delete mode 100644 en/docs/integrate/integration-use-case/service-orchestration-overview.md diff --git a/en/docs/integrate/api-led-integration.md b/en/docs/integrate/api-led-integration.md deleted file mode 100644 index 4262eadc3f..0000000000 --- a/en/docs/integrate/api-led-integration.md +++ /dev/null @@ -1,56 +0,0 @@ -# API-led Integration - -WSO2 API Manager consists of an API management layer as well as an integration layer. While the integration layer (Micro Integrator) is used for running the integration logic, the API management layer is used for managing the APIs (with the integration logic) and making these APIs discoverable to developers. - -This allows you to implement an API-led integration strategy by developing the front-end APIs and integration APIs separately. There are two approaches to development when you implement API-led integration: Create the APIs first and then create the integration logic, or create the integration logic first followed by the APIs. - -## API-First development - -This development strategy enables you to first design the **experience APIs** in your API-led integration framework. That is, you will first create the front-end APIs based on business requirements and on how you want consumers to interact with your APIs. At this stage, you don't need to consider how the underlying integration logic is implemented. - -Integration developers will then use the Swagger definition of the experience APIs and implement the **process APIs** that contains the integration logic as well as the **system APIs** that directly interact with internal and external systems. - -api-first integration development - -The high-level steps are as follows: - -1. **Create the API proxy** - - Open the **Publisher** of WSO2 API-M and create a new API using a Swagger definition (OpenAPI specification). This will be your experience API. The production endpoint of your experience API should point to the **process API** or **system API** that is deployed in the integration layer (Micro Integrator runtime) of the API-M platform. - - !!! Tip - At the time of creating this API, you will not have the production endpoint ready. However, you can still publish this API using a prototype endpoint of your choice. You can even provide prototype implementations for the API as required. Once published as a prototype, you can test the API. - - You can manage the API proxy by applying business plans, rate-limiting policies, security mechanisms, etc. - -2. **Create the integration service** - - Download the Swagger definition of your API proxy from the **Publisher** and import it as a REST API to your WSO2 Integration Studio. 
The base integration sequences will be generated by default. You can now use the features in WSO2 Integration Studio and develop the integration logic that should apply to the API. - -3. **Connect the API proxy and integration service** - - Once you have created the integration service and tested it, you can create a CApp and deploy it in the Micro Integrator of WSO2 API-M and start the Micro Integrator. - -## Integration-First development - -With this development strategy, integration developers will first create the **process APIs** (with the integration logic) or the **system APIs**. These integration services are published to the service catalog in the API management layer. API creators will then convert these integration APIs to **experience APIs**. - -api-first integration development - -The high-level steps are as follows: - -1. **Create the integration service** - - Create a REST API artifact and define the integration logic using WSO2 Integration Studio. Note that the resources that are required for publishing the integration service to the API management layer are generated. You need to update this metadata by providing the connection parameters of your API-M runtime. - -2. **Publish the integration service** - - Once you have created the integration service and deployed it in the Micro Integrator, you only need to start the two servers (API-M server and the Micro Integrator server). Note that the API-M server should be started before the Micro Integrator. The Service Catalog client in the Micro Integrator publishes the integration services to the API-M layer during server startup. - -3. **Create the API proxy** - - Open the **Publisher** and see that the integration service is listed in the service catalog. You can now create an API proxy for this integration service from the Publisher. This will be the experience API of your integration service. - - Because the integration service is already running in the integration layer, the production endpoint of your experience API is already updated to connect with the integration layer. - - You can now manage the API proxy by applying business plans, rate-limiting policies, security policies, etc., and then expose the API to consumers from the API store (marketplace). \ No newline at end of file diff --git a/en/docs/integrate/develop/advanced-development/applying-security-to-a-proxy-service.md b/en/docs/integrate/develop/advanced-development/applying-security-to-a-proxy-service.md deleted file mode 100644 index c7e86b60d8..0000000000 --- a/en/docs/integrate/develop/advanced-development/applying-security-to-a-proxy-service.md +++ /dev/null @@ -1,181 +0,0 @@ -# Applying Security to a Proxy Service - -Follow the instructions below to apply security to a proxy service via WSO2 Integration Studio: - -## Prerequisites - -Be sure to [configure a user store]({{base_path}}/install-and-setup/setup/mi-setup/user_stores/setting_up_a_userstore) for the Micro Integrator and add the required users and roles. - -## Step 1 - Create the security policy file - -Follow the instructions given below to create a **WS-Policy** resource in your registry project. This will be your security policy file. - -1. Create a [registry resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project). - -2. Right-click on the registry resource project in the left navigation panel, click **New**, and then click **Registry Resource**. - - The **New Registry Resource** window appears. 
- - [![Click registry resource menu]({{base_path}}/assets/img/integrate/apply-security/119130870/119130887.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130887.jpg) - -3. Select **From existing template** and click **Next**. - - [![Registry resources artifact creation options]({{base_path}}/assets/img/integrate/apply-security/119130870/119130886.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130886.jpg) - -4. Enter a resource name and select the **WS-Policy** template along with the preferred registry path. - - [![Registry resource name]({{base_path}}/assets/img/integrate/apply-security/119130870/119130885.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130885.jpg) - - [![Registry resource details]({{base_path}}/assets/img/integrate/apply-security/119130870/119130884.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130884.jpg) - -5. Click **Finish**. - - The policy file is now listed in the project explorer as shown below. - - [![Policy file in project explorer]({{base_path}}/assets/img/integrate/apply-security/119130870/119130883.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130883.jpg) - -6. Double-click on the policy file to open the file. - - Note that you get a **Design View** and **Source View** of the policy. - -7. Let's use the **Design View** to enable the required security scenario. - - For example, enable the **Sign and Encrypt** security scenario as shown below. - - !!! Tip - Click the icon next to the scenario to get details of the scenario. - - [![Sign and Encrypt security scenario]({{base_path}}/assets/img/integrate/apply-security/119130870/119130882.jpg){: style=width:90%}]({{base_path}}/assets/img/integrate/apply-security/119130870/119130882.jpg) - -8. You can also provide encryption properties, signature properties, and advanced rampart configurations as shown below. - - **Encryption/Signature Properties** - - [![Encryption/Signature Properties]({{base_path}}/assets/img/integrate/apply-security/119130870/119130890.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130890.jpg) - - **Rampart Properties** - - [![Rampart Properties]({{base_path}}/assets/img/integrate/apply-security/119130870/119130889.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130889.jpg) - - !!! Info - - Change the tokenStoreClass in the policy file to `org.wso2.micro.integrator.security.extensions.SecurityTokenStore` - - - Replace ServerCrypto class with `org.wso2.micro.integrator.security.util.ServerCrypto` if present. - - - -## Step 2 - Add the security policy to the proxy service - -1. Add a proxy service to your workspace. - - You can do either one of the following actions for this purpose. - - - [Create a new proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) - - [Import an existing proxy service]({{base_path}}/integrate/develop/importing-artifacts) - -2. Double-click the proxy service on the project explorer to open the file and click on the service on design view. - -3. Select the **Security Enabled** property in the **Properties** tab. - - [![Enable Security]({{base_path}}/assets/img/integrate/apply-security/119130870/119130879.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130879.jpg) - -4. Select the **Browse** icon for the **Service Policies** field. In the dialog box that opens, create a new record and click the **Browse** icon to open the **Resource Key** dialog as shown below. 
- - [![Resource Key dialog box]({{base_path}}/assets/img/integrate/apply-security/119130870/119130877.jpg){: style=width:80%}]({{base_path}}/assets/img/integrate/apply-security/119130870/119130877.jpg) - -5. Click **workspace**, to add the security policy from the current workspace. You can select the path to the `sample_policy.xml` file that you created in the previous steps. - - [![Add the security policy]({{base_path}}/assets/img/integrate/apply-security/119130870/119130876.jpg)]({{base_path}}/assets/img/integrate/apply-security/119130870/119130876.jpg) - -6. Save the proxy service file. - -## Step 3 - Package the artifacts - -[Package the artifacts into a composite application project]({{base_path}}/integrate/develop/packaging-artifacts). - -## Step 4 - Build and run the artifacts - -[Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run). - -## Step 5 - Testing the service - -Create a Soap UI project with the relevant security settings and then send the request to the hosted service. - -### General guidelines on testing with SOAP UI - -1. Create a "SOAP Project" in SOAP UI using the WSDL URL of the proxy service. - - Example: `http://localhost:8280/services/SampleProxy?wsdl` - - - -2. Double click on the created SOAP project, click on **WS-Security-Configuration**, **Keystores**, and add the WSO2 keystore. - - - -3. Enter the keystore password for the keystore configuration. - -4. Click on **Outgoing WS-Security Configuration**, and add a new policy by specifying a name. - - The name can be anything. - - - -5. Add the required WSS entries for the created configuration. - - What you need add will vary according to the policy you are using. The explanation about adding three main sections is given below. - - - **Adding a Signature** - - Adding a Signature - - - **Adding a Timestamp** - - Adding a Timestamp - - - **Adding an Encryption** - - Adding an Encryption - - !!! Note - The order of the WS entries matters. So always add the above one after the other. If you are adding only two sections, you need to maintain the order. - -6. Specify the created WS-policy under **Outgoing WSS** at the request **Authorization**. - - Specify the created WS-policy - -7. Invoke the Proxy Service. - -!!! Info - - When defining the Outgoing WS-Security Configuration, you need to pick the WS entries based on your WS policy. - - Eg: - - - A Non Repudiation policy needs only Timestamp and Signature. - - A Confidentiality policy needs all three: Timestamp, Signature and Encryption. - - You do not need to provide an Outgoing WS-Security Configuration for a Username Token policy. Providing the basic auth configuration is enough. - - diff --git a/en/docs/integrate/develop/advanced-development/applying-security-to-an-api.md b/en/docs/integrate/develop/advanced-development/applying-security-to-an-api.md deleted file mode 100644 index 78d67bb9ee..0000000000 --- a/en/docs/integrate/develop/advanced-development/applying-security-to-an-api.md +++ /dev/null @@ -1,165 +0,0 @@ -# Applying Security to an API - -## Using a Basic Auth handler -A Basic Authentication handler is enabled in the Micro Integrator by default. See the example on [securing an API with basic auth]({{base_path}}/integrate/examples/rest_api_examples/securing-rest-apis). - -## Using a custom basic auth handler - -If required, you can implement a custom basic auth handler (instead of the default handler explained above). 
The following example of a primitive security handler serves as a template that you can use to write your own security handler to secure an API.

### Prerequisites

**Before you begin**, be sure to [configure a user store]({{base_path}}/install-and-setup/setup/mi-setup/user_stores/setting_up_a_userstore) for the Micro Integrator and add the required users and roles.

### Creating the custom handler

The custom Basic Auth handler in this sample simply verifies whether the request uses the username `admin` and the password `admin`. Following is the code for this handler:

```java
package org.wso2.rest;

import org.apache.commons.codec.binary.Base64;
import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.core.axis2.Axis2Sender;
import org.apache.synapse.rest.Handler;

import java.util.Map;

public class BasicAuthHandler implements Handler {

    public void addProperty(String s, Object o) {
        // No handler properties are used by this sample.
    }

    public Map getProperties() {
        // No handler properties are used by this sample.
        return null;
    }

    public boolean handleRequest(MessageContext messageContext) {

        org.apache.axis2.context.MessageContext axis2MessageContext
                = ((Axis2MessageContext) messageContext).getAxis2MessageContext();
        Object headers = axis2MessageContext.getProperty(
                org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);

        if (headers != null && headers instanceof Map) {
            Map headersMap = (Map) headers;
            if (headersMap.get("Authorization") == null) {
                // No credentials provided: challenge the client with a 401 response.
                headersMap.clear();
                axis2MessageContext.setProperty("HTTP_SC", "401");
                headersMap.put("WWW-Authenticate", "Basic realm=\"WSO2 ESB\"");
                axis2MessageContext.setProperty("NO_ENTITY_BODY", new Boolean("true"));
                messageContext.setProperty("RESPONSE", "true");
                messageContext.setTo(null);
                Axis2Sender.sendBack(messageContext);
                return false;
            } else {
                String authHeader = (String) headersMap.get("Authorization");
                // Strip the "Basic " prefix to get the Base64-encoded credentials.
                String credentials = authHeader.substring(6).trim();
                if (processSecurity(credentials)) {
                    return true;
                } else {
                    // Invalid credentials: reject the request with a 403 response.
                    headersMap.clear();
                    axis2MessageContext.setProperty("HTTP_SC", "403");
                    axis2MessageContext.setProperty("NO_ENTITY_BODY", new Boolean("true"));
                    messageContext.setProperty("RESPONSE", "true");
                    messageContext.setTo(null);
                    Axis2Sender.sendBack(messageContext);
                    return false;
                }
            }
        }
        return false;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return true;
    }

    public boolean processSecurity(String credentials) {
        String decodedCredentials = new String(new Base64().decode(credentials.getBytes()));
        String userName = decodedCredentials.split(":")[0];
        String password = decodedCredentials.split(":")[1];
        return "admin".equals(userName) && "admin".equals(password);
    }
}
```

You can build the project (`mvn clean install`) for this handler by accessing its source from here: https://github.com/wso2/product-esb/tree/v5.0.0/modules/samples/integration-scenarios/starbucks_sample/BasicAuth-handler

!!! Note
    When building the sample from the source, ensure that you update `pom.xml` with the online repository.
    To do this, add the following section before the `<dependencies>` tag in `pom.xml`:

    ```xml
    <repositories>
        <repository>
            <id>wso2-nexus</id>
            <name>WSO2 internal Repository</name>
            <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
            <releases>
                <enabled>true</enabled>
                <updatePolicy>daily</updatePolicy>
                <checksumPolicy>ignore</checksumPolicy>
            </releases>
        </repository>
        <repository>
            <id>wso2-maven2-repository</id>
            <name>WSO2 Maven2 Repository</name>
            <url>http://dist.wso2.org/maven2</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
            <releases>
                <enabled>true</enabled>
                <updatePolicy>never</updatePolicy>
                <checksumPolicy>fail</checksumPolicy>
            </releases>
        </repository>
    </repositories>
    ```

Alternatively, you can download the JAR file from the following location, copy it to the `MI_HOME/lib` directory, and restart the Micro Integrator: https://github.com/wso2/product-esb/blob/v5.0.0/modules/samples/integration-scenarios/starbucks_sample/bin/WSO2-REST-BasicAuth-Handler-1.0-SNAPSHOT.jar

### Creating the REST API

Add the handler to the REST API:

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
   <resource uri-template="/view/{symbol}" methods="GET">
      <inSequence>
         <payloadFactory>
            <format>
               <m0:getQuote xmlns:m0="http://services.samples">
                  <m0:request>
                     <m0:symbol>$1</m0:symbol>
                  </m0:request>
               </m0:getQuote>
            </format>
            <args>
               <arg expression="get-property('uri.var.symbol')"/>
            </args>
         </payloadFactory>
         <header name="Action" value="urn:getQuote"/>
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </resource>
   <!-- Engage the custom Basic Auth handler implemented above. -->
   <handlers>
      <handler class="org.wso2.rest.BasicAuthHandler"/>
   </handlers>
</api>
```

You can now send a request to the secured API.
diff --git a/en/docs/integrate/develop/advanced-development/changing-the-endpoint-of-deployed-service.md b/en/docs/integrate/develop/advanced-development/changing-the-endpoint-of-deployed-service.md
deleted file mode 100644
index 8140fe7150..0000000000
--- a/en/docs/integrate/develop/advanced-development/changing-the-endpoint-of-deployed-service.md
+++ /dev/null
@@ -1,227 +0,0 @@
# Changing the Endpoint of a Deployed Proxy Service

The sections below describe how you can change the endpoint reference of a deployed proxy service without changing its own configuration. In this scenario, you have two endpoints that manage two environments (i.e., Dev and QA). The endpoint URLs for the services hosted in the Dev and QA environments respectively are as follows:

- Dev environment: http://localhost:8280/services/echo
- QA environment: http://localhost:8281/services/echo

## Creating the Endpoints

You need to create two Endpoint artifacts to represent the Dev and QA environments respectively. Follow the steps given below.

1. Create two ESB config projects as given below.
    | Project Name | Description |
    |--------------|-------------|
    | HelloWorldDevResources | The ESB config project that will store the Endpoint artifact for the Dev environment. |
    | HelloWorldQAResources | The ESB config project that will store the Endpoint artifact for the QA environment. |
-2. Create two Endpoint artifacts in two projects with the following configurations: - - - HelloWorldDevResources project - - - - - - - - - - - - - - - - - -
        | Endpoint Parameter | Value |
        |--------------------|-------|
        | Endpoint Name | HelloWorldEP |
        | Endpoint Type | Address Endpoint |
        | Address URL | http://localhost:8280/services/echo |

    - HelloWorldQAResources project
        | Endpoint Parameter | Value |
        |--------------------|-------|
        | Endpoint Name | HelloWorldEP |
        | Endpoint Type | Address Endpoint |
        | Address URL | http://localhost:8281/services/echo |
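The following is a minimal sketch of the `HelloWorldEP.xml` artifact that these settings generate for the Dev environment; the QA artifact is identical except for the port. This sketch assumes the standard Synapse address endpoint syntax, so the artifact generated by WSO2 Integration Studio is authoritative:

```xml
<endpoint name="HelloWorldEP" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Dev environment; the QA endpoint uses port 8281 instead -->
    <address uri="http://localhost:8280/services/echo"/>
</endpoint>
```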
## Creating the Proxy Service

1. Create an ESB Config project named **HelloWorldServices**.
2. Create a proxy service in the HelloWorldServices project with the following configurations:

    | Parameter | Value |
    |--------------------|----------------------------------------------------------------------------------------------------|
    | Proxy Service Name | HelloWorldProxy |
    | Proxy Service Type | Select Pass Through Proxy |
    | Endpoint | Select HelloWorldEP (You need to select **Predefined Endpoint** from the endpoint options listed.) |

The project setup is now complete.

## Creating the composite application projects

Create two composite application projects to package the QA artifacts and Dev artifacts separately. The proxy service and the Dev endpoint go in one CApp, and the proxy service and the QA endpoint go in another CApp, as shown below.

See the instructions on packaging artifacts into CApps.
| Environment | CApp Name | Artifacts Included |
|-------------|-----------|--------------------|
| Dev | HelloWorldDevCApp | HelloWorldServices project and the HelloWorldDevResources project. |
| QA | HelloWorldQACApp | HelloWorldServices project and the HelloWorldQAResources project. |
Your CApp projects are now ready to be deployed to the Micro Integrator.

## Deploying the Dev composite application

If you have an instance of WSO2 Micro Integrator set up as your Dev environment, deploy the HelloWorldDevCApp in the server.

## Testing the Dev environment

Use the following request to invoke the service:

```xml
<body>
    <p:echoInt xmlns:p="http://echo.services.core.carbon.wso2.org">
        <in>50</in>
    </p:echoInt>
</body>
```

You will view the response from the **HelloWorldProxy**.

## Changing the endpoint reference

Follow the steps below to change the endpoint reference of the **HelloWorldProxy** you deployed, to point it to the QA environment, without changing its configuration.

1. Set a port offset by changing the following configuration in the `deployment.toml` file.

    ```toml
    [server]
    offset=2
    ```
2. Undeploy the **HelloWorldDevCApp**, deploy the **HelloWorldQACApp**, and restart the Micro Integrator.

## Testing the QA environment

Use the following request to invoke the service:

```xml
<body>
    <p:echoInt xmlns:p="http://echo.services.core.carbon.wso2.org">
        <in>100</in>
    </p:echoInt>
</body>
```

You will view the response from the **HelloWorldProxy**.

## Changing an endpoint reference

Once the endpoint has been created, you can update it using any one of the options listed below. The options below describe how you can update the endpoint value for the QA environment.

### Option 1: Using WSO2 Integration Studio

1. Open the `HelloWorldEP.xml` file under the **HelloWorldQAResources** project and replace the URL with the QA URL.
2. Save all changes.

Your CApp can now be deployed to your QA Micro Integrator.

### Option 2: From Command Line

1. Open a Terminal window and navigate to the `HelloWorldQAResources/src/main/synapse_config/endpoints/HelloWorldEP.xml` file.
2. Edit the HelloWorldEP.xml (e.g., using gedit or vi) under HelloWorldQAResources and replace the URL with the QA one.

    ```xml
    ...
    <address uri="http://localhost:8281/services/echo"/>
    ...
    ```

3. Navigate to `HelloWorldQAResources` and build the ESB Config project using the following command:

    ```
    mvn clean install
    ```

4. Navigate to `HelloWorldQACApp` and build the CApp project using the following command:

    ```
    mvn clean install
    ```

5. The resulting CAR file can be deployed directly to the QA ESB server. For details, see [Running the ESB profile via WSO2 Integration Studio](https://docs.wso2.com/display/EI650/Running+the+Product#RunningtheProduct-RunningtheESBprofileviaWSO2IntegrationStudio).

!!! Note
    - To build the projects using the above commands, you need an active network connection.
    - Creating a Maven Multi Module project that contains the above projects allows you to build all the projects in one go by simply building the parent Maven Multi Module project.

### Option 3: Using a Script

Alternatively, you can have a CAR file with dummy values for the endpoint URLs and use a customized shell script or batch script. The script would need to do the following:

1. Extract the CAR file.
2. Edit the URL values.
3. Re-create the CAR file with the new values.

The resulting CAR file can be deployed directly to the QA ESB server.
diff --git a/en/docs/integrate/develop/advanced-development/dynamic-user-authentication.md b/en/docs/integrate/develop/advanced-development/dynamic-user-authentication.md
deleted file mode 100644
index 5e74e408be..0000000000
--- a/en/docs/integrate/develop/advanced-development/dynamic-user-authentication.md
+++ /dev/null
@@ -1,143 +0,0 @@
# Dynamic User Authentication

Dynamic user authentication allows you to authenticate database users
dynamically for each data service call. This is implemented using a
mapping between the server users and the database users. This mapping
can be either:

- Static inside the data service configuration.
- Provided at runtime through a Java class that implements the
  `org.wso2.micro.integrator.dataservices.core.auth.DynamicUserAuthenticator` interface.

## Static configuration

You can specify a configuration as shown in the following example in the
datasource configuration section of the data service.

```xml
<config id="default">
    <property name="driverClassName">org.h2.Driver</property>
    <property name="url">jdbc:h2:file:./samples/database/DATA_SERV_SAMP</property>
    <property name="username">wso2ds</property>
    <property name="password">wso2ds</property>
    <property name="dynamicUserAuthMapping">
        <configuration>
            <entry request="admin">
                <username>wso2ds</username>
                <password>wso2ds</password>
            </entry>
            <entry request="user1">
                <username>dbuser1</username>
                <password>dbpass1</password>
            </entry>
            <entry request="*">
                <username>guest</username>
                <password>guest</password>
            </entry>
        </configuration>
    </property>
</config>
....
```

The configuration above maps the two Carbon users to specific database
credentials and the rest of the users to a different username/password
pair. The `dynamicUserAuthMapping` property in
`/configuration/entry/@request` represents the incoming
Carbon user, and the username and password elements that follow
represent the mapped database credentials.

For dynamic user authentication to work, security must be enabled in the
data service through `UsernameToken` for user
authentication. If user authentication is not available when a
`dynamicUserAuthMapping` section is specified, it maps
to the request="\*" scenario by default.

## Runtime configuration

In the runtime mode, the property
`dynamicUserAuthClass` must be specified instead of the
datasource configuration property
`dynamicUserAuthMapping`. The
`dynamicUserAuthClass` property's value must have the
fully-qualified class name of a Java class that implements the interface
`org.wso2.micro.integrator.dataservices.core.auth.DynamicUserAuthenticator`.
The interface is as follows:

```java
public interface DynamicUserAuthenticator {
    /**
     * This method is used to lookup a username/password pair given a source username.
     * @param user The source username
     * @return A two element String array containing the username and password respectively
     * @throws DataServiceFault
     */
    String[] lookupCredentials(String user) throws DataServiceFault;

}
```

The following example code snippet shows an implementation of a dynamic
user authenticator class.

```java
package samples;

import org.wso2.micro.integrator.dataservices.core.DataServiceFault;
import org.wso2.micro.integrator.dataservices.core.auth.DynamicUserAuthenticator;

public class MyDynAuthClass implements DynamicUserAuthenticator {
    @Override
    public String[] lookupCredentials(String user) throws DataServiceFault {
        if ("admin".equals(user)) {
            return new String[] {"wso2ds", "wso2ds"};
        } else if ("user1".equals(user)) {
            return new String[] {"dbuser1", "dbpass1"};
        } else if ("user2".equals(user)) {
            return new String[] {"dbuser2", "dbpass2"};
        } else {
            throw new DataServiceFault("The user '" + user + "' is not supported in invoking the target data service");
        }
    }
}
```

The `lookupCredentials` method takes in the request
user and returns the database username/password in a String array. The
dbs file configuration format is as follows:

```xml
<config id="default">
    <property name="driverClassName">org.h2.Driver</property>
    <property name="url">jdbc:h2:file:./samples/database/DATA_SERV_SAMP</property>
    <property name="username">wso2ds</property>
    <property name="password">wso2ds</property>
    <property name="dynamicUserAuthClass">samples.MyDynAuthClass</property>
....
```

## Dynamic user lookup order of precedence

In a single datasource configuration, both the static and the runtime
configurations can be available at once. The server processes them as
follows:

- Higher precedence goes to the static mapping when initially looking up
  the credentials. The "\*" request setting is ignored in the first
  pass.
- If a request user/database credential mapping cannot be found, the
  secondary runtime Java class implementation is used to look up the
  user.
- If the previous option also fails, the server returns to the
  primary static mapping and processes the "\*" request mapping.
- The data service request returns an error only if all of the above
  options fail.

## Use of external datasources

When using datasources that are not inline, the
datasources must be specified in a way that their connections can be
created for selected users. Specifically, in Carbon datasources, enable
the `alternateUsernameAllowed` setting for dynamic user
authentication to function.
\ No newline at end of file
diff --git a/en/docs/integrate/develop/advanced-development/extend-role-based-filtering-for-ds.md b/en/docs/integrate/develop/advanced-development/extend-role-based-filtering-for-ds.md
deleted file mode 100644
index 70b9e9c9c0..0000000000
--- a/en/docs/integrate/develop/advanced-development/extend-role-based-filtering-for-ds.md
+++ /dev/null
@@ -1,144 +0,0 @@
# Filtering Responses by User Role

When you work with data services, you can control access to sensitive
data for specific user roles. This facility is called **Role-based
content filtering**. It filters data so that specific data sections are
only accessible to a given type of user.

## Define user role-based result filtering

Follow the steps below to filter a data service according to a specific user role.

1. [Secure the dataservice]({{base_path}}/integrate/develop/creating-artifacts/data-services/securing-data-services) using `UsernameToken` for user authentication.
2. Add the `requiredRoles` attribute to the output mapping with the comma-separated list of user roles. (The role names below are examples; use the roles defined in your own user store.)

    ```xml
    <query id="select_all_Employees_query" useConfig="default">
        <sql>select EmployeeNumber,FirstName,Email from Employees</sql>
        <result element="Employees" rowName="Employee">
            <element column="EmployeeNumber" name="EmployeeNumber" xsdType="string"/>
            <element column="FirstName" name="FirstName" xsdType="string"/>
            <element column="Email" name="Email" xsdType="string" requiredRoles="admin,manager"/>
        </result>
    </query>
    ```

## Extend role-based filtering via a custom authorization provider

In the Micro Integrator, you can filter content to specific user roles by taking roles from
the [user store]({{base_path}}/install-and-setup/setup/mi-setup/setup/user_stores/setting_up_a_userstore) connected to the server. However, this extension provides
the flexibility for you to develop data services by plugging in a
mechanism that provides those role details from any preferred external
source (e.g., a third-party identity provider, a JWT, etc.). Hence, in
data integration scenarios where data needs to be filtered based on the
user who requests that data, follow the steps given below to plug in a custom
authorization provider.

1. Create a Java project and create a Java class (e.g.,
   `SampleAuthProvider`), which implements the
   `org.wso2.micro.integrator.dataservices.core.auth.AuthorizationProvider`
   interface, and add the methods below.

    | Method | Description |
    |--------|-------------|
    | `String getUsername(MessageContext msgContext)` | This should return the user name from the message context, which contains all the HTTP request details. |
    | `String[] getUserRoles(MessageContext msgContext)` | This should return the roles of the user returned from the `getUsername` method. These can be extracted from a JWT, a third-party identity provider, etc. |
    | `void init(Map authorizationProps)` | This initializes the auth provider. For example, if you are using a third-party identity provider to retrieve roles, you can pass the required parameters (such as endpoint URLs and tokens) to the provider through this method and do the required initializations within this method. |

    **SampleAuthProvider Class**

    ```java
    import java.util.Map;

    import org.apache.axis2.context.MessageContext;
    import org.wso2.micro.integrator.dataservices.core.DataServiceFault;
    import org.wso2.micro.integrator.dataservices.core.auth.AuthorizationProvider;

    public class SampleAuthProvider implements AuthorizationProvider {
        // Returns hard-coded roles for every user; a real implementation would
        // look these up from a JWT, an identity provider, or another source.
        public String[] getUserRoles(MessageContext messageContext) throws DataServiceFault {
            String[] roles = {"user", "manager"};
            return roles;
        }

        public String[] getAllRoles() throws DataServiceFault {
            String[] roles = {"admin", "client", "user", "manager"};
            return roles;
        }

        public String getUsername(MessageContext messageContext) throws DataServiceFault {
            return "saman";
        }

        public void init(Map map) throws DataServiceFault {
        }
    }
    ```

2. Build the project and place the JAR file in the
   `MI_HOME/lib/` directory.

3. Create the data service.

    !!! Tip
        When creating the data service:

        - Use the **Authorization Provider Class** that you created above.
        - When adding the output mapping, select the user roles out of the
          ones you defined when creating the Java class.

When you invoke the data service you created, you will view a response
as shown in the example below.

!!! Info
    Since the sample Java class above returns the hard-coded `{"user", "manager"}` roles, the response below returns only the rows those roles can view.

    ```xml
    <Employees xmlns="http://ws.wso2.org/dataservice">
        <Employee>
            <EmployeeNumber>4</EmployeeNumber>
            <FirstName>john</FirstName>
            <Address>12, seren street, TN</Address>
        </Employee>
        <Employee>
            <EmployeeNumber>1</EmployeeNumber>
            <FirstName>Tom</FirstName>
            <Address>34, baker str, London</Address>
        </Employee>
        <Employee>
            <EmployeeNumber>2</EmployeeNumber>
            <FirstName>Jack</FirstName>
            <Address>324, Vale str, PN</Address>
        </Employee>
        <Employee>
            <EmployeeNumber>3</EmployeeNumber>
            <FirstName>Allan</FirstName>
            <Address>23, St str, NW</Address>
        </Employee>
    </Employees>
- ``` - -You can extend this functionality to extract the required roles from the JWT -tokens or invoke third-party identity providers to fetch roles for role-based filtering in data services. diff --git a/en/docs/integrate/develop/advanced-development/using-swagger-for-apis.md b/en/docs/integrate/develop/advanced-development/using-swagger-for-apis.md deleted file mode 100644 index 2f09234f3a..0000000000 --- a/en/docs/integrate/develop/advanced-development/using-swagger-for-apis.md +++ /dev/null @@ -1,70 +0,0 @@ -# Using Swagger Documents - -API documentation is important to guide the users on what they can do using specific APIs. - -When you create a REST API artifact or a RESTful data service from WSO2 Integration Studio, a default Swagger 3.0 (OpenAPI) definition is generated. For [REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) artifacts, you can also attach an additional custom Swagger definition for the API. - -## Swagger documents of API artifacts - -If your REST API is deployed, copy the following URLs (with your API details) to your browser: - -!!! Note - - If you have a custom Swagger definition attached to the API, the following URLs will return the custom definition and not the default Swagger definition of the API. - - Both swagger 2.0 and OpenAPI definitions are supported as the custom swagger definition. - - -- To access the `swagger.json` file, use the following URL: - - ```bash - http://:8290/?swagger.json - ``` - - **Example**: - ```bash - http://localhost:8290/HealthcareAPI?swagger.json - ``` - -- To access the `swagger.yaml` file, use the following URL: - - ```bash - http://:8290/?swagger.yaml - ``` - - **Example**: - ```bash - http://localhost:8290/HealthcareAPI?swagger.yaml - ``` - -!!! Tip - - Replace `` with `localhost`. If you are using a public IP, the respective IP address or domain needs to be specified. - - Replace `` with your API's name. The API name is case sensitive. - -## Swagger documents of RESTful data services - -If your RESTful data service is deployed, copy the following URLs to your browser: - -- To access the `swagger.json` file, use the following URL: - - ```bash - http://:8290/?swagger.json - ``` - - **Example**: - ```bash - http://localhost:8290/RDBMSDataService?swagger.json - ``` - -- To access the `swagger.yaml` file, use the following URL: - - ```bash - http://:8290/?swagger.yaml - ``` - - **Example**: - ```bash - http://localhost:8290/RDBMSDataService?swagger.yaml - ``` - -!!! Tip - - Replace `` with `localhost`. If you are using a public IP, the respective IP address or domain needs to be specified. - - Replace `` with your service name. The service name is case sensitive. diff --git a/en/docs/integrate/develop/create-data-services-configs.md b/en/docs/integrate/develop/create-data-services-configs.md deleted file mode 100644 index 2911ee09de..0000000000 --- a/en/docs/integrate/develop/create-data-services-configs.md +++ /dev/null @@ -1,18 +0,0 @@ -# Create Data Services Configs - -Create this project directory to start creating data services (.dbs files) for exposing various datasources as a service.
-
    -
1. Open **WSO2 Integration Studio** and click **DS Project → Create New Data Service** in the **Getting Started** view.

2. In the dialog that opens, enter a module name and click **Next**.

3. Click **Finish** and see that the project is now listed in the project explorer.

You can now start managing data service artifacts using WSO2 Integration Studio.

diff --git a/en/docs/integrate/develop/create-datasources.md b/en/docs/integrate/develop/create-datasources.md
deleted file mode 100644
index 59c991f2c8..0000000000
--- a/en/docs/integrate/develop/create-datasources.md
+++ /dev/null
@@ -1,10 +0,0 @@
# Create Datasource Configs

Create this project directory to start creating datasources that you can expose through a data service.

1. Open **WSO2 Integration Studio** and click **DS Project → Create New Data Source** in the **Getting Started** view.

2. In the dialog that opens, enter a project name and click **Next**.

3. Click **Finish** and see that the project is now listed in the project explorer.

diff --git a/en/docs/integrate/develop/create-docker-project.md b/en/docs/integrate/develop/create-docker-project.md
deleted file mode 100644
index 17396df6f6..0000000000
--- a/en/docs/integrate/develop/create-docker-project.md
+++ /dev/null
@@ -1,227 +0,0 @@
# Creating Docker Exporter

Create a Docker Exporter if you want to deploy your integration solutions inside a Docker environment. This project directory allows you to package multiple [integration modules]({{base_path}}/integrate/develop/create-integration-project) into a single Docker image and then build and push to the Docker registries.

!!! note
    When using a Kubernetes/Docker exporter project, make sure to use Integration Studio 8.0.0 with the latest WUM update. For the MI 1.2.0 pack, change the `miVersion` of the `config-mapper-parser` plugin to `1.2.0/1631645406425` in the `pom.xml` file.

## Creating the Docker exporter

Follow the steps given below.

1. [Create a new integration project]({{base_path}}/integrate/develop/create-integration-project) and create a Docker Exporter by doing one of the following.

    1. As part of creating an integration project, you can select the **Docker Exporter** check box.

    2. You can right-click on an existing integration project and select **New** -> **Docker Exporter**.

2. In the **New Docker Exporter** dialog box that opens, enter a name for the Docker exporter and other parameters as shown below.

    Enter the following information:
    <table>
        <tr>
            <th>Parameter</th>
            <th>Description</th>
        </tr>
        <tr>
            <td>Docker Exporter Name</td>
            <td>Required. Give a name for the Docker project.</td>
        </tr>
        <tr>
            <td>Base Image Repository</td>
            <td>Required. Select the base Micro Integrator Docker image for your solution. Use one of the following options:
                <ul>
                    <li><b>wso2/wso2mi</b>: This is the community version of the Micro Integrator Docker image, which is stored in the public WSO2 Docker registry. This is selected by default.</li>
                    <li><b>docker.wso2.com</b>: This is the Micro Integrator Docker image that includes product updates. This image is stored in the private WSO2 Docker registry. Note that you need a valid WSO2 subscription to use the Docker image with updates.</li>
                    <li>You can also use a custom Docker image from a custom repository.</li>
                </ul>
                If you specify a Docker image from a private repository, note that you need to log in to your repository from a terminal before you build the image (as explained below).
            </td>
        </tr>
        <tr>
            <td>Base Image Tag</td>
            <td>Required. Specify the tag of the base image that you are using.</td>
        </tr>
        <tr>
            <td>Target Image Repository</td>
            <td>Required. The Docker repository to which you will later push this Docker image.
                <ul>
                    <li>If your repository is in Docker Hub, use the <code>docker_user_name/repository_name</code> format.</li>
                    <li>If you are using any other repository, use the <code>repository_url/repository_user_name/repository_name</code> format.</li>
                </ul>
                If required, you can update this information later when you build the Docker image or when you push the image to the relevant repository.
            </td>
        </tr>
        <tr>
            <td>Target Image Tag</td>
            <td>Required. Give a tag name for the Docker image.</td>
        </tr>
        <tr>
            <td>Environment Variables</td>
            <td>You can enter multiple environment variables as key-value pairs.</td>
        </tr>
    </table>
3. Optionally, click **Next** and configure Maven details for the Docker exporter.

4. Click **Finish**. The Docker exporter is created in the project explorer.
5. This step is only required if you already have a Docker image (in your local Docker repository) with the same name as the base image specified above.

    !!! Info
        In this scenario, WSO2 Integration Studio will first check if there is a difference in the two images before pulling the image specified in the **Base Image Repository** field. If the given base image is more updated, the existing image will be overwritten by this new image. Therefore, if you are currently using an older version, or if you have custom changes in your existing image, they will be replaced.

    To avoid your existing custom/older images being replaced, add the following property under **dockerfile-maven-plugin -> executions -> execution -> configurations** in the `pom.xml` file of your Docker Exporter project. This configuration will ensure that the base image will not be pulled when a Docker image already exists with the same name.

    ```xml
    <pullNewerImage>false</pullNewerImage>
    ```

## The Docker Exporter directory

Expand the **Docker Exporter** in the project explorer. See that the following folders and files are created:

| Directory | Description |
|-----------|-------------|
| Libs | This folder stores libraries that should be copied to the Docker image. During the build time, the libraries inside this directory will be copied to the image. |
| Resources | This folder stores additional files and resources that should be copied to the Docker image. During the build time, the resources inside this directory will be copied to the image. |
| deployment.toml | The product configuration file. |
| Dockerfile | The Dockerfile containing the build details. |
| pom.xml | The file for selecting the relevant composite exporters that should be included in the Docker image. This information is also used when you later build and push Docker images to the Docker registries. |
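For reference, a composite exporter selected for the image appears as a dependency in this `pom.xml` file. The following is a rough sketch of such an entry; the group ID, artifact ID, version, and dependency type here are illustrative assumptions, and the entries generated by WSO2 Integration Studio for your own projects are authoritative:

```xml
<dependencies>
    <!-- Hypothetical composite exporter dependency; Integration Studio
         generates the actual coordinates and type for your project. -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>HelloWorldCompositeExporter</artifactId>
        <version>1.0.0</version>
        <type>car</type>
    </dependency>
</dependencies>
```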
## Build and Push Docker images

Before you begin:

- Create your integration artifacts in an [ESB Config sub project]({{base_path}}/integrate/develop/create-integration-project/#sub-projects) and package the artifacts in a [Composite Exporter]({{base_path}}/integrate/develop/packaging-artifacts/#sub-projects). For example, see the HelloWorld sample given below.

    Integration artifacts for Docker

- Be sure to start your Docker instance before building the image. If Docker is not started, the build process will fail.

- If you are using a Micro Integrator Docker image from a private registry as your base image:

    1. Open a terminal and use the following command to log in to Docker:
        ```bash
        docker login -u username -p password
        ```
    2. In the next step, specify the name of the private Docker registry.

To build and push the Docker image:

!!! Note
    As an alternative, you can skip the steps given below and manually build and push the Docker images using Maven. Open a terminal, navigate to the Docker exporter, and execute the following command:

    ```bash
    mvn clean install -Dmaven.test.skip=true -Ddockerfile.username={username} -Ddockerfile.password={password}
    ```

    However, note that you need **Maven 3.5.2** or a later version when you build the Docker image manually (without using WSO2 Integration Studio).

1. Open the **pom.xml** file inside the Docker project and click **Refresh** on the top-right. Your composite application project with integration artifacts will be listed under **Dependencies** as follows:

    Docker Pom view

2. Select the composite exporters that you want to package inside the Docker image.
3. If required, you can update the **Target Repository** to which the image should be pushed and the **Target Tag**.
4. Save the POM file and click **Build** to start the Docker image build.
5. It will build the Docker image based on the Dockerfile and the Target details. When the image is created, the following message will display.

    Docker Build Success

6. Click **Push** to push the Docker image to your Docker registry. In the dialog box that opens, provide the details of your Docker registry.

    When the image is pushed to the registry, you will see the following message.

    Docker Push Success
diff --git a/en/docs/integrate/develop/create-integration-project.md b/en/docs/integrate/develop/create-integration-project.md
deleted file mode 100644
index b56af7ddfb..0000000000
--- a/en/docs/integrate/develop/create-integration-project.md
+++ /dev/null
@@ -1,178 +0,0 @@
# Creating an Integration Project

An integration project consists of one or several project directories. These directories store the various artifacts that you create for your integration sequence. An integration project can be created as a Maven Multi Module (MMM) project by default. This enables you to add ESB Configs, Composite Exporter, Registry Resources, Connector Exporter, Docker Exporter, and Kubernetes Exporter as sub-modules to the project.

An integration project is the recommended way of creating an “Integration Solution” as it simplifies the CICD workflow.

## Integration project

To create an integration project:

1. [Download](https://wso2.com/integration/integration-studio/) and [install WSO2 Integration Studio]({{base_path}}/integrate/develop/instaling-wso2-integration-studio).

2. Open WSO2 Integration Studio and click **New Integration Project** in the **Getting Started** view as shown below.
    New Integration Project

3. In the **New Integration Project** dialog box that opens, enter a name for your integration project. Select the relevant check boxes if you want to create **Registry Resources**, **Connector Exporter**, **Docker Exporter**, or **Kubernetes Exporter** in addition to the **ESB Configs** and **Composite Exporter**.

    Create a New Integration Project

## Sub projects

An integration project can consist of multiple sub-projects. So multiple small projects can exist under a single integration project, where each of these can be dependent on each other and can be grouped together. However, it is not necessary that all sub-projects in an integration project be dependent on every other sub-project.

Sub Project

To add sub-projects to an existing integration project, right-click the integration project and hover over **New** to see the available project creation options.

Add a New Sub Project

Once you create the new sub project, you can see this nested under your integration project folder in the project explorer.

The following table lists out the available projects that can be associated with an integration project.
<table>
    <tr>
        <th>Sub project</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>ESB Configs</td>
        <td>This project stores the ESB artifacts that are used when defining a mediation flow. This includes the addition of any Synapse artifacts to your integration project that enable the features of a typical ESB. The following are the Synapse artifacts that can be added to an integration flow:
            <ul>
                <li><b>Proxy</b>: This is a virtual service in the Micro Integrator that receives messages and processes them. It then delivers them to an external endpoint, where the actual web service is located.</li>
                <li><b>API</b>: A REST API is an endpoint that has a URL. This address specifies the context and resources that need to be accessed through an HTTP method or call such as GET, PUT, POST, or DELETE. Requests arrive at the input sequence, the Micro Integrator processes the message using mediators and forwards the message to the backend. The output sequence receives the backend's response, processes it, and forwards the message to the client.</li>
                <li><b>Inbound endpoints</b>: They can be configured dynamically without restarting the server. Messages move from the transport layer to the mediation layer without going through the Axis2 engine.</li>
                <li><b>Sequences</b>: Sequences are used in the proxy service and the REST APIs. Each sequence is a set of mediators where messages are processed.</li>
                <li><b>Mediator</b>: This is the processing unit or action that is performed on a message. For example, enriching a message, filtering it, sending it to an endpoint, or deleting it. Mediators can be customized.</li>
                <li><b>Scheduled Tasks</b>: This is code that is executed at a specific time. Tasks can also be customized.</li>
                <li><b>Endpoints</b>: They are destinations external to the Micro Integrator. An endpoint may be a service represented by a URL, a mailbox, a JMS queue, or a TCP socket. The same endpoint can be used with several transport protocols.</li>
                <li><b>Message Store/Message Processors</b>: This design pattern is used in integration when dealing with messages asynchronously (that is, when the client does not wait for the response). The message is stored in memory or on a drive; this is done by the Message Store. The Message Processor extracts the message from the queue, memory, or database and sends it to an endpoint. By using this pattern, the delivery of a message to the endpoint can be guaranteed, since it is only deleted from the Store when an endpoint receives the message correctly.</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td>Composite Exporter</td>
        <td>This project allows you to package all the artifacts (stored as sub-projects under the same integration project) into one composite application (C-APP). This C-APP can then be deployed in the Micro Integrator server.</td>
    </tr>
    <tr>
        <td>Registry Resources</td>
        <td>Create this project if you want to create registry resources for your mediation flow. You can later use these registry artifacts when you define your mediation sequences in the ESB config project. The registry has three components: local, config, and governance. Registry resources and metadata can be added into each component in the registry.</td>
    </tr>
    <tr>
        <td>Connector Exporter</td>
        <td>Create this project if you wish to use connectors in your mediation sequence (defined in the ESB config project). All connector artifacts need to be stored in a connector exporter module before packaging.</td>
    </tr>
    <tr>
        <td>Docker Exporter</td>
        <td>Create a Docker Exporter if you want to deploy your integration solutions inside a Docker environment. This project directory allows you to package multiple integration projects into a single Docker image and then build and push to the Docker registries. For more information on Docker-specific project creation, see Create Docker Project.</td>
    </tr>
    <tr>
        <td>Kubernetes Exporter</td>
        <td>A Kubernetes Exporter allows you to deploy your integration solutions in a Kubernetes environment. This module allows you to package multiple integration projects and modules into a single Docker image. Also, a file named integration_cr.yaml is generated, which can be used to carry out Kubernetes deployments based on the k8s-ei-operator. For more information on Kubernetes-specific project creation, see Create Kubernetes Project.</td>
    </tr>
</table>
## Maven Multi Module projects

The Maven Multi Module (MMM) integration project is the parent project in an integration solution and sub-projects can be added under this parent project. By default, an integration project is an MMM project unless you specify otherwise.

By building the parent MMM project, you can build all the sub-projects in the integration solution simultaneously. This allows you to seamlessly push your integration solutions to a CI/CD pipeline. Therefore, it is recommended as a best practice to create your Config project and other projects inside an MMM integration project.

This allows you to manage multiple projects such as Config projects, Composite Application projects, and Registry Resource projects as a single entity.

Although the recommended approach is to create an integration project, which essentially has all the functionality of an MMM project, you can also create an MMM project separately.

**To create the MMM project**:

1. Open **WSO2 Integration Studio** and click **New Maven Multi Module Project** in the **Getting Started** view.

2. In the **Maven Modules Creation Wizard** that opens, enter an artifact ID and other parameters as shown below. The artifact ID will be the name of your MMM project.

3. Click **Finish**. The MMM project is created in the project explorer.

Now you can create other projects inside the MMM project. For example, let's create a **Config** project and a **Composite Application** project.

You can create sub-projects under this parent MMM project.

## Moving sub projects to MMM project

You can import existing sub-projects (ESB Config project, Registry resource project, Composite project, etc.) into an existing Maven Multi Module Project (Integration Project).

Right-click the project and click **Import to Maven Multi Module**.

import to maven multi module

## Building selected MMM profiles

When you create an integration project, you have a parent MMM project with child modules (sub-projects). The MMM project in WSO2 Integration Studio now includes multiple Maven profiles. Therefore, you can build selected profiles instead of building the complete MMM project.

Maven profiles:

| Profile Name | Description |
|--------------|-------------|
| Solution | Builds the integration artifacts stored in the ESB Config sub project. |
| Docker | Builds the integration artifacts stored in the ESB Config and Docker sub projects. |
| Kubernetes | Builds the integration artifacts stored in the ESB Config and Kubernetes sub projects. |
To build a selected Maven profile:

!!! Note
    When you build a Docker or Kubernetes profile using this method, you need to have **Maven 3.5.2** or a later version installed.

1. Open a terminal and navigate to the MMM project folder.
2. Execute the following command:

    ```bash
    mvn clean install -P <profile-name>
    ```

    !!! Tip
        If you don't specify a profile name with the `-P` parameter, the default profile will apply.
diff --git a/en/docs/integrate/develop/create-kubernetes-project.md b/en/docs/integrate/develop/create-kubernetes-project.md
deleted file mode 100644
index 8f095f8423..0000000000
--- a/en/docs/integrate/develop/create-kubernetes-project.md
+++ /dev/null
@@ -1,308 +0,0 @@
# Creating Kubernetes Exporter

Create a Kubernetes Exporter if you want to deploy your integration solutions in a Kubernetes environment.

The Kubernetes Exporter allows you to package multiple [integration modules]({{base_path}}/integrate/develop/create-integration-project) into a single Docker image. Also, a file named **integration_cr.yaml** is generated, which can be used to carry out Kubernetes deployments based on the [API K8s Operator]({{base_path}}/setup/deployment/kubernetes_deployment/#ei-kubernetes-k8s-operator).

## Creating the Kubernetes project

Follow the steps given below.

1. [Create a new integration project]({{base_path}}/integrate/develop/create-integration-project) and create a Kubernetes Exporter project by doing one of the following.

    1. As part of creating an integration project, you can select the **Kubernetes Exporter** check box.

    2. You can right-click on an existing integration project and select **New** -> **Kubernetes Exporter**.

2. In the **New Kubernetes Exporter** dialog box that opens, enter a name for the Kubernetes exporter and other parameters as shown below.

    Enter the following information:

    <table>
        <tr>
            <th>Parameter</th>
            <th>Description</th>
        </tr>
        <tr>
            <td>Kubernetes Exporter Name</td>
            <td>Required. Give a name for the Kubernetes project.</td>
        </tr>
        <tr>
            <td>Generate K8s Artifacts for</td>
            <td>Specify the method that should be used for deploying the Micro Integrator on Kubernetes.
                <ul>
                    <li><b>K8s Operator</b>: The Kubernetes resources required by the Operator are generated.</li>
                    <li><b>Pure K8s Artifacts</b>: The Kubernetes resources required for setting up a deployment without using the K8s Operator are generated.</li>
                </ul>
            </td>
        </tr>
        <tr>
            <td>Integration Name</td>
            <td>Required. This name will be used to identify the integration solution in the Kubernetes custom resource. The custom resource file (integration_cr.yaml) for this solution will be generated along with the other artifacts.</td>
        </tr>
        <tr>
            <td>Number of Replicas</td>
            <td>Required. Specify the number of pods that should be created in the Kubernetes cluster.</td>
        </tr>
        <tr>
            <td>Base Image Repository</td>
            <td>Required. Select the base Micro Integrator Docker image for your solution. Use one of the following options:
                <ul>
                    <li><b>wso2/wso2mi</b>: This is the community version of the Micro Integrator Docker image, which is stored in the public WSO2 Docker registry. This is selected by default.</li>
                    <li><b>docker.wso2.com/wso2mi</b>: This is the Micro Integrator Docker image that includes product updates. This image is stored in the private WSO2 Docker registry. Note that you need a valid WSO2 subscription to use the Docker image with updates.</li>
                    <li>You can also use a custom Docker image from a custom repository.</li>
                </ul>
                If you specify a Docker image from a private repository, note that you need to log in to your repository from a terminal before you build and push the image (as explained below).
            </td>
        </tr>
        <tr>
            <td>Base Image Tag</td>
            <td>Required. Specify the tag of the base image that you are using.</td>
        </tr>
        <tr>
            <td>Target Image Repository</td>
            <td>Required. The Docker repository to which you will later push this Docker image.
                <ul>
                    <li>If your repository is in Docker Hub, use the <code>docker_user_name/repository_name</code> format.</li>
                    <li>If you are using any other repository, use the <code>repository_url/repository_user_name/repository_name</code> format.</li>
                </ul>
                If required, you can update this information later when you build and push the Docker image to the relevant repository.
            </td>
        </tr>
        <tr>
            <td>Target Image Tag</td>
            <td>Required. Give a tag name for the Docker image.</td>
        </tr>
        <tr>
            <td>Automatically deploy configurations</td>
            <td>This check box indicates that you are using the Micro Integrator as the base image. It is recommended to leave this check box selected when you use the Micro Integrator.</td>
        </tr>
        <tr>
            <td>Environment Variables</td>
            <td>You can enter multiple environment variables as key-value pairs.</td>
        </tr>
    </table>
3. Optionally, click **Next** and configure Maven details for the Kubernetes exporter.

4. Click **Finish**. The Kubernetes exporter is created in the project explorer.

5. This step is only required if you already have a Docker image (in your local Docker repository) with the same name as the base image specified above.

    !!! Info
        In this scenario, WSO2 Integration Studio will first check if there is a difference in the two images before pulling the image specified in the **Base Image Repository** field. If the given base image is more updated, the existing image will be overwritten by this new image. Therefore, if you are currently using an older version, or if you have custom changes in your existing image, they will be replaced.

    To avoid your existing custom/older images being replaced, add the following property under **dockerfile-maven-plugin -> executions -> execution -> configurations** in the `pom.xml` file of your Kubernetes Exporter project. This configuration will ensure that the base image will not be pulled when a Docker image already exists with the same name.

    ```xml
    <pullNewerImage>false</pullNewerImage>
    ```

## The Kubernetes Exporter directory

Expand the **Kubernetes Exporter** in the project explorer. See that the following folders and files are created:

| Directory | Description |
|-----------|-------------|
| Libs | Directory to store libraries. During the build time, the libraries inside this folder will be copied to the image. |
| Resources | This folder stores additional files and resources that should be copied to the Docker image. During the build time, the resources inside this directory will be copied to the image. |
| deployment.toml | The product configuration file. |
| Dockerfile | The Dockerfile containing the build details. |
| integration_cr.yaml | The Kubernetes configuration file generated based on the user inputs. |
| pom.xml | The file for selecting the relevant composite applications that should be included in the Docker image. This information is also used when you later build and push Docker images to the Docker registries. |
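To give a sense of the shape of the generated `integration_cr.yaml` file, the following is a rough sketch of a custom resource for the operator. The `apiVersion`, field names, and values here are illustrative assumptions only; the file generated by WSO2 Integration Studio for your operator version is authoritative:

```yaml
# Illustrative sketch only; the generated integration_cr.yaml is authoritative.
apiVersion: "wso2.com/v1alpha2"
kind: "Integration"
metadata:
  name: "hello-world"
spec:
  image: "<Target Image Repository>:<Target Image Tag>"
  deploySpec:
    minReplicas: 1
```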
## Build and Push Docker images

Before you begin:

- Create your integration artifacts in an [ESB Config sub project]({{base_path}}/integrate/develop/create-integration-project/#sub-projects) and package the artifacts in a [Composite Exporter]({{base_path}}/integrate/develop/packaging-artifacts). For example, see the HelloWorld sample given below.

    Integration artifacts for Docker

- Be sure to start your Docker instance before building the image. If Docker is not started, the build process will fail.

- If you are using a Micro Integrator Docker image from a private registry as your base image:

    1. Open a terminal and use the following command to log in to Docker:
        ```bash
        docker login -u username -p password
        ```
    2. In the next step, specify the name of the private Docker registry.

To build and push the Docker image:

!!! Note
    As an alternative, you can skip the steps given below and manually build and push the Docker images using Maven. Open a terminal, navigate to the Kubernetes exporter, and execute the following command:

    ```bash
    mvn clean install -Dmaven.test.skip=true -Ddockerfile.username={username} -Ddockerfile.password={password}
    ```

    However, note that you need **Maven 3.5.2** or a later version when you build the Docker image manually (without using WSO2 Integration Studio).

1. Open the **pom.xml** file inside the Kubernetes exporter and click **Refresh** on the top-right. Your composite application project with integration artifacts will be listed under **Dependencies** as follows:

    Kubernetes pom view

2. Select the composite applications that should be packed inside the Docker image (under **Dependencies**).
3. If required, you can update the **Target Repository** to which the image should be pushed and the **Target Tag**.
4. Save the file and click **Build & Push** on the top-right to start the Docker image build-and-push process. The **Enter Docker Registry Credentials** wizard opens.

    Docker Registry Auth Details

5. Enter the following details in the wizard:

    | Parameter | Description |
    |-----------|-------------|
    | Registry URL Type | The Docker image registry to which the image will be pushed: **Docker Hub** or **Other**. |
    | Username | Username of the target registry repository. |
    | Password | Password of the target registry repository. |
6. Once you enter the above details, click **Push Image**.
7. First, it will build the Docker image based on the Dockerfile and the Target details. When the image is created, you will see the following message.

    Docker Build Success

8. Finally, it will start to push the image to the given registry. Once the process is completed, you will see the following message.

    Docker Push Success
diff --git a/en/docs/integrate/develop/creating-artifacts/adding-connectors.md b/en/docs/integrate/develop/creating-artifacts/adding-connectors.md
deleted file mode 100644
index e571972bd2..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/adding-connectors.md
+++ /dev/null
@@ -1,51 +0,0 @@
# Adding Connectors

You can develop configurations with connectors, and deploy the configurations and connectors as composite application archive (CAR) files in WSO2 Micro Integrator using WSO2 Integration Studio.

!!! Info
    In addition to the methods below, you can enable a connector by creating a configuration file in the `MI_HOME/repository/deployment/server/synapse-configs/default/imports` directory with the following configuration. Replace the value of the `name` property with the name of your connector, and name the configuration file `{org.wso2.carbon.connector}<connector-name>.xml` (e.g., `{org.wso2.carbon.connector}salesforce.xml`).
    ```xml
    <import xmlns="http://ws.apache.org/ns/synapse"
            name="salesforce"
            package="org.wso2.carbon.connector"
            status="enabled"/>
    ```

## Instructions

See the topics given below.

### Importing Connectors

Follow the steps below to import connectors into WSO2 Integration Studio:

1. If you have already created an [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project), right-click the ESB Config project where you want to use the connector and click **Add or Remove Connector/Module**.
2. On the wizard that appears, select **Add Connector/module** and click **Next**.
    - If you have not downloaded the connector, search for the required connector in the **WSO2 Connector Store** view, and click on the download icon to import the connector into the workspace. Then, click on **Finish**.
    - If you have already downloaded the connectors, select the **Add from File System** option and browse to the connector file from the file system. Click **Finish**. The connector is imported into the workspace and available for use with all the projects in the workspace.
3. After importing the connectors into WSO2 Integration Studio, the connector operations are available in the tool palette. You can drag and drop connector operations into your sequences and proxy services.

### Packaging Connectors

Follow the steps below to create a composite application archive (CAR) file containing the connectors:

1. Click **File > New > Other** and select **Connector Exporter Project** under **WSO2 > Extensions > Project Types** and click **Next**.
2. If you are using a Maven Multi Module project, right-click on the project and select **New > Connector Exporter**.
3. Enter a project name and click **Finish**.
4. Right-click on the created connector exporter project, point to **New** and then click **Add/Remove Connectors**.
5. Select **Add Connector/module** and click **Next**. Then, click on the **Workspace** option. This will list down the connectors that have been imported into WSO2 Integration Studio.
6. Select the connector and click **OK** and then click **Finish**.

You can export this connector file as a CAR file just like other ESB artifacts. See [exporting artifacts]({{base_path}}/integrate/develop/exporting-artifacts) for instructions.
### Removing Connectors

Follow the steps below to remove connectors from WSO2 Integration Studio:

1. Right-click on the relevant ESB Config project and click **Add or Remove Connector/Module**.
2. On the wizard that appears, select **Remove Connector/module** and click **Next**.
3. Select the connectors you want to remove and click **Finish**.

## Tutorials

- See the tutorial on [Connecting Web APIs/Cloud Services]({{base_path}}/integrate/tutorials/using-the-gmail-connector/#importing-the-email-connector-into-wso2-integration-studio).
diff --git a/en/docs/integrate/develop/creating-artifacts/creating-a-message-processor.md b/en/docs/integrate/develop/creating-artifacts/creating-a-message-processor.md
deleted file mode 100644
index b6177d4824..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/creating-a-message-processor.md
+++ /dev/null
@@ -1,67 +0,0 @@
# Creating a Message Processor

Follow the instructions given below to create a new [Message Processor]({{base_path}}/reference/synapse-properties/about-message-stores-processors) artifact in WSO2 Integration Studio.

## Instructions

### Creating the Message Processor artifact

1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Message Processor** to open the **New Message Processor Artifact** dialog box.

2. Select **Create a new message-processor artifact** and click **Next**.

3. Enter a unique name for this message processor and specify the type of processor you're creating.

    See the links given below for descriptions of properties for each message processor type:

    - [Message Sampling Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sampling-processor-properties)
    - [Scheduled Message Forwarding Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sched-forwarding-processor-properties)
    - [Scheduled Failover Message Forwarding Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sched-failover-forwarding-processor-properties)

4. Do one of the following to save the artifact:

    - To save the message processor in an existing ESB Config project in your workspace, click **Browse** and select that project.
    - To save the message processor in a new ESB Config project, click **Create new Project** and create the new project.

5. Click **Finish**.

The message processor is created in the `src/main/synapse-config/message-processors` folder under the ESB Config project you specified.

### Updating the properties

Open the new message processor artifact from the project explorer. You can use the **Form** view or the **Source** view to update message processor properties.
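For example, the following is a minimal sketch of how a Scheduled Message Forwarding Processor can look in the **Source** view. The processor, store, and endpoint names here are illustrative:

```xml
<messageProcessor name="ForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="InMemoryStore"
                  targetEndpoint="HelloWorldEP"
                  xmlns="http://ws.apache.org/ns/synapse">
    <!-- Polling interval in milliseconds (illustrative value) -->
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>
```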
See the links given below for descriptions of properties for each processor type:

- [Message Sampling Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sampling-processor-properties)
- [Scheduled Message Forwarding Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sched-forwarding-processor-properties)
- [Scheduled Failover Message Forwarding Processor properties]({{base_path}}/reference/synapse-properties/message-processors/msg-sched-failover-forwarding-processor-properties)

## Examples

## Tutorials

- See the tutorial on [using message stores and processors]({{base_path}}/integrate/tutorials/storing-and-forwarding-messages)
diff --git a/en/docs/integrate/develop/creating-artifacts/creating-a-message-store.md b/en/docs/integrate/develop/creating-artifacts/creating-a-message-store.md
deleted file mode 100644
index eca09d2041..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/creating-a-message-store.md
+++ /dev/null
@@ -1,72 +0,0 @@
# Creating a Message Store

Follow the instructions given below to create a new [Message Store]({{base_path}}/reference/synapse-properties/about-message-stores-processors) artifact in WSO2 Integration Studio.

## Instructions

### Creating the Message Store artifact

1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Message Store** to open the **New Message Store Artifact** dialog box.

2. Select the **Create a new message-store artifact** option and click **Next**.

3. Enter a unique name for the message store, and then select the type of message store you are creating.

    See the links given below for descriptions of message store properties for each store type:

    - [JMS properties]({{base_path}}/reference/synapse-properties/message-stores/jms-msg-store-properties)
    - [JDBC properties]({{base_path}}/reference/synapse-properties/message-stores/jdbc-msg-store-properties)
    - [RabbitMQ properties]({{base_path}}/reference/synapse-properties/message-stores/rabbitmq-msg-store-properties)
    - [Resequence properties]({{base_path}}/reference/synapse-properties/message-stores/resequence-msg-store-properties)
    - [WSO2 MB properties]({{base_path}}/reference/synapse-properties/message-stores/wso2mb-msg-store-properties)
    - [In-Memory properties]({{base_path}}/reference/synapse-properties/message-stores/in-memory-msg-store-properties)
    - [Custom properties]({{base_path}}/reference/synapse-properties/message-stores/custom-msg-store-properties)

4. Do one of the following to save the artifact:

    - To save the message store in an existing ESB Config project in your workspace, click **Browse** and select that project.
    - To save the message store in a new ESB Config project, click **Create new Project** and create the new project.

5. Click **Finish**.

The message store is created in the `src/main/synapse-config/message-stores` folder under the ESB Config project you specified.

### Designing the integration

To add a message store to the integration sequence, use the [Store Mediator]({{base_path}}/reference/mediators/store-mediator):

1. Open the **Design View** of your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties).
2. Drag the [Store Mediator]({{base_path}}/reference/mediators/store-mediator) from the **Palette** and drop it at the relevant position in the [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties):

3. Double-click the **Store Mediator** to open the **Properties** tab:

4. Select your message store artifact from the list in the **Available Message Stores** field as shown above.

The message store is now linked to your integration sequence.

### Updating the properties

Open the new message store artifact from the project explorer. You can use the **Form** view or the **Source** view to update message store properties.

## Examples

- [Introduction to Message Stores and Processors]({{base_path}}/integrate/examples/message_store_processor_examples/intro-message-stores-processors)
- [JDBC Message Store]({{base_path}}/integrate/examples/message_store_processor_examples/using-jdbc-message-store)
- [JMS Message Store]({{base_path}}/integrate/examples/message_store_processor_examples/using-jms-message-stores)
- [RabbitMQ Message Store]({{base_path}}/integrate/examples/message_store_processor_examples/using-rabbitmq-message-stores)

## Tutorials

- See the tutorial on [using message stores and processors]({{base_path}}/integrate/tutorials/storing-and-forwarding-messages)
diff --git a/en/docs/integrate/develop/creating-artifacts/creating-a-proxy-service.md b/en/docs/integrate/develop/creating-artifacts/creating-a-proxy-service.md
deleted file mode 100644
index f727586707..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/creating-a-proxy-service.md
+++ /dev/null
@@ -1,134 +0,0 @@
# Creating a Proxy Service

Follow the instructions given below to create a new [Proxy Service]({{base_path}}/reference/synapse-properties/proxy-service-properties) artifact in WSO2 Integration Studio.

## Instructions

### Creating the Proxy Service artifact

Follow the steps given below.

1. Right-click the project in the navigator and go to **New → Proxy Service** to open the **New Proxy Service** dialog box.

2. Select **Create New Proxy Service** and click **Next**.

3. Enter a unique name for the proxy service and select a proxy service template from the list shown below. These templates will automatically generate the mediation flow that is required for each use case.
    | Template Type | Description |
    |---------------|-------------|
    | Pass-Through proxy | This template creates a proxy service that forwards messages to the endpoint without performing any processing. |
    | Transformer proxy | This template creates a proxy service that transforms all the incoming requests using XSLT and then forwards them to a given endpoint. If required, it can also transform responses from the back-end service according to an XSLT that you specify. |
    | Log Forward proxy | This template creates a proxy service that first logs all the incoming requests and passes them to a given endpoint. It can also log responses from the backend service before routing them to the client. You can specify the log level for requests and responses. |
    | WSDL-Based proxy | This template generates a proxy service from the remotely hosted WSDL of an existing web service. The endpoint information is extracted from the WSDL you specify. Alternatively, you can generate a proxy service from a WSDL definition as explained below. |
    | Secure proxy | This template creates a proxy service that uses WS-Security to process incoming requests and forward them to an unsecured backend service. You simply need to provide the policy file that should be used. |
    | Custom proxy | This template creates an empty proxy service file, where you can manually create the mediation flow by adding all the sequences, endpoints, transports, and other QoS settings. |
- -4. Do one of the following to save the proxy service: - - To save the proxy service in an existing ESB Config project in your workspace, click **Browse** and select that project. - - To save the proxy service in a new ESB Config project, click **Create new Project** and create the new project. -5. Click **Finish**.  - -The proxy service is created in the `src/main/synapse-config/proxy-services` folder under the project you specified. - -### Creating a Proxy Service using a WSDL definition - -Follow the steps given below after opening the **New Proxy Service** dialog box. - -1. Select **Generate Proxy Service using WSDL file** and click **Next**. - - - -2. Provide a URL or a file location as the source of the WSDL and click **Finish**. - - - -You will now see the mediation logic generated from the WSDL as shown below. Note that the [Switch mediator]({{base_path}}/reference/mediators/switch-mediator) is added to the mediation logic and that the different operations given in the WSDL are represented as switch cases. - -!!! Tip - If your WSDL does not have `SOAPActions` specified for the operations, only the **default** switch case will be generated. - - - -### Designing the integration - -When you open the proxy service from the **Config** project in the project explorer, you will see the default **Design** view as shown below. - - - -Drag and drop the required integration artifacts from the **Palette** to the canvas and design the integration flow. - - - -### Updating the properties - -To add service-level properties to the proxy service from the **Design** view: - -1. Double-click the **Proxy Service** icon to open the Properties tab for the service. - - - -2. Expand each section and add the required parameters. - -To add service-level transport parameters: - -1. Go to the **Properties** tab and expand the **Parameters** section as shown below. - - - -2. Click the **plus** icon and add the parameter name and value as a key-value pair: - - - -See the following links for the list of transport parameters you can use: - - - [VFS Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/vfs-transport-parameters) - - [JMS Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) - - [FIX Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/fix-transport-parameters) - - [MailTo Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/mailto-transport-parameters) - - [MQTT Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/mqtt-transport-parameters) - - [RabbitMQ Parameters]({{base_path}}/reference/synapse-properties/transport-parameters/rabbitmq-transport-parameters) - -3. See the complete list of [service-level properties and parameters]({{base_path}}/reference/synapse-properties/proxy-service-properties) that you can configure. - -### Using the Source View - -Click the **Source** tab to view the XML-based synapse configuration (source code) of the proxy service. You can update the service using this view. 
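For reference, the Source view of a pass-through style proxy service looks similar to the following sketch. This is only an illustration; the service name and endpoint URL are placeholder values, not configuration generated for your project.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SamplePassThroughProxy"
       transports="http https"
       startOnLoad="true">
    <target>
        <inSequence>
            <!-- Forward the request to the back-end service without any processing -->
            <call>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </call>
            <!-- Return the back-end response to the client -->
            <respond/>
        </inSequence>
    </target>
</proxy>
```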
- - - -## Examples - -- [Using a Simple Proxy Service]({{base_path}}/integrate/examples/proxy_service_examples/introduction-to-proxy-services) -- [Publishing a Custom WSDL]({{base_path}}/integrate/examples/proxy_service_examples/publishing-a-custom-wsdl) -- [Exposing a Proxy Service via Inbound Endpoint]({{base_path}}/integrate/examples/proxy_service_examples/exposing-proxy-via-inbound) -- [Securing Proxy Services]({{base_path}}/integrate/examples/proxy_service_examples/securing-proxy-services) diff --git a/en/docs/integrate/develop/creating-artifacts/creating-an-api.md b/en/docs/integrate/develop/creating-artifacts/creating-an-api.md deleted file mode 100644 index e2547ecd45..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-an-api.md +++ /dev/null @@ -1,412 +0,0 @@ -# Creating a REST API - -Follow the instructions given below to create a new [REST API]({{base_path}}/reference/synapse-properties/rest-api-properties) artifact in WSO2 Integration Studio. - -## Instructions - -### Creating the API artifact - -1. Right-click the **Config** project in the project explorer and go to **New → REST API**. - - - -2. In the dialog box that opens, select one of the given options for creating the API artifact: - - - - - - - - - - - - - - - - - - - - - - - - -
    - **Create A New API Artifact**: This option is selected by default. Use this option if you want to create the REST API artifact from scratch.
    - **Generate API using Swagger Definition**: Select this option if you want to generate the REST API artifact from an existing Swagger definition (YAML/JSON file). That is, the synapse configuration (XML) of the REST API will be generated using the Swagger definition.
    - **Import API Artifact**: Select this option to import an existing REST API configuration (XML definition) that was created using WSO2 Integration Studio.
    - **Import API from API Manager**: Select this option to import a managed API from WSO2 API Manager.
    - **Generate REST API from WSDL**: Select this option to generate a Synapse API from a WSDL endpoint.
- -3. Click **Next** to go to the next page and enter the relevant details. - - - If you selected **Create a New API** in the previous step, enter the basic details that are required for creating the synapse configuration (XML) of the API: - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Name | Required. The name of the REST API. |
    | Context | Required. The context for the REST API. For example, /healthcare. Note that it is not recommended to use the same API context in multiple APIs; the API context should be unique for each API. It is also not recommended to use "/service" as the API context because it is reserved for proxy services. |
    | Path to Swagger Definition | Enter the path to a custom Swagger definition (YAML/JSON file) that is stored in a registry project in your workspace. Once this API is created and deployed in the Micro Integrator, users will be able to access this custom Swagger definition instead of the default Swagger definition of the API. |
- - - If you selected **Generate API using Swagger Definition** in the previous step, enter the details of your custom Swagger file: - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Swagger Definition File | Required. Click **Browse** and select the Swagger file. |
    | Swagger Registry Path | Click **Browse** to select an existing registry project in your workspace. The Swagger definition will be saved to this registry. If you don't have an existing registry project, click **Create new project** to add a new registry project to your workspace. |
- - - If you selected **Import API Artifact** in the previous step, enter the following information: - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | API Configuration File | Required. Click **Browse** and select the REST API configuration file. |
- - - If you selected **Import API from API Manager** in the previous step, enter the following information: - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Username | The username of the API Manager user. |
    | Password | The password of the API Manager user. |
    | API Manager host | The host URL of the API Manager. |
    After entering the above values in the **Import API from API Manager** wizard, click **List APIs**. The list of APIs available in WSO2 API Manager appears. Select the required API from the list.

- If you selected **Generate REST API from WSDL** in the previous step, enter the following information:

    | Parameter | Description |
    |-----------|-------------|
    | Name | The name for the generated REST API. |
    | Generate API | Select **Using WSDL URL** to generate the API from a remote WSDL file, or select **Using Local File** to generate it from a local WSDL file or a ZIP file that contains a valid WSDL file. |
    | Use Local WSDL file or Zip file | Browse and select the WSDL file or ZIP file that contains a valid WSDL file. |
    | Enter WSDL URL | Enter the remote location of the SOAP service's WSDL file as a valid URL. |
    | Enter SOAP Endpoint | Enter the actual SOAP back-end URL. (This URL should return a valid WSDL when invoked with ?wsdl.) |
    | Save in | Select the integration project in which to save the generated API. |
- - !!! Note - - Current SOAP to REST Generation has the limitations mentioned at [https://github.com/wso2/soap-to-rest/blob/main/limitations.md](https://github.com/wso2/soap-to-rest/blob/main/limitations.md) - - All the generated REST Services are not production ready and users need to review them manually using Integration Studio IDE and edit them if needed - - Since having "." (dot) in XML element names is not a best practice, you may need to manually change the generated soap payload to include the "." (dot) - -4. Click **Finish**. - - The REST API is created inside the `src/main/synapse-config/api` folder of your **Config** project. - - If you provided a custom Swagger definition file (YAML), it is now stored in the registry project. - - - -### Designing the integration - -When you open the REST API from the **Config** project in the project explorer, you will see the default **Design** view as shown below. - - - -Drag and drop the required integration artifacts from the **Palette** to the API resource and design the integration flow. - - - -You can also use the [**Source** view](#using-the-source-view) or the [**Swagger** editor](#using-the-swagger-editor) to update the API configuration. - -### Adding new API resources - -When you create the API, an API resource is created by default. If you want to add a new resource, click **API Resource** in the **Pallet** and simply drag and drop the resource to the REST API. - - - -!!! Info - **About the default API Resource** - - Each API can have at most one default resource. Any request received - by the API but does not match any of the enclosed resources - definitions will be dispatched to the default resource of the API. - In the following example, if a DELETE request is received by `SampleAPI` on the `/payments` URL, the request will be - dispatched to the default resource as none of the resources in SampleAPI are configured to handle DELETE requests. - - === "SampleAPI" - ```xml - - - - - - - - - - - - - ``` - -### Updating metadata - -When you create the API artifact from WSO2 Integration Studio, a **resources** folder with metadata files is created as shown below. - - - -The service's metadata is used by the API management runtime to generate the API proxy for the integration service (which is this API). - - - - - - - - - - - - - - -
| Parameter | Description |
|-----------|-------------|
| description | Explain the purpose of the API. |
| serviceUrl | This is the URL of the API when it gets deployed in the Micro Integrator. You (as the integration developer) may not know this URL during development. Therefore, you can parameterize the URL to be resolved later using environment variables. By default, the {MI_HOST} and {MI_PORT} values are parameterized with placeholders.<br/><br/>You can configure the serviceUrl in the following ways:<br/>• Add the complete URL without parameters. For example: `http://localhost:8290/healthcare`.<br/>• Parameterize using the host and port combination. For example: `http://{MI_HOST}:{MI_PORT}/healthcare`.<br/>• Parameterize using a preconfigured URL. For example: `http://{MI_URL}/healthcare`. |
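A metadata file for this API could look similar to the following sketch. The file path and field values shown here are illustrative assumptions, not values generated for your project; the exact set of fields in your generated file may differ.

```yaml
# resources/metadata/HealthcareAPI_metadata.yaml (illustrative path and values)
key: HealthcareAPI-1.0.0
name: HealthcareAPI
displayName: HealthcareAPI
description: API for checking doctor availability across hospital services
version: 1.0.0
serviceUrl: http://{MI_HOST}:{MI_PORT}/healthcare
definitionType: OAS3
securityType: BASIC
mutualSSLEnabled: false
```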
- -!!! Tip - See the [Service Catalog API documentation]({{base_path}}/reference/product-apis/service-catalog-apis/service-catalog-v1/service-catalog-v1/) for more information on the metadata in the YAML file. - -### Updating properties - -To update API-level properties from the **Design** view: - -1. Double-click the **API** icon to open the Properties tab for the API. - - - -2. See the complete list of [optional REST API properties]({{base_path}}/reference/synapse-properties/rest-api-properties/#rest-api-properties-optional) you can configure. - -To update API resource properties from the **Design** view: - -1. Double-click the **Resource** icon to enable the Properties tab for the resource. - - - -2. See the complete list of [API Resource properties]({{base_path}}/reference/synapse-properties/rest-api-properties/#rest-api-resource-properties) you can configure. - -### Using the Source View - -Click the **Source** tab to view the XML-based synapse configuration (source code) of the API. You can update the API using this view. - - - -### Using the Swagger Editor - -Click the **Swagger Editor** tab to view the Swagger definition of your API. You can update the API using the Swagger editor (left panel) and also interact with the API using the Swagger UI (right panel). - -!!! Note - If you have added a custom Swagger definition to the API, note that this view displays the API's default Swagger definition and not the custom Swagger definition that you added. - - - -## Examples - -- [Using a Simple Rest API]({{base_path}}/integrate/examples/rest_api_examples/introduction-rest-api) -- [Working with Query Parameters]({{base_path}}/integrate/examples/rest_api_examples/setting-query-params-outgoing-messages) -- [Exposing a SOAP Endpoint as a RESTful API]({{base_path}}/integrate/examples/rest_api_examples/enabling-rest-to-soap) -- [Exposing Non-HTTP Services as RESTful APIs]({{base_path}}/integrate/examples/rest_api_examples/configuring-non-http-endpoints) -- [Handling Non Matching Resources]({{base_path}}/integrate/examples/rest_api_examples/handling-non-matching-resources) -- [Handling HTTP Status Codes]({{base_path}}/integrate/examples/rest_api_examples/setting-https-status-codes) -- [Manipulating Content Types]({{base_path}}/integrate/examples/rest_api_examples/transforming-content-type) -- [Securing a REST API]({{base_path}}/integrate/examples/rest_api_examples/securing-rest-apis) -- [Special Cases]({{base_path}}/integrate/examples/rest_api_examples/special-cases) diff --git a/en/docs/integrate/develop/creating-artifacts/creating-an-inbound-endpoint.md b/en/docs/integrate/develop/creating-artifacts/creating-an-inbound-endpoint.md deleted file mode 100644 index a2d48d20c3..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-an-inbound-endpoint.md +++ /dev/null @@ -1,105 +0,0 @@ -# Creating an Inbound Endpoint - -Follow the instructions given below to create a new [Inbound Endpoint]({{base_path}}/reference/synapse-properties/inbound-endpoints/about-inbound-endpoints) artifact in WSO2 Integration Studio. - -## Instructions - -### Creating the Inbound Endpoint artifact - -1. If you have already created an [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project), right-click the project and go to **New → Inbound Endpoint** to open the **New Inbound Endpoint Artifact**. - - - -2. Select **Create a New Inbound Endpoint** and click **Next**. - - - -3. 
Enter a unique name for the inbound endpoint, and select an **Inbound Endpoint Creation Type** from the list. - - - -4. Specify values for the required parameter for the selected inbound endpoint type. - - !!! Note - For certain protocols (HL7, KAFKA, Custom, MQTT, RabbitMq, WSO2_MB, WS, and  WSS) the **main sequence** and **error sequence** are mandatory fields. - - - You can select sequences that already exist in the workspace and add them to the **Sequence** and **Error sequence** fields. If you don't have any sequences in the workspace, click **Generate Sequence and Error Sequence** to generate new sequences for the inbound endpoint. - -5. Do one of the following: - - To save the endpoint in an existing ESB Config project in your workspace, click **Browse** and select that project. - - To save the endpoint in a new ESB Config project, click **Create new Project** and create the new project. -6. Click **Finish**.  - -The inbound endpoint is created in the `src/main/synapse-config/inbound-endpoint` folder under the ESB Config project you specified. - -### Designing the integration - -When you open the inbound endpoint from the **Config** project in the project explorer, you will see the default **Design** view. - - - -The integration flow for an inbound endpoint is defined within [named sequences]({{base_path}}/reference/synapse-properties/sequence-properties/#named-sequences). You can drag and drop **sequences** from the **Palette** to the canvas as shown below. - - - -Double-click the **Sequence** artifact to open the canvas for the sequence. You can now drag and drop the mediation artifacts from the palette and design the integration flow. - - - -### Updating the properties - -To update properties from the **Design** view: - -1. Double-click the **Inbound Endpoint** icon to open the Properties tab. - - - -2. See the following links for the list of parameters for each inbound endpoint type: - - - [HTTP Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/http-inbound-endpoint-properties) - - [CXF WS RM Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/cxf-ws-rm-inbound-endpoint-properties) - - [HL7 Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/hl7-inbound-endpoint-properties) - - [WebSocket Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/websocket-inbound-endpoint-properties) - - [File Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/file-inbound-endpoint-properties) - - [JMS Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/jms-inbound-endpoint-properties) - - [Kafka Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/kafka-inbound-endpoint-properties) - - [RabbitMQ Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/event-based-inbound-endpoints/rabbitmq-inbound-endpoint-properties) - - [MQTT Inbound Parameters]({{base_path}}/reference/synapse-properties/inbound-endpoints/event-based-inbound-endpoints/mqtt-inbound-endpoint-properties) - -!!! Note - **Redeployment of listening inbound endpoints fail?** - - A **listening inbound endpoint** opens the port for itself during deployment. 
Therefore, if you are **redeploying** a listening inbound endpoint artifact, the redeployment will not be successful until the port that was previously opened for the inbound endpoint is closed. - - By default, the system will wait for 10 seconds for the previously opened port to close down. If you want to increase this waiting time beyond 10 seconds, be sure to add the following system property in the `deployment.toml` file, which is stored in the `MI_HOME/conf/` directory and restart the server before redeploying the artifacts. - - ```toml - [system.parameter] - 'synapse.transport.portCloseVerifyTimeout' = 20 - ``` - Note that `synapse.transport.portCloseVerifyTimeout` should be wrapped by single quotes since it contain dots. The TOML format detects the dot as an object separator. - Also note that this setting may be required in Windows environments as the process of closing a port can sometimes take longer than 10 seconds. - -### Using the Source View - -Click the **Source** tab to view the XML-based synapse configuration (source code) of the inbound endpoint. You can update the service using this view. - - - -## Examples - -- [JMS Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-jms-protocol) -- [File Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/file-inbound-endpoint) -- [HTTP Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-http-protocol) -- [HTTPS Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-https-protocol) -- [HL7 Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-hl7-protocol-auto-ack) -- [MQTT Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-mqtt-protocol) -- [RabbitMQ Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-rabbitmq-protocol) -- [Kafka Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-kafka) -- [WebSocket Inbound Endpoint example]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket) -- [Using Inbound Endpoints with Registry]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-with-registry) - -## Tutorial - -- See the tutorial on [using inbound endpoints]({{base_path}}/integrate/tutorials/using-inbound-endpoints) diff --git a/en/docs/integrate/develop/creating-artifacts/creating-endpoint-templates.md b/en/docs/integrate/develop/creating-artifacts/creating-endpoint-templates.md deleted file mode 100644 index 178e44e6a7..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-endpoint-templates.md +++ /dev/null @@ -1,75 +0,0 @@ -# Creating Endpoint Templates - -Follow the instructions given below to create a new **Endpoint Template** in WSO2 Integration Studio. - -## Instructions - -### Creating the Endpoint Template artifact - -1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Template** to open the **New Template Artifact** dialog box. - - - -2. Select **Create a New Template** and click **Next**. - - - -3. Enter a unique name for the template and select one of the following **Endpoint Template** types. 
- - - - - Address Endpoint Template - - Default Endpoint Template - - HTTP Endpoint Template - - WSDL Endpoint Template - - Specify values for the [required parameter]({{base_path}}/reference/synapse-properties/template-properties/#endpoint-template-properties) for the selected endpoint type. - -5. Do one of the following to save the artifact: - - - To save the template in an existing ESB Config project in your workspace, click **Browse** and select that project. - - To save the template in a new ESB Config project, click **Create new Project** and create the new project. - -6. Click **Finish**.  - - The template is created in the `src/main/synapse-config/templates` folder under the ESB Config project you specified. - -7. To use the endpoint template, [update the properties](#updating-properties). - -### Updating properties - -1. Open the template artifact from the project explorer. -2. First, update the endpoint parameter values with placeholders that are prefixed by `$`. - - For example: - - - -3. Then, click **Add Template Parameter** to open the **Parameter Configuration** dialog box and add the endpoint parameter placeholders (that you used above) as parameters: - - - -### Designing the integration - -When you have an Endpoint template defined, you can use a **Template Endpoint** in your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties) to call the parameters in the template. - -1. Open to the **Design View** of your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). -2. Drag the [Call Mediator]({{base_path}}/reference/mediators/call-mediator) from the **Palette** and drop it to the relevant position in the [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). - - - - !!! Tip - Similarly, you can use the [Send Mediator]({{base_path}}/reference/mediators/send-mediator). - -3. Drag a [Template Endpoint]({{base_path}}/reference/mediators/send-mediator) from the **Endpoints** section in the **Palette** and drop it to the empty box in the [Call Mediator]({{base_path}}/reference/mediators/call-mediator). - - - -4. Open the [Template Endpoint]({{base_path}}/reference/mediators/send-mediator) from the project explorer and click **Add Parameters** to open the **Template Endpoint Parameter Configuration** dialog box. -5. Specify the parameter values as shown below. - - - -## Examples - -- [Using Endpoint Templates]({{base_path}}/integrate/examples/template_examples/using-endpoint-templates) diff --git a/en/docs/integrate/develop/creating-artifacts/creating-endpoints.md b/en/docs/integrate/develop/creating-artifacts/creating-endpoints.md deleted file mode 100644 index d80df15706..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-endpoints.md +++ /dev/null @@ -1,82 +0,0 @@ -# Creating an Endpoint -Follow the instructions given below to create a new [Endpoint]({{base_path}}/reference/synapse-properties/endpoint-properties) artifact in WSO2 Integration Studio. - -## Instructions - -### Creating the Endpoint artifact - -1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Endpoint** to open the **New Endpoint Artifact** dialog box. - - - -2. Select **Create a New Endpoint** and click **Next**. - - - -3. Enter a unique name for the endpoint, and then select the type of endpoint you are creating. - - - -4. 
Specify values for the [required parameter]({{base_path}}/reference/synapse-properties/endpoint-properties) for the selected endpoint type. -5. Specify how you want to save the endpoint: - - - Select **Static Endpoint** to save the endpoint in the current workspace. - - Select **Dynamic Endpoint** to save the endpoint as a registry resource. - -6. Specify the location to save the endpoint: - - - To save in an existing ([ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) or [Registry Resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project)) in your workspace, click **Browse** and select that project. - - To save in a new project, click **Create new Project** and create the new project. - -7. Click **Finish**.  - -The endpoint is created in the `src/main/synapse-config/endpoints` folder under the ESB Config project or [registry resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project) you specified. - -### Designing the integration - -To add an endpoint artifact to the integration sequence, use the [Send Mediator]({{base_path}}/reference/mediators/send-mediator) or the [Call Mediator]({{base_path}}/reference/mediators/call-mediator). - -1. Open to the **Design View** of your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). -2. Drag the [Call Mediator]({{base_path}}/reference/mediators/call-mediator) from the **Palette** and drop it to the relevant position in the [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties): - - - - !!! Tip - Similarly, you can use the [Send Mediator]({{base_path}}/reference/mediators/send-mediator). - -3. Drag the new endpoint artifact from the **Defined Endpoints** section in the **Palette** and drop it to the empty box in the [Call Mediator]({{base_path}}/reference/mediators/call-mediator): - - - -The endpoint artifact is now linked to your integration sequence. - -### Updating the properties - -Open the new endpoint artifact from the project explorer. You can use the **Form** view or the **Source** view to update endpoint properties. - - - -See the descriptions of all [endpoint properties]({{base_path}}/reference/synapse-properties/endpoint-properties). - -## Examples - - diff --git a/en/docs/integrate/develop/creating-artifacts/creating-registry-resources.md b/en/docs/integrate/develop/creating-artifacts/creating-registry-resources.md deleted file mode 100644 index b8e8e784e9..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-registry-resources.md +++ /dev/null @@ -1,101 +0,0 @@ -# Creating a Registry Resource - -Initially, your Registry resources project will contain only a `pom` file. You can create any number of registry resources inside that project. - -## Step 1: Creating the resource artifact - -Right-click on the `Registry Resource project` and click **New** -> **Registry Resource**. - - - -This will open the **New Registry Resource** window. - - - -Select one of the following options and click **Next**. - -- [From existing template](#from-existing-template) -- [Import from file system](#import-from-file-system) -- [Import Registry dump file from file system](#import-registry-dump-file-from-file-system) -- [Check-out from Registry](#check-out-from-registry) - -### From existing template - -Use the **From existing template** option if you want to generate a registry resource from a template. 
- - - -Click **Next** and specify values for the following parameters: - - - -Enter a unique name for the resource and select a resource template for the **Template** field. In this example, a **WSDL File** template is used. - -### Import from file system - -Use the **Import from file system** option to import a file or a folder -containing registry resources. - -!!! Tip - This helps you import a resource and collection from the same registry instance or a different registry instance that you have added. Similarly, you can export a resource or collection to the same registry instance or a different registry instance. - - - -Click **Next** and specify values for the following parameters: - - - - - - - - - - - - -
| Parameter | Description |
|-----------|-------------|
| Browse file/Browse folder | Browse to find the relevant file or folder. |
| Copy content only | If you selected **Browse folder**, the **Copy content only** check box is enabled. Select it if you want to copy only the content of the folder (and not the folder itself) to the save location. |
- -### Import Registry dump file from file system - -Use this option to browse for a registry dump file, which you can use to -sync a registry. - - - -Click **Next** and then click **Browse** to find the relevant file. - - - -### Check-out from registry - -Use this option to check out files from the registry. - - - -Click **Next** and specify the artifact name and the registry path from which you want to check out the files. - - - -## Step 2: Saving the resource artifact - -Specify the location to save the registry resource and click **Finish**. - - - - - - - - - - - - -
| Parameter | Description |
|-----------|-------------|
| Registry path to deploy | Specify where the registry resource should be saved at the time of deployment. |
| Save Resource in | Select an existing registry resource project in which to save the resource. Alternatively, you can create a new registry resource project. |
- -## Editing a Registry Resource - -You may need to change the details you entered for a registry resource, for example, the registry path. You can edit such information using the **Registry Resource Editor**. To open this editor, right-click the [Registry Resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project) and click **Registry Resource Editor**. - -This editor lists all the registry resources that you have defined in that project and it will list the **Registry Path to Deploy** information per resource. diff --git a/en/docs/integrate/develop/creating-artifacts/creating-reusable-sequences.md b/en/docs/integrate/develop/creating-artifacts/creating-reusable-sequences.md deleted file mode 100644 index a5f5145a8f..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-reusable-sequences.md +++ /dev/null @@ -1,93 +0,0 @@ -# Creating a Reusable Sequence - -Follow these steps to create a new, reusable sequence that you can add to your mediation workflow or refer to from a sequence mediator, or to create a sequence mediator and its underlying sequence all at once. - -## Instructions - -### Creating a Sequence Artifact - -1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Sequence** to open the **New Sequence Artifact** dialog box. - - - -2. Select **Create New Sequence** and click **Next**. - - - -3. Specify a unique name for the sequence. - - !!! Info - **Creating a Main Sequence**: - If you want to create the default main sequence that just sends messages without mediation, be sure to name it `main`, which automatically populates the sequence with the default **In** and **Out** sequences. - - - -4. In the **Save Sequence in** field, specify the location to save the sequence: - - To save the sequence in an existing ESB Config project in your workspace, click **Browse** and select that project. Else, click **Create new Project** and create the new project. - - To save the sequence as a **Dynamic Sequence** in a [registry resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project): - 1. Select the **Make this as Dynamic Sequence** check box. - 2. Specify the registry space (Governance or Configuration) in the **Registry** field. - 3. If a **Registry Resource** project already exist in the workspace, click **Browse** for the **Save Sequence in** field and select the registry resource project. - Else, click **Create new Project** to create a new registry project. - 4. Type the sequence name in the **Registry Path** field. - - - -5. Click **Finish**.  - -The sequence is created in the `src/main/synapse-config/sequences` folder under the ESB Config or [registry resource project]({{base_path}}/integrate/develop/create-integration-project/#registry-resource-project) you specified. - -The sequence is also available in the **Defined Sequences** section of the **Palette** and ready for [use in other meditation workflows](#using-a-sequence). - -### Create from a Sequence Mediator - -1. Open your proxy service, drag the **Sequence Mediator** from the **Palette** to the canvas. This represents a sequence artifact. - - - -2. If required, change the name of the sequence. -3. Double-click the sequence mediator you just added. The canvas of the new sequence opens in the graphical editor. 
- -The sequence artifact (with the name you specified) is created in the `src/main/synapse-config/sequences` folder under the ESB Config project. - -### Designing the integration - -When you create a sequence, it appears in the **Defined Sequences** section of the tool palette. To use this sequence in a mediation flow: - -1. When you sequence artifact from the **Config** project in the project explorer, you will see the default **Design** view. - - - -2. Drag and drop the required integration artifacts from the **Palette** to the canvas and design the integration flow. - - - -To use a sequence from a different project or from the registry, you need to use the [Sequence Mediator](): - -1. Drag and drop the **Sequence Mediator** from the **Palette** to the mediation flow. - -2. Click the **Sequence Mediator** icon to open the **Properties** tab: - -3. Click **Static Reference Key**, and then click the browse **\[...\]** button on the right. - -4. In the **Resource Key Editor**, click **Registry** if the sequence is stored in the registry or **Workspace** if it is in another ESB Config project. - -5. If you are trying to select a sequence from the registry and no entries appear in the dialog box, click **Add Registry Connection** and connect to the registry where the sequence reside. - -6. Select the required sequence and click **OK**, and then click **OK** again. - -The sequence mediator name and static reference key are updated to point to the sequence you selected. - -You can also use the [**Source** view](#using-the-source-view) to update the sequence configuration. - -### Using the Source View - -Click the **Source** tab to view the XML-based synapse configuration (source code) of the inbound endpoint. You can update the sequence using this view. - - - -## Examples - -- [Breaking Complex Flows into Multiple Sequences]({{base_path}}/integrate/examples/sequence_examples/using-multiple-sequences) -- [Using Fault Sequences]({{base_path}}/integrate/examples/sequence_examples/using-fault-sequences) -- [Reusing Sequences]({{base_path}}/integrate/examples/sequence_examples/custom-sequences-with-proxy-services) diff --git a/en/docs/integrate/develop/creating-artifacts/creating-scheduled-task.md b/en/docs/integrate/develop/creating-artifacts/creating-scheduled-task.md deleted file mode 100644 index b8e01c920d..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-scheduled-task.md +++ /dev/null @@ -1,54 +0,0 @@ -# Scheduling ESB Tasks - -Follow the instructions given below to create a **Scheduled Task** in WSO2 Integration Studio. - -## Instructions - -### Creating the Scheduled Task artifact - -1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and click **New** → **Scheduled Task**. - - - -2. Select **Create a New Scheduled Task Artifact** and click **Next**. - - - -3. Specify values for the [required parameter]({{base_path}}/reference/synapse-properties/scheduled-task-properties) for the scheduled task. - - - -4. Specify the location to save the artifact: - - - To save the scheduled task in an existing ESB Config project in your workspace, click **Browse** and select that project. - - To save the scheduled task in a new ESB Config project, click **Create new Project** and create the new project. - -5. Click **Finish**.  - - The scheduled task is created in the `src/main/synapse-config/tasks` folder under the ESB Config project you specified. - -6. 
To use the scheduled task, [update the properties](#updating-properties). - -### Updating properties - -Update the task properties to specify the incoming message that should trigger the task and the destination to which the message should be injected. - -1. Open the new artifact from the project explorer. - - - -2. In the **Form** view, you can optionally modify already specified property values. -3. Click **Task Implementation Properties** to open the **Task Properties** dialog box. - - - -4. Update the properties. - -## Examples - -- [Task Scheduling using a Simple Trigger]({{base_path}}/integrate/examples/scheduled-tasks/task-scheduling-simple-trigger) -- [Injecting Messages to a RESTful Endpoint]({{base_path}}/integrate/examples/scheduled-tasks/injecting-messages-to-rest-endpoint) - -## Tutorials - -- See the tutorial on [periodically executing an integration process]({{base_path}}/integrate/tutorials/using-scheduled-tasks) using a scheduled task diff --git a/en/docs/integrate/develop/creating-artifacts/creating-sequence-templates.md b/en/docs/integrate/develop/creating-artifacts/creating-sequence-templates.md deleted file mode 100644 index 4576aa3e8d..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/creating-sequence-templates.md +++ /dev/null @@ -1,63 +0,0 @@ -# Creating Sequence Templates - -Follow the instructions given below to create a new **Sequence Template** in WSO2 Integration Studio. - -## Instructions - -### Creating the Sequence Template artifact - -1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Template** to open the **New Template Artifact** dialog box. - - - -2. Select **Create a New Template** and click **Next**. - - - -3. Enter a unique name for the template and select **Sequence Template** from the list. - - - - Specify values for the [required parameter]({{base_path}}/reference/synapse-properties/template-properties/#endpoint-template-properties) for the selected endpoint type. - -5. Do one of the following to save the artifact: - - - To save the template in an existing ESB Config project in your workspace, click **Browse** and select that project. - - To save the template in a new ESB Config project, click **Create new Project** and create the new project. - -6. Click **Finish**.  - - The template is created in the `src/main/synapse-config/templates` folder under the ESB Config project you specified. - -7. To use the sequence template, [update the properties](#updating-properties). - -### Updating properties - -1. Open the **Design View** of the sequence template you created. - - - -2. Drag-and-drop the required mediators from the **Palette**. -3. Specify parameter values as an XPATH. - - In the following example, the `GREETING_MESSAGE` property of the **Log** mediator is specified using the `$func:message` expression. - - - -### Designing the integration - -When you have a Sequence template defined, you can use a [Call Template Mediator]({{base_path}}/reference/mediators/call-template-mediator) in your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). - -1. Open to the **Design View** of your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). -2. Drag the [Call Template Mediator]({{base_path}}/reference/mediators/call-template-mediator) from the **Palette** and drop it to the relevant position in the [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties). 
- - - -3. Double-click the [Call Template Mediator]({{base_path}}/reference/mediators/call-template-mediator) icon to open the **Properties** tab. -4. Select your sequence template from the list in the **Available Templates** field and then add values using the template parameters. - - - -## Examples - -- [Using Sequence Templates]({{base_path}}/integrate/examples/template_examples/using-sequence-templates) diff --git a/en/docs/integrate/develop/creating-artifacts/data-services/creating-data-services.md b/en/docs/integrate/develop/creating-artifacts/data-services/creating-data-services.md deleted file mode 100644 index 12597c3650..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/data-services/creating-data-services.md +++ /dev/null @@ -1,402 +0,0 @@ -# Creating a Data Service - -Follow the instructions given below to create a new data service artifact. - -!!! Tip - You can also use a sample template to create your data service. - - 1. Open the **Getting Started** view of WSO2 Integration Studio (**Menu -> Help -> Getting Started**). - 2. In the Getting Started view, go to the **Data Service** tab and select the **REST Data Service** example. - -## Instructions - -### Create the data service artifact - -Follow the steps given below to create the data service file: - -1. Right-click the **Data Service Config** module in the project - explorer and go to **New -> Data Service**. - - - -2. In the **New Data Service** wizard that opens, select **Create New - Data Service** and click **Next**. - - - -3. Enter a name for the data service and click **Finish**. - -A data service file (DBS file) will now be created in your data service -project as show below. - -![]({{base_path}}/assets/img/integrate/tutorials/data_services/data-service-project-structure.png) - -### Adding a datasource - -You can configure the datasource connection details using this section. - -1. Click **Data Sources** to expand the section. - - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/add-datasource-1.png) - -2. Click **Add New** to open the **Create Datasource** page. - - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/add-datasource-2.png) - -3. Enter the datasource connection details. -4. Click **Test Connection** to expand the section. - - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/test_connection.png) - -5. Click the **Test Connection** button to verify the connectivity between the MySQL datasource and the data service. - -6. Save the data service. - -### Creating a query - -You can configure the main query details using this section. - -1. Click **Queries** to expand the section. - - - -2. Click **Add New** to open the **Add Query** page. - - - -3. Enter the following query details. - - - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Query ID | Give a unique name to identify the query. |
    | Datasource | All the datasources created for this data service are listed. Select the required datasource from the list. |
    | SQL Query | Enter the SQL query in this text box. |
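These fields map onto a query element in the data service (.dbs) source. The following is a minimal sketch, assuming an illustrative query named GetEmployees against the default datasource; the query ID, table, and column names are assumptions, not generated values.

```xml
<query id="GetEmployees" useConfig="default">
    <!-- The SQL statement entered in the SQL Query text box -->
    <sql>SELECT EmployeeNumber, FirstName, LastName, Email FROM Employees</sql>
    <!-- Input and output mappings (covered below) are added as param and result elements -->
</query>
```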
- -#### Input mapping - -You can configure input parameters for the query using this section. - -1. Click **Input Mappings** to expand the section. - - - -2. There are two ways to create the mapping: - - - You can click **Generate** to automatically generate the input mappings from the SQL query. - - If you want to add a new input mapping: - - 1. Click **Add New** to open the **Add Input Mapping** page. - - - - 2. Enter the following input mapping details: - - - - - - - - - - - - - - - - - - -
        | Parameter | Description |
        |-----------|-------------|
        | Mapping Name | Give a name for the mapping. |
        | Parameter Type | The parameter type. |
        | SQL Type | The SQL type. |
- - 3. Save the input mapping. - -Shown below is an example query with input mapping: - - - -#### Result (Output Mappings) - -You can configure output result parameters for the query using this section. - -1. Click **Result (Output Mappings)** to expand the section. - - - -2. Enter the following details: - - - - - - - - - - -
    | Property | Value |
    |----------|-------|
    | Grouped by Element | Employees |
- -3. There are two ways to create the output mapping: - - - You can click **Generate** to automatically generate the output mappings from the SQL query. - - Alternatively, you can manually add the mappings: - - 1. Click **Add New** to open the **Add Output Mapping** page. - - - - 2. Enter the following output element details. - - - - - - - - - - - - - - - - - - - - - - - - -
        | Property | Value |
        |----------|-------|
        | Datasource Type | column |
        | Output Field Name | EmployeeNumber |
        | Datasource Column Name | EmployeeNumber |
        | Schema Type | String |
- - 3. Save the element. - 4. Follow the same steps to create the remaining output elements. - -Shown below is an example query with output mappings: - -![]({{base_path}}/assets/img/integrate/tutorials/data_services/output_mapings.png) - -#### Advanced properties - -Click **Advanced Properties** to expand the section and add the required parameter values. - -![]({{base_path}}/assets/img/integrate/tutorials/data_services/advances_properties_expanded.png) - -The data service should now have the query element added. - -### Adding a SOAP operation - -Use this section to configure a SOAP operation for invoking the data service. - -1. Click **Operations** to expand the section. - - - -2. Click **Add New** to add a SOAP Operation for your data service. - - - -3. Enter the following information: - - - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Operation Name | Give a name to the SOAP operation. |
    | Query ID | Select the query from the listed queries. |
    | Operation Parameters | Click **Add New** to add new parameters to the operation. |
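In the .dbs source, the SOAP operation is wired to the query through a call-query element. The following is a minimal sketch, assuming the illustrative GetEmployees query shown earlier; the operation and query names are assumptions, not generated values.

```xml
<operation name="GetEmployeesOp">
    <!-- Invokes the referenced query; operation parameters are passed
         to the query's input mappings using with-param elements -->
    <call-query href="GetEmployees"/>
</operation>
```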
- -### Adding a Resource - -Use this section to configure a REST resource for invoking the data service. - -1. Click **Resources** to expand the section. - - - -2. Click **Add New** to add a new resource. - - - -3. Give the following details to create the REST resource. - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | Resource Path | Give the HTTP REST resource path that you need to expose. |
    | Query ID | Select the query that you need to expose as a REST resource from the drop-down list. |
- -4. Click **Save** to add the resource to the data service. - -The data service should now have the resource added. - -### Generate Data Service from a Datasource - -Follow the steps given below to automatically create a data service using a given datasource structure. -When generating a data service, the server takes its table structure according to the structure specified in the -datasource and automatically creates the SELECT, INSERT, UPDATE, and DELETE operations. - -1. Create a datasource project and add a datasource in the current workspace. You can - refer [Creating a Datasource]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-datasources) for more information. - -2. In the **New Data Service** wizard that opens, select **Generate Data Service from Datasource** and click **Next**. - - -3. From the wizard, select the datasource that you have configured in step 1. - - - -4. Select the driver to connect to the datasource. You need to browse and upload a driver from your file system. - - - - Then click **Fetch Table** to list down all avaialble tables in the selected datasource. - -5. From the list of tables, select the tables and the REST resource methods that you want in the generated data service. - - !!! Note - 1. The **POST** REST method is enabled only when the database is not in read-only mode. - 2. The **PUT** and **DELETE** REST methods are enabled only when a primary key is defined on the table. - - - -6. You can select a service generation mode from the following two options: - - - Single Service: Creates a single data service for resources of all tables. - If this option is selected, you need to provide a name for the Data Service you are creating. - - - Multiple Services: Creates a service per table, which will contain isolated resources for each table. - -7. Click **Finish** to generate the services and add to the dataservices project. - -## Examples - - - -## Tutorials - -
- See the tutorial on data integration
Click **Finish** . The project is created, and the new validator - class is open in the editor where you can add your validation logic. - -## Importing a validator project - -Follow these steps to import an existing custom validator project. -Alternatively, you can [create a new custom validator](#creating-a-new-custom-validator). - -1. Go to **File-> New -> Other -> Data Services Validator Project** - to open the **New Data Services Validated Artifact Creation Wizard** - . -2. Select **Import Project From Workspace** and click **Next** . -3. Select the existing validator project, and optionally specify the - location and working sets for the new project. -4. A Maven POM file will be generated automatically for this project. - If you want to customize the Maven options (such as including parent - POM information in the file from another project in this workspace), - click **Next** and specify the options. -5. Click **Finish** . The project is imported, and the validator class - is open in the editor, where you can modify the validation logic as - needed. \ No newline at end of file diff --git a/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md b/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md deleted file mode 100644 index 5cbd629150..0000000000 --- a/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md +++ /dev/null @@ -1,125 +0,0 @@ -# Applying Security to a Data Service - -WSO2 supports WS-Security, WS-Policy, and WS-Security Policy -specifications. These specifications define a behavioral model for Web -services. To enable a security policy for a data service, you need to -first create a security policy file, and then add it to the data -service. - -## Prerequisites - -Be sure to [configure a user store]({{base_path}}/install-and-setup/setup/mi-setup/setup/user_stores/setting_up_a_userstore) for the Micro Integrator and add the required users and roles. - -## Step 1: Creating a registry resource module - -Registry artifacts (such as security policy files) should be stored in a -**Registry Resource** module. Follow the steps given below to create a -module: - -1. Right click on the [Integration project]({{base_path}}/integrate/develop/create-integration-project) - and go to **New → Registry Resource**. - - !!! Tip - Alternatively, you can go to **File → New → Others** and - select **Registry Resources** from the opening wizard. - -2. Enter a name for the module and click **Next** . -3. Enter the Maven information about the module and click **Finish** . -4. The new module will be listed in the project explorer. - -## Step 2: Creating a security policy as a registry resource - -1. Right-click the registry resource module in the left navigation - panel, click **New**, and then click **Registry Resource**. This - will open the **New Registry Resource** window. -2. Select the **From existing template** option as shown below and - click **Next** . - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/119130577/119130583.png) -3. Enter the following details: - - | Property | Value | - |---------------|----------------| - | Resource Name | Sample_Policy | - | Artifact Name | Sample_Policy | - | Template | WS-Policy | - | Registry | gov | - | Registry path | ws-policy/ | - -4. Click **Finish** and the policy file will be listed in the - navigator. - 1. Let's use the **Design View** to enable the required security - scenario. 
For example, enable the **Sign and Encrypt** security - scenario. - - !!! Tip - Click the icon next to the scenario to get details of the scenario. - - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/119130577/119130596.png) - - 2. You can also provide encryption properties, signature - properties, and advanced rampart configurations. - - !!! Info - **Using role-based permissions?** - - For certain scenarios, you can specify user roles. After you select the scenario, scroll to the right to see the **User Roles** button. Either define the user roles inline or retrieve the user roles from the server. - - !!! Info - Switch to source view of the policy file and make sure the tokenStoreClass in the policy file is 'org.wso2.micro.integrator.security.extensions.SecurityTokenStore'. - In addition, replace the ServerCrypto class with 'org.wso2.micro.integrator.security.util.ServerCrypto' if present. - -5. Save the policy file. - -## Step 2: Adding the security policy to the data service - -Once you have configured the policy file, you can add the security -policy to the data service as explained below. - -1. If you have already created a data service using WSO2 Integration - Studio, select the file from the Project Explorer. - - !!! Tip - Be sure to update your database credentials in the dataservice file. - -2. Once you have opened the data service file, switch to the **Source View** to see -the source of the data service. - -3. Add the following elements inside the `` element and save the file. - ```xml - - - ``` - -## Step 3: Package the artifacts - -See the instructions on [packaging the artifacts]({{base_path}}/integrate/develop/packaging-artifacts) into a composite exporter. - -## Step 4: Build and run the artifacts - -See the instructions [deploying the artifacts]({{base_path}}/integrate/develop/deploy-artifacts). - -## Step 5: Testing the service - -Create a Soap UI project with the relevant security settings and then send the request to the hosted service. - -For guidelines on using SoapUI, see [general guidelines on testing with SOAP UI]({{base_path}}/integrate/develop/advanced-development/applying-security-to-a-proxy-service/#general-guidelines-on-testing-with-soap-ui). - -## Using an encrypted datasource password - -When you create a data service for an RDBMS datasource, you have the -option of encrypting the datasource connection password. This ensures -that the password is encrypted in the configuration file (.dbs file) of -the data service. - -See the instructions on [encrypting plain-text passwords]({{base_path}}/install-and-setup/setup/mi-setup/security/encrypting_plain_text) - -Once you have encrypted the datasource password, you can update the data -service as explained below. - -1. Open the data service and click **Data Sources** to expand the section. - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/data_source_expanded.png) -2. Click on the **Edit** icon of the respective Datasource to open - **Edit Datasource** page. - ![]({{base_path}}/assets/img/integrate/tutorials/data_services/edit_datasource.png) -3. Make sure to check **Use as a Secret Alias**. -4. Update the Secret Alias and click on **Save**. 
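Once the alias is set, the datasource password in the data service configuration references the secure vault instead of holding a plain-text value. The following is a minimal sketch, assuming an alias named wso2.mi.datasource.password; the alias and connection details are illustrative assumptions.

```xml
<config id="default" xmlns:svns="http://org.wso2.securevault/configuration">
    <property name="driverClassName">com.mysql.jdbc.Driver</property>
    <property name="url">jdbc:mysql://localhost:3306/mysqldb</property>
    <property name="username">username</property>
    <!-- The actual password is resolved from the secure vault at runtime using this alias -->
    <property name="password" svns:secretAlias="wso2.mi.datasource.password"></property>
</config>
```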
diff --git a/en/docs/integrate/develop/creating-artifacts/registry/creating-local-registry-entries.md b/en/docs/integrate/develop/creating-artifacts/registry/creating-local-registry-entries.md
deleted file mode 100644
index 04b65b9deb..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/registry/creating-local-registry-entries.md
+++ /dev/null
@@ -1,94 +0,0 @@
# Creating Local Registry Entries

The **local registry** acts as a memory registry where you can store static content as a key-value pair. This could be static text specified as **inline text**, static XML specified as an **inline XML** fragment, or a URL (using the `src` attribute).

=== "Inline text"
    ```xml
    <localEntry key="version">0.1</localEntry>
    ```

=== "Inline XML"
    ```xml
    <localEntry key="validate_schema">
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
            ...
        </xs:schema>
    </localEntry>
    ```

=== "Source URL"
    ```xml
    <localEntry key="xslt-key-req" src="file:repository/samples/resources/transform/transform.xslt"/>
    ```

This is useful for the type of static content often found in XSLT files, WSDL files, URLs, etc. Local entries can be referenced from mediators in the Micro Integrator mediation flows and are resolved at runtime. These entries are top-level entries and are globally visible within the entire system. The values of these entries can be retrieved via the extension XPath function `synapse:get-property(prop-name)`, and the keys of these entries can be specified wherever a registry key is expected within the configuration. A local entry shadows any entry with the same name from a remote registry.

## Instructions

### Creating the local entry

Follow these steps to create a new local entry.

1. Right-click the [ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project) and go to **New → Local Entry** to open the **New Local Entry** dialog box.

2. Select **Create a New Local Entry** and click **Next**.

3. Enter a unique name for the local entry, select one of the following types of local entries, and specify the details:

    - **In-Line Text Entry**: Type the text you want to store.
    - **In-Line XML Entry**: Type the XML code you want to store.
    - **Source URL Entry**: Type or browse to the URL you want to store.

4. In the **Save in** field, specify the project to save the artifact:

    - To save the local entry in an existing ESB Config project in your workspace, click **Browse** and select that project.
    - To save the local entry in a new ESB Config project, click **Create new Project** and create the new project.

5. Click **Finish**.

The local entry is created in the `src/main/synapse-config/local-entries` folder under the ESB Config project you specified, and the local entry appears in the editor.

### Updating the properties

Open the new local entry artifact from the project explorer. You can use the **Form** view or the **Source** view to update the local entry properties.

### Using a local entry

After you create a local entry, you can reference it from a mediator in your mediation workflow. For example, if you created a local entry with XSLT code, you can add an XSLT mediator to the workflow and then reference the local entry as follows:

1. Open the **Design View** of your [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties).
2. Drag and drop an [XSLT Mediator]({{base_path}}/reference/mediators/xslt-mediator) to the mediation flow as shown below.
3. Double-click the XSLT mediator icon to open the **Properties** tab.
4. Click the **XSLT Static Schema Key** property to get the **Resource Key** wizard.
5. Click the **Workspace** link, and then navigate to and select the local entry that contains the XSLT code.
6. Click **OK**.
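After these steps, the sequence source references the local entry by its key. The following is a minimal sketch, assuming a local entry named `sample_xslt` that holds the XSLT code; it also shows how the `version` inline-text entry from the example above can be read with `synapse:get-property()`:

```xml
<sequence name="TransformSequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- The XSLT mediator resolves the stylesheet stored in the local entry at runtime -->
    <xslt key="sample_xslt"/>
    <!-- Reads the value of the 'version' local entry and logs it -->
    <log level="custom">
        <property name="entryValue" expression="synapse:get-property('version')"/>
    </log>
</sequence>
```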
!!! Info
    If you want to add local entries before deploying the server, you can add them to the top-level bootstrap file `synapse.xml`, or to separate XML files in the `local-entries` directory, which are located under `MI_HOME/repository/deployment/server/synapse-configs/default`. When the server is started, these configurations will be added to the registry.

## Examples

- [Sequences and Endpoints as Local Registry Entries]({{base_path}}/integrate/examples/registry_examples/local-registry-entries)
diff --git a/en/docs/integrate/develop/creating-artifacts/using_docker_secrets.md b/en/docs/integrate/develop/creating-artifacts/using_docker_secrets.md
deleted file mode 100644
index 6af28bfa7b..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/using_docker_secrets.md
+++ /dev/null
@@ -1,147 +0,0 @@
# Using Docker Secrets in Synapse Configurations

WSO2 Micro Integrator comes with a built-in secret repository as a part of its [secure vault implementation]({{base_path}}/install-and-setup/setup/security/logins-and-passwords/carbon-secure-vault-implementation) by default. In addition to this, the Micro Integrator also provides built-in support for Docker secrets and Kubernetes secrets for your containerized deployments.

Managing sensitive information in a Docker environment can be achieved using two simple steps:

1. Adding the secret to your Docker environment.
2. Accessing the secret from within your Synapse configurations.

There are two ways to add secrets to a Docker environment: creating the Docker secret directly inside the environment, or storing the secrets in a flat file and adding the file to the environment.

## Creating a secret in the Docker environment

Follow the steps given below to directly add the required secrets to the Docker environment.

### Step 1: Creating Docker secrets

!!! Tip
    To use Docker secrets, you must have `swarm` mode enabled in your environment. If it is not already enabled, you can enable it by using the `docker swarm init` command.

You can use the `docker secret create` command as given below to create a secret in your Docker environment.

```bash
echo "dockersecret123456" | docker secret create testdockersecret -
```

This command will create a Docker secret named `testdockersecret` in your Docker environment.

### Step 2: Using Docker secrets in Synapse configurations

Secrets can be accessed from the integration artifacts by using the `wso2:vault-lookup` function in the following format:

```bash
wso2:vault-lookup('<alias>', '<type>', '<is-encrypted>')
```

Specify values for the following three parameters:

- `<alias>`: Name of the secret.
- `<type>`: Set this to `DOCKER`.
- `<is-encrypted>`: Set this to `true` or `false` to specify whether the secret is encrypted.

Given below is a sample synapse configuration (a log mediator sketch) that accesses and prints the Docker secret we declared in the previous step:

```xml
<log level="custom">
    <!-- Reads the 'testdockersecret' Docker secret and prints it -->
    <property name="DockerSecretValue" expression="wso2:vault-lookup('testdockersecret', 'DOCKER', 'false')"/>
</log>
```
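The same lookup can be used anywhere an XPath expression is accepted. For example, a Property mediator can store the secret for later use in the mediation flow; the property name `db.password` below is illustrative:

```xml
<property name="db.password" scope="default" type="STRING"
          expression="wso2:vault-lookup('testdockersecret', 'DOCKER', 'false')"/>
```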
## Adding a secret from a flat file

Instead of creating Docker secrets directly in the Docker environment, you can add secrets to the Docker environment by adding a flat file that contains the secrets.

### Step 1: Adding the flat file

Follow the steps given below.

1. Create a flat file with the secret. Note that the file name is the alias (e.g., `testsecret`) of the secret, and the file content should be the secret itself.
2. Add the created file to the **Resources** folder in your Docker Exporter module inside the integration project.
3. Add the following line to the Dockerfile in your Docker Exporter module. This copies the secret file to the server home directory so that it will be available in your Docker image.

    ```bash
    COPY Resources/FLAT_FILE_NAME ${WSO2_SERVER_HOME}/
    ```

### Step 2: Using file secrets in Synapse configurations

Secrets can be accessed from the integration artifacts by using the `wso2:vault-lookup` function in the following format:

```bash
wso2:vault-lookup('<alias>', '<type>', '<is-encrypted>')
```

Specify values for the following three parameters:

- `<alias>`: Name of the file.
- `<type>`: Set this to `FILE`.
- `<is-encrypted>`: Set this to `true` or `false` to specify whether the secret is encrypted.

Given below is a sample synapse configuration (a log mediator sketch) that accesses and prints the file secret we declared in the previous step:

```xml
<log level="custom">
    <!-- Reads the 'testsecret' file secret and prints it -->
    <property name="FileSecretValue" expression="wso2:vault-lookup('testsecret', 'FILE', 'false')"/>
</log>
```

## Enabling secrets in the environment

Once the secrets are added to the environment, you need to enable secure vault in the environment. In a Docker environment, you don't need to manually run the Cipher tool. Follow the steps given below.

1. Open your Integration Project in WSO2 Integration Studio, which contains all the integration artifacts and the Docker Exporter.
2. Open the `pom.xml` of the Docker Exporter module and select the **Enable Cipher Tool** check box as shown below.

3. When you build the Docker image from your Docker Exporter, the secrets will be enabled in the environment.

!!! Tip
    For Docker secrets to be effective, you can create a Docker service that includes the image you created in the above step. This can be done by creating a `docker-compose.yml` file and deploying it to the Docker swarm.

    Given below is a sample `docker-compose.yml` file that can be used to create a simple Docker service orchestration.

    ```yaml
    version: '3.3'
    services:
      wso2mi:
        image: wso2/wso2mi:latest
        ports:
          - 8290:8290
          - 8253:8253
        secrets:
          - testdockersecret
    secrets:
      testdockersecret:
        external: true
    ```

    Upon creating the `docker-compose.yml` file, you can deploy the services using the `docker stack deploy` command as follows:

    ```bash
    docker stack deploy -c <compose-file-name> <stack-name>
    ```

    See the [Docker guide](https://docs.docker.com/engine/swarm/secrets/#defining-and-using-secrets-in-compose-files) for more information on defining and using Docker secrets.
## Configuring the secrets' location

The Docker secrets and file secrets are stored in default locations in the container environment. The Docker secrets can be found in the following locations:

- On **Linux**: `/run/secrets/`
- On **Windows**: `C:\ProgramData\Docker\secrets`

The default location for file secrets is the server home directory (to which the Dockerfile shown above copies the flat file). Therefore, by default, the server will search for aliases in these directories.

However, if you are storing your secrets in a different directory location in the container, you should configure the server to search for the secrets in those custom directories by using the following **system properties**.

- Configuring the custom directory path storing the Docker secrets:

    ```bash
    -Dei.secret.docker.root.dir=<custom-directory-path>
    ```

- Configuring the custom directory path storing the flat file secrets:

    ```bash
    -Dei.secret.file.root.dir=<custom-directory-path>
    ```
diff --git a/en/docs/integrate/develop/creating-artifacts/using_k8s_secrets.md b/en/docs/integrate/develop/creating-artifacts/using_k8s_secrets.md
deleted file mode 100644
index e4b9a76c7b..0000000000
--- a/en/docs/integrate/develop/creating-artifacts/using_k8s_secrets.md
+++ /dev/null
@@ -1,82 +0,0 @@
# Using Kubernetes Secrets in Synapse Configurations

WSO2 Micro Integrator comes with a built-in secret repository as a part of its secure vault implementation by default. In addition to this, the Micro Integrator also provides built-in support for Docker secrets and Kubernetes secrets for your containerized deployments.

You need to generate Kubernetes secrets for the sensitive data and inject them into the pods in your deployment as environment variables. Follow the steps given below.

## Step 1: Creating the secret

You can generate a Kubernetes secret using the following **kubectl** command:

```bash
kubectl create secret generic <secret-name> --from-literal=<key>=<value>
```

For example, let's generate a database password:

```bash
kubectl create secret generic db-password --from-literal=password=1234567
```

See the [Kubernetes guide](https://kubernetes.io/docs/concepts/configuration/secret/) for more information on creating secrets.

## Step 2: Adding the secret to a Pod

You can add the defined secret to your deployment as an environment variable of your container.

The integration artifacts you develop using WSO2 Integration Studio are built into a Docker image from the Kubernetes Exporter module. Therefore, you must update the `integration_cr.yml` file (in your Kubernetes Exporter module) with the secrets you generated, using the following syntax:

```yaml
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: <secret-name>
        key: <secret-key>
```

For example, let's add the database password as an environment variable to the containers:

```yaml
apiVersion: "integration.wso2.com/v1alpha1"
kind: "Integration"
metadata:
  name: "kubesecrets"
spec:
  replicas: 1
  image: "docker/k8secret:1.0.0"
  env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-password
          key: password
```

## Step 3: Using Kubernetes secrets in Synapse configurations

Secrets can be accessed from the integration artifacts by using the `wso2:vault-lookup` function in the following format:

```bash
wso2:vault-lookup('<alias>', '<type>', '<is-encrypted>')
```

Specify values for the following three parameters:

- `<alias>`: Name of the environment variable specified in the `integration_cr.yml` file.
- `<type>`: Set this to `ENV`.
- `<is-encrypted>`: Set this to `true` or `false` to specify whether the secret is encrypted.

Given below is a sample synapse configuration (a log mediator sketch) with an environment variable lookup:

```xml
<log level="custom">
    <!-- Reads the DB_PASSWORD environment variable injected from the Kubernetes secret -->
    <property name="K8sSecretValue" expression="wso2:vault-lookup('DB_PASSWORD', 'ENV', 'false')"/>
</log>
```

## Step 4: Enabling secrets in the environment

Once the secrets are added to the environment, you need to enable secure vault in the environment. In a Kubernetes environment, you don't need to manually run the Cipher tool. Follow the steps given below.

1. Open your Integration Project in WSO2 Integration Studio, which contains all the integration artifacts and the Kubernetes Exporter.
2. Open the `pom.xml` of the Kubernetes Exporter module and select the **Enable Cipher Tool** check box as shown below.

3. When you build the Docker image from your Kubernetes Exporter, the secrets will be enabled in the environment.
diff --git a/en/docs/integrate/develop/creating-unit-test-suite.md b/en/docs/integrate/develop/creating-unit-test-suite.md
deleted file mode 100644
index 7a8133c609..0000000000
--- a/en/docs/integrate/develop/creating-unit-test-suite.md
+++ /dev/null
@@ -1,211 +0,0 @@
# Creating Unit Test Suite

Once you have developed an integration solution, WSO2 Integration Studio allows you to build unit tests for the following:

- Test [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences), [proxy services]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), and [REST APIs]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with multiple test cases
- Test the artifacts with [registry resources]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources).
- Test the artifacts with [Connectors]({{base_path}}/integrate/develop/creating-artifacts/adding-connectors).

    !!! Note
        [Scheduled Tasks]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task) are not supported by the Unit Testing framework.

## Create Unit Test Suite

1. Open WSO2 Integration Studio.
2. Open an existing project with your integration solution.
3. Right-click the **test** folder, which is parallel to the **src** folder, and go to **New** -> **Unit Test Suite** as shown below.

    ![Create Unit Test Suite]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/create-test-suite.png)

    The **New Unit Test Suite** wizard opens.

4. Select **Create a New Unit Test Suite** and click **Next**.

    ![Select Create Method]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/select-create-method.png)

5. Specify a name for the unit test suite. Then, select the artifact file that you want to test from the file list and click **Next**.

    !!! Note
        You can only select one sequence, proxy service, or API artifact per unit test suite.

    ![Fill Unit Test Suite Details]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/select-main-artifact.png)

6. Select the supporting artifacts from the list as shown below and click **Next**.

    ![Select Supportive Artifacts]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/select-supportives.png)

7. You can use a mock service to simulate the actual endpoint. If you have an already created mock service, select the mock service files from the list as shown below. You can also [create a new Mock Service](#create-mock-service) for this purpose.

    ![Select Mock Services]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/select-mock-services.png)

8. Click **Finish**.
## Update the Unit Test Suite

Once you have created a Unit Test Suite in WSO2 Integration Studio, you can find it inside the **test** folder. You can update the Unit Test Suite by adding test cases and changing the supporting artifacts and mock services.

1. Open the Unit Test Suite from the project explorer. You can use either the design view or the source view to update the unit test suite.

    ![Unit Test Form]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/unit-test-form.png)

2. In the design view, click the '+' button under the **Test Artifact, Test Cases and Assertion Details** section to add a new **test case** to the unit test suite.

    ![Add Test Case]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/add-test-case.png)

3. Enter the following information:

    1. Enter a name for the test case.
    2. Update the **Input Payload and Properties** section:

        - **Input Payload**: The input payload of the test case. This can be **JSON**, **XML**, or **plain text**.
        - **Input properties**: The input properties of the test case. Three types of properties are allowed in unit testing: **Synapse ($ctx)**, **Axis2 ($axis2)**, and **Transport ($trp)** properties.

        For sequences, the test suite allows you to add all types of properties with values. For APIs and proxy services, you are only allowed to add transport properties.

        !!! Note
            For APIs, you also need to specify the **Request Path** and **Request Method** in this section. The **Request Path** indicates the URL mapping of the API resource. If the URL mapping contains parameters, replace them with values. The **Request Method** indicates the REST method of the resource.

    3. In the **Assertions** section, you can add multiple assertions of two types: **AssertEquals** checks whether the mediated result and the expected value are equal, and **AssertNotNull** checks whether the mediated result is not null.

        ![Add Assertions]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/add-assertion.png)

        - **Assertion Type**: Type of the assertion.
        - **Actual Expression**: Expression that you want to assert.
            - **$body**: assert the payload
            - **$ctx:<property-name>**: assert a synapse property
            - **$axis2:<property-name>**: assert an axis2 property
            - **$trp:<header-name>**: assert a transport property
            - **$statusCode**: assert the status code of the service
            - **$httpVersion**: assert the HTTP version of the service
        - **Expected Value**: Expected value for the actual expression. This can be **JSON**, **XML**, or **plain text**.
        - **Error Message**: Error message to print when the assertion fails.

    4. Once you have added at least one assertion, click **Add**.

4. Save the unit test suite.
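In the source view, the suite is a single XML file. The sketch below shows the rough shape of a generated suite for a sequence, with one test case and one assertion. The artifact path, names, and payloads are illustrative, and the exact element set may vary between Integration Studio versions:

```xml
<unit-test>
    <artifacts>
        <test-artifact>
            <artifact>src/main/synapse-config/sequences/SampleSequence.xml</artifact>
        </test-artifact>
        <supportive-artifacts/>
        <registry-resources/>
        <connector-resources/>
    </artifacts>
    <test-cases>
        <test-case name="SampleTestCase">
            <input>
                <!-- Input payload sent to the sequence under test -->
                <payload><![CDATA[{"symbol": "IBM"}]]></payload>
                <properties/>
            </input>
            <assertions>
                <assertEquals>
                    <actual>$body</actual>
                    <expected><![CDATA[{"symbol": "IBM"}]]></expected>
                    <message>Payload does not match the expected value</message>
                </assertEquals>
            </assertions>
        </test-case>
    </test-cases>
    <mock-services/>
</unit-test>
```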
## Run Unit Test Suites

Run the Unit Test Suite(s) in the unit testing server of the embedded Micro Integrator or in a remote unit testing server.

1. Right-click the **test** directory and click **Run Unit Test** to run all the unit test suites at once. Alternatively, right-click a particular unit test suite and click **Run Unit Test** to run only that suite.

    ![Run Unit Test Suite]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/run-test.png)

    The **Unit Test Run Configuration** wizard opens.

2. Select the specific unit testing server (embedded server or remote server) to run the tests.

    ![Run Configuration]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/run-configuration.png)

    **Local Server Configuration**

    If you select this option, you are running the tests in the unit testing server of the embedded Micro Integrator. Specify the following details:

    - **Executable Path**: Path to the unit testing server.
    - **Server Test Port**: Port of the unit testing server.

    **Remote Server Configuration**

    !!! Note
        **Before you begin**
        Be sure that your remote Micro Integrator is started along with its unit testing server. Note that you need to pass the `-DsynapseTest` property with your product startup script as shown below. This property is required for starting the unit testing server.

        === "On MacOS/Linux/CentOS"
            ```bash
            sh micro-integrator.sh -DsynapseTest
            ```

        === "On Windows"
            ```bash
            micro-integrator.bat -DsynapseTest
            ```

        To change the starting port of the unit testing server, use the `-DsynapseTestPort=<port>` system property with the above command. The default port is 9008.

    If you select this option, you are running the tests in the unit testing server of a remote Micro Integrator. Specify the following details:

    - **Server Remote Host**: Host IP of the remote unit testing server. This is the host on which the remote Micro Integrator is running.
    - **Server Test Port**: Port of the remote unit testing server. The default port is 9008.

3. Click **Run** to start the unit test. This starts the unit testing server in the console and prints the summary report for the given unit test suite(s) using the response from the unit testing server.

    ![Output Console]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/console-log.png)

## Create Mock Service

Mock services give you the opportunity to simulate the actual endpoint.

1. Open an existing project that has your integration solution.
2. Right-click the **test** folder parallel to the **src** folder, and go to **New** -> **Mock Service** as shown below.

    ![Create Mock Service]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/create-mock.png)

3. Select **Create a New Mock Service** and click **Next**.

    ![Select Create Mock Service Method]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/select-mock-method.png)
4. In the **Create a new Mock Service** page, enter the following details:

    ![Mock Service Details]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/mock-details.png)

    - **Name of the Mock Service**: A name for the mock service.
    - **Mocking Endpoint Name**: Name of the endpoint to be mocked.
    - **Mock Service Port**: Port for the mock service.
    - **Mock Service Context**: Main context of the service; it starts with '/'.

5. Add multiple resources for the mock service as needed. To add multiple resources, click the '+' icon on top of the resources table.

    - **Service Sub Context**: Sub context of the resource; it starts with '/'.
    - **Service Method**: REST method type of the resource.

6. Fill the **Expected Request to the Mock Service Resource** section if you want to mock an endpoint based on the incoming request headers or payload.

    ![Mock Service Resource Request Details]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/resource-request.png)

    - **Header Name**: Expected request header name.
    - **Header Value**: Expected request header value.
    - **Expected Request Payload**: Expected request payload to the service.

    **Note**: The entered request headers/payload must match the incoming request in order for this mock service to send the response.

7. Fill the **Response Send Out from the Mock Service Resource** section to define the response returned by the service.

    ![Mock Service Resource Response Details]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/resource-response.png)

    - **Response Status Code**: Response status code of the mock service.
    - **Header Name**: Response header name.
    - **Header Value**: Response header value.
    - **Send Out Response Payload**: Expected response payload from the service.

    !!! Note
        Please note that the mock service **should have a sub context** with '/' defined, and additional sub contexts should be defined after that.

        - If you are trying to mock the endpoint `http://petstore.com/pets`, the wizard should now look as follows.

            ![Mock Service with one sub context]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/mockservice-context-sample.png)

        - If you are going to mock the same endpoint with an additional sub context (e.g., `http://petstore.com/pets/id`), you can add it to the same mock service as shown below.

            ![Mock Service with additional sub Contexts]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/mockservice-subcontext-sample.png)

Once you have entered the required details, click **Add**. The resource is listed under the **Add Service Resource** table with its **Sub Context** and **Method**. After that, click **Finish** to create the Mock Service. It will be located under the **test → resources → mock-services** directory.

![Mock Service Form]({{base_path}}/assets/img/integrate/create_project/synapse_unit_test/mock-service-form.png)

## Debug the Unit Test Suite

If you encounter errors with the unit testing framework, you can debug the framework for troubleshooting as follows:

1. In WSO2 Integration Studio, go to **Run -> Run Configurations..** to open the **Run Configurations** dialog box.
2. Expand **Maven Build** and click **Run Unit Test Internal Configuration**.
3. In the **Goals** field, add `-X` to the end of the command.

    !!! Tip
        This enables Maven debug for the testing framework.

4. Click **Apply** and then click **Run**.

5. Return to the Console and see that the unit tests are running in debug mode.
diff --git a/en/docs/integrate/develop/customizations/creating-custom-inbound-endpoint.md b/en/docs/integrate/develop/customizations/creating-custom-inbound-endpoint.md
deleted file mode 100644
index 938ffb620f..0000000000
--- a/en/docs/integrate/develop/customizations/creating-custom-inbound-endpoint.md
+++ /dev/null
@@ -1,73 +0,0 @@
# Creating a Custom Inbound Endpoint

WSO2 Micro Integrator supports several inbound endpoints. However, there can be scenarios that require functionality not provided by the existing inbound endpoints. For example, you might need an inbound endpoint to connect to a certain back-end server or a vendor-specific protocol.

To support such scenarios, you can write your own custom inbound endpoint by extending the behavior for **Listening**, **Polling**, and **Event-Based** inbound endpoints.

## Instructions

### Step 1: Developing the custom Inbound Endpoint

- To create a **custom listening inbound endpoint**, download the maven artifact used in the [sample custom listening inbound endpoint configuration](https://github.com/wso2-docs/ESB/tree/master/ESB-Artifacts/inbound/custom_inbound_listening).

- To create a **custom polling inbound endpoint**, download the maven artifact used in the [sample custom polling inbound endpoint configuration](https://github.com/wso2-docs/ESB/tree/master/ESB-Artifacts/inbound/custom_inbound).

- To create a **custom event-based inbound endpoint**, download the maven artifact used in the [sample custom event-based inbound endpoint configuration](https://github.com/wso2-docs/ESB/tree/master/ESB-Artifacts/inbound/custom_inbound_waiting).

### Step 2: Deploying the custom Inbound Endpoint

Copy the built JAR file to the `MI_HOME/lib` directory and restart the Micro Integrator to load the class. To copy the JAR file to the embedded Micro Integrator, open the Embedded Micro Integrator Server Configuration Wizard by clicking the corresponding icon on the upper menu, and add the JAR in the select libraries section.

### Step 3: Adding the custom Inbound Endpoint

1. If you have already created an [Integration Project]({{base_path}}/integrate/develop/create-integration-project), right-click the [ESB Config module]({{base_path}}/integrate/develop/create-integration-project/#types-of-projects) and go to **New → Inbound Endpoint** to open the **New Inbound Endpoint Artifact** wizard.
2. Select **Create a New Inbound Endpoint** and click **Next**.
3. Type a unique name for the inbound endpoint, and then select **Custom** as the **Inbound Endpoint Creation Type**.
4. Specify the location where the artifact should be saved: select an existing ESB Config project in your workspace, or click **Create new Project** and create a new project.
5. Click **Finish**. The inbound endpoint is created in the `src/main/synapse-config/inbound-endpoint` folder under the ESB Config project you specified.
6. Open the new artifact from the project explorer, go to the **Source View**, and update the following properties:
    | Property Name | Description |
    |---------------|-------------|
    | class | Name of the custom class you implemented in Step 1. |
    | sequence | Name of the sequence to which the message should be injected. Specify a valid sequence name. |
    | onError | Name of the fault sequence that should be invoked in case of failure. Specify a valid sequence name. |
    | inbound.behavior | Specify whether your custom endpoint is listening, polling, or event-based. |
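Once these properties are set, the source of a custom inbound endpoint looks roughly as follows. This is a minimal sketch: the class, the sequence names, and the extra port parameter are illustrative, and the parameters you need depend on your implementation:

```xml
<inboundEndpoint name="CustomListeningInboundEP"
                 class="org.wso2.carbon.inbound.custom.SampleListeningEP"
                 sequence="requestProcessSeq" onError="faultSeq"
                 suspend="false" xmlns="http://ws.apache.org/ns/synapse">
    <parameters>
        <!-- Declares this endpoint as a listening inbound endpoint -->
        <parameter name="inbound.behavior">listening</parameter>
        <!-- Custom parameter read by the implementation class (illustrative) -->
        <parameter name="inbound.custom.port">8085</parameter>
    </parameters>
</inboundEndpoint>
```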
diff --git a/en/docs/integrate/develop/customizations/creating-custom-mediators.md b/en/docs/integrate/develop/customizations/creating-custom-mediators.md
deleted file mode 100644
index 14e1ee5ace..0000000000
--- a/en/docs/integrate/develop/customizations/creating-custom-mediators.md
+++ /dev/null
@@ -1,40 +0,0 @@
# Creating a Custom Mediator

If you need to create a custom mediator that performs some logic on a message, you can either create a new mediator project or import an existing mediator project using WSO2 Integration Studio.

Once a mediator project is finalized, you can export it as a deployable artifact by right-clicking the project and selecting **Export Project as Deployable Archive**. This creates a JAR file that you can deploy. Alternatively, you can group the mediator project as a Composite Application Project, create a Composite Application Archive (CAR), and deploy it to the Micro Integrator.

!!! Info
    A URL classloader is used to load classes in the mediator (class mediators are not deployed as OSGi bundles). Therefore, it is only possible to refer to the class mediator from artifacts packed in the same CAR file in which the class mediator is packed. Accessing the class mediator from an artifact packed in another CAR file is not possible. However, it is possible to refer to the class mediator from a sequence packed in the same CAR file and call that sequence from any other artifact packed in other CAR files.

## Instructions

### Creating a Mediator Project

Create this project directory to start creating custom mediator artifacts. You can use these custom mediators when you define the mediation flow in your ESB Config project.

1. Open WSO2 Integration Studio and click **Miscellaneous → Create Mediator Project** in the **Getting Started** view as shown below.
    ![new mediator project]({{base_path}}/assets/img/integrate/create_project/new_mediator_project.png)
2. In the dialog that opens, select **Create New Mediator** and click **Next**.
3. Enter a project name, package name, and class name.
    ![new mediator dialog]({{base_path}}/assets/img/integrate/create_project/new_mediator_artifact_dialog.png)
4. Click **Finish** and see that the project is now listed in the project explorer.

The mediator project is created in the workspace location you specified, with a new mediator class that extends `org.apache.synapse.mediators.AbstractMediator`.

### Importing a Java Mediator Project

Follow the steps below to import a Java mediator project (that includes a Java class, which extends the `org.apache.synapse.mediators.AbstractMediator` class) to WSO2 Integration Studio.

1. Open WSO2 Integration Studio and click **Create Mediator Project** in the **Getting Started** view as shown above.
2. In the dialog that opens, select **Import From Workspace** and click **Next**.
3. Specify the mediator project in this workspace that you want to import. Only projects with source files that extend `org.apache.synapse.mediators.AbstractMediator` are listed. Optionally, you can change the location where the mediator project will be created and add it to working sets.
4. Click **Finish**.

The mediator project you selected is created in the location you specified.

!!! Info
    The mediator projects you create using WSO2 Integration Studio are of the `org.wso2.developerstudio.eclipse.artifact.mediator.project.nature` nature by default. Follow the steps below to view this nature added to the `/target/.project` file of the Java mediator project you imported.

    1. Click **View Menu**, and click **Filters -> Customization**.
    2. Deselect **.\resources**, and click **OK**.
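After the CAR file is deployed, the custom mediator is invoked from a mediation flow using the Class mediator. The following is a minimal sketch; the class name and the bean property are illustrative:

```xml
<sequence name="CustomMediatorSequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Invokes the custom mediator class packed in the same CAR file -->
    <class name="org.wso2.sample.SimpleClassMediator">
        <!-- Bean-style properties are set on the mediator instance before mediation -->
        <property name="threshold" value="10"/>
    </class>
</sequence>
```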
diff --git a/en/docs/integrate/develop/customizations/creating-custom-task-scheduling.md b/en/docs/integrate/develop/customizations/creating-custom-task-scheduling.md
deleted file mode 100644
index 903e3498fd..0000000000
--- a/en/docs/integrate/develop/customizations/creating-custom-task-scheduling.md
+++ /dev/null
@@ -1,443 +0,0 @@
# Customizing Task Scheduling

When you create a task using the default task implementation, the task can inject messages to a proxy service or to a sequence. If you have a specific task-handling requirement, you can write your own task-handling implementation by creating a custom Java class that implements the `org.apache.synapse.startup.Task` interface.

For example, the sections below demonstrate how you can create and schedule a task that reads stock order information from a text file and places the orders by invoking a back-end service that exposes stock quotes.

## Creating the custom Task implementation

Follow the steps below to create the implementation of the custom Task.

### Creating the Maven Project

Create a Maven Project using the following information.

!!! Tip
    You can skip step 5 since you do not need to add external JAR files in this example.

- **Group Id**: `org.wso2.task`
- **Artifact Id**: `StockQuoteTaskMavenProject`

### Creating the Java Package

Create a Java Package inside the Maven Project using the following name: `org.wso2.task.stockquote.v1`

![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130467.png)

### Creating the Java Class

1. Create a Java Class inside the Maven Project using the following name: `StockQuoteTaskV1`

2. In the **Project Explorer**, double-click on the **StockQuoteTaskV1.java** file and replace its source with the below content.
    ```java
    package org.wso2.task.stockquote.v1;

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    import org.apache.axiom.om.OMAbstractFactory;
    import org.apache.axiom.om.OMElement;
    import org.apache.axiom.om.OMFactory;
    import org.apache.axiom.om.OMNamespace;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.synapse.ManagedLifecycle;
    import org.apache.synapse.MessageContext;
    import org.apache.synapse.SynapseException;
    import org.apache.synapse.core.SynapseEnvironment;
    import org.apache.synapse.startup.Task;
    import org.apache.synapse.util.PayloadHelper;

    public class StockQuoteTaskV1 implements Task, ManagedLifecycle {
        private Log log = LogFactory.getLog(StockQuoteTaskV1.class);
        private String to;
        private String stockFile;
        private SynapseEnvironment synapseEnvironment;

        public void execute() {
            log.debug("PlaceStockOrderTask begin");

            if (synapseEnvironment == null) {
                log.error("Synapse Environment not set");
                return;
            }

            if (to == null) {
                log.error("to not set");
                return;
            }

            File existFile = new File(stockFile);

            if (!existFile.exists()) {
                log.debug("waiting for stock file");
                return;
            }

            try {

                // file format IBM,100,120.50

                BufferedReader reader = new BufferedReader(new FileReader(stockFile));
                String line = null;

                while ((line = reader.readLine()) != null) {
                    line = line.trim();

                    // Skip empty lines (use isEmpty() instead of the '==' reference comparison)
                    if (line.isEmpty()) {
                        continue;
                    }

                    String[] split = line.split(",");
                    String symbol = split[0];
                    String quantity = split[1];
                    String price = split[2];
                    MessageContext mc = synapseEnvironment.createMessageContext();
                    mc.setTo(new EndpointReference(to));
                    mc.setSoapAction("urn:placeOrder");
                    mc.setProperty("OUT_ONLY", "true");
                    OMElement placeOrderRequest = createPlaceOrderRequest(symbol, quantity, price);
                    PayloadHelper.setXMLPayload(mc, placeOrderRequest);
                    synapseEnvironment.injectMessage(mc);
                    log.info("placed order symbol:" + symbol + " quantity:" + quantity + " price:" + price);
                }

                reader.close();
            } catch (IOException e) {
                throw new SynapseException("error reading file", e);
            }

            // Rename the processed file with a timestamp so it is not read again
            File renamefile = new File(stockFile);
            renamefile.renameTo(new File(stockFile + "." + System.currentTimeMillis()));
            log.debug("PlaceStockOrderTask end");
        }

        public static OMElement createPlaceOrderRequest(String symbol, String qty, String purchPrice) {
            OMFactory factory = OMAbstractFactory.getOMFactory();
            OMNamespace ns = factory.createOMNamespace("http://services.samples/xsd", "m0");
            OMElement placeOrder = factory.createOMElement("placeOrder", ns);
            OMElement order = factory.createOMElement("order", ns);
            OMElement price = factory.createOMElement("price", ns);
            OMElement quantity = factory.createOMElement("quantity", ns);
            OMElement symb = factory.createOMElement("symbol", ns);
            price.setText(purchPrice);
            quantity.setText(qty);
            symb.setText(symbol);
            order.addChild(price);
            order.addChild(quantity);
            order.addChild(symb);
            placeOrder.addChild(order);
            return placeOrder;
        }

        public void destroy() {}

        public void init(SynapseEnvironment synapseEnvironment) {
            this.synapseEnvironment = synapseEnvironment;
        }

        public SynapseEnvironment getSynapseEnvironment() {
            return synapseEnvironment;
        }

        public void setSynapseEnvironment(SynapseEnvironment synapseEnvironment) {
            this.synapseEnvironment = synapseEnvironment;
        }

        public String getTo() {
            return to;
        }

        public void setTo(String to) {
            this.to = to;
        }

        public String getStockFile() {
            return stockFile;
        }

        public void setStockFile(String stockFile) {
            this.stockFile = stockFile;
        }
    }
    ```
    ![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130464.png)

3. In the **Project Explorer**, double-click on the **pom.xml** file and replace its source with the below content.

    ```xml
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>org.wso2.task</groupId>
        <artifactId>StockQuoteTask</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <repositories>
            <repository>
                <id>wso2.releases</id>
                <name>WSO2 internal Repository</name>
                <url>http://maven.wso2.org/nexus/content/repositories/releases/</url>
                <releases>
                    <enabled>true</enabled>
                    <updatePolicy>daily</updatePolicy>
                    <checksumPolicy>ignore</checksumPolicy>
                </releases>
            </repository>
        </repositories>
        <dependencies>
            <dependency>
                <groupId>org.apache.synapse</groupId>
                <artifactId>synapse-core</artifactId>
                <version>2.1.7-wso2v65</version>
            </dependency>
        </dependencies>
    </project>
    ```

    ![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130465.png)

### Writing the custom Task

#### Step 1: Writing the Task Class

You can create a custom task class that implements the `org.apache.synapse.startup.Task` interface as follows. This interface has a single `execute()` method, which contains the code that will be executed according to the defined schedule.

The `execute()` method performs the following actions:

1. Checks whether the file exists at the desired location.
2. If it does, reads the file line by line, composing place-order messages for each line in the text file.
3. Injects the individual messages to the Synapse environment with the given `To` endpoint reference.
4. Sets each message as `OUT_ONLY`, since no response is expected for these messages.

In addition to the `execute()` method, it is also possible to implement the class as a JavaBean.

Also, add the following dependency to the POM file of the custom task project: `WSO2 Carbon - Utilities bundle` (symbolic name: `org.wso2.carbon.utils`).

This class is a bean implementing two properties, `to` and `stockFile`, which are used to configure the task.
**Implementing `ManagedLifecycle` for Initialization and Clean-up**

Since a task implements the `ManagedLifecycle` interface, the Micro Integrator will call the `init()` method at the initialization of a `Task` object and the `destroy()` method when a `Task` object is destroyed:

```java
public interface ManagedLifecycle {
    public void init(SynapseEnvironment se);
    public void destroy();
}
```

The `StockQuoteTaskV1` class stores the Synapse environment object reference in an instance variable for later use with the `init()` method. The `SynapseEnvironment` is needed for injecting messages into the mediation engine.

#### Step 2: Customizing the Task

It is possible to pass values to a task at runtime using property elements. In this example, the location of the stock order file and the address of the endpoint are given using two properties within the `Task` object:

- **String type**
- **OMElement type**

!!! Info
    For the **OMElement** type, it is possible to pass XML elements as values in the configuration file.

When creating a `Task` object, the server will initialize the properties with the given values in the configuration file.

```java
public String getStockFile() {
    return stockFile;
}

public void setStockFile(String stockFile) {
    this.stockFile = stockFile;
}
```

For example, the following property in the `Task` class is initialized with the value given within the property element of the task configuration (the path shown is illustrative):

```xml
<property name="stockFile" value="/path/to/stockfile.txt"/>
```

For those properties given as XML elements, properties need to be defined within the `Task` class using the format given below. OMElement comes from [Apache AXIOM](http://ws.apache.org/commons/axiom/), which is used by the Micro Integrator. AXIOM is an object model similar to DOM. To learn more about AXIOM, see the tutorial in the [AXIOM user guide](http://ws.apache.org/axiom/userguide/userguide.html).

```java
public void setMessage(OMElement elem) {
    message = elem;
}
```

It can be initialized with an XML element as follows (a sketch based on the standard Synapse sample request):

```xml
<property name="message">
    <m0:getQuote xmlns:m0="http://services.samples/xsd">
        <m0:request>
            <m0:symbol>IBM</m0:symbol>
        </m0:request>
    </m0:getQuote>
</property>
```

### Deploying the custom Task implementation

Deploy the custom Task implementation by building the Maven project and copying the resulting JAR file to the `MI_HOME/lib` directory. Restart the Micro Integrator to load the class.

## Creating the Task

Follow the steps below to create the task and schedule it.

1. [Create an ESB Config project]({{base_path}}/integrate/develop/create-integration-project) named `PrintStockQuote`.
2. [Create a Sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) named `PrintStockQuoteSequence`.
3. Add a **Log Mediator** and a **Drop Mediator** to the sequence and configure them.

    ![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130461.png)

    The below is the complete source configuration of the Sequence (i.e., the `PrintStockQuoteSequence.xml` file):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence xmlns="http://ws.apache.org/ns/synapse" name="PrintStockQuoteSequence">
        <log level="full"/>
        <drop/>
    </sequence>
    ```

4. [Create a Scheduled Task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task) using the following information:
    | Task Property | Value |
    |---------------|-------|
    | Task Name | PrintStockQuoteScheduledTask |
    | Count | 1 |
    | Interval (in seconds) | 5 |

    ![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130460.png)

5. Defining the properties of the Task: In the **Project Explorer**, double-click the **PrintStockQuoteScheduledTask.xml** file and replace its source with the below content (the `stockFile` path is illustrative; point it to your own file):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <task xmlns="http://ws.apache.org/ns/synapse" name="PrintStockQuoteScheduledTask"
          class="org.wso2.task.stockquote.v1.StockQuoteTaskV1" group="synapse.simple.quartz">
        <trigger count="1" interval="5"/>
        <property name="to" value="http://localhost:9000/soap/SimpleStockQuoteService"/>
        <property name="stockFile" value="/path/to/stockfile.txt"/>
    </task>
    ```

    The task properties will change according to the custom implementation. Therefore, you need to enter values for your custom properties. This sets the below properties.
    | Parameter Name | Value |
    |----------------|-------|
    | to | `http://localhost:9000/soap/SimpleStockQuoteService` |
    | stockFile | The directory path to the `stockfile.txt` file. |
    | synapseEnvironment | Do not enter a value. This will be used during runtime. |

    !!! Note
        Currently, you cannot set properties of a custom task using the **Design View** due to a [known issue](https://github.com/wso2/product-ei/issues/2551), which will be fixed in future versions.

The above is the complete source configuration of the Task (i.e., the `PrintStockQuoteScheduledTask.xml` file).

## Deploying the Task

Deploy the Task by packaging it, together with the sequence, into a composite application and deploying the application to the Micro Integrator.

## Testing the Custom Task

### Starting the back-end service

Download the back-end service from [GitHub](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/stockquote-deployable-jar-2.2.2.jar) and run it.

### Creating the text file

Create a text file named `stockfile.txt` with the following content and save it to a preferred location on your machine. This includes the information to be read by the scheduled task and passed to the back-end service.

**stockfile.txt**

```text
IBM,100,120.50
MSFT,200,70.25
SUN,400,60.758
```

!!! Info
    Each line in the text file contains the details for a stock order:

    - `symbol`
    - `quantity`
    - `price`

A task that is scheduled using this custom implementation will read the text file, a line at a time, and create orders using the given values to be sent to the back-end service. The text file will then be renamed with a system time stamp to mark it as processed. The task is scheduled to run every 5 seconds, as configured above.

### Viewing the output

You will see the orders placed by the scheduled task printed every 5 seconds in the below format.

```bash
INFO - StockQuoteTask placed order symbol:IBM quantity:100 price:120.50
```

![]({{base_path}}/assets/img/integrate/custom-task-scheduling/119130458/119130459.png)
diff --git a/en/docs/integrate/develop/customizations/creating-new-connector.md b/en/docs/integrate/develop/customizations/creating-new-connector.md
deleted file mode 100644
index c8778866c3..0000000000
--- a/en/docs/integrate/develop/customizations/creating-new-connector.md
+++ /dev/null
@@ -1,162 +0,0 @@
# Creating a New Connector

You can write a new connector for a specific requirement that cannot be addressed via any of the existing connectors that can be downloaded from the [connector store](https://store.wso2.com/store/pages/top-assets).

Follow the steps given below to write a new connector that integrates with the **Google Books** service. You can then use the connector inside a mediation sequence to connect with Google Books and get information.

## Writing a new connector

Follow the steps given below to write the new connector.

### Prerequisites

Download and install Apache Maven.

### Step 1: Creating the Maven project template

We will use the [maven archetype](https://github.com/wso2-extensions/archetypes/tree/master/esb-connector-archetype) to generate the Maven project template and sample connector code.

1. Open a terminal, navigate to the directory on your machine where you want the new connector to be created, and run the following command:

    ```bash
    mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate -DarchetypeGroupId=org.wso2.carbon.extension.archetype -DarchetypeArtifactId=org.wso2.carbon.extension.esb.connector-archetype -DarchetypeVersion=2.0.4 -DgroupId=org.wso2.carbon.esb.connector -DartifactId=org.wso2.carbon.esb.connector.googlebooks -Dversion=1.0.0 -DarchetypeRepository=http://maven.wso2.org/nexus/content/repositories/wso2-public/
    ```

2. When prompted, enter a name for the connector. For example, **googleBooks**.
3. When prompted for confirmation, enter **y**.

The `org.wso2.carbon.esb.connector.googlebooks` directory is now created with a directory structure consisting of a `pom.xml` file, a `src` tree, and a `repository` tree.

### Step 2: Adding the new connector resources

Now, let's configure files in the `org.wso2.carbon.esb.connector.googlebooks/src/main/resources` directory:

1. Create a directory named **googlebooks_volume** in the `/src/main/resources` directory.
2. Create a file named `listVolume.xml` with the following content in the **googlebooks_volume** directory:

    ```xml
    <template xmlns="http://ws.apache.org/ns/synapse" name="listVolume">
        <parameter name="searchQuery" description="Full-text search query string."/>
        <sequence>
            <property name="uri.var.searchQuery" expression="$func:searchQuery"/>
            <!-- Calls the Google Books volumes endpoint with the given search query -->
            <call>
                <endpoint>
                    <http method="get" uri-template="https://www.googleapis.com/books/v1/volumes?q={uri.var.searchQuery}"/>
                </endpoint>
            </call>
        </sequence>
    </template>
    ```

3. Create a file named `component.xml` in the **googlebooks_volume** directory and add the following content:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <component name="googlebooks_volume" type="synapse/template">
        <subComponents>
            <component name="listVolume">
                <file>listVolume.xml</file>
                <description>Lists volumes that satisfy the given query.</description>
            </component>
        </subComponents>
    </component>
    ```

4. Edit the `connector.xml` file in the `src/main/resources` directory and replace the contents with the following dependency:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <connector>
        <component name="googleBooks" package="org.wso2.carbon.connector">
            <dependency component="googlebooks_volume"/>
            <description>wso2 sample connector library</description>
        </component>
    </connector>
    ```

5. Create a folder named **icon** in the `/src/main/resources` directory and add two icons.

    !!! Tip
        You can download icons from the following location: [icons](http://svn.wso2.org/repos/wso2/scratch/connectors/icons/)

You are now ready to build the connector.

### Step 3: Building the connector

Open a terminal, navigate to the `org.wso2.carbon.esb.connector.googlebooks` directory, and execute the following maven command:

```bash
mvn clean install
```

This builds the connector and generates a ZIP file named `googleBooks-connector-1.0.0.zip` in the `target` directory.

## Using the new connector

Now, let's look at how you can use the new connector in a mediation sequence.

### Step 1: Adding the connector to your mediation sequence

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an ESB Config project]({{base_path}}/integrate/develop/create-integration-project) and [import the connector]({{base_path}}/integrate/develop/creating-artifacts/adding-connectors/#importing-connectors) to your project.

    !!! Tip
        Be sure to select the new `googleBooks-connector-1.0.0.zip` file from your `org.wso2.carbon.esb.connector.googlebooks/target` directory.

3. [Create a custom proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) named **googlebooks_listVolume**. In the **Design View**, you will note that the new connector has been added to the tool palette.

4. Now, update the proxy service as shown below. You will be defining a mediation logic using the **Property** mediator, the new **googleBooks** connector, and the **Respond** mediator:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns="http://ws.apache.org/ns/synapse"
           name="googlebooks_listVolume"
           transports="http https"
           startOnLoad="true">
        <target>
            <inSequence>
                <!-- Extracts the search query from the incoming JSON payload -->
                <property name="searchQuery" expression="json-eval($.searchQuery)"/>
                <!-- Invokes the listVolume operation of the new connector -->
                <googleBooks.listVolume>
                    <searchQuery>{$ctx:searchQuery}</searchQuery>
                </googleBooks.listVolume>
                <respond/>
            </inSequence>
        </target>
    </proxy>
    ```

### Step 2: Packaging all the artifacts

You need to package the new connector file and the proxy service separately.

1. Create a **Connector Exporter project** and add the connector.

    See the instructions on [packaging a new connector file]({{base_path}}/integrate/develop/creating-artifacts/adding-connectors/#packaging-connectors).

2. Create a new **Composite Application project** and add the proxy service as well as the connector as dependencies.

    !!! Tip
        Note that you need to add both the **Connector Exporter project** and the **ESB Config project** as dependencies because the connector is referenced from the proxy service.
- -### Step 3: Deploying the artifacts - -1. Open the POM file for the composite application project and ensure that the **Connector Exporter** project as well as the **ESB Config** project are selected as dependencies. - - - -2. Right-click the Composite Application project and click **Export Project Artifacts and Run**. - -The embedded Micro Integrator will now start and deploy the artifacts. - -### Step 4: Testing the connector - -Post a request to the proxy service using Curl as shown below. - -```bash -curl -v -X POST -d "{"searchQuery":"rabbit"}" -H "Content-Type: application/json" http://localhost:8290/services/googlebooks_listVolume -``` - -This performs a search and displays a list of volumes that meet the specified search criteria. \ No newline at end of file diff --git a/en/docs/integrate/develop/customizations/creating-synapse-handlers.md b/en/docs/integrate/develop/customizations/creating-synapse-handlers.md deleted file mode 100644 index 2a1787f230..0000000000 --- a/en/docs/integrate/develop/customizations/creating-synapse-handlers.md +++ /dev/null @@ -1,97 +0,0 @@ -# Synapse Handlers - -This section gives an introduction to what a handler is and describes how you can write a synapse handler by walking you through a basic example. - -## What is a Synapse Handler? - -Synapse Handlers can be used to process requests in a scenario where you have multiple requests and each request needs be processed in a specific manner. A Handler defines the interface that is required to handle the request and concreteHandlers are to handle requests in a specific manner based on what needs to be done with regard to each type of request. The diagram below illustrates this. - -![Handler]({{base_path}}/assets/img/integrate/synapse_handlers/handler.png) - -Synapse handler is the interface used to register server response callbacks. Synapse handler provides the abstract handler implementation -that executes the request in flow, request out flow, response in flow and response out flow. The diagram below is an illustration of how the specified flows execute in the abstract handler implementation. - -![Message Flow using Handler]({{base_path}}/assets/img/integrate/synapse_handlers/inflow_outflow.png) - -- **Request in flow** - This executes when the request reaches the synapse engine. - - ```java - public boolean handleRequestInFlow(MessageContext synCtx); - ``` - -- **Request out flow** - This executes when the request goes out of the synapse engine. - ```java - public boolean handleRequestOutFlow(MessageContext synCtx); - ``` - -- **Response in flow** - This executes when the response reaches the synapse engine. - ```java - public boolean handleResponseInFlow(MessageContext synCtx); - ``` - -- **Response out flow** - This executes when the response goes out of the synapse engine. - - ```java - public boolean handleResponseOutFlow(MessageContext synCtx); - ``` - -The diagram below illustrates the basic component structure of WSO2 Micro Integrator and how the flows mentioned above execute in the request path and the response path. - -![Request-Response Flow]({{base_path}}/assets/img/integrate/synapse_handlers/esb-with-request-response-flow.png) - -Now that you understand what a handler is, let's see how you can write a concrete Synapse handler. - -## Step 1: Writing a concrete Synapse handler - -The easiest way to write a concrete Synapse handler is to extend the `org.apache.synapse.AbstractSynapseHandler` class. 
You can also write a concrete Synapse handler by implementing the `org.apache.synapse.SynapseHandler` interface.

Following is an example Synapse handler implementation that extends the `org.apache.synapse.AbstractSynapseHandler` class:

```java
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.AbstractSynapseHandler;
import org.apache.synapse.MessageContext;

public class TestHandler extends AbstractSynapseHandler {

    private static final Log log = LogFactory.getLog(TestHandler.class);

    @Override
    public boolean handleRequestInFlow(MessageContext synCtx) {
        log.info("Request In Flow");
        return true;
    }

    @Override
    public boolean handleRequestOutFlow(MessageContext synCtx) {
        log.info("Request Out Flow");
        return true;
    }

    @Override
    public boolean handleResponseInFlow(MessageContext synCtx) {
        log.info("Response In Flow");
        return true;
    }

    @Override
    public boolean handleResponseOutFlow(MessageContext synCtx) {
        log.info("Response Out Flow");
        return true;
    }
}
```

## Step 2: Deploying the Synapse handler

To deploy your custom Synapse handler in WSO2 Micro Integrator, bundle the artifact as a JAR file (in either the `.jar` or `.xar` format), and add it to the `MI_HOME/lib/` directory. Be sure to restart the server after adding the files.

## Step 3: Engaging the Synapse handler

To engage the deployed Synapse handler, add the following configuration to the `deployment.toml` file. The angle-bracket values are placeholders for a name for your handler and its fully qualified class name:

```toml
[synapse_handlers]
<handler_name>.enabled = true
<handler_name>.class = "<package_name>.<class_name>"
```
diff --git a/en/docs/integrate/develop/debugging-mediation.md b/en/docs/integrate/develop/debugging-mediation.md
deleted file mode 100644
index 4ae5ac6c61..0000000000
--- a/en/docs/integrate/develop/debugging-mediation.md
+++ /dev/null
@@ -1,156 +0,0 @@
# Debugging Mediation

Once you [deploy and run]({{base_path}}/integrate/develop/using-embedded-micro-integrator) your integration solution, you may encounter errors and identify the required modifications for your artifacts. Use the mediation debugging feature in WSO2 Integration Studio to troubleshoot such errors.

There are two ways to debug a developed mediation flow:

1. Instant debugging using the Micro Integrator packaged with WSO2 Integration Studio.
2. Deploying artifacts to an external Micro Integrator server and debugging.

These two approaches are discussed in detail below.

## Instant debugging using Micro Integrator

1. When the project artifacts are ready, select the project you want to debug and click **Run** -> **Debug**.

    ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-1.png)

2. You will be prompted to choose the artifacts that need to be deployed to the embedded Micro Integrator. Internally, WSO2 Integration Studio generates a CAR application with the chosen artifacts and deploys it.

    ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-2.png)

3. On the console of WSO2 Integration Studio, notice that the Micro Integrator has started with the artifacts deployed. HTTP traffic is received on port 8290.

4. Add some breakpoints in the flow as shown below. You can mark a particular mediator as a breakpoint.

    ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-3.png)

5. Invoke the service using SoapUI or a similar external client. As soon as a request comes to the proxy service, the first breakpoint will be triggered.
- ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-4.png) - Note that you can view the payload that comes into the mediator and - the properties that you can access on the message context. - -6. Click **Continue**. Then the message will be sent to - the backend by the **Call** mediator and the next breakpoint (the **log** mediator) - will be triggered. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-5.png) - - Note that responses can be viewed on **Message Envelope** tab. The - property set before calling the endpoint is also accessible in the - context. -7. Click **Continue** again. Response will be received by the client. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-6.png) - -## Debugging with external Micro Integrator - -Follow the steps below to enable debugging with respect to mediation. - -1. Click **Run** in the top menu of the WSO2 Integration Studio, and - then click **Debug Configurations** . - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-7.png) -2. Enter the details to create a new configuration as shown in the - example below. You need to define two port numbers and a hostname to connect the external Micro Integrator with WSO2 Integration Studio in the mediation debug mode. Note that you need to specify debug mode as **Remote**. - - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-8.png) - -3. Add the new configuration to the Debug menu. Then you can access the configuration easily. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-9.png) - -4. Execute the following commands (passing a system variable at start up) to start WSO2 Micro Integrator in debug - mode: - - On **Windows**: - - `MI_HOME\bin\micro-integrator.bat --run -Desb.debug=true` - - - On **Linux/Solaris**: - - `sh MI_HOME/bin/micro-integrator.sh-Desb.debug=true` - -5. Click the **downward** arrow beside **Debug** in WSO2 Integration Studio and select the new profile created above when the Console indicates the following. - - !!! Note - You have approximately one minute to connect WSO2 Integration Studio with the Micro Integrator for the execution of the above created debug configuration. Otherwise, the server will stop listening and start without connecting with the debugger tool. - - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-10.png) - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-11.png) - -6. In WSO2 Integration Studio, right-click and add breakpoints or skip points on the desired mediators to start debugging as shown in the example below. - - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-12.png) - - !!! Info - You can add the following debugging options on the mediators using the right click context menu. - - **Toggle Breakpoint:** Adds a breakpoint to the selected - mediator - - **Toggle Skip Point:** Adds a skip point to the selected - mediator - - **Resend Debug Points:** I f you re-start the the Micro Integrator, or if you re-deploy the proxy service after changing its Synapse configuration, you need to re-send the information on breakpoints to the Micro Integrator server. T his re-sends all registered debugging points to the server. - - **Delete All Debug Points:** Deletes all registered debug points from the server and WSO2 Integration Studio. 
- -Now you can send a request to the external Micro Integrator and debug the flow as discussed under "Instant debugging using Micro Integrator". - -## Information provided by the Debugger Tool - -When your target artifact gets a request message and when the mediation flow reaches a mediator marked as a breakpoint, the message mediation process suspends at that point. A tool tip message of the suspended mediator displays the message envelope of the message payload at that point as shown in the example below. - -![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-13.png) - -You can view the message payload at that point of the message flow also in the **Message Envelope** tab as shown below. - -![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-14.png) - -Also, you can view the message mediation properties in the **Variables** -view as shown in the example below. - -The **Variable** view contains properties of the following property scopes. - -- **Axis2-Client Scope** properties -- **Axis2 Scope** properties -- **Operation Scope** properties -- **Synapse Scope** properties -- **Transport Scope** properties - -You can have a list of selected properties out of the above, in the properties table of the **Message Envelope** tab, and view information on the property keys and values of them as shown below. - -![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-15.png) - -Click **Add Property**, specify the context and name of the property, and then click **OK** to add that property to the properties table in the **Message Envelope** tab as shown below. - -!!! Tip - Click **Clear Property**, to remove a property from the properties table. - -![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-16.png) - -## Changing the property values - -There are three operations that you can perform on message mediation property values as described below. - -### Injecting new properties - -Follow the steps below to inject new properties while debugging. - -1. Right click on the **Variable** view, click **Inject/Clear Property**, and then click **Inject Property** as shown below. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-17.png) -2. Enter the details about the property you prefer to add as shown in the example below. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-18.png) -3. Click **OK**. -4. When the next debug point is hit, you will see the property is set to the specified context. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-19.png) - -### Clearing a property - -Follow the steps below to clear an existing property. - -1. Right click on the **Variable** view, click **Inject/Clear - Property** , and then click **Clear Property** as shown below. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-20.png) -2. Enter the details about the property you want to clear as shown in - the example below. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-21.png) -3. Click **OK** . - -### Modifying a property - -1. Click on the value section of the preferred property and change the value in the **Variable** view as shown in the example below, to modify it. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-22.png) -2. 
You will see that the property is changed on the property view. - ![select debugging]({{base_path}}/assets/img/integrate/mediation-debugging/debugging-23.png) diff --git a/en/docs/integrate/develop/deploy-artifacts.md b/en/docs/integrate/develop/deploy-artifacts.md deleted file mode 100644 index a746bd03ac..0000000000 --- a/en/docs/integrate/develop/deploy-artifacts.md +++ /dev/null @@ -1,27 +0,0 @@ -# Deploying Artifacts - -Once you have your integration artifacts developed and [packaged in a composite exporter]({{base_path}}/integrate/develop/packaging-artifacts), you can deploy the composite exporter in your Micro Integrator server or your container environment. - -## Deploy artifacts in the embedded Micro Integrator - -The light-weight Micro Integrator is already included in your WSO2 Integration Studio package, which allows you to deploy and run the artifacts instantly. - -See the instructions in [using the embedded Micro Integrator]({{base_path}}/integrate/develop/using-embedded-micro-integrator) of WSO2 Integration Studio. - -## Deploy artifacts in a remote Micro Integrator instance - -Download and set up a Micro Integrator server in your VM and deploy the composite exporter with your integration artifacts. - -See the instructions in [using a remote Micro Integrator]({{base_path}}/integrate/develop/using-remote-micro-integrator). - -## Deploy artifacts in Docker - -Use the Docker Exporter module in WSO2 Integration Studio to build a Docker image of your Micro Integrator solution and push it to your Docker registry. You can then use this Docker image from your Docker registry to start Docker containers. - -See the instructions on using the [Docker Exporter]({{base_path}}/integrate/develop/create-docker-project). - -## Deploy artifacts in Kubernetes - -Use the Kubernetes Exporter module in WSO2 Integration Studio to deploy a Docker image of your Micro Integrator Solution in a Kubernetes environment. - -See the instructions on using the [Kubernetes Exporter]({{base_path}}/integrate/develop/create-kubernetes-project). diff --git a/en/docs/integrate/develop/endpoint-trace-logs.md b/en/docs/integrate/develop/endpoint-trace-logs.md deleted file mode 100644 index 41c462ec0b..0000000000 --- a/en/docs/integrate/develop/endpoint-trace-logs.md +++ /dev/null @@ -1,4 +0,0 @@ -# Tracing and handling errors - -Endpoints have a `trace` attribute, which turns on detailed trace information for messages being sent to the endpoint. -These are available in the `wso2carbon-trace-messages.log` file, which is configured in the `MI_HOME/repository/conf/log4j2.properties` file. Setting the trace log level to `TRACE` logs detailed trace information including message payloads. For more information on endpoint states and handling errors, see [Endpoint Error Handling]({{base_path}}/reference/synapse-properties/endpoint-properties/#endpoint-error-handling-properties). \ No newline at end of file diff --git a/en/docs/integrate/develop/export_project.md b/en/docs/integrate/develop/export_project.md deleted file mode 100644 index eb8c2f8e96..0000000000 --- a/en/docs/integrate/develop/export_project.md +++ /dev/null @@ -1,48 +0,0 @@ -# Exporting a Project - -With WSO2 Integration Studio, you can export projects from your workspace and later [import them]({{base_path}}/integrate/develop/importing-projects). - -For example, consider the following [**Maven Multi Module** project]({{base_path}}/integrate/develop/create-integration-project) in your project explorer. 
This is a project solution that includes several project types. - - - -Follow the steps given below to export the project. - -1. Right-click the project and click **Export**. - - - -2. In the **Export** wizard, open the **WSO2** folder as shown below. - - - -3. You can choose to export the project as an **archive file** or as a **file system**. Select the required option from the list and click **Next**. - - - - - - - - - - -
    | Option | Description |
    |--------|-------------|
    | **Projects Export as Archive File** | Select this option to export the project as a ZIP archive. |
    | **Projects Export as File System** | Select this option to export the project folders without creating a ZIP archive. |
    - -4. In the next page, see that your root project folder is selected. Click **Browse** and give the path to the export location. - - !!! Tip - If you have other projects in your workspace, you can also select them if required. - - - - -5. Click **Finish** to export the project. diff --git a/en/docs/integrate/develop/exporting-artifacts.md b/en/docs/integrate/develop/exporting-artifacts.md deleted file mode 100644 index f9cc3081ac..0000000000 --- a/en/docs/integrate/develop/exporting-artifacts.md +++ /dev/null @@ -1,11 +0,0 @@ -# Exporting packaged Synapse artifacts - -Once you have [packaged your artifacts]({{base_path}}/integrate/develop/packaging-artifacts) into a composite application, you can -export it into a CAR file (.car file): - -1. Select the Composite Exporter module in the project explorer, - right-click, and click **Export Composite Application Project** . - -2. In the dialog that opens, give a name for the CAR file, the destination where the file should be saved, and click **Next**. -3. You can select the artifacts that should be packaged in the CAR file. -4. Click **Finish** to generate the CAR file. diff --git a/en/docs/integrate/develop/generate-docker-image.md b/en/docs/integrate/develop/generate-docker-image.md deleted file mode 100644 index 8be73037ce..0000000000 --- a/en/docs/integrate/develop/generate-docker-image.md +++ /dev/null @@ -1,107 +0,0 @@ -# Generating Docker images - -See the topics given below. - -## Before you begin - -1. Install Docker from the [Docker Site](https://docs.docker.com/). -2. Create a Docker Account at [Docker Hub](https://hub.docker.com) and log in. -3. Start the Docker server. - -## Generate the Docker image - -1. Right-click the **Composite Application Project** in the project explorer and - then click **Generate Docker Image**. - - - -2. In the **Generate Docker Image Wizard** that opens, select one from the following three options and proceed. - - ![Generate docker image dialog]({{base_path}}/assets/img/integrate/create_project/docker_k8s_project/generate-docker-image-options.png) - - - **Create a new Docker Exporter Project** - - Select this option to create a new **Docker Exporter Project** and click **Proceed**. You can build a docker image using this Docker Exporter Project. You are now directed to the [Docker Exporter Project wizard](create-docker-project. - - - **Generate Docker Image with the Embedded MI** - - 1. Select this option to generate a Docker image with the embedded Micro Integrator runtime of WSO2 Integration studio. - - !!! Note - This is recommended only for testing. - - 2. Click **Next** and enter the following details: - - ![Create docker image dialog]({{base_path}}/assets/img/integrate/create_project/generate_docker_image_dialog.png) - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Description |
    |-----------|-------------|
    | **Name of the Application** | The name of the composite application with the artifacts created for your ESB project. The name of the ESB project is displayed by default, but it can be changed if required. |
    | **Application Version** | The version of the composite application. |
    | **Name of the Docker Image** | A name for the Docker image. |
    | **Docker Image Tag** | A tag for the Docker image to be used for reference. This is optional. |
    | **Export Destination** | Browse for the preferred location on your machine to export the Docker image. |
    - - 3. Click **Next**. Select the **Config** projects that you want to include in the Docker image and click **Finish**. - - ![Create docker image]({{base_path}}/assets/img/integrate/create_project/select_artifact_docker.png) - - Once the Docker image is successfully created, a message similar to the following appears in your screen. - - ![Create docker image]({{base_path}}/assets/img/integrate/create_project/docker_image_successful.png) - - - **Generate Docker Image with an Existing Project** - - This will use the existing Docker Exporter Project that you selected and create a Docker image. You will receive a message similar to the following: - - ![Create docker image]({{base_path}}/assets/img/integrate/create_project/docker_image_successful.png) - \ No newline at end of file diff --git a/en/docs/integrate/develop/generate-service-catalog-metadata.md b/en/docs/integrate/develop/generate-service-catalog-metadata.md deleted file mode 100644 index 440dbbb64a..0000000000 --- a/en/docs/integrate/develop/generate-service-catalog-metadata.md +++ /dev/null @@ -1,34 +0,0 @@ -# Generating Service Catalog Metadata Artifact - -Follow the instructions given below to generate Service Catalog metadata artifacts for APIs and Proxy Services in WSO2 Integration Studio older workspaces to expose integration services as Managed APIs. - -## Generate Swagger and Metadata files for APIs - -1. Right-click the **api** folder under the ESB project and click **Generate API Metadata**. - - - -2. This will open a success dialog box once finished. - - - -3. Re-select required API artifacts under the relevant **Composite Exporter** module dependencies section to pack the generated metadata artifacts along with the API artifacts. - - !!! Tip - By default, the `Publish to Service Catalog` checkbox is enabled. If not, select the checkbox in the wizard so that it will include metadata files of the selected artifacts. - -## Generate Metadata files for Proxy Services - -1. Right-click the **proxy-service** folder under the ESB project and click **Generate API Metadata**. - - - -2. This will open a success dialog box once finished. - - - -3. Re-select required Proxy Service artifacts under the relevant **Composite Exporter** module dependencies section to pack the generated metadata artifacts along with the Proxy Service artifacts. - - !!! Tip - By default, the `Publish to Service Catalog` checkbox is enabled. If not, select the checkbox in the wizard so that it will include metadata files of the selected artifacts. - diff --git a/en/docs/integrate/develop/hot-deployment.md b/en/docs/integrate/develop/hot-deployment.md deleted file mode 100644 index 6a6dcfe5c6..0000000000 --- a/en/docs/integrate/develop/hot-deployment.md +++ /dev/null @@ -1,15 +0,0 @@ -# Hot Deploying Artifacts - -Hot deployment is the process of dynamically deploying your synapse artifacts (XML), dataservices(DBS), carbon applications (CAR), etc., in your Micro Integrator. That is, it is not required to restart the server for the artifact deployment to be effective. - -Hot deployment is useful for testing the integration artifacts **in a VM environment**. With hot deployment, it is not required to restart the server each time you deploy an artifact and the testing time will be shorter and efficient. Hence, hot deployment is enabled by default in the Micro Integrator distribution. - -## Disabling hot deployment -Open the `deployment.toml` file from the `/conf` directory and set hot_deployment property to false. 
- -```toml -[server] -hot_deployment = false -``` - -See the [complete list of server configurations]({{base_path}}/reference/config-catalog/#deployment). diff --git a/en/docs/integrate/develop/importing-artifacts.md b/en/docs/integrate/develop/importing-artifacts.md deleted file mode 100644 index 019f15c7a4..0000000000 --- a/en/docs/integrate/develop/importing-artifacts.md +++ /dev/null @@ -1,42 +0,0 @@ -# Importing Artifacts - -Follow the instructions given below to import an integration artifact into WSO2 Integration Studio. - -1. [Create an ESB project]({{base_path}}/integrate/develop/create-integration-project). -2. Right-click the ESB project, click **New**, and select the type of artifact you want to import. For example, let's import a REST API artifact. - - - -3. Select the **Import Artifact** option and click **Next**. - - - -4. Browse for the configuration file of your artifact, specify the location to save the artifact. - - - -5. Click **Finish**.  - -The artifacts are created in the `src/main/synapse-config/` folder under the ESB project you specified. - -!!! note - - When importing artifacts with custom mediators, make sure the custom mediator name starts with the "CUSTOM_" prefix. - - !!! example - ```xml - - - - - - ... - - - - - - - - ``` - diff --git a/en/docs/integrate/develop/importing-projects.md b/en/docs/integrate/develop/importing-projects.md deleted file mode 100644 index b828104b3e..0000000000 --- a/en/docs/integrate/develop/importing-projects.md +++ /dev/null @@ -1,15 +0,0 @@ -# Importing projects - -If you have an already created Integration project file, you can import it to -your WSO2 Integration Studio workspace. - -1. Open WSO2 Integration Studio, navigate to **File -> Import**, select **Existing WSO2 Projects into workspace,** and click **Next**: - ![Import ESB project]({{base_path}}/assets/img/integrate/create_project/import_proj_dialog.png) -2. If you have a ZIP file of your project, browse for the **archive file**, or if you have an extracted project folder, browse for the - **root directory**: - ![Import ESB project]({{base_path}}/assets/img/integrate/create_project/import_proj_select_folders.png) - - !!! Info - Select **Copy projects into workspace** check box if you want to save the project in the workspace. - -3. Click **Finish** , and see that the project files are imported in the project explorer. \ No newline at end of file diff --git a/en/docs/integrate/develop/injecting-parameters.md b/en/docs/integrate/develop/injecting-parameters.md deleted file mode 100644 index b33ffa40eb..0000000000 --- a/en/docs/integrate/develop/injecting-parameters.md +++ /dev/null @@ -1,575 +0,0 @@ -# Injecting Parameters - -When deploying integration artifacts in different environments, it is necessary to change the synapse parameters used in the artifacts according to the environment. For example, the 'endpoint URL' will be different in each environment. If you define the synapse parameters in your artifacts as explained below, you can inject the required parameter values for each environment using system variables. Without this feature, you need to create and maintain separate artifacts for each environment. This feature is useful for container deployments. - -There are two ways to inject parameters into synapse configurations: By injecting values using environment variables, or by using a file to inject the parameter values. - -## Using Environment Variables - -If you want to inject parameter values as environment variables, you need to apply the following. 
- -**Configuring the synapse artifacts** - -Define your synapse artifacts using "$SYSTEM:parameter_key" as the parameter value. Note that parameter_key represents a place holder representing the parameter. For example, shown below is an endpoint artifact, where the endpoint URI configured for this feature: - -```xml - - -
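<?xml version="1.0" encoding="UTF-8"?>
<!-- A sketch with assumed names: an Address endpoint whose URI is resolved
     from the stockQuoteEP environment variable at runtime -->
<endpoint name="StockQuoteEP" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="$SYSTEM:stockQuoteEP"/>
</endpoint>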
    - -``` - -**Exporting the environment variable** - -In a VM deployment, you can export the environment variables as shown below. Here VAR is the URL you need to have set as environment property. - -```bash -export stockQuoteEP=http://localhost:61616/... -``` - -## Using a File - -If you want to inject parameter values using a configuration file, you need to apply the following configurations. - -**Configuring the synapse artifacts** - -Define your synapse artifacts using "$FILE:parameter_key" as the parameter value. For example, shown below is an endpoint artifact, where the endpoint URI is configured for the purpose injecting values using a configuration file: - -```xml - - -
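<?xml version="1.0" encoding="UTF-8"?>
<!-- A sketch with assumed names: an Address endpoint whose URI is resolved
     from the stockQuoteEP key in the configuration file -->
<endpoint name="StockQuoteEP" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="$FILE:stockQuoteEP"/>
</endpoint>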
    - -``` - -**Setting up the file** - -You can use a configuration file to load the parameter values for each environment. By default, the Micro Integrator is shipped with the file.properties file (stored in the `/conf` directory), which you can use to store the parameter values that should be injected to your synapse configuration. The parameter values should be specified as a key-value pair as shown below. - -```text -stockQuoteEP=http://localhost:9000/services/SimpleStockQuoteService -``` - -Alternatively, you can use a custom file stored in a file system instead of the default `file.properties` file. For example, a file named `dev.properties` can be used to inject parameter values to the development environment and a file named `prod.properties` can be used to inject parameter values to the production environment. - -!!! Tip - It is possible to use a file from a network file system mount (NFS mount) as the file path. We can then use the environment specific configurations from the file in the NFS mount and inject the parameter values to the environment. - -**Updating the System property** - -In the product startup scripts (integrator.sh and integrator.bat file), which are available in the `/bin` directory, a system variable is defined as shown below and the value is set to default. When the system property is set to default as shown below, the system reads the parameters from the file.properties file that is available in the `MI_HOME/conf` directory. - -```bash --Dproperties.file.path=default -``` - -If you are using a custom configuration file, instead of the `file.properties` file, you need to configure the particular file path in the product startup script as shown below. - -=== "On Linux/MacOs" - ```bash - -Dproperties.file.path=/home/user/ei_configs/dev/dev.properties - ``` - -=== "On Windows" - ```bash - -Dproperties.file.path="%CONFIG_DIR%\dev\dev.properties - ``` - -## Supported Parameters - -Listed below are the synapse artifact parameters to which you can dynamically inject values. Note that there are two ways to inject parameters as discussed above. - -### Endpoint parameters - -Listed below are the Endpoint parameters that can be dynamically injected. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Endpoint Type | Parameters |
|---------------|------------|
| Address Endpoint | `uri` |
| HTTP Endpoint | `uri` |
| Loadbalance Endpoint | `hostname` and `port` |
| RecipientList Endpoint | `hostname` and `port` |
| Template Endpoint | `uri` |
| WSDL Endpoint | `wsdlURI` |
    - -#### Example - -In the following example, the endpoint URL is configured as a dynamic value. - -=== "Using Environment Variables" - ```xml - - -
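    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch: the endpoint URL is injected from the stockQuoteEP environment variable -->
    <endpoint name="StockQuoteEP" xmlns="http://ws.apache.org/ns/synapse">
        <address uri="$SYSTEM:stockQuoteEP"/>
    </endpoint>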
    - - ``` - -=== "Using a File" - ```xml - - -
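    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch: the endpoint URL is injected from the stockQuoteEP key in the properties file -->
    <endpoint name="StockQuoteEP" xmlns="http://ws.apache.org/ns/synapse">
        <address uri="$FILE:stockQuoteEP"/>
    </endpoint>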
    - - ``` - -### Data service parameters - -Listed below are the data service parameters that can be dynamically injected. - -- `Driver` -- `URL` -- `Username` -- `Password` - -#### Example - -In the following example, parameters are configured as dynamic values in the data service. - -=== "Inline Datasource (Using Environment Variables)" - ```xml - - - - $SYSTEM:uname - $SYSTEM:pass - $SYSTEM:url1 - $SYSTEM:driver1 - - - -------------------- - - - -------------------- - - - ``` - -=== "External Datasource (Using Environment Variables)" - ```xml - - MySQLConnection - MySQL Connection - - - $SYSTEM:driver1 - $SYSTEM:url1 - $SYSTEM:uname - $SYSTEM:pass - - - - ``` - -=== "Inline Datasource (Using a File)" - ```xml - - - - $FILE:uname - $FILE:pass - $FILE:url1 - $FILE:driver1 - - - -------------------- - - - -------------------- - - - ``` - -=== "External Datasource (Using a File)" - ```xml - - MySQLConnection - MySQL Connection - - - $FILE:driver1 - $FILE:url1 - $FILE:uname - $FILE:pass - - - - ``` - -### DB Report and DB Lookup mediator parameters - -Listed below are the DB Report and DB Lookup mediator parameters that can be dynamically injected. - -- `Driver` -- `URL` -- `Username` -- `Password` - -#### Example - -In the following example, parameters are configured as dynamic values in the DB Report and DB Lookup mediators. - -=== "DB Report (Using Environment Variables)" - ```xml - - - - $SYSTEM:driver1 - $SYSTEM:url1 - $SYSTEM:uname - $SYSTEM:pass - - - - ``` - -=== "DB Lookup (Using Environment Variables)" - ```xml - - - - $SYSTEM:driver1 - $SYSTEM:url1 - $SYSTEM:uname - $SYSTEM:pass - - - - ``` - -=== "DB Report (Using a File)" - ```xml - - - - $FILE:driver1 - $FILE:url1 - $FILE:uname - $FILE:pass - - - - ``` - -=== "DB Lookup (Using a File)" - ```xml - - - - $FILE:driver1 - $FILE:url1 - $FILE:uname - $FILE:pass - - - - ``` - -### Scheduled Task parameters - -The pinned servers parameter can be dynamically injected to a scheduled task or proxy service. See the example given below. - -#### Example - -=== "Using Environment Variables" - ```xml - - - - - - - - ---------- - - - ``` - -=== "Using a File" - ```xml - - - - - - - - ---------- - - - ``` - -### Inbound Endpoint parameters - -See the list of inbound endpoint parameters that can be dynamically injected. - -- HTTP/HTTPS Inbound Protocol -- HL7 Inbound Protocol -- CXF WS-RM Inbound Protocol -- WebSocket Inbound Protocol - -- File Inbound Protocol -- JMS Inbound Protocol -- Kafka Inbound Protocol - -- MQTT Inbound Protocol -- RabbitMQ Inbound Protocol - -#### Example - -In the following example, JMS transport parameters in an inbound endpoint are configured as dynamic values. - -=== "Using Environment Variables" - ```xml - - - - 15000 - true - true - myq - 3 - $SYSTEM:jmsconfac - org.apache.activemq.jndi.ActiveMQInitialContextFactory - $SYSTEM:jmsurl - $SYSTEM:jmsuname - AUTO_ACKNOWLEDGE - $SYSTEM:jmspass - false - queue - application/xml - false - $SYSTEM:pinned - false - - - ``` - -=== "Using a File" - ```xml - - - - 15000 - true - true - myq - 3 - $FILE:jmsconfac - org.apache.activemq.jndi.ActiveMQInitialContextFactory - $FILE:jmsurl - $FILE:jmsuname - AUTO_ACKNOWLEDGE - $FILE:jmspass - false - queue - application/xml - false - $FILE:pinned - false - - - ``` - -### Proxy Service parameters - -The pinned servers parameter as well as all the service-level transport parameters can be dynamically injected to a proxy service. 
- -- [JMS parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) -- [FIX parameters]({{base_path}}/reference/synapse-properties/transport-parameters/fix-transport-parameters) -- [MailTo parameters]({{base_path}}/reference/synapse-properties/transport-parameters/mailto-transport-parameters) -- [MQTT parameters]({{base_path}}/reference/synapse-properties/transport-parameters/mqtt-transport-parameters) -- [RabbitMQ parameters]({{base_path}}/reference/synapse-properties/transport-parameters/rabbitmq-transport-parameters) -- [VFS parameters]({{base_path}}/reference/synapse-properties/transport-parameters/vfs-transport-parameters) - -#### Example - -In the following example, JMS transport parameters are dynamically injected to the proxy service. - -=== "Using Environment Variables" - ```xml - - - - - ------------- - - - - - - AUTO_ACKNOWLEDGE - myq - queue - application/xml - org.apache.activemq.jndi.ActiveMQInitialContextFactory - $SYSTEM:jmsurl - false - $SYSTEM:jmsconfac - $SYSTEM:jmsuname - $SYSTEM:jmspass - - ``` - -=== "Using a File" - ```xml - - - - - ------------- - - - - - - AUTO_ACKNOWLEDGE - myq - queue - application/xml - org.apache.activemq.jndi.ActiveMQInitialContextFactory - $FILE:jmsurl - false - $FILE:jmsconfac - $FILE:jmsuname - $FILE:jmspass - - ``` - -### Message Store parameters - -Listed below are the message store parameters that can be dynamically injected. - - - - - - - - - - - - - - - - - - - - - - - - -
| Message Store Type | Parameters |
|--------------------|------------|
| JMS Message Store | `store.jms.username`, `store.jms.password`, `store.jms.connection.factory` |
| WSO2 MB Message Store | `store.jms.username`, `store.jms.password`, `store.jms.connection.factory` |
| RabbitMQ Message Store | `store.rabbitmq.host.name`, `store.rabbitmq.host.port`, `store.rabbitmq.username`, `store.rabbitmq.password` |
| JDBC Message Store | `store.jdbc.driver`, `store.jdbc.connection.url`, `store.jdbc.username`, `store.jdbc.password` |
| Resequence Message Store | `store.jdbc.driver`, `store.jdbc.connection.url`, `store.jdbc.username`, `store.jdbc.password` |
    - -#### Example - -In the following example, the parameters in the RabbitMQ message store are configured as dynamic values. - -=== "Using Environment Variables" - ```xml - - - $SYSTEM:rabbithost - false - $SYSTEM:rabbitport - - $SYSTEM:rabbitname - - false - exchange3 - queue3 - $SYSTEM:rabbitpass - - ``` - -=== "Using a File" - ```xml - - - $FILE:rabbithost - false - $FILE:rabbitport - - $FILE:rabbitname - - false - exchange3 - queue3 - $FILE:rabbitpass - - ``` diff --git a/en/docs/integrate/develop/installing-wso2-integration-studio.md b/en/docs/integrate/develop/installing-wso2-integration-studio.md deleted file mode 100644 index 42473e36e9..0000000000 --- a/en/docs/integrate/develop/installing-wso2-integration-studio.md +++ /dev/null @@ -1,71 +0,0 @@ -# Installing WSO2 Integration Studio - -WSO2 Integration Studio provides a comprehensive development experience for building integration solutions. - -### Installation prerequisites - - - - - - - - - - - - - - -
| Resource | Requirement |
|----------|-------------|
| Processor | Intel Core i5 or equivalent |
| RAM | 4 GB minimum, 8 GB recommended |
| Disk Space | Approximately 4 GB |
    - -### Install and run WSO2 Integration Studio - -Follow the steps given below. - -1. Go to the [API Manager Tooling web page](https://wso2.com/api-management/tooling/), and download WSO2 Integration Studio. - - !!! Note - * If you are a MacOS user, be sure to add it to the **Applications** directory. - * If you are a Microsoft Windows user, extract it outside the **Programs** directory. This is done because the Integration Studio requires permission to write to files. - -3. Run the **Integration Studio** application to start the tool. - -!!! info - **Getting an error message?** See the [troubleshooting](#troubleshooting) tips. - -### Get the latest updates - -If you have already installed and set up WSO2 Integration Studio, you can get the latest updates as follows: - -1. Open WSO2 Integration Studio on your computer. -2. Go to **Help** -> **Check for Updates**. - - get tooling updates - -3. Once the update check is completed, you can select all the available updates and install. - -#### Checking the version - -You can check the version of the Integration Studio as below. - -* For MacOS : Integration Studio > About Integration Studio -* For Windows/Linux : Help > About Integration Studio - - get studio information - -### Troubleshooting - -If you get an error message about the file being damaged or that you -cannot open the file when you try to start the tool on a MacOS, change the -MacOS security settings as described below. - -1. Go to **System Preferences** -\> **Security & Privacy** -\> **General**. -2. Under **Allow apps downloaded from**, click **Anywhere** . -3. Thereafter, select **IntegrationStudio** from the **Applications** menu in your Mac. - -## What's next? - -- Take a [quick tour]({{base_path}}/integrate/develop/wso2-integration-studio) of the WSO2 Integration Studio interface. -- [Build a simple integration use case]({{base_path}}/integrate/develop/integration-development-kickstart) to get familiar with the development workflow. -- Build [integration use cases]({{base_path}}/integrate/integration-overview) with WSO2 Integration Studio. diff --git a/en/docs/integrate/develop/integration-development-kickstart.md b/en/docs/integrate/develop/integration-development-kickstart.md deleted file mode 100644 index cb293665d8..0000000000 --- a/en/docs/integrate/develop/integration-development-kickstart.md +++ /dev/null @@ -1,504 +0,0 @@ -# Developing Your First Integration Solution - -Integration developers need efficient tools to build and test all the integration use cases required by the enterprise before pushing them into a production environment. The following topics will guide you through the process of building and running an example -integration use case using WSO2 Integration Studio. -This tool contains an embedded WSO2 Micro Integrator instance as well as other capabilities -that allows you to conveniently design, develop, and test your integration artifacts before -deploying them in your production environment. - -## What you'll build - -We are going to use the same use case we considered in the [Quick Start Guide]({{base_path}}/get-started/integration-quick-start-guide). -In the quick start guide, we just executed the already-built integration scenario. -Here, we are going to build the integration scenario from scratch. 
Let’s recall the -business scenario: - -![Integration Scenario]({{base_path}}/assets/img/integrate/developing-first-integration/dev-first-integration-0.png) - -The scenario is about a basic healthcare system where WSO2 Micro Integrator is used as the integration middleware. Most healthcare centers use a system to help patients book doctor appointments. To check the availability of doctors, patients will typically use every online system that is dedicated to a particular healthcare center or personally visit the healthcare centers. - -We will simplify this process of booking doctor appointments by building an integration solution that orchestrates the isolated systems in each healthcare provider and exposes a single interface to the users. - -Both the Grand Oak service and Pine Valley service are exposed over the HTTP protocol. - -- The Grand Oak service accept GET requests in the following service endpoint URL: - ```bash - http://:/grandOak/doctors/ - ``` - -- The Pine Valley service accepts POST requests in the following service endpoint URL: - ```bash - http://:/pineValley/doctors - ``` - - The expected payload should be in the following JSON format: - ```bash - { - "doctorType": "" - } - ``` - -Let’s implement a simple Rest API that can be used to query the availability of doctors for a particular category -from all the available healthcare centers. - -## Step 1 - Set up the workspace - -Download the relevant [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/) based on your operating system. For more information, see [Installing WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). - -## Step 2 - Develop the integration artifacts - -### Create the integration project - -Let's create an integration project with the required modules (to store artifacts) in WSO2 Integration Studio. - -1. Open WSO2 Integration Studio and click **New Integration Project** in the **Getting Started** view as shown below. - New Integration Project - -3. In the **New Integration Project** dialog box that opens, enter `Healthcare` as the project name. This is a maven multi module project. - - Be sure to leave the Create ESB Configs and Create Composite Exporter check boxes selected as shown below. - - - -3. Click **Finish**. - - The integration project with the ESB Config module (`HealthcareConfigs`) and Composite Exporter module (`HealthcareCompositeExporter`) are created as shown below. - - project folder - -### Create Endpoints - -The actual back-end services (healthcare services) are logically represented in the integration solution as **Endpoint** artifacts. - -Let's create two Endpoint artifacts for the two healthcare services: - -1. Right-click `HealthcareConfigs` and go to **New** → **Endpoint** to open the **New Endpoint Artifact** dialog box. - - - -2. In the New Endpoint Artifact dialog box that opens, select **Create a New Endpoint** and click **Next**. -3. For the ‘Grand Oak hospital service’, let’s use the following values: - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Endpoint Name | `GrandOakEndpoint` |
    | Endpoint Type | HTTP Endpoint |
    | URI Template | `http://localhost:9090/grandOak/doctors/{uri.var.doctorType}` |
    | Method | GET |
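    Once saved, the endpoint artifact generated from these values should look roughly like the following sketch:

    ```xml
    <endpoint name="GrandOakEndpoint" xmlns="http://ws.apache.org/ns/synapse">
        <http method="get" uri-template="http://localhost:9090/grandOak/doctors/{uri.var.doctorType}"/>
    </endpoint>
    ```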
    - - - -4. Click **Finish** to save the endpoint configuration. -5. Follow the same steps to create an endpoint for ‘Pine Valley Hospital’. Use the following parameter values: - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Endpoint Name | `PineValleyEndpoint` |
    | Endpoint Type | HTTP Endpoint |
    | URI Template | `http://localhost:9091/pineValley/doctors` |
    | Method | POST |
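    The resulting endpoint artifact should resemble the following sketch:

    ```xml
    <endpoint name="PineValleyEndpoint" xmlns="http://ws.apache.org/ns/synapse">
        <http method="post" uri-template="http://localhost:9091/pineValley/doctors"/>
    </endpoint>
    ```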
    - -### Create the REST API - -We are orchestrating multiple services and exposing a single API to the clients. The main integration artifact is going to be a REST API. - -1. Right-click `HealthcareConfigs` in the project explorer and -go to **New** → **REST API** to open the **API Artifact Creation Options** dialog box. -2. Select **Create A New API Artifact** and click **Next**. -3. Specify values for the required REST API properties: - - - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Name | `HealthcareAPI` |
    | Context | `/healthcare` |
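    After you click **Finish** in the next step, the generated API artifact will contain an empty definition similar to the following sketch (the exact defaults may vary):

    ```xml
    <api context="/healthcare" name="HealthcareAPI" xmlns="http://ws.apache.org/ns/synapse">
        <resource methods="GET">
            <inSequence/>
            <outSequence/>
        </resource>
    </api>
    ```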
    - - - -4. Click **Finish**. The REST API is created in the `src/main/synapse-config/api` folder under `HealthcareConfigs`. -5. Open the new artifact from the project explorer. You will see the graphical view of the `HealthcareAPI` with its default **API Resource**. - - - - To the right of the editor, you will see the **Mediators** palette containing various mediators - that can be dragged and dropped into the canvas of the **API Resource**. - -6. Double-click the API resource to open the **Properties** view: - - - - Specify values for the required resource properties: - - - - - - - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Url Style | URL_TEMPLATE |
    | Uri Template | `/doctor/{doctorType}` |
    | Methods | Get |

    Note that `{doctorType}` is a URI variable that gets resolved to the path parameter value at runtime. You can access the value of the URI variable in the mediation flow through the property named `uri.var.doctorType`.
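    With these properties applied, the resource element in the API's source view should look roughly as follows (a sketch; the mediation logic is added in the next section):

    ```xml
    <resource methods="GET" uri-template="/doctor/{doctorType}">
        <inSequence/>
        <outSequence/>
        <faultSequence/>
    </resource>
    ```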
    - -### Create the mediation logic - -1. Create two parallel message flows: - - In this scenario, the Healthcare API receives an HTTP GET request, which should be delivered to two different back-end services. That is, we need to clone the message into two branches and process them in parallel. - To do that, we can use the **Clone Mediator**. - - Drag the **Clone** mediator from the mediator palette and drop it into the request path (in sequence) of the API Resource canvas. - - - - Right-click the Clone mediator and select **Add/Remove Target..**. - In the **Add Target Branches** window, set the number of branches to 2. - You will now see two branches inside the **Clone** mediator. - - - -2. Invoke the GrandOak Endpoint: - - The **Call** mediator is used to invoke a back-end service. In [Step 2](#step-2-create-endpoints), we have already created an Endpoint to represent the GrandOak endpoint. - - Drag the Call mediator from the mediator palette into one branch of the Clone mediator. - - - - Then, drag the already-defined GrandOak endpoint artifact, which is available under the **Defined Endpoints** section of the palette, into the Call mediator. - - - -3. Construct message payload for the PineValley Endpoint: - - Unlike the GrandOAK endpoint, which accepts a simple GET request, the PineValley endpoint requires a POST request with the following JSON message: - - ```bash - { - "doctorType": "" - } - ``` - - Therefore, we need to first construct the required message payload. There are several - Transformation mediators for constructing messages. Let's use the **PayloadFactory** mediator. - Drag the PayloadFactory mediator into the 2nd branch of the **Clone** mediator as shown below. - - - - Specify values for the required PayloadFactory properties: - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Payload Format | Inline |
    | Media Type | json |
    | Payload | `{ "doctorType": "$1" }` |
    | Args | `$ctx:uri.var.doctorType` |
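    These properties translate to a PayloadFactory configuration along the following lines (a sketch matching the full source view shown at the end of this section):

    ```xml
    <payloadFactory media-type="json">
        <format>{
        "doctorType": "$1"
    }</format>
        <args>
            <arg evaluator="xml" expression="$ctx:uri.var.doctorType"/>
        </args>
    </payloadFactory>
    ```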
    - - Note the `$1` in the Payload format. It denotes a parameter that can get a value assigned dynamically. The value for the parameters needs to be assigned using Arguments **(Args)**. - **Args** can be added using the **PayloadFactoryArgument** dialog box, which appears when you click the () sign. - - - - In the `PayloadFactoryArgument` dialog box, select **Expression** as the **Argument Type**, and click **Argument Expression**. You will then see the **Expression Selector** dialog box. Enter **$ctx:uri.var.doctorType** as the value for the expression. - -4. Invoke the PineValley Endpoint: - - Use the Call mediator to invoke the PineVallery Endpoint. Follow the same steps you used under ‘Invoke GrandOak Endpoint’. - -5. Aggregating response messages: - - Since we are cloning the messages and delivering them into two different services, we will receive two responses. - So we need to aggregate those two responses and construct a single response. To do that, we can use the **Aggregate** mediator. - - Drag the Aggregate mediator and drop it next to the Clone mediator as shown below. - - - - Specify values for the required Aggregate mediator properties. - - - - - - - - - - - - -
    | Parameter | Value |
    |-----------|-------|
    | Aggregation Expression | `json-eval($.doctors.doctor)` |
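    With this expression set, the Aggregate mediator configuration resembles the following sketch (the Respond mediator added in the next step typically sits inside `onComplete`):

    ```xml
    <aggregate>
        <completeCondition>
            <messageCount max="-1" min="-1"/>
        </completeCondition>
        <onComplete expression="json-eval($.doctors.doctor)">
            <!-- mediators that process the aggregated message go here -->
        </onComplete>
    </aggregate>
    ```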
    - -6. Send a response back to the client : - - To send the response back to the client, we can use the **Respond** mediator. Add the Respond mediator as shown below. - - - -The final mediation configuration looks similar to the above diagram. -Following is what you will see in the **Source View** of WSO2 Integration Studio. - -```xml - - - - - - - - - - - - - - - - { - "doctorType": "$1" - } - - - - - - - - - - - - - - - - - - - - - - - - -``` - -## Step 3 - Build and run the artifacts - -There are several ways to deploy and run the integration scenario. - -### Option 1: Using WSO2 Integration Studio - -1. Right-click `HealthcareCompositeExporter` and click **Export Project Artifacts and Run**. - - - -2. You will see the following dialog box. Select the `HealthcareConfigs` folder in the artifact list and click **Finish**. - - - -The embedded Micro Integrator starts with the deployed artifacts. You will see the server startup log in the Console tab, and the endpoints of the deployed services in the Runtime Services tab as shown below. - - - -### Option 2: Using a local Micro Integrator instance - -**Before you begin**, be sure to install the Micro Integrator on your machine: - -1. Go to the [WSO2 Micro Integrator web page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Zip Archive** to download the Micro Integrator distribution as a ZIP file. -2. Extract the ZIP file. This will be the `` folder. - -Once you have downloaded and set up the Micro Integrator locally, follow the steps given below. - -1. **Export the artifacts as a deployable CAR file**: Right-click `HealthcareCompositeExporter` in WSO2 Integration Studio and select **Export Composite Application Project**. - -2. **Deploy the Healthcare service**: Copy the exported CAR file of the Healthcare service to the `MI_HOME/repository/deployment/server/carbonapps` directory. - -3. **Start the Micro Integrator**: - - 1. Open a terminal and navigate to the `/bin` folder. - 2. Execute one of the commands given below. - - === "On MacOS/Linux" - ```bash - ./micro-integrator.sh - ``` - - === "On Windows" - ```bash - micro-integrator.bat - ``` - -## Step 4 - Observe deployed artifacts - -Once you have deployed the artifacts and started the Micro Integrator server, you can [install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi-dashboard) and [start the Dashboard]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi-dashboard) to observe details of the deployed artifacts. - -If you are running the embedded Micro Integrator, click Open Monitoring Dashboard in the Runtime Services tab as shown below. - - - -You will be directed to the sign-in screen of the Dashboard from your default browser as shown below. Sign in using `admin` as the user name and password. - - - -Once you sign in, click the required artifact type to view details. - - - -## Step 5 - Test the use case - -Now, let's test the integration service. - -### Start back-end services - -Let's start the mock back-end services for this use case: - -1. Download the [`DoctorInfo-JDK11.jar` file]({{base_path}}/assets/attachments/developing-first-integration/doctorinfo-jdk11.jar). This contains two healthcare services. -2. 
Open a terminal, navigate to the location of the downloaded `DoctorInfo-JDK11.jar` file, and execute the following command to start the services: - - ```bash - java -jar DoctorInfo-JDK11.jar - ``` - -### Invoke the Healthcare service - -There are two ways to invoke the service: - -- **Option 1: Using Postman** - - Let's invoke the API from Postman as follows: - - 1. Open the Postman application. If you do not have the application, download it from here : [Postman](https://www.postman.com/downloads/) - 2. Create a collection with appropriate name. Ex : 'IntegrationStudio collection'. - 3. Add a new request to this collection and name it appropriately. Ex: 'Healthcare request'. - 4. In the 'Enter request URL' section paste our endpoint URL : ```http://localhost:8290/healthcare/doctor/Ophthalmologist``` - 5. Select 'GET' as http method and click 'Send' button. -

    - -

    - -- **Option 2: Using your terminal** - - If you want to send the client request from your terminal: - - 1. Install and set up [cURL](https://curl.haxx.se/) as your REST client. - 2. Open a terminal and execute the following curl command to invoke the service: - - ```bash - curl -v http://localhost:8290/healthcare/doctor/Ophthalmologist - ``` - - You will receive the following response: - - ```bash - [ - [ - { - "name": "John Mathew", - "time": "03:30 PM", - "hospital": "Grand Oak" - }, - { - "name": "Allan Silvester", - "time": "04:30 PM", - "hospital": "Grand Oak" - } - ], - [ - { - "name": "John Mathew", - "time": "07:30 AM", - "hospital": "pineValley" - }, - { - "name": "Roma Katherine", - "time": "04:30 PM", - "hospital": "pineValley" - } - ] - ] - ``` - -## What's Next? - -- [Publish Integrations to the API Manager]({{base_path}}/integrate/develop/working-with-service-catalog). -- [Writing a unit test for integration artifacts]({{base_path}}/integrate/develop/creating-unit-test-suite). diff --git a/en/docs/integrate/develop/intro-integration-development.md b/en/docs/integrate/develop/intro-integration-development.md deleted file mode 100644 index ffeeec37cf..0000000000 --- a/en/docs/integrate/develop/intro-integration-development.md +++ /dev/null @@ -1,354 +0,0 @@ -# Developing Integration Solutions - -The contents on this page will walk you through the topics related to developing integration solutions using WSO2 Integration Studio. - -## WSO2 Integration Studio - -WSO2 Integration Studio is the comprehensive developer tool, which you will use to develop, build, and test your integration solutions before the solutions are pushed to your production environments. See the topics given below for details. - - - - - - - - - - - - - - -
| Topic | Description |
|-------|-------------|
| [Quick Tour of WSO2 Integration Studio]({{base_path}}/integrate/develop/wso2-integration-studio) | Get introduced to the main functions of WSO2 Integration Studio. |
| [Installing WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio) | Find the instructions on how to download and install the tool on your operating system. |
| [Troubleshooting WSO2 Integration Studio]({{base_path}}/integrate/develop/troubleshooting-wso2-integration-studio) | Find details on how to troubleshoot errors you might encounter as you use WSO2 Integration Studio. |
    - -## Development workflow - -Integration developers will follow the workflow illustrated by the following diagram. - -![developer workflow]({{base_path}}/assets/img/integrate/development_workflow.png) - -### Set up the workspace - -To start developing integration solutions, you need to first install and set up WSO2 Integration Studio. - -### Develop - -- Create projects and modules - - - - - - - - - - - - - - -
    | Topic | Description |
    |-------|-------------|
    | [Create an Integration project]({{base_path}}/integrate/develop/create-integration-project) | An integration project is a maven multi module project that includes all the modules (sub projects) of your integration solution. |
    | Add sub projects to an Integration project | Once you have created an integration project, you can add new sub projects to it if required. |
    | Move sub projects to an Integration project | You can move sub projects to the required integration project from any location in the workspace. |
    - -- Create artifacts - - - - - - - - - - - -
    - Message Entry Points
    - Message Processing Units
    - Registry Resources
    - Data Services Resources
    - Custom Artifacts
    - Other
    - -- Secure the artifacts - - - - - - -
    - Encrypting Sensitive Data
    - Securing APIs and Services
    - -### Build and run - -1. Package - - The artifacts and modules should be packaged in a Composite Exporter before they can be deployed in any environment. - -2. Deploy - - You can easily deploy and try out the packaged integration artifacts on your preferred environment: - - - - - -
    - [Embedded Micro Integrator]({{base_path}}/integrate/develop/using-embedded-micro-integrator)
    - [Remote Micro Integrator]({{base_path}}/integrate/develop/using-remote-micro-integrator)
    - [Docker]({{base_path}}/integrate/develop/create-docker-project)
    - [Kubernetes]({{base_path}}/integrate/develop/create-kubernetes-project)
    - -3. Unit Tests - - Use the integration test suite of WSO2 Integration Studio to run unit tests on the developed integration solution. - -### Iterate and improve - -As you build and run the integration flow, you may identify errors that need to be fixed, and changes that need to be done to the synapse artifacts. - - - - - - - - - - -
| Topic | Description |
|-------|-------------|
| Debug Mediations | Use the Mediation Debug function in WSO2 Integration Studio to debug errors while you develop the integration solutions. |
| Using Logs | You can enable and analyze various logs (for example, API-level and service-level logs) to debug errors. |
    - -You must redeploy the integration artifacts after applying changes. - -- If you are testing on a VM, the artifacts will be instantly deployed when you redeploy the synapse artifacts. -- If you are testing on containers, you need to rebuild the Docker images or Kubernetes artifacts. - -### Push to production - -It is recommended to use a CICD pipeline to deploy your tested integration solutions in the production environment. - - - - - - - - - - -
| Environment | Description |
|-------------|-------------|
| On-Premise Environment | You can easily push your integration solutions to a CICD pipeline because the developer tool (WSO2 Integration Studio) includes Maven support. See the details on the Integration Project. |
| Kubernetes Environment | If you have a Kubernetes deployment, see the instructions on how to use the Kubernetes CICD pipeline. |
    - -## Related topics - - - - - - - - - - - - - - - - - - -
| Topic | Description |
|-------|-------------|
| [Develop your first integration]({{base_path}}/integrate/develop/integration-development-kickstart) | Try the development workflow end-to-end by running a simple use case. |
| [Integration Use Cases]({{base_path}}/integrate/integration-overview) | Read about the integration use cases supported by the Micro Integrator. |
| Tutorials | Develop and try out each integration use case end-to-end. |
| Examples | Try out specific integration scenarios by running the samples. |
    \ No newline at end of file diff --git a/en/docs/integrate/develop/monitoring-api-level-logs.md b/en/docs/integrate/develop/monitoring-api-level-logs.md deleted file mode 100644 index 2ee6cc273d..0000000000 --- a/en/docs/integrate/develop/monitoring-api-level-logs.md +++ /dev/null @@ -1,69 +0,0 @@ -# Monitoring API-Level Logs - -The advantage of having per-API log files is that it is very easy to analyze/monitor what went wrong in a particular REST API defined in WSO2 Micro Integrator by looking at the log files. The API log is an additional log file, which will contain a copy of the logs to a particular REST API. - -Below are the configuration details to configure the logs of a REST API called `TestAPI` using `log4j` -properties. - -Open `/conf/log4j2.properties` file using your favorite text editor to configure `log4j` to log the API specific logs to a file. You can configure the logger for either INFO level logs or DEBUG level logs as follows: - -## Enabling log4j for an API - -Follow the instructions below to enable log4j2 logs for a sample REST API (named `TestAPI`). - -1. Open up the `log4j2.properties` file (stored in the `/conf` ) directory.  -2. Let's define a new appender for the `TestAPI` API by adding the following section to the end of the file (starting in a new line). - - !!! Note - This configuration creates a log file named `TestAPI.log` in the folder `/repository/logs` folder. - - ```bash - # API_APPENDER is set to be a DailyRollingFileAppender using a PatternLayout. - appender.API_APPENDER.type = RollingFile - appender.API_APPENDER.name = API_APPENDER - appender.API_APPENDER.fileName = ${sys:carbon.home}/repository/logs/TestAPI.log - appender.API_APPENDER.filePattern = ${sys:carbon.home}/repository/logs/wso2-ei-api-%d{MM-dd-yyyy}.log - appender.API_APPENDER.layout.type = PatternLayout - appender.API_APPENDER.layout.pattern = TID: [%d] %5p {% raw %}{%c}{% endraw %} [%logger] - %m%ex%n - appender.API_APPENDER.policies.type = Policies - appender.API_APPENDER.policies.time.type = TimeBasedTriggeringPolicy - appender.API_APPENDER.policies.time.interval = 1 - appender.API_APPENDER.policies.time.modulate = true - appender.API_APPENDER.policies.size.type = SizeBasedTriggeringPolicy - appender.API_APPENDER.policies.size.size=10MB - appender.API_APPENDER.strategy.type = DefaultRolloverStrategy - appender.API_APPENDER.strategy.max = 20 - appender.API_APPENDER.filter.threshold.type = ThresholdFilter - appender.API_APPENDER.filter.threshold.level = INFO - ``` - -3. Register the appender (named `API_APPENDER`): - - ```xml - appenders = CARBON_CONSOLE, CARBON_LOGFILE, AUDIT_LOGFILE, API_APPENDER, - ``` - -4. Define a new logger to filter out `TestAPI` related logs: - - ```xml - logger.API_LOG.name=API_LOGGER.TestAPI - logger.API_LOG.level=INFO - logger.API_LOG.appenderRef.API_APPENDER.ref = API_APPENDER - logger.API_LOG.additivity=false - ``` - -5. Register the `API_LOG` logger: - - ```xml - loggers = AUDIT_LOG, API_LOG, SERVICE_LOGGER, - ``` - -6. Save the `log4j2.properties` file. - -## Configuring the REST API - -The log4j2 configurations in the `log4j2.properties` file does not create logs for the REST API by default. Add a Log mediator to the REST API's in-sequence and configure it to log messages at `INFO` log level. - -## Dynamically changing log level - -See the instructions on [updating the log level]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#updating-the-log4j2-log-level). 
diff --git a/en/docs/integrate/develop/monitoring-service-level-logs.md b/en/docs/integrate/develop/monitoring-service-level-logs.md
deleted file mode 100644
index 8dc2f9ef49..0000000000
--- a/en/docs/integrate/develop/monitoring-service-level-logs.md
+++ /dev/null
@@ -1,67 +0,0 @@
# Monitoring Service-Level Logs

The advantage of having per-service log files is that it becomes very easy to analyze or monitor what went wrong in a particular service (proxy service, data service, etc.) by looking at the service log. Enabling this feature does not stop logging to the `wso2carbon.log` file. That file still contains the complete log with every log statement, including the service logs that you have configured to be written to a different log file. In other words, the service log is an additional log file that contains a copy of the logs of that particular service.

## Enabling log4j2 for a service

Follow the instructions below to enable log4j2 logs for a sample proxy service (named `StockQuoteProxy`).

1. Open the `log4j2.properties` file (stored in the `<MI_HOME>/conf` directory).
2. Define a new appender for the `StockQuoteProxy` service by adding the following section to the end of the file (starting on a new line).

    !!! Note
        This configuration creates a log file named `stock-quote-proxy-service.log` in the `<MI_HOME>/repository/logs` folder.

    ```bash
    # SQ_PROXY_APPENDER is a RollingFile appender that uses a PatternLayout.
    appender.SQ_PROXY_APPENDER.type = RollingFile
    appender.SQ_PROXY_APPENDER.name = SQ_PROXY_APPENDER
    appender.SQ_PROXY_APPENDER.fileName = ${sys:carbon.home}/repository/logs/stock-quote-proxy-service.log
    appender.SQ_PROXY_APPENDER.filePattern = ${sys:carbon.home}/repository/logs/stock-quote-proxy-service-%d{MM-dd-yyyy}.log
    appender.SQ_PROXY_APPENDER.layout.type = PatternLayout
    appender.SQ_PROXY_APPENDER.layout.pattern = TID: [%d] %5p {% raw %}{%c}{% endraw %} [%logger] - %m%ex%n
    appender.SQ_PROXY_APPENDER.policies.type = Policies
    appender.SQ_PROXY_APPENDER.policies.time.type = TimeBasedTriggeringPolicy
    appender.SQ_PROXY_APPENDER.policies.time.interval = 1
    appender.SQ_PROXY_APPENDER.policies.time.modulate = true
    appender.SQ_PROXY_APPENDER.policies.size.type = SizeBasedTriggeringPolicy
    appender.SQ_PROXY_APPENDER.policies.size.size = 10MB
    appender.SQ_PROXY_APPENDER.strategy.type = DefaultRolloverStrategy
    appender.SQ_PROXY_APPENDER.strategy.max = 20
    appender.SQ_PROXY_APPENDER.filter.threshold.type = ThresholdFilter
    appender.SQ_PROXY_APPENDER.filter.threshold.level = DEBUG
    ```

3. Register the appender (named `SQ_PROXY_APPENDER`):

    ```bash
    appenders = CARBON_CONSOLE, CARBON_LOGFILE, AUDIT_LOGFILE, SQ_PROXY_APPENDER,
    ```

4. Define a new logger to filter out `StockQuoteProxy`-related logs:

    ```bash
    logger.StockQuoteProxy.name = SERVICE_LOGGER.StockQuoteProxy
    logger.StockQuoteProxy.level = INFO
    logger.StockQuoteProxy.appenderRef.SQ_PROXY_APPENDER.ref = SQ_PROXY_APPENDER
    logger.StockQuoteProxy.additivity = false
    ```

    !!! Info
        With the appender configuration above, the log file is rolled over daily (based on the date pattern in `filePattern`) or whenever it reaches 10MB, whichever comes first.

5. Register the `StockQuoteProxy` logger:

    ```bash
    loggers = AUDIT_LOG, StockQuoteProxy, SERVICE_LOGGER,
    ```

6. Save the `log4j2.properties` file.

## Configuring the proxy service

The log4j2 configurations in the `log4j2.properties` file do not create logs for the proxy service by default.
Add a Log mediator to the proxy service's in-sequence and configure it to log messages at the `Full` log level.

## Dynamically changing log level

See the instructions on [updating the log level]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#updating-the-log4j2-log-level).
diff --git a/en/docs/integrate/develop/packaging-artifacts.md b/en/docs/integrate/develop/packaging-artifacts.md
deleted file mode 100644
index ea68bb43ff..0000000000
--- a/en/docs/integrate/develop/packaging-artifacts.md
+++ /dev/null
@@ -1,44 +0,0 @@
# Packaging Synapse Artifacts

To package Synapse artifacts, you need to create a Composite Application Project. Use one of the following methods:

## Using an existing composite application

If you already have a composite application project, do the following to package the Synapse artifacts into it:

1. In the project explorer, select the `pom.xml` file that is under the composite application project.
    ![Create CAPP]({{base_path}}/assets/img/integrate/create_project/capp_proj_explorer.png)
2. In the **Dependencies** section, select the artifacts from each of the projects.

    !!! Note
        If you have created a custom mediator artifact, it should be packaged in the same composite application along with the other artifacts that use the mediator.

    ![Create CAPP]({{base_path}}/assets/img/integrate/create_project/capp_dependencies.png)

3. Save the artifacts.

## Creating a new composite application

If you have not previously created a composite application project, do the following to package the artifacts in your ESB Config project.

1. Right-click the ESB project, go to **New**, and then click **Composite Exporter**.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_capp.png)
2. In the **New Composite Application Project** dialog that opens, select the artifacts from the relevant ESB projects and click **Finish**.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_capp_dialog.png)

Alternatively,

1. Right-click the project explorer and click **New -> Project**.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_project_capp.png)
2. In the **New Project** dialog that opens, select **Composite Application Project** from the list and click **Next**.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_project_capp_dialog.png)
3. Enter a name for the **Composite Application** project and select the artifacts that you want to package.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_project_capp_select_dependencies.png)
4. In the **Composite Application Project POM Editor** that opens, review the information for each of the projects you selected earlier under **Dependencies**.
    ![Create new CAPP]({{base_path}}/assets/img/integrate/create_project/create_new_project_capp_dependencies_view.png)
diff --git a/en/docs/integrate/develop/troubleshooting-wso2-integration-studio.md b/en/docs/integrate/develop/troubleshooting-wso2-integration-studio.md
deleted file mode 100644
index 4b9a669cc3..0000000000
--- a/en/docs/integrate/develop/troubleshooting-wso2-integration-studio.md
+++ /dev/null
@@ -1,138 +0,0 @@
# Troubleshooting WSO2 Integration Studio

The following are some of the ways to troubleshoot errors that you may encounter when working with WSO2 Integration Studio.
- -## Adding an artifact - -Once you add an artifact, you need to refresh the `CompositeApplication.pom` -file to reflect new changes on the Composite Application. - -![troubleshooting]({{base_path}}/assets/img/integrate/workbench/refresh-integration-studio.png) - -## Restoring the project perspective - -If your project view goes missing, you can get it back by navigating -to **Window -> Perspective -> Reset Perspective** from the toolbar. - -## Opening a project view - -If you need to open a particular project view, you can get it by -navigating to **Window -> Show View -> Other...** from the -toolbar, and open the preferred view from the list. - -## Unable to drag and drop mediators into the canvas - -When you use **display scaling** that exceeds 150% (in **Windows** or **Linux** environments only), you may observe that you cannot drag and drop mediators into the canvas. To overcome this issue, add the following line (VM argument) to the `IntegrationStudio.ini` file in the installation directory of WSO2 Integration Studio. - -!!! Warning - Be sure to add this as the last line in the file. - -```bash --Dswt.autoScale=100 -``` - -## Error creating Docker image (on macOS) - -When you run WSO2 Integration Studio on MacOS, you will sometimes get the following error when you [generate a Docker image]({{base_path}}/integrate/develop/generate-docker-image) of your integration artifacts: "**Error creating Docker image**". - -The details of the error are given below. To access WSO2 Integration Studio errors, see the instructions on [viewing the WSO2 Integration Studio error log](#view-wso2-integration-studio-error-log) - -```java -org.wso2.developerstudio.eclipse.esb.docker.exceptions.DockerImageGenerationException: Could not create the Docker image bundle file. 
-at org.wso2.developerstudio.eclipse.esb.docker.util.DockerImageGenerator.buildImage(DockerImageGenerator.java:273) -at org.wso2.developerstudio.eclipse.esb.docker.util.DockerImageGenerator.generateDockerImage(DockerImageGenerator.java:202) -at org.wso2.developerstudio.eclipse.esb.docker.job.GenerateDockerImageJob.run(GenerateDockerImageJob.java:141) -at org.eclipse.core.internal.jobs.Worker.run(Worker.java:56) -Caused by: com.spotify.docker.client.exceptions.DockerException: java.io.IOException: Cannot run program “docker-credential-osxkeychain”: error=2, No such file or directory -at com.spotify.docker.client.auth.ConfigFileRegistryAuthSupplier.authForBuild(ConfigFileRegistryAuthSupplier.java:108) -at com.spotify.docker.client.DefaultDockerClient.build(DefaultDockerClient.java:1483) -at com.spotify.docker.client.DefaultDockerClient.build(DefaultDockerClient.java:1460) -at org.wso2.developerstudio.eclipse.esb.docker.util.DockerImageGenerator.buildImage(DockerImageGenerator.java:249) -… 3 more -Caused by: java.io.IOException: Cannot run program “docker-credential-osxkeychain”: error=2, No such file or directory -at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) -at java.lang.Runtime.exec(Runtime.java:620) -at java.lang.Runtime.exec(Runtime.java:450) -at java.lang.Runtime.exec(Runtime.java:347) -at com.spotify.docker.client.SystemCredentialHelperDelegate.exec(SystemCredentialHelperDelegate.java:140) -at com.spotify.docker.client.SystemCredentialHelperDelegate.get(SystemCredentialHelperDelegate.java:88) -at com.spotify.docker.client.DockerCredentialHelper.get(DockerCredentialHelper.java:119) -at com.spotify.docker.client.DockerConfigReader.authWithCredentialHelper(DockerConfigReader.java:282) -at com.spotify.docker.client.DockerConfigReader.authForAllRegistries(DockerConfigReader.java:166) -at com.spotify.docker.client.auth.ConfigFileRegistryAuthSupplier.authForBuild(ConfigFileRegistryAuthSupplier.java:106) -… 6 more -Caused by: java.io.IOException: error=2, No such file or directory -at java.lang.UNIXProcess.forkAndExec(Native Method) -at java.lang.UNIXProcess.(UNIXProcess.java:247) -at java.lang.ProcessImpl.start(ProcessImpl.java:134) -at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) -… 15 more -``` - -This error is because the **Docker UI** installation on your MacOS has a feature that stores Docker credentials on Mac Keychain. To fix this, you must disable this feature from the Docker UI. Also, this will automatically be saved in your `~/.docker/config.json` file. - -![docker ui]({{base_path}}/assets/img/integrate/docker-ui.png) - -## Error creating Docker image (on Windows) - -When you build a Docker image either via [Docker Exporter Project]({{base_path}}/integrate/develop/create-docker-project) or [Kubernetes Exporter Project]({{base_path}}/integrate/develop/create-kubernetes-project) in WSO2 Integration Studio on Windows, you may sometimes get the following error: "**Docker image generation failed**". - -The details of the error are given below. 
To access WSO2 Integration Studio errors, see the instructions on [viewing the WSO2 Integration Studio error log](#view-wso2-integration-studio-error-log) - -```java -[WARNING] An attempt failed, will retry 1 more times -org.apache.maven.plugin.MojoExecutionException: Could not build image -at com.spotify.plugin.dockerfile.BuildMojo.buildImage(BuildMojo.java:185) -at com.spotify.plugin.dockerfile.BuildMojo.execute(BuildMojo.java:105) -at com.spotify.plugin.dockerfile.AbstractDockerMojo.tryExecute(AbstractDockerMojo.java:252) -at com.spotify.plugin.dockerfile.AbstractDockerMojo.execute(AbstractDockerMojo.java:241) -at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134) -at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207) -at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) -at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) -at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) -at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) -at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) -..... -Caused by: com.spotify.docker.client.shaded.org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect -at com.spotify.docker.client.shaded.org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151) -at com.spotify.docker.client.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353) -at com.spotify.docker.client.shaded.org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380) -at com.spotify.docker.client.shaded.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) -... 21 more -Caused by: java.net.ConnectException: Connection refused: connect -at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) -at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) -``` - -To overcome this issue, you must go to the [**Docker Desktop**](https://docs.docker.com/docker-for-windows/) settings in Windows and expose the **daemon** on TCP without TLS. - -Follow the steps given below. - -1. Right-click the Docker icon in the **Notifications** area (or System tray) to open the [**Docker Desktop**](https://docs.docker.com/docker-for-windows/) menu. -2. Select **Settings**. - - ![Docker Desktop menu]({{base_path}}/assets/img/integrate/docker-desktop-menu-windows.png) - -3. In the **Settings** dialog box that opens, select **Expose daemon on TCP without TLS**. - - ![Docker settings tab]({{base_path}}/assets/img/integrate/docker-ui-setting-windows.png) - -4. Restart Docker to apply the changes. - -## Getting Started page goes blank - -The Getting Started page of WSO2 Integration Studio goes blank on some occasions when using older Firefox browser versions like 59.0. Upgrade to a newer version of Firefox (for example, 77.0.1) to fix the problem and have a seamless experience when using this page. - -## View WSO2 Integration Studio Error Log - -To get details of a WSO2 Integration Studio error: - -1. Select the WSO2 Integration Studio window that you have open. -2. 
Go to **Windows** -> **Show View** -> **Other** on the top menu bar of your computer and select **Error Logs**. - - This will open the **Error Log** tab in WSO2 Integration Studio: - - ![error log tab]({{base_path}}/assets/img/integrate/error-log-tab.png) - -3. Double-click the required error to see the details. \ No newline at end of file diff --git a/en/docs/integrate/develop/using-embedded-micro-integrator.md b/en/docs/integrate/develop/using-embedded-micro-integrator.md deleted file mode 100644 index 6347bf5cf2..0000000000 --- a/en/docs/integrate/develop/using-embedded-micro-integrator.md +++ /dev/null @@ -1,115 +0,0 @@ -# Using the Embedded Micro Integrator - -WSO2 Integration Studio contains an embedded Micro Integrator instance, which you can use for testing your integration solutions during the development process. - -## Deploy and run artifacts in the (embedded) server - -Once you have the [integration artifacts packaged]({{base_path}}/integrate/develop/packaging-artifacts) in a composite application, you can deploy and run them in the embedded Micro Integrator using a single click. - -1. Select the composite application in the project explorer. -2. Click the icon in the menu palette to open the Run As dialog box. - -3. Select Run on Micro Integrator and click OK. - - - -4. Select the artifacts from the composite application that you want to deploy. - - -5. Click **Finish**. The artifacts will be deployed in the WSO2 Micro Integrator and the server will start. - See the startup log in the **Console** tab: - - -6. If you find errors in your mediation sequence, use the [debugging features]({{base_path}}/integrate/develop/debugging-mediation) to troubleshoot. - -## View deployed endpoints in the (embedded) server - -Use the Runtime Services tab in WSO2 Integration Studio to view the endpoint URLs of the artifacts deployed in the embedded Micro Integrator. - -When you [deploy the artifacts and start](#deploy-and-run-artifacts-in-the-embedded-server) the embedded Micro Integrator, the Console tab prints the server startup logs and the Runtime Services tab will open as shown below. - - - -If you have closed the tab and you want to open it again, go to Window -> Show View -> Other and select Runtime Services. - - - -## Update (embedded) server configurations and libraries - -For some integrations, it is necessary to update the server configurations. For example, if you are integrating with an external broker, you need to update broker connection details and also add the broker's connection JARs to the server's `/lib` folder. - -Click the icon to open the Embedded Micro Integrator Configuration dialog box shown below. - -!!! Note - You can also paramterize configurations as [environment variables]({{base_path}}/install-and-setup/setup/mi-setup/dynamic_server_configurations) and later [inject environment variables to the embedded Micro Integrator](#injecting-environment-variables-to-embedded-micro-integrator). - - - -In the upper section, update the server configuration file (`deployment.toml` file). In the lower section, add any required third-party libraries to the `/lib` folder of the server. - -## Encrypt static (embedded) server secrets - -If you have secrets in the `deployment.toml` file, you can encrypt them using the Cipher Tool. - -1. Open the [Embedded Micro Integrator Configuration](#update-embedded-server-configs-and-libraries) dialog box. -2. 
Update the static secrets in the `deployment.toml` file as explained in [encrypting server secrets]({{base_path}}/install-and-setup/setup/mi-setup/security/encrypting_plain_text). -3. Click Encrypt Secrets. - - - -This will run the Cipher Tool internally and encrypt the secrets. The plain-text values you entered are now replaced with the encrypted values. - - - -## Redeploy integration artifacts - -Hot deployment is enabled in the Micro Integrator by default. This allows you to redeploy artifacts without restarting the server. However, if you have applied changes to the server configurations and libraries, the server will restart. - -1. Select the composite application that contains your artifacts. -2. Click the icon in the menu palette. - -## Injecting environment variables to embedded Micro Integrator - -WSO2 Micro Integrator supports environment variables for server configurations as well as synapse configurations (integration artifacts). - -!!! Note - To be able to dynamically inject parameters to the embedded Micro Integrator, you must first define the relevant configurations as environment variables. See the following topics for instructions: - - - [Environment variables for server Configurations]({{base_path}}/install-and-setup/setup/mi-setup/dynamic_server_configurations) - - [Environment variables for synapse configurations]({{base_path}}/integrate/develop/injecting-parameters) - -Follow the steps given below. - -1. [Deploy and run](#deploy-and-run-artifacts-in-the-embedded-server) the artifacts in the embedded Micro Integrator. - - !!! Tip - Note that you need to run the embedded Micro Integrator at least once before proceeding to specify environment variables. - -2. You can now go to **Run** -> **Run Configurations** in the upper menu bar of your computer: - - ![run configurations menu]({{base_path}}/assets/img/integrate/run-configs-menu.png) - -3. In the **Run Configurations** dialog box that opens, select **Micro Integrator Server 1.2.0** that is listed under **Generic Server** in the navigator: - - ![run configurations dialog box]({{base_path}}/assets/img/integrate/run-configs-dialog-box.png) - -4. In the **Server** tab, select Micro Integrator 1.2.0 from the list if it is not already selected. -5. Go to the **Environment** tab and click **New** to add an environment variable: - - ![run configurations environments]({{base_path}}/assets/img/integrate/run-configs-env.png) - -6. Enter the variable name and value as a key-value pair and click **OK**. In this example, let's use the server offset: - - !!! Tip - The offset parameter in the `deployment.toml` file of the embedded Micro Integrator should be specified as follows: - ```toml - [server] - offset="$env{offset}" - ``` - - ![run configurations environments]({{base_path}}/assets/img/integrate/run-configs-env-popup.png) - - -7. Click **Apply** to apply the new environment variable. 
- - ![run configurations environments]({{base_path}}/assets/img/integrate/run-configs-env-apply.png) \ No newline at end of file diff --git a/en/docs/integrate/develop/using-remote-micro-integrator.md b/en/docs/integrate/develop/using-remote-micro-integrator.md deleted file mode 100644 index 0fbfae5dba..0000000000 --- a/en/docs/integrate/develop/using-remote-micro-integrator.md +++ /dev/null @@ -1,78 +0,0 @@ -# Using a Remote Micro Integrator - -The light-weight Micro Integrator is already included in your WSO2 Integration Studio package, which allows you to [deploy and run the artifacts instantly]({{base_path}}/integrate/develop/using-embedded-micro-integrator). - -The following instructions can be used to run your artifacts in a remote Micro Integrator instance. - -## Deploy and run artifacts in a remote instance - -1. [Download and install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi) the Micro Integrator server and on your computer. -2. [Package your Synapse artifacts]({{base_path}}/integrate/develop/packaging-artifacts) from WSO2 Integration Studio. - -However, when your solutions are ready to be moved to your production environments, it is recommended to use a **CICD pipeline**. - -!!! Note - As an alternative, you can skip the steps given below and manually copy the exported CAR file to the `/repository/deployment/server/carbonapps/` folder, where `` is the root folder of your Micro Integrator installation. - For more information on how to export a CAR file, see [Exporting Artifacts]({{base_path}}/integrate/develop/exporting-artifacts). - -## Add a new remote instance - -1. Open the Getting Started view and click Add Server to open the New Server dialog box. - - - -2. In the New Server dialog box that opens, expand the WSO2 folder and select the version of your server. - - - -3. Click Next. In the CARBON_HOME field, provide the path to your product's home directory and then click Next. - -4. Review the default port details for your server and click Next. - - - - !!! Note - - - If you selected an Enterprise Integrator server in the previous step, enter the port details required for an Enterprise Integrator. - - If you are already running another server on these ports, give unused ports. See [Default ports](../../setup/changing_default_ports) of the Micro Integrator for more information. - -## Deploy and run artifacts in a remote instance - -1. To deploy the C-App project to your server, select the composite application from the list, click Add to move it to the configured list, and then click Finish. - - -2. On the Servers tab, note that the server is currently stopped. Click the icon on the tool bar. If prompted to save changes to any of the artifact files you created earlier, click Yes. - - - -## Deploy, redeploy, or remove artifacts in a remote instance - -- To deploy/remove C-Apps, right-click the server, click Add and Remove, and follow the instructions on the wizard. - - - -- If you want to redeploy a C-App after modifying the included artifacts, select the already deployed C-App, right-click and click Redeploy. - -!!! Note - Hot deployment is enabled in the Micro Integrator by default. This allows you to redeploy artifacts without restarting the server. - If you disabled hot deployment while adding the server, you need to restart the server as well. - -## Disable graceful shutdown (Only for testing) - -By default, the graceful shutdown capability is enabled in the Micro Integrator distribution. 
This means that the server does not shut down immediately while there are incomplete HTTP messaging transactions that are still active. These are transactions that are processed by the HTTP/S PassThrough transport.

For example, consider a delay in receiving the response from the backend (which should be returned to the messaging client). Because graceful shutdown is enabled, the Micro Integrator waits until the time specified by the following parameter in the server configuration file (the `deployment.toml` file) is exceeded before shutting down.

```toml
[transport.http]
socket_timeout = 180000
```

You can disable this feature by using the following system property when you start the server:

!!! Warning
    Disabling graceful shutdown is only recommended in a development environment for the purpose of making the development and testing process faster. Be sure to have graceful shutdown enabled when you move to production.

```bash
-DgracefulShutdown=false
```
diff --git a/en/docs/integrate/develop/using-wire-logs.md b/en/docs/integrate/develop/using-wire-logs.md
deleted file mode 100644
index ee30119d61..0000000000
--- a/en/docs/integrate/develop/using-wire-logs.md
+++ /dev/null
@@ -1,103 +0,0 @@
# Using Wire Logs

While debugging a Synapse flow, you can view the actual HTTP messages at the entry point of the Micro Integrator via wire logs. For example, you can view the wire logs of the incoming flow and the final response of a proxy service. You can also view wire logs at the points where messages leave the Micro Integrator. For example, you can see the outgoing and incoming wire logs for specific mediators (e.g., the Call mediator and the Send mediator). Wire logs are useful for troubleshooting unexpected issues that occur while integrating disparate systems. You can use wire logs to verify, for example, whether the message payload goes out from the server correctly and whether HTTP headers such as `Content-Type` are set correctly in the outgoing message.

!!! Note
    It is recommended to enable wire logs only for troubleshooting purposes. Running production systems with wire logs enabled is not recommended.

## Enabling wire logs

See [Configuring Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#wire-logs-and-header-logs) for instructions.

## Sample wire log

Following is a sample wire log.
- -```bash -[2013-09-22 19:47:57,797] DEBUG - wire >> "POST /services/StockQuoteProxy HTTP/1.1[\r][\n]" -[2013-09-22 19:47:57,798] DEBUG - wire >> "Content-Type: text/xml; charset=UTF-8[\r][\n]" -[2013-09-22 19:47:57,798] DEBUG - wire >> "SOAPAction: "urn:getQuote"[\r][\n]" -[2013-09-22 19:47:57,799] DEBUG - wire >> "User-Agent: Axis2[\r][\n]" -[2013-09-22 19:47:57,799] DEBUG - wire >> "Host: localhost:8280[\r][\n]" -[2013-09-22 19:47:57,799] DEBUG - wire >> "Transfer-Encoding: chunked[\r][\n]" -[2013-09-22 19:47:57,800] DEBUG - wire >> "[\r][\n]" -[2013-09-22 19:47:57,800] DEBUG - wire >> "215[\r][\n]" -[2013-09-22 19:47:57,800] DEBUG - wire >> "http://localhost:8280/services/StockQuoteProxyurn:uuid:9e1b0def-a24b-4fa2-8016-86cf3b458f67urn:getQuoteIBM[\r][\n]" -[2013-09-22 19:47:57,801] DEBUG - wire >> "0[\r][\n]" -[2013-09-22 19:47:57,801] DEBUG - wire >> "[\r][\n]" -[2013-09-22 19:47:57,846] INFO - TimeoutHandler This engine will expire all callbacks after : 120 seconds, irrespective of the timeout action, after the specified or optional timeout -[2013-09-22 19:47:57,867] DEBUG - wire << "POST /services/SimpleStockQuoteService HTTP/1.1[\r][\n]" -[2013-09-22 19:47:57,867] DEBUG - wire << "Content-Type: text/xml; charset=UTF-8[\r][\n]" -[2013-09-22 19:47:57,867] DEBUG - wire << "SOAPAction: "urn:getQuote"[\r][\n]" -[2013-09-22 19:47:57,867] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "Host: localhost:9000[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "Connection: Keep-Alive[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "215[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "http://localhost:8280/services/StockQuoteProxyurn:uuid:9e1b0def-a24b-4fa2-8016-86cf3b458f67urn:getQuoteIBM[\r][\n]" -[2013-09-22 19:47:57,868] DEBUG - wire << "0[\r][\n]" -[2013-09-22 19:47:57,869] DEBUG - wire << "[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "HTTP/1.1 200 OK[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "Content-Type: text/xml; charset=UTF-8[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "Date: Sun, 22 Sep 2013 14:17:57 GMT[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "Transfer-Encoding: chunked[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "Connection: Keep-Alive[\r][\n]" -[2013-09-22 19:47:58,002] DEBUG - wire >> "[\r][\n]" -[2013-09-22 19:47:58,014] DEBUG - wire << "HTTP/1.1 200 OK[\r][\n]" -[2013-09-22 19:47:58,015] DEBUG - wire << "Content-Type: text/xml; charset=UTF-8[\r][\n]" -[2013-09-22 19:47:58,015] DEBUG - wire << "Date: Sun, 22 Sep 2013 14:17:58 GMT[\r][\n]" -[2013-09-22 19:47:58,015] DEBUG - wire << "Server: WSO2-PassThrough-HTTP[\r][\n]" -[2013-09-22 19:47:58,016] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]" -[2013-09-22 19:47:58,016] DEBUG - wire << "[\r][\n]" -[2013-09-22 19:47:58,016] DEBUG - wire >> "4d8[\r][\n]" -[2013-09-22 19:47:58,017] DEBUG - wire >> "urn:getQuoteResponseurn:uuid:9e1b0def-a24b-4fa2-8016-86cf3b458f673.827143922330303-8.819296796724336-170.50810412063595170.73218944560944Sun Sep 22 19:47:57 IST 2013-170.472077024782785.562077973231586E7IBM Company178.0616712932281324.9438904049222641.9564266653777567195.61908401976004IBM6216[\r][\n]" -[2013-09-22 19:47:58,017] DEBUG - wire >> "0[\r][\n]" -[2013-09-22 19:47:58,018] DEBUG - wire >> "[\r][\n]" -[2013-09-22 19:47:58,021] DEBUG - wire << "4d8[\r][\n]" -[2013-09-22 
19:47:58,022] DEBUG - wire << "urn:getQuoteResponseurn:uuid:9e1b0def-a24b-4fa2-8016-86cf3b458f673.827143922330303-8.819296796724336-170.50810412063595170.73218944560944Sun Sep 22 19:47:57 IST 2013-170.472077024782785.562077973231586E7IBM Company178.0616712932281324.9438904049222641.9564266653777567195.61908401976004IBM6216[\r][\n]"
[2013-09-22 19:47:58,022] DEBUG - wire << "0[\r][\n]"
[2013-09-22 19:47:58,022] DEBUG - wire << "[\r][\n]"
```

There are two incoming messages and two outgoing messages in the above log. The first part of the wire log of a message contains the HTTP headers, followed by the message payload. To read wire logs, you need to identify the message direction as follows:

- `DEBUG - wire >>`: This represents a message coming into the WSO2 Micro Integrator from the wire.
- `DEBUG - wire <<`: This represents a message going from the Micro Integrator out to the wire.

## Viewing wire logs of a specific mediator

To view the wire logs of a mediator, you need to add a debug point to that mediator. While debugging (or once debugging is finished), right-click the mediator and click **Show WireLogs** to view its wire logs.

!!! Info
    You can only view wire logs for a whole **proxy service**, a **Call mediator**, a **Send mediator**, or an **API resource**. You cannot view the wire logs of other Synapse configurations (e.g., sequences), because nothing is written to the wire when the flow reaches a sequence. In other words, wire logs are only available at the points where a message actually enters or leaves the server.

![using wire logs]({{base_path}}/assets/img/integrate/wire-logs/show-wire-logs.png)

## Viewing wire logs while debugging

If you view wire logs while debugging, you only see the wire logs of mediators whose execution has already completed, as shown in the example below.

![using wire logs]({{base_path}}/assets/img/integrate/wire-logs/while-debugging.png)

## Viewing wire logs of a mediator after debugging

When you view the wire logs of a mediator (e.g., a Send mediator) after debugging, you can view the request and response wire logs as shown in the example below.

![using wire logs]({{base_path}}/assets/img/integrate/wire-logs/after-debugging.png)

## Viewing wire logs of a proxy service after debugging

If you view the wire logs of a proxy service after debugging has finished, you see the request wire log and the final response wire log of that proxy, as shown in the example below.

![using wire logs]({{base_path}}/assets/img/integrate/wire-logs/for-proxy.png)
diff --git a/en/docs/integrate/develop/working-with-service-catalog.md b/en/docs/integrate/develop/working-with-service-catalog.md
deleted file mode 100644
index 9f78f4c8fd..0000000000
--- a/en/docs/integrate/develop/working-with-service-catalog.md
+++ /dev/null
@@ -1,96 +0,0 @@
# Publishing Integrations to the API Manager

A REST API artifact you create in WSO2 Integration Studio is exposed to consumers when you run it on the Micro Integrator runtime. If you want to control and manage this API, and also expose it to an API marketplace where it becomes discoverable to a wider community of consumers, you need to publish this REST API to the API management layer (API-M runtime) of the product.

Follow the steps given below to publish REST APIs from the Micro Integrator to the API-M runtime.

!!!
tip "Related Tutorials" - To try out an end-to-end use case where an integration service is created and used as a managed API, see tutorials: [Exposing an Integration Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial) and [Exposing an Integration SOAP Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial-for-proxy-services). - -## Prerequisites - -Develop a REST API artifact using WSO2 Integration Studio. This is your integration service with the mediation logic that will run on the Micro Integrator. - -!!! Tip - For instructions on creating a new integration service, use the following documentation: - - - [Developing your First Integration Service]({{base_path}}/integrate/develop/integration-development-kickstart). - - [Integration Tutorials]({{base_path}}/tutorials/tutorials-overview/#integration-tutorials). - -## Step 1 - Update the service metadata - -When you create a REST API artifact from WSO2 Integration Studio, a **resources** folder with metadata files is created as shown below. This metadata is used by the API management runtime to generate the API proxy for the service. - - - -Update the metadata for your service as explained below. - - - - - - - - - - - - - - -
<table>
    <tr>
        <th>Parameter</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>description</td>
        <td>Explain the purpose of the API.</td>
    </tr>
    <tr>
        <td>serviceUrl</td>
        <td>
            This is the URL of the API when it gets deployed in the Micro Integrator. You (as the integration developer) may not know this URL during development. Therefore, you can parameterize the URL to be resolved later using environment variables. By default, the <code>{MI_HOST}</code> and <code>{MI_PORT}</code> values are parameterized with placeholders.<br/><br/>
            You can configure the serviceUrl in the following ways:
            <ul>
                <li>Add the complete URL without parameters. For example: <code>http://localhost:8290/healthcare</code>.</li>
                <li>Parameterize using the host and port combination. For example: <code>http://{MI_HOST}:{MI_PORT}/healthcare</code>.</li>
                <li>Parameterize using a preconfigured URL. For example: <code>http://{MI_URL}/healthcare</code>.</li>
            </ul>
        </td>
    </tr>
</table>
    - -!!! Tip - See the [Service Catalog API documentation]({{base_path}}/reference/product-apis/service-catalog-apis/service-catalog-v1/service-catalog-v1/) for more information on the metadata in the YAML file. - -## Step 2 - Configure the Micro Integrator server - -The Micro Integrator contains a client for publishing integrations to the API-M runtime. To enable this client, update the following in the `deployment.toml` file of your Micro Integrator. - -```toml -[[service_catalog]] -apim_host = "https://localhost:9443" -enable = true -username = "admin" -password = "admin" -``` - -See the descriptions of the [service catalog parameters]({{base_path}}/reference/config-catalog-mi/#service-catalog-client). - -## Step 3 - Start the servers - -Once you have created the integration service and deployed it in the Micro Integrator, you only need to start the two servers (API-M server and the Micro Integrator server). - -Note that the API-M server should be started before the Micro Integrator. The client in the Micro Integrator publishes the integration services to the API-M layer during server startup. - -## What's Next? - -Once the servers are started and the services are published, you can access the service from the API-M layer, and then proceed to **Create**, **Deploy**, and **Publish** the API as follows: - -1. [Create and API ]({{base_path}}/design/create-api/create-an-api-using-a-service) using the integration service. -2. [Deploy the API]({{base_path}}/deploy-and-publish/deploy-on-gateway/deploy-api/deploy-an-api) in the API Gateway. -3. [Publish the API]({{base_path}}/deploy-and-publish/publish-on-dev-portal/publish-an-api) to the Developer Portal. diff --git a/en/docs/integrate/develop/working-with-wso2-integration-studio.md b/en/docs/integrate/develop/working-with-wso2-integration-studio.md deleted file mode 100644 index 664af6b6cf..0000000000 --- a/en/docs/integrate/develop/working-with-wso2-integration-studio.md +++ /dev/null @@ -1,17 +0,0 @@ -# Working with WSO2 Integration Studio - -Once you have created a [REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) or a [Proxy Service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) in WSO2 Integration Studio, you can update the mediation flow by adding new mediation artifacts and changing the existing artifacts. - -Follow the steps given below. - -1. First, open the proxy service or REST API from the project explorer. -2. You can use either the **design view** or the **source view** to update the mediation flow. Shown below is an example of a PassThrough proxy service: - - **Design View**: - You can select any of the mediation artifacts from the design view shown below and update its parameters from the **Properties** tab in the bottom pane. You can also drag and drop new mediation artifacts to the design view from the artifact **Palette** to modify the mediation flow. - - ![design view]({{base_path}}/assets/img/integrate/design-workflow/design-view.png) - - - **Source View**: - If you have a sample proxy service configuration, you can simply copy it to the source view shown below. 
- - ![source view]({{base_path}}/assets/img/integrate/design-workflow/source-view.png) \ No newline at end of file diff --git a/en/docs/integrate/develop/wso2-integration-studio.md b/en/docs/integrate/develop/wso2-integration-studio.md deleted file mode 100644 index f250c02e20..0000000000 --- a/en/docs/integrate/develop/wso2-integration-studio.md +++ /dev/null @@ -1,133 +0,0 @@ -# Quick Tour - WSO2 Integration Studio - -WSO2 Integration Studio is your development environment for designing, developing, debugging, and testing integration solutions. As an integration developer, you can execute all the phases of the development lifecycle using this tool. When your integration solutions are production-ready, you can easily push the artifacts to your continuous integration/continuous deployment pipeline. - -!!! Tip - The base of the WSO2 Integration Studio is Eclipse IDE. You can install any supported Eclipse plugin for Integration - Studio by navigating to **Help** -> **Install New Software**. - -## Getting Started - -When you open WSO2 Integration Studio, you will see the **Getting Started** view in the tool's workbench. - - - -You can also click the icon at the top-right of the workbench to open the **Project Explorer** alongside the **Getting Started** tab as shown below. - - - -To get started, you need to first create the required project directories. Alternatively, you can use an integration sample, which will generate the required projects and files for a specific use case. - - - - - - - - - - - - - - -
<table>
    <tr>
        <td>Project Directories</td>
        <td>Use the links on the Getting Started view to create the required projects. These project directories are saved to your workspace and they can later be accessed from the Project Explorer view of WSO2 Integration Studio.</td>
    </tr>
    <tr>
        <td>Samples</td>
        <td>The Getting Started view lists a set of sample projects and integration artifacts that represent common integration scenarios. You can use these to explore WSO2 Micro Integrator and to try out common integration use cases. The sample guide provides instructions on how to run the samples.</td>
    </tr>
    <tr>
        <td>Sample Guide</td>
        <td>The sample guide is a Help pane, which provides documentation on how to use the integration sample scenarios. You can follow the instructions given in the guides to deploy and test each sample scenario.</td>
    </tr>
</table>
    - -Once you have created the required set of projects and artifacts, you can start working with the project directories and artifact editors shown below. - - - -## Project Explorer - -The project explorer provides a view of all the project directories created for your integration solution. Shown below is the project explorer of a working project. - - - -## Graphical Editor - -The graphical editor of WSO2 Integration Studio is a drag-and-drop editor for designing integration workflows. To access the graphical editor, you must first create a REST API, Proxy Service, Inbound Endpoint, or Sequence artifact. - -Once you open the graphical editor, the **Palette** to your left lists all the integration artifacts that you can use. You can drag the required artifacts to the canvas on your right and design your integration flow. The parameters for each artifact can be configured using the [Properties](#properties) view. - - - -## Source Editor - -When you open any integration artifact from the project explorer, you will have a source editor in addition to the graphical editor. This editor allows you to write or edit your integration solution using the source code (synapse). - - - -## Swagger Editor - -The swagger editor is available when you create a REST API. This is in addition to the graphical editor and the source editor. The swagger editor can be used to write or edit your integration solution using the swagger definition. The Swagger UI allows you to visualize and interact with the REST API. - - - -## Properties - -The properties view allows you to configure the properties and parameters that define the integration artifacts in your integration flow. When you double-click an artifact in the graphical editor, the **Properties** view for that artifact will open. Alternatively, you can right-click the artifact and click **Show Properties** to open this view. - - - -## Console - -The Console view displays a variety of console types depending on the type of development and the current set of user settings. The three consoles that are provided by default with WSO2 Integration Studio are: - -- **Process Console**: Shows standard output, error, and input. -- **Stacktrace Console**: Well-formatted Java stack trace with hyperlinks to specific source code locations. -- **CVS Console**: Displays output from CVS operations. - - - -## Embedded Micro Integrator - -WSO2 Integration Studio is shipped with an embedded Micro Integrator server, which allows developers to deploy and run integration artifacts during the development phase. To deploy the artifacts and to run the embedded Micro Integrator, right-click the composite application project (which includes your artifacts) and click **Export Project Artifacts and Run**. - -Find out more about [using the embedded Micro Integrator]({{base_path}}/integrate/develop/using-embedded-micro-integrator). - - - -## Inbuilt Debugging Capabilities - -WSO2 Integration Studio is shipped with mediation debugging capabilities, which allows developers to debug an integration project using the tool. The embedded Micro Integrator server and debugging capabilities enable developers to comprehensively test, debug, and improve integration solutions before the artifacts are released to a production environment. - -You need to select your integration project in the project explorer and go to **Run -> Debug** as shown below. Find out more about [mediation debugging]({{base_path}}/integrate/develop/debugging-mediation). 
- - - -## Outline - -The Outline view displays an outline of a structured file that is -currently open in the editor area and lists structural elements. It -enables you to hide certain fields, methods, and types, and also allows -you to sort and filter to find what you want. The contents of the -outline are editor-specific. For example, in a Java source file, -the structural elements are classes, fields, and methods. The contents -of the toolbar are also editor-specific. - - - -## What's Next? - -- See [Installing WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio) for installation instructions. -- See [Working with WSO2 Integration Studio]({{base_path}}/integrate/develop/working-with-wso2-integration-studio) for more information on how to setup and use tooling. -- See [Troubleshooting WSO2 Integration Studio]({{base_path}}/integrate/develop/troubleshooting-wso2-integration-studio) for information on troubleshooting errors you may run into while using WSO2 Integration Studio. \ No newline at end of file diff --git a/en/docs/integrate/examples/data_integration/batch-requesting.md b/en/docs/integrate/examples/data_integration/batch-requesting.md deleted file mode 100644 index 4542c90f40..0000000000 --- a/en/docs/integrate/examples/data_integration/batch-requesting.md +++ /dev/null @@ -1,134 +0,0 @@ -# Batch Requesting - -The batch requests feature allows you to send multiple (IN-Only) -requests to a datasource using a single operation (batch operation). - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create the following database: Company - - ```bash - CREATE DATABASE Company; - ``` - -3. Create the **Employees** table: - - ```bash - USE Company; - - - CREATE TABLE `Employees` (`EmployeeNumber` int(11) NOT NULL, `FirstName` varchar(255) NOT NULL, `LastName` varchar(255) DEFAULT NULL, `Email` varchar(255) DEFAULT NULL, `JobTitle` varchar(255) DEFAULT NULL, `OfficeCode` int(11) NOT NULL, PRIMARY KEY (`EmployeeNumber`,`OfficeCode`)); - ``` - -## Synapse configuration - -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Tip - Be sure to replace the datasource username and password with the correct values for your MySQL instance. - -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Company - root - password - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email, JobTitle, OfficeCode) values(:EmployeeNumber,:FirstName,:LastName,:Email,:JobTitle,:Officecode) - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs) -4. 
[Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Let's send a request with multiple transactions to the data service: - -1. Download and Install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service. -2. Create a new SOAP project in SoapUI by using the following WSDL file: - ```bash - http://localhost:8290/services/batch_requesting_sample?wsdl - ``` - -3. Update the **addEmployeeOp** operation (under **batch_requesting_sampleSOAP11Binding**) with the request body as shown below: - - !!! Tip - In this example, we are sending two transactions with details of two employees. - - ```xml - - - - - 1002 - - John - - Doe - - johnd@wso2.com - - Consultant - - 01 - - - - 1004 - - Peter - - Parker - - peterp@wso2.com - - Consultant - - 01 - - - ``` - -4. Invoke the **addEmployeeOp** operation. - -You will find that all the records have been inserted into the `Employees` database simultaneously. - -!!! Tip - Want to confirm that the records are added to the database? Run the following MySQL command. - - ```bash - SELECT * FROM Employees - ``` diff --git a/en/docs/integrate/examples/data_integration/carbon-data-service.md b/en/docs/integrate/examples/data_integration/carbon-data-service.md deleted file mode 100644 index c6e0fc140c..0000000000 --- a/en/docs/integrate/examples/data_integration/carbon-data-service.md +++ /dev/null @@ -1,90 +0,0 @@ -# Exposing a Carbon Datasource as a Data Service - -A Carbon datasource is an RDBMS or a custom datasource created using the -Micro Integrator. You can simply use -that as the datasource for a data service. A Carbon datasource is -persistent, and can be used whenever required. - -## Synapse configurations - -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -- **Carbon** datasource - - ```xml - - rdbms_datasource_mysql - MySQL Connection - - MysqlConJNDI1 - - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Employees - username - password - - - - ``` - -- **Data service** - - ```xml - - - rdbms_datasource_mysql - - - select EmployeeNumber, FirstName, LastName, Email, Salary from Employees where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - - - - - - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Datasource project]({{base_path}}/integrate/develop/create-datasources) and then [create a datasource]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-datasources). -4. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs) and then [create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. 
[Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -The service can be invoked in REST-style via curl ( -[http://curl.haxx.se](http://curl.haxx.se/) ). Shown below is the curl -command to invoke the GET resource: - -```bash -curl -X GET http://localhost:8290/services/RDBMSDataService_3.HTTPEndpoint/Employee/3 -``` - -This generates a response as follows. - -```bash -3WillSmithwill@google.com15500.03WillSmithwill@google.com15500.03WillSmithwill@google.com15500.0 -``` diff --git a/en/docs/integrate/examples/data_integration/csv-data-service.md b/en/docs/integrate/examples/data_integration/csv-data-service.md deleted file mode 100644 index a6cfb9b8e9..0000000000 --- a/en/docs/integrate/examples/data_integration/csv-data-service.md +++ /dev/null @@ -1,122 +0,0 @@ -# Exposing an CSV Datasource - -This example demonstrates how CSV data can be exposed as a data service. - -## Prerequisites - -!!! Info - Note that you can only read data from CSV files. The Micro Integrator does not support inserting, updating, or modifying data in a CSV file. - -[Download](https://github.com/wso2-docs/WSO2_EI/blob/master/data-service-resources/Products.csv) the `Products.csv` file. - -This file contains data about products (cars/motorcycles) that are -manufactured in an automobile company. The data table has the following -columns: `ID` , `Name` , -`Classification` , and `Price`. - -## Synapse configuration -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -**Be sure** to update the CSV datasource path. - -```xml - - - /path/to/csv/Products.csv - , - 2 - true - 1 - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. - **Be sure** to update the CSV datasource path. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -You can send an HTTP GET request to invoke the data service using cURL -as shown below. - -```bash -curl -X GET http://localhost:8290/services/CSV.HTTPEndpoint/Products -``` - -This will return the response in XML. - -Example: - -```xml - - - - S10_1678 - Motorcycles - 1000 - 1969 Harley Davidson - Ultimate Chopper - - - S10_1949 - Classic Cars - 600 - 1952 Alpine Renault 1300 - - - S10_2016 - Motorcycles - 456 - 1996 Moto Guzzi 1100i - - - S10_4698 - Motorcycles - 345 - 2003 Harley-Davidson Eagle Drag Bike - - - S10_4757 - Classic Cars - 230 - 1972 Alfa Romeo GTA - - - S10_4962 - Classic Cars - 890 - 1962 LanciaA Delta 16V - - - S12_1099 - Classic Cars - 560 - 1968 Ford Mustang - - - S12_1108 - Classic Cars - 900 - 2001 Ferrari Enzo - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/data_integration/data-input-validator.md b/en/docs/integrate/examples/data_integration/data-input-validator.md deleted file mode 100644 index 371a3187b4..0000000000 --- a/en/docs/integrate/examples/data_integration/data-input-validator.md +++ /dev/null @@ -1,110 +0,0 @@ -# Validating Input Data in a Data Request - -Validators are added to individual input mappings in a query. 
Input validation allows data services to validate the input parameters in a request and stop the execution of the request if the input doesn't meet the required criteria. WSO2 Micro Integrator provides a set of built-in validators for some of the most common use cases. It also provides an extension mechanism for writing custom validators.

## Prerequisites

Let's create a MySQL database with the required data.

1. Install the MySQL server.
2. Create a database named `EmployeeDatabase`.

    ```bash
    CREATE DATABASE EmployeeDatabase;
    ```

3. Create the Employees table inside the EmployeeDatabase database:

    ```bash
    USE EmployeeDatabase;

    CREATE TABLE Employees (EmployeeNumber int(11) NOT NULL, FirstName varchar(255) NOT NULL, LastName varchar(255) DEFAULT NULL, Email varchar(255) DEFAULT NULL, Salary varchar(255));
    ```

## Synapse configuration
Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example.

```xml
com.mysql.jdbc.Driver
jdbc:mysql://localhost:3306/EmployeeDatabase
root
password

insert into Employees (EmployeeNumber, FirstName, LastName, Email, Salary) values(:EmployeeNumber,:FirstName,:LastName,:Email,:Salary)
```

## Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial.
2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or
`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory.

    !!! Note
        If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`.

3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs).
4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above.
5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

Let's send a request with invalid and valid data to the data service:

1. Download and install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service.
2. Create a new SOAP project in SoapUI by using the following WSDL file:
    ```bash
    http://localhost:8290/services/input_validator_sample?wsdl
    ```

3. Update the **addEmployeeOp** operation (under **input_validator_sample.SOAP12Binding**) with the request body as shown below:

    ```xml
    <!-- Values correspond to EmployeeNumber, FirstName, LastName, Email, and Salary -->
    6001
    AB
    Nick
    test@test.com
    1500
    ```

4. Invoke the **addEmployeeOp** operation. A validation error is returned because the **addEmployeeOp** operation has failed: the FirstName value has only 2 characters and does not meet the validation criteria.

5. Now, change the FirstName value in the request as shown below and invoke the operation again.

    ```xml
    6001
    ABC
    Nick
    test@test.com
    1500
    ```

    The employee details are added to the database table.
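!!! Tip
    Want to confirm that the employee was added to the database? Run the following MySQL commands (assuming the `EmployeeDatabase` setup from the prerequisites):

    ```bash
    USE EmployeeDatabase;
    SELECT * FROM Employees WHERE EmployeeNumber = 6001;
    ```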
diff --git a/en/docs/integrate/examples/data_integration/distributed-trans-data-service.md b/en/docs/integrate/examples/data_integration/distributed-trans-data-service.md deleted file mode 100644 index 3efcc74909..0000000000 --- a/en/docs/integrate/examples/data_integration/distributed-trans-data-service.md +++ /dev/null @@ -1,134 +0,0 @@ -# Using Distributed Transactions in Data Services - -!!! Warning - **The contents on this page are currently under review!** - -The data integration feature in WSO2 Micro Integrator supports data -federation, which means that a single data service can expose data from -multiple datasources. However, if you have multiple RDBMSs connected to -your data service, and if you need to perform IN-ONLY operations -(operations that can insert data and modify data in the datasource) in a -coordinated manner, the RDBMSs need to be defined as XA datasources. - -Let's consider a scenario where you have two MySQL databases. You can -define a single data service for these databases and insert data into -both as explained below. - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Set up a database for storing information of offices: - 1. Create a database called **OfficeDetails**. - - ```bash - CREATE DATABASE OfficeDetails; - ``` - - 2. Create the **Offices** table: - - ```bash - USE OfficeDetails; - - CREATE TABLE `Offices` (`OfficeCode` int(11) NOT NULL, `AddressLine1` varchar(255) NOT NULL, `AddressLine2` varchar(255) DEFAULT NULL, `City` varchar(255) DEFAULT NULL, `State` varchar(255) DEFAULT NULL, `Country` varchar(255) DEFAULT NULL, `Phone` varchar(255) DEFAULT NULL, PRIMARY KEY (`OfficeCode`)); - ``` - -3. Set up a database to store the employee information: - 1. Create a database called **EmployeeDetails** . - - ```bash - CREATE DATABASE EmployeeDetails; - ``` - - 2. Create the **Employees** table: - - ```bash - USE EmployeeDetails; - - CREATE TABLE `Employees` (`EmployeeNumber` int(11) NOT NULL, `FirstName` varchar(255) NOT NULL, `LastName` varchar(255) DEFAULT NULL, `Email` varchar(255) DEFAULT NULL, `JobTitle` varchar(255) DEFAULT NULL, `OfficeCode` int(11) NOT NULL, PRIMARY KEY (`EmployeeNumber`)); - ``` - -## Synapse configuration - -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - com.mysql.jdbc.jdbc2.optional.MysqlXADataSource - - jdbc:mysql://localhost:3306/OfficeDetails - root - root - - - - com.mysql.jdbc.jdbc2.optional.MysqlXADataSource - - jdbc:mysql://localhost:3306/EmployeeDetails - root - root - - - - insert into Offices (OfficeCode,AddressLine1,AddressLine2,City,State,Country,Phone) values(:OfficeCode,:AddressLine1,'test','test','test','USA','test') - - - - - insert into Employees (EmployeeNumber,FirstName,LastName,Email,JobTitle,OfficeCode) values(:EmployeeNumber,:FirstName,:LastName,'test','test',:OfficeCode) - - - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! 
Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke the **request box** operation and see that the data is successfully inserted into the two databases. Go to the MySQL terminal and run the following commands: - -- Check the office details in the offices table: - - ```bash - USE OfficeDetails; - SELECT * FROM Offices; - ``` - -- Check the employee details in the employees table. - - ```bash - USE EmployeeDetails; - SELECT * FROM Employees; - ``` - -Now, enter another set of values for the two operations but enter an erroneous value for one field. Invoke the operation and check the database tables. You see that no records have been entered into either database. \ No newline at end of file diff --git a/en/docs/integrate/examples/data_integration/json-with-data-service.md b/en/docs/integrate/examples/data_integration/json-with-data-service.md deleted file mode 100644 index 453fda10a2..0000000000 --- a/en/docs/integrate/examples/data_integration/json-with-data-service.md +++ /dev/null @@ -1,435 +0,0 @@ -# Exposing Data in JSON Format - -You can send and receive JSON messages by default via WSO2 Micro Integrator. See the topics given below to -understand how data can be exposed in the JSON format, and how data can be changed by sending JSON payloads. In this example, you will use a data service that exposes RDBMS data. - -A data service can expose data in one of the following formats: XML, -RDF, or JSON. You can select the required format by specifying -the output type for the data service query. To expose data in JSON, you -need to select JSON as the output type, and map the output to a JSON -template. - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create a database named `Employees`. - - ```bash - CREATE DATABASE Employees; - ``` - -3. Create the Employee table inside the Employees database: - - ```bash - USE Employees; - - CREATE TABLE Employees (EmployeeNumber int(11) NOT NULL, FirstName varchar(255) NOT NULL, LastName varchar(255) DEFAULT NULL, Email varchar(255) DEFAULT NULL, Salary varchar(255)); - ``` - -## Synapse configuration - -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. 
- -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Employees - root - password - - - select EmployeeNumber, FirstName, LastName, Email, Salary from Employees where EmployeeNumber=:EmployeeNumber - { - "Employees":{ - "Employee":[ - { - "EmployeeNumber":"$EmployeeNumber", - "FirstName":"$FirstName", - "LastName":"$LastName", - "Email":"$Email", - "Salary":"$Salary" - } - ] - } -} - - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email, Salary) values(:EmployeeNumber,:FirstName,:LastName,:Email,:Salary) - - - - - - - - update Employees set LastName=:LastName, FirstName=:FirstName, Email=:Email, Salary=:Salary where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -Alternatively, you can use one of the following JSON templates for the response mapping: - -- Simple JSON template - - ```json - { "Employees": - {"Employee":[ - {"EmployeeNumber":"$EmployeeNumber", - "Details": { - "FirstName":"$FirstName", - "LastName":"$LastName", - "Email":"$Email", - "Salary":"$Salary" - } - } - ] - } - } - ``` - -- Define data types - - In a basic JSON output mapping, we specify the field values that we expect in the query result. You can give additional properties to this field mapping such as data type of the field, the possible content filtering user roles etc. These extended properties for the fields are given in parentheses, with a list of string tokens providing the additional properties, separated by a semicolon (";"). See the sample below. - - ```json - - { "Employees": - {"Employee":[ - {"EmployeeNumber":"$EmployeeNumber(type:integer)", - "Details": { - "FirstName":"$FirstName", - "LastName":"$LastName", - "Email":"$Email", - "Salary":"$Salary(requiredRoles:hr,admin)" - } - } - ] - } - } - - ``` -!!! Info - As shown in the sample given above, the column name values that are expected in the query result should be referred to by the column name with the `$` prefix. E.g. `$EmployeeNumber`. - - Also, the structure of the JSON template should follow some guidelines in order to be compatible with the result. These guidelines are: - - - The top most item should be a JSON object. It cannot be a JSON array. - - For handling multiple records from the result set, the immediate child of the top most object can be a JSON array, and the array should contain only a single object. - - If only a single result is returned, the immediate child of the top most object can be a single JSON object. - - After the immediate child of the top most object, there cannot be other JSON arrays in the mapping. - - All JSON responses are returned as an array. - -- If you want to write a nested query using JSON, see the example on [nested queries]({{base_path}}/integrate/examples/data_integration/nested-queries-in-data-service). - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. 
- -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -### GET data in JSON - -The RDBMSDataService that you are using contains the following -resource: - -- **Resource Path**: `Employee/{EmployeeNumber} ` -- **Resource Method**: `GET` -- **Query ID**: `GetEmployeeDetails` - -You can now RESTfully invoke the above resource. To send a JSON message -to a RESTful resource, you can simply add the “ -` Accept ` : ` Application/json ` ” to the -request header when you send the request. The service can be invoked in -REST-style via [curl](http://curl.haxx.se/) . -Shown below is the curl command to invoke the GET resource: - -```bash -curl -X GET -H "Accept: application/json" http://localhost:8290/services/RDBMSDataService/Employee/{EmployeeNumber} -``` - -Example: - -```bash -curl -X GET -H "Accept: application/json" http://localhost:8290/services/RDBMSDataService/Employee/1 -``` - -As a result, you receive the response in JSON format as shown below. - -```bash -{"Employees":{"Employee":[{"EmployeeNumber":"1","FirstName":"John","LastName":"Doe","Email":"JohnDoe@gmail.com","Salary":"10000"},{"EmployeeNumber":"1","FirstName":"John","LastName":"Doe","Email":"JohnDoe@gmail.com","Salary":"20000"}]} -``` - -### POST/UPDATE data using JSON - -When a client sends a request to change data (POST/PUT/DELETE) in the -datasource, the HTTP header ` Accept ` should be set to -` application/json ` .  Also, if the data is sent as a -JSON payload, the HTTP header ` Content-Type ` should be -set to ` application/json ` . - -The RDBMSDataService that you are using contains the following -resources for adding and updating data. - -- Resource for adding employee information: - - - **Resource Path**: `Employee` - - **Resource Method**: `POST ` - - **Query ID**: `AddEmployeeDetails` - -- Resource for updating employee information: - - - **Resource Path**: `Employee` - - **Resource Method**: `PUT` - - **Query ID**: `UpdateEmployeeDetails` - -You can RESTfully invoke the above resource by sending HTTP requests as -explained below. - -#### Post data - -To post new employee information, you need to invoke the resource with -the POST method. - -1. First, create a file named - ` employee-payload.json ` , and define the JSON - payload for posting new data as shown below. - - ```json - { - "user_defined_value": { - "EmployeeNumber" : "14001", - "LastName": "Smith", - "FirstName": "Will", - "Email": "will@google.com", - "Salary": "15500.0" - } - } - ``` - -2. On the terminal, navigate to the location where the - **` employee-payload.json `** file is stored, - and execute the following HTTP request: - - ```bash - curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' --data "@employee-payload.json" -k -v http://localhost:8290/services/RDBMSDataService/Employee - ``` - -#### Post data in batches - -You are able to post JSON data in batches using the -` RDBMSDataService ` that you created or uploaded. - -!!! Info - Verify that batch requesting is enabled for the data service. - -1. First, create a file named - **` employee-batch-payload.json `** , and - define the JSON payload for posting multiple employee records - (batch) as shown below. 
- - ```bash - { - "user_defined_value": { - "user_defined_value": [ - { - "EmployeeNumber": "5012", - "FirstName": "Will", - "LastName": "Smith", - "Email": "will@smith.com", - "Salary": "13500.0" - }, - { - "EmployeeNumber": "5013", - "FirstName": "Parker", - "LastName": "Peter", - "Email": "peter@parker.com", - "Salary": "15500.0" - } - ] - } - } - ``` - -2. On the terminal, navigate to the location where the - **` employee-batch-payload.json `** file is - stored, and execute the following HTTP request: - - ```bash - curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' --data "@employee-batch-payload.json" -k -v http://localhost:8290/services/RDBMSDataService/Employee_batch_req - ``` - -#### Update data - -To update the existing employee records, you need to invoke the resource -with the PUT method. - -1. First, create a file named - **` employee-upload-update.json `** , and - define the JSON payload for updating an existing employee record as - shown below. - For example, change the salary amount. Make sure that the employee - number already exists in the database. - - ```bash - { - "user_defined_value": { - "EmployeeNumber" : "1", - "FirstName": "Will", - "LastName": "Smith", - "Email": "will@smith.com", - "Salary": "78500.0" - } - } - ``` - -2. On the terminal, navigate to the location where the - **` employee-upload-update.json `** file is - stored, and execute the following HTTP request: - - ```bash - curl -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' --data "@employee-upload-update.json" -k -v http://localhost:8290/services/RDBMSDataService/Employee - ``` - -#### Post data using Request Box - -When the Request Box feature is enabled, you can invoke multiple -operations (consecutively) using one single operation. The process of -posting a JSON payload through a request box transaction is explained -below. - -!!! Info - Verify that batch requesting is enabled for the data service. - -1. First, create a file named - **` employee-request-box-payload `** - **` .json `** , and define the JSON payload for - posting multiple employee records (batch) as shown below. - - !!! Tip - The following payload works for this use case. When you create - payloads for different use cases, be mindful of the tips [given - here](#UsingJSONwithDataServices-JSON_payloads) . - - - ```json - { - "request_box" : { - "_postemployee" : { - "EmployeeNumber" : "14005", - "LastName" : "Smith" , - "FirstName" : "Will" , - "Email" : "will@google.com" , - "Salary" : "15500.0" - }, - "_getemployee_employeenumber":{ - "EmployeeNumber" : "14005" - } - } - } - ``` - -2. On the terminal, navigate to the location where the - **` employee-request-box-payload.json `** file - is stored, and execute the following HTTP request: - - ```bash - curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' --data "@employee-request-box-payload.json" http://localhost:8290/services/RDBMSDataService/request_box - ``` - -!!! Tip - **Creating JSON payloads for Request Box transactions** - - Note the following when you define a JSON payload for a request box - transaction: The object name specified in the payload must be in the - following format: " ` _ ` - " where ` RESOURCE_PATH ` represents the path value - specified in the data service resource. 
For example, if the - ` RESOURCE_PATH ` is "employee", the payload object name - should be as follows: - - - For HTTP POST requests: ` _postemployee ` - - For HTTP PUT requests: ` _putemployee ` - - The child name/values of the child fields in the payload should be the - names and values of the input parameters in the target query. - - **Handling a resource path with the "/" symbol** - - If the ` RESOURCE_PATH ` specified in the data service - contains the "/" symbol, be sure to replace the "/" symbol with the - underscore symbol ("\") in the payload object name. - - **Important!** In this scenario, the ` RESOURCE_PATH ` value should only contain simple letters. For example, the value can be " ` /employee/add" ` but not " `/Employee/Add"`. - - For example, if the ` RESOURCE_PATH ` is - ` /employee/add ` , the payload object name should be as - follows: - - - For HTTP POST requests: ` _post_employee_add ` - - For HTTP PUT requests: ` _put_employee_add ` - diff --git a/en/docs/integrate/examples/data_integration/mongo-data-service.md b/en/docs/integrate/examples/data_integration/mongo-data-service.md deleted file mode 100644 index 4f2f7dc49b..0000000000 --- a/en/docs/integrate/examples/data_integration/mongo-data-service.md +++ /dev/null @@ -1,97 +0,0 @@ -# Exposing a Mongo Datasource - -This example demonstrates how Mongo data can be exposed as a data service. - -## Prerequisites - -Let's create a simple Mongo database that stores employee information. - -1. Download and set up a MongoDB server. -2. Use the following commands to create the database. - - ```bash - mongo - use employeesdb - db.createCollection("employees") - db.things.insert( { id: 1, name: "Document1" } ) - db.employees.insert( { id: 1, name: "Sam" } ) - db.employees.insert( { id: 2, name: "Mark" } ) - db.employees.insert( { id: 3, name: "Steve" } ) - db.employees.insert( { id: 4, name: "Jack" } ) - ``` - -## Synapse configuration -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - localhost - employeesdb - PRIMARY - - - employees.insert("{id:#, name:#}") - - - - - employees.find() - - - - - - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -3. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Let's try out this sample by invoking the `find` resource in the data service to GET data. Shown below is the [curl](http://curl.haxx.se/) command to send the GET request: - -```bash -curl -X GET http://localhost:8290/services/MongoDB/find -``` - -This generates a response as follows. 
- -```bash - - - { "_id" : { "$oid" : "5fd21a7dc9b921afa1c9b83c"} , "id" : 1.0 , "name" : "Sam"} - - - { "_id" : { "$oid" : "5fd21a87c9b921afa1c9b83d"} , "id" : 2.0 , "name" : "Mark"} - - - { "_id" : { "$oid" : "5fd21a90c9b921afa1c9b83e"} , "id" : 3.0 , "name" : "Steve"} - - - { "_id" : { "$oid" : "5fd21a9fc9b921afa1c9b83f"} , "id" : 4.0 , "name" : "Jack"} - - -``` diff --git a/en/docs/integrate/examples/data_integration/nested-queries-in-data-service.md b/en/docs/integrate/examples/data_integration/nested-queries-in-data-service.md deleted file mode 100644 index f494f07a8f..0000000000 --- a/en/docs/integrate/examples/data_integration/nested-queries-in-data-service.md +++ /dev/null @@ -1,290 +0,0 @@ -# Using Nested Data Queries - -Nested queries help you to use the result of one query as an input -parameter of another, and the queries executed in a nested query works -in a transactional manner. Follow the steps given below to add a nested -query to a data service. - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create the following database: Company - - ```bash - CREATE DATABASE Company; - ``` - -3. Create the following tables: - - - **Offices** table: - - ```bash - USE company; - - CREATE TABLE `OFFICES` (`OfficeCode` int(11) NOT NULL, `AddressLine1` varchar(255) NOT NULL, `AddressLine2` varchar(255) DEFAULT NULL, `City` varchar(255) DEFAULT NULL, `State` varchar(255) DEFAULT NULL, `Country` varchar(255) DEFAULT NULL, `Phone` varchar(255) DEFAULT NULL, PRIMARY KEY (`OfficeCode`)); - ``` - - - **Employees** table: - - ```bash - CREATE TABLE `EMPLOYEES` (`EmployeeNumber` int(11) NOT NULL, `FirstName` varchar(255) NOT NULL, `LastName` varchar(255) DEFAULT NULL, `Email` varchar(255) DEFAULT NULL, `JobTitle` varchar(255) DEFAULT NULL, `OfficeCode` int(11) NOT NULL, PRIMARY KEY (`EmployeeNumber`,`OfficeCode`), CONSTRAINT `employees_ibfk_1` FOREIGN KEY (`OfficeCode`) REFERENCES `OFFICES` (`OfficeCode`)); - ``` - -4. Insert the following data into the tables: - - - Add to the **Offices** table: - - ```bash - INSERT INTO OFFICES VALUES (1,"51","Glen Street","Norwich","London","United Kingdom","+441523624"); - INSERT INTO OFFICES VALUES (2,"72","Rose Street","Pasadena","California","United States","+152346343"); - ``` - - - Add to the **Employees** table: - - ```bash - INSERT INTO EMPLOYEES VALUES (1,"John","Gardiner","john@office1.com","Manager",1); - INSERT INTO EMPLOYEES VALUES (2,"Jane","Stewart","jane@office2.com","Head of Sales",2); - INSERT INTO EMPLOYEES VALUES (3,"David","Green","david@office1.com","Manager",1); - ``` - -You will now have two tables in the **Company** database as shown below: - -- **Offices** table: - To view the data, you can run the following command: - ` SELECT * FROM Offices; ` - -- **Employees** table: - To view the data, you can run the following command: - ` SELECT * FROM Employees; ` - -## Synapse configuration -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Tip - Be sure to replace the datasource username and password with the correct values for your MySQL instance. 
- -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Company - root - password - - - select EmployeeNumber, FirstName, LastName, Email, JobTitle, OfficeCode from EMPLOYEES where OfficeCode=:OfficeCode - - - - - - - - - - - - select OfficeCode, AddressLine1, AddressLine2, City, State, Country, Phone from OFFICES where OfficeCode=:OfficeCode - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -!!! Tip - If you want to map the query output to JSON, select `JSON` as the output type. The query result for the `listOfficeSQL` query will be as follows: - - ```json - - select OfficeCode, AddressLine1, AddressLine2, City, State, Country, Phone from OFFICES where OfficeCode=:OfficeCode - { - "Offices":{ - "Office":[ - { - "OfficeCode":"$OfficeCode(type:integer)", - "City":"$City", - "Country":"$Country", - "Phone":"$Phone", - "@EmployeeOfficeSQL":"$OfficeCode->OfficeCode" - } - ] - } - } - - - ``` - - As shown above, nested queries are mentioned in the JSON mapping by giving the query details as a JSON object attribute. That is, the name of the target query to be called and the property value (the fields in the result mapped with the target query parameters) are included in the JSON mapping as the object attribute name. - - In the above example: - - - The target query name is mentioned by prefixing the query name with "@". Note "@EmployeeOfficeSQL" in the example given above. - - The parameter mapping is added to the query by giving the following values: The field name in the result prefixed by "$", and the name of the target query parameter. - - These two values in the parameter mapping are separated by "->". See "$OfficeCode->OfficeCode" in the example given above. - - Note that the target query name and the parameter mapping are separated by a colon as follows: "@EmployeeOfficeSQL": "$OfficeCode->OfficeCode" - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -The service can be invoked in REST-style via curl ( -[http://curl.haxx.se](http://curl.haxx.se/) ). Shown below is the curl -command to invoke the GET resource. -It gets the details of the office that has the office code 1, and all -the employees that belong to office code 1. - -```bash -curl -X GET http://localhost:8290/services/nested_queries/offices/1 -``` - -!!! Tip - If you configured the output mapping of the `listOfficeSQL` query to be in the JSON format, you need to add the header `-H 'Accept: application/json'` to your curl command to get the output in the JSON format. 
- -```bash -curl -H 'Accept: application/json' -X GET http://localhost:8290/services/nested_queries/offices/1 -``` - -You will now see the following result: - -=== "XML result" - ```xml - - - 1 - 51 - Glen Street - Norwich - London - United Kingdom - +441523624 - - - 1 - John - Gardiner - john@office1.com - Manager - 1 - - - 3 - David - Green - david@office1.com - Manager - 1 - - - - - ``` - -=== "JSON result" - ```json - { - "Offices":{ - "Office":[ - { - "Phone":"+441523624", - "Country":"United Kingdom", - "OfficeCode":1, - "City":"Norwich", - "Entries":{ - "Entry":[ - { - "EmployeeNumber":"1", - "FirstName":"John", - "LastName":"Gardiner", - "Email":"john@office1.com", - "JobTitle":"Manager", - "OfficeCode":"1" - }, - { - "EmployeeNumber":"3", - "FirstName":"David", - "LastName":"Green", - "Email":"david@office1.com", - "JobTitle":"Manager", - "OfficeCode":"1" - }, - { - "EmployeeNumber":"1002", - "FirstName":"Peter", - "LastName":"Parker", - "Email":"peter@wso2.com", - "JobTitle":null, - "OfficeCode":"1" - }, - { - "EmployeeNumber":"1003", - "FirstName":"Chris", - "LastName":"Sam", - "Email":"chris@sam.com", - "JobTitle":null, - "OfficeCode":"1" - }, - { - "EmployeeNumber":"1006", - "FirstName":"Chris", - "LastName":"Sam", - "Email":"chris@sam.com", - "JobTitle":null, - "OfficeCode":"1" - }, - { - "EmployeeNumber":"1007", - "FirstName":"John", - "LastName":"Doe", - "Email":"johnd@wso2.com", - "JobTitle":null, - "OfficeCode":"1" - }, - { - "EmployeeNumber":"1008", - "FirstName":"Peter", - "LastName":"Parker", - "Email":"peterp@wso2.com", - "JobTitle":null, - "OfficeCode":"1" - } - ] - } - } - ] - } - } - ``` diff --git a/en/docs/integrate/examples/data_integration/odata-service.md b/en/docs/integrate/examples/data_integration/odata-service.md deleted file mode 100644 index 170a04f129..0000000000 --- a/en/docs/integrate/examples/data_integration/odata-service.md +++ /dev/null @@ -1,112 +0,0 @@ -# Using an OData Service - -This example demonstrates how an RDBMS can be exposed as an OData service. When OData is enabled, you do not need to manually define CRUD operations. Therefore, OData services are an easy way to enable CRUD operations for a data service. - -!!! Note - Note that the OData feature can only be used for RDBMS, Cassandra, and MongoDB datasources. - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create a MySQL database named `CompanyAccounts`. - . - - ```bash - CREATE DATABASE CompanyAccounts; - ``` - -3. Create a table in the ` CompanyAccounts ` - database as follows. - - ```bash - CREATE TABLE ACCOUNT(AccountID int NOT NULL,Branch varchar(255) NOT NULL, AccountNumber varchar(255),AccountType ENUM('CURRENT', 'SAVINGS') NOT NULL, - Balance FLOAT,ModifiedDate DATE,PRIMARY KEY (AccountID));  - ``` - -4. Enter the following data into the table: - - ```bash - INSERT INTO ACCOUNT VALUES (1,"AOB","A00012","CURRENT",231221,'2014-12-02'); - ``` - -## Synapse configuration - -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/CompanyAccounts - root - password - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. 
Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Open a command prompt execute the following CURL commands using CRUD operations: - -!!! Note - Note that you should have privileges to perform CRUD operations on the database. If not, the OData service will not work properly. - - -- To get the service document: - - ```bash - curl -X GET -H 'Accept: application/json' http://localhost:8290/odata/odata_service/Datasource - ``` - -- To get the metadata of the service: - - ```bash - curl -X GET -H 'Accept: application/xml' http://localhost:8290/odata/odata_service/Datasource/$metadata - ``` - -- To read details from the ACCOUNT table: - - ```bash - curl -X GET -H 'Accept: application/xml' http://localhost:8290/odata/odata_service/Datasource/ACCOUNT - ``` - -!!! info "Supported functionality" - - - Navigation (e.g. GET /EMPLOYEES(1001)/DEPARTMENTS) - - One to One - - One to Many - - Count - - Append count to the result set (e.g. GET /EMPLOYEES?$count=true) - - Get only the count (e.g. GET /EMPLOYEES/$count) - - Top (e.g. GET /EMPLOYEES?$top=10) - - Skip (e.g. GET /EMPLOYEES?$skip=5) - - Select (e.g. GET /EMPLOYEES?$select=emp_no,last_name) - - Sort - - Ascending (e.g. GET /EMPLOYEES?$orderby=last_name asc) - - Descending (e.g. GET /EMPLOYEES?$orderby=last_name desc) - - Filter (e.g. GET /EMPLOYEES?$filter=dept_no eq 'd001' and emp_no eq 10001) - - Pagination (e.g. GET /EMPLOYEES?$skiptoken=5) - -!!! important - Application owners should exist in both WSO2 API Manager and WSO2 Identity Server for this feature to work correctly. There are two approaches to fulfill this requirement: - - 1. Share the same user stores between APIM and IS: By sharing the user stores, ensure that the application owners are present in both APIM and IS. For more information on how to configure user stores, refer [Introduction to User Stores](https://apim.docs.wso2.com/en/4.2.0/administer/managing-users-and-roles/managing-user-stores/introduction-to-userstores/). - - 2. Create the same application owners in APIM and the IS user stores: Alternatively, you can create identical application owner accounts separately in both APIM and the IS user stores. - - Ensure that either of these options is implemented to enable the proper functioning of the feature. \ No newline at end of file diff --git a/en/docs/integrate/examples/data_integration/rdbms-data-service.md b/en/docs/integrate/examples/data_integration/rdbms-data-service.md deleted file mode 100644 index 23de03d6d7..0000000000 --- a/en/docs/integrate/examples/data_integration/rdbms-data-service.md +++ /dev/null @@ -1,187 +0,0 @@ -# Exposing an RDBMS Datasource - -This example demonstrates how RDBMS data (stored in a MySQL database) can be exposed as a data service. 
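Once deployed, the service's resources can be invoked RESTfully. As a quick preview (using the service and resource names from the configuration given later on this page), fetching the employee with ID 3 looks like this:

```bash
# Preview: invoke the GET resource of the finished data service.
# The service and resource names match the configuration given below.
curl -X GET http://localhost:8290/services/RDBMSDataService.HTTPEndpoint/Employee/3
```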
- -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create a database named `Employees` . - - ```bash - CREATE DATABASE Employees; - ``` - -3. Create the Employee table inside the Employees database: - - ```bash - USE Employees; - - CREATE TABLE Employees (EmployeeNumber int(11) NOT NULL, FirstName varchar(255) NOT NULL, LastName varchar(255) DEFAULT NULL, Email varchar(255) DEFAULT NULL, Salary varchar(255)); - ``` - -## Synapse configuration -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - select EmployeeNumber, FirstName, LastName, Email, Salary from Employees where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Employees - root - - com.mysql.jdbc.Driver - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email, Salary) values(:EmployeeNumber,:FirstName,:LastName,:Email,:Salary) - - - - - - - - - update Employees set FirstName=:FirstName, LastName=:LastName, Email=:Email, Salary=:Salary where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -!!! Tip - If you use **External** instead of the **Default** as the datasource type, your datasource should be supported by an external provider class, such as `com.mysql.jdbc.jdbc2.optional.MysqlXADataSource`.

    - After an external datasource is created, it can be used as another datasource in queries. See the example on [handling distributed transactions]({{base_path}}/integrate/examples/data_integration/distributed-trans-data-service) for more information on using external datasources. - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Let's take a look at the curl commands that are used to send the HTTP -requests for each of the resources: - -#### Post new data - -1. Create a file called ` employee-payload.xml ` - file, and define the XML payload for posting new data as shown - below. - - ```bash - <_postemployee> - 3 - Will - Smith - will@google.com - 15500.0 - - ``` - -2. Send the following HTTP request from the location where the - ` employee-payload.xml ` file is stored: - - ```bash - curl -X POST -H 'Accept: application/xml' -H 'Content-Type: application/xml' --data "@employee-payload.xml" http://localhost:8290/services/RDBMSDataService/employee - ``` - -#### Get data - -The service can be invoked in REST-style via curl ( -[http://curl.haxx.se](http://curl.haxx.se/) ). Shown below is the curl -command to invoke the GET resource: - -```bash -curl -X GET http://localhost:8290/services/RDBMSDataService.HTTPEndpoint/Employee/3 -``` - -This generates a response as follows. - -```bash -3WillSmithwill@google.com15500.03WillSmithwill@google.com15500.03WillSmithwill@google.com15500.0 -``` - -#### Update data - -1. Create a file called - ` employee-update-payload.xml ` file, and define - the XML payload for updating an existing employee record as shown - below. - - ```bash - <_putemployee> - 3 - Smith - Will - will@google.com - 30000.0 - - ``` - -2. 
Send the following HTTP request from the location where the - ` employee-update-payload.xml ` file is stored: - - ```bash - curl -X PUT -H 'Accept: application/xml' -H 'Content-Type: application/xml' --data "@employee-update-payload.xml" http://localhost:8290/services/RDBMSDataService/employee - ``` - -#### Get Swagger definition - -- Copy the following URL to your browser to get the Swagger definition in JSON format: - - ```bash - http://localhost:8290/services/RDBMSDataService?swagger.json - ``` - -- Copy the following URL to your browser to get the Swagger definition in YAML format: - - ```bash - http://localhost:8290/services/RDBMSDataService?swagger.yaml - ``` diff --git a/en/docs/integrate/examples/data_integration/request-box.md b/en/docs/integrate/examples/data_integration/request-box.md deleted file mode 100644 index b43c36df41..0000000000 --- a/en/docs/integrate/examples/data_integration/request-box.md +++ /dev/null @@ -1,168 +0,0 @@ -# Invoking Multiple Operations via Request Box - -This example demonstrates how a data service can invoke request -box operations. The **request box** feature allows you to invoke -multiple operations (consecutively) to a datasource using a single -operation. - -**Boxcarring** is a method of grouping a set of service calls so that -they can be executed as a group (i.e., the individual service calls are -executed consecutively in the specified order). Note that we have now -deprecated the boxcarring method for data services. Instead, we have -replaced boxcarring with a new request type called **request_box** . - -**Request box** is simply a wrapper element (request_box), which wraps -the service calls that need to be invoked. When the request_box is -invoked, the individual service calls are executed in the specified -order, and the result of the last service call in the list are returned. -In this tutorial, we are using the request box to invoke the following -two service calls: - -1. Add a new employee to the database -2. Get details of the office of the added employee - -When you click the **Enable Boxcarring** check box for the data service, -both of the above functions (**Boxcarring** and **Request box**) are -enabled. However, since boxcarring is deprecated in the product, it is -recommended to disable boxcarring by clicking the **Disable Legacy -Boxcarring Mode** check box. - -## Prerequisites - -Let's create a MySQL database with the required data. - -1. Install the MySQL server. -2. Create a database named **Company**. - - ```bash - CREATE DATABASE Company; - ``` - -3. Create the **Employees** table: - - ```bash - USE Company; - - CREATE TABLE `Employees` (`EmployeeNumber` int(11) NOT NULL, `FirstName` varchar(255) NOT NULL, `LastName` varchar(255) DEFAULT NULL, `Email` varchar(255) DEFAULT NULL, `JobTitle` varchar(255) DEFAULT NULL, `OfficeCode` int(11) NOT NULL, PRIMARY KEY (`EmployeeNumber`,`OfficeCode`)); - ``` - -## Synapse configuration -Given below is the data service configuration you need to build. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Tip - Be sure to replace the datasource username and password with the correct values for your MySQL instance. 
- -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/Company - root - password - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email,OfficeCode) values(:EmployeeNumber,:FirstName,:LastName,:Email,:OfficeCode) - - - - - - - - select EmployeeNumber, FirstName, LastName, Email, OfficeCode from Employees where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Download the JDBC driver for MySQL from [here](http://dev.mysql.com/downloads/connector/j/) and copy it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` (for MacOS) or -`MI_TOOLING_HOME/runtime/microesb/lib/` (for Windows) directory. - - !!! Note - If the driver class does not exist in the relevant folders when you create the datasource, you will get an exception such as `Cannot load JDBC driver class com.mysql.jdbc.Driver`. - -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs). -4. [Create the data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Let's send a request with multiple transactions to the data service: - -1. Download and Install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service. -2. Create a new SOAP Project in the SoapUI using following WSDL file: - ```bash - http://localhost:8290/services/request_box_example?wsdl - ``` - -3. Invoke the **request_box** under **request_box_exampleSOAP12Binding** with the following request body: - - !!! Tip - Note that we are sending two transactions with details of two employees. - - ```xml - - - - - 1003 - - Chris - - Sam - - chris@sam.com - - 1 - - - - - 1003 - - - ``` - -You will see the following response received by SoapUI: - -```xml - - - - - - 1003 - Chris - Sam - chris@sam.com - 1 - - - - - -``` diff --git a/en/docs/integrate/examples/data_integration/swagger-data-services.md b/en/docs/integrate/examples/data_integration/swagger-data-services.md deleted file mode 100644 index d0fff72483..0000000000 --- a/en/docs/integrate/examples/data_integration/swagger-data-services.md +++ /dev/null @@ -1,108 +0,0 @@ -# Using Swagger Documents of RESTful Data Services - -When RESTful resources are added to the data service, the Micro Integrator generates a corresponding swagger 3.0 (OpenApi) definition automatically. You can access this Swagger document by suffixing the service URL with `?swagger.json` or `?swagger.yaml` as shown below. - -- JSON format - - ```bash - http://localhost:8290/services/?swagger.json - ``` - -- YAML format - - ```bash - http://localhost:8290/services/?swagger.yaml - ``` - -This example demonstrates how a custom Swagger definition is published for a RESTful data service. - -## Synapse configuration - -Following is a sample data service configuration with a custom Swagger definition. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Note - The custom Swagger file (JSON file) is saved to the Micro Integrator's registry. The `publishSwagger` element in the data service configuration specifies the registry path. 
In this example, we are storing the Swagger definition in the governance registry as shown below. - -```xml - - - - select EmployeeNumber, FirstName, LastName, Email, Salary from Employees where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - root - - jdbc:mysql://localhost:3306/Employees - com.mysql.jdbc.Driver - - - - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email, Salary) values(:EmployeeNumber,:FirstName,:LastName,:Email,:Salary) - - - - - - - - - update Employees set FirstName=:FirstName, LastName=:LastName, Email=:Email, Salary=:Salary where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with a Registry Resources module and a Composite Exporter. -3. [Create a Data Service project]({{base_path}}/integrate/develop/create-data-services-configs) inside the integration project. -4. To create the data service with the above configurations: - - Download the Swagger file: [custom_data_service_swagger.yaml](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-rest-apis/simple_petstore.yaml). - - Follow the instructions on [creating a data service]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services). - -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - - -Copy the following URLs to your browser to see the Swagger documents of your RESTful data service: - -- `http://localhost:8290/services/?swagger.json` -- `http://localhost:8290/services/?swagger.yaml` diff --git a/en/docs/integrate/examples/endpoint_examples/endpoint-error-handling.md b/en/docs/integrate/examples/endpoint_examples/endpoint-error-handling.md deleted file mode 100644 index 2400c03bc7..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/endpoint-error-handling.md +++ /dev/null @@ -1,275 +0,0 @@ -# Endpoint Error Handling - -The last step of message processing inside WSO2 Micro Integrator -is to send the message to a service provider (see also [Working with Mediators]({{base_path}}/reference/mediators/about-mediators)) -by sending the message to a listening service -[endpoint]({{base_path}}/reference/synapse-properties/endpoint-properties). During this process, transport -errors can occur. For example, the connection might time out, or it -might be closed by the actual service. Therefore, endpoint error -handling is a key part of any successful Micro -Integrator deployment. - -Messages can fail or be lost due to various reasons in a real TCP -network. When an error occurs, if the Micro Integrator is not -configured to accept the error, it will mark the endpoint as failed, -which leads to a message failure. By default, the endpoint is marked as -failed for quite a long time, and due to this error, subsequent messages -can get lost. - -To avoid lost messages, you can configure error handling at the endpoint -level. You should also run a few long-running load tests to discover -errors and fine-tune the endpoint configurations for errors that can -occur intermittently due to various reasons. - -## Example 1: Using endpoint error codes -This example demonstrates a simple use case where error codes are used to move an endpoint into Timeout state. 
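An endpoint moves between the "Active", "Timeout", and "Suspended" states based on the errors it encounters. One way to observe this once the API below is deployed is to stop the back-end service and invoke the API repeatedly; the following is a sketch, assuming the `/test` context used throughout this example:

```bash
# With the back-end service unavailable, repeated calls should surface
# connection errors and, per the configuration below, eventually move
# the endpoint into the Suspended state.
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "attempt $i -> HTTP %{http_code}\n" http://localhost:8290/test
done
```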
### Synapse configuration
Following is a sample REST API configuration that can be used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-1) this example.

```xml
<!-- Reconstructed sketch: the XML tags were stripped at extraction. Element names follow the
     Synapse endpoint schema and the listed values are from the original; the API name, endpoint
     name, and back-end URI are assumptions for illustration. -->
<api name="EndpointErrorHandlingAPI" context="/test">
   <resource methods="GET">
      <inSequence>
         <send>
            <endpoint name="SampleEndpoint">
               <address uri="http://localhost:9000/services/SimpleStockQuoteService">
                  <timeout>
                     <duration>60000</duration>
                  </timeout>
                  <markForSuspension>
                     <errorCodes>101504, 101505</errorCodes>
                     <retriesBeforeSuspension>3</retriesBeforeSuspension>
                     <retryDelay>1</retryDelay>
                  </markForSuspension>
                  <suspendOnFailure>
                     <errorCodes>101500, 101501, 101506, 101507, 101508</errorCodes>
                     <initialDuration>1000</initialDuration>
                     <progressionFactor>2</progressionFactor>
                     <maximumDuration>60000</maximumDuration>
                  </suspendOnFailure>
               </address>
            </endpoint>
         </send>
      </inSequence>
   </resource>
</api>
```

In this example, the errors 101504 and 101505 move the endpoint into the "Timeout" state. At that point, three requests can fail with one of these errors before the endpoint is moved into the "Suspended" state. Additionally, errors 101500, 101501, 101506, 101507, and 101508 put the endpoint directly into the "Suspended" state. If a 101503 error occurs, the endpoint remains in the "Active" state because you have not specified it under `suspendOnFailure`. The default setting, which suspends the endpoint for all error codes except the ones specified under `markForSuspension`, applies only if you do not specify error codes under `suspendOnFailure`.

When the endpoint is first suspended, the retry happens after one second. Because the progression factor is 2, the next suspension duration before retry is two seconds, then four seconds, then eight, and so on until it reaches sixty seconds, which is the maximum duration configured. At that point, all subsequent suspension periods are sixty seconds until the endpoint succeeds and is back in the "Active" state, at which point the initial duration is used again on subsequent suspensions.

### Build and run (Example 1)

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above.
4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

Invoke the sample API by executing the following command:

```bash
curl -v -X GET "http://localhost:8290/test"
```

## Example 2: Configuration for Endpoint Dynamic Timeout
Let's look at a sample configuration where the endpoint has a dynamic timeout.

### Synapse configuration

Following is a sample REST API configuration that can be used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-2) this example.

```xml
<!-- Reconstructed sketch: the XML tags were stripped at extraction. The Property mediator sets
     the timeout that the endpoint reads below; the API/endpoint names and the sample timeout
     value are assumptions for illustration. -->
<api name="DynamicTimeoutAPI" context="/test">
   <resource methods="GET">
      <inSequence>
         <property name="timeout" value="20000"/>
         <send>
            <endpoint name="DynamicTimeoutEndpoint">
               <address uri="http://localhost:9000/services/SimpleStockQuoteService">
                  <timeout>
                     <duration>{get-property('timeout')}</duration>
                     <responseAction>discard</responseAction>
                  </timeout>
               </address>
            </endpoint>
         </send>
      </inSequence>
   </resource>
</api>
```

In this example, the timeout value is defined using a [Property mediator]({{base_path}}/reference/mediators/property-mediator) outside the endpoint configuration. The timeout parameter in the endpoint configuration is then evaluated using an XPath expression that references and reads the timeout value. Using this approach, timeout values can be configured without having to change the endpoint configuration.

!!! Info
    You also have the option of defining a dynamic timeout for the endpoint as a [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries).
    ```xml
    <!-- Reconstructed sketch: the entry key and value are assumptions. -->
    <localEntry key="timeout">20000</localEntry>
    ```

### Build and run (Example 2)

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above.
4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

## Example 3: Dynamic endpoint failover management
Let's look at a sample configuration where the endpoint has a dynamic URL with failover management.

### Synapse configuration
Following is a sample REST API configuration that can be used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-3) this example.

```xml
<!-- Reconstructed sketch: the XML tags were stripped at extraction. The timeout and suspension
     values below are from the original; the API/endpoint names and the back-end URI are
     assumptions for illustration. -->
<api name="FailoverManagementAPI" context="/test">
   <resource methods="GET">
      <inSequence>
         <send>
            <endpoint name="NoSuspendEndpoint">
               <address uri="http://localhost:9000/services/SimpleStockQuoteService">
                  <timeout>
                     <duration>30000</duration>
                     <responseAction>fault</responseAction>
                  </timeout>
                  <suspendOnFailure>
                     <errorCodes>-1</errorCodes>
                     <initialDuration>0</initialDuration>
                     <progressionFactor>1.0</progressionFactor>
                     <maximumDuration>0</maximumDuration>
                  </suspendOnFailure>
                  <markForSuspension>
                     <errorCodes>-1</errorCodes>
                  </markForSuspension>
               </address>
            </endpoint>
         </send>
      </inSequence>
   </resource>
</api>
    -``` - -If a dynamic URL is used as the endpoint and if one URL fails, the -endpoint is suspended even though the URL is changed dynamically. Follow -the steps given below to avoid suspension or to re-enable the endpoint. - -- Disabling endpoint suspension: If you do not want the endpoint to be suspended at all, you can configure the `Timeout` , `MarkForSuspension` , and `suspendOnFailure` settings as shown in the following example. - - Use `-1` to disable suspension for the endpoint under the - `MarkForSuspension` and `suspendOnFailure` settings. - - Use `fault` - under the ` ` setting. - - Define the ` ` and - ` ` properties as - ` 0 ` under the - ` suspendOnFailure ` setting. - -Follow any of the options given below to re-enable an endpoint that is suspended. - -- Define the error codes that cause endpoint failure. - For example, use - ` 101504, 101505 ` to - exclude the error codes from ` suspendOnFailure ` - and ` markForSuspension ` under endpoint - configuration, so that the endpoint does not get suspended for these - error codes. -- If the endpoint is defined as a registry resource, activate the - endpoint through the Java Management Extension (JMX). - For example, use the ` switchOn ` operation for - that particular endpoint in the JConsole, which comes under - **MBeans \> org.apache.synapse \> Endpoint** . This activates the - endpoint again. - -### Build and run (Example 3) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the sample API by executing the following command: - -```bash -curl -v -X GET "http://localhost:8290/test" -``` - -You will not observe any endpoint suspended logs for the above API call. - -## Example 4: Configuring retry -You can configure the Micro Integrator to enable or disable retry for an endpoint when a specific error code occurs. - -### Synapse configuration - -```xml -
<!-- Reconstructed sketch: the XML tags were stripped at extraction. The error code 101503 and
     the retryConfig structure are from the original; the endpoint names and URIs are
     assumptions for illustration. Retry is disabled for 101503 on the first endpoint and
     enabled on the second. -->
<endpoint name="NoRetryEndpoint">
    <address uri="http://localhost:9000/services/SimpleStockQuoteService">
        <retryConfig>
            <disabledErrorCodes>101503</disabledErrorCodes>
        </retryConfig>
    </address>
</endpoint>

<endpoint name="RetryEndpoint">
    <address uri="http://localhost:9000/services/SimpleStockQuoteService">
        <retryConfig>
            <enabledErrorCodes>101503</enabledErrorCodes>
        </retryConfig>
    </address>
</endpoint>
    -
    -``` - -In this example, if the error code 101503 occurs when trying to connect -to the first endpoint, the endpoint is not retried, whereas in the -second endpoint, the endpoint is always retried if error code 101503 -occurs. You can specify enabled or disabled error codes (but not both) -for a given endpoint. diff --git a/en/docs/integrate/examples/endpoint_examples/mtom-swa-with-endpoints.md b/en/docs/integrate/examples/endpoint_examples/mtom-swa-with-endpoints.md deleted file mode 100644 index b46305bacc..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/mtom-swa-with-endpoints.md +++ /dev/null @@ -1,186 +0,0 @@ -# MTOM and SwA Optimizations and Request/Response Correlation - -This sample demonstrates how you can use content optimization mechanisms such as **Message Transmission Optimization Mechanism** (MTOM) and **SOAP with -Attachments** (SwA) with the Micro Integrator. - -By default, the Micro Integrator serializes binary data as Base64 encoded strings and sends them in the SOAP payload. MTOM and SwA define mechanisms over which files with binary content can be transmitted over SOAP web services. - -The configuration sets a local message context property, and forwards -the message to -`http://localhost:9000/services/MTOMSwASampleService` -optimizing the binary content as MTOM. You can see the actual message -sent over the http transport if required by sending this message through -TCPMon. - -During response processing, the -Micro Integrator can identify the past information (by checking the local message property) about the current message context, -and use this knowledge to transfer the response back to the client -in the same format as the original request. - -!!! Note - In a content aware mediation scenario (where the message gets built), you can use the following property to decode the - multipart message that is being sent to the backend. Otherwise, the outgoing message will be in encoded form. - ```xml - - ``` - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - -
<sequence name="main" xmlns="http://ws.apache.org/ns/synapse">
  <in>
    <!-- remember which optimization the client used in a local property -->
    <filter source="get-property('Action')" regex="urn:uploadFileUsingMTOM">
      <property name="example" value="mtom"/>
      <send>
        <endpoint>
          <address uri="http://localhost:9000/services/MTOMSwASampleService" optimize="mtom"/>
        </endpoint>
      </send>
    </filter>
    <filter source="get-property('Action')" regex="urn:uploadFileUsingSwA">
      <property name="example" value="swa"/>
      <send>
        <endpoint>
          <address uri="http://localhost:9000/services/MTOMSwASampleService" optimize="swa"/>
        </endpoint>
      </send>
    </filter>
  </in>
  <out>
    <!-- reply to the client in the same format as the original request -->
    <filter source="get-property('example')" regex="mtom">
      <property name="enableMTOM" value="true" scope="axis2"/>
    </filter>
    <filter source="get-property('example')" regex="swa">
      <property name="enableSwA" value="true" scope="axis2"/>
    </filter>
    <send/>
  </out>
</sequence>
    - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. Open the `deployment.toml` file (stored in the `MI_HOME/conf` directory) and add the following configurations: - - - To enable MTOM: - ```toml - [server] - enable_mtom = true - ``` - When this is enabled, all outgoing messages will be serialized and - sent as MTOM optimized MIME messages. You can override this - configuration per service in the `services.xml` - configuration file. - - - To enable SwA: - ```toml - [server] - enable_swa = true - ``` - When this is enabled, incoming SwA messages are automatically - identified by the Micro Integrator. - -!!! Note - From MI 4.2.0 onwards, there are two different configs for the axis2 and the axis2 blocking client. The above configurations will be used to configure the axis2 client. To enable SwA and MTOM configurations for the axis2 blocking client, you need to add the following configuration as well. - - ```toml - [server] - enable_mtom = true - enable_swa = true - ``` - - -3. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -4. Create the [main sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -When your client executes successfully, it will upload a file containing -the ASF logo, receive its response, and save the response to a -temporary file. - - - -If you use TCPMon and send the message through it, you will see that the requests and responses sent are MTOM/SwA optimized or sent as http -attachments as follows: - -- **MTOM** - - ```xml - POST http://localhost:9000/services/MTOMSwASampleService HTTP/1.1 - Host: 127.0.0.1 - SOAPAction: urn:uploadFileUsingMTOM - Content-Type: multipart/related; boundary=MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353; type="application/xop+xml"; - start="<0.urn:uuid:B94996494E1DD5F9B51177413845354@apache.org>"; start-info="text/xml"; charset=UTF-8 - Transfer-Encoding: chunked - Connection: Keep-Alive - User-Agent: Synapse-HttpComponents-NIO - - --MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353241 - Content-Type: application/xop+xml; charset=UTF-8; type="text/xml" - Content-Transfer-Encoding: binary - Content-ID: - <0.urn:uuid:B94996494E1DD5F9B51177413845354@apache.org>221b1 - - - - - - - - - - - - - --MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177413845353217 - Content-Type: image/gif - Content-Transfer-Encoding: binary - Content-ID: - <1.urn:uuid:78F94BC50B68D76FB41177413845003@apache.org>22800GIF89a... 
<< binary content >> - ``` - -- **SWA** - - ```xml - POST http://localhost:9000/services/MTOMSwASampleService HTTP/1.1 - Host: 127.0.0.1 - SOAPAction: urn:uploadFileUsingSwA - Content-Type: multipart/related; boundary=MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177414170491; type="text/xml"; - start="<0.urn:uuid:B94996494E1DD5F9B51177414170492@apache.org>"; charset=UTF-8 - Transfer-Encoding: chunked - Connection: Keep-Alive - User-Agent: Synapse-HttpComponents-NIO - - --MIMEBoundaryurn_uuid_B94996494E1DD5F9B51177414170491225 - Content-Type: text/xml; charset=UTF-8 - Content-Transfer-Encoding: 8bit - Content-ID: - <0.urn:uuid:B94996494E1DD5F9B51177414170492@apache.org>22159 - - - - - - urn:uuid:15FD2DA2584A32BF7C1177414169826 - - - - 22--34MIMEBoundaryurn_uuid_B94996494E1DD5F9B511774141704912 - 17 - Content-Type: image/gif - Content-Transfer-Encoding: binary - Content-ID: - 22800GIF89a... << binary content >> - ``` diff --git a/en/docs/integrate/examples/endpoint_examples/reusing-endpoints.md b/en/docs/integrate/examples/endpoint_examples/reusing-endpoints.md deleted file mode 100644 index ff8059ce9c..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/reusing-endpoints.md +++ /dev/null @@ -1,30 +0,0 @@ -# Reusing Endpoints - -## Using Indirect Endpoints - -In the following [Send -mediator]({{base_path}}/reference/mediators/send-mediator) -configuration, the ` PersonInfoEpr ` key refers to a -specific endpoint configured. - -``` - - - -``` - -## Using Resolving Endpoints - -!!! Info - The XPath expression specified in a Resolving endpoint configuration derives an existing endpoint rather than the URL of the endpoint to which the message is sent. To derive the endpoint URL to which the message is sent via an XPath expression, use the **Header** mediator. - -In the following [Send -mediator]({{base_path}}/reference/mediators/send-mediator) -configuration, the endpoint to which the message is sent is determined -by the ` get-property('Mail') ` expression. - -``` - - - -``` diff --git a/en/docs/integrate/examples/endpoint_examples/using-address-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-address-endpoints.md deleted file mode 100644 index 647c769f13..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-address-endpoints.md +++ /dev/null @@ -1,76 +0,0 @@ -# Using the Address Endpoint -This sample demonstrates how you can convert a POX message to a SOAP request using an Address endpoint. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - -
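<!-- Indicative sketch: the proxy and service names follow the request shown under
     Build and run; the format="soap11" attribute performs the POX-to-SOAP conversion. -->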
<proxy name="SimpleStockQuoteProxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <send>
        <endpoint>
          <!-- the plain-XML request is serialized as a SOAP 1.1 message before forwarding -->
          <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <send/>
    </outSequence>
  </target>
</proxy>
    - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create a proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request: - -```bash -POST /services/SimpleStockQuoteProxy/StockQuote HTTP/1.1 -Content-Type: application/xml; charset=UTF-8;action="urn:getQuote"; -SOAPAction: urn:getQuote -User-Agent: Axis2 -Host: 127.0.0.1 -Transfer-Encoding: chunked - - - - IBM - - -``` - -This HTTP REST request will be transformed into a SOAP request and forwarded to the stock quote service. \ No newline at end of file diff --git a/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-1.md b/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-1.md deleted file mode 100644 index adc936f570..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-1.md +++ /dev/null @@ -1,66 +0,0 @@ -# Routing Messages to a Dynamic List of Recipients -This example demonstrates message routing to a set of dynamic endpoints. - -## Synapse configuration - -Following are the integration artifacts you can use to implement this scenario. - -=== "Error Handling Sequence" - ```xml - - - - - - - - ``` - -=== "Fault Sequence" - ```xml - - - - - - - - - ``` - -=== "Proxy Service" - ```xml - - - -
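<!-- A minimal sketch of the proxy: the EP_LIST value and endpoint URIs are
     assumptions, not the original sample values; the list is resolved at runtime. -->
<proxy name="DynamicRecipientProxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
  <target faultSequence="errorHandler">
    <inSequence>
      <property name="EP_LIST"
                value="http://localhost:9001/services/SimpleStockQuoteService,http://localhost:9002/services/SimpleStockQuoteService,http://localhost:9003/services/SimpleStockQuoteService"/>
      <property name="OUT_ONLY" value="true"/>
      <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
      <send>
        <endpoint>
          <recipientlist>
            <endpoints value="{get-property('EP_LIST')}" max-cache="20"/>
          </recipientlist>
        </endpoint>
      </send>
    </inSequence>
  </target>
</proxy>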
    - - - - - - - - - - - - - - - - - - - ``` - - \ No newline at end of file diff --git a/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-2.md b/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-2.md deleted file mode 100644 index 9bcf8b3bf0..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-dynamic-recepient-list-endpoints-2.md +++ /dev/null @@ -1,82 +0,0 @@ -# Routing a Message to a Dynamic List of Recipients and Aggregating Responses -This example demonstrates message routing to a set of dynamic endpoints and aggregate responses. - -The sample configuration routes a cloned copy of a message -to each recipient defined within the dynamic recipient list, and -each recipient responds with a stock quote. When all the responses -reach the Micro Integrator, the responses are aggregated to form the final response, -which will be sent back to the client. - -If you sent the client request through a TCP-based conversation -monitoring tool such as TCPMon, you will see the structure of the -aggregated response message. - -## Synapse configuration - -Following are the integration artifacts you can use to implement this scenario. - -=== "Error Handling Sequence" - ```xml - - - - - - - - ``` - -=== "Fault Sequence" - ```xml - - - - - - - - - ``` - -=== "Proxy Service" - ```xml - - - -
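<!-- A minimal sketch, assuming the same EP_LIST style as the previous example: the
     recipient list fans the request out and the Aggregate mediator merges the replies;
     the XPath expression and namespace are assumptions. -->
<target faultSequence="errorHandler" xmlns="http://ws.apache.org/ns/synapse">
  <inSequence>
    <property name="EP_LIST"
              value="http://localhost:9001/services/SimpleStockQuoteService,http://localhost:9002/services/SimpleStockQuoteService,http://localhost:9003/services/SimpleStockQuoteService"/>
    <send>
      <endpoint>
        <recipientlist>
          <endpoints value="{get-property('EP_LIST')}" max-cache="20"/>
        </recipientlist>
      </endpoint>
    </send>
    <drop/>
  </inSequence>
  <outSequence>
    <aggregate>
      <completeCondition>
        <messageCount min="-1" max="-1"/>
      </completeCondition>
      <onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse">
        <!-- the merged payload is returned to the client -->
        <send/>
      </onComplete>
    </aggregate>
  </outSequence>
</target>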
    - - - - - - - - - - - - - - - - - - - - - - - - ``` - - diff --git a/en/docs/integrate/examples/endpoint_examples/using-failover-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-failover-endpoints.md deleted file mode 100644 index d7f70220fc..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-failover-endpoints.md +++ /dev/null @@ -1,204 +0,0 @@ -# Using Failover Endpoints -## Example 1: Failover with one address endpoint - -When message failure is not tolerable even though there is only one -service endpoint, then failovers are possible with a single endpoint as -shown in the below configuration. - -### Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-1) this example. - -```xml - - - - - - - -
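<!-- Interior of the Sample_First endpoint: the duration, error codes, retry count,
     retry delay, and suspension values below match the explanation that follows. -->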
                <timeout>
                  <duration>60000</duration>
                </timeout>
                <markForSuspension>
                  <errorCodes>101504, 101505, 101500</errorCodes>
                  <retriesBeforeSuspension>3</retriesBeforeSuspension>
                  <retryDelay>10</retryDelay>
                </markForSuspension>
                <suspendOnFailure>
                  <initialDuration>1000</initialDuration>
                  <progressionFactor>2</progressionFactor>
                  <maximumDuration>64000</maximumDuration>
                </suspendOnFailure>
              </address>
            </endpoint>
          </failover>
        </endpoint>
      </send>
    </inSequence>
  </resource>
</api>
    -``` - -In the above example, the ` Sample_First ` endpoint is -marked as ` Timeout ` if a connection times out, closes, -or sends IO errors after retrying for ` 60000 ` -milliseconds. - -When one of the errors of the specified codes occur (i.e., -` 101504, 101505 ` and ` 101500) ` , the -failover will retry using the first non-suspended endpoint. In this -case, it is the same endpoint ( ` Sample_First ` ). It -will retry until the retry count (i.e. 3 in the above example) becomes 0 -with a delay as specified by the ` ` -property (i.e., ` 10 ` milliseconds in the above example). - -For all the other errors, it will be marked as -` Suspended ` . For more information about these states -and properties, see [Endpoint Error Handling]({{base_path}}/integrate/examples/endpoint_examples/endpoint_error_handling). - -!!! Info - The retry count is per endpoint, not per message. The retry happens in parallel. Since messages come to this endpoint via many threads, the same message may not be retried three times. Another message may fail and can reduce the retry count. - - In this configuration, we assume that these errors are rare and if they happen once in a while, it is okay to retry again. If they happen frequently and continuously, it means that it requires immediate attention to get it back to normal state. - -### Build and run (example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Send the following request to invoke the API: - -```bash -curl -v -X GET "http://localhost:8290/test" -``` - -The following logs are printed if the endpoint is unreachable: - -```xml -[2019-10-29 12:10:52,557] WARN {API_LOGGER.TestAPI} - ERROR_CODE : 101503 ERROR_MESSAGE : Error connecting to the back end -[2019-10-29 12:10:52,558] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : Sample_First with address http://localhost/myendpoint will be marked SUSPENDED as it failed -``` - -## Example 2: Failover with multiple address endpoints - -When a message reaches a failover endpoint with multiple address -endpoints, it will go through its list of endpoints to pick the first -one in ` Active ` or ` Timeout ` state -(not in the ` Suspended ` state). Then, it will send the -message using that particular endpoint. - -If a failure occurs with the first endpoint within the failover group -and if this error does not put the first endpoint into -` Suspended ` state, the retry will happen using the same -endpoint. - -However, if the first endpoint is suspended or if an error occurs while -sending the message with the first endpoint, the failover endpoint will -go through the endpoint list again from the beginning and will try to -send the requests using the next endpoint, which is in the -` Active ` or ` Timeout ` state. -Nevertheless, when the first endpoint becomes ready to send again, it -will try again on the first endpoint, even though the second endpoint is -still active. For more information about these states and properties, -see [Endpoint Error Handling]({{base_path}}/integrate/examples/endpoint_examples/endpoint_error_handling). 
- -### Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-2) this example. - -Multiple address endpoints are used in this example. - -```xml - - - - - - - - - - 10000 - fault - - - 101503,101504,101505,101507 - 100 - 1.0 - 30000 - - - - - - - 10000 - fault - - - 101503,101504,101505,101507 - 100 - 1.0 - 30000 - - - 101507,101503 - - - - - - - - - - - -``` - -!!! Note - The `` property configures the last child endpoint to stop retying by ending the loop (i.e. to make the endpoint respond back to the service), after attempting to send requests to all the child endpoints and when all the attempts fail. - ```xml - - 101507,101504 - - ``` - According to this configuration, the following error codes are used for sending the endpoint into `Timeout` state: `101504` and `101507`. - -### Build and run (example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an ESB Solution project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project). -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run) in your Micro Integrator. - -Invoke the sample API by executing the following command: - -```bash -curl -v -X GET "http://localhost:8290/test" -``` - -The following logs are getting printed if the endpoint is unreachable: - -```xml -[2019-10-29 13:38:24,354] WARN {API_LOGGER.TestAPI} - ERROR_CODE : 101503 ERROR_MESSAGE : Error connecting to the back end -[2019-10-29 13:38:24,354] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : fooEP with address http://localhost:8080/foo will be marked SUSPENDED as it failed -[2019-10-29 13:38:24,354] WARN {org.apache.synapse.endpoints.EndpointContext} - Suspending endpoint : fooEP with address http://localhost:8080/foo - current suspend duration is : 100ms - Next retry after : Tue Oct 29 13:38:24 IST 2019 -[2019-10-29 13:38:24,355] WARN {org.apache.synapse.endpoints.FailoverEndpoint} - AnonymousEndpoint Detect a Failure in a child endpoint : Endpoint [fooEP] -[2019-10-29 13:38:24,356] WARN {org.apache.synapse.transport.passthru.ConnectCallback} - Connection refused or failed for : localhost/127.0.0.1:8080 -[2019-10-29 13:38:24,357] WARN {API_LOGGER.TestAPI} - ERROR_CODE : 101503 ERROR_MESSAGE : Error connecting to the back end -[2019-10-29 13:38:24,357] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : barEP with address http://localhost:8080/bar will be marked SUSPENDED as it failed -[2019-10-29 13:38:24,357] WARN {org.apache.synapse.endpoints.EndpointContext} - Suspending endpoint : barEP with address http://localhost:8080/bar - current suspend duration is : 100ms - Next retry after : Tue Oct 29 13:38:24 IST 2019 -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/endpoint_examples/using-http-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-http-endpoints.md deleted file mode 100644 index 22455d056b..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-http-endpoints.md +++ /dev/null @@ -1,48 +0,0 @@ -# Using the HTTP Endpoint - -See the examples given below on how to effectively use the HTTP endpoint. 
- -## Example 1: Populating an HTTP endpoint during mediation - -Shown below is a synapse configuration that defines an endpoint artifact. - -```xml - - - -``` - -The URI template variables in this example HTTP endpoint can be populated during mediation using the following sequence configuration: - -```xml - - - - - - - - - -``` - -This configuration will cause the RESTful URL to evaluate the following: - -```bash -http://localhost:8080/PizzaShopServlet/restapi/PizzaWS/menu?category=pizza&type=pan -``` - -## Example 2 - -You can specify one parameter as the HTTP endpoint by -using multiple other parameters, and then pass that to define the HTTP -endpoint as follows: - -```xml - - - - - - -``` diff --git a/en/docs/integrate/examples/endpoint_examples/using-loadbalancing-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-loadbalancing-endpoints.md deleted file mode 100644 index 7f731ce92d..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-loadbalancing-endpoints.md +++ /dev/null @@ -1,133 +0,0 @@ -# Using the Load Balance Endpoint - -**Session Affinity Load Balancing between Three Endpoints** - -This sample demonstrates how the Micro Integrator can handle load balancing with -session affinity using simple client sessions. Here the -session type is specified as ` simpleClientSession ` . -This is a client initiated session, which means that the client -generates the session identifier and sends it with each request. In this -sample session type, the client adds a SOAP header named ClientID -containing the identifier of the client. The MI binds this ID with a -server on the first request and sends all successive requests containing -that ID to the same server. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. - -=== "Proxy Service" - ```xml - - - -
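    <!-- Representative sample: three load-balanced endpoints with client-initiated
         (simpleClientSession) session affinity; the endpoint URIs are placeholders
         for the tcpmon listeners on ports 9001-9003. -->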
    <proxy name="LoadBalanceProxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
      <target faultSequence="errorHandler">
        <inSequence>
          <send>
            <endpoint>
              <!-- requests carrying the same session identifier stick to one server -->
              <session type="simpleClientSession"/>
              <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                <endpoint>
                  <address uri="http://localhost:9001/services/SimpleStockQuoteService"/>
                </endpoint>
                <endpoint>
                  <address uri="http://localhost:9002/services/SimpleStockQuoteService"/>
                </endpoint>
                <endpoint>
                  <address uri="http://localhost:9003/services/SimpleStockQuoteService"/>
                </endpoint>
              </loadbalance>
            </endpoint>
          </send>
        </inSequence>
        <outSequence>
          <send/>
        </outSequence>
      </target>
    </proxy>
    ```

=== "Sequence"
    ```xml
    <!-- fault sequence that returns an error to the client when no endpoint responds -->
    <sequence name="errorHandler" xmlns="http://ws.apache.org/ns/synapse">
      <makefault response="true">
        <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:VersionMismatch"/>
        <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
      </makefault>
      <send/>
    </sequence>
    - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the Proxy]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and the Sequence with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -3. Open the `tcpmon` application, which is in `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/bin/` (in MacOS) or `MI_TOOLING_HOME/runtime/microesb/bin` (in Windows/Linux) directory. -4. Configure `tcpmon` to listen to ports `9001, 9002, and 9003` and set the target hostname to `localhost` and target port to `9000` in each instance. - -Invoking the proxy service: - -Send the following request **3 or more times**. Make sure to include a `simpleClientSession` to the header. - -```xml -POST http://localhost:8290/services/LoadBalanceProxy HTTP/1.1 -Content-Type: text/xml;charset=UTF-8 -simpleClientSession: 123 - - - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` - -Analyzing the output: - -When inspecting the `tcpmon`, you will see that each listener -has received a request (If you have only sent 3 requests, otherwise more than 1). This is because, -when multiple requests are sent with the same session ID, they are distributed across -the three endpoints in a round robin manner. diff --git a/en/docs/integrate/examples/endpoint_examples/using-static-recepient-list-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-static-recepient-list-endpoints.md deleted file mode 100644 index 3a76e09f36..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-static-recepient-list-endpoints.md +++ /dev/null @@ -1,76 +0,0 @@ -# Routing Messages to a Static List of Recipients -!!! Note - This documentation is currently under review. You might encounter some errors when trying out this sample in WSO2 Integration Studio. Please refer [this issue](https://github.com/wso2/integration-studio/issues/37) for details. - -This example demonstrates how messages can be routed to a list of static endpoints. This configuration routes a cloned copy of a message to each recipient defined within the static recipient list. The Micro Integrator will create cloned copies of the message and route to the three endpoints mentioned in the configuration. The back-end service prints the details of the placed order. - -## Synapse configuration -Following is a sample proxy service configuration and mediation sequence that we can used to implement this scenario. - -=== "Proxy Service" - ```xml - - - -
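    <!-- Representative sample: a cloned copy of the placeOrder message is routed to
         each of the three static recipients; the endpoint URIs are placeholders. -->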
    <target faultSequence="errorHandler">
      <inSequence>
        <!-- out-only: the client receives 202 Accepted while copies go to every recipient -->
        <property name="OUT_ONLY" value="true"/>
        <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
        <send>
          <endpoint>
            <recipientlist>
              <endpoint>
                <address uri="http://localhost:9001/services/SimpleStockQuoteService"/>
              </endpoint>
              <endpoint>
                <address uri="http://localhost:9002/services/SimpleStockQuoteService"/>
              </endpoint>
              <endpoint>
                <address uri="http://localhost:9003/services/SimpleStockQuoteService"/>
              </endpoint>
            </recipientlist>
          </endpoint>
        </send>
      </inSequence>
    </target>
    - - - - - - - - - - - - - - ``` - -=== "Error Handling Sequence" - ```xml - - - - - - - - ``` - - diff --git a/en/docs/integrate/examples/endpoint_examples/using-websocket-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-websocket-endpoints.md deleted file mode 100644 index c6ab686531..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-websocket-endpoints.md +++ /dev/null @@ -1,114 +0,0 @@ -# Using a WebSocket Endpoint - -WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. This can be used by any client or server application. The Micro Integrator provides WebSocket support via the [WebSocket Transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-websocket-transport) and the [WebSocket Inbound Protocol]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket). - -## Example 1: Sending a Message from a WebSocket Client to a WebSocket Endpoint - -If you need to send a message from a WebSocket client to a WebSocket -endpoint via WSO2 MI, you need to establish a -persistent WebSocket connection from the WebSocket client to WSO2 MI as well as from WSO2 MI to the -WebSocket back-end. - -To demonstrate this scenario, you need to create two dispatching -sequences. One for the client to back-end mediation, and another for the -back-end to client mediation. Finally you need to configure the -WebSocket inbound endpoint of WSO2 MI to use the -created sequences and listen on port 9092. - -For sample synapse configurations, see [WebSocket Inbound]({{base_path}}/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket). - -If you analyze the log, you will see that a connection from the -WebSocket client to WSO2 MI is established, and the -sequences are executed by the WebSocket inbound endpoint. You will also -see that the message sent to the WebSocket server is not transformed, -and that the response injected to the out sequence is also not -transformed. - -## Example 2: Sending a Message from a HTTP Client to a WebSocket Endpoint - -If you need to send a message from a HTTP client to a WebSocket endpoint -via the Micro Integrator, you need to establish -a persistent WebSocket connection from WSO2 MI to the -WebSocket back-end. - -To demonstrate this scenario, you need to create two dispatching -sequences. One for the client to back-end mediation, and another for the -back-end to client mediation. Then you need to create a proxy service to -call the created sequences. - -### Synapse configuration -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -Create the sequence for client to backend mediation, sequence for the backend to client mediation, and a proxy service as to call the sequences. - -=== "Sequence (Backend Mediation)" - ```xml - - - - - - -
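    <!-- Sketch of the client-to-back-end dispatch sequence; the ws:// URL assumes the
         sample WebSocket server that is started on port 8082 in the steps below. -->
    <sequence name="dispatchSeq" xmlns="http://ws.apache.org/ns/synapse">
      <send>
        <endpoint>
          <address uri="ws://localhost:8082/websocket"/>
        </endpoint>
      </send>
    </sequence>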
    - - - - ``` - -=== "Sequence (Backend to Client Mediation)" - ```xml - - - - ``` - -=== "Proxy Service" - ```xml - - - - - ``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). - - !!! Note - The Websocket sender functionality of the Micro Integrator is disabled by default. To enable the transport, open the `deployment.toml` file from the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/conf/` directory and add the following: - - ```toml - [transport.ws] - sender.enable = true - ``` - -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Starting the WebSocket server: - -- Download the netty artifacts zip file from [here](https://github.com/wso2-docs/ESB) and extract it. The extracted folder will be shown as `ESB`. -- Open a terminal, navigate to `ESB/ESB-Artifacts/Netty_artifacts_for_WebSocket_samples` and execute the following command to start the WebSocket server on port 8082: - - ```bash - java -cp 'netty-example-4.0.30.Final.jar:lib/*:.' io.netty.example.http.websocketx.server.WebSocketServer - ``` - -Calling the Proxy service: - -- Execute the following command to call the proxy service: -```bash -curl -v --request POST -d "Value" -H Content-Type:"text/xml" http://localhost:8290/services/websocketProxy1 -``` - -If you analyze the log, you will see that an HTTP request is sent to the -WebSocket server, and that the WebSocket server injects the response to -the out sequence. diff --git a/en/docs/integrate/examples/endpoint_examples/using-wsdl-endpoints.md b/en/docs/integrate/examples/endpoint_examples/using-wsdl-endpoints.md deleted file mode 100644 index 90183dd5d1..0000000000 --- a/en/docs/integrate/examples/endpoint_examples/using-wsdl-endpoints.md +++ /dev/null @@ -1,122 +0,0 @@ -# Using the WSDL Endpoint -This sample demonstrates how you can use a WSDL endpoint as the target -endpoint. The configuration in this sample uses a WSDL endpoint inside -the send mediator. This WSDL endpoint extracts the target endpoint reference from the WSDL document specified in the configuration. In this -configuration the WSDL document is specified as a URI. - -### Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - -
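<!-- Indicative sketch: the service and port attributes select the endpoint EPR from
     sample_proxy_1.wsdl, as explained at the end of this example; the wsdl uri is a
     placeholder for your local copy of the file. -->
<proxy name="SimpleStockQuoteProxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <send>
        <endpoint>
          <wsdl uri="file:./sample_proxy_1.wsdl"
                service="SimpleStockQuoteService"
                port="SimpleStockQuoteServiceHttpSoap11Endpoint"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <send/>
    </outSequence>
  </target>
</proxy>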
    - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create a proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -The WSDL file `sample_proxy_1.wsdl` can be downloaded from [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl). -The WSDL URI of the endpoint needs to be updated with the path to the `sample_proxy_1.wsdl` file. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send a request to invoke the service: - -```bash -POST http://localhost:8290/services/SimpleStockQuoteProxy.SimpleStockQuoteProxyHttpSoap11Endpoint HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:mediate" -Content-Length: 428 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` - -You will see the following output as the response: - -```xml - - - - created - - - -``` - -The WSDL endpoint -inside the send mediator extracts the EPR from the WSDL document. -Since WSDL documents can have many services and many ports inside each -service, the service and port of the required endpoint has to be -specified in the configuration via the ` service ` and -` port ` attributes respectively. When it comes to -address endpoints, the QoS parameters for the endpoint can be specified -in the configuration. An excerpt taken from -` sample_proxy_1.wsdl ` , which is the WSDL document used -in above sample is given below. - -```xml - - - - - - - - -``` - -According to the above WSDL, the service and port specified in the -configuration refers to the endpoint address: -`http://localhost:9000/services/SimpleStockQuoteService` diff --git a/en/docs/integrate/examples/file-processing/accessing_windows_share_using_vfs_transport.md b/en/docs/integrate/examples/file-processing/accessing_windows_share_using_vfs_transport.md deleted file mode 100644 index 76825b4b3e..0000000000 --- a/en/docs/integrate/examples/file-processing/accessing_windows_share_using_vfs_transport.md +++ /dev/null @@ -1,139 +0,0 @@ -# Accessing a Windows Share using VFS -This example demonstrates how the [VFS transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/configuring-the-vfs-transport) in WSO2 Micro Integrator can be used to access a windows share. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. - -```xml - - smb://host/test/in - text/xml - .*\.xml - 15 - smb://host/test/original - smb://host/test/failed - MOVE - MOVE - - - = -
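<!-- Sketch of the proxy target (the VFS parameters above define the polled share):
     the request goes to the SimpleStockQuote back end and the response is written to
     the out folder of the share; the WSDL registry path is a placeholder. -->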
<target>
  <inSequence>
    <header name="Action" value="urn:getQuote"/>
    <send>
      <endpoint>
        <wsdl uri="gov:sample_proxy_1.wsdl"
              service="SimpleStockQuoteService"
              port="SimpleStockQuoteServiceHttpSoap11Endpoint"/>
      </endpoint>
    </send>
  </inSequence>
  <outSequence>
    <!-- name the reply file after the message ID and write it back to the share -->
    <property name="transport.vfs.ReplyFileName"
              expression="fn:concat(fn:substring-after(get-property('MessageID'), 'urn:uuid:'), '.xml')"
              scope="transport"/>
    <property name="OUT_ONLY" value="true"/>
    <send>
      <endpoint>
        <address uri="vfs:smb://host/test/out"/>
      </endpoint>
    </send>
  </outSequence>
</target>
    - - - - - - -``` - -## Build and run - -To test this sample, the following files and directories should be created: -1. Download the provider [jar](https://repo1.maven.org/maven2/jcifs/jcifs/1.3.17/jcifs-1.3.17.jar) and place it in /lib directory and continue with the feature. - Please note that, since the above library is licensed under LGPL version 2.1 and by downloading and installing the library you will have to comply with the terms of LGPL version 2.1 and its restrictions as found in [https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html). - -2. Create the file directories: - - - Create a directory named **test** on a windows machine and create - three sub directories named **in** , **out** and **original** within - the test directory. - - Grant permission to the network users to read from and write to the - **test** directory and sub directories. - - Be sure to update the **in**, **original**, and **original** directory locations with the values given as the - `transport.vfs.FileURI`, - `transport.vfs.MoveAfterProcess`, - `transport.vfs.MoveAfterFailure` parameter values in your synapse configuration. - - You need to set both `transport.vfs.MoveAfterProcess` and `transport.vfs.MoveAfterFailure` parameter values to point to the **original** directory location. - - Be sure that the endpoint in the `` points to the **out** directory location. Make sure that the prefix `vfs:` in the endpoint URL is not removed or changed. - -3. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources). Change the registry path of the proxy accordingly. - -4. Set up the back-end service. - - - Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) - - Extract the downloaded zip file. - - Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. - - Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -5. Create the `test.xml` file shown below and copy it to the location specified by `transport.vfs.FileURI` in the configuration (i.e., the **in** directory). This contains a simple stock quote request in XML/SOAP format. - - ```xml - - - - - - IBM - - - - - ``` -When the sample is executed, the VFS transport listener picks the file from the **in** directory and sends it to the back service over HTTP. Then the request XML file is moved to the **original** directory and the response is saved to the **out** directory. - -## Using SMB2 for VFS transport - -!!! important "SMB3 Support in VFS Transport" - Starting from version API-M 4.1.0, the VFS (Virtual File System) transport in MI now supports both SMB2 and SMB3 protocols for Windows share URI configurations. This enhancement allows for improved performance, security, and compatibility with modern SMB implementations. - -Windows share URI format for SMB v2/3 use cases is shown below. - -``` -smb2://[username]:[password]@[hostname]/[absolute-path] -``` -You can use the proxy given below to test the SMB2 functionality. - -``` xml - - - - - - - - - -
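<!-- Indicative sketch assembled around the parameter values listed in this block;
     the proxy simply logs and drops whatever it picks up from the SMB2 share. -->
<proxy name="SMB2FileProxy" startOnLoad="true" transports="vfs" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <log level="full"/>
      <drop/>
    </inSequence>
  </target>
  <parameter name="transport.PollInterval">15</parameter>
  <parameter name="transport.vfs.FileURI">smb2://username:password@/host/SMBFileShare/in</parameter>
  <parameter name="transport.vfs.ContentType">text/plain</parameter>
  <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
  <parameter name="transport.vfs.MoveAfterFailure">smb2://username:password@/host/SMBFileShare/fail</parameter>
  <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.txt</parameter>
  <parameter name="transport.vfs.MoveAfterProcess">smb2://username:password@/host/SMBFileShare/original</parameter>
</proxy>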
    - - - - - 15 - smb2://username:password@/host/SMBFileShare/in - text/plain - MOVE - smb2://username:password@/host/SMBFileShare/fail - MOVE - .*\.txt - smb2://username:password@/host/SMBFileShare/original - -``` diff --git a/en/docs/integrate/examples/file-processing/mailto-transport-examples.md b/en/docs/integrate/examples/file-processing/mailto-transport-examples.md deleted file mode 100644 index 58208b9e1e..0000000000 --- a/en/docs/integrate/examples/file-processing/mailto-transport-examples.md +++ /dev/null @@ -1,179 +0,0 @@ -# MailTo Transport Examples - -## Globally setting the email sender - -When the [MailTo transport sender is enabled]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport) for the Micro Integrator, you can configure your mediation sequences to send emails. In this example, the email sender credentials are set globally in the `deployment.toml` file (stored in the `MI_HOME/conf` directory). You need to specify a valid email address prefixed with the transport sender name (which is specified in the deployment.toml file) in your mediation flow. For example, if the transport sender is 'mailto', the endpoint URL in the synapse configuration should be as follows: `mailto:targetemail@mail.com` - -### Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. - -```xml - - - - - - -
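<!-- Minimal sketch, assuming a one-way send: the mailto endpoint address matches the
     example given in the text; the sender account comes from deployment.toml. -->
<proxy name="EmailSendProxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <!-- the incoming payload becomes the mail body -->
      <property name="OUT_ONLY" value="true"/>
      <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
      <property name="Subject" value="Test email" scope="transport"/>
      <send>
        <endpoint>
          <address uri="mailto:targetemail@mail.com"/>
        </endpoint>
      </send>
    </inSequence>
  </target>
</proxy>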
    - - - - - - - -``` - -!!! Note - - Incoming payload will be sent as mail body. - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Open the `deployment.toml` file from the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/conf` directory, and [enable the MailTo transport sender]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport). -4. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke the proxy service by sending a request. For example use SOAP UI. Check the inbox of your email account, which is configured as the target endpoint. You will receive an email from the email sender that is configured globally in the deployment.toml file. - -## Dynamically setting the email sender - -In this example, let's set the email sender details by adding **Property** mediators to the mediation sequence. If these details are not provided in the proxy service, the system uses the [email sender configurations](#globally-setting-the-email-sender) in the deployment.toml file explained above. - -### Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. - -Enter a valid email address prefixed with the transport sender name (specified in the `deployment.toml` file). For example, if the transport sender is 'mailto', the endpoint URL should be as follows: `mailto:targetemail@mail.com`. - -!!! Note - - You need to update the property values with actual values of the mail sender account. - - In some email service providers, the value for the 'mail.smtp.user' property is the same as the email address of the account. - -!!! Tip - For testing purposes, be sure to enable access from less secure apps to your email account. See the documentation from your email service provider for instructions. - -```xml - - - - - - - - - - -
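<!-- Sketch of the in-sequence for the dynamic variant: sender credentials are set per
     message as transport-scope properties; every value shown is a placeholder. -->
<property name="From" value="sender@mail.com" scope="transport"/>
<property name="mail.smtp.host" value="smtp.gmail.com" scope="transport"/>
<property name="mail.smtp.port" value="587" scope="transport"/>
<property name="mail.smtp.user" value="sender@mail.com" scope="transport"/>
<property name="mail.smtp.password" value="password" scope="transport"/>
<send>
  <endpoint>
    <address uri="mailto:targetemail@mail.com"/>
  </endpoint>
</send>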
    - - - - - - - -``` -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Open the `deployment.toml` file from the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/conf` directory, and [enable the MailTo transport sender]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport). -4. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke the proxy service by sending a request. Check your inbox. You will receive an email from the email sender that you configured for the proxy service. - -## Receiving emails - -When the [MailTo transport listener is enabled]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport) for the Micro Integrator, you can configure your mediation sequences to send emails. - -In this example, let's configure your mediation sequence to receive emails and then process the email contents. The MailTo transport receiver should be configured at service level and each service configuration should explicitly state the mail transport receiver configuration. This is required to enable different services to receive mails over different mail accounts and configurations. - -!!! Info - You need to provide correct parameters for a valid mail account at the service level. - -### Synapse configuration - -In this sample, we used the `transport.mail.ContentType` property to make sure that the transport parses the request message as POX. If you remove this property, you may still be able to send requests using a standard mail client. Instead of writing the XML in the body of the message, you add it as an attachment. In that case, you should use XML as a suffix for the attachment and format the request as a SOAP 1.1 message. Indeed, for a file with suffix XML, the mail client will most likely use text/XML as the content type, exactly as required for SOAP 1.1. Sending a POX message using this approach will be a lot trickier because most standard mail clients do not allow the user to explicitly set the content type. - -```xml - - synapse.demo.1@gmail.com - pop3 - 5 - pop.gmail.com - 995 - synapse.demo.1 - mailpassword1 - javax.net.ssl.SSLSocketFactory - false - 995 - application/xml - - - - - - - - -
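<!-- Sketch of the proxy target, assuming the classic mail proxy sample: the mail body
     is forwarded to the stock quote back end and the reply is mailed to the sender. -->
<target>
  <inSequence>
    <property name="senderAddress" expression="get-property('transport', 'From')"/>
    <send>
      <endpoint>
        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
    </send>
  </inSequence>
  <outSequence>
    <property name="Subject" value="Response" scope="transport"/>
    <header name="To" expression="fn:concat('mailto:', get-property('senderAddress'))"/>
    <send/>
  </outSequence>
</target>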
    - - - - - -
    - - - - - - - - - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). The path to this folder is referred to as `MI_TOOLING_HOME` throughout this tutorial. -2. Open the `deployment.toml` file from the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/conf` directory, and [enable the MailTo transport sender]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport). -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources). Change the registry path of the proxy accordingly. -5. Set up the back-end service. - - Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) - - - Extract the downloaded zip file. - - Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. - - Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -6. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Send a plain/text e-mail (Make sure you switch to **Plain text** **mode** when you are composing the email) with the following body and any custom Subject from your mail account to the mail address `synapse.demo.1@gmail.com`. - -```xml - - - IBM - - -``` - -After a few seconds (for example 30 seconds), you should receive a POX response in your e-mail account with the stock quote reply. diff --git a/en/docs/integrate/examples/file-processing/vfs-transport-examples.md b/en/docs/integrate/examples/file-processing/vfs-transport-examples.md deleted file mode 100644 index 60f6fe0f3f..0000000000 --- a/en/docs/integrate/examples/file-processing/vfs-transport-examples.md +++ /dev/null @@ -1,106 +0,0 @@ -# VFS Transport - -The Micro Integrator can access the local file system using the [VFS transport]({{base_path}}/reference/synapse-properties/transport-parameters/vfs-transport-parameters) sender and -receiver. This example demonstrates the VFS transport by using the file system as a transport medium. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. - -```xml - - file:///home/user/test/in - text/xml - .*\.xml - 15 - file:///home/user/test/original - file:///home/user/test/original - MOVE - MOVE - - -
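<!-- Sketch of the proxy target (the VFS parameters above define the polled directory):
     the picked-up request is forwarded to the SimpleStockQuote back end and the
     response is written to the out directory; the WSDL path is a placeholder. -->
<target>
  <inSequence>
    <header name="Action" value="urn:getQuote"/>
    <send>
      <endpoint>
        <wsdl uri="gov:sample_proxy_1.wsdl"
              service="SimpleStockQuoteService"
              port="SimpleStockQuoteServiceHttpSoap11Endpoint"/>
      </endpoint>
    </send>
  </inSequence>
  <outSequence>
    <property name="transport.vfs.ReplyFileName"
              expression="fn:concat(fn:substring-after(get-property('MessageID'), 'urn:uuid:'), '.xml')"
              scope="transport"/>
    <property name="OUT_ONLY" value="true"/>
    <send>
      <endpoint>
        <address uri="vfs:file:///home/user/test/out"/>
      </endpoint>
    </send>
  </outSequence>
</target>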
    - - -
    - - - - - - - - -
    - - - - - - -``` - -To configure a VFS endpoint, use the `vfs:file` prefix in the URI. For example: - -```xml - -
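<!-- e.g., an address endpoint that writes responses to the local out directory -->
<endpoint name="vfs_out" xmlns="http://ws.apache.org/ns/synapse">
  <address uri="vfs:file:///home/user/test/out"/>
</endpoint>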
    - -``` - -## Build and run - -To test this sample, the following files and directories should be created: - -1. Create the file directories: - - - Create 3 new directories (folders) named **in** , **out**, and **original** in a suitable location in a test directory (e.g., - /home/user/test) in the local file system. - - Be sure to update the **in**, **out**, and **original** directory locations with the values given as the - `transport.vfs.FileURI`, - `transport.vfs.MoveAfterProcess`, - `transport.vfs.MoveAfterFailure` parameter values in your synapse configuration. - - You need to set both - ` transport.vfs.MoveAfterProcess ` and - ` transport.vfs.MoveAfterFailure ` parameter - values to point to the **original** directory location. - - Be sure that the endpoint in the `` points to the **out** directory location. Make sure that the prefix - ` vfs: ` in the endpoint URL is not removed or changed. - -2. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources). Change the registry path of the proxy accordingly. - -3. Set up the back-end service. - - - Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) - - Extract the downloaded zip file. - - Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. - - Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -4. Create the `test.xml` file shown below and copy it to the location specified by the `transport.vfs.FileURI` property in the configuration (i.e., the **in** directory). This contains a simple stock quote request in XML/SOAP format. - - ```xml - - - - - - IBM - - - - - ``` - -When the sample is executed, the VFS transport listener picks the file from the **in** directory and sends it to the back service over HTTP. Then the request XML file is moved to the **original** directory and the response is saved to the **out** directory. diff --git a/en/docs/integrate/examples/hl7-examples/acknowledge_hl7_messages.md b/en/docs/integrate/examples/hl7-examples/acknowledge_hl7_messages.md deleted file mode 100644 index 4e843b260e..0000000000 --- a/en/docs/integrate/examples/hl7-examples/acknowledge_hl7_messages.md +++ /dev/null @@ -1,153 +0,0 @@ -# Acknowledging HL7 Messages - -Automatic message acknowledgement for HL7 messages is enabled in the Micro Integrator by default. With this setting, an ACK is immediately returned to the client when a message is received. - -If required, you can disable automatic acknowledgement. This allows you to control how and when ACK/NACK messages should be returned to the client. That is, you can define the integration logic to generate an ack/nack message after message validations or related tasks. - -## Configuring message acknowledgement for HL7 - -When auto acknowledgement for HL7 messages is disabled in the Micro Integrator, you can manually configure ACK/NACK messages in the mediation logic by using the Property mediator. - -!!! Info - Add the following parameter to the proxy service to disable auto acknowledgement and validation: - ```xml - false - ``` - -- Specify an axis2 scope message context property `HL7_GENERATE_ACK` and set its value to true as shown below. 
- - ```xml - - ``` - - This ensures that an ACK/NACK message is created automatically when a message is sent (using the HL7 formatter). By default, an ACK message is created. - -- If a NACK message is required instead, set the result mode to `NACK` and provide a custom NACK message as shown below. - - === "HL7 Result Mode" - ```xml - - ``` - - === "NACK Message" - ```xml - - ``` - -- You can use the `HL7_RAW_MESSAGE` property in the axis2 scope to retrieve the original raw EDI format HL7 message in an InSequence. The user doesn't have to convert from XML to EDI again. Therefore, this may be particularly helpful inside a custom mediator. - - ```xml - - ``` - -- To control the encoding type of incoming messages, set the Java system property `ca.uhn.hl7v2.llp.charset`. - -- If you do want to wait for the back-end application's response before sending the ACK/NACK message to the client, define the following property in the InSequence: - - ```xml - - ``` - - In this case, the request thread will wait until the back-end application returns the response before sending the "accept-acknowledgement" message to the client. - -## Example 1: Generate ACK/NACK before the backend responds - -Consider an example where the client sending the message only requires an acknowledgement from the proxy service that the message was received. It does not need to wait for the back-end service to process the message before receiving this acknowledgement. - -### Synapse configuration - -Given below is a sample proxy service that is configured to send an ACK/NACK as soon as the message is received. - -```xml - - - - - - - - - -
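<!-- Indicative sketch: the HL7 properties generate the ACK in the in-sequence, before
     the back end replies; the back-end address is an assumption, while the parameter
     values (AutoAck/ValidateMessage false, port 9293) are the ones used in this sample. -->
<proxy name="HL7ImmediateAckProxy" startOnLoad="true" transports="hl7" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <property name="HL7_RESULT_MODE" value="ACK" scope="axis2"/>
      <property name="HL7_GENERATE_ACK" value="true" scope="axis2"/>
      <send>
        <endpoint>
          <address uri="hl7://localhost:9988"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <drop/>
    </outSequence>
  </target>
  <parameter name="transport.hl7.AutoAck">false</parameter>
  <parameter name="transport.hl7.ValidateMessage">false</parameter>
  <parameter name="transport.hl7.Port">9293</parameter>
</proxy>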
    - - - - - - - - - - - false - false - 9293 - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -To test this scenario, you need the following: - -- An HL7 client that sends messages to the port specified by the `transport.hl7.Port` parameter. -- An HL7 back-end application that receives messages from the Micro Integrator. - -You can simulate the HL7 client and back-end application using a tool such as HAPI. - -## Example 2: Generate ACK/NACK after the backend responds - -Consider an example where the client sending the message requires an acknowledgement from the proxy service only after the back-end service has processed the message. - -### Synapse configuration - -The following proxy service is configured to send a NACK message after the backend has processed the message. - -```xml - - - - - - - -
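<!-- Indicative sketch: HL7_APPLICATION_ACK makes the request thread wait for the back
     end, and the out-sequence then returns a NACK; apart from port 9294, the names and
     the back-end address are assumptions. -->
<proxy name="HL7BackendAckProxy" startOnLoad="true" transports="hl7" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <property name="HL7_APPLICATION_ACK" value="true" scope="axis2"/>
      <send>
        <endpoint>
          <address uri="hl7://localhost:9988"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <property name="HL7_RESULT_MODE" value="NACK" scope="axis2"/>
      <property name="HL7_NACK_MESSAGE" value="Error while processing the message" scope="axis2"/>
      <send/>
    </outSequence>
  </target>
  <parameter name="transport.hl7.AutoAck">false</parameter>
  <parameter name="transport.hl7.Port">9294</parameter>
</proxy>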
    - - - - - - - - - - false - true - 9294 - - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -To test this scenario, you need the following: - -- An HL7 client that sends messages to the port specified by the `transport.hl7.Port` parameter. -- An HL7 back-end application that receives messages from the Micro Integrator. - -You can simulate the HL7 client and back-end application using a tool such as HAPI. \ No newline at end of file diff --git a/en/docs/integrate/examples/hl7-examples/file_transfer_using_hl7.md b/en/docs/integrate/examples/hl7-examples/file_transfer_using_hl7.md deleted file mode 100644 index c202eefe03..0000000000 --- a/en/docs/integrate/examples/hl7-examples/file_transfer_using_hl7.md +++ /dev/null @@ -1,177 +0,0 @@ -# Using HL7 Messages with File Systems - -The Micro Integrator allows messages to be transferred between HL7 and the file system using the HL7 -and VFS transports. - -## Transferring HL7 messages between file systems - -Let's look at how a proxy service reads HL7 messages stored in a file system and transfers them to another file system. - -### Synapse configuration - -Given below is a proxy service that is configured to detect HL7 files (`.hl7`) in the folder specified by the `transport.vfs.FileURI` parameter. Note that the VFS content type is set to `application/edi-hl7` MIME type with an optional charset encoding. When you save the .hl7 file to the `home/user/test/in` folder, the proxy service invokes the HL7 builders/formatters and builds the HL7 message into its equivalent XML format. It then forwards the message to the VFS endpoint `/tmp/out`. - -!!! Info - Be sure to replace file directories specified below with actual directories in your own file system. - -```xml - - - - - - - - - -
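<!-- A minimal sketch, assuming the proxy name: the VFS parameter values are the ones
     described in this section, and the converted message is written to /tmp/out. -->
<proxy name="HL7FileToFileProxy" startOnLoad="true" transports="vfs" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <property name="OUT_ONLY" value="true"/>
      <send>
        <endpoint>
          <address uri="vfs:file:///tmp/out"/>
        </endpoint>
      </send>
    </inSequence>
  </target>
  <parameter name="transport.PollInterval">5</parameter>
  <parameter name="transport.vfs.FileURI">file:///home/user/test/in</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.hl7</parameter>
  <parameter name="transport.vfs.ContentType">application/edi-hl7;charset="iso-8859-15"</parameter>
  <parameter name="transport.hl7.ValidateMessage">false</parameter>
</proxy>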
    - - - - - 5 - file:///home/user/test/in - .*\.hl7 - application/edi-hl7;charset="iso-8859-15" - false - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -To test this scenario: - -1. Copy the following HL7 message into a text editor and save it with the `.hl7` extension inside the directory you specified with the `transport.vfs.FileURI` parameter in the above example. - - ```bash - MSH|^~\&|Abc|Def|Ghi|JKL|20131231000000||ADT^A01|1234567|P|2.6|||NE|NE|CH| - ``` - -2. See that the files are immediately moved to the folder specified by the endpoint. - -## Transferring messages from HL7 to file system - -Now, let's look at how we can receive an HL7 message and transfer it to a file system. - -### Synapse configuration - -When the following proxy service runs, an HL7 service will start listening on the port defined by the `transport.hl7.Port` parameter. When the HL7 message arrives, the proxy will send an ACK back to the client as specified in the `HL7_RESULT_MODE` property. The HL7 message is then processed and sent to the VFS endpoint, which will save the HL7 message to the given directory. - -!!! Info - Be sure to replace file directories specified below with actual directories in your own file system. - -```xml - - - - - - - - - - -
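<!-- A minimal sketch: the ACK is returned immediately via HL7_RESULT_MODE and the
     message is saved through a VFS endpoint; the target directory is an assumption,
     while port 55555 is the value used in this sample. -->
<proxy name="HL7ToFileProxy" startOnLoad="true" transports="hl7" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <property name="HL7_RESULT_MODE" value="ACK" scope="axis2"/>
      <property name="OUT_ONLY" value="true"/>
      <send>
        <endpoint>
          <address uri="vfs:file:///home/user/test/hl7out"/>
        </endpoint>
      </send>
    </inSequence>
  </target>
  <parameter name="transport.hl7.AutoAck">false</parameter>
  <parameter name="transport.hl7.Port">55555</parameter>
  <parameter name="transport.hl7.ValidateMessage">false</parameter>
</proxy>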
    - - - - - false - 55555 - false - - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -To test this scenario: - -1. Use an HL7 client (such as HAPI) to send a message to the port specified by the `transport.hl7.Port` parameter. -2. See that the message is successfully saved to the file system specified as the endpoint. - -## Transferring messages from HL7 to FTP - -The following configuration is similar to the previous example, but it illustrates how to process files between an HL7 endpoint and files accessed through FTP. - -### Synapse configuration - -Given below is a proxy service that will detect .hl7 files in the `transport.vfs.FileURI` directory and send them to the HL7 endpoint. - -!!! Info - Be sure to replace file directories specified below with actual directories in your own file system. - -```xml - - - - - - - - -
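<!-- A minimal sketch: .hl7 files picked up over SFTP are sent to the HL7 endpoint on
     port 9988 (the port used in the test steps); the sftp URIs are those listed among
     this sample's parameters. -->
<proxy name="HL7ToFTPProxy" startOnLoad="true" transports="vfs" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <send>
        <endpoint>
          <address uri="hl7://localhost:9988"/>
        </endpoint>
      </send>
    </inSequence>
  </target>
  <parameter name="transport.PollInterval">5</parameter>
  <parameter name="transport.vfs.FileURI">vfs:sftp://user:pass@localhost/vfs/in</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.hl7</parameter>
  <parameter name="transport.vfs.ContentType">application/edi-hl7;charset="iso-8859-15"</parameter>
  <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
  <parameter name="transport.vfs.MoveAfterProcess">vfs:sftp://user:pass@localhost/vfs/out</parameter>
  <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
  <parameter name="transport.vfs.MoveAfterFailure">vfs:sftp://user:pass@localhost/vfs/failed</parameter>
</proxy>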
 
 
        2
        MOVE
        5
        false
        vfs:sftp://user:pass@localhost/vfs/out
        vfs:sftp://user:pass@localhost/vfs/in
        vfs:sftp://user:pass@localhost/vfs/failed
        .*\.hl7
        application/edi-hl7;charset="iso-8859-15"
        MOVE
        false
 
 
```

### Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above.
4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator.
5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

To test this scenario:

1. Use an HL7 client (such as HAPI) to receive HL7 messages on the port specified as the endpoint (which is `9988`) in the above proxy service.
2. Place an HL7 message in the `transport.vfs.FileURI` directory and see that the message is passed to the HL7 endpoint and received by the HL7 client.
diff --git a/en/docs/integrate/examples/hl7-examples/hl7_proxy_service.md b/en/docs/integrate/examples/hl7-examples/hl7_proxy_service.md
deleted file mode 100644
index 8504d65c79..0000000000
--- a/en/docs/integrate/examples/hl7-examples/hl7_proxy_service.md
+++ /dev/null
@@ -1,42 +0,0 @@
# Mediating HL7 Messages
You can create a proxy service that uses the HL7 transport to connect to an HL7 server. This proxy service receives HL7 client connections and sends them to the HL7 server. It can also receive XML messages over HTTP/HTTPS and transform them into HL7 before sending them to the server, and it will transform the HL7 responses back into XML.

## Synapse configuration

Given below is an example proxy that receives HL7 messages from a client and relays the message to an HL7 server. See the instructions on how to [build and run](#build-and-run) this example.

```xml
 
 
 
 
 
 
 
 
 
 
 
        9292
 
```

## Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above.
4. [Configure the HL7 transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-hl7-transport) in your Micro Integrator.
5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

To test this scenario, you need the following:

- An HL7 client that sends messages to the port specified by the `transport.hl7.Port` parameter.
- An HL7 back-end application that receives messages from the Micro Integrator.

You can simulate the HL7 client and back-end application using a tool such as HAPI.
\ No newline at end of file
diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/file-inbound-endpoint.md b/en/docs/integrate/examples/inbound_endpoint_examples/file-inbound-endpoint.md
deleted file mode 100644
index 0d1e72ce84..0000000000
--- a/en/docs/integrate/examples/inbound_endpoint_examples/file-inbound-endpoint.md
+++ /dev/null
@@ -1,106 +0,0 @@
# Using the File Inbound Endpoint
## Failure tracking using File Inbound
To track failures in file processing that can occur when a resource
becomes unavailable, the VFS transport creates and maintains a failed
records file. This text file contains a list of files that failed to be
processed. When a failure occurs, an entry with the failed file name and
timestamp is logged in the text file. When the next polling iteration
occurs, the VFS transport checks each file against the failed records
file, and if a file is listed as a failed record, it will skip
processing and schedule a move task to move that file.

### Synapse configuration

Following are the integration artifacts that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example.

=== "Inbound Endpoint"
    ```xml
     
     
     
        1000
        true
        true
        MOVE
        file:///home/user/test/out
        file:///home/user/test/in
        file:///home/user/test/failed
        .*.xml
        text/xml
        MOVE
         
     
    ```

=== "Sequence"
    ```xml
     
        
        
     
    ```

### Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with configurations given in the above example.
4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

To invoke the inbound endpoint, you can create a file with the below content and save it as `request.xml` in the `/home/user/test/in` directory.
- -```xml - - - - - - IBM - - - - -``` - -Once the file is created, the inbound endpoint's sequence (request) is triggered and the following content is logged: - -```xml -To: , MessageID: urn:uuid:CA46833F184F7EAA0E1585819580883, Direction: request, Envelope: - - - IBM - - - -``` - -## Configuring FTP, SFTP, and FILE Connections - -The following section describes how to configure the file inbound protocol for FTP, SFTP, and FILE connections. - -- To configure the file inbound protocol for FTP connections, you should specify the URL as `ftp://{username}:{password}@{hostname/ip_address}/{source_filepath}`: - - ```bash - ftp://admin:pass@localhost/orders - ``` - -- To configure the file inbound protocol for SFTP connections, you should specify the URL as `sftp://{username}:{password}@{hostname/ip_address}/{source_filepath}`: - - ```bash - sftp://admin:pass@localhost/orders - ``` - -!!! Tip - If the password contains special characters, these characters will need to be replaced with their hexadecimal representation. - -- To configure the file inbound protocol for FILE connections, you should specify the URL as `file://{local_file_system_path}`: - - ```bash - file:///home/user/test/in - ``` diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-hl7-protocol-auto-ack.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-hl7-protocol-auto-ack.md deleted file mode 100644 index d3f842e82d..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-hl7-protocol-auto-ack.md +++ /dev/null @@ -1,64 +0,0 @@ -# Using the HL7 Inbound Endpoint (with Auto Ack) -The HL7 inbound endpoint implementation is fully asynchronous and is based on the Minimal Lower Layer Protocol(MLLP) implemented on top of event driven I/O. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - - 20000 - true - 3000 - UTF-8 - false - true - true - - - ``` - -=== "Main Sequence" - ```xml - - - - - - - - - - - ``` - -=== "Fault Sequence" - ```xml - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create two sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) (Main and Fault) and an [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -To execute the sample, use the **HAPI HL7 TestPanel**: - -- Connect to the port defined in the inbound endpoint (i.e., 20000, - which is the value of ` inbound.hl7.Port) ` using - the HAPI HL7 TestPanel. -- Generate and send an HL7 message using the messages dialog frame. - -You will see that the Micro Integrator receives the HL7 message and logs a -serialization of this message in a SOAP envelope. You will also see that -the HAPI HL7 TestPanel receives an acknowledgement. 
diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-http-protocol.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-http-protocol.md
deleted file mode 100644
index 76ee892163..0000000000
--- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-http-protocol.md
+++ /dev/null
@@ -1,93 +0,0 @@
# Using the HTTP Inbound Endpoint
This sample demonstrates how an HTTP inbound endpoint can act as a
dynamic HTTP listener. Multiple HTTP listeners can be added without
restarting the server. When a message arrives at a port, it bypasses
the inbound-side Axis2 layer and is sent directly to the sequence
for mediation. The response is handled in the same way.

## Synapse configuration

Following are the integration artifacts that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example.

=== "Inbound Endpoint"
    ```xml
    
    
    
        8085
        
    
    ```

=== "Sequence"
    ```xml
    
    
        
        
    - - - - - ``` -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-an-inbound-endpoint) with configurations given in the above example. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the inbound endpoint with the below request. - -```xml -POST http://localhost:8085 HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:8290 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - - - IBM - - - - -``` - -The inbound endpoint will capture any request coming through the `8085` port and divert it to the `TestIn` sequence. - -For further details, analyze the output debug messages for the actions in the dumb client mode. You will see that the Micro Integrator receives a message when the Micro Integrator Inbound is set as the ultimate receiver. You will also see the response from the backend in the client. diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-https-protocol.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-https-protocol.md deleted file mode 100644 index 8de000ab0a..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-https-protocol.md +++ /dev/null @@ -1,114 +0,0 @@ -# Using the HTTPS Inbound Endpoint -This sample demonstrates how an HTTPS inbound endpoint can act as a -dynamic HTTPS listener. Many HTTPS listeners can be added without -restarting the server. When a message arrives at a port it will bypass -the inbound side axis2 layer and will be sent directly to the sequence -for mediation. The response also behaves in the same way. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - - 8085 - - - repository/resources/security/wso2carbon.jks - JKS - wso2carbon - wso2carbon - - - - - repository/resources/security/client-truststore.jks - JKS - wso2carbon - - - - - ``` - -=== "Sequence 1" - ```xml - - - - -
    - - - - - ``` - -## Build and run - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. See the instructions on [creating mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) to define the two sequences given above ('Sequence 1' and 'Sequence 2'). -4. See the instructions on [creating an inbound endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-an-inbound-endpoint) to define the inbound endpoint given above. - - !!! Tip - Be sure to add an empty namespace for the keystore and truststore elements (`xmlns=""`) in the inbound endpoint as shown above. This is necessary when you run this example in the embedded Micro Integrator of WSO2 Integration Studio. - -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke the inbound endpoint with the below request. - -!!! Tip - Be sure to add **basic auth** from the http client you use. - -```xml -POST https://localhost:8085 HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:8290 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) -Authorization: Basic YWRtaW46YWRtaW4= - - - - - - - - - IBM - - - - -``` - -Analyze the output debug messages for the actions in the dumb client mode. You will see that the Micro Integrator receives a message when the Micro Integrator Inbound is set as the ultimate receiver. You will also see the response from the back -end in the client. diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-jms-protocol.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-jms-protocol.md deleted file mode 100644 index 34a1caec9d..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-jms-protocol.md +++ /dev/null @@ -1,69 +0,0 @@ -# Using the JMS Inbound Endpoint -This sample demonstrates how one way message bridging from JMS to HTTP can be done using the inbound JMS endpoint. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - - 1000 - ordersQueue - 1 - QueueConnectionFactory - true - org.apache.activemq.jndi.ActiveMQInitialContextFactory - tcp://localhost:61616 - AUTO_ACKNOWLEDGE - false - queue - - - ``` - -=== "Sequence" - ```xml - - - - -
    - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with configurations given in the above example. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq). - -Invoke the inbound endpoint: - -1. Log on to the ActiveMQ console using the URL. -2. Browse the queue `ordersQueue` listening via the above endpoint. -3. Add a new message with the following content to the queue: - - ```xml - - - - - IBM - - - - - ``` - -You will see that the JMS endpoint gets the message from the queue and sends it to the stock quote service. diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-kafka.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-kafka.md deleted file mode 100644 index 8cb2601d24..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-kafka.md +++ /dev/null @@ -1,108 +0,0 @@ -# Using the Kafka Inbound Endpoint - -## Example use case - -This sample demonstrates how one way message bridging from Kafka to HTTP can be done using the inbound Kafka endpoint. - -### Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - - true - 10 - true - polling - org.apache.kafka.common.serialization.StringDeserializer - test - 100 - localhost:9092 - hello - application/json - org.apache.kafka.common.serialization.StringDeserializer - - - ``` - -=== "Sequence" - ```xml - - - - - - - - - - - - - - ``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with configurations given in the above example. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service. - -- Apache Kafka inbound endpoint should be configured. The recommended version for the customized Kafka inbound endpoint is `kafka_2.12-2.2.1`. See [Configuring Kafka]({{base_path}}/install-and-setup/setup/mi-setup/feature_configs/configuring-kafka) for more information. - -- Go to the [WSO2 Connector Store](https://store.wso2.com/store/assets/esbconnector/details/b15e9612-5144-4c97-a3f0-179ea583be88) and click **Download Inbound Endpoint** to download the inbound JAR file. Add the downloaded JAR file to the /dropins directory. - -Run the following commands in the directory to invoke the service. 
- -- Run the following on the Kafka command line to create a topic named `test` with a single partition and only one - replica: - - ```bash - bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test - ``` - -- Run the following on the Kafka command line to send a message to the Kafka brokers. You can also use the **WSO2 ESB Kafka producer** connector to send the message to the Kafka brokers. - - ```bash - bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test - ``` - -- Executing the above command will open up the console producer. Send the following message using the console: - - ```json - {"test":"wso2"} - ``` - -You can see the following Message content in the Micro Integrator: - -```bash -[2020-02-19 12:39:59,331] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: d130fb8f-5d77-43f8-b6e0-85b98bf0f8c1, Direction: request, Payload: {"test":"wso2"} -[2020-02-19 12:39:59,335] INFO {org.apache.synapse.mediators.builtin.LogMediator} - partitionNo = 0 -[2020-02-19 12:39:59,336] INFO {org.apache.synapse.mediators.builtin.LogMediator} - messageValue = {"test":"wso2"} -[2020-02-19 12:39:59,336] INFO {org.apache.synapse.mediators.builtin.LogMediator} - offset = 6 -``` - -The Kafka inbound gets the messages from the Kafka brokers and logs the messages in the Micro Integrator. - -## Using specific topics/topic patterns - -You may consume messages in two ways: Using **specific topics** or using a **topic pattern**. - -=== "Using Specific Topics" - ```xml - test,sampletest - ``` - -=== "Using a Topic Pattern" - ```xml - .*test - ``` diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-mqtt-protocol.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-mqtt-protocol.md deleted file mode 100644 index ae106764c1..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-mqtt-protocol.md +++ /dev/null @@ -1,59 +0,0 @@ -# Using the MQTT Inbound Endpoint -This sample demonstrates how the MQTT connector publishes a message on a -particular topic and how a MQTT client that is subscribed to that topic -receives the message. -Following sections demonstrate how you can try this sample using the -Mosquitto server as the Message Broker. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - - true - mqttConFactory - localhost - 1883 - esb.test - application/xml - 0 - false - false - 1000 - - - ``` - -=== "Sequence" - ```xml - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with configurations given in the above example. - -Set up the MQTT server: - -1. Install Mosquitto. (This sample is tested for [Mosquitto1.6.7 version](https://mosquitto.org/download/)). The Mosquitto server will run automatically in the background. -2. 
Download the [MQTT client library](http://repo.spring.io/plugins-release/org/eclipse/paho/mqtt-client/0.4.0/) (i.e., `mqtt-client-0.4.0.jar`) and add it to the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/lib/` directory.

[Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

Open a new terminal and enter the below command to send an MQTT message using `mosquitto_pub`. Be sure to use the MQTT topic name you entered when creating the inbound endpoint (`esb.test` in the sample configuration above).

`mosquitto_pub -t esb.test -m "Testing123"`

You will see that the Micro Integrator receives a message when the Micro Integrator Inbound is set as the ultimate receiver.
diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-rabbitmq-protocol.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-rabbitmq-protocol.md
deleted file mode 100644
index 5076b4f859..0000000000
--- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-rabbitmq-protocol.md
+++ /dev/null
@@ -1,84 +0,0 @@
# Using the RabbitMQ Inbound Endpoint
This sample demonstrates how one-way message bridging from RabbitMQ to HTTP can be done using the inbound RabbitMQ endpoint.

## Synapse configuration

Following are the integration artifacts that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example.

=== "Sequence"
    ```xml
    
    
        
        
    
    ```

=== "Inbound Endpoint"
    ```xml
    
    
    
        true
        true
        AMQPConnectionFactory
        localhost
        5672
        guest
        guest
        queue
        exchange
        
    
    ```

## Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. Create a [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with configurations given in the above example.
4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

[Configure the RabbitMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq).

Use the following [Java client](https://mvnrepository.com/artifact/com.rabbitmq/amqp-client) to publish a request to the RabbitMQ broker.
- -```java -ConnectionFactory factory = new ConnectionFactory(); -factory.setHost("localhost"); -factory.setUsername("guest"); -factory.setPassword("guest"); -factory.setPort(5672); -Channel channel = null; -Connection connection = factory.newConnection(); -channel = connection.createChannel(); -channel.queueDeclare("queue", false, false, false, null); -channel.exchangeDeclare("exchange", "direct", true); -channel.queueBind("queue", "exchange", "route"); - -// The message to be sent -String message = "" + - "" + - "100" + - "20" + - "RMQ" + - "" + - ""; - -// Populate the AMQP message properties -AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties().builder(); -builder.contentType("application/xml"); - -// Publish the message to exchange -channel.basicPublish("exchange", "queue", builder.build(), message.getBytes()); -``` - -You will see the following Message content: - -```bash -10020RMQ -``` - -The RabbitMQ inbound endpoint gets the messages from the RabbitMQ broker and logs the messages in the Micro Integrator. diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket.md deleted file mode 100644 index 570e518560..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-secured-websocket.md +++ /dev/null @@ -1,140 +0,0 @@ -# Using the Secure WebSocket Inbound Endpoint - -If you need to read and transform the content of WebSocket frames, the -information in incoming WebSocket frames is not sufficient because the -WebSocket protocol does not specify any information about the -content-type of frames that flow through WebSocket channels. Hence, the -Micro Integrator supports a WebSocket -subprotocol extension to determine the content type of WebSocket frames. - -The **WebSocket inbound endpoint** of the Micro Integrator supports the following Synapse subprotocols by default: - -- ` synapse(contentType='application/json') ` -- ` synapse(contentType='application/xml') ` -- ` synapse(contentType='text/xml') ` - -Now let's look at a sample scenario that demonstrates WebSocket to -WebSocket integration using subprotocols to support content handling. - -## Example use case - -Let's say you need to send messages between two WebSocket based systems -using the Micro Integrator as a WebSocket gateway that facilitates -the messaging. Let's also assume that you need to read and transform the -content of WebSocket frames that are sent and received. - -The following should take place in this scenario: - -- The WebSocket Client sends WebSocket frames to the Micro Integrator. -- When the initial handshake happens between the WebSocket client and - the WebSocket inbound endpoint of the Micro Integrator, the WebSocket client sends a `Sec-WebSockets-Protocol` header - that specifies the content type of the WebSocket frame. In this sample it is - `synapse(contentType='application/json')`. -- The WebSocket inbound endpoint of the Micro Integrator determines the content-type of the incoming WebSocket frame using the subprotocol. -- Once the handshake is complete, the WebSocket inbound endpoint builds all the subsequent WebSocket frames based on the content-type specified during the initial handshake. -- The Micro Integrator sends the transformed message in the form of WebSocket frames. - -!!! 
Tip
    If necessary, you can use the [data mapper]({{base_path}}/reference/mediators/data-mapper-mediator) to perform data transformation inside the Micro Integrator message flow. For example, you can perform JSON to JSON transformation. To do this, you have to explicitly apply the required data mapping logic for all WebSocket frames.

## Synapse configuration

Following are the integration artifacts that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example.

Specify the `websocket.accept.contentType` property to inform the WebSocket sender to build the frames with the specified content type, and to include the same subprotocol header that was used to determine the content of the WebSocket frames. In this case it is JSON.

- Create the sequence for client to back-end mediation as follows:

    ```xml
    
    
    
    
    
    
    
    
    
    - - - - ``` - -- Create the sequence for back-end to client mediation as follows: - - ```xml - - - - - - ``` - -- Configure the WebSocket inbound endpoint as follows to use the created sequences and listen on port 9092: - - ```xml - - - - 9092 - 0 - outDispatchSeq - fault - false - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). - - !!! Note - The Websocket sender functionality of the Micro Integrator is disabled by default. To enable the transport, open the `deployment.toml` file from the `MI_TOOLING_HOME/Contents/Eclipse/runtime/microesb/conf/` directory and add the following: - - ```toml - [transport.ws] - sender.enable = true - ``` - -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and [inbound endpoint]({{base_path}}/integrate/develop/creating-an-inbound-endpoint) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Starting the WebSocket client: - -- Download the netty artifacts zip file from [here](https://github.com/wso2-docs/ESB) and extract it. The extracted folder will be shown as `ESB` -- Open a terminal, navigate to `ESB/ESB-Artifacts/Netty_artifacts_for_WebSocket_samples` and execute the following command to start the WebSocket server on port 8082: - ```bash - java -cp netty-example-4.0.30.Final.jar:lib/*:. io.netty.example.http.websocketx.server.WebSocketServer - ``` -- Open a terminal, navigate to `ESB/ESB-Artifacts/Netty_artifacts_for_WebSocket_samples` and execute the following command to start the WebSocket client: - - ```bash - java -DsubProtocol="synapse(contentType='application/json')" -DclientPort=9092 -cp netty-example-4.0.30.Final.jar:lib/*:. io.netty.example.http.websocketx.client.WebSocketClient - ``` - - You will see the following message on the client terminal: - - ```bash - WebSocket Client connected! - ``` - -- Send the following sample JSON payload from the client terminal: - - ```json - {"sample message":"test"} - ``` -When you send a sample JSON payload from the client, you will see that a connection from the WebSocket client to the Micro Integrator is established, and that the Micro Integrator receives the message. - -This shows that the sequences are executed by the WebSocket inbound endpoint. - -You will also see that the message sent to the WebSocket server is transformed, and that the response injected to the out sequence is also transformed. diff --git a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-with-registry.md b/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-with-registry.md deleted file mode 100644 index eaeacd0487..0000000000 --- a/en/docs/integrate/examples/inbound_endpoint_examples/inbound-endpoint-with-registry.md +++ /dev/null @@ -1,55 +0,0 @@ -# Using Inbound Endpoints with the Registry -Other than specifying parameter values inline, you can also -specify parameter values as registry entries. The advantage of -specifying a parameter value as a registry entry is that the same -inbound endpoint configuration can be used in different environments -simply by changing the registry entry value. 
- - - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - 1000 - true - true - MOVE - - - - .*.txt - text/plain - MOVE - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. See the instructions on [creating an inbound endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-an-inbound-endpoint) to define the inbound endpoint given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. \ No newline at end of file diff --git a/en/docs/integrate/examples/integrating-mi-with-si.md b/en/docs/integrate/examples/integrating-mi-with-si.md deleted file mode 100644 index df3940b224..0000000000 --- a/en/docs/integrate/examples/integrating-mi-with-si.md +++ /dev/null @@ -1,103 +0,0 @@ -# Integrating Micro Integrator with WSO2 Streaming Integrator - -You can publish events from the integration flow to WSO2 Streaming Integrator using an http or http-service source configured in a Siddhi application deployed in Streaming Integrator server. The http or http-service source receive POST requests via HTTP and HTTPS protocols in a format such as text, XML, or JSON. In the case of http-service source, it will send responses via its corresponding http-service-response sink correlated through a unique `source.id`. - -In this example, we are using a simple rest API to publish events to the Streaming Integrator, which is configured to receive POST requests via HTTP protocol in JSON format and send responses accordingly. - -## Set up the Siddhi Application - -Follow the instructions below to set up and configure. - -1. [Download and install the Streaming Integrator](https://ei.docs.wso2.com/en/latest/streaming-integrator/quick-start-guide/getting-started/getting-started-guide-overview/). - -2. [Start and create the following Siddhi application](https://ei.docs.wso2.com/en/latest/streaming-integrator/quick-start-guide/getting-started/create-the-siddhi-application/). - - !!! Note - This “ShoppingCart” siddhi app demonstrates a simple scenario where you can add an item to a shopping cart and see the total cost. The siddhi app will receive events via HTTP protocol in JSON format with a custom mapping. 
- - ``` - @App:name("ShoppingCart") - - @App:description('Receive events via HTTP transport in JSON format with custom mapping and view the output on the console') - - @source(type='http-service', receiver.url='http://localhost:5005/addToCart', - source.id='adder', basic.auth.enabled='false', @map(type='json', - @attributes(messageId='trp:messageId', name='$.event.name', price='$.event.price'))) - define stream AddStream (messageId string, name String, price double); - - @sink(type='http-service-response', source.id='adder', message.id='{{messageId}}', @map(type = 'json')) - define stream ResultStream (messageId string, message string, numOfItems long, totalCost double); - - @info(name = 'query1') - from AddStream - select messageId, str:concat('Successfully added ', name, ' to the cart') as message, count() as numOfItems, sum(price) as totalCost - insert into ResultStream; - ``` - - The **http-service source** on stream `AddStream` listens on url `http://localhost:5005/addToCart` for JSON messages with format: - - ```json - { - "event": { - "name": "Cheese", - "Price": 390 - } - } - ``` - - and when events arrive, it maps to `AddStream` events and passes them to query named `query1` for processing. The query results produced on `ResultStream` are sent as a response via **http-service-response sink** with format: - - ```json - { - "event": { - "messageId":"741f30af-89c8-44ce-abbc-8ded26a4c4b7", - "message":"Successfully added Cheese to the cart", - "numOfItems":1, - "totalCost":390.0 - } - } - ``` - -3. [Deploy the application in the Streaming Integrator](https://ei.docs.wso2.com/en/latest/streaming-integrator/quick-start-guide/getting-started/deploy-siddhi-application/). - -## Synapse configuration - -Following is the sample rest API configuration that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) run this example. - - -```xml - - - - - -
    - - - - - -``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an ESB Solution project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project). -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run) in your Micro Integrator. - -Invoke the sample API: - -- Open a terminal and execute the following CURL command. This sends a simple POST request to the Micro Integrator. - - ```bash - curl -X POST -d "{\"event\":{\"name\":\"snacks\",\"price\":10.0}}" http://localhost:8290/addToCart --header "Content-Type:application/json" - ``` - -- You will receive the following response in the console in which you are running the MI server: - ```json - {"event":{"messageId":"7b0da3c2-ddae-4627-8c43-fa09faa1abb7","message":"Successfully added snacks to the cart","numOfItems":1,"totalCost":10.0}} - ``` - diff --git a/en/docs/integrate/examples/jms_examples/consume-produce-jms.md b/en/docs/integrate/examples/jms_examples/consume-produce-jms.md deleted file mode 100644 index d193f80fea..0000000000 --- a/en/docs/integrate/examples/jms_examples/consume-produce-jms.md +++ /dev/null @@ -1,120 +0,0 @@ -# Consuming and Producing JMS Messages - -This section describes how to configure WSO2 Micro Integrator to work as a JMS-to-JMS proxy service. In this example, the Micro Integrator listens to a JMS queue, consumes messages, and then sends those messages to another JMS queue. - -## Synapse configuration - -Given below is the synapse configuration of the proxy service that mediates the above use case. Note that you need to update the JMS connection URL according to your broker as explained below. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - -
    - - - - - -``` - -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. In the sample configuration above, the 'transports' property is set to 'jms', which allows the ESB to receive JMS messages. This proxy service listens to the StockQuoteProxy queue and sends messages to another queue named SimpleStockQuoteService. |
| Property Mediator | The OUT_ONLY property is set to true, which indicates that the message exchange is one-way. |
| Send Mediator | To send a message to a JMS queue, you should define the JMS connection URL as the endpoint address (which should be invoked via the **Send** mediator). There are two ways to specify the endpoint URL, as described below. |

- Specify the JNDI name of the JMS queue and the connection factory parameters in the JMS connection URL as shown in the example below. Values of the connection factory parameters depend on the type of the JMS broker.

    When the broker is ActiveMQ:

    ```bash
    jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue
    ```

    When the broker is WSO2 Message Broker:

    ```bash
    jms:/StockQuotesQueue?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&java.naming.provider.url=conf/jndi.properties&transport.jms.DestinationType=queue
    ```

- If you have already specified the endpoint's connection factory parameters (for the JMS sender configuration) in the deployment.toml file, the connection URL in the proxy service should be as shown below. In this example, the endpoint URL of the proxy service refers to the relevant connection factory in the deployment.toml file:

    When the broker is ActiveMQ:

    ```bash
    jms:/SimpleStockQuoteService?transport.jms.ConnectionFactory=QueueConnectionFactory
    ```

    When the broker is WSO2 Message Broker:

    ```bash
    jms:/StockQuotesQueue?transport.jms.ConnectionFactory=QueueConnectionFactory
    ```
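For the second option, the connection factory itself must be defined as a JMS sender in the `deployment.toml` file. A minimal sketch of such a sender configuration, assuming the same ActiveMQ values used in the URLs above, could look like this:

```toml
# Sketch: JMS sender definition referenced by the endpoint URL above.
# Parameter values mirror the ActiveMQ connection URL in this example.
[[transport.jms.sender]]
name = "QueueConnectionFactory"
parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
parameter.provider_url = "tcp://localhost:61616"
parameter.connection_factory_name = "QueueConnectionFactory"
parameter.connection_factory_type = "queue"
```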
!!! Info
    For details on the JMS transport parameters used in the Micro Integrator, see [JMS transport parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters).

!!! Note
    Be sure to replace the '`&`' character in the endpoint URL with '`&amp;`' to avoid the following exception:
    ```java
    com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'java.naming.factory.initial' at [row,col {unknown-source}
    ```

## Build and run

Create the artifacts:

1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio).
2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and a Composite Exporter.
3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above.
4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator.

Set up the broker:

1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-jms-transport) with your Micro Integrator instance. Let's use ActiveMQ for this example.
2. Start the broker.
3. Start the Micro Integrator (after starting the broker).

Set up the back-end service:

1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip).
2. Extract the downloaded zip file.
3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder.
4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service:

    === "On MacOS/Linux/CentOS"
        ```bash
        sh axis2server.sh
        ```

    === "On Windows"
        ```bash
        axis2server.bat
        ```

You now have a running WSO2 Micro Integrator instance, an ActiveMQ instance, and a sample back-end service to simulate the sample scenario.
Add a message in the `StockQuoteProxy` queue using the [ActiveMQ Web Console](https://activemq.apache.org/web-console.html).
\ No newline at end of file
diff --git a/en/docs/integrate/examples/jms_examples/consuming-jms.md b/en/docs/integrate/examples/jms_examples/consuming-jms.md
deleted file mode 100644
index 0517b99340..0000000000
--- a/en/docs/integrate/examples/jms_examples/consuming-jms.md
+++ /dev/null
@@ -1,314 +0,0 @@
# Consuming JMS Messages
This section describes how to configure WSO2 Micro Integrator to listen to a JMS Queue.

## Example 1: One-way messaging

In this example, the Micro Integrator listens to a JMS queue, consumes messages, and sends them to an HTTP back-end service.

### Synapse configuration

Given below is the synapse configuration of the proxy service that mediates the above use case. Note that you need to update the JMS connection URL according to your broker as explained below.

See the instructions on how to [build and run](#build-and-run) this example.

```xml
 
    
   
    - - - -
    - - - - - - contentType - text/xml - - - -``` - -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. |
| Header Mediator | A header mediator is used to set the SOAPAction header. |
| Property Mediator | The OUT_ONLY property is set to true to indicate that the message exchange is one-way. |
| Endpoint Mediator | To send a message to the HTTP backend, you should define the connection URL as the endpoint address. |
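As a point of reference, flagging the exchange as one-way is a single property mediator placed in the message flow before the endpoint call; a minimal fragment (shown here for illustration) looks like this:

```xml
<!-- Mark the exchange as one-way so the proxy does not wait for a response -->
<property name="OUT_ONLY" value="true"/>
```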
    - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - -Set up the back-end service: - -1. Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -You now have a running WSO2 Micro Integrator instance, ActiveMQ instance, and a sample back-end service to simulate the sample scenario. -Add a message in the `JMStoHTTPStockQuoteProxy` queue with the following XML payload using [ActiveMQ Web Console](https://activemq.apache.org/web-console.html). - -```xml - - - - - - IBM - - - - -``` -The Micro Integrator will read the message from the ActiveMQ queue and send it to the back-end service. You will see the following response in the back-end service console: - -```bash -INFO [wso2/stockquote_service] - Stock quote service invoked. -INFO [wso2/stockquote_service] - Generating getQuote response for IBM -INFO [wso2/stockquote_service] - Stock quote generated. -``` - -!!! Info - You can specify a different content type within the transport.jms.ContentType parameter. In the sample configuration above, the content type defined is `text/xml`. You can make the proxy service a JMS listener by setting its transport as jms. Once the JMS transport is enabled for a proxy service, the Micro Integrator listens on a JMS queue for the same name as the proxy service.
    In the sample configuration above, the Micro Integrator listens to a JMS queue named JMStoHTTPStockQuoteProxy. To make the proxy service listen to a different JMS queue, define the transport.jms.Destination parameter with the name of the destination queue. For more information, you can refer details of the [JMS transport parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) used in the Micro Integrator. - -## Example 2: Two-way HTTP back-end call - -In addition to one-way invocations, the proxy service can listen to the queue, pick up a message, and do a two-way HTTP call as well. It allows the response to be delivered to a queue specified by the client. This is done by specifying a `ReplyDestination` element when placing a request message to a JMS queue. - -### Synapse configuration - -We can have a proxy service similar to the following to simulate a two-way invocation. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - -
    - - -
    - - - - - - - - - - contentType - text/xml - - - ResponseQueue - -``` -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. |
| Header Mediator | A header mediator is used to set the SOAPAction header. |
| Send Mediator | To send a message to the HTTP backend, you should define the connection URL as the endpoint address. |
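To exercise this two-way flow programmatically instead of through the ActiveMQ Web Console, a plain JMS client can publish the request and then wait on `ResponseQueue`. The following is a hypothetical client sketch (the class name and payload are illustrative; the queue names follow the sample configuration above):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DualChannelClient {
    public static void main(String[] args) throws JMSException {
        // Connect to the same ActiveMQ broker used by the proxy service
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Publish the request to the queue the proxy service listens on
        MessageProducer producer = session.createProducer(session.createQueue("JMStoHTTPStockQuoteProxy1"));
        // Replace the placeholder with the getQuote payload shown above
        TextMessage request = session.createTextMessage("<placeholder-request/>");
        producer.send(request);

        // The proxy delivers the back-end response to ResponseQueue
        MessageConsumer consumer = session.createConsumer(session.createQueue("ResponseQueue"));
        Message response = consumer.receive(30000); // wait up to 30 seconds
        System.out.println("Received response: " + response);
        connection.close();
    }
}
```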
    - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - -Set up the back-end service: - -1. Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -You now have a running WSO2 MI instance, an ActiveMQ instance, and a sample back-end service to simulate the sample scenario. Add a message in `JMStoHTTPStockQuoteProxy1` queue with the following XML payload using the [ActiveMQ Web Console](https://activemq.apache.org/web-console.html). You can view the responses from the back-end service in the `ResponseQueue`. - -```xml - - - - - - IBM - - - - -``` - -!!! Info - You can make the proxy service a JMS listener by setting its transport as jms. Once the JMS transport is enabled for a proxy service, the Micro Integrator listens on a JMS queue for the same name as the proxy service.
    In the sample configuration above, the Micro Integrator listens to a JMS queue named JMStoHTTPStockQuoteProxy1. To make the proxy service listen to a different JMS queue, define the transport.jms.Destination parameter with the name of the destination queue. For more information, you can refer details of the [JMS transport parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) used in the Micro Integrator. - -## Example 3: Set content type of incoming JMS messages - -The Micro Integrator considers all messages consumed from a queue as SOAP messages by default. To consider that the messages consumed from a queue are of a different format, define the **transport.jms.ContentType** parameter with the respective content type as a proxy service parameter. - -### Synapse configuration - -```xml - - - -
    - - - -
    - - - - - - - - contentType - application/xml - - - MyJMSQueue - -``` - -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. |
| Header Mediator | A header mediator is used to set the SOAPAction header. |
| Send Mediator | To send a message to the HTTP backend, you should define the connection URL as the endpoint address. |
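For reference, the two service-level parameters discussed in this example take roughly the following shape in a proxy service definition. This is a simplified sketch rather than the exact markup of the sample above:

```xml
<!-- Simplified sketch: build consumed messages as application/xml and
     listen on the MyJMSQueue destination -->
<parameter name="transport.jms.ContentType">application/xml</parameter>
<parameter name="transport.jms.Destination">MyJMSQueue</parameter>
```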
    - -!!! Info - You can specify a different content type within the transport.jms.ContentType parameter. In the sample configuration above, the content type defined is application/xml. If you want the proxy service to listen to a queue where the queue name is different from the proxy service name, you can specify the queue name using the transport.jms.Destination parameter. In the sample configuration above, the Micro Integrator listens to a JMS queue named MyJMSQueue. For more information, you can refer details of the [JMS transport parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) used in the Micro Integrator. diff --git a/en/docs/integrate/examples/jms_examples/detecting-repeatedly-redelivered-messages.md b/en/docs/integrate/examples/jms_examples/detecting-repeatedly-redelivered-messages.md deleted file mode 100644 index 37dce15b43..0000000000 --- a/en/docs/integrate/examples/jms_examples/detecting-repeatedly-redelivered-messages.md +++ /dev/null @@ -1,236 +0,0 @@ -# Detecting Repeatedly Redelivered Messages - -In JMS 2.0, it is mandatory for JMS providers to set the `JMSXDeliveryCount` property, which allows an application that receive a message to determine how many times the message is redelivered. - -If a message is being redelivered, it means that a previous attempt to deliver the message failed due to some reason. If a message is being redelivered multiple times, it can be because the message is *bad* in some way. When such a message is being redelivered over and over again, it wastes resources and prevents subsequent *good* messages from being processed. - -When you work with WSO2 Micro Integrator, you can detect such repeatedly redelivered messages using the `JMSXDeliveryCount` property that is set in messages. The ability to detect repeatedly redelivered messages is particularly useful because you can take the necessary steps to handle such messages in a proper manner. For example, you can consume such a message and send it to a separate queue. - -To demonstrate this scenario, let's configure the JMS inbound endpoint in WSO2 Micro Integrator using HornetQ as the message broker. - -## Synapse configuration - -Given below are the synapse configurations that are required for mediating the above use case. - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "Inbound Endpoint" - ```xml - - - 1000 - queue/mySampleQueue - 1 - QueueConnectionFactory - true - org.jnp.interfaces.NamingContextFactory - jnp://localhost:1099 - AUTO_ACKNOWLEDGE - false - - - ``` - -=== "Sequence (Request)" - ```xml - - - - - - - - - - - - - - - - - - ``` - -=== "Registry Artifact" - ```xml - - 15000 - - ``` - -=== "Task Manager" - ```xml - - 15000 - - ``` - -=== "Sequence (Main)" - ```xml - - - - - ``` - -=== "Sequence (Fault)" - ```xml - - - - - - - - - ``` - -=== "Message Store" - ```xml - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Inbound Endpoint | This configuration creates an inbound endpoint to the JMS broker and has a simple sequence that logs the message status using the `JMSXDeliveryCount` value. |
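As noted in the introduction, one practical way to handle a repeatedly redelivered message is to consume it and send it to a separate queue. A sketch of such a check inside the request sequence could look like the following; the threshold and queue name are assumptions, not part of the original sample:

```xml
<!-- Sketch: divert messages redelivered more than 3 times to a separate queue -->
<filter xpath="get-property('transport', 'JMSXDeliveryCount') > 3">
    <then>
        <property name="OUT_ONLY" value="true"/>
        <send>
            <endpoint>
                <address uri="jms:/BadMessageStore?transport.jms.ConnectionFactory=QueueConnectionFactory"/>
            </endpoint>
        </send>
        <drop/>
    </then>
</filter>
```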
    - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [inbound endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-an-inbound-endpoint), [registry artifact]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources), [scheduled task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task), and [sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use HornetQ for this example. -2. Start HornetQ with the following command: - - On **Windows**: HORNETQ_HOME\bin\run.bat --run - - On **MacOS/Linux/Solaris**: sh HORNETQ_HOME/bin/run.sh - -3. Start the Micro Integrator (after starting the broker). - -Run the following java file (**SOAPPublisher.java**) to publish a message to the JMS queue: - -```java -package JMSXDeliveryCount; - -import java.util.Properties; -import java.util.logging.Logger; - -import javax.jms.ConnectionFactory; -import javax.jms.Destination; -import javax.jms.JMSContext; -import javax.naming.Context; -import javax.naming.InitialContext; -import javax.naming.NamingException; - -public class SOAPPublisher { - private static final Logger log = Logger.getLogger(SOAPPublisher.class.getName()); - - // Set up all the default values - private static final String param = "IBM"; - - // with header for inbounds - private static final String MESSAGE_WITH_HEADER = - "\n" + - " \n" + - "\n" + - "\n" + - " \n" + - " " + - getRandom(100, 0.9, true) + - "\n" + - " " + - (int) getRandom(10000, 1.0, true) + - "\n" + - " " + - param + - "\n" + - " \n" + - "" + - " \n" + - ""; - private static final String DEFAULT_CONNECTION_FACTORY = "QueueConnectionFactory"; - private static final String DEFAULT_DESTINATION = "queue/mySampleQueue"; - private static final String INITIAL_CONTEXT_FACTORY = "org.jnp.interfaces.NamingContextFactory"; - private static final String PROVIDER_URL = "jnp://localhost:1099"; - - public static void main(String[] args) { - - Context namingContext = null; - - try { - - // Set up the namingContext for the JNDI lookup - final Properties env = new Properties(); - env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY); - env.put(Context.PROVIDER_URL, System.getProperty(Context.PROVIDER_URL, PROVIDER_URL)); - namingContext = new InitialContext(env); - - // Perform the JNDI lookups - String connectionFactoryString = - System.getProperty("connection.factory", - DEFAULT_CONNECTION_FACTORY); - log.info("Attempting to acquire connection factory \"" + connectionFactoryString + "\""); - ConnectionFactory connectionFactory = - (ConnectionFactory) namingContext.lookup(connectionFactoryString); - log.info("Found connection factory \"" + connectionFactoryString + "\" in JNDI"); - - String destinationString = System.getProperty("destination", DEFAULT_DESTINATION); - log.info("Attempting to acquire destination \"" + destinationString + "\""); - Destination destination = 
(Destination) namingContext.lookup(destinationString); - log.info("Found destination \"" + destinationString + "\" in JNDI"); - - // String content = System.getProperty("message.content", - // DEFAULT_MESSAGE); - String content = System.getProperty("message.content", MESSAGE_WITH_HEADER); - - try (JMSContext context = connectionFactory.createContext()) { - log.info("Sending message"); - // Send the message - context.createProducer().send(destination, content); - } - - } catch (NamingException e) { - log.severe(e.getMessage()); - } finally { - if (namingContext != null) { - try { - namingContext.close(); - } catch (NamingException e) { - log.severe(e.getMessage()); - } - } - } - } - - - private static double getRandom(double base, double varience, boolean onlypositive) { - double rand = Math.random(); - return (base + (rand > 0.5 ? 1 : -1) * varience * base * rand) * - (onlypositive ? 1 : rand > 0.5 ? 1 : -1); - } -} -``` - -When you analyze the output on the MI server console, you will see an entry similar to the following: - -```bash -INFO - LogMediator To: , MessageID: ID:419a4153-01e8-11ea-b4f3-7f52bbde3597, Direction: request, DeliveryCounter = 1 -``` - diff --git a/en/docs/integrate/examples/jms_examples/dual-channel-http-to-jms.md b/en/docs/integrate/examples/jms_examples/dual-channel-http-to-jms.md deleted file mode 100644 index 1ca1fe241b..0000000000 --- a/en/docs/integrate/examples/jms_examples/dual-channel-http-to-jms.md +++ /dev/null @@ -1,242 +0,0 @@ -# JMS Synchronous Invocations: Dual Channel HTTP-to-JMS - -A JMS synchronous invocation takes place when a JMS producer receives a response to a JMS request produced by it when invoked. The WSO2 Micro Integrator uses an internal **JMS correlation ID** to correlate the request and the response. See [JMSRequest/ReplyExample](http://www.eaipatterns.com/RequestReplyJmsExample.html) for more information. JMS synchronous invocations are further explained in the following use case. - -When the proxy service named `SMSSenderProxy` receives an HTTP request, it publishes that request in a JMS queue named `SMSStore` . Another proxy service named `SMSForwardProxy` subscribes to messages published in this queue and forwards them to a back-end service named ` SimpleStockQuoteService ` . When this back-end service returns an HTTP response, internal ESB logic is used to save that -message as a JMS message in a JMS queue named `SMSReceiveNotification`. The `SMSSenderProxy` proxy service picks the response from the `SMSReceiveNotification` queue and delivers it to the client as an HTTP message using the internal mediation logic. - -**Note** that the ` SMSSenderProxy ` proxy service is able to pick up the message from the ` SMSReceiveNotification ` queue because the ` transport.jms.ReplyDestination ` parameter of the ` SMSSenderProxy ` proxy service is set to the same ` SMSReceiveNotification ` queue. - -!!! Info - **Correlation between request and response**: - - Note that the message that is passed to the back-end service contains the JMS message ID. However, the back-end service is required to return the response using the JMS correlation ID. Therefore, the back-end service should be configured to copy the message ID from the request (the value of the **JMSMessageID** header) to the correlation ID of the response (using the **JMSCorrelationID** header). 
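For illustration only: if the back-end service were itself Synapse-based, that copy could be sketched with a property mediator as shown below. This assumes the `JMS_COORELATION_ID` axis2 property supported by the JMS transport (note the product's historical spelling) and that the JMS transport exposes `JMSMessageID` as a transport header.

```xml
<!-- Copy the incoming JMS message ID into the correlation ID of the response -->
<property name="JMS_COORELATION_ID" expression="$trp:JMSMessageID" scope="axis2"/>
```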
- -## Synapse configurations - -Create two proxy services with the JMS publisher configuration and JMS consumer configuration given below and then deploy the proxy service artifacts in the Micro Integrator. - -See the instructions on how to [build and run](#build-and-run) this example. - -### JMS publisher configuration - -Shown below is the `SMSSenderProxy` proxy service. - -```xml - - - - - - - - - - -
    - - - - -``` - -Listed below are some of the properties that can be used with the **Property** mediator used in this proxy service: - - ---- - - - - - - - - - - - - - - - - - - - - - - - - -
| Property | Description |
|----------|-------------|
| `TRANSPORT_HEADERS` | This property is used in the out sequence to make sure that transport headers (which are JMS headers in this example) are removed from the message when it is passed to the back-end client. It is recommended to set this property because, according to the JMS specification, a property name can contain any character for which the `Character.isJavaIdentifierPart` Java method returns `true`. Therefore, when there are headers that contain special characters (e.g., `accept-encoding`), some JMS brokers will give errors. |
| `transport.jms.ContentTypeProperty` | The JMS transport uses this property in the above configuration to determine the content type of the response message. If this property is not set, the JMS transport treats the incoming message as plain text. **Note**: When this property is used, the content type is determined by the out transport. For example, if the proxy service/API is sending a request, the endpoint reference will determine the content type. Also, if the proxy service/API is sending the response back to the client, the configuration of the service/API will determine the content type. |
| `JMS_WAIT_REPLY` | This property can be used to specify how long the system should wait for the JMS queue (the `SMSReceiveNotification` queue) to send the response back. You can add this property to the in sequence as follows: `<property name="JMS_WAIT_REPLY" value="60000" scope="axis2"/>` |
| `JMS_TIME_TO_LIVE` | This property can be set in the in sequence of the proxy service to specify the maximum time period for which a message can live without being consumed. For example: `<property name="JMS_TIME_TO_LIVE" scope="axis2" value="20000"/>` |
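As a rough sketch, an in sequence that sets both timing properties before forwarding the message could look as follows (the values and the `SMSStoreEndpoint` key are illustrative):

```xml
<inSequence>
    <!-- Wait up to 60 seconds for the reply queue to deliver a response -->
    <property name="JMS_WAIT_REPLY" value="60000" scope="axis2"/>
    <!-- Discard the message if it is not consumed within 20 seconds -->
    <property name="JMS_TIME_TO_LIVE" value="20000" scope="axis2"/>
    <send>
        <endpoint key="SMSStoreEndpoint"/>
    </send>
</inSequence>
```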
    - -The endpoint of this proxy service uses the properties listed below to connect the proxy service to the JMS queue in the Message Broker. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Property | Value for this use case | Description |
|----------|-------------------------|-------------|
| address URI | `jms:/SMSStore` | The destination in which the request received by the proxy service is stored. There are two ways to define the URL: specify the JNDI name of the JMS queue and the connection factory parameters in the connection URL (as shown in the above example), or, if you have already specified the endpoint's connection factory parameters (for the JMS sender configuration) in the deployment.toml file, use a connection URL that refers to the relevant connection factory in the deployment.toml file: `jms:/SMSStore?transport.jms.ConnectionFactory=QueueConnectionFactory` |
| `java.naming.factory.initial` | `org.wso2.andes.jndi.PropertiesFileInitialContextFactory` | The initial context factory to use. The value specified here should be the same as the `parameter.initial_naming_factory` value specified for the JMS transport receiver in the `<MI_HOME>/conf/deployment.toml` file (under the `[[transport.jms.listener]]` section; make sure that this section is uncommented). |
| `java.naming.provider.url` | `conf/jndi.properties` | The location of the JNDI service provider. |
| `transport.jms.DestinationType` | `queue` | The destination type of the JMS message that will be generated by the proxy service. |
| `transport.jms.ReplyDestination` | `SMSReceiveNotificationStore` | The destination in which the response generated by the back-end service is stored. |
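Assembling the values above into an endpoint definition gives roughly the following (a sketch; note that `&` must be escaped as `&amp;` inside the XML):

```xml
<endpoint>
    <!-- Connection factory and JNDI details taken from the table above -->
    <address uri="jms:/SMSStore?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&amp;java.naming.provider.url=conf/jndi.properties&amp;transport.jms.DestinationType=queue&amp;transport.jms.ReplyDestination=SMSReceiveNotificationStore"/>
</endpoint>
```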
    - -### JMS consumer configuration - -Create a proxy service named ` SMSForwardProxy ` with the configuration given below. This proxy service will consume messages from the ` SMSStore ` queue of the Message Broker Profile, and forward the messages to the back-end service. - -```xml - - - -
    - - -
    - - - - - - - - - - contentType - text/xml - - - myQueueConnectionFactory - queue - SMSStore - - -``` - -The ` transport.jms.ConnectionFactory ` , ` transport.jms.DestinationType ` parameter and the -` transport.jms.Destination properties ` parameter map the proxy service to the ` SMSStore ` queue. - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy services]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - - !!! Warning - If you are using message processor with Active MQ broker add the following configuration to the startup script before starting the server as shown below, - For Linux/Mac OS update `micro-integrator.sh` and for Windows update `micro-integrator.bat` with `-Dorg.apache.activemq.SERIALIZABLE_PACKAGES="*"` system property. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -To invoke this service, the address URI of this proxy service is defined as `http://localhost:8290/services/SMSSenderProxy`. Send a POST request to the above address URI with the following payload: - -```xml - - - - - - IBM - - - - -``` diff --git a/en/docs/integrate/examples/jms_examples/guaranteed-delivery-with-failover.md b/en/docs/integrate/examples/jms_examples/guaranteed-delivery-with-failover.md deleted file mode 100644 index cad89b7f8a..0000000000 --- a/en/docs/integrate/examples/jms_examples/guaranteed-delivery-with-failover.md +++ /dev/null @@ -1,202 +0,0 @@ -# Guaranteed Delivery with Failover - -WSO2 Micro Integrator ensures guaranteed delivery with the failover message store and scheduled failover message forwarding processor. The topics in the following section describe how you can setup guaranteed message delivery with failover configurations. - -The following diagram illustrates a scenario where a failover message -store and a scheduled failover message forwarding processor is used -to ensure guaranteed delivery: - -![guaranteed delivery]({{base_path}}/assets/img/integrate/tutorials/guaranteed-delivery-failover/guaranteed-delivery.png) - -In this scenario, the original message store fails due to, either network -failure, message store crash, or system shutdown for maintenance. The -failover message store is used as the solution for the original message -store failure. So now the store mediator sends messages to the failover -message store. 
Then, when the original message store is available again, -the messages that were sent to the failover message store need to be -forwarded to the original message store. The **scheduled failover message forwarding processor** -is used for this purpose. The scheduled failover message -forwarding processor is almost the same as the scheduled message -forwarding processor. The only difference is that the scheduled message -forwarding processor forwards messages to a defined endpoint, whereas -the scheduled failover message forwarding processor forwards messages to -the original message store that the message was supposed to be -temporarily stored. - -## Synapse Configurations - -Given below are the synapse configurations that are required for mediating the above use case. - -See the instructions on how to [build and run](#build-and-run) this example. - -- **Message Stores** - - === "Failover message store" - ```xml - - ``` - - === "Message Store" - ```xml - - failover - true - org.apache.activemq.jndi.ActiveMQInitialContextFactory - tcp://localhost:61616 - 1.1 - - ``` - -- **Message Processors** - - === "Scheduled message forwarding processor" - ```xml - - 1000 - 1000 - 4 - true - Disabled - 1 - - ``` - - === "Scheduled failover message forwarding processor" - ```xml - - 1000 - 60000 - 1000 - true - Disabled - 1 - Original - - ``` - -- **Proxy configurations** - - === "Proxy Service" - ```xml - - - -
    - - - - - - -   - ``` - - === "Endpoint" - ```xml - -
    - - ``` - -The synapse configurations used above are as follows: - -- **Failover message store** - - In this example, an in-memory message store is used to create the failover message store. This step does not involve any special configuration. - -- **Original message store** - - In this example, a JMS message store is used to create the original message store.  When creating the original message store, you need to enable guaranteed delivery on the producer side. To do this, set the following parameters in the message store configuration:
    - `failover` - `true` - -- **Endpoint for the scheduled message forwarding processor** - - In this example, the `SimpleStockquote` service is used as the back-end service. - -- **Scheduled failover message forwarding processor** - - When creating the scheduled failover message forwarding processor, you need to specify the following two mandatory parameters that are important in the failover scenario. - - * Source Message Store - * Target Message Store - - The scheduled failover message forwarding processor sends messages from the failover store to the original store when it is available in the failover scenario. In this configuration, the source message store should be the failover message store and target message store should be the original message store. - -- **Proxy service** - - A proxy service is used to send messages to the original message store via the store mediator. - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [message stores]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store), and [message processors]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - - !!! Warning - If you are using message processor with Active MQ broker add the following configuration to the startup script before starting the server as shown below, - For Linux/Mac OS update `micro-integrator.sh` and for Windows update `micro-integrator.bat` with `-Dorg.apache.activemq.SERIALIZABLE_PACKAGES="*"` system property. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the proxy service (http://localhost:8290/services/Proxy1) with the following payload: - -```xml - - - - - - IBM - - - - -``` - -You will see the following response in the back-end service console: - -```bash -INFO [wso2/stockquote_service] - Stock quote service invoked. -INFO [wso2/stockquote_service] - Generating getQuote response for IBM -INFO [wso2/stockquote_service] - Stock quote generated. -``` - -To test the failover scenario, shut down the JMS broker (i.e., the original message store) -and send a few messages to the proxy service. - -You will see that the messages are not sent to the backend since the -original message store is not available. 
You will also see that the -messages are stored in the failover message store. - -If you analyze the Console log, you will see the failover -message processor trying to forward messages to the original message -store periodically. Once the original message store is available, you -will see that the scheduled failover message forwarding processor sends -the messages to the original store and that the scheduled message -forwarding processor then forwards the messages to the back-end service. \ No newline at end of file diff --git a/en/docs/integrate/examples/jms_examples/producing-jms.md b/en/docs/integrate/examples/jms_examples/producing-jms.md deleted file mode 100644 index 6df470040d..0000000000 --- a/en/docs/integrate/examples/jms_examples/producing-jms.md +++ /dev/null @@ -1,119 +0,0 @@ -# Producing JMS Messages - -This section describes how to configure WSO2 Micro Integrator to send messages to a JMS Queue. In this example, the Micro Integrator accepts messages via HTTP and sends them to a JMS queue. - -## Synapse configuration - -Given below is the synapse configuration of the proxy service that mediates the above use case. Note that you need to update the JMS connection URL according to your broker as explained below. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - -
    - - - - - -``` - -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. |
| Property Mediator | The `OUT_ONLY` property is set to `true`, which indicates that the message exchange is one-way. |
| Property Mediator | The `FORCE_SC_ACCEPTED` property is set to `true`. This property forces a 202 HTTP response to the client so that the client stops waiting for a response. |
| Send Mediator | To send a message to a JMS queue, you should define the JMS connection URL as the endpoint address (which should be invoked via the Send mediator). There are two ways to specify the endpoint URL. (1) Specify the JNDI name of the JMS queue and the connection factory parameters in the JMS connection URL; the values of the connection factory parameters depend on the type of the JMS broker. When the broker is ActiveMQ: `jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue`. When the broker is WSO2 Message Broker: `jms:/StockQuotesQueue?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&java.naming.provider.url=conf/jndi.properties&transport.jms.DestinationType=queue`. (2) If you have already specified the endpoint's connection factory parameters (for the JMS sender configuration) in the deployment.toml file, the connection URL in the proxy service refers to the relevant connection factory. When the broker is ActiveMQ: `jms:/SimpleStockQuoteService?transport.jms.ConnectionFactory=QueueConnectionFactory`. When the broker is WSO2 Message Broker: `jms:/StockQuotesQueue?transport.jms.ConnectionFactory=QueueConnectionFactory`. |
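For example, with ActiveMQ as the broker, the Send mediator and its endpoint could be sketched as follows (the URL is assembled from the values in the table above, with `&` escaped as `&amp;`):

```xml
<send>
    <endpoint>
        <!-- Publish to the SimpleStockQuoteService queue on a local ActiveMQ broker -->
        <address uri="jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;transport.jms.DestinationType=queue"/>
    </endpoint>
</send>
```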
    - -!!! Info - To refer details on JMS transport parameters, you can follow [JMS transport parameters]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) used in the Micro Integrator. - - -!!! Note - Be sure to replace the ' `& ` ' character in the endpoint URL with '`&`' to avoid the following exception: - ``` java - com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'java.naming.factory.initial' at [row,col {unknown-source} - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - -Set up the back-end service: - -1. Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the proxy service by sending a simple message. diff --git a/en/docs/integrate/examples/jms_examples/publish-subscribe-with-jms.md b/en/docs/integrate/examples/jms_examples/publish-subscribe-with-jms.md deleted file mode 100644 index 1d1ddff8b4..0000000000 --- a/en/docs/integrate/examples/jms_examples/publish-subscribe-with-jms.md +++ /dev/null @@ -1,166 +0,0 @@ -# Publish and Subscribe with JMS - -JMS supports two models for messaging as follows: - -- Queues: point-to-point -- Topics: publish and subscribe - -There are many business use cases that can be implemented using the publisher-subscriber (pub-sub) pattern. For example, consider a blog with subscribed readers. The blog author posts a blog entry, which the subscribers of the blog can view. In other words, the blog author publishes a message (the blog post content) and the subscribers (the blog readers) receive that message. Popular publisher-subscriber patterns like these can be implemented using JMS topics. - -In this sample scenario, two proxy services in the Micro Integrator act as the publisher and subscriber to a topic defined in the message broker. - -When we invoke the back-end service, the publisher is invoked and sends the message to the JMS topic. The topic delivers the message to all the subscribers of that topic. In this case, the subscribers are Micro Integrator proxy services. - -## Synapse configurations - -Shown below are the synapse artifacts that are used to define this use case. See the instructions on how to [build and run](#build-and-run) this example. 
- -=== "Proxy Service (Publisher)" - ```xml - - - -
    - - - - - - - - - - - - ``` - -=== "Proxy Service (Subscriber 1)" - ```xml - - - - - - - - - - - - - - - - contentType - application/xml - - - myTopicConnectionFactory - topic - SimpleStockQuoteService - - ``` - -=== "Proxy Service (Subscriber 2)" - ```xml - - - - - - - - - - - - - - - - contentType - application/xml - - - myTopicConnectionFactory - topic - SimpleStockQuoteService - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Publisher | This proxy service (`StockQuoteProxy`) is configured to publish messages to the `SimpleStockQuoteService` topic in the broker. There are two ways to define the endpoint URL: specify the JNDI name of the JMS topic and the connection factory parameters in the connection URL (as shown in the above example), or, if you have already specified the endpoint's connection factory parameters (for the JMS sender configuration) in the deployment.toml file, use a connection URL that refers to the relevant connection factory in the deployment.toml file, for example: `jms:/StockQuotesQueue?transport.jms.ConnectionFactory=QueueConnectionFactory` |
| Subscriber 1 | Proxy service that consumes messages from the broker. |
| Subscriber 2 | Proxy service that consumes messages from the broker. |
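With ActiveMQ, for instance, the publisher's endpoint could be sketched as follows (a sketch only; the topic name matches the `SimpleStockQuoteService` topic that the subscribers listen on, and `transport.jms.DestinationType` is set to `topic`):

```xml
<send>
    <endpoint>
        <!-- Publish to the SimpleStockQuoteService topic so every shared listener receives a copy -->
        <address uri="jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=TopicConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;transport.jms.DestinationType=topic"/>
    </endpoint>
</send>
```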
    - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create [proxy services]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - -Publishing to the topic: - -1. Start the Micro Integrator. A log message similar to the following will appear: - ```bash - INFO {org.apache.axis2.transport.jms.JMSListener} - Started to listen on destination : SimpleStockQuoteService of type topic for service SimpleStockQuoteService2 - INFO {org.apache.axis2.transport.jms.JMSListener} - Started to listen on destination : SimpleStockQuoteService of type topic for service SimpleStockQuoteService1 - ``` - -2. To invoke the publisher, send a request to `StockQuoteProxy` (http://localhost:8290/services/StockQuoteProxy) with the following payload: - ```xml - - - - - - IBM - - - - - ``` - - When the stockquote client sends the message to the `StockQuoteProxy` service, the publisher is invoked and sends the message to the JMS topic. The topic delivers the message to all the subscribers of that topic. In this case, the subscribers are proxy services deployed in the Micro Integrator. - - !!! Note - There can be many types of publishers and subscribers for a given JMS topic. The following [article in the WSO2 library](http://wso2.org/library/articles/2011/12/wso2-esb-example-pubsub-soa) provides more information on different types of publishers and subscribers. \ No newline at end of file diff --git a/en/docs/integrate/examples/jms_examples/quad-channel-jms-to-jms.md b/en/docs/integrate/examples/jms_examples/quad-channel-jms-to-jms.md deleted file mode 100644 index 2e906f72cb..0000000000 --- a/en/docs/integrate/examples/jms_examples/quad-channel-jms-to-jms.md +++ /dev/null @@ -1,126 +0,0 @@ -__# JMS Synchronous Invocations: Quad Channel JMS-to-JMS - -The example demonstrates how WSO2 Micro Integrator handles quad-channel JMS synchronous invocations. - -## Synapse configuration - -Given below is the synapse configuration of the proxy service that mediates the above use case. Note that you need to update the JMS connection URL according to your broker as explained below. - -See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - -
    - - - - - - - - - - contentType - text/xml - - - ClientReq - -``` - -The message flow handled by the sample configuration is as follows: - -1. The **JMSReplyTo** property of the JMS message is set to **ClientRes** . Therefore, the client sends a JMS message to the - **ClientReq** queue. -2. The **transport.jms.ReplyDestination** value is set to **BERes**. This enables the Micro Integrator proxy to pick messages from the **ClientReq** queue and send to the **BEReq** queue. -3. The back-end picks messages from the **BEReq** queue, processes and places response messages to the **BERes** queue. -4. Once a response is available in **BERes** queue, the proxy service picks the response message, and sends it back to the **ClientRes** queue. -5. The client the message as the response message. - -The Synapse artifacts used are explained below. - - - - - - - - - - - - - - - - - - -
| Artifact Type | Description |
|---------------|-------------|
| Proxy Service | A proxy service is used to receive messages and to define the message flow. |
| Property Mediator | The JMS transport uses the `transport.jms.ContentTypeProperty` property in the above configuration to determine the content type of the response message. If this property is not set, the JMS transport treats the incoming message as plain text. |
| Send Mediator | To send a message to a JMS queue, you should define the JMS connection URL as the endpoint address (which should be invoked via the Send mediator). There are two ways to specify the endpoint URL: specify the JNDI name of the JMS queue and the connection factory parameters in the JMS connection URL (the values of the connection factory parameters depend on the type of JMS broker), or, if you have already specified the endpoint's connection factory parameters (for the JMS sender configuration) in the deployment.toml file, use a connection URL that refers to the relevant connection factory in the deployment.toml file, for example: `jms:/BEReq?transport.jms.ConnectionFactory=QueueConnectionFactory` |
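Putting the pieces together, the outbound endpoint also needs to carry the reply destination so that responses are read from the `BERes` queue. A minimal sketch, assuming the connection factory is defined in deployment.toml:

```xml
<send>
    <endpoint>
        <!-- Send to BEReq; replies are correlated back from BERes -->
        <address uri="jms:/BEReq?transport.jms.ConnectionFactory=QueueConnectionFactory&amp;transport.jms.ReplyDestination=BERes"/>
    </endpoint>
</send>
```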
    - -!!! Note - Be sure to replace the ' `& ` ' character in the endpoint URL with '`&`' to avoid the following exception: - ```java - com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'java.naming.factory.initial' at [row,col {unknown-source} - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use Active MQ for this example. -2. Start the broker. -3. Start the Micro Integrator (after starting the broker). - -Set up the back-end service: - -1. Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the proxy service by send a simple message. \ No newline at end of file diff --git a/en/docs/integrate/examples/jms_examples/shared-topic-subscription.md b/en/docs/integrate/examples/jms_examples/shared-topic-subscription.md deleted file mode 100644 index dc51d9e173..0000000000 --- a/en/docs/integrate/examples/jms_examples/shared-topic-subscription.md +++ /dev/null @@ -1,316 +0,0 @@ -# Shared Topic Subscription - -With JMS 1.1, a subscription on a topic is not permitted to have more than one consumer at a time. That is, if multiple JMS -consumers subscribe to a JMS topic, and if a message comes to that topic, multiple copies of the message is forwarded to each consumer. There is no way of sharing messages between consumers that come to the topic. - -With the shared subscription feature in JMS 2.0, you can overcome this restriction. When shared subscription is used, a message that comes to a topic is forwarded to only one of the consumers. That is, if multiple JMS consumers subscribe to a JMS topic, consumers can share the messages that come to the topic. The advantage of shared topic subscription is that it allows to share the workload between consumers. - -The Micro Integrator can be configured as a shared topic listener that can connect to a shared topic subscription as a message consumer (subscriber) to share workload between other consumers of the subscription. - -To demonstrate the sample scenario, let's configure the JMS inbound endpoint in WSO2 Micro Integrator as a shared topic listener using HornetQ as the message broker. - -## Synapse configurations - -Given below are the synapse configurations that are required for mediating the above use case. - -See the instructions on how to [build and run](#build-and-run) this example. 
- -=== "Inbound Endpoint" - ```xml - - - 1000 - /topic/exampleTopic - 3 - TopicConnectionFactory - true - org.jnp.interfaces.NamingContextFactory - jnp://localhost:1099 - AUTO_ACKNOWLEDGE - false - topic - 2.0 - true - mySubscription - - - ``` - -=== "Registry Artifact" - ```xml - - 15000 - - ``` - -=== "Task Manager" - ```xml - - 15000 - - ``` - -=== "Sequence (Request)" - ```xml - - - - - ``` - -=== "Sequence (Fault)" - ```xml - - - - - - - - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Inbound Endpoint | Make sure to configure the following properties when setting up the inbound endpoint: set the value of the `transport.jms.JMSSpecVersion` property to `2.0`, set the value of the `transport.jms.SharedSubscription` property to `true`, and specify a subscriber name as the value of the `transport.jms.DurableSubscriberName` property (use the same name for all subscribers). |
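In other words, the shared-subscription behavior hinges on these three inbound endpoint parameters (values taken from the configuration above):

```xml
<!-- JMS 2.0 is required for shared subscriptions -->
<parameter name="transport.jms.JMSSpecVersion">2.0</parameter>
<!-- Share the subscription across all consumers that use the same subscription name -->
<parameter name="transport.jms.SharedSubscription">true</parameter>
<parameter name="transport.jms.DurableSubscriberName">mySubscription</parameter>
```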
    - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [registry artifact]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources), [scheduled task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task), and [sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use HornetQ for this example. - - - Be sure to create a sample topic by editing the `HORNET_HOME/config/stand-alone/non-clustered/hornetq-jms.xml` file as follows: - ```xml - - - - ``` - -2. Start HornetQ with the following command: - - On **Windows**: HORNETQ_HOME\bin\run.bat --run - - On **MacOS/Linux/Solaris**: sh HORNETQ_HOME/bin/run.sh - -3. Start the Micro Integrator (after starting the broker). - -Follow the steps given below to create the topic consumer and publisher to run this example. - -1. Create and run the following topic consumer (**TopicConsumer.java**) and run. - - ```java - package SharedTopicSubscribe; - - import java.util.Properties; - - import javax.jms.Connection; - import javax.jms.ConnectionFactory; - import javax.jms.MessageConsumer; - import javax.jms.Session; - import javax.jms.TextMessage; - import javax.jms.Topic; - import javax.naming.Context; - import javax.naming.InitialContext; - - public class TopicConsumer { - private static final String DEFAULT_CONNECTION_FACTORY = "TopicConnectionFactory"; - private static final String DEFAULT_DESTINATION = "/topic/exampleTopic"; - private static final String INITIAL_CONTEXT_FACTORY = "org.jnp.interfaces.NamingContextFactory"; - private static final String PROVIDER_URL = "jnp://localhost:1099"; - private static final String SUBSCRIPTION_NAME = "mySubscription"; - - public static void main(final String[] args) { - try { - runExample(); - } catch (Exception e) { - e.printStackTrace(); - } - } - - public static void runExample() throws Exception { - Connection connection = null; - Context initialContext = null; - try { - // /Step 1. Create an initial context to perform the JNDI lookup. - final Properties env = new Properties(); - env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY); - env.put(Context.PROVIDER_URL, System.getProperty(Context.PROVIDER_URL, PROVIDER_URL)); - initialContext = new InitialContext(env); - - // Step 2. perform a lookup on the topic - Topic topic = (Topic) initialContext.lookup(DEFAULT_DESTINATION); - - // Step 3. perform a lookup on the Connection Factory - ConnectionFactory cf = - (ConnectionFactory) initialContext.lookup(DEFAULT_CONNECTION_FACTORY); - - // Step 4. Create a JMS Connection - connection = cf.createConnection(); - - // Step 5. Create a JMS Session - Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); - - // Step 6. 
Create a JMS Message Consumer - MessageConsumer messageConsumer = - session.createSharedConsumer(topic, SUBSCRIPTION_NAME); - - // Step 7. Start the Connection - connection.start(); - System.out.println("Shared message consumer started on topic: " + DEFAULT_DESTINATION + - "\n"); - - // Step 8. Receive the message - TextMessage messageReceived = null; - while (true) { - messageReceived = (TextMessage) messageConsumer.receive(); - System.out.println("Consumer received message: " + messageReceived.getText() + "\n"); - } - - } finally { - - // Step 9. Close JMS resources - if (connection != null) { - connection.close(); - } - - // Also the initialContext - if (initialContext != null) { - initialContext.close(); - } - } - } - } - ``` - -2. Run the following java file (**TopicPublisher.java**) to publish 5 messages to the HornetQ topic: - - ```java - package SharedTopicSubscribe; - - import java.util.Properties; - - import javax.jms.Connection; - import javax.jms.ConnectionFactory; - import javax.jms.MessageProducer; - import javax.jms.Session; - import javax.jms.TextMessage; - import javax.jms.Topic; - import javax.naming.Context; - import javax.naming.InitialContext; - - public class TopicPublisher { - private static final String DEFAULT_CONNECTION_FACTORY = "TopicConnectionFactory"; - private static final String DEFAULT_DESTINATION = "/topic/exampleTopic"; - private static final String INITIAL_CONTEXT_FACTORY = "org.jnp.interfaces.NamingContextFactory"; - private static final String PROVIDER_URL = "jnp://localhost:1099"; - // Set up all the default values - private static final String param = "IBM"; - - public static void main(final String[] args) { - try { - runExample(); - } catch (Exception e) { - e.printStackTrace(); - } - } - - public static boolean runExample() throws Exception { - Connection connection = null; - Context initialContext = null; - try { - // /Step 1. Create an initial context to perform the JNDI lookup. - // Set up the namingContext for the JNDI lookup - final Properties env = new Properties(); - env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY); - env.put(Context.PROVIDER_URL, System.getProperty(Context.PROVIDER_URL, PROVIDER_URL)); - initialContext = new InitialContext(env); - - // Step 2. perform a lookup on the topic - Topic topic = (Topic) initialContext.lookup(DEFAULT_DESTINATION); - - // Step 3. perform a lookup on the Connection Factory - ConnectionFactory cf = - (ConnectionFactory) initialContext.lookup(DEFAULT_CONNECTION_FACTORY); - - // Step 4. Create a JMS Connection - connection = cf.createConnection(); - - // Step 5. Create a JMS Session - Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); - - // Step 6. Create a Message Producer - MessageProducer producer = session.createProducer(topic); - System.out.println("Publishing 5 messages to topic/exampleTopic"); - for (int i = 0; i < 5; i++) { - - // Step 7. Create a Text Message - TextMessage message = session.createTextMessage(getMessage()); - - // Step 8. Send the Message - producer.send(message); - } - return true; - } finally { - - // Step 9. Close JMS resources - if (connection != null) { - connection.close(); - } - - // Also the initialContext - if (initialContext != null) { - initialContext.close(); - } - } - } - - private static double getRandom(double base, double varience, boolean onlypositive) { - double rand = Math.random(); - return (base + (rand > 0.5 ? 1 : -1) * varience * base * rand) * - (onlypositive ? 1 : rand > 0.5 ? 
1 : -1); - } - - private static String getMessage() { - return "\n" + - " \n" + "\n" + - "\n" + " \n" + - " " + getRandom(100, 0.9, true) + "\n" + - " " + (int) getRandom(10000, 1.0, true) + "\n" + - " " + param + "\n" + " \n" + - "" + " \n" + ""; - } - } - ``` - -You will see that the 5 messages are shared between the inbound listener and `TopicConsumer.java`. This is because both the inbound listener and `TopicConsumer.java` are configured as shared subscribers. - -The total number of consumed messages between the inbound listener and `TopicConsumer.java` will be equal to the number messages published by ` TopicPublisher.java`. diff --git a/en/docs/integrate/examples/jms_examples/specifying-a-delivery-delay-on-messages.md b/en/docs/integrate/examples/jms_examples/specifying-a-delivery-delay-on-messages.md deleted file mode 100644 index 66e7b11e89..0000000000 --- a/en/docs/integrate/examples/jms_examples/specifying-a-delivery-delay-on-messages.md +++ /dev/null @@ -1,286 +0,0 @@ -# Specifying Delivery Delay on Messages - -In a normal message flow, JMS messages that are sent by the JMS producer to the JMS broker are forwarded to the respective JMS consumer without any delay. - -With the delivery delay messaging feature introduced with JMS 2.0, you can specify a delivery delay time value in each JMS message so that the JMS broker will not deliver the message until after the specified delivery delay has elapsed. Specifying a delivery delay is useful if there is a scenario where you do not want a message consumer to receive a message that is sent until a specified time duration has elapsed. To implement this, you need to add a delivery delay to the JMS producer so -that the publisher does not deliver a message until the specified delivery delay time interval is elapsed. - -The following diagram illustrates how you can use WSO2 Micro Integrator as a JMS producer and specify a delivery delay on messages when you do not want the message consumer to receive a message until a specified time duration has elapsed. - -## Synapse configuration - -Given below are the synapse configurations that are required for mediating the above use case. - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service 1" - ```xml - - - - - - - - - - - - - - - - -
    - - - - - - - - - - ``` - -=== "Proxy Service 2" - ```xml - - - - - - - - - - - - - - - - - -
    - - - - - - - - - - ``` - -=== "Main Sequence" - ```xml - - - - - - - - - - - - - - - The main sequence for the message mediation - - ``` - -=== "Fault Sequence" - ```xml - - - - - - - - - - - ``` - -=== "Registry Artifact" - ```xml - - 15000 - - ``` - -=== "Task Manager" - ```xml - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Proxy Service 1 | The `JMSDeliveryDelayed` proxy service sets a delivery delay of 10 seconds on the message that it forwards. |
| Proxy Service 2 | The `JMSDelivery` proxy service does not set a delivery delay on the message. |
    - - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy services]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [registry artifact]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources), [scheduled task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task), and [sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the broker: - -1. [Configure a broker]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transport#configuring-the-jms-transport) with your Micro Integrator instance. Let's use HornetQ for this example. - - - On **Windows**: HORNETQ_HOME\bin\run.bat --run - - On **MacOS/Linux/Solaris**: sh HORNETQ_HOME/bin/run.sh - -2. Start the broker. -3. Start the Micro Integrator. - -Follow the steps given below to run the example: - -1. Run the following java file (**QueueConsumer.java**), which acts as the JMS consumer that consumes messages from the queue: - - ```java - package DeliveryDelay; - - import java.sql.Timestamp; - import java.util.Date; - import java.util.Properties; - - import javax.jms.Connection; - import javax.jms.ConnectionFactory; - import javax.jms.MessageConsumer; - import javax.jms.Session; - import javax.jms.TextMessage; - import javax.jms.Queue; - import javax.naming.Context; - import javax.naming.InitialContext; - - /** - * Sample consumer to demonstrate JMS 2.0 feature : - * Message Delivery Delay - * Classic API is used - */ - - public class QueueConsumer { - private static final String DEFAULT_CONNECTION_FACTORY = "QueueConnectionFactory"; - private static final String DEFAULT_DESTINATION = "queue/mySampleQueue"; - private static final String INITIAL_CONTEXT_FACTORY = "org.jnp.interfaces.NamingContextFactory"; - private static final String PROVIDER_URL = "jnp://localhost:1099"; - - public static void main(final String[] args) { - try { - runExample(); - } catch (Exception e) { - e.printStackTrace(); - } - } - - public static void runExample() throws Exception { - Connection connection = null; - Context initialContext = null; - try { - - // Step 1. Create an initial context to perform the JNDI lookup. - final Properties env = new Properties(); - env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY); - env.put(Context.PROVIDER_URL, System.getProperty(Context.PROVIDER_URL, PROVIDER_URL)); - initialContext = new InitialContext(env); - - - // Step 2. perform a lookup on the Queue - Queue queue = (Queue) initialContext.lookup(DEFAULT_DESTINATION); - - - // Step 3. perform a lookup on the Connection Factory - ConnectionFactory cf = (ConnectionFactory) initialContext.lookup(DEFAULT_CONNECTION_FACTORY); - - - // Step 4. Create a JMS Connection - connection = cf.createConnection(); - - - // Step 5. Create a JMS Session - Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); - - // Step 6. Create a JMS Message Consumer - MessageConsumer messageConsumer = session.createConsumer(queue); - - // Step 7. 
Start the Connection - connection.start(); - System.out.println("JMS consumer stated on the queue " + DEFAULT_DESTINATION + - "\n"); - - //Clear the queue, if there is any previous messages in the queue - TextMessage tempMessage; - do{ - tempMessage = (TextMessage) messageConsumer.receive(1); - } while(tempMessage != null); - - // Step 8.1. Receive the message one - TextMessage firstMessage = (TextMessage) messageConsumer.receive(); - long first = System.currentTimeMillis(); - System.out.println("Consumer received message: [ "+new Timestamp(new Date(first).getTime())+" ] " + firstMessage.getText() + "\n"); - - // Step 8.2. Receive delayed - TextMessage secondMessage = (TextMessage) messageConsumer.receive(); - long second = System.currentTimeMillis(); - System.out.println("Consumer received dealyed message: [ "+new Timestamp(new Date(second).getTime())+" ] " + secondMessage.getText() + "\n"); - System.out.println("Time difference between two messages : "+(second-first)/1000+"s"); - } finally { - - // Step 9. Close JMS resources - if (connection != null) { - connection.close(); - } - - // Also the initialContext - if (initialContext != null) { - initialContext.close(); - } - } - } - } - ``` - -2. Invoke the two proxy services (http://localhost:8290/services/JMSDelivery, http://localhost:8290/services/JMSDeliveryDelayed) with the following payload: - - ```xml - - - - - - IBM - - - - - ``` - -You will see that two messages are received by the Java consumer with a time difference of more than 10 seconds. - -This is because the ` JMSDeliveryDelayed ` proxy service sets a delivery delay of 10 seconds on the message that it forwards, whereas the ` JMSDelivery ` proxy service does not set a delivery delay on the message. \ No newline at end of file diff --git a/en/docs/integrate/examples/json_examples/json-examples.md b/en/docs/integrate/examples/json_examples/json-examples.md deleted file mode 100644 index d378317d42..0000000000 --- a/en/docs/integrate/examples/json_examples/json-examples.md +++ /dev/null @@ -1,1205 +0,0 @@ -# Working with JSON Message Payloads - -WSO2 Micro Integrator provides support for [JavaScript Object Notation (JSON)](http://www.json.org/) payloads in messages. The following sections describe how to work with JSON via the Micro Integrator. - -## Handling JSON to XML conversion - -When building the XML tree, JSON builders attach the converted XML infoset to a special XML element that acts as the root element of the -final XML tree. If the original JSON payload is of type `object` , the special element is ``. If it is an `array`, the special element is ``. Following are examples of JSON and XML representations of various objects and arrays. 
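To make the root elements concrete (assuming the default WSO2 JSON builders): an `object` payload is wrapped in a `<jsonObject>` root element, and an `array` payload in a `<jsonArray>` root element, roughly as follows:

```xml
<!-- {"object":{}} builds as: -->
<jsonObject>
    <object/>
</jsonObject>

<!-- [1,2] builds as: -->
<jsonArray>
    <jsonElement>1</jsonElement>
    <jsonElement>2</jsonElement>
</jsonArray>
```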
- -### Empty objects - -=== "JSON" - ``` javascript - {"object":{}} - ``` - -=== "XML" - ``` html - - - - ``` - -### Empty strings - -=== "JSON" - ``` javascript - {"object":""} - ``` - -=== "XML" - ``` html - - - - ``` - -### Empty array - -=== "JSON" - ``` javascript - [] - ``` - -=== "XML (JsonStreamBuilder)" - ``` html - - ``` - -=== "XML (JsonBuilder)" - ``` html - - - - ``` - -### Named arrays - -=== "JSON" - ``` javascript - {"array":[1,2]} - ``` - -=== "XML (JsonStreamBuilder)" - ``` html - - 1 - 2 - - ``` - -=== "XML (JsonBuilder)" - ``` html - - - 1 - 2 - - ``` - -=== "JSON" - ``` javascript - {"array":[]} - ``` - -=== "XML (JsonStreamBuilder)" - ``` html - - ``` - -=== "XML (JsonBuilder)" - ``` html - - - - ``` - -### Anonymous arrays - -=== "JSON" - ``` javascript - [1,2] - ``` - -=== "XML (JsonStreamBuilder)" - ``` html - - 1 - 2 - - ``` - -=== "XML (JsonBuilder)" - ``` html - - - 1 - 2 - - ``` - -=== "JSON" - ``` javascript - [1, []] - ``` - -=== "XML (JsonStreamBuilder)" - ``` html - - 1 - - - - - ``` - -=== "XML (JsonBuilder)" - ``` html - - - 1 - - - - - - - ``` - -## XML processing instructions (PIs) - -Note the addition of `xml-multiple` processing instructions to the XML payloads whose JSON representations contain arrays. `JsonBuilder` (via StAXON) adds these instructions to the XML payload that it builds during the JSON to XML conversion so that during the XML to JSON conversion, `JsonFormatter` can reconstruct the arrays that are present in the original JSON payload. `JsonFormatter` interprets the elements immediately following a processing instruction to construct an array. - -## Special characters - -When building XML elements, the EI handles the `$` character and digits in a special manner when they appear as the first character of a JSON key. Following are examples of two such occurrences. Note the addition of the `_JsonReader_PS_` and `_JsonReader_PD_` prefixes in place of the `$` and digit characters, respectively. - -=== "JSON" - ``` javascript - {"$key":1234} - ``` - -=== "XML" - ``` html - - <_JsonReader_PS_key>1234 - - ``` - -=== "JSON" - ``` javascript - {"32X32":"image_32x32.png"} - ``` - -=== "XML" - ``` html - - <_JsonReader_PD_32X32>image_32x32.png - - ``` - -## Converting spaces - -Although you can have spaces in JSON elements, [you cannot have them when converted to XML](https://www.w3.org/TR/REC-xml/#sec-common-syn). Therefore, you can handle spaces when converting JSON message payloads to XML, by adding the following property to the `MI_HOME/conf/deployment.toml` file in the `[mediation]` section: -`synapse.build_valid_nc_name` - -For example, consider the following JSON message: - -```json -{ - "abc def" : "this is a sample value" -} -``` - -The output converted to XML is as follows: - -```xml -this is a sample value -``` - -!!! Tip - The value 32 represents the standard char value of the space. This works other way around as well. When you need to convert XML to JSON, with a JSON element that needs to have a space within it. Then, you use " ` _JsonReader_32_ ` " in the XML element, to get the space in the JSON output. 
For example, if you consider the following XML payload; - - `this is a sample value` - - The JSON output will be as follows: - - `{ "abc def" : "this is a sample value"}` - -## Handling XML to JSON conversion - -When an XML element is converted to JSON, the following rules apply: - -### Empty XML elements - -Consider the following empty XML elements: - -=== "Example 1" - ``` html - - - - ``` - -=== "Example 2" - ``` html - - - - ``` - -By default, empty XML elements convert to JSON as null objects as shown below. - -``` java -{"object":null} -``` - -JSON representation of empty XML element will change as below by adding `'synapse.commons.enableXmlNullForEmptyElement' = false` under `[synapse_properties]` section in `MI_HOME/conf/deployment.toml` file. - -``` javascript -{"object":""} -``` - -!!! Info - `'synapse.commons.enableXmlNullForEmptyElement` property surrounded with single quotation to identify it as whole string rather dot separated TOML object. - -### Empty XML elements with the 'nil' attribute - -Consider the following XML element that has the 'nil' attribute set to true. - -``` - - - -``` - -By default, the above XML element converts to JSON as shown below. - -``` javascript -{"object":{"@nil":"true"}} -``` - -If you set the `synapse.enable_xml_nil=true` property in the `deployment.toml` file `[mediation]` section (stored in the `MI_HOME/conf/` directory), XML elements where the 'nil' attribute is set to true will be represented in JSON as null objects as shown below. - -``` javascript -{"object":null} -``` - -### Converting a payload between XML and JSON - -To convert an XML payload to JSON, set the `messageType` property to `application/json` in the axis2 scope before sending message to an endpoint. Similarly, to convert a JSON payload to XML, set the `messageType` property to `application/xml` or `text/xml`. For example: - -``` - - - - - - - - - -``` -If the request payload is as follows: - -``` - - - Bermuda Triangle - 25.0000 - 71.0000 - - - Eiffel Tower - 48.8582 - 2.2945 - - -``` - -Save the payload in request.xml file and use the following command to invoke this proxy service: - -```bash -curl -v -X POST -H "Content-Type:application/xml" -d@request.xml "http://localhost:8290/services/tojson" -``` - -The response payload will look like this: - -``` javascript -{ - "coordinates":{ - "location":[ - { - "name":"Bermuda Triangle", - "n":25.0000, - "w":71.0000 - }, - { - "name":"Eiffel Tower", - "n":48.8582, - "e":2.2945 - } - ] - } -} -``` - -Note that we have used the [Property mediator]({{base_path}}/reference/mediators/property-mediator) to mark the outgoing payload to be formatted as JSON: - -``` - -``` - -!!! Note - JSON requests cannot be converted to XML if it contains invalid XML characters. - -!!! Info - If you need to convert complex XML responses (e.g., XML with with ` xsi:type ` values), you will need to set the message type using the [Property mediator]({{base_path}}/reference/mediators/property-mediator) as follows: - `` - You will also need to ensure you register the following message builder and formatter as specified in [Message Builders and Formatters](https://ei.docs.wso2.com/en/latest/micro-integrator/setup/message_builders_formatters/message-builders-and-formatters/). - ``` - - - ``` - -### Accessing content from JSON payloads - -There are two ways to access the content of a JSON payload within the MI. 
- -- JSONPath expressions (with `json-eval()` method) -- XPath expressions - -JSONPath allows you to access fields of JSON payloads with faster -results and less processing overhead. Although it is possible to -evaluate XPath expressions on JSON payloads by assuming the XML -representation of the JSON payload, we recommend that you use JSONPath -to query JSON payloads. It is also possible to evaluate both JSONPath -and XPath expressions on a payload (XML/JSON) at the same time. - -You can use JSON path expressions with following mediators: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- **Log**: as a log property.

    ```
    <log>
        <property name="location"
                  expression="json-eval($.coordinates.location[0].name)"/>
    </log>
    ```

- **Property**: as a standalone property.

    ```
    <property name="location"
              expression="json-eval($.coordinates.location[0].name)"/>
    ```

- **PayloadFactory**: as the payload arguments.

    ```
    <payloadFactory media-type="json">
        <format>{"RESPONSE":"$1"}</format>
        <args>
            <arg evaluator="json" expression="$.coordinates.location[0].name"/>
        </args>
    </payloadFactory>
    ```

    !!! Important
        You must omit the `json-eval()` method within the payload arguments when evaluating JSON paths in the PayloadFactory mediator. Instead, select the correct expression evaluator (`xml` or `json`) for each argument.

- **Switch**: as the switch source.

    ```
    <switch source="json-eval($.coordinates.location[0].name)">
    ```

- **Filter**: as the filter source.

    ```
    <filter source="json-eval($.coordinates.location[0].name)"
            regex="Eiffel.*">
    ```

#### JSON path syntax

Suppose we have the following payload:

```
{
    "id": 12345,
    "id_str": "12345",
    "array": [ 1, 2, [ [], [{"inner_id": 6789}] ] ],
    "name": null,
    "object": {},
    "$schema_location": "unknown",
    "12X12": "image12x12.png"
}
```

The following table summarizes sample JSONPath expressions and their outputs:
| Expression | Result |
|------------|--------|
| `$.` | `{ "id":12345, "id_str":"12345", "array":[1, 2, [[],[{"inner_id":6789}]]], "name":null, "object":{}, "$schema_location":"unknown", "12X12":"image12x12.png"}` |
| `$.id` | `12345` |
| `$.name` | `null` |
| `$.object` | `{}` |
| `$.['$schema_location']` | `unknown` |
| `$.12X12` | `image12x12.png` |
| `$.array` | `[1, 2, [[],[{"inner_id":6789}]]]` |
| `$.array[2][1][0].inner_id` | `6789` |
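
For example, the following minimal sketch (assuming the sample payload above is the current message body) extracts the nested `inner_id` value into a message context property and logs it:

```
<!-- Stores the value 6789 in a message context property -->
<property name="innerId" expression="json-eval($.array[2][1][0].inner_id)"/>
<!-- Logs the extracted value -->
<log level="custom">
    <property name="extractedInnerId" expression="$ctx:innerId"/>
</log>
```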
    - -!!! Info - During mediation, evaluating expressions against a property does not modify the original payload. The changes will be reflected within the property itself and hence, it cannot be expected to get applied for the rest of the mediation similar to payload modification. - -We can also evaluate a JSONPath expression against a property that contains a JSON payload. - -To evaluate a JSONPath expression against a property, use the following syntax. - -```json -json-eval(:.) -``` - -Example 1: When the property is in the synapse message context. - -```json -json-eval($ctx:propertyName.student.name) -``` - -Example 2: When the property is in the axis2 message context. - -```json -json-eval($axis2:propertyName.student.name) -``` - -Example 3: When the property is in the transport scope. - -```json -json-eval($trp:propertyName.student.name) -``` - -Learn more about [JSONPath syntax](http://goessner.net/articles/JsonPath/). - -### Logging JSON payloads - -To log JSON payloads as JSON, use the [Log -mediator]({{base_path}}/reference/mediators/log-mediator) as shown -below. The ` json-eval() ` method returns the -` java.lang.String ` representation of the existing JSON -payload. - -``` - - - -``` - -To log JSON payloads as XML, use the Log mediator as shown below: - -``` - -``` - -For more information on logging, see [Troubleshooting, debugging, and logging]({{base_path}}/integrate/examples/json_examples/json-examples/#validating-json-messages) below. - -### Constructing and transforming JSON payloads - -To construct and transform JSON payloads, you can use the PayloadFactory -mediator or Script mediator as described in the rest of this section. - -#### PayloadFactory mediator - -The [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) provides the simplest way to work with JSON payloads. Suppose we have a service that returns the following response for a search query: - -``` javascript -{ - "geometry":{ - "location":{ - "lat":-33.867260, - "lng":151.1958130 - } - }, - "icon":"bar-71.png", - "id":"7eaf7", - "name":"Biaggio Cafe", - "opening_hours":{ - "open_now":true - }, - "photos":[ - { - "height":600, - "html_attributions":[ - ], - "photo_reference":"CoQBegAAAI", - "width":900 - } - ], - "price_level":1, - "reference":"CnRqAAAAtz", - "types":[ - "bar", - "restaurant", - "food", - "establishment" - ], - "vicinity":"48 Pirrama Road, Pyrmont" -} -``` - -We can create a proxy service that consumes the above response and creates a new response containing the location name and tags associated with the location based on several fields from the above response. - -``` - - - - - { - "location_response" : { - "name" : "$1", - "tags" : "$2" - }} - - - - - - - - - - - -``` - -Save the above payload in request.json file and use the following command to invoke this service: - -``` bash -curl -v POST -H "Content-Type:application/json" -d@request.json "http://localhost:8290/services/singleresponse" -``` - -The response payload would look like this: - -``` javascript -{ - "location_response":{ - "name":"Biaggio Cafe", - "tags":["bar", "restaurant", "food", "establishment"] - } -} -``` - -Note the following aspects of the proxy service configuration: - -- We use the ` payloadFactory ` mediator to construct the new JSON payload. -- The ` media-type ` attribute is set to ` json ` . -- Because JSONPath expressions are used in arguments, the ` json ` evaluators are specified. 
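
To tie the property-scope `json-eval()` syntax described above to a concrete case, the following sketch stores a JSON string in a message context property and then reads a field from it. The property name `studentRecord` and its payload are hypothetical, used only for illustration:

```
<!-- 'studentRecord' is a hypothetical property holding a JSON string -->
<property name="studentRecord" value='{"student":{"name":"Alice"}}'/>
<!-- Evaluates the JSONPath expression against the property, not the message body -->
<log level="custom">
    <property name="studentName" expression="json-eval($ctx:studentRecord.student.name)"/>
</log>
```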
- -##### Configuring the payload format - -The `` section of the proxy service -configuration defines the format of the response. Notice that in the -example above, the name and tags field values are enclosed by double -quotes ("), which creates a string value in the final response. If you -do not use quotes, the value that gets assigned uses the real type -evaluated by the expression (boolean, number, object, array, or null). - -It is also possible to instruct the PayloadFactory mediator to load a -payload format definition from the registry. This approach is -particularly useful when using large/complex payload formats in the -definitions. To load a format from the registry, click **Pick From -Registry** instead of **Define inline** when defining the PayloadFactory -mediator. - -For example, suppose we have saved the following text content in the -following registry location: -` conf:/repository/MI/transform.txt ` . - -``` -{ - "location_response" : { - "name" : "$1", - "tags" : "$2" - } -} -``` - -We can now modify the definition of the PayloadFactory mediator to use -this format text saved as a registry resource as the payload format. The -new configuration would look as follows (note that the -` ` element now uses the key attribute to point -to the registry resource key): - -``` - - - ...  - -``` - -!!! Note - When saving format text for the PayloadFactory mediator as a registry resource, be sure to save it as text content with the “text/plain” media type. - -#### Script mediator - -The [Script mediator]({{base_path}}/reference/mediators/script-mediator) in -JavaScript is useful when you need to create payloads that have -recurring structures such as arrays of objects. The Script mediator -defines the following important methods that can be used to manipulate -payloads in many different ways: - -- `getPayloadJSON` -- ` setPayloadJSON ` -- ` getPayloadXML ` -- ` setPayloadXML ` - -By combining any of the setters with a getter, we can handle almost any -type of content transformation within the MI. For example, by combining -` getPayloadXML ` and ` setPayloadJSON ` , -we can easily implement an XML to JSON transformation scenario. In -addition, we can perform various operations (such as deleting individual -keys, modifying selected values, and inserting new objects) on JSON -payloads to transform from one JSON format to another JSON format by -using the ` getPayloadJSON ` and -` setPayloadJSON ` methods. - -!!! Note - - If you are using **nashornJS** as the JavaScript language, and also if you have JSON operations defined in the Script mediator, you need to have JDK version `8u112` or a later version in your environment. - If your environment has an older JDK version, the Script mediator (that uses nashornJS and JSON operations) will not function properly because of this [JDK bug](https://bugs.openjdk.java.net/browse/JDK-8157160). That is, you will encounter server exceptions in the Micro Integrator. - - - If you are using JDK 15 or above, you need to manually copy the [nashorn-core](https://mvnrepository.com/artifact/org.openjdk.nashorn/nashorn-core/15.4) and [asm-util](https://mvnrepository.com/artifact/org.ow2.asm/asm-util/9.5) jars to the <MI_HOME>/lib directory since Nashorn was [removed](https://openjdk.org/jeps/372) from the JDK in Java 15. - -**Example** - -Following is an example of a JSON to JSON transformation performed by the Script mediator. 
Suppose a second service returns the following response: - -``` -{ - "results" : [ - { - "geometry" : { - "location" : { - "lat" : -33.867260, - "lng" : 151.1958130 - } - }, - "icon" : "bar-71.png", - "id" : "7eaf7", - "name" : "Biaggio Cafe", - "opening_hours" : { - "open_now" : true - }, - "photos" : [ - { - "height" : 600, - "html_attributions" : [], - "photo_reference" : "CoQBegAAAI", - "width" : 900 - } - ], - "price_level" : 1, - "reference" : "CnRqAAAAtz", - "types" : [ "bar", "restaurant", "food", "establishment" ], - "vicinity" : "48 Pirrama Road, Pyrmont" - }, - { - "geometry" : { - "location" : { - "lat" : -33.8668040, - "lng" : 151.1955790 - } - }, - "icon" : "generic_business-71.png", - "id" : "3ef98", - "name" : "Doltone House", - "photos" : [ - { - "height" : 600, - "html_attributions" : [], - "photo_reference" : "CqQBmgAAAL", - "width" : 900 - } - ], - "reference" : "CnRrAAAAV", - "types" : [ "food", "establishment" ], - "vicinity" : "48 Pirrama Road, Pyrmont" - } - ], - "status" : "OK" -} -``` - -The following proxy service shows how we can transform the above -response using JavaScript with the Script mediator. - -``` - - - - - - - - - - - - -``` - -The proxy service will convert the request into the following format: - -``` javascript -[ - { - "name":"Biaggio Cafe", - "tags":["bar", "restaurant", "food", "establishment", "pub"], - "id_str":"ID:7eaf7" - }, - { - "name":"Doltone House", - "tags":["food", "establishment", "pub"], - "id_str":"ID:3ef98" - } -] -``` - -Note that the transformation (line 9 through 17) has added a new field -` id_str ` and removed the old field -` id ` from the request, and it has added a new tag -` pub ` to the existing tags list of the payload. - - - -### XML to JSON transformation parameters - -You can use XML to JSON transformation parameters when you need -to transform XML formatted data into the JSON format. - -Following are the XML to JSON transformation parameters and their -descriptions: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| `synapse.commons.json.preserve.namespace` | Preserves the namespace declarations in the JSON output in XML to JSON transformations. | `false` |
| `synapse.commons.json.buildValidNCNames` | Builds valid XML NCNames when building XML element names in XML to JSON transformations. | `false` |
| `synapse.commons.json.output.autoPrimitive` | Allows primitive types in the JSON output in XML to JSON transformations. | `true` |
| `synapse.commons.json.output.namespaceSepChar` | The namespace prefix separation character for the JSON output in XML to JSON transformations. | `-` |
| `synapse.commons.json.output.enableNSDeclarations` | Adds XML namespace declarations in the JSON output in XML to JSON transformations. | `false` |
| `synapse.commons.json.output.disableAutoPrimitive.regex` | Disables auto primitive conversion in XML to JSON transformations. | `null` |
| `synapse.commons.json.output.jsonoutAutoArray` | Sets the JSON output to an array element in XML to JSON transformations. | `true` |
| `synapse.commons.json.output.jsonoutMultiplePI` | Sets the JSON output to an XML multiple processing instruction in XML to JSON transformations. | `true` |
| `synapse.commons.json.output.xmloutAutoArray` | Sets the XML output to an array element in XML to JSON transformations. | `true` |
| `synapse.commons.json.output.xmloutMultiplePI` | Sets the XML output to an XML multiple processing instruction in XML to JSON transformations. | `false` |
| `synapse.commons.enableXmlNilReadWrite` | Handles how empty XML elements with the 'nil' attribute are converted to JSON. | `false` |
| `synapse.commons.enableXmlNullForEmptyElement` | Handles how empty XML elements are converted to JSON. | `true` |
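
As with `synapse.commons.enableXmlNullForEmptyElement` shown earlier, these parameters can be set under the `[synapse_properties]` section of the `MI_HOME/conf/deployment.toml` file. The following is a minimal sketch with example values (not values you are required to change):

```toml
[synapse_properties]
# Keep namespace declarations when converting XML to JSON
'synapse.commons.json.preserve.namespace' = true
# Emit all JSON values as strings instead of auto-detecting primitives
'synapse.commons.json.output.autoPrimitive' = false
```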
    - -### Validating JSON messages - -You can use the [Validate mediator]({{base_path}}/reference/mediators/validate-mediator) -to validate JSON messages against a specified JSON schema as described -in the rest of this section. - -#### Validate mediator - -The parameters available in this section are as follows. - -| Parameter Name | Description | -|-----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **Schema keys defined for Validate Mediator** | This section is used to specify the key to access the main schema based on which validation is carried out, as well as to specify the JSON, which needs to be validated. | -| **Source** | The JSONPath expression to extract the JSON that needs to be validated. E.g: ` json-eval($.msg)" ` | - -Following example use the below sample schema -` StockQuoteSchema.json ` file. Add this sample schema -file (i.e. ` StockQuoteSchema.json ` ) to the following -Registry path: -` conf:/schema/StockQuoteSchema . ` -json. For instructions on adding the schema file to the Registry path, -see [Adding a Resource]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries). - -!!! Tip - When adding this sample schema file to the Registry, specify the **Media Type** as application/json. - -``` -{ - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "properties": { - "getQuote": { - "type": "object", - "properties": { - "request": { - "type": "object", - "properties": { - "symbol": { - "type": "string" - } - }, - "required": [ - "symbol" - ] - } - }, - "required": [ - "request" - ] - } - }, - "required": [ - "getQuote" - ] -}   -``` - -In this example, the required schema for validating messages going through the Validate mediator is given as a registry key (i.e. -` schema\StockQuoteSchema.json ` ). You do not have any source attributes specified. Therefore, the schema will be used to validate the complete JSON body. The mediation logic to follow if the validation fails is defined within the on-fail element. In this example, the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) creates a fault to be sent back to the party, which sends the message. - -``` - - - - - {"Error":"$1"} - - - - - - - - -``` - -An example for a valid JSON payload request is given below. - -``` -{ - "getQuote": { - "request": { - "symbol": "WSO2" - } - } -} -``` - -## Troubleshooting, debugging, and logging - -To assist with troubleshooting, you can enable debug logging at several stages of the mediation of a JSON payload by adding one or more of the following loggers to the `MI_HOME/conf/log4j2.properties` file and restarting the MI. - -!!! Info - Be sure to turn off these loggers when running the MI in a production environment, as logging every message will significantly reduce performance. 
- -Following are the available logger components: - -Message builders and formatters - -- `org.apache.synapse.commons.json.JsonStreamBuilder` -- `org.apache.synapse.commons.json.JsonStreamFormatter` -- `org.apache.synapse.commons.json.JsonBuilder` -- `org.apache.synapse.commons.json.JsonFormatter` - -JSON utility class - -`org.apache.synapse.commons.json.JsonUtil` - -PayloadFactory mediator - -`org.apache.synapse.mediators.transform.PayloadFactoryMediator` - -JSONPath evaluator - -`org.apache.synapse.util.xpath.SynapseJsonPath` - -Debug logging for the mediation of a JSON payload can be enabled by adding these loggers in log4j2.properties file. - -For example: -``` - logger.JsonStreamBuilder.name = org.apache.synapse.commons.json.JsonStreamBuilder - logger.JsonStreamBuilder.level = DEBUG - ``` -For more instructions on adding loggers, see [Configuring Log4j Properties]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties). diff --git a/en/docs/integrate/examples/message_store_processor_examples/intro-message-stores-processors.md b/en/docs/integrate/examples/message_store_processor_examples/intro-message-stores-processors.md deleted file mode 100644 index 20c767458d..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/intro-message-stores-processors.md +++ /dev/null @@ -1,97 +0,0 @@ -# Introduction to Message Stores -This sample demonstrates the basic functionality of a [message store]({{base_path}}/reference/synapse-properties/about-message-stores-processors). - -## Synapse configuration - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - ``` - -=== "On Store Sequence" - ```xml - - - - - - ``` - -=== "Message Store" - ```xml - - ``` - -=== "OnError Sequence" - ```xml - - - - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store), and [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Send the following request to invoke the service: - -```xml -POST http://localhost:9090/services/SampleProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -In the proxy service, the store mediator will store the -` getQuote ` request message in the -` MyStore ` message store. Before storing the request, -the message store mediator will invoke the -` onStoreSequence ` sequence. 
- -Analyze the logs and you will see the following log: - -```bash -INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: /services/SampleProxy, WSAction: urn:getQuote, SOAPAction: urn:getQuote, MessageID: urn:uuid:ab78ee5d-f5ed-4346-a0ea-1beb2e6c0b1d, Direction: request, On-Store = Storing message -``` - -You can then use the JMX view of the Synapse message store using -jconsole. diff --git a/en/docs/integrate/examples/message_store_processor_examples/loadbalancing-with-message-processor.md b/en/docs/integrate/examples/message_store_processor_examples/loadbalancing-with-message-processor.md deleted file mode 100644 index 288dbf130b..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/loadbalancing-with-message-processor.md +++ /dev/null @@ -1,113 +0,0 @@ -# Load Balancing with Message Forwarding Processor -This example demonstrates how the message forwarding processor handles load balancing. - -## Synapse configuration - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - ``` - -=== "Endpoint 1" - ```xml - -
    - - ``` - -=== "Endpoint 2" - ```xml - -
    - - ``` - -=== "Endpoint 3" - ```xml - -
    - - ``` - -=== "Message Store" - ```xml - - org.apache.activemq.jndi.ActiveMQInitialContextFactory - false - tcp://localhost:61616 - 1.1 - JMSMS - - ``` - -=== "Message Processor 1" - ```xml - - 1000 - 1000 - true - - ``` - -=== "Message Processor 2" - ```xml - - 1000 - 1000 - true - - ``` - -=== "Message Processor 3" - ```xml - - 1000 - 1000 - true - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [endpoints]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints), [message stores]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store) and [message processors]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq). - -You can analyze the message sent by the Micro Integrator to the secure service using TCPMon. - -On successful execution of the placeorder request, you will see the following message on the back-end: - -```xml -Sun Aug 18 10:58:00 IST 2013 samples.services.SimpleStockQuoteService :: Accepted order #5 for : 18851 stocks of WSO2 at $ 61.782478265721714 -``` - -If you send the placeorder request to the proxy service several times and observe the log on the back-end server, you will see that the messages are distributed among the back-end nodes. \ No newline at end of file diff --git a/en/docs/integrate/examples/message_store_processor_examples/securing-message-processor.md b/en/docs/integrate/examples/message_store_processor_examples/securing-message-processor.md deleted file mode 100644 index 53566727d1..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/securing-message-processor.md +++ /dev/null @@ -1,67 +0,0 @@ -# Securing the Message Forwarding Processor -This example demonstrates a use case where security policies are applied to the [message forwarding processor]({{base_path}}/integrate/examples/message_store_processor_examples/using-message-forwarding-processor). - -## Synapse configuration - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - ``` - -=== "Local Registry Entry" - ```xml - - ``` - -=== "Endpoint" - ```xml - -
    - -
    -
    - ``` - -=== "Message Store" - ```xml - - ``` - -=== "Message Processor" - ```xml - - 1000 - 1000 - true - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources), [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store), and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -The Micro Integrator is configured to enable WS-Security as per the policy specified by -'policy_3.xml' for the outgoing messages to the secured backend. The debug logs on the Micro Integrator -shows the encrypted message flowing to the service and the encrypted -response being received by the Micro Integrator. - -The security policy file `policy1.xml` can be downloaded from [policy1.xml](https://github.com/wso2-docs/WSO2_EI/blob/master/sec-policies/policy1.xml). -The security policy file URI needs to be updated with the path to the policy1.xml file. \ No newline at end of file diff --git a/en/docs/integrate/examples/message_store_processor_examples/using-jdbc-message-store.md b/en/docs/integrate/examples/message_store_processor_examples/using-jdbc-message-store.md deleted file mode 100644 index 79462d4fcb..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/using-jdbc-message-store.md +++ /dev/null @@ -1,139 +0,0 @@ -# Using the JDBC Message Store -In this sample, the client sends requests to a proxy service. The proxy service stores the messages in a JDBC message store. The back-end service is invoked by a message forwarding processor, which picks the messages stored in the JDBC message store. - -## Prerequisites - -Setup the database. Use one of the following DB scripts depending on which database type you want to use. - -=== "MySQL" - ```SQL - CREATE TABLE jdbc_message_store( - indexId BIGINT( 20 ) NOT NULL AUTO_INCREMENT , - msg_id VARCHAR( 200 ) NOT NULL , - message BLOB NOT NULL , - PRIMARY KEY ( indexId ) - ) - ``` - -=== "H2" - ```SQL - CREATE TABLE jdbc_message_store( - indexId BIGINT( 20 ) NOT NULL AUTO_INCREMENT , - msg_id VARCHAR( 200 ) NOT NULL , - message BLOB NOT NULL , - PRIMARY KEY ( indexId ) - ) - ``` - -!!! Note - You can create a similar script based on the database you want to set up. - -Add the relevant database driver into the `/lib` directory. - -## Synapse configuration -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -This sample configuration uses a MySQL database named **sampleDB** and the database table named **jdbc_message_store**. 
- -=== "Proxy Service" - ```xml - - - - - - - - - - - - - ``` - -=== "Message Store" - ```xml - - - com.mysql.jdbc.Driver - false - root - jdbc:mysql://localhost:3306/sampleDB - ******** - jdbc_message_store - - ``` - -=== "Message Processor" - ```xml - - - 1000 - 5 - 1 - 1000 - -1 - Disabled - 10000 - true - - ``` - -=== "Endpoint" - ```xml - -
    - - ``` -## Build and run - -The WSDL file `sample_proxy_1.wsdl` can be downloaded from [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl). -The WSDL URI needs to be updated with the path to the `sample_proxy_1.wsdl` file. - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store), [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor), and [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request to invoke the sample proxy service: - -```xml -POST http://localhost:9090/services/MessageStoreProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/message_store_processor_examples/using-jms-message-stores.md b/en/docs/integrate/examples/message_store_processor_examples/using-jms-message-stores.md deleted file mode 100644 index 681552335c..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/using-jms-message-stores.md +++ /dev/null @@ -1,267 +0,0 @@ -# Using the JMS Message Store -See the examples given below. - -## Example 1: Store and forward JMS messages - -In this example, the client sends requests to a **proxy service**, which stores the messages in a **JMS message store**. The **message forwarding processor** then picks the stored messages from the JMS message store and invokes the back-end service. - -### Synapse configurations - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-1) this example. - -=== "Message Store" - ```xml - - org.apache.activemq.jndi.ActiveMQInitialContextFactory - tcp://localhost:61616 - - ``` - -=== "Endpoint" - ```xml - -
    - - ``` - -=== "Proxy Service" - ```xml - - - - - - - - - - - ``` - -=== "Message Processor" - ```xml - - 4 - 4000 - true - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Message Store | Set the value of the `java.naming.provider.url` property to point to the `jndi.properties` file. In this case, `store.jms.destination` is a mandatory parameter. If you are using the WSO2 Message Broker, you need to create a queue named 'JMSMS' using the Message Broker (that is, the value you specify for `store.jms.destination`). |
| Endpoint | Defines the endpoint that is used to send the message to the back-end service. |
| Proxy Service | A proxy service that stores messages in the created message store. Note that you can use the `FORCE_SC_ACCEPTED` property in the message flow to send an HTTP 202 status to the client after the Micro Integrator accepts a message. If this property is not specified, the client that sends the request to the proxy service will time out because it does not get any response back from the proxy. |
| Message Processor | A message forwarding processor created using the configuration shown above. The message forwarding processor consumes the messages stored in the message store. |
    - -### Build and run (Example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints), and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) and set up the JMS Sender. - -Invoke the service: - -```bash -POST http://localhost:9090/services/Proxy1 HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -Note a message similar to the following example: - -```bash -SimpleStockQuoteService :: Accepted order for : 7482 stocks of IBM at $ 169.27205579038733 -``` - -## Example 2: Using a reply sequence to process response - -In the sample, when the message forwarding processor receives a response from the back-end service, it forwards it to a **replySequence** to process the response message. - -### Synapse configurations - -Following are the artifact configurations that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-2) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - ``` - -=== "Message Store" - ```xml - - org.apache.activemq.jndi.ActiveMQInitialContextFactory - tcp://localhost:61616 - - ``` - -=== "Message Processor" - ```xml - - 1000 - 1000 - 4 - replySequence - true - Disabled - 1 - - ``` - -=== "Sequence" - ```xml - - - - - - - ``` - -=== "Endpoint" - ```xml - -
    - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Message Store | Set the value of the `java.naming.provider.url` property to point to the `jndi.properties` file. In this case, `store.jms.destination` is a mandatory parameter. If you are using the WSO2 Message Broker, you need to create a queue named 'JMSMS' using the Message Broker (that is, the value you specify for `store.jms.destination`). |
| Endpoint | Defines the endpoint that is used to send the message to the back-end service. |
| Proxy Service | A proxy service that stores messages in the created message store. |
| Sequence | A sequence that handles the response received from the back-end service. |
| Message Processor | A message forwarding processor that consumes the messages stored in the message store. Compared to [Example 1](#example-1), this has an additional parameter, **message.processor.reply.sequence**, which points to the sequence that handles the response message. |
    - -### Build and run (Example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences), [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store) and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq). - -Invoke the service. Note a message similar to the following example printed in the backend service. diff --git a/en/docs/integrate/examples/message_store_processor_examples/using-message-forwarding-processor.md b/en/docs/integrate/examples/message_store_processor_examples/using-message-forwarding-processor.md deleted file mode 100644 index 8b665cef92..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/using-message-forwarding-processor.md +++ /dev/null @@ -1,105 +0,0 @@ -# Using the Message Forwarding Processor -This example demonstrates the usage of the message forwarding processor. - -## Synapse configuration -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - ``` - -=== "Message Store" - ```xml - - ``` - -=== "Message Processor" - ```xml - - 10000 - false - StockQuoteServiceEp - - ``` - -=== "Endpoint" - ```xml - -
    - - -1 - 1.0 - -
    -
    - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store) and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) and set up the JMS Sender. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request to invoke the service: - -```bash -POST http://localhost:9090/services/StockQuoteProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -Now Start the SimpleStockQuoteService. When you Start the service you will see message getting delivered to the service. Even though service is down when we invoke it from the client. Here in the Proxy Service store mediator will store the getQuote request message in the "MyStore" Message Store. Message Processor will send the message to the endpoint configured as a message context property. Message processor will remove the message from the store only if message delivered successfully. diff --git a/en/docs/integrate/examples/message_store_processor_examples/using-message-sampling-processor.md b/en/docs/integrate/examples/message_store_processor_examples/using-message-sampling-processor.md deleted file mode 100644 index 77d6025fab..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/using-message-sampling-processor.md +++ /dev/null @@ -1,110 +0,0 @@ -# Using the Message Sampling Processor -This example demonstrates the usage of the message sampling processor. - -## Synapse configuration - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Send Sequence" - ```xml - - - -
    - - -1 - 1.0 - -
    -
    -
    -
    - ``` - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - ``` - -=== "Message Store" - ```xml - - ``` - -=== "Message Processor" - ```xml - - 20000 - send_seq - true - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences), [message store]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-store), and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) and set up the JMS Sender. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request to invoke the service: - -```bash -POST http://localhost:9090/services/StockQuoteProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -When you send the request, the message will be dispatched to the proxy service. In the proxy service, the store mediator will store the getQuote request message in the "MyStore" message store. The message processor will consume the messages, and forward them to the "send_seq" sequence in configured rate. You will observe that the service invocation rate is not changing when we increase the rate of proxy service invocation. diff --git a/en/docs/integrate/examples/message_store_processor_examples/using-rabbitmq-message-stores.md b/en/docs/integrate/examples/message_store_processor_examples/using-rabbitmq-message-stores.md deleted file mode 100644 index a0fa81dbdb..0000000000 --- a/en/docs/integrate/examples/message_store_processor_examples/using-rabbitmq-message-stores.md +++ /dev/null @@ -1,145 +0,0 @@ -# Using the RabbitMQ Message Store - -In this example, the client sends requests to a **proxy service**, which stores the messages in a **RabbitMQ message store**. The **message forwarding processor** then picks the stored messages from the RabbitMQ message store and invokes the back-end service. - -### Synapse configurations - -Following are the artifact configurations that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run-example-1) this example. - -=== "Message Store" - ```xml - - - localhost - false - 5672 - - - - false - - xyz - - - ``` - -=== "Endpoint" - ```xml - -
    - - ``` - -=== "Proxy Service" - ```xml - - - - - - - - - - - ``` - -=== "Message Processor" - ```xml - - 4 - 4000 - true - - ``` - -See the descriptions of the above configurations: - - - - - - - - - - - - - - - - - - - - - - -
| Artifact | Description |
|----------|-------------|
| Message Store | The RabbitMQ message store. |
| Endpoint | Defines the endpoint that is used to send the message to the back-end service. |
| Proxy Service | A proxy service that stores messages in the created message store. Note that you can use the `FORCE_SC_ACCEPTED` property in the message flow to send an HTTP 202 status to the client after the Micro Integrator accepts a message. If this property is not specified, the client that sends the request to the proxy service will time out because it does not get any response back from the proxy. |
| Message Processor | A message forwarding processor that consumes the messages stored in the message store. |
    - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints), and [message processor]({{base_path}}/integrate/develop/creating-artifacts/creating-a-message-processor) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -[Configure the RabbitMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) with the Micro Integrator. - -Invoke the service: - -```bash -POST http://localhost:9090/services/Proxy1 HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -Note a message similar to the following example: - -```bash -SimpleStockQuoteService :: Accepted order for : 7482 stocks of IBM at $ 169.27205579038733 -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/message_transformation_examples/json-to-soap-conversion.md b/en/docs/integrate/examples/message_transformation_examples/json-to-soap-conversion.md deleted file mode 100644 index be2e3e1935..0000000000 --- a/en/docs/integrate/examples/message_transformation_examples/json-to-soap-conversion.md +++ /dev/null @@ -1,267 +0,0 @@ -# Converting JSON to SOAP - -Let's consider a scenario where you have a SOAP-based backend and a JSON client. The SOAP backend is exposed as a REST API in the Micro Integrator. - -When the JSON client sends a message to the SOAP backend, the REST API in the Micro Integrator should convert the JSON message to SOAP. The backend will process the SOAP request and generate a response for the JSON client. The Micro Integrator should then convert the SOAP response back to JSON and return it to the client. - -The following examples explain different methods of converting JSON messages to SOAP using the Micro Integrator. - -## Using the PayloadFactory Mediator - -Let's convert JSON messages to SOAP using the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator). - -### Synapse configuration -Following is a sample REST API configuration that we can use to implement this scenario. -See the instructions on how to [build and run](#build-and-run-example-1) this example. - -```xml - - - - - - - - - - - - $1 - $2 - $3 - - - - - - - - - - - - - - -
    - - -1 - -1 - 0 - - - 0 - -
    -
    -
    - - -
    - - -
    -
    -``` - -### Build and run (example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the REST API: - -- HTTP method: POST -- Request URL: http://localhost:8290/stockorder_api -- Content-Type: application/json -- SoapAction: urn:placeOrder -- Message Body: - ```json - {"placeOrder": - {"order": - { - "symbol":"IBM", - "price":"3.141593E0", - "quantity":"4" - } - } - } - ``` - -Check the log printed on the back-end service's terminal to confirm that the order is successfully placed. - -```xml -2020-01-30 16:39:51,902 INFO [wso2/stockquote_service] - Stock quote service invoked. -2020-01-30 16:39:51,904 INFO [wso2/stockquote_service] - Generating placeOrder response -2020-01-30 16:39:51,904 INFO [wso2/stockquote_service] - The order was placed. -``` - -The JSON client will receive the following response from the backend confirming that the stock order is placed: - -```json -{ - "Envelope": { - "Body": { - "placeOrderResponse": { - "status": "created" - } - } - } -} -``` - -## Using the XSLT Mediator - -Let's convert JSON messages to SOAP using the [XSLT mediator]({{base_path}}/reference/mediators/xslt-mediator). The XSLT, which specifies the message conversion parameters, is stored in the product registry as a **local entry**. - -### Synapse configuration -Following are the synapse configurations for implementing this scenario. -See the instructions on how to [build and run](#build-and-run-example-2) this example. - -=== "REST Api" - ```xml - - - - - - -
    - - - - - - -
    - - -1 - 1 - - - 0 - -
    -
    -
    - - - - - - - - ``` - -=== "Local Entry - In Transform XSLT" - ```xml - - - - - - - - - - - - - ``` - -### Build and run (example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an ESB Config project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project). -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Create a local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries) named **in_transform** with the above XSLT configuration. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the REST API: - -- HTTP method: POST -- Request URL: http://localhost:8290/stockorder_api -- Content-Type: application/json -- SoapAction: urn:getQuote -- Message Body: - ```json - {"getQuote": - {"request": - {"symbol":"IBM"} - } - } - ``` - -Check the log printed on the back-end service's terminal to confirm that the request is successfully sent. - -```xml -2020-01-30 15:35:28,088 INFO [wso2/stockquote_service] - Stock quote service invoked. -2020-01-30 15:35:28,090 INFO [wso2/stockquote_service] - Generating getQuote response for IBM -2020-01-30 15:35:28,091 INFO [wso2/stockquote_service] - Stock quote generated. -``` - -The JSON client will receive the following response from the backend with the relevant stock quote: - -```json -{ - "Envelope": { - "Body": { - "getQuoteResponse": { - "change": -2.86843917118114, - "earnings": -8.540305401672558, - "high": -176.67958828498735, - "last": 177.66987465262923, - "low": -176.30898912339075, - "marketCap": 56495579.98178506, - "name": "IBM Company", - "open": 185.62740369461244, - "peRatio": 24.341353665128693, - "percentageChange": -1.4930577008849097, - "prevClose": 192.11844053187397, - "symbol": "IBM", - "volume": 7791 - } - } - } -} -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/message_transformation_examples/pox-to-json-conversion.md b/en/docs/integrate/examples/message_transformation_examples/pox-to-json-conversion.md deleted file mode 100644 index c883bf003a..0000000000 --- a/en/docs/integrate/examples/message_transformation_examples/pox-to-json-conversion.md +++ /dev/null @@ -1,200 +0,0 @@ -# Converting POX Messages to JSON - -The following examples explain different methods of converting POX messages to JSON using the Micro Integrator. - -## Using the messageType property - -Let's convert a POX message to JSON using the [messageType property]({{base_path}}/reference/mediators/property-reference/generic-Properties#messagetype). - -### Synapse configuration -Following is a sample proxy service configuration that we can use to implement this scenario. - -!!! 
Tip - Note that after the [messageType property]({{base_path}}/reference/mediators/property-reference/generic-Properties#messagetype) completes the message convertion, we are using the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator) to return the converted message back to the JSON client. - -See the instructions on how to [build and run](#build-and-run-example-1) this example. - -```xml - - - - - - - - - - -``` - -### Build and run (example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke the proxy service: - -- HTTP method: POST -- Request URL: http://localhost:8290/services/POX_To_JSON_Convert_Msgtype_Proxy -- Content-Type: text/xml -- Message Body: - ```xml - - - - - - Kennedy Space Center Historic Launch Complex 39A - KSC LC 39A - - 18 - 18 - - Florida - 28.6080585 - -80.6039558 - https://en.wikipedia.org/wiki/Kennedy_Space_Center_Launch_Complex_39#Launch_Pad_39A - - - - - ``` - -The converted JSON response is returned as follows: - -```json -{ - "Envelope": { - "Header": null, - "Body": { - "SpaceX_LaunchPads": { - "Station": { - "Name": "Kennedy Space Center Historic Launch Complex 39A", - "Short_Name": "KSC LC 39A", - "Launches": { - "Attempts": 18, - "Successful": 18 - }, - "Region": "Florida", - "Latitude": 28.6080585, - "Longitude": -80.6039558, - "WIKI_Link": "https://en.wikipedia.org/wiki/Kennedy_Space_Center_Launch_Complex_39#Launch_Pad_39A" - } - } - } - } -} -``` - -## Using the PayloadFactory Mediator - -Let's convert a POX message to JSON using a [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator). - -### Synapse configuration -Following is a sample proxy service configuration that we can use to implement this scenario. - -!!! Tip - Note that after the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) completes the message convertion, we are using the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator) to return the converted message back to the JSON client. - -See the instructions on how to [build and run](#build-and-run-example-2) this example. - -```xml - - - - - - - { - "name": "$1", - "location": { - "region": "$2", - "latitude": $3, - "longitude": $4 - }, - "attempted_launches": $5, - "successful_launches": $6, - "wikipedia": "$7", - "site_name_long": "$8" - } - - - - - - - - - - - - - - - - - -``` - -### Build and run (example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. 
- -Invoke the proxy service: - -- HTTP method: POST -- Request URL: http://localhost:8290/services/POX_To_JSON_Convert_PayloadFactory_Proxy -- Content-Type: text/xml -- Message Body: - ```xml - - - Kennedy Space Center Historic Launch Complex 39A - KSC LC 39A - - 18 - 18 - - Florida - 28.6080585 - -80.6039558 - https://en.wikipedia.org/wiki/Kennedy_Space_Center_Launch_Complex_39#Launch_Pad_39A - - - ``` - -The converted JSON response is returned as follows: - -```json -{ - "name": "KSC LC 39A", - "location": { - "region": "Florida", - "latitude": 28.6080585, - "longitude": -80.6039558 - }, - "attempted_launches": 18, - "successful_launches": 18, - "wikipedia": "https://en.wikipedia.org/wiki/Kennedy_Space_Center_Launch_Complex_39#Launch_Pad_39A", - "site_name_long": "Kennedy Space Center Historic Launch Complex 39A" -} -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_between_fix_versions.md b/en/docs/integrate/examples/protocol-switching/switching_between_fix_versions.md deleted file mode 100644 index a7e3082cb5..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_between_fix_versions.md +++ /dev/null @@ -1,81 +0,0 @@ -# Switching between FIX Versions - -This sample demonstrates how you can use WSO2 Micro Integrator to accept FIX input via the FIX transport layer and dispatch to another FIX acceptor that accept messages in a different FIX version. Here you will see how the Micro Integrator receives FIX 4.0 messages and simply forwards it to the FIX 4.1 endpoint. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. - -```xml - - - -
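<!-- The proxy definition above was stripped during extraction. A minimal sketch,
     assuming the standard Synapse FIX transport parameters whose values survive
     below; the FIX 4.1 endpoint URI and its session qualifiers are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FIXProxy" transports="fix" startOnLoad="true">
    <target>
        <!-- Forward the incoming FIX 4.0 message to a FIX 4.1 acceptor session -->
        <endpoint>
            <address uri="fix://localhost:19877?BeginString=FIX.4.1&amp;SenderCompID=SYNAPSE&amp;TargetCompID=EXEC"/>
        </endpoint>
        <inSequence>
            <log level="full"/>
        </inSequence>
        <outSequence>
            <log level="full"/>
        </outSequence>
    </target>
    <parameter name="transport.fix.AcceptorConfigURL">file:repository/samples/resources/fix/fix-synapse-m40.cfg</parameter>
    <parameter name="transport.fix.AcceptorMessageStore">file</parameter>
    <parameter name="transport.fix.InitiatorConfigURL">file:repository/samples/resources/fix/synapse-sender-m.cfg</parameter>
    <parameter name="transport.fix.InitiatorMessageStore">file</parameter>
</proxy>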
    - - - - - - - - - - file:repository/samples/resources/fix/fix-synapse-m40.cfg - file - file:repository/samples/resources/fix/synapse-sender-m.cfg - file - -``` - - \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_between_http_and_msmq.md b/en/docs/integrate/examples/protocol-switching/switching_between_http_and_msmq.md deleted file mode 100644 index aa379f2656..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_between_http_and_msmq.md +++ /dev/null @@ -1,101 +0,0 @@ -# Switching between HTTP and MSMQ - -This example demonstrates how you can use the Micro Integrator to switch messages between HTTP and MSMQ during message mediation. - -In this example, stockquote requests are placed to the stockquote proxy service, which sends the incoming request message to the MSMQ server. Another proxy service named `msmqTest` listens to the MSMQ queue, invokes the message from the MSMQ server, and sends the message to the backend. - -## Synapse configuration - -=== "MSMQ Test proxy" - ```xml - - - - - - -
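<!-- The proxy definition above was stripped during extraction. A minimal sketch of
     the MSMQ consumer, assuming the listener reads from a private queue matching
     the service name. The ContentType parameter name and the backend endpoint URI
     are assumptions; only the value "application/xml" survives from the original. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="msmqTest" transports="msmq" startOnLoad="true">
    <target>
        <inSequence>
            <log level="full"/>
            <!-- One-way invocation of the HTTP backend -->
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
    <parameter name="ContentType">application/xml</parameter>
</proxy>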
    - - - - - - - application/xml - - - ``` - -=== "StockQuote proxy" - ```xml - - - - -
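<!-- Stripped during extraction. A minimal sketch of the HTTP-facing proxy that
     places incoming stock quote requests on the MSMQ queue; the msmq: endpoint
     URI format and queue name are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="http" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Fire-and-forget: acknowledge the HTTP client immediately -->
            <property name="OUT_ONLY" value="true"/>
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="msmq:DIRECT=OS:localhost\private$\msmqTest"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>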
    - - - - - - - - - - - - - - - ``` - - \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_amqp.md b/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_amqp.md deleted file mode 100644 index f62949f57d..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_amqp.md +++ /dev/null @@ -1,47 +0,0 @@ -# Switch from FIX to AMQP - -This example demonstrates how WSO2 Micro Integrator receives messages through FIX and forwards them through AMQP. - -Synapse will forward the order request by binding it to a JMS message payload and sending it to the AMQP consumer. The AMQP consumer will send an execution back to Banzai. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - -
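<!-- Stripped during extraction. A minimal sketch, assuming a JMS endpoint bound to
     an AMQP (Qpid) connection factory; the endpoint URI, JNDI names, and properties
     file location are assumptions, while the acceptor parameters below reflect the
     values that survive in the original. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FIXProxy" transports="fix" startOnLoad="true">
    <target>
        <!-- Bind the FIX order to a JMS payload and hand it to the AMQP consumer -->
        <endpoint>
            <address uri="jms:/QpidStockQuoteService?transport.jms.ConnectionFactoryJNDIName=qpidConnectionfactory&amp;java.naming.factory.initial=org.apache.qpid.jndi.PropertiesFileInitialContextFactory&amp;java.naming.provider.url=repository/samples/resources/fix/con.properties&amp;transport.jms.ReplyDestination=replyQueue"/>
        </endpoint>
        <inSequence>
            <log level="full"/>
        </inSequence>
    </target>
    <parameter name="transport.fix.AcceptorConfigURL">file:/{file_path}/fix-synapse.cfg</parameter>
    <parameter name="transport.fix.AcceptorMessageStore">file</parameter>
</proxy>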
    - - - - - - - - - - - file:/{file_path}/fix-synapse.cfg - file - -``` -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Download the FIX transport resources from [here](https://github.com/wso2-docs/WSO2_EI/tree/master/FIX-transport-resources) and change the `{file_path}` of the proxy with the downloaded location. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Enable the FIX transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-fix-transport) and start the Micro-Integrator. - -Run the quickfixj **Banzai** sample application. - -```bash -java -jar quickfixj-examples-banzai-2.1.1.jar -``` -Send a sample request from Banzai to Synapse. Then the message count of the queue should be increased. \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_http.md b/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_http.md deleted file mode 100644 index 51ab303b2e..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_fix_to_http.md +++ /dev/null @@ -1,100 +0,0 @@ -# Switch from FIX to HTTP - -This example demonstrates how WSO2 Micro Integrator receives messages through FIX and forwards them through HTTP. - -The Micro Integrator will forward the order request to a one-way `placeOrder` operation in the back-end service. Micro Integrator uses a simple XSLT Mediator to transform the incoming FIX to a SOAP message. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - -
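<!-- Stripped during extraction. A minimal sketch, assuming the incoming FIX message
     is transformed with the FIX_XSLT resource shown below before being sent to the
     one-way placeOrder operation; the endpoint URI and XSLT registry key are
     assumptions, while the acceptor parameters reflect values surviving below. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FIXProxy" transports="fix" startOnLoad="true">
    <target>
        <inSequence>
            <log level="full"/>
            <!-- Transform the FIX fields into a placeOrder SOAP payload -->
            <xslt key="{reg_path}/FIX_XSLT.xslt"/>
            <!-- placeOrder is one-way, so do not wait for a response -->
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
    <parameter name="transport.fix.AcceptorConfigURL">file:/{file_path}/fix-synapse.cfg</parameter>
    <parameter name="transport.fix.AcceptorMessageStore">file</parameter>
</proxy>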
    - - - - - - - file:/{file_path}/fix-synapse.cfg - file - - -``` - -FIX_XSLT: - -```xml - - - - - - - - - - - - - - - - - - - -``` - -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Add the above XSLT as a registry resource. -4. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -5. Download the FIX transport resources from [here](https://github.com/wso2-docs/WSO2_EI/tree/master/FIX-transport-resources) and change the `{file_path}` of the proxy with the downloaded location. -6. Change the `{reg_path}` with the XSLT registry location. -6. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Enable the FIX transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-fix-transport) and start the Micro-Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Run the quickfixj **Banzai** sample application. - -```bash -java -jar quickfixj-examples-banzai-2.1.1.jar -``` -Send an order request from Banzai to Synapse. For example, Buy DELL 1000 @ 100. User has to send a "Limit" Order because price is a mandatory field for placeOrder operation. diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_ftp_listener_to_mail_sender.md b/en/docs/integrate/examples/protocol-switching/switching_from_ftp_listener_to_mail_sender.md deleted file mode 100644 index 90355061fa..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_ftp_listener_to_mail_sender.md +++ /dev/null @@ -1,81 +0,0 @@ -# Switching from FTP Listener to Mail Sender - -This example demonstrates how WSO2 Micro Integrator receives messages through the FTP transport listener and forwards the messages through the mail transport sender. - -VFS transport listener will pick the file from the directory in the FTP server. The file in the FTP directory will be deleted. The response will be sent to the given e-mail address. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - vfs:sftp://guest:guest@localhost/test?vfs.passive=true - text/xml - .*\.xml - 15 - - -
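<!-- Stripped during extraction. A minimal sketch, assuming the standard VFS-to-mail
     sample: the polled file is sent to the stock quote backend and the response is
     mailed. The VFS parameter values are those surviving above; the endpoint URIs,
     mail address, and WSDL registry path are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="vfs" startOnLoad="true">
    <parameter name="transport.vfs.FileURI">vfs:sftp://guest:guest@localhost/test?vfs.passive=true</parameter>
    <parameter name="transport.vfs.ContentType">text/xml</parameter>
    <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <!-- Remove the file from the FTP directory once it has been read -->
    <parameter name="transport.vfs.ActionAfterProcess">DELETE</parameter>
    <target>
        <inSequence>
            <header name="Action" value="urn:getQuote"/>
        </inSequence>
        <endpoint>
            <address format="soap12" uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
        <outSequence>
            <property name="transport.mail.Subject" value="Stock quote response"/>
            <!-- Send the response to the configured e-mail address -->
            <send>
                <endpoint>
                    <address uri="mailto:user@host"/>
                </endpoint>
            </send>
        </outSequence>
    </target>
    <publishWSDL uri="gov:sample_proxy_1.wsdl"/>
</proxy>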
    - - - - - - -``` - -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources) (change the registry path of the proxy accordingly). -4. Create the proxy service with the [VFS configurations parameters given above]({{base_path}}/reference/config-catalog/#vfs-transport). -5. Configure [MailTo transport sender]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-mailto-transport). -6. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator and start the Micro Integrator. - -Set up the back-end service. - -1. Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Add the following request.xml file to the sftp location and verify the content received via the mailto transport. - -```xml - - - - - - WSO2 - - - - -``` diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_http_to_fix.md b/en/docs/integrate/examples/protocol-switching/switching_from_http_to_fix.md deleted file mode 100644 index cac3b66947..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_http_to_fix.md +++ /dev/null @@ -1,85 +0,0 @@ -# Switching from HTTP to FIX - -This example demonstrates how WSO2 Micro Integrator receives messages in HTTP and forwards them through FIX. - -Synapse will create a session with the **Executor** and forward the order request. The first response coming from the Executor will be sent back over HTTP. The Executor generally sends two responses for each incoming order request. But since the response has to be forwarded over HTTP, only one can be sent back to the client. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - - -
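<!-- Stripped during extraction. A minimal sketch, assuming the request is forwarded
     to the Executor's FIX session. The FIX endpoint URI is an assumption, while the
     initiator parameters below reflect the values that survive in the original
     (synapse-sender.cfg, file, false, true). -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="HTTPToFIXProxy" transports="http https" startOnLoad="true">
    <target>
        <endpoint>
            <address uri="fix://localhost:19876?BeginString=FIX.4.0&amp;SenderCompID=SYNAPSE&amp;TargetCompID=EXEC"/>
        </endpoint>
        <inSequence>
            <log level="full"/>
        </inSequence>
        <outSequence>
            <log level="full"/>
            <send/>
        </outSequence>
    </target>
    <parameter name="transport.fix.InitiatorConfigURL">file:/{file_path}/synapse-sender.cfg</parameter>
    <parameter name="transport.fix.InitiatorMessageStore">file</parameter>
    <!-- Only the first FIX response is returned to the HTTP client -->
    <parameter name="transport.fix.SendAllToInSequence">false</parameter>
    <parameter name="transport.fix.DropExtraResponses">true</parameter>
</proxy>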
    - - - - - - - - - file:/{file_path}/synapse-sender.cfg - file - false - true - -``` - -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Download the FIX transport resources from [here](https://github.com/wso2-docs/WSO2_EI/tree/master/FIX-transport-resources) and change the `{file_path}` of the proxy with the downloaded location. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Enable the FIX transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-fix-transport) and start the Micro-Integrator. - -Run the quickfixj **Executor** sample application. - -```bash -java -jar quickfixj-examples-executor-2.1.1.jar -``` - -Send the following request to the Micro Integrator and we will receive the response from the executor application. - -```bash -curl -X POST \ - http://localhost:8290/services/HTTPToFIXProxy \ - -H 'cache-control: no-cache' \ - -H 'content-type: text/xml' \ - -H 'soapaction: \"urn:mediate\"' \ - -d ' - - - - -
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header/>
   <soapenv:Body>
      <message>
         <header>
            <field id="35">D</field>
            <field id="52">Fri Nov 08 11:04:31 IST 2019</field>
         </header>
         <body>
            <field id="11">122333</field>
            <field id="21">1</field>
            <field id="38">5</field>
            <field id="40">1</field>
            <field id="54">1</field>
            <field id="55">IBM</field>
            <field id="59">0</field>
         </body>
         <trailer/>
      </message>
   </soapenv:Body>
</soapenv:Envelope>
    ' -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_https_to_jms.md b/en/docs/integrate/examples/protocol-switching/switching_from_https_to_jms.md deleted file mode 100644 index 7e7a3fa220..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_https_to_jms.md +++ /dev/null @@ -1,84 +0,0 @@ -# Switching from HTTP(S) to JMS - -This example demonstrates how WSO2 Micro Integrator receives messages in HTTP and passes the messages through JMS. The Micro Integrator uses a proxy service over HTTP, forwards the received messages to the EPR using JMS, and immediately responds with a 202. - -If the previous example on [JMS to HTTP]({{base_path}}/integrate/examples/switching_from_JMS_to_HTTP) is also configured, it will pick the message from queue and send it to the stockquote proxy. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - -
    - - - - - - - - - - - -``` - -Example JMS connection URL for WSO2 MB - -```xml -jms:/Queue1?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&java.naming.provider.url=conf/jndi.properties&transport.jms.DestinationType=queue -``` -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources) (change the registry path of the proxy accordingly). -4. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -6. [Configure MI with the selected message broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) and start the Micro-Integrator. - -Invoke the HTTPtoJMSStockQuoteProxy with the following payload (using SOAP UI or CURL): - -```xml - - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` - -Sample CURL: - -```bash -curl -X POST \ - http://localhost:8290/services/HTTPtoJMSStockQuoteProxy.HTTPtoJMSStockQuoteProxyHttpSoap11Endpoint \ - -H 'cache-control: no-cache' \ - -H 'content-type: text/xml' \ - -H 'soapaction: \"urn:placeOrder\"' \ - -d ' - - - - - 172.23182849731984 - 18398 - IBM - - - -' -``` - -Now, the message count in the queue should be increased. If the JMS listener is also setup, it should pick the message from the queue and send to the stockquote proxy. diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_jms_to_http.md b/en/docs/integrate/examples/protocol-switching/switching_from_jms_to_http.md deleted file mode 100644 index b6f6d22a22..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_jms_to_http.md +++ /dev/null @@ -1,74 +0,0 @@ -# Switching from JMS to HTTP(S) - -This example demonstrates how the Micro Integrator receives a messages over the JMS transport and forwards it over an HTTP/S transport. In this sample, the client sends a request message to the proxy service exposed in JMS. The Micro Integrator forwards this message to the HTTP endpoint and returns the reply back to the client through a JMS temporary queue. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - -
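<!-- Stripped during extraction. A minimal sketch of the JMS-facing proxy, using the
     listener parameter values that survive below (contentType/text/xml, Queue1,
     myQueueListener); the backend endpoint URI is an assumption. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="JMStoHTTPStockQuoteProxy" transports="jms" startOnLoad="true">
    <target>
        <inSequence>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <!-- The reply goes back to the client through a JMS temporary queue -->
            <send/>
        </outSequence>
    </target>
    <parameter name="transport.jms.ContentType">
        <rules>
            <jmsProperty>contentType</jmsProperty>
            <default>text/xml</default>
        </rules>
    </parameter>
    <parameter name="transport.jms.Destination">Queue1</parameter>
    <parameter name="transport.jms.ConnectionFactory">myQueueListener</parameter>
</proxy>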
    - - - - - - - contentType - text/xml - - - Queue1 - myQueueListener - -``` - -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -5. Start the selected message broker and create a queue with name Queue1. -6. [Configure MI with the selected message broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) and start the Micro-Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Publish the following XML message to the Queue1. -```xml - - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_tcp_to_https.md b/en/docs/integrate/examples/protocol-switching/switching_from_tcp_to_https.md deleted file mode 100644 index da93ff8958..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_tcp_to_https.md +++ /dev/null @@ -1,92 +0,0 @@ -# Switching from TCP to HTTP/S - -This example demonstrates how WSO2 Micro Integrator receives SOAP messages over TCP and forwards them over HTTP. - -TCP is not an application layer protocol. Hence there are no application-level headers available in the requests. The Micro Integrator has to simply read the XML content coming through the socket and dispatch it to the right proxy service based on the information available in the message payload. The TCP transport is capable of dispatching requests based on addressing headers or the first element in the SOAP body. In this sample, we will get the sample client to send WS-Addressing headers in the request. Therefore, the dispatching will take place based on the addressing header values. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - -
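<!-- Stripped during extraction. A minimal sketch, assuming requests are dispatched
     to this proxy via the WS-Addressing headers in the TCP payload; the backend
     endpoint URI is an assumption. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="tcp" startOnLoad="true">
    <target>
        <inSequence>
            <log level="full"/>
            <!-- placeOrder is one-way, so do not wait for a response -->
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>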
    - - - - - -``` -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service. - -* Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -* Extract the downloaded zip file. -* Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -* Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -[Enable the TCP transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-tcp-transport) and start the Micro-Integrator. - -Send the following message via TCP to the TCP listener port. -```xml - - - tcp://localhost:6060/services/StockQuoteProxy - - http://www.w3.org/2005/08/addressing/none - - urn:uuid:464d2e2a-cd47-4c63-a7c6-550c282a1e3c - urn:placeOrder - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` -In Linux, we can save the above request in a request.xml file and use netcat to send the TCP request. -``` -netcat localhost 6060 < request.xml -``` - -You will see the following response in the back-end service's console: - -```bash -INFO [wso2/stockquote_service] - Stock quote service invoked. -INFO [wso2/stockquote_service] - Generating placeOrder response -INFO [wso2/stockquote_service] - The order was placed. -``` diff --git a/en/docs/integrate/examples/protocol-switching/switching_from_udp_to_https.md b/en/docs/integrate/examples/protocol-switching/switching_from_udp_to_https.md deleted file mode 100644 index c68000e69b..0000000000 --- a/en/docs/integrate/examples/protocol-switching/switching_from_udp_to_https.md +++ /dev/null @@ -1,85 +0,0 @@ -# Switching from UDP to HTTP/S - -This example demonstrates how WSO2 Micro Integrator receives SOAP messages over UDP and forwards them over HTTP. - -## Synapse configuration - -Following are the integration artifacts (proxy service) that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - -
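<!-- Stripped during extraction. A minimal sketch, using the UDP listener parameter
     values that survive below (port 9999, text/xml); the backend endpoint URI and
     WSDL registry path are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="udp" startOnLoad="true">
    <target>
        <inSequence>
            <!-- placeOrder is one-way, so do not wait for a response -->
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
    <parameter name="transport.udp.port">9999</parameter>
    <parameter name="transport.udp.contentType">text/xml</parameter>
    <publishWSDL uri="gov:sample_proxy_1.wsdl"/>
</proxy>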
    - - - - - - 9999 - text/xml - -``` - -## Build and Run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Add [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) as a [registry resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources) (change the registry path of the proxy accordingly). -4. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -* Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -* Extract the downloaded zip file. -* Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -* Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -[Enable the UDP transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-udp-transport) and start the Micro-Integrator. - -Send the following message via UDP to the UDP listener port (9999). -```xml - - - udp://localhost:9999/services/StockQuoteProxy - - http://www.w3.org/2005/08/addressing/none - - urn:uuid:464d2e2a-cd47-4c63-a7c6-550c282a1e3c - urn:placeOrder - - - - - 172.23182849731984 - 18398 - IBM - - - - -``` -In Linux, we can save the above request in a request.xml file and use netcat to send the UDP request. - -```bash -nc -u localhost 9999 < request.xml -``` diff --git a/en/docs/integrate/examples/proxy_service_examples/exposing-proxy-via-inbound.md b/en/docs/integrate/examples/proxy_service_examples/exposing-proxy-via-inbound.md deleted file mode 100644 index 364215e7b9..0000000000 --- a/en/docs/integrate/examples/proxy_service_examples/exposing-proxy-via-inbound.md +++ /dev/null @@ -1,119 +0,0 @@ -# Exposing a Proxy Service via Inbound Endpoint -If a proxy service is to be exposed only via inbound endpoints, the `inbound.only` service parameter has to be set in the proxy configuration. - -## Synapse configuration -Following is a sample proxy service configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - -
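<!-- Stripped during extraction. A minimal sketch, assuming the proxy forwards to the
     stock quote backend; the endpoint URI is an assumption, while inbound.only is
     the service parameter described above (its value "true" survives below). -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="InboundProxy" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
    <!-- Expose this proxy only through inbound endpoints -->
    <parameter name="inbound.only">true</parameter>
</proxy>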
    - - - true - - ``` - -=== "Inbound Endpoint" - ```xml - - - .* - 9090 - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and [security policy]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request to the Micro Integrator. - -```xml -POST http://localhost:9090/services/InboundProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:9090 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - -``` - -You will get the following response: - -```xml -HTTP/1.1 200 OK -server: ballerina -content-encoding: gzip -content-type: application/xml -Date: Thu, 31 Oct 2019 05:18:32 GMT -Transfer-Encoding: chunked -Connection: Keep-Alive - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - -``` - -When the proxy service is directly invoked, you will not get the response payload. \ No newline at end of file diff --git a/en/docs/integrate/examples/proxy_service_examples/introduction-to-proxy-services.md b/en/docs/integrate/examples/proxy_service_examples/introduction-to-proxy-services.md deleted file mode 100644 index daf041f7f7..0000000000 --- a/en/docs/integrate/examples/proxy_service_examples/introduction-to-proxy-services.md +++ /dev/null @@ -1,128 +0,0 @@ -# Using a Simple Proxy Service -This example demonstrates how to use a simple proxy service to expose a back-end service. In this example, a request received by the proxy service is forwarded to the sample service hosted in the backend. - -## Synapse configuration -Following is a sample proxy service configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -An `inSequence` or `endpoint` or both of these would decide how the message would be handled after the proxy service receives the message. The -`outSequence` defines how the response is handled before it is sent back to the client. - -```xml - - - -
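<!-- Stripped during extraction. A minimal sketch, assuming the standard stock quote
     sample; the endpoint URI and the path to sample_proxy_1.wsdl are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="http https" startOnLoad="true">
    <target>
        <!-- The endpoint decides where the received message is forwarded -->
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
        <!-- The outSequence handles the response before it reaches the client -->
        <outSequence>
            <send/>
        </outSequence>
    </target>
    <publishWSDL uri="file:/path/to/sample_proxy_1.wsdl"/>
</proxy>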
    - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. - - !!! Tip - Download the wsdl file (`sample_proxy_1.wsdl`) from [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl). - The wsdl uri in the proxy service needs to be updated with the path to this `sample_proxy_1.wsdl` file. - -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -When the Micro Integrator starts, you could go to the following URL and view the WSDL generated for the proxy service defined in the configuration. - -```bash -http://localhost:8290/services/StockQuoteProxy?wsdl -``` - -This WSDL is based on the source WSDL supplied in the proxy service definition and is updated to reflect the proxy service EPR. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Set up the SOAPUI client. - -1. Download and Install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service. -2. Create a new SOAP project in the SoapUI using following WSDL file: - - ```bash - http://localhost:8290/services/StockQuoteProxy?wsdl - ``` - -Send requests to the proxy service: - -- Send the following payload to receive a response containing the last sales price for the stock. You can -use the `getQuote` operation. - - ```xml - - - IBM - - - ``` - -- Send the following payload to get simple quote response containing the last sales price for stock. You can -use the `getSimpleQuote` operation. - - ```xml - - IBM - - ``` - -- Send the following payload to get quote reports for the stock over a number of days (i.e. last 100 days of the year). You can use the `getFullQuote` operation. - - ```xml - - - IBM - - - ``` - -- Send the following payload as an order for stocks using a - one way request. You can use the `placeOrder` operation. - - ```xml - - - 3.141593E0 - 4 - IBM - - - ``` - -- Send the following paylaod to get a market activity report - for the day (i.e. quotes for multiple symbols). You can use the `getMarketActivity` operation. - - ```xml - - - IBM - ... - MSFT - - - ``` diff --git a/en/docs/integrate/examples/proxy_service_examples/publishing-a-custom-wsdl.md b/en/docs/integrate/examples/proxy_service_examples/publishing-a-custom-wsdl.md deleted file mode 100644 index 03af527929..0000000000 --- a/en/docs/integrate/examples/proxy_service_examples/publishing-a-custom-wsdl.md +++ /dev/null @@ -1,122 +0,0 @@ -# Publishing a Custom WSDL -When you create a proxy service, a default WSDL is automatically -generated. You can access this WSDL by suffixing the service URL -with ?wsdl. 
See the example given below, where the proxy service name is 'sample_service' and the host is localhost:

[http://localhost:8290/services/sample_service?wsdl](http://localhost:8290/services/sample_service?wsdl)

However, this default WSDL only shows the `mediate` operation. This can be a limitation because your proxy service may be exposing a back-end service that expects additional information such as the message format. Therefore, the proxy service should be able to publish a custom WSDL based on the back-end service's WSDL or a modified version of that WSDL. For example, if the back-end service expects a message that includes the name, department, and permission level, and you want the proxy service to inject the permission level as it processes the message, you could publish a WSDL that includes just the name and department without the permission level parameter.

## Synapse configuration
Following is a sample proxy service configuration that we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example.

```xml
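<!-- The original configuration was stripped during extraction. A minimal sketch,
     assuming the custom WSDL is published from a file; the endpoint URI and the
     path to sample_proxy_1.wsdl are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="http https" startOnLoad="true">
    <target>
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
        <outSequence>
            <send/>
        </outSequence>
    </target>
    <!-- Publish the back-end service's WSDL instead of the auto-generated one -->
    <publishWSDL uri="file:/path/to/sample_proxy_1.wsdl"/>
</proxy>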
    - - - - - - - -``` - -## Build and run - -The wsdl file `sample_proxy_1.wsdl` can be downloaded from [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl). -The wsdl URI needs to be updated with the path to the sample_proxy_1.wsdl file - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the SOAP client: - -1. Download and install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service. -2. Create a new SOAP project in the SoapUI using the following wsdl file: - - ```bash - http://localhost:8290/services/StockQuoteProxy?wsdl - ``` - -3. Use the `getQuote` operation. -4. Enter the following payload. This will return a response containing the last sales price for the stock. - - ```xml - - - - - - - - IBM - - - - - ``` - -You will receive the following response: - -```xml -HTTP/1.1 200 OK -server: ballerina -content-encoding: gzip -content-type: application/xml -Transfer-Encoding: chunked -Connection: Keep-Alive - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - -``` diff --git a/en/docs/integrate/examples/proxy_service_examples/securing-proxy-services.md b/en/docs/integrate/examples/proxy_service_examples/securing-proxy-services.md deleted file mode 100644 index 415f807562..0000000000 --- a/en/docs/integrate/examples/proxy_service_examples/securing-proxy-services.md +++ /dev/null @@ -1,145 +0,0 @@ -# Securing a Proxy Service -This sample demonstrates how you can use WS-Security signing and encryption with proxy services through a WS policy. - -In this example, the proxy service expects to receive a signed and encrypted message as specified by the security policy. To understand the format of the policy file, have a look at the Apache Rampart and Axis2 documentation. The `engageSec` element specifies that Apache Rampart should be engaged on this proxy service. Hence, if Rampart rejects any request message that does not conform to the specified policy, that message will never reach the `inSequence` for processing. Since the proxy service is forwarding the received request to the simple stock quote service that does not use WS-Security, you are instructing the Micro Integrator to remove the `wsse:Security` header from the outgoing message. 
- -## Synapse configuration -Following is a sample proxy service configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - -
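<!-- Stripped during extraction. A minimal sketch, assuming the security policy is
     attached from the local entry shown below and Rampart is engaged; the endpoint
     URI, WSDL path, and policy key are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Drop the WS-Security header before reaching the unsecured backend -->
            <header xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                    name="wsse:Security" action="remove"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
    <publishWSDL uri="file:/path/to/sample_proxy_1.wsdl"/>
    <policy key="sec_policy"/>
    <!-- Engage Apache Rampart on this proxy service -->
    <enableSec/>
</proxy>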
    - - - - - - - - - - - - ``` - -=== "Local Entry" - ```xml - - ``` - -## Build and run - -The wsdl file `sample_proxy_1.wsdl` can be downloaded from [sample_proxy_1.wsdl](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl). -The wsdl URI needs to be updated with the path to the `sample_proxy_1.wsdl` file. - -The security policy file `policy1.xml` can be downloaded from [policy1.xml](https://github.com/wso2-docs/WSO2_EI/blob/master/sec-policies/policy1.xml). -The security policy file URI needs to be updated with the path to the policy1.xml file. -This sample security policy file validates username token and admin role is allowed to invoke the service. - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and [security policy]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Be sure to [configure a user store]({{base_path}}/install-and-setup/setup/mi-setup/setup/user_stores/setting_up_a_userstore) for the Micro Integrator and add the required users and roles. - -Set up the SOAP client: - -1. Download and install [SoapUI](https://www.soapui.org/downloads/soapui.html) to run this SOAP service. -2. Create a new SOAP project in the SoapUI using following wsdl file: - - ```bash - https://localhost:8253/services/StockQuoteProxy?wsdl - ``` -3. Use the `getQuote` operation. -4. Set [Authorization](https://www.soapui.org/soap-and-wsdl/authenticating-soap-requests.html) in the SoapUI request. You will need this to call a secure service. 
- -Send a simple request to invoke the service: - -```xml -POST https://localhost:8253/services/StockQuoteProxy.StockQuoteProxyHttpSoap11Endpoint HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:getQuote" -Content-Length: 492 -Host: localhost:8253 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) -Authorization: Basic YWRtaW46YWRtaW4= - - - - - - - IBM - - - - -``` - -You will receive the following response: - -```xml -HTTP/1.1 200 OK -server: ballerina -content-encoding: gzip -content-type: application/xml -Content-Type: application/xml; charset=UTF-8 -Date: Thu, 31 Oct 2019 04:44:45 GMT -Transfer-Encoding: chunked -Connection: Keep-Alive - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - -``` - -By analyzing the debug log output or the TCPMon output, you will see that the request received by the proxy service is signed and encrypted. - -You can look up the WSDL of the proxy service by requesting the `http://localhost:8290/services/StockQuoteProxy?wsdl` URL. This confirms the security policy attachment to the supplied base WSDL. - -When sending the message to the backend service, you can verify that the security headers were removed, the response received does not use WS-Security, and that the response being forwarded back to the client is signed and encrypted as expected by the client. diff --git a/en/docs/integrate/examples/rabbitmq_examples/move-msgs-to-dlq-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/move-msgs-to-dlq-rabbitmq.md deleted file mode 100644 index 0f80bbf912..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/move-msgs-to-dlq-rabbitmq.md +++ /dev/null @@ -1,66 +0,0 @@ -# Publish unacked messages to Dead Letter Exchange - -This sample demonstrates how WSO2 Micro Integrator can ensure guaranteed delivery of messages by using the Dead Letter Exchange (DLX) of RabbitMQ. - -As shown below, a proxy service in the Micro Integrator consumes messages from the RabbitMQ broker and sends it to the endpoint. If the message delivery fails, the Micro Integrator will route the message to the dead letter exchange of RabbitMQ. - - - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - - - - - - - - - - - - - - - - - false - orders-exchange - false - orders - AMQPConnectionFactory - - -``` - -## Build and run - -1. Make sure you have a RabbitMQ broker instance running. -2. Create an exchange with the name `orders-exchange`. -3. Create another exchange `orders-error-exchange` with a queue bound to it (`orders-error`). -4. Create queue `orders` (bound by `orders-exchange` with routing key `orders` ) and configure a -dead letter exchange for it (`orders-error-exchange`). -5. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -6. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -7. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -8. Enable the RabbitMQ sender and receiver in the Micro-Integrator from the deployment.toml. 
Refer the - [configuring RabbitMQ documentation]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) for more information. -9. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -10. Make the `http://localhost:8280/orders` endpoint unavailable temporarily. -11. Publish a message to the orders queue. -12. You will see that the failed message has been moved to the dead-letter-exchange. diff --git a/en/docs/integrate/examples/rabbitmq_examples/point-to-point-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/point-to-point-rabbitmq.md deleted file mode 100644 index 4500e97207..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/point-to-point-rabbitmq.md +++ /dev/null @@ -1,74 +0,0 @@ -# A queue used to deliver a message to a consumer - -This example demonstrates how WSO2 Micro Integrator can be used to implement an asynchronous point-to-point messaging scenario using queues in a RabbitMQ broker instance. - -As shown below, a proxy service configured in the Micro Integrator sends messages to the RabbitMQ queue, which are then consumed by another proxy service in the Micro Integrator. - - - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "RabbitMQ Consumer" - ```xml - - - - - - - - - - - - - - - queue1 - AMQPConnectionFactory - - ``` -=== "RabbitMQ Producer" - ```xml - - - - - - - - - -
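<!-- Stripped during extraction. A minimal sketch of the HTTP-facing producer that
     publishes incoming messages to queue1 on a local broker; the rabbitmq endpoint
     URI and its parameter names are assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="RabbitMQProducerProxy" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Fire-and-forget publish to RabbitMQ -->
            <property name="OUT_ONLY" value="true"/>
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="rabbitmq:/AMQPProducer?rabbitmq.server.host.name=localhost&amp;rabbitmq.server.port=5672&amp;rabbitmq.queue.name=queue1&amp;rabbitmq.queue.routing.key=queue1"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>
```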
    - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "RabbitMQ Subscriber 1" - ```xml - - - - - - - - - - - - - - - topic1 - amq.topic - queue2 - AMQPConnectionFactory - - - ``` - -=== "RabbitMQ Subscriber 2" - ```xml - - - - - - - - - - - - - - - topic1 - amq.topic - queue3/parameter> - AMQPConnectionFactory - - ``` - -=== "RabbitMQ Publisher" - ```xml - - - - - - - - -
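<!-- Stripped during extraction. A minimal sketch of the publisher that sends the
     incoming message to the amq.topic exchange with routing key topic1, matching
     the build steps below; the rabbitmq endpoint URI parameter names are
     assumptions. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="TopicPublisher" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Fire-and-forget publish to the topic exchange -->
            <property name="OUT_ONLY" value="true"/>
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="rabbitmq:/AMQPPublisher?rabbitmq.server.host.name=localhost&amp;rabbitmq.server.port=5672&amp;rabbitmq.exchange.name=amq.topic&amp;rabbitmq.queue.routing.key=topic1"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>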
    - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Enable the RabbitMQ sender and receiver in the Micro-Integrator from the deployment.toml. Refer the - [configuring RabbitMQ documentation]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) for more information. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -6. Make sure you have a RabbitMQ broker instance running. -7. Create queue1 and queue2 and add bind them in the `amq.topic` exchange with the routing key `topic1`. -8. Publish the following payload to the topic using the publisher proxy (TopicPublisher). - - ```xml - - John Doe - 27 - - ``` diff --git a/en/docs/integrate/examples/rabbitmq_examples/request-response-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/request-response-rabbitmq.md deleted file mode 100644 index 2ee63b45fc..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/request-response-rabbitmq.md +++ /dev/null @@ -1,91 +0,0 @@ -# Synchronous messaging with request-reply pattern - -This sample demonstrates how you can implement the request-reply messaging scenario (dual-channel scenario) using the RabbitMQ broker and WSO2 Micro Integrator. - -As shown below, the `OrderRequest` proxy service in the Micro Integrator receives an HTTP -request, which it publishes to a RabbitMQ queue. This message is consumed and processed by the `OrderProcessing` proxy service in the Micro Integrator, and the response is sent back to the client over HTTP. - - - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "Order Request Proxy Service" - ```xml - - - - - - - - - -
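<!-- Stripped during extraction. A minimal sketch of the HTTP-facing proxy in the
     dual-channel pattern, assuming it publishes to the order-request queue (that
     name survives in the processing proxy below) and returns the reply to the HTTP
     client; the URI parameter names are assumptions, and the reply-to queue
     parameter is omitted because it could not be recovered. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="OrderRequestService" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <send>
                <endpoint>
                    <address uri="rabbitmq:/order-request?rabbitmq.server.host.name=localhost&amp;rabbitmq.server.port=5672&amp;rabbitmq.queue.routing.key=order-request"/>
                    <!-- A reply-to queue parameter would also be configured here -->
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <!-- Return the response consumed from the reply queue to the client -->
            <respond/>
        </outSequence>
    </target>
</proxy>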
    - - - - - - ``` - -=== "Order Processing Proxy Service" - ```xml - - - - - - - - - - - - - - - - - - - $1 - - - - - - - - - order-request - AMQPConnectionFactory - - ``` - -## Build and run - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Enable the RabbitMQ sender and receiver in the Micro-Integrator from the deployment.toml. Refer the - [configuring RabbitMQ documentation]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) for more information. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -6. Make sure you have a RabbitMQ broker instance running. -7. Send a message to the `Order Request Proxy Service` with the following payload. - - ```json - { "orderId": "1242", - "orderQty": 43, - "orderDate": "2020/07/22" - } - ``` diff --git a/en/docs/integrate/examples/rabbitmq_examples/requeue-msgs-with-errors-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/requeue-msgs-with-errors-rabbitmq.md deleted file mode 100644 index dc565bd2bb..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/requeue-msgs-with-errors-rabbitmq.md +++ /dev/null @@ -1,59 +0,0 @@ -# Requeue a message preserving the message order with a delay in case of error - -This sample demonstrates how WSO2 Micro Integrator can ensure guaranteed delivery of messages by requeueing messages when an error occurs during delivery. That is, the Micro Integrator can be configured to requeue messages to a RabbitMQ queue when the delivery fails. - -As shown in the following example, the Micro Integrator first consumes the request message from the RabbitMQ queue and sends it to the back-end HTTP endpoint. If the HTTP endpoint becomes unavailable, the message will be returned -to the `student-registration` queue in the RabbitMQ broker until the endpoint becomes available again. - - - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - - - - - - - - - - - - - - - - - false - 1 - 30000 - student-registration - AMQPConnectionFactory - -``` - -## Build and run - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. Enable the RabbitMQ sender and receiver in the Micro-Integrator from the deployment.toml. Refer the - [configuring RabbitMQ documentation]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) for more information. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -6. Make the `http://localhost:8280/students` endpoint unavailable temporarily. -7. Make sure you have a RabbitMQ broker instance running. -8. Publish a message to the `student-registration` queue. 
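For a quick test, you can publish a sample message with `rabbitmqadmin`, following the same CLI conventions used elsewhere in these examples. This assumes a default local broker with the queue bound to the default exchange; adjust the payload to whatever your backend expects.

```bash
# Publish a test message to the student-registration queue via the default exchange
rabbitmqadmin publish --vhost=/ --user=guest --password=guest \
  exchange=amq.default routing_key=student-registration \
  payload='<registration><student><name>John Doe</name></student></registration>'
```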
diff --git a/en/docs/integrate/examples/rabbitmq_examples/retry-delay-failed-msgs-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/retry-delay-failed-msgs-rabbitmq.md deleted file mode 100644 index 6813f6194c..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/retry-delay-failed-msgs-rabbitmq.md +++ /dev/null @@ -1,99 +0,0 @@ -# Control the number of retries and delay message in case of error - -This sample demonstrates how the WSO2 Micro Integrator can guarantee message delivery to an endpoint by controlling the number of delivery retries during errors. You can also configure a delay in message delivery from the RabbitMQ broker. - - - -1. The Micro Integrator first consumes a message from RabbitMQ and attempts to deliver it to the endpoint. -2. When there is an error in delivery, the `SET_ROLLBACK_ONLY` property in the Micro Integrator moves the message to the dead letter exchange (DLX) configured in RabbitMQ. -3. The message will then be re-queued by RabbitMQ subject to a specified **delay**. Note that you have to configure this delay in the RabbitMQ broker itself (using the `x-message-ttl` property). -4. If the message delivery to the endpoint continuous to fail, the Micro Integrator will **retry** for the number times specified by the `rabbitmq.message.max.dead.lettered.count` parameter in the proxy. -5. When the maximum retry count is exceeded, the message will be either discarded or moved to a different -queue in RabbitMQ (specified by the `rabbitmq.message.error.exchange.name` and `rabbitmq.message.error.queue.routing.key` parameters in the proxy. - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - - - - - - - - - - - - - - - - - false - 3 - enrollment-exchange - false - enrollment - false - AMQPConnectionFactory - - -``` - -## Build and run - -1. Make sure you have a RabbitMQ broker instance running. -2. Declare exchange to route enrollment - ```bash - rabbitmqadmin declare exchange --vhost=/ --user=guest --password=guest name=enrollment-exchange type=direct durable=true - ``` - -3. Declare a queue to store enrollment. At the same time define DLX, DLK to control the error scenario. - ```bash - rabbitmqadmin declare queue --vhost=/ --user=guest --password=guest name=enrollment durable=true arguments='{"x-dead-letter-exchange": "enrollment-error-exchange", "x-dead-letter-routing-key": "enrollment-error"}' - ``` - -4. Bind enrollment with enrollment-exchange. - ```bash - rabbitmqadmin declare binding --vhost=/ --user=guest --password=guest source=enrollment-exchange destination=enrollment routing_key=enrollment - ``` - -5. Declare exchange to route enrollment-error. - ```bash - rabbitmqadmin declare exchange --vhost=/ --user=guest --password=guest name=enrollment-error-exchange type=direct durable=true - ``` - -6. Declare queue to store enrollment-error. Define DLX, DLK and TTL for control retries and delay message. - ```bash - rabbitmqadmin declare queue --vhost=/ --user=guest --password=guest name=enrollment-error durable=true arguments='{"x-dead-letter-exchange": "enrollment-exchange", "x-dead-letter-routing-key": "enrollment", "x-message-ttl": 60000}' - ``` - -7. Bind enrollment-error with enrollment-error-exchange. - ```bash - rabbitmqadmin declare binding --vhost=/ --user=guest --password=guest source=enrollment-error-exchange destination=enrollment-error routing_key=enrollment-error - ``` - -8. 
[Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -9. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -10. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -11. Enable the RabbitMQ sender and receiver in the Micro-Integrator from the deployment.toml. Refer the - [configuring RabbitMQ documentation]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-rabbitmq) for more information. -12. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -13. Make the `http://localhost:8280/enrollment` endpoint unavailable temporarily. -14. Publish a message to the enrollment queue. -15. You will see that the failed message will be retried 3 times for delivery by the EnrollmentService proxy and then be discarded. diff --git a/en/docs/integrate/examples/rabbitmq_examples/store-forward-rabbitmq.md b/en/docs/integrate/examples/rabbitmq_examples/store-forward-rabbitmq.md deleted file mode 100644 index 9ce15e7306..0000000000 --- a/en/docs/integrate/examples/rabbitmq_examples/store-forward-rabbitmq.md +++ /dev/null @@ -1,110 +0,0 @@ -# Message store and message processor for guaranteed delivery - -This sample demonstrates how a store and forward messaging scenario can be implemented using the RabbitMQ -message broker and WSO2 Micro Integrator. Store and forward messaging is used for serving traffic to back-end services that can accept request messages only at a given rate. - -This messaging pattern ensures guaranteed message delivery. That is, because request messages are stored in a message store, messages never get lost. - -As shown below, when a client sends a message, the message store artifact in the Micro Integrator will route the messages to the RabbitMQ broker. The message processor artifact in the Micro Integrator will then process the message from the broker and send it to the back-end service. - - - -## Synapse configurations - -See the instructions on how to [build and run](#build-and-run) this example. - -=== "Sales Delivery - Message store" - ```xml - - - localhost - false - 5672 - guest - sales-delivery - guest - - ``` - -=== "Sales Delivery - Message Processor" - ```xml - - - 1000 - false - 4 - 1 - 1000 - -1 - Disabled - 1000 - sales-store - true - DeliveryEndpoint - - ``` - -=== "Sales Delivery - Proxy" - ```xml - - - - - - - - - - - - - - - - ``` - -=== "Sales Delivery - Endpoint" - ```xml - - -
    - - 30000 - fault - - - -1 - 0 - 1.0 - 0 - - - -1 - 0 - -
    -
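<!-- The endpoint markup above was damaged during extraction: the element tags
     were stripped, leaving only the values (30000, fault, -1, 0, 1.0, 0, -1, 0).
     The sketch below maps those values onto the standard WSO2 address endpoint
     shape; the back-end URI is a placeholder, not a value confirmed by the
     original page. -->
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="DeliveryEndpoint">
    <address uri="http://localhost:8280/sales/delivery">
        <timeout>
            <duration>30000</duration>
            <responseAction>fault</responseAction>
        </timeout>
        <suspendOnFailure>
            <errorCodes>-1</errorCodes>
            <initialDuration>0</initialDuration>
            <progressionFactor>1.0</progressionFactor>
            <maximumDuration>0</maximumDuration>
        </suspendOnFailure>
        <markForSuspension>
            <errorCodes>-1</errorCodes>
            <retriesBeforeSuspension>0</retriesBeforeSuspension>
        </markForSuspension>
    </address>
</endpoint>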
    - ``` - -## Build and run - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the artifacts (proxy service, message-processor, message-store, endpoint) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -5. Make sure you have a RabbitMQ broker instance running. -6. Send a message to the `sales-delivery-proxy` with the following payload. - ```xml - - 342 - HealthCorp - 20/12/2020 - Colombo - - ``` diff --git a/en/docs/integrate/examples/registry_examples/local-registry-entries.md b/en/docs/integrate/examples/registry_examples/local-registry-entries.md deleted file mode 100644 index 670251cfd3..0000000000 --- a/en/docs/integrate/examples/registry_examples/local-registry-entries.md +++ /dev/null @@ -1,77 +0,0 @@ -# Sequences and Endpoints as Local Registry Entries -This sample demonstrates how sequences and endpoints can be fetched from a local registry. - -## Synapse configurations - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - ``` - -=== "Sequence" - ```xml - - - - - - - - - - - - ``` - -=== "Endpoint" - ```xml - -
    - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create sequence `stockquote` and endpoint `simple` as [local entries]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries) with the configurations given above. -4. Also, create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) `MainProxy` with the configuration given above. -5. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send a message to invoke the service and analyze the mediation log on the Micro Integrator's start-up console. - -You will see that the sequence and the endpoint are fetched from the local entry and that the property named `direction` (which was set by the proxy service) is logged by the sequence. - -`INFO {org.apache.synapse.mediators.builtin.LogMediator} - Text = Sending quote request, direction = incoming` diff --git a/en/docs/integrate/examples/rest_api_examples/configuring-non-http-endpoints.md b/en/docs/integrate/examples/rest_api_examples/configuring-non-http-endpoints.md deleted file mode 100644 index 90b9847964..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/configuring-non-http-endpoints.md +++ /dev/null @@ -1,59 +0,0 @@ -# Exposing Non-HTTP Services as RESTful APIs -This example demonstrates how the WSO2 Micro Integrator forwards messages to non-HTTP endpoints. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - -
    -
    -
    -
    -
    -
    -
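<!-- The API definition above lost its markup during extraction. Based on the
     description below (an /orderdelayAPI context, the REST_URL_POSTFIX property
     set to "remove", and a JMS endpoint), a plausible reconstruction follows.
     The JMS connection details are assumptions for a local ActiveMQ broker, not
     values confirmed by the original page. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="orderdelayAPI" context="/orderdelayAPI">
    <resource methods="POST" url-mapping="/">
        <inSequence>
            <!-- drop anything appended after the context (such as a trailing
                 slash) so that it is not added to the JMS endpoint -->
            <property name="REST_URL_POSTFIX" action="remove" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;transport.jms.DestinationType=queue"/>
                </endpoint>
            </send>
        </inSequence>
    </resource>
</api>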
    -``` - -When using a non-HTTP endpoint, such as a JMS endpoint, in the API definition, you must remove the `REST_URL_POSTFIX` property to avoid any characters specified after the context (such as a trailing slash) in the request from being appended to the JMS endpoint. - -Notice that we have specified the `REST_URL_POSTFIX` property with the value set to "remove". When invoking this API, even if the request contains a trailing slash after the context (e.g., `POST http://127.0.0.1:8290/orderdelayAPI/` instead of `POST http://127.0.0.1:8290/orderdelayAPI`, the endpoint will be called correctly. - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure the ActiveMQ broker]({{base_path}}/install-and-setup/setup/mi-setup/brokers/configure-with-activemq) with your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the REST API with a POST message. \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/enabling-rest-to-soap.md b/en/docs/integrate/examples/rest_api_examples/enabling-rest-to-soap.md deleted file mode 100644 index 64bbdfeafe..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/enabling-rest-to-soap.md +++ /dev/null @@ -1,127 +0,0 @@ -# Exposing a SOAP Endpoint as a RESTful API - -This example demonstrates how you can expose a SOAP service over REST using an API in WSO2 Micro Integrator. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - - $1 - - - - - - - -
    - - -
    - - - - - - - - - - - -
    - - -
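<!-- This configuration block lost its markup during extraction. The following
     is a reconstruction based on the explanation below (context /stockquote, a
     GET resource with uri-template /view/{symbol} that builds a SOAP getQuote
     payload, and a POST resource mapped to /order/*). Treat it as a sketch,
     not the verbatim original. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET">
        <inSequence>
            <payloadFactory media-type="xml">
                <format>
                    <m0:getQuote xmlns:m0="http://services.samples">
                        <m0:request>
                            <m0:symbol>$1</m0:symbol>
                        </m0:request>
                    </m0:getQuote>
                </format>
                <args>
                    <arg evaluator="xml" expression="get-property('uri.var.symbol')"/>
                </args>
            </payloadFactory>
            <header name="Action" scope="default" value="urn:getQuote"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
    <resource url-mapping="/order/*" methods="POST">
        <inSequence>
            <!-- fire and forget: return 202 Accepted to the client -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
    </resource>
</api>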
    - - - - - -``` - -In this API configuration we have defined two resources. One is for the HTTP method GET and the other one is for POST. In the first resource, we have defined the uri-template as `/view/{symbol}` so that request will be dispatched to this resource when you invoke the API using the following URI: `http://127.0.0.1:8290/stockquote/view/IBM` - -The context of this REST API is `stockquote`. The SOAP payload required for the SOAP back-end service is constructed using the payload factory mediator defined in the `inSequence`. The value for the `` element is extracted using the following expression: - -`get-property('uri.var.symbol')` - -Here, ‘symbol’ refers to the variable we defined in the uri-template `(/view/{symbol})`. Therefore, for the above invocation, the `'uri.var.symbol'` property will resolve to the value `‘IBM’`. - -After constructing the SOAP payload, the request will be sent to the SOAP back-end service from the `` mediator, which has an address endpoint defined inline with the `format="soap11"` attribute in the address element. The response received from the back-end soap service will be sent to the client in plain old XML (POX) format. - -In the second resource, we have defined the URL mapping as "/order/\*". Since this has POST as the HTTP method, the client has to send a payload to invoke this. - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoking the first resource (**GET** request): - -- Execute the following command (without query parameters): - ```bash - curl -v http://127.0.0.1:8290/stockquote/view/IBM - ``` - -- Execute the following command with query parameters: - ```bash - curl -v -X GET "http://localhost:8290/stockquote/view/IBM?param1=value1¶m2=value2" - ``` - -Sending a **POST request**: - -1. Save the following sample place order request as `placeorder.xml` in your local file system and execute the command. This payload is used to invoke a SOAP service. - - ```xml - - - 50 - 10 - IBM - - - ``` - -2. Following is a sample cURL command to invoke the second resource: - - ```bash - curl -v -d @placeorder.xml -H "Content-type: application/xml" http://127.0.0.1:8290/stockquote/order/ - ``` - -This SOAP service invocation is an `OUT_ONLY` invocation, so the Micro Integrator is not expecting any response back from the SOAP service. Since we have set the `FORCE_SC_ACCEPTED` property value to true, the Micro Integrator returns a 202 response back to the client as shown below. 
- -```bash -< HTTP/1.1 202 Accepted -< Date: Wed, 30 Oct 2019 05:49:24 GMT -< Transfer-Encoding: chunked -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/handling-non-matching-resources.md b/en/docs/integrate/examples/rest_api_examples/handling-non-matching-resources.md deleted file mode 100644 index 34498faf64..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/handling-non-matching-resources.md +++ /dev/null @@ -1,91 +0,0 @@ -# Handling Non-Matching Resources - -This example demonstrates how you can define a sequence to be invoked if the Micro Integrator is unable to find a matching resource definition for a specific API invocation. This sequence generates a response indicating an error when no matching resource definition is found. - -## Synapse configurations - -Following is a sample REST API configuration and Sequence configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "REST Api" - ```xml - - - - - -
    - - - - - - - - - ``` - -=== "Sequence" - ```xml - - - - - 404 - Status report - Not Found - The requested resource (/$1) is not available. - - - - - - - - - -
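<!-- The sequence markup above was stripped to bare values (404, Status report,
     Not Found, and the description text). The standard shape of this
     non-matching-resource handler, reconstructed from those values, is shown
     below. The sequence name _resource_mismatch_handler_ is the conventional
     one for this purpose and is an assumption here. -->
<sequence xmlns="http://ws.apache.org/ns/synapse" name="_resource_mismatch_handler_">
    <payloadFactory media-type="xml">
        <format>
            <tp:fault xmlns:tp="http://test.com">
                <tp:code>404</tp:code>
                <tp:type>Status report</tp:type>
                <tp:message>Not Found</tp:message>
                <tp:description>The requested resource (/$1) is not available.</tp:description>
            </tp:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$axis2:REST_URL_POSTFIX"/>
        </args>
    </payloadFactory>
    <!-- mark the message as a response and set the 404 status code -->
    <property name="RESPONSE" value="true" scope="default"/>
    <property name="NO_ENTITY_BODY" action="remove" scope="axis2"/>
    <property name="HTTP_SC" value="404" scope="axis2"/>
    <header name="To" action="remove"/>
    <send/>
    <drop/>
</sequence>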
    - - - ``` -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) and [mediation sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send an invalid request to the back end as follows: - -```bash -curl -X GET http://localhost:8290/jaxrs/customers-wrong/123 -``` - -You will get the following response: - -```bash - -404 -Status report -Not Found -The requested resource (//customers-wrong/123) is not available. - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/introduction-rest-api.md b/en/docs/integrate/examples/rest_api_examples/introduction-rest-api.md deleted file mode 100644 index 5f0629f5d3..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/introduction-rest-api.md +++ /dev/null @@ -1,137 +0,0 @@ -# Using a Simple REST API - -You can configure REST endpoints in the Micro Integrator by directly specifying HTTP verbs, URL patterns, URI templates, HTTP media types, and other related headers. You can define REST APIs and the associated resources by combining REST APIs with mediation features provided by the underlying messaging framework. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -This is a REST API with two API resources. The GET calls are handled by the first resource. These REST calls will get converted into SOAP calls and sent to the back-end service. The response will be sent to the client in POX format. - -```xml - - - - - - - - $1 - - - - - - - -
    - - -
    - - - - - - - - - - - -
    - - -
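<!-- The configuration above was garbled during extraction. Its first resource
     (the GET handler that converts the REST call into a SOAP request) would
     have roughly the shape below; the second resource handles POST requests to
     /order/* in the same way as the SOAP-over-REST example earlier in this set.
     This is a sketch, not the verbatim original. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET">
        <inSequence>
            <payloadFactory media-type="xml">
                <format>
                    <m0:getQuote xmlns:m0="http://services.samples">
                        <m0:request>
                            <m0:symbol>$1</m0:symbol>
                        </m0:request>
                    </m0:getQuote>
                </format>
                <args>
                    <arg evaluator="xml" expression="get-property('uri.var.symbol')"/>
                </args>
            </payloadFactory>
            <header name="Action" scope="default" value="urn:getQuote"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
</api>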
    - - - - - -``` -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the sample API: - -- Sending a GET request. - - Open a terminal and execute the following command. This sends a simple GET request to the Micro Integrator. - - ```bash - curl http://127.0.0.1:8290/stockquote/view/IBM - ``` - - The Micro Integrator returns the following response to the client. - - ```xml - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - - ``` - -- Sending a POST request. - 1. Save the following sample request as `placeorder.xml` in your local file system. - - ```bash - - - 50 - 10 - IBM - - - ``` - - 2. Open a terminal, navigate to the location of your `placeorder.xml` file, and execute the following command. This posts a simple XML request to the Micro Integrator. - - ```bash - curl -v -d @placeorder.xml -H "Content-type: application/xml" http://127.0.0.1:8290/stockquote/order/ - ``` - - The Micro Integrator returns the 202 response back to the client. - - ```xml - < HTTP/1.1 202 Accepted - < Date: Wed, 30 Oct 2019 05:33:49 GMT - < Transfer-Encoding: chunked - ``` diff --git a/en/docs/integrate/examples/rest_api_examples/publishing-a-swagger-api.md b/en/docs/integrate/examples/rest_api_examples/publishing-a-swagger-api.md deleted file mode 100644 index b1808846ad..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/publishing-a-swagger-api.md +++ /dev/null @@ -1,68 +0,0 @@ -# Publishing a Custom Swagger Document - -When you create a REST API, by default a Swagger 3.0 (OpenApi) definition is generated automatically. You can access this Swagger document by suffixing the API URL -with `?swagger.json` or `?swagger.yaml`. See [Using Swagger Documents]({{base_path}}/integrate/develop/advanced-development/using-swagger-for-apis) for more information. - -This example demonstrates how a custom Swagger definition is published for a REST API. - -## Synapse configuration -Following is a sample REST API configuration with a custom Swagger definition. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Note - The custom Swagger file that you use for generating the API is saved to the Micro Integrator's registry. The `publishSwagger` element in the REST API configuration specifies the registry path. 
In this example, we are storing the Swagger definition in the governance registry as shown below. - -```xml - - - - - - - - {"Response" : "Sample Response"} - - - - - - - - - - - - - - - {"Response" : "Sample Response"} - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with the modules listed below: - - Config project - - Registry project - - Composite Application project. -3. To create the REST API with the above configurations: - - Download the Swagger file: [simple_petstore.yaml](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-rest-apis/simple_petstore.yaml). - - Follow the instructions on [creating a REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api). - -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Copy the following URLs to your browser to see the Swagger documents of your API: - -- `http://localhost:8290/SwaggerPetstore?swagger.json` -- `http://localhost:8290/SwaggerPetstore?swagger.yaml` \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/securing-rest-apis.md b/en/docs/integrate/examples/rest_api_examples/securing-rest-apis.md deleted file mode 100644 index b258d7e4fa..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/securing-rest-apis.md +++ /dev/null @@ -1,129 +0,0 @@ -# Securing REST APIs -In most of the real-world use cases of REST, when a consumer attempts to access a privileged resource, access will be denied unless the consumer's credentials are provided in an Authorization header. By default, the Micro Integrator validates the credentials of the consumer (that is provided in the Authorization header) against the credentials of users that are registered in the [user store connected to the server]({{base_path}}/install-and-setup/setup/mi-setup/setup/user_stores/setting_up_a_userstore). - -!!! Info - The Micro Integrator uses a Basic Auth handler for this purpose. If required, you can use a custom basic auth handler or other security implementations. Find out more about [applying security to REST APIs]({{base_path}}/integrate/develop/advanced-development/applying-security-to-an-api). - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -!!! Note - The basic auth handler is engaged in the API as follows: - ```xml - - - - ``` - -See the REST API given below for an example of how the default basic auth handler is used. - -```xml - - - - - - - - $1 - - - - - - - -
    - - -
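<!-- Reconstruction sketch: the API body above mirrors the /stockquote examples
     earlier in this set (a GET resource that builds a SOAP getQuote request).
     The part specific to this page is the handlers block that engages the
     default basic auth handler. The handler class name below is the Micro
     Integrator security handler as documented and should be verified against
     your product version. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET">
        <inSequence>
            <!-- payloadFactory plus send to the SOAP back end, as in the earlier examples -->
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
    <handlers>
        <handler class="org.wso2.micro.integrator.security.handler.RESTBasicAuthHandler"/>
    </handlers>
</api>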
    - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -[Configure an external user store]({{base_path}}/install-and-setup/setup/mi-setup/user_stores/setting_up_a_userstore). - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Test the API: - -1. First, invoke the service using the following service URL without providing any user credentials: `http://127.0.0.1:8290/stockquote/view/IBM` - - !!! Info - You can invoke the service using Postman or Curl. - - ```bash - curl -v http://127.0.0.1:8290/stockquote/view/IBM - ``` - - Note that you will receive the following error because the username and password are not passed and the service cannot be authenticated: `401 Unauthorized` - -2. Now, invoke the service again by providing the credentials of a user that is registered in the user store that is hosted. - - ```bash - curl -v http://127.0.0.1:8290/stockquote/view/IBM -H "Authorization: Basic YWRtaW46YWRtaW4=" - ``` - !!! Info - Note that the credentials (`YWRtaW46YWRtaW4=`) given in the authorization header (`Authorization: Basic YWRtaW46YWRtaW4=`) are the Base64-encoded username and password in the following format: `username:password`. - - The request is passed to the back-end service and you will receive a response similar to what is shown below: - - ```xml - - - - - -2.6989539095024164 - 12.851852793420885 - -166.81703170012037 - 170.03627716039932 - Mon Jul 30 15:10:56 IST 2018 - 178.02122263133768 - -7306984.135450081 - IBM Company - -165.86249647643422 - 23.443106773044992 - 1.5959734616866617 - -169.11019978052138 - IBM - 9897 - - - - - ``` diff --git a/en/docs/integrate/examples/rest_api_examples/setting-https-status-codes.md b/en/docs/integrate/examples/rest_api_examples/setting-https-status-codes.md deleted file mode 100644 index e435eea3b9..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/setting-https-status-codes.md +++ /dev/null @@ -1,103 +0,0 @@ -# Handling HTTP Status Codes -A REST service typically sends HTTP status codes with its response. When you configure an API that send messages to a SOAP back-end service, you can set the status code of the HTTP response within the configuration. To achieve this, set the status code parameter within the **Out** sequence of the API definition. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml -` - - - - - - - $1 - - - - - - - -
    - - -
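<!-- Reconstruction sketch: what distinguishes this example from the other
     /stockquote APIs is the HTTP_SC property set in the outSequence, which
     rewrites the status code of the response (201 in the sample output below).
     The rest of the resource follows the earlier examples. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET">
        <inSequence>
            <!-- payloadFactory plus send to the SOAP back end, as in the earlier examples -->
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <!-- return 201 Created to the client instead of the back end's 200 -->
            <property name="HTTP_SC" value="201" scope="axis2"/>
            <send/>
        </outSequence>
    </resource>
</api>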
    - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following request to the Micro Integrator: - -```bash -curl -v http://127.0.0.1:8290/stockquote/view/IBM -``` - -The response message will contain the following response code (201) and the requested stock quote information. - -```bash -< HTTP/1.1 201 Created -< server: ballerina -< Access-Control-Allow-Methods: GET -< content-type: text/plain -< Access-Control-Allow-Headers: -< Date: Tue, 29 Oct 2019 15:41:05 GMT -< Transfer-Encoding: chunked -``` - -The requested stock quote information: - -```xml - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/setting-query-params-outgoing-messages.md b/en/docs/integrate/examples/rest_api_examples/setting-query-params-outgoing-messages.md deleted file mode 100644 index fb5819a24d..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/setting-query-params-outgoing-messages.md +++ /dev/null @@ -1,133 +0,0 @@ -# Setting Query Parameters on Outgoing Messages - -REST clients use query parameters to provide inputs for the relevant operation. These query parameters may be required to carry out the back-end operations either in a REST service or a proxy service. - -Shown below is an example request that uses query parameters. - -```bash -curl -v -X GET "http://localhost:8290/stockquote/view/IBM?param1=value1¶m2=value2" -``` - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -There are two query parameters (customer name and ID) that must be set in the outgoing message from the Micro Integrator. We can configure the API to set those parameters as shown below. The query parameter values can be accessed through the `get-property` function by specifying the parameter number as highlighted in the request (given above). - -```xml - - - - - - - - $1 - $2 - $3 - - - - - - - - - - - - -
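<!-- Reconstruction sketch: the three payloadFactory arguments ($1, $2, $3)
     visible above correspond to the path parameter and the two query
     parameters. The element names inside the SOAP request (symbol,
     customerName, customerId) follow the page's description and are
     assumptions. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET">
        <inSequence>
            <payloadFactory media-type="xml">
                <format>
                    <m0:getQuote xmlns:m0="http://services.samples">
                        <m0:request>
                            <m0:symbol>$1</m0:symbol>
                            <m0:customerName>$2</m0:customerName>
                            <m0:customerId>$3</m0:customerId>
                        </m0:request>
                    </m0:getQuote>
                </format>
                <args>
                    <arg evaluator="xml" expression="get-property('uri.var.symbol')"/>
                    <arg evaluator="xml" expression="get-property('query.param.param1')"/>
                    <arg evaluator="xml" expression="get-property('query.param.param2')"/>
                </args>
            </payloadFactory>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap11"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
</api>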
    - - - - - - - - -``` - -## Reading a query or path parameter - -You can define a REST API and access the query parameters or path parameters by defining them in expressions. The following is a sample code that shows how the resource is defined. - -```xml - -``` - -**Reading a query parameter** - -The following sample indicates how the expressions can be defined using `get-property('query.param.xxx')` to read a query parameter. - -```xml - -``` - -Alternately, you can use the following. - -```xml - -``` - -**Reading a path parameter** - -The following sample indicates how the expressions can be defined using `get-property('uri.var.yyy')` to read a path parameter. - -```xml - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the sample API by executing the following command: - -```bash -curl -v -X GET "http://localhost:8290/stockquote/view/IBM?param1=value1¶m2=value2" -``` - -You will receive the following response: - -```xml - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/special-cases.md b/en/docs/integrate/examples/rest_api_examples/special-cases.md deleted file mode 100644 index a219ec913b..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/special-cases.md +++ /dev/null @@ -1,8 +0,0 @@ -## GET request with a Message Body -Normally, a GET request does not contain a body, and the Micro Integrator will not consume the payload even if there is one. The payload will not go through the mediation or to the backend. - -## Using POST with an Empty Body -Typically, POST request is used to send a message that has data enclosed as a payload. However, you can also use POST without a payload. WSO2 Micro Integrator considers such messages as normal messages and forwards them to the endpoint without any additional configurations. - -## Using POST with Query Parameters -Sending a POST message with query parameters is an unusual scenario, but the Micro Integrator supports it with no additional configuration. The Micro Integrator forwards the message like any other POST message and includes the query parameters. 
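None of these cases need extra configuration, so the quickest way to confirm the behavior is to invoke one of the sample APIs from the earlier examples with curl. The URLs and payload below are illustrative and assume the `/stockquote` API from the previous pages is deployed:

```bash
# GET with a message body: the body is accepted but ignored by the mediation flow
curl -v -X GET -H "Content-Type: application/xml" -d "<ping/>" http://localhost:8290/stockquote/view/IBM

# POST with an empty body: forwarded to the endpoint as a normal message
curl -v -X POST -H "Content-Type: application/xml" -d "" http://localhost:8290/stockquote/order/

# POST with query parameters: forwarded like any other POST, parameters included
curl -v -X POST -H "Content-Type: application/xml" -d @placeorder.xml "http://localhost:8290/stockquote/order/?priority=high"
```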
\ No newline at end of file diff --git a/en/docs/integrate/examples/rest_api_examples/transforming-content-type.md b/en/docs/integrate/examples/rest_api_examples/transforming-content-type.md deleted file mode 100644 index d93b9fe07e..0000000000 --- a/en/docs/integrate/examples/rest_api_examples/transforming-content-type.md +++ /dev/null @@ -1,210 +0,0 @@ -# Transforming Content Types -This section describes how you can transform the content type of a message using an API. In this scenario, the API exposes a REST back-end service that accepts and returns XML and JSON messages for HTTP methods as follows: - -- GET - Response is in JSON format. -- POST - Accepts JSON request and returns response in XML format. -- DELETE - Empty request body should is required. Returns response in XML format. - -## Synapse configuration - -Following is a sample REST API configuration that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - -
    - - - - - - - - - - - - - -
    - - - - - - - - - - - - - -
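<!-- Reconstruction sketch of the healthcare API described on this page. The
     key step is the messageType property in the outSequence, which makes the
     Micro Integrator serialize the back end's JSON response as XML before it
     is returned to the client. The back-end URL is a placeholder for the
     Hospital-Service used in this example, and only the POST resource is
     sketched; the GET and DELETE resources follow the same pattern with their
     own uri-templates. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="HealthcareAPI" context="/healthcare">
    <resource methods="POST" url-mapping="/appointment/reserve">
        <inSequence>
            <send>
                <endpoint>
                    <address uri="http://localhost:9090/grandoak/appointments/reserve"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <!-- convert the JSON response from the back end to XML -->
            <property name="messageType" value="application/xml" scope="axis2"/>
            <send/>
        </outSequence>
    </resource>
</api>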
    - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the rest API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [Hospital-Service-2.0.0-JDK11.jar](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/Hospital-Service-JDK11-2.0.0.jar). -2. Open a terminal, navigate to the location of the downloaded service, and run it using the following command: - - ```bash - java -jar Hospital-Service-2.0.0-JDK11.jar - ``` - -Sending an **HTTP POST request**: - -1. Create the `request.json` file as follows: - ```json - { - "patient": { - "name": "John Doe", - "dob": "1940-03-19", - "ssn": "234-23-525", - "address": "California", - "phone": "8770586755", - "email": "johndoe@gmail.com" - }, - "doctor": "thomas collins", - "hospital": "grand oak community hospital", - "appointment_date": "2025-04-02" - } - ``` - -2. Following is the cURL command to send an HTTP POST request to the API: - - !!! Tip - The context of the API is ‘/healthcare’. For every HTTP method, a url-mapping or uri-template is defined, and the URL to call the methods differ with the defined mapping or template. - - ```bash - curl -v -H "Content-Type: application/json" -X POST -d @request.json http://localhost:8290/healthcare/appointment/reserve - ``` - - The response from backend to the Micro Integrator will be: - - ```json - { - "appointmentNumber": 1, - "doctor": { - "name": "thomas collins", - "hospital": "grand oak community hospital", - "category": "surgery", - "availability": "9.00 a.m - 11.00 a.m", - "fee": 7000 - }, - "patient": { - "name": "John Doe", - "dob": "1940-03-19", - "ssn": "234-23-525", - "address": "California", - "phone": "8770586755", - "email": "johndoe@gmail.com" - }, - "fee": 7000, - "confirmed": false, - "appointmentDate": "2025-04-02" - } - ``` - - The Micro Integrator transform the response to XML and send it back to client as: - - ```xml - - 1 - - thomas collins - grand oak community hospital - surgery - 9.00 a.m - 11.00 a.m - 7000.0 - - - John Doe - 1940-03-19 - 234-23-525 -
    California
    - 8770586755 - johndoe@gmail.com -
    - 7000.0 - false - 2025-04-02 -
    - ``` - -Sending an **HTTP GET request**: - -1. Following is the CURL command to send a **GET request** to the API: - - ```bash - curl -v -X GET http://localhost:8290/healthcare/appointments/1 - ``` - -2. The response for the request will be: - - ```json - { - "appointmentNumber": 1, - "doctor": { - "name": "thomas collins", - "hospital": "grand oak community hospital", - "category": "surgery", - "availability": "9.00 a.m - 11.00 a.m", - "fee": 7000 - }, - "patient": { - "name": "John Doe", - "dob": "1940-03-19", - "ssn": "234-23-525", - "address": "California", - "phone": "8770586755", - "email": "johndoe@gmail.com" - }, - "fee": 7000, - "confirmed": false, - "appointmentDate": "2025-04-02" - } - ``` - -Sending an **HTTP DELETE request**: - -1. Following is the cURL command for sending an HTTP DELETE request: - - ```bash - curl -v -X DELETE http://localhost:8290/healthcare/appointments/1 - ``` - - This request will be sent to the back end, and the order with the specified ID will be deleted. The response to the Micro Integrator from backend will be as follows: - - ```json - {"status":"Appointment is successfully removed"} - ``` - -2. The Micro Integrator transform the response to XML and sends it back to the client as follows: - - ```xml - Appointment is successfully removed - ``` diff --git a/en/docs/integrate/examples/routing_examples/routing_based_on_headers.md b/en/docs/integrate/examples/routing_examples/routing_based_on_headers.md deleted file mode 100644 index cd91ebc051..0000000000 --- a/en/docs/integrate/examples/routing_examples/routing_based_on_headers.md +++ /dev/null @@ -1,203 +0,0 @@ -# Routing Based on Message Headers - -This example scenario uses an inventory of stocks as the back-end service. A proxy service is configured in the Micro Integrator to use separate mediation sequences for processing request messages with different **message headers**. - -When a stock quote request is received, the Micro Integrator will read the **request header** and then route the message to the relevant mediation sequence for processing. The relevant sequence will forward the message to the backend, receive the response, process it, and return it to the client. - -## Synapse configuration - -Listed below are the synapse configurations for implementing this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ``` - -=== "Sequence 1" - ```xml - - - - - - - ``` - -=== "Sequence 2" - ```xml - - - - - - - ``` - -=== "Sequence 3" - ```xml - - - - - - ``` - -=== "Send Seq" - ```xml - - -
    - - -
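<!-- Reconstruction sketch: the send sequence referenced by the routing
     sequences is typically just a send mediator pointing at the shared stock
     quote back end. The sequence and endpoint names below are assumptions. -->
<sequence xmlns="http://ws.apache.org/ns/synapse" name="send_seq">
    <send>
        <endpoint name="simple">
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
    </send>
</sequence>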
    - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the proxy service: - -- Send a request with the 'application/json' header and see that a JSON response is received. - - === "Request (application/json)" - ```xml - HTTP method: POST - Request URL: http://localhost:8290/services/HeaderBasedRoutingProxy - Content-Type: text/xml;charset=UTF-8 - CustomHeader: application/json - Message Body: - - - - - - IBM - - - - - ``` - - === "Response" - ```json - {"Envelope": - {"Body": - {"getQuoteResponse": - {"change":-2.86843917118114, - "earnings":-8.540305401672558, - "high":-176.67958828498735, - "last":177.66987465262923, - "low":-176.30898912339075, - "marketCap":56495579.98178506, - "name":"IBM Company", - "open":185.62740369461244, - "peRatio":24.341353665128693, - "percentageChange":-1.4930577008849097, - "prevClose":192.11844053187397, - "symbol":"IBM","volume":7791} - } - } - } - ``` - -- Send a request with the 'text/xml' header and see that an XML response is received. - - === "Request (text/xml)" - ```xml - HTTP method: POST - Request URL: http://localhost:8290/services/HeaderBasedRoutingProxy - Content-Type: text/xml;charset=UTF-8 - CustomHeader: text/xml - Message Body: - - - - - - IBM - - - - - ``` - - === "Response" - ```xml - - - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - - - - ``` diff --git a/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md b/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md deleted file mode 100644 index 063837d359..0000000000 --- a/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md +++ /dev/null @@ -1,191 +0,0 @@ -# Routing Based on Message Payloads - -This example scenario uses a back-end service with two stock quote inventories (IBM and MSFT). A proxy service is configured in the Micro Integrator to use separate mediation sequences for processing request messages with different **payloads**. - -When a stock quote request is received, the Micro Integrator will read the **message payload** (content) and then route the message to the relevant mediation sequence for processing. 
The sequence will forward the message to the relevant stock quote inventory in the backend, receive the response, process it, and return it to the client. - -## Synapse configuration - -Listed below are the synapse configurations (proxy service) for implementing this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - ``` - -=== "Sequence 1" - ```xml - - - - - - - ``` - -=== "Sequence 2" - ```xml - - - - - - - ``` - -=== "Send Seq" - ```xml - - -
    - - -
    - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the proxy service: - -- Send a request to get the IBM stock quote and see that a JSON response is received with the IBM stock quote. - - === "Request" - ```xml - HTTP method: POST - Request URL: http://localhost:8290/services/ContentBasedRoutingProxy - Content-Type: text/xml;charset=UTF-8 - Message Body: - - - - - - IBM - - - - - ``` - - === "Response" - ```xml - { - "Envelope": { - "Body": { - "getQuoteResponse": { - "change": -2.86843917118114, - "earnings": -8.540305401672558, - "high": -176.67958828498735, - "last": 177.66987465262923, - "low": -176.30898912339075, - "marketCap": 56495579.98178506, - "name": "IBM Company", - "open": 185.62740369461244, - "peRatio": 24.341353665128693, - "percentageChange": -1.4930577008849097, - "prevClose": 192.11844053187397, - "symbol": "IBM", - "volume": 7791 - } - } - } - } - ``` - -- Send a request to get the MSFT stock quote and see that an XML response is received with the MSFT stock quote. - - === "Request" - ```xml - HTTP method: POST - Request URL: http://localhost:8290/services/ContentBasedRoutingProxy - Content-Type: text/xml;charset=UTF-8 - Message Body: - - - - - - MSFT - - - - - ``` - - === "Response" - ```xml - - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - MSFT Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - MSFT - 7791 - - - - - - ``` diff --git a/en/docs/integrate/examples/routing_examples/splitting_aggregating_messages.md b/en/docs/integrate/examples/routing_examples/splitting_aggregating_messages.md deleted file mode 100644 index 74daf56490..0000000000 --- a/en/docs/integrate/examples/routing_examples/splitting_aggregating_messages.md +++ /dev/null @@ -1,141 +0,0 @@ -# Splitting Messages and Aggregating Responses - -This example scenario uses a back-end service with two stock quote inventories (IBM and SUN). A proxy service is configured in the Micro Integrator with the **Iterate** mediator (to split the incoming message) and the **Aggregate** mediator (to aggregate the responses). - -When a stock quote request is received by the Micro Integrator, the proxy service will read the **message payload** and first identify the parts of the message that are intended for each of the inventories. The Iterate mediator will then split the message and route the parts to the relevant inventories in the backend. 
These messages will be processed asynchronously. - -When the response messages are received from the backend, the Aggregate mediator will aggregate the responses into one and send to the client. - -## Synapse configuration - -Listed below are the synapse configurations (proxy service) for implementing this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -```xml - - - - - - - -
    - - -
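<!-- Reconstruction sketch of the proxy described above: the Iterate mediator
     splits the incoming payload so that each m0:request element becomes its own
     message, and the Aggregate mediator merges the getQuoteResponse elements
     before replying. The proxy name and back-end URI are assumptions based on
     the surrounding text. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="SplitAggregateProxy" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- split the request: one message per m0:request element -->
            <iterate xmlns:m0="http://services.samples" preservePayload="true"
                     attachPath="//m0:getQuote" expression="//m0:getQuote/m0:request">
                <target>
                    <sequence>
                        <send>
                            <endpoint>
                                <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                            </endpoint>
                        </send>
                    </sequence>
                </target>
            </iterate>
        </inSequence>
        <outSequence>
            <!-- collect the responses and reply once all splits have completed -->
            <aggregate>
                <completeCondition>
                    <messageCount min="-1" max="-1"/>
                </completeCondition>
                <onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse">
                    <send/>
                </onComplete>
            </aggregate>
        </outSequence>
    </target>
</proxy>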
    - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. [Create the proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Invoke the sample proxy service: - -```xml -HTTP method: POST -Request URL: http://localhost:8290/services/SplitAggregateProxy -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:mediate" -CustomHeader: application/json -Message Body: - - - - - - IBM - - - SUN - - - - -``` - -You can then observe that the response from the proxy service is the aggregated response received for each of the `getQuote` requests that were sent to the backend. - -```xml - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - IBM Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - IBM - 7791 - - - - - - - -2.86843917118114 - -8.540305401672558 - -176.67958828498735 - 177.66987465262923 - -176.30898912339075 - 5.649557998178506E7 - SUN Company - 185.62740369461244 - 24.341353665128693 - -1.4930577008849097 - 192.11844053187397 - SUN - 7791 - - - - -``` diff --git a/en/docs/integrate/examples/scheduled-tasks/injecting-messages-to-rest-endpoint.md b/en/docs/integrate/examples/scheduled-tasks/injecting-messages-to-rest-endpoint.md deleted file mode 100644 index c5ebc2a27a..0000000000 --- a/en/docs/integrate/examples/scheduled-tasks/injecting-messages-to-rest-endpoint.md +++ /dev/null @@ -1,64 +0,0 @@ -# Injecting Messages to a RESTful Endpoint -In order to use the Message Injector to inject messages to a RESTful endpoint, you can specify the injector with the required payload and inject the message to the sequence or proxy service as defined below. The sample below shows a RESTful message injection through a proxy service. - -## Synapse configurations - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Scheduled Task" - ```xml - - - - - - - London - UK - - - - - - ``` - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - - - - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. 
Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and a [scheduled task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -The XML message you injected (i.e., This is a scheduled task of the default implementation.) will be printed in the logs of the Micro Integrator twice, 5 seconds apart. - -```bash -INFO {org.apache.synapse.mediators.builtin.LogMediator} - Which city? = London, Which country? = UK -``` diff --git a/en/docs/integrate/examples/scheduled-tasks/task-scheduling-simple-trigger.md b/en/docs/integrate/examples/scheduled-tasks/task-scheduling-simple-trigger.md deleted file mode 100644 index 0b576f5839..0000000000 --- a/en/docs/integrate/examples/scheduled-tasks/task-scheduling-simple-trigger.md +++ /dev/null @@ -1,75 +0,0 @@ -# Task Scheduling using a Simple Trigger -This example demonstrates the concept of tasks and how a simple trigger works. Here the `MessageInjector` class is used, which injects a specified message to the Micro Integrator environment. You can write your own task class implementing the `org.apache.synapse.startup.Task` interface and implement the `execute` method to run the task. - -If the task should send the message directly to the endpoint through the main sequence, the endpoint address should be specified. For example, if the address of the endpoint is `http://localhost:9000/services/SimpleStockQuoteService`, the Synapse configuration of the scheduled task will be as shown below. - -## Synapse configurations - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Scheduled Task" - ```xml - - - - - - - - - - IBM - - - - - ``` - -=== "Main Sequence" - ```xml - - - - - - - - - - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [main sequence]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) and a [scheduled task]({{base_path}}/integrate/develop/creating-artifacts/creating-scheduled-task) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -When the Micro Integrator is invoked, you will see that the back-end service generates a quote every 5 seconds and that the Micro Integrator receives the stock quote response. 
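Since the configuration blocks above were garbled in this revision, the following is a sketch of the conventional `MessageInjector` task with a simple trigger for this scenario (a 5-second interval, an IBM `getQuote` payload, and the SimpleStockQuote endpoint, as described in the text; the task name and exact property set are assumptions):

```xml
<task xmlns="http://ws.apache.org/ns/synapse" name="CheckPrice"
      class="org.apache.synapse.startup.tasks.MessageInjector" group="synapse.simple.quartz">
    <!-- fire every 5 seconds -->
    <trigger interval="5"/>
    <!-- send the injected message directly to the back-end endpoint -->
    <property name="to" value="http://localhost:9000/services/SimpleStockQuoteService"/>
    <property name="soapAction" value="urn:getQuote"/>
    <property name="message">
        <m0:getQuote xmlns:m0="http://services.samples">
            <m0:request>
                <m0:symbol>IBM</m0:symbol>
            </m0:request>
        </m0:getQuote>
    </property>
</task>
```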
diff --git a/en/docs/integrate/examples/sequence_examples/custom-sequences-with-proxy-services.md b/en/docs/integrate/examples/sequence_examples/custom-sequences-with-proxy-services.md deleted file mode 100644 index ac8f96800f..0000000000 --- a/en/docs/integrate/examples/sequence_examples/custom-sequences-with-proxy-services.md +++ /dev/null @@ -1,127 +0,0 @@ -# Reusing Sequences -This example demonstrates how to reuse sequences in the Micro Integrator. - -## Synapse configuration - -This configuration creates two Proxy Services. The first Proxy Service (StockQuoteProxy1) uses the sequence named "proxy_1" to process incoming messages and the sequence named "out" to process outgoing responses. The second Proxy Service (StockQuoteProxy2) is set to directly forward messages that are received to the endpoint named "proxy_2_endpoint" without any mediation. - -=== "Endpoint" - ```xml - -
    - - ``` - -=== "Local Entry" - ```xml - - ``` - -=== "Sequence" - ```xml - - -
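<!-- Reconstruction sketch: the proxy_1 sequence described above logs the
     request and hands it to the shared back-end service. The log level and the
     back-end address are assumptions based on the sample back end used
     throughout these examples. -->
<sequence xmlns="http://ws.apache.org/ns/synapse" name="proxy_1">
    <log level="full"/>
    <send>
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
    </send>
</sequence>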
    - - - ``` - -=== "Out Sequence" - ```xml - - - - ``` - -=== "Proxy Service 1" - ```xml - - - - - ``` - -=== "Proxy Service 2" - ```xml - - - - - ``` -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy services]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), the [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences), the [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries), and the [endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoints) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Download the [sample_proxy_1.wsdl file](https://github.com/wso2-docs/WSO2_EI/blob/master/samples-protocol-switching/sample_proxy_1.wsdl) and copy it to the `MI_HOME/samples/wsdl` folder (create this folder if does not already exist). - -You could send a stock quote request to each of these proxy services and receive the reply generated by the actual back-end service. - -- Request to StockQuoteProxy1: - - ```bash - POST http://localhost:8290/services/StockQuoteProxy1.StockQuoteProxy1HttpSoap11Endpoint HTTP/1.1 - Accept-Encoding: gzip,deflate - Content-Type: text/xml;charset=UTF-8 - SOAPAction: "urn:getQuote" - Content-Length: 416 - Host: Chanikas-MacBook-Pro.local:8290 - Connection: Keep-Alive - User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - - ``` - -- Request to StockQuoteProxy2: - - ```bash - POST http://localhost:8290/services/StockQuoteProxy2.StockQuoteProxy2HttpSoap11Endpoint HTTP/1.1 - Accept-Encoding: gzip,deflate - Content-Type: text/xml;charset=UTF-8 - SOAPAction: "urn:getQuote" - Content-Length: 416 - Host: Chanikas-MacBook-Pro.local:8290 - Connection: Keep-Alive - User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - IBM - - - - - ``` \ No newline at end of file diff --git a/en/docs/integrate/examples/sequence_examples/using-fault-sequences.md b/en/docs/integrate/examples/sequence_examples/using-fault-sequences.md deleted file mode 100644 index 2091a2ac5e..0000000000 --- a/en/docs/integrate/examples/sequence_examples/using-fault-sequences.md +++ /dev/null @@ -1,171 +0,0 @@ -# Using Fault Sequences -WSO2 Micro Integrator provides fault sequences for dealing with errors. Whenever an error occurs, the mediation engine attempts to provide as much information as possible on the error to the user by initializing the following properties on the erroneous message: - -- ERROR_CODE -- ERROR_MESSAGE -- ERROR_DETAIL -- ERROR_EXCEPTION - -## Synapse configuration -Following are the integration artifacts that we can used to implement this scenario. 
See the instructions on how to [build and run](#build-and-run) this example. - -- Proxy service: - ```xml - - - - - - -
    - - - - - - - - - - - - - - - - - - - ``` - -- Mediation sequences: - - === "Fault Sequence" - ```xml - - - - - - - - ``` - - === "Error Handling Sequence with Logs" - ```xml - - - - - - - - ``` - - === "Error Handling Sequence" - ```xml - - - - - - ``` - -Note how the `ERROR_MESSAGE` property is being used to get the error message text. Within the fault sequence, you can access these property values using -the `get-property` XPath function. The following log mediator logs the actual error message: - -```xml - - - - -``` - - - -The following is a sample of the configurations to use the Fault sequence in an API. Make note of the "faultSequence" attribute in the "resource" element. - -```xml - - - - - - -
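<!-- Reconstruction sketch: the point of this fragment is the faultSequence
     attribute on the resource element, which routes any mediation error in the
     resource to the named fault-handling sequence. The API name, sequence name,
     and endpoint key below are placeholders. -->
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
    <resource uri-template="/view/{symbol}" methods="GET" faultSequence="fault_sequence">
        <inSequence>
            <send>
                <endpoint key="SimpleStockQuoteEndpoint"/>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
</api>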
    - - - - - - - - - - - - - - - - - - -``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), and the [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send a request to invoke the proxy service: -```xml -POST http://localhost:8290/services/FaultTestProxy HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:mediate" -Content-Length: 263 -Host: Chanikas-MacBook-Pro.local:8290 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - - 50 - 10 - SUN - - - - -``` - -The following line is logged: -```bash -INFO {org.apache.synapse.mediators.builtin.LogMediator} - text = An unexpected error occured for stock SUN, message = Couldn't find the endpoint with the key : sunPort -``` diff --git a/en/docs/integrate/examples/sequence_examples/using-multiple-sequences.md b/en/docs/integrate/examples/sequence_examples/using-multiple-sequences.md deleted file mode 100644 index 4e16e3629d..0000000000 --- a/en/docs/integrate/examples/sequence_examples/using-multiple-sequences.md +++ /dev/null @@ -1,259 +0,0 @@ -# Breaking Complex Flows into Multiple Sequences -This sample demonstrates how a complex sequence can be separated into a set of simpler sequences. In this sample, you will send a simple request to a back-end service (Stock Quote service) and receive a response. If you look at the sample's XML configuration, you will see how this mediation is performed by several sequence definitions instead of one main sequence. - -## Synapse configuration - -Following are the integration artifacts that we can used to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -- Proxy service: - ```xml - - - - - ``` - -- Mediation sequences: - - === "Sequence 1" - ```xml - - - - - - - - - ``` - - === "Sequence 2" - ```xml - - - - - - - ``` - - === "Sequence 3" - ```xml - - - - - - - - - $1 - $2 - $3 - $4 - - - - - - - - - - - - - - - - ``` - - === "Sequence 4" - ```xml - - - - - - - ``` - - === "Sequence 5" - ```xml - - - - - - - ``` - - === "Sequence 6" - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ``` - -- REST API, which calls the back-end service. - ```xml - - - - - - - - $1 - - - - - - - -
    - - -
    - - - - - - - - - - ``` - -## Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), the [mediation sequences]({{base_path}}/integrate/develop/creating-artifacts/creating-reusable-sequences), and the [REST API ]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service: - -1. Download the [back-end service](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). -2. Extract the downloaded zip file. -3. Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -4. Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send a request to invoke the service: -```xml -POST http://localhost:8290/services/SequenceBreakdownSampleProxy.SequenceBreakdownSampleProxyHttpSoap11Endpoint HTTP/1.1 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -SOAPAction: "urn:mediate" -Content-Length: 321 -Host: Chanikas-MacBook-Pro.local:8290 -Connection: Keep-Alive -User-Agent: Apache-HttpClient/4.1.1 (java 1.5) - - - - - - - - - 50 - 10 - IBM - - - - -``` - -You will receive the following response: - -```xml -HTTP/1.1 200 OK -SOAPAction: "urn:mediate" -Host: Chanikas-MacBook-Pro.local:8290 -Accept-Encoding: gzip,deflate -Content-Type: text/xml;charset=UTF-8 -Date: Wed, 02 Oct 2019 10:01:25 GMT -Transfer-Encoding: chunked -Connection: Keep-Alive - - - - - IBM Company - 69.75734480144942 - -69.47003220786323 - 72.09188473048964 - - - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/template_examples/using-endpoint-templates.md b/en/docs/integrate/examples/template_examples/using-endpoint-templates.md deleted file mode 100644 index cd3dbcf13d..0000000000 --- a/en/docs/integrate/examples/template_examples/using-endpoint-templates.md +++ /dev/null @@ -1,151 +0,0 @@ -# Using Endpoint Templates - -For example, let's say we have two address endpoints with the following hypothetical configurations: - -=== "Endpoint 1" - ```xml - -
    - - 10001,10002 - 1.0 - - - 5 - 0 - -
    -
    - ``` - -=== "Endpoint 2" - ```xml - -
    - - 10001,10003 - 2.0 - - - 3 - 0 - -
    -
    - ``` - -Note that these two endpoints have different set of error codes and different progression factors for suspension. Furthermore, the number of retries is different between them. By defining an endpoint template, these two endpoints can be converged to a generalized form. This is illustrated in the following: - -``` - -``` - -!!! Note - - The endpoint template uses parameters as inputs. Hence, these parameters can be refered using the `$` prefix within the template. Unlike sequence templates, endpoint templates are always parameterized using `$` prefixed values (not XPath expressions). e.g., You can refer to a parameter named `codes` as `$codes`. - - `$name` and `$uri` are default parameters that a template can use anywhere within the endpoint template (usually as parameters for endpoint name and address attributes). - -The template is now complete. Therefore, you can use template endpoints to create two concrete endpoint instances with different parameter values for this scenario as shown below. - -=== "Endpoint 1" - ``` xml - - - - - - ``` - -=== "Endpoint 2" - ``` xml - - - - - - ``` -### Synapse configuration - -In this example, the endpoint template is configured to invoke the endpoints based on the API invocation. According to this configuration, the endpoint name, URI, codes, retries, and factor are parameterized. - -=== "REST API" - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - ``` - -### Build and run - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) and [endpoint template]({{base_path}}/integrate/develop/creating-artifacts/creating-endpoint-templates) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -### Invoke the API -1. Using REST client: -Invoke this REST API using the HTTP client in WSO2 Integration Studio. -See that the response from the backend is logged on the console. - -2. Using CURL: - -=== "Request" - ``` xml - curl -v http://localhost:8290/test/bar - ``` - -=== "Response" - ``` xml - { - "symbol": "foo" - } - ``` diff --git a/en/docs/integrate/examples/template_examples/using-sequence-templates.md b/en/docs/integrate/examples/template_examples/using-sequence-templates.md deleted file mode 100644 index 5bfcfbda13..0000000000 --- a/en/docs/integrate/examples/template_examples/using-sequence-templates.md +++ /dev/null @@ -1,246 +0,0 @@ -# Using Sequence Templates - -!!! Info - The **Call Template** mediator allows you to construct a sequence by passing values into a **sequence template**. This is currently only supported for special types of mediators such as the **Iterator** and **Aggregate Mediators**, where actual XPath operations are performed on a different SOAP message, and not on the message coming into the mediator. - -Sequence template parameters can be referenced using an XPath expression defined inside the in-line sequence. 
For example, the parameter named "foo" can be referenced by the Property mediator (defined inside the in-line sequence of the template) in the following ways: - -```xml - -``` - -or - -```xml - -``` - -Using function scope or 'func' in the XPath expression allows us to refer a particular parameter value passed externally by an invoker such as the Call Template mediator. - -See the examples given below. - -## Example 1: Calling a sequence template - -Let's illustrate the sequence template with a simple example. Suppose we have a sequence that logs the text "hello" in three different languages. We shall make use of a proxy to which we shall send a payload. The switch statement will log a greeting based on the language. - -```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -Instead of printing our "hello" message for each and every language inside the sequence (as shown above), we can create a generalized template of these actions, which will accept any greeting message (from a particular language) and log it on screen. For example, let's create the following template named "Hello_Logger". Thus, due to the availability of the Call Template mediator, you are not required to have the message entered in all four languages included in the sequence template configuration itself. - -### Synapse configuration - -Following are the integration artifacts we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -=== "Sequence template" - ```xml - - ``` - -=== "Proxy Service" - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ``` - -Note the following; - -- The following four Call Template mediator configurations populate a sequence template named Hello_Logger with the "Hello" text in four different languages. - - === "Call Template 1" - ```xml - - - - ``` - - === "Call Template 2" - ```xml - - - - ``` - - === "Call Template 3" - ```xml - - - - ``` - - === "Call Template 4" - ```xml - - - - ``` - -- With our "Hello_Logger" in place, the Call Template mediator can -populate the template with actual hello messages and execute the -sequence of actions defined within the template like with any other -sequence. - -- The Call Template mediator points to the same template "Hello_Logger" and passes different arguments to it. In this way, sequence templates make it easy to stereotype different workflows inside the Micro Integrator. - -- The `target` attribute is used to specify the sequence template you want to use. The `` element is used to parse parameter values to the target sequence template. The parameter names should be the same as the names specified in target template. The parameter value can contain a string, an XPath expression (passed in with curly braces { }), or a dynamic XPath expression (passed in with double curly braces) of which the values are compiled dynamically. - -### Build and run (Example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) and [sequence template]({{base_path}}/integrate/develop/creating-artifacts/creating-sequence-templates) with the configurations given above. -4. 
[Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -You can test this out with the following payload sent to the proxy via `http://localhost:8290/services/HelloProxy`: - -```xml - - English - French - Japanese - -``` - -## Example 2: Mandatory parameters and default values - -Following are the integration artifacts we can use to implement this scenario. See the instructions on how to [build and run](#build-and-run) this example. - -### Synapse configuration - -In this example, the sequence template is configured to log the greeting message that is passed from the mediation sequence in the REST API. According to the sequence template, a value for the greeting message is mandatory. However, the REST API is not passing a greeting message to this template. Therefore, the default greeting message specified in the template is effectively applied. - -=== "Sequence Template" - ```xml - - - ``` - -=== "REST API" - ```xml - - - - - - - - - - - - ``` - -### Build and run (Example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api) and [sequence template]({{base_path}}/integrate/develop/creating-artifacts/creating-sequence-templates) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Invoke this REST API using the HTTP client in WSO2 Integration Studio. -See that the default greeting message (`Welcome`) is logged on the console. - -## Example 3: Calling the sequence template using dynamic XPATH expression - -The following Call Template mediator configuration populates a sequence template named `Testtemp` with a dynamic XPath expression. - -```xml - - - -``` - -The following `Testtemp` template includes a dynamic XPath expression to save messages in a Message Store, which is dynamically set via the message context. - -```xml - -``` \ No newline at end of file diff --git a/en/docs/integrate/examples/transport_examples/fix-transport-examples.md b/en/docs/integrate/examples/transport_examples/fix-transport-examples.md deleted file mode 100644 index f1a1cbdb92..0000000000 --- a/en/docs/integrate/examples/transport_examples/fix-transport-examples.md +++ /dev/null @@ -1,90 +0,0 @@ -# Using the FIX Transport - -This example demonstrates the usage of the FIX (Financial Information eXchange) transport with proxy services. - -## Synapse configuration - -WSO2 Micro Integrator will create a session with an Executor and forward the order request. The responses coming from the Executor will be sent back to -Banzai. - -```xml - - file:/home/synapse_user/fix-config/fix-synapse.cfg - file:/home/synapse_user/fix-config/synapse-sender.cfg - file - file - - -
    - - - - - - - - - - -``` - -## Build and run - -- You will need the two sample FIX applications that come with - Quickfix/J (Banzai and Executor). [Configure the two applications]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-fix-transport) to - establish sessions with the Micro Integrator and enable the FIX transport in the Micro-Integrator. -- Start the Micro-Integrator. -- Be sure that the - ` transport.fix.AcceptorConfigURL ` property - points to the ` fix-synapse.cfg ` file you - created. Also make sure that - ` transport.fix. InitiatorConfigURL ` property - points to the ` synapse-sender.cfg ` file you - created. - - !!! Note - The Micro Integrator creates a new FIX session with Banzai at this point. - -- Start Banzai and Executor. -- Send an order request from Banzai to the Micro Integrator. - -### Configuring the Micro Integrator for FIX Samples - -Create the FIX configuration files as specified below. The `FileStorePath` property in the following two files should point to two directories in your local file system. Once the samples are executed, Synapse will create FIX message stores in these two directories. - -=== "fix-synapse.cfg" - ```java - [default] - FileStorePath=repository/logs/fix/data - ConnectionType=acceptor - StartTime=00:00:00 - EndTime=00:00:00 - HeartBtInt=30 - ValidOrderTypes=1,2,F - SenderCompID=SYNAPSE - TargetCompID=BANZAI - UseDataDictionary=Y - DefaultMarketPrice=12.30 - - [session] - BeginString=FIX.4.0 - SocketAcceptPort=9876 - ``` - -=== "synapse-sender.cfg" - ```java - [default] - FileStorePath=repository/logs/fix/data - SocketConnectHost=localhost - StartTime=00:00:00 - EndTime=00:00:00 - HeartBtInt=30 - ReconnectInterval=5 - SenderCompID=SYNAPSE - TargetCompID=EXEC - ConnectionType=initiator - - [session] - BeginString=FIX.4.0 - SocketConnectPort=19876 - ``` diff --git a/en/docs/integrate/examples/transport_examples/pub-sub-using-mqtt.md b/en/docs/integrate/examples/transport_examples/pub-sub-using-mqtt.md deleted file mode 100644 index 02db9d449d..0000000000 --- a/en/docs/integrate/examples/transport_examples/pub-sub-using-mqtt.md +++ /dev/null @@ -1,72 +0,0 @@ -# Using the MQTT transport -This sample demonstrates how to run a Pub-Sub use case using MQTT as the broker. the MQTT listener in the Micro Integrator consumes messages from a MQTT topic, and the MQTT sender publishes messages to a MQTT topic. - -## Synapse configuration - -```xml - - - - -
    - - - - - - - - - - mqttConFactory - esb.test1 - 2 - text/plain - false -   -``` - -Add the following configurations to enable the MQTT listener and sender in `/conf/deployment.toml` file. - -```toml -[transport.mqtt] -listener.enable = true -listener.hostname = "localhost" -listener.connection_factory = "mqttConFactory" -listener.server_port = 1883 -listener.client_id = "client-id-1234" -listener.topic_name = "esb.test2" - -sender.enable = true -``` - -## Build and run - -- Download the `org.eclipse.paho.client.mqttv3-1.1.0.jar` - file. -- Download mosquitto MQTT broker (http://mosquitto.org/) -- Copy the `org.eclipse.paho.client.mqttv3-1.1.0.jar` file to the `MI_HOME/lib/` directory. -- Start the MQTT broker. - -Invoke the proxy service: - -- Execute the following command to start the MQTT subscriber on the - *esb.test2* topic: - - ```bash - mosquitto_sub -h localhost -t esb.test2 - ``` - -- Execute the following command to run the MQTT publisher to publish - to the *esb.test1* topic: - - ```bash - mosquitto_pub -h localhost -p 1883 -t esb.test1 -m {"company":"WSO2"} - ``` - -When you analyze the output messages on the MQTT subscriber console, you -will see the following log: - -```bash -{"company":"WSO2"} -``` diff --git a/en/docs/integrate/examples/transport_examples/tcp-transport-examples.md b/en/docs/integrate/examples/transport_examples/tcp-transport-examples.md deleted file mode 100644 index 5bf625dc59..0000000000 --- a/en/docs/integrate/examples/transport_examples/tcp-transport-examples.md +++ /dev/null @@ -1,373 +0,0 @@ -# Using the TCP Transport - -**Sending multiple messages via the same TCP channel** - -Generally, you can send only one message via one generic TCP channel. Nevertheless, the Micro Integrator also supports sending multiple messages via the same TCP channel by splitting them in different ways. Hence, the TCP transport needs to determine the end of the message that is mediated through the Micro Integrator to split it by a character, a sequence of characters, message length, or special characters in hex form. The client can select which input type to use to send the request to the TCP proxy out of the available options (i.e., binary and String). Splitting the message by a single character is the most efficient method. - -You can split the following sample request input message in different ways as explained below. - -```xml - -``` - -The following are the properties that are specific to sending multiple messages via the same TCP channel. - -| **Property** | **Description** | **Required** | **Possible Values** | **Default Value** | -|--------------------|-------------------------------------------------------|-------------------------------------|-----------------------------|-------------------------| -|recordDelimiterType |Type of the record delimiter you use to split the message | No | Character, byte or String | String | -|recordDelimiter |The delimiter of the record you use to split the message | No | A valid value that matches the specified delimiter type | N/A | -|recordLength | Length of the message to be split. If you set this, then the delimiter properties are omitted. | No | A valid integer value. This will be identified in bytes. | N/A | -|inputType | Input type of the message | No | String or binary | String | - -The following are the transport receiver properties. 
- -| **Property** | **Description** | **Required** | **Possible Values** | **Default Value** | -|--------------------|-------------------------------------------------------|--------------------------- --------|-----------------------------|-------------------------| -|port | The port on which the TCP server should listen for incoming messages | No | A positive integer less than 65535 | 8000 | -|hostname | The host name of the server to be displayed in WSDLs, etc. | No | A valid host name or an IP address | N/A | -|contentType | The content type of the input message | No | A valid content type (e.g., application/xml, application/json, text/html etc.) | N/A | -|responseClient | Whether the client needs to get the response or not | No | True or false | true | - - -## Prerequisites - -[Enable the TCP transport]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-tcp-transport). - -## Example 1: Splitting by a character - -### Synapse configurations - -The following proxy service splits the message by a character. It receives a message with an empty body, which it will forward to the HTTP endpoint after enriching the body with the symbolic value "`IBM`". - -```xml - - - - - - - - - - ? - - - - - - - - - - -
    - - -
    - - - - - - - - - true - | - string - 6060 - character - text/xml - -``` -### Build and Run (Example 1) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. - -Set up the back-end service. - -* Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -* Extract the downloaded zip file. -* Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -* Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following message via TCP to the TCP listener port. -```xml -| -``` -In Linux, we can save the request in a request.xml file and use netcat to send the TCP request. -``` -netcat localhost 6060 < request.xml -``` -It can be observed that two messages are sent to the backend. - -## Example 2: Splitting by a special character - -### Synapse configuration - -The sample proxy below splits the input message by appending a special character to the end of the message. - -```xml - - - - - - - - - - ? - - - - - - - - - - -
    - - -
    - - - - - - - - - 0x03 - true - binary - 6060 - byte - text/xml - -``` - -### Build and Run (Example 2) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an ESB Solution project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project). -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run) in your Micro Integrator. - -Set up the back-end service. - -* Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -* Extract the downloaded zip file. -* Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -* Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following message via TCP to the TCP listener port. -```xml - -``` -In Linux, we can save the request in a request.xml file and use netcat to send the TCP request. -``` -netcat localhost 6060 < request.xml -``` - -## Example 3: Splitting by a character sequence - -### Synapse configuration - -The sample proxy below splits the input message by a sequence of characters. - -```xml - - - - - - - - - - - ? - - - - - - - - - - -
    - - -
    - - - - - - - - -true - split - string - 6060 - string - text/xml - -``` - -### Build and Run (Example 3) - -Create the artifacts: - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an ESB Solution project]({{base_path}}/integrate/develop/create-integration-project/#esb-config-project). -3. Create the [proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-and-run) in your Micro Integrator. - -Set up the back-end service. - -* Download the [back-end service]( -https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip) -* Extract the downloaded zip file. -* Open a terminal, navigate to the `axis2Server/bin/` directory inside the extracted folder. -* Execute the following command to start the axis2server with the SimpleStockQuote back-end service: - - === "On MacOS/Linux/CentOS" - ```bash - sh axis2server.sh - ``` - - === "On Windows" - ```bash - axis2server.bat - ``` - -Send the following message via TCP to the TCP listener port. -```xml -split -``` - -In Linux, we can save the request in a request.xml file and use netcat to send the TCP request. - -``` -netcat localhost 6060 < request.xml -``` -It can be observed that two messages are sent to the backend. - -## Developing the Java Client for the Transport - -The sample Java Client below splits the input message by a special character. Also, you can develop a character delimiter client by changing the below client accordingly. - -```java - import java.io.ByteArrayOutputStream; - import java.io.IOException; - import java.io.InputStream; - import java.io.OutputStreamWriter; - import java.io.PrintWriter; - import java.net.Socket; - - public class TCPClient { - - String host = "localhost"; - int port = 6060; - Socket socket = null; - int count = 0; - - public static void main(String args[]) throws Exception { - Character aByte = 0x10; - TCPClient client = new TCPClient(); - String message = "" - + "" + aByte; - client.sendToServer(message); - client.recieveFromServer(); - client.sendToServer(message); - client.recieveFromServer(); - client.close(); - } - - TCPClient() throws Exception { - socket = new Socket(host, port); - } - - void sendToServer(String msg) throws Exception { - //create output stream attached to socket - PrintWriter outToServer = new PrintWriter(new OutputStreamWriter(socket.getOutputStream())); - //send msg to server - outToServer.print(msg); - outToServer.flush(); - } - - void recieveFromServer() throws Exception { - char delimiter = 0x10; - InputStream inFromServer = socket.getInputStream(); - //read from server - int next = inFromServer.read(); - ByteArrayOutputStream bos = new ByteArrayOutputStream(); - while (next > -1) { - if (delimiter != next) { - bos.write(next); - } - next = inFromServer.read(); - if (delimiter == next) { - System.out.println(new String(bos.toByteArray())); - count++; - if (count == 1 || count == 2) { - break; - } - bos = new ByteArrayOutputStream(); - } - } - if (count == 2) { - close(); - } - } - - void close() throws IOException { - socket.close(); - } - } -``` diff --git a/en/docs/integrate/examples/working-with-transactions.md b/en/docs/integrate/examples/working-with-transactions.md deleted file mode 100644 index 1e8b58fff6..0000000000 --- a/en/docs/integrate/examples/working-with-transactions.md +++ /dev/null @@ -1,509 +0,0 @@ -# Working with Transactions - 
-!!! Warning
-    **Please note that the contents on this page are under review!**
-
-A **transaction** is a set of operations executed as a single unit. It can also be defined as an agreement that is carried out between separate entities or objects. A transaction is considered indivisible, or atomic, when it is either completed in its entirety or not performed at all. In the event of a failure during a transaction update, the atomic transaction type guarantees transaction integrity such that any partial updates are rolled back automatically.
-
-Transactions come in many different forms, such as financial transactions, database transactions, etc.
-
-## Distributed transactions
-
-A **distributed transaction** is a transaction that updates data on two or more networked computer systems, such as two databases, or a database and a message queue such as JMS. Implementing robust distributed applications is difficult because these applications are subject to multiple failures, including failure of the client, the server, and the network connection between the client and server. For distributed transactions, each computer has a local transaction manager. When a transaction works across multiple computers, the transaction managers interact with the other transaction managers via either a superior or subordinate relationship. These relationships are relevant only for a particular transaction.
-
-For an example that demonstrates how the [transaction mediator]({{base_path}}/reference/mediators/transaction-mediator/) can be used to manage distributed transactions, see [Transaction Mediator Example](https://docs.wso2.com/display/EI650/Transaction+Mediator+Example).
-
-### Java Message Service (JMS) transactions
-
-In addition to the [transaction mediator]({{base_path}}/reference/mediators/transaction-mediator/), WSO2 Micro Integrator (WSO2 MI) also supports JMS transactions.
-
-!!! Note
-    In WSO2 MI, JMS transactions only work with either the Callout mediator or the Call mediator in blocking mode.
-
-The [JMS transport](https://docs.wso2.com/display/EI650/JMS+Transport) shipped with WSO2 MI supports both local and distributed JMS transactions. You can use local transactions to group messages received in a JMS queue. Local transactions are not supported for messages sent to a JMS queue.
-
-## JMS consumer transactions
-
-The following sections describe JMS consumer transactions.
-
-### JMS local transactions
-
-A **local transaction** represents a unit of work on a single connection to a data source managed by a resource manager. In JMS, you can use the JMS API to get a transacted session and to call commit or rollback methods on the relevant transaction objects. This is managed internally by a resource manager; there is no external transaction manager involved in the coordination of such transactions.
-
-Let's explore a sample scenario that demonstrates how to handle a transaction using JMS in a situation where the back-end service is unreachable.
-
-#### Sample scenario
-
-A message is read from a JMS queue and is processed by a back-end service. In the successful scenario, the transaction is committed and the request is sent to the back-end service. In the failure scenario, a failure occurs while executing a sequence and WSO2 MI receives a fault. This causes the JMS transaction to roll back.
- -The sample scenario can be depicted as follows: - -![]({{base_path}}/assets/img/integrate/jms_transaction.png) - -#### Prerequisites - -- Windows, Linux or Solaris operating systems with WSO2 MI - installed. For instructions on downloading and installing WSO2 MI, - see [Installation Guide]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi) . -- WSO2 MI JMS transport configured with ActiveMQ. For instructions, - see [Configure with ActiveMQ](https://ei.docs.wso2.com/en/latest/micro-integrator/setup/brokers/configure-with-ActiveMQ/) - . - -#### Configuring the sample scenario - -1. Configure the JMS local transaction by defining the following - parameter in the - ` /conf/deployment.toml ` file. By default the session is not transacted. In order to - make it transacted, we set the session_transaction parameter to true . - - ``` - [[transport.jms.listener]] - name = "myTopicConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "TopicConnectionFactory" - parameter.connection_factory_type = "topic" - parameter.session_transaction = true - - [[transport.jms.listener]] - name = "myQueueConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" # [queue, topic] - parameter.session_transaction = true - - [[transport.jms.listener]] - name = "default" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" # [queue, topic] - parameter.session_transaction = true - ``` - -2. Copy and paste the following configuration into the Synapse - configuration in \< - ` MI_HOME>/repository/deployment/server/synapse-configs//synapse.xml ` - . - - ```xml - - - - - - - - - - - - - - - - - - - - - - contentType - application/xml - - - - ``` - - According to the above configuration, a message will be read from - the JMS queue and will be sent to the - ` SimpleStockQuoteService `. If a failure occurs, the transaction will roll - back. - - In the above configuration, the following property is set to - **true** in the fault handler, in order to roll back the transaction - when a failure occurs. - - ```xml - - ``` - - !!! Tip - If you are using a JMS Inbound endpoint for the transaction, set the - scope of the ` SET_ROLLBACK_ONLY ` property to - ` default ` as follows: - - ```xml - - ``` - - !!! note "Working with Client Acknowledgement" - You can alterntively use Client Acknowledgment of JMS (this will not slow down message consumption). - - Configure JMS transport or inbound JMS protocol to use JMS Client Acknowledgment: - - ``` - consumer - CLIENT_ACKNOWLEDGE - false - ``` - - Upon mediation failure, configure a fault sequence to be executed and set the below property to recover the JMS session, so that we tell broker to redeliver messages from the point last acknowledgement is received. - - ``` - - ``` - - -3. Deploy the back-end service - ` SimpleStockQuoteService ` . - * Download the ZIP file of the back-end service from [here](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/axis2Server.zip). 
-    * Extract the downloaded ZIP file.
-    * Open a terminal and navigate to the `axis2Server/bin/` directory inside the extracted folder.
-    * Execute the following command to start the axis2server with the SimpleStockQuote back-end service:
-
-        === "On MacOS/Linux/CentOS"
-            ```bash
-            sh axis2server.sh
-            ```
-
-        === "On Windows"
-            ```bash
-            axis2server.bat
-            ```
-
-    You now have a running WSO2 MI instance, an ActiveMQ instance, and a sample back-end service to simulate the sample scenario.
-
-    !!! Info
-        Due to the asynchronous behavior of the [Send Mediator]({{base_path}}/reference/mediators/send-mediator/), you cannot use it with an HTTP/S endpoint, but you can use it in asynchronous use cases, for example, with another JMS endpoint.
-
-#### Executing the sample scenario
-
-To execute the sample scenario, trigger a sample message to the JMS server: add a message with an XML payload to the `StockQuoteProxy` queue using the [ActiveMQ Web Console](https://activemq.apache.org/web-console.html).
-
-#### Testing the sample scenario
-
-You can test the sample scenario as follows.
-
-**Successful scenario**
-
-If the message mediates successfully, the MI log displays an INFO message indicating that the transaction is committed.
-
-**Failure scenario**
-
-Stop the SimpleStockQuoteService and add a message to the StockQuoteProxy queue once again to simulate the failure scenario. In this scenario, the MI log displays an INFO message indicating that the transaction is rolled back.
-
-### JMS distributed transactions
-
-WSO2 MI also supports distributed JMS transactions. You can use the JMS transport with more than one distributed resource, for example, two remote database servers. An external transaction manager coordinates the transaction. Designing and using JMS distributed transactions is more complex than using local JMS transactions.
-
-The transaction manager is the primary component of the distributed transaction infrastructure, and distributed JMS transactions are managed by the [XAResource](http://docs.oracle.com/javaee/5/api/javax/transaction/xa/XAResource.html)-enabled transaction manager in the Java 2 Platform, Enterprise Edition (J2EE) application server.
-
-!!! Info
-    You will need to check whether your message broker supports [XA transactions](https://docs.oracle.com/cd/E19509-01/820-5892/ref_xatrans/index.html) prior to implementing distributed JMS transactions.
-
-#### XA two-phase commit process
-
-XA is a two-phase commit specification that is used in distributed transaction processing. Let's look at a sample scenario for JMS distributed transactions.
-
-##### Sample Scenario
-
-MI listens to the message queue and sends that message to multiple queues. If something goes wrong in sending the message to one of those queues, the original message should be rolled back to the listening queue and none of the other queues should receive the message. Thus, the entire transaction should be rolled back.
-
-##### Prerequisites
-
-- Windows, Linux, or Solaris operating systems with WSO2 MI installed. For instructions on downloading and installing WSO2 MI, see the [Installation Guide]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi).
-- WSO2 MI JMS transport configured with ActiveMQ. For instructions, see [Configure with ActiveMQ](https://ei.docs.wso2.com/en/latest/micro-integrator/setup/brokers/configure-with-ActiveMQ/).
-
-##### Configuring the sample scenario
-
-1. 
Create the ` JMSListenerProxy ` proxy service - in WSO2 MI with the following configuration: - - ``` - - - - - - - - - - - - - - failure - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - contentType - application/xml - - - MyJMSQueue - - ``` - - In the above configuration,  WSO2 MI listens to a JMS queue named - ` MyJMSQueue ` and consumes messages as well - as sends messages to multiple JMS queues in a transactional manner. - -2. Place a message in `MyJMSQueue` using the [ActiveMQ Web Console](https://activemq.apache.org/web-console.html). - - You can see how WSO2 MI consumes messages from the queue named - ` MyJMSQueue ` and sends the messages to - multiple queues. - - To check the rollback functionality provide an unreachable hostname - to any destination queue and save the configurations. You should be - able to observe WSO2 MI fault sequence getting invoked and failed - message delivered to the destination configured in the fault - sequence. - -#### JMS publisher transactions - -When you do not enable publisher transactions, the message publishing -call to the Broker will not wait until the messages are persisted to the -database. As a result, a successful HTTP response will be returned back -to the caller even in a state where the database is disconnected. Hence, -the message might actually be lost and not persisted in the Broker. -Therefore, you can achieve guaranteed delivery by enabling publisher -transactions. - -The below is a sample scenario that demonstrates how to handle a -publisher transaction using JMS. - -##### Sample scenario - -In this scenario, the client publishes JMS messages to the WSO2 MI. Then, WSO2 MI publishes those messages to the JMS -queue, which acts as the JMS endpoint. The sample scenario can be -depicted as follows. - -##### Prerequisites - -- Install WSO2 MI. For instructions , see [Installation - Guide]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi) . -- WSO2 MI JMS transport configured with ActiveMQ. For instructions, - see [Configure with - ActiveMQ](https://ei.docs.wso2.com/en/latest/micro-integrator/setup/brokers/configure-with-ActiveMQ/) - . - -##### Configuring the sample scenario - -1. Configure the JMS sender for the WSO2 MI by adding the following configurations in deployment.toml file - available in - ` /conf/deployment.toml ` - - !!! Info - By default, the session is not transacted. Set the value of the - ` session_transaction ` property to true, to - make it transacted to publish transactions successfully. - - ``` - [[transport.blocking.jms.sender]] # jms sender for blocking transport - name = "commonTopicPublisherConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "TopicConnectionFactory" - parameter.connection_factory_type = "topic" - parameter.cache_level = "producer" - parameter.session_transaction = true - - [[transport.blocking.jms.sender]] # jms sender for blocking transport - name = "commonJmsSenderConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" - parameter.cache_level = "producer" - parameter.session_transaction = true - ``` - -2. 
Create an XML file with the below Synapse configuration of a sample - publisher Proxy Service, and place the file inside the - ` /repository/deployment/server/synapse-configs/default/proxy-services/ ` - directory. - - ``` - - - - - - - - - -
    - - - - - - 200 - Successful - - $1 - - - - - - - - - - - - - - - - - 500 - Failure - - $1 - - - - - - - - - - - - - - ``` - -##### Executing the sample scenario - -Use a JMS client such as [Apache JMeter](https://jmeter.apache.org/) to -execute this sample scenario. - -##### Testing the sample scenario - -When a message is successfully published, it returns an HTTP 200 -response to the client (successful scenario). In a case where it fails -to publish a message, it executes the fault sequence returning an HTTP -500 response to the client (failure scenario) . diff --git a/en/docs/integrate/integration-key-concepts.md b/en/docs/integrate/integration-key-concepts.md deleted file mode 100644 index 86f56f1f93..0000000000 --- a/en/docs/integrate/integration-key-concepts.md +++ /dev/null @@ -1,140 +0,0 @@ -# Integration Key Concepts - -Listed below are the key concepts of WSO2 Micro Integrator. - -![Key Concepts]({{base_path}}/assets/img/integrate/key-concepts/key-concepts.png) - -## Message entry points - -Message entry points are the entities that a message can enter into the Micro Integrator mediation flow. - -### REST APIs - -A REST API in WSO2 Micro Integrator is analogous to a web application deployed in the web container. -The REST API will receive messages over the HTTP/S protocol, performs the necessary transformations and processing, and then forwards the messages to a given endpoint. Each API is anchored at a user-defined URL context, -much like how a web application deployed in a servlet container is anchored at a fixed URL context. -An API will only process requests that fall under its URL context. -An API is made of one or more **Resources**, which are logical components of an API that can be accessed by making a particular type of HTTP call. - - -### Proxy Services - -A Proxy service is a virtual service that receives messages and optionally processes them before forwarding them to a service at a given endpoint. This approach allows you to perform the necessary message transformations and -introduce additional functionality to your services without changing your actual services. -Unlike in [REST APIs](#rest-apis), here the protocol does not always need to be HTTP/S. -Proxy Services do support any well-known protocols including HTTP/S, JMS, FTP, FIX, and HL7. - - -### Inbound Endpoints - -In [Proxy services](#proxy-services) and [REST APIs](#rest-apis) some part of the configuration is global to a particular instance. For example, HTTP port needs to be common for all the REST APIs. -The Inbound Endpoints do not contain such global configurations. That gives extra flexibility in configuring -the Inbound Endpoints compared to the other two message entry points. - ---- - -## Message processing units - -### Mediators - -Mediators are individual processing units that perform a specific function on messages that pass through the Micro Integrator. -The mediator takes the message received by the message entry point (Proxy service, REST API, or Inbound Endpoint), -carries out some predefined actions on it (such as transforming, enriching, filtering), and outputs the modified message. - -### Mediation Sequences - -A mediation sequence is a set of [mediators](#mediators) organized into a logical flow, allowing you to implement pipes and filter patterns. The mediators in the sequence will perform the necessary message processing and route the message -to the required destination. 
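-
-As an illustration only, a minimal sequence of this kind might look like the following sketch; the sequence name `SimpleForwardSequence` and the endpoint name `SampleBackendEP` are hypothetical:
-
-```xml
-<!-- A minimal sketch: logs the full message and routes it to a named endpoint -->
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="SimpleForwardSequence">
-    <!-- The Log mediator prints the complete message to the console -->
-    <log level="full"/>
-    <!-- The Send mediator forwards the message to the referenced endpoint -->
-    <send>
-        <endpoint key="SampleBackendEP"/>
-    </send>
-</sequence>
-```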
-
-### Message Stores and Processors
-
-A **Message Store** is used by a [mediation sequence](#mediation-sequences) to temporarily store messages before they are delivered to their destination. This approach is useful in several scenarios:
-
-1. Serving traffic to back-end services that can only accept messages at a given rate, while incoming traffic arrives at varying rates. This use case is called **request rate matching**.
-2. If the back-end service is not available at a particular moment, the message can be kept safely inside the message store until the back-end service becomes available. This use case is called **guaranteed delivery**.
-
-The task of the **Message Processor** is to pick the messages stored in the Message Store and deliver them to the destination.
-
-### Templates
-
-A large number of configuration files in the form of [sequences](#mediation-sequences), [endpoints](#endpoints), and transformations can be required to satisfy all the mediation requirements of your system. To keep your configurations manageable, it is important to avoid scattering configuration files across different locations and to avoid duplicating redundant configurations. Templates help minimize this redundancy by creating prototypes that users can use and reuse when needed. WSO2 Micro Integrator can template [sequences](#mediation-sequences) and [endpoints](#endpoints).
-
----
-
-## Message exit points
-
-### Endpoints
-
-A message exit point, or an endpoint, defines an external destination for a message. An endpoint could represent a URL, a mailbox, a JMS queue, a TCP socket, and so on, along with the settings needed for the connection.
-
-### Connectors
-
-Connectors allow your mediation flows to connect and interact with external services such as Twitter and Salesforce. Typically, a connector wraps the API of an external service and is a collection of [mediation templates](#templates) that define specific operations that should be performed on that service. Each connector provides operations that perform different actions in that service. For example, the Twitter connector has operations for creating a tweet, getting a user's followers, and more.
-
-To download a required connector, go to the [WSO2 Connector Store](https://store.wso2.com/store).
-
----
-
-## Data Services
-
-The data in your organization can be a complex pool of information that is stored in heterogeneous systems. Data services are created for the purpose of decoupling the data from its infrastructure. In other words, when you create a data service in WSO2 Micro Integrator, the data that is stored in a storage system (such as an RDBMS) can be exposed in the form of a service. This allows users (that may be any application or system) to access the data without interacting with the original source of the data. Data services are, thereby, a convenient interface for interacting with the database layer in your organization.
-
-A data service in WSO2 Micro Integrator is a SOAP-based web service by default. However, you also have the option of creating REST resources, which allows the applications and systems that consume the data service to have both SOAP-based and RESTful access to your data.
-
----
-
-## Other concepts
-
-### Scheduled Tasks
-
-Executing an integration process at a specified time is a common requirement in enterprise integration. For example, in an organization, there can be a need to run an integration process that synchronizes two systems at the end of each day. In the Micro Integrator, the execution of a message mediation process can be automated to run periodically by using a **Scheduled task**. You can schedule a task to run at a given time interval 't' for 'n' number of times, or to run once the Micro Integrator starts. Furthermore, you can use cron expressions for more advanced execution-time configuration.
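-
-As a rough sketch, a task that injects a message into a sequence at midnight every day could be configured along the following lines; the task name, sequence name, and payload used here are hypothetical:
-
-```xml
-<!-- A hypothetical scheduled task: the cron expression fires at midnight every day -->
-<task name="DailySyncTask" class="org.apache.synapse.startup.tasks.MessageInjector" group="synapse.simple.quartz" xmlns="http://ws.apache.org/ns/synapse">
-    <trigger cron="0 0 0 * * ?"/>
-    <!-- Inject the message below into a (hypothetical) sequence named DailySyncSequence -->
-    <property name="injectTo" value="sequence"/>
-    <property name="sequenceName" value="DailySyncSequence"/>
-    <property name="message">
-        <sync xmlns=""/>
-    </property>
-</task>
-```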
-
-### Transports
-
-A transport protocol is responsible for carrying messages that are in a specific format. WSO2 Micro Integrator supports all the widely used transports, including HTTP/S, JMS, and VFS, as well as domain-specific transports like FIX. Each transport provides a receiver implementation for receiving messages and a sender implementation for sending messages.
-
-### Service Catalog
-
-The Service Catalog is one of the main capabilities that enable API-first integration in WSO2 API Manager. Through the Service Catalog, integration services are made discoverable to the API management layer so that API proxies can be created directly from them.
-
-These integration services can be created using WSO2 Integration Studio and a variety of other platforms. For an Integration Studio user, the service registration happens automatically when the project is exported as a composite application (CApp).
-
-### Registry
-
-WSO2 Micro Integrator uses a registry to store various configurations and resources such as [endpoints](#endpoints). A registry is simply a content store and a metadata repository. Various resources such as XSLT scripts, WSDLs, and configuration files can be stored in a registry and referred to by a key, which is a path similar to a UNIX file path. WSO2 Micro Integrator uses a [file-based registry]({{base_path}}/install-and-setup/setup/mi-setup/deployment/file_based_registry) that is configured by default. When you develop your integration artifacts, you can also define and use a [local registry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries).
-
-### Message Builders and Formatters
-
-When a message comes into WSO2 Micro Integrator, the receiving transport selects a **message builder** based on the message's content type. It uses that builder to process the message's raw payload data and convert it to common XML, which the mediation engine of WSO2 Micro Integrator can then read and understand. WSO2 Micro Integrator includes message builders for text-based and binary content.
-
-Conversely, before a transport sends a message out from WSO2 Micro Integrator, a **message formatter** is used to build the outgoing stream from the message back into its original format. As with message builders, the message formatter is selected based on the message's content type. You can implement new message builders and formatters for custom requirements.
\ No newline at end of file
diff --git a/en/docs/integrate/integration-overview.md b/en/docs/integrate/integration-overview.md
deleted file mode 100644
index c4de6a15a7..0000000000
--- a/en/docs/integrate/integration-overview.md
+++ /dev/null
@@ -1,601 +0,0 @@
-
-
-# Integration Overview
-
-WSO2 API Manager 4.2.0 is shipped with an integration runtime (Micro Integrator) with comprehensive enterprise integration capabilities. Therefore, you can now use WSO2 API Manager to develop complex integration services and expose them as managed APIs in an API marketplace. This allows you to enable API-led connectivity across your business using a single platform.
- -## Get Started with Integration - -Let's get started with the integration capabilities and concepts of the Micro Integrator of WSO2 API Manager. - -
    -
    - -
    -
    -
    - integration quick start -
    -
    -

    Quick Start with Integration

    -

    Try out a simple message mediation using the Micro Integrator.

    -
    -
    - - -
    -
    -
    - develop first integration -
    -
    -

    Develop your First Integration

    -

    Build a simple integration scenario using WSO2 Integration Studio.

    -
    -
    - - -
    -
    -
    - integration key concepts -
    -
    -

    Key Concepts of Integration

    -

    Explore the key concepts used by the Micro Integrator.

    -
    -
    - -
    -
    - -## Integration Strategy - -You can now leverage the integration capabilities as well as the API management capabilities of the product to implement any of the following integration strategies. - -### API-led Integration - -WSO2 API Manager consists of an API management layer as well as an integration layer, which enables API-led integration through a single platform. The integration layer (Micro Integrator) is used for running the integration APIs, which are developed using WSO2 Integration Studio. The API management layer is used for converting the integration APIs into experience APIs and making them discoverable to developers. - -See API-led Integration for more information. - -### Microservices Integration - -The Micro Integrator is lightweight and container friendly. This allows you to leverage the comprehensive enterprise messaging capabilities of the Micro Integrator in your decentralized, cloud-native integrations. - - - -If your organization is running on a decentralized, cloud-native, integration architecture where microservices are used for integrating the various APIs, events, and systems, the Micro Integrator can easily function as your Integration microservices and API microservices. - -### Centralized Integration (Enterprise Service Bus) - -At the heart of the Micro Integrator server is an event-driven, standards-based messaging engine (the Bus). This ESB supports message routing, message transformations, and other types of messaging use cases. If your organization uses an API-driven, centralized, integration architecture, the Micro Integrator can be used as the central integration layer that implements the message mediation logic connecting all the systems, data, events, APIs, etc. in your integration ecosystem. - - - -## Learn Integration - -See the topics in the following sections for details and instructions. - -### Integration Use Cases - -Learn about the main integration capabilities of the Micro Integrator of WSO2 API Manager. You can also follow the [tutorials](#integration-tutorials) on each of these use cases to gain hands-on knowledge. - -
- **Message Routing**: Explore how messages are routed to different endpoints.
- **Message Transformation**: Explore how messages are transformed into different formats.
- **Data Integration**: Explore how data from various sources is used during message mediation.
- **File Processing**: Explore how data from file systems is moved and used during message mediation.
- **SaaS and B2B Connectivity**: Explore how to integrate with third-party systems using WSO2 connectors.
- **Service Orchestration**: Explore how multiple RESTful services are exposed as a single coarse-grained service.
- **Enterprise Messaging**: Explore asynchronous messaging patterns using message brokers.
- **Scheduled Integration Processes**: Explore how integration processes are scheduled and executed periodically.
- **Protocol Switching**: Explore how message protocols are changed during message mediation.
    - -### Integration Development - -Learn how to set up the development environment and build integration solutions. - -
- **Quick Tour - WSO2 Integration Studio**: Get an overview of the developer tool that you will use for developing integrations.
- **Install WSO2 Integration Studio**: Install and set up WSO2 Integration Studio.
- **Development Workflow**: Get an overview of the integration development workflow.
    - -See the **Developing Integrations** section in the left-hand navigator for more topics on working with integrations. - -### Management and Observability - -Learn about the dashboards, tools, and solutions that are available for managing and monitoring integrations deployed in the Micro Integrator. - -
- **Micro Integrator Dashboard**: Dashboard for monitoring integration artifacts in a Micro Integrator cluster.
- **APICTL (CLI for Integration)**: Command-line tool for monitoring integration artifacts in a Micro Integrator instance.
- **Observability for Integrations**: Observability solution for integrations deployed in a Micro Integrator cluster.
    - -### DevOps and Administration - -Learn how to set up a Micro Integrator deployment and configure the deployment according to your requirements. - -
- **Installation**: Install the Micro Integrator in your environment.
- **Deployment**: Select a deployment strategy and set up a deployment (on containers or VMs).
- **Upgrade**: Upgrade to the latest Micro Integrator from previous product versions.
- **Configuration and Setup**: Configure security, data stores, performance, message brokers, transports, etc.
- **User Management**: Configure a user store and manage users and roles in the Micro Integrator.
- **CI/CD Pipelines**: Implement CI/CD pipelines for your deployment (on containers or VMs).
    - -### Integration Tutorials - -Learn how to implement various integration use cases, deploy them in the Micro Integrator, and test them locally. - -- API-led Integration tutorials - - - - - - - - -
    - Exposing an Integration Service as a Managed API
    - Exposing an Integration SOAP Service as a Managed API

- Message mediation tutorials

### Integration Examples
- Message Routing
- Message Transformation
- Asynchronous Messaging
    - RabbitMQ Examples
    - JMS Examples
- Protocol Switching
- File Processing
- Data Integration
- Examples of Components
diff --git a/en/docs/integrate/integration-use-case/asynchronous-message-overview.md b/en/docs/integrate/integration-use-case/asynchronous-message-overview.md
deleted file mode 100644
index 13892dec4b..0000000000
--- a/en/docs/integrate/integration-use-case/asynchronous-message-overview.md
+++ /dev/null
@@ -1,81 +0,0 @@
# Asynchronous Message Processing

Asynchronous messaging is a communication method wherein the system puts a message in a message queue and does not require an immediate response to continue processing. Asynchronous messaging is useful for the following:

- Delegating a request to an external system for processing
- Ensuring delivery of a message to an external system
- Throttling message rates between two systems
- Batch processing of messages

Note the following about asynchronous message processing:

- Asynchronous messaging solves the problem of intermittent connectivity. The message receiving party does not need to be online to receive the message, as the message is stored in a middle layer. This allows the receiver to retrieve the message when it comes online.
- Message consumers do not need to know about the message publishers. They can operate independently.

Disadvantages of asynchronous messaging include the additional component of a message broker or transfer agent needed to ensure the message is received. This may affect both performance and reliability. There are various levels of message delivery reliability guarantees from publisher to broker and from broker to subscriber. Wire-level protocols such as AMQP and MQTT can provide these.
**Tutorials**

- RabbitMQ Examples
- JMS Examples
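The sketch below outlines one way to build a store-and-forward flow in the Micro Integrator using a message store and a message processor. The artifact names (`OrdersStore`, `AsyncOrderProxy`, `OrderBackendEP`) are placeholders, and an in-memory store is used only to keep the example self-contained; a broker-backed store (RabbitMQ or JMS) would normally be used for reliable delivery.

```xml
<!-- Messages are accepted immediately, queued, and forwarded asynchronously -->
<messageStore name="OrdersStore"
              class="org.apache.synapse.message.store.impl.memory.InMemoryStore"/>

<proxy name="AsyncOrderProxy" transports="http https" startOnLoad="true"
       xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <!-- Reply to the client with 202 Accepted without waiting for the backend -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <store messageStore="OrdersStore"/>
        </inSequence>
    </target>
</proxy>

<!-- Drains the store and forwards each message to the backend endpoint -->
<messageProcessor name="OrdersForwarder"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="OrdersStore" targetEndpoint="OrderBackendEP"
                  xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="interval">1000</parameter>
</messageProcessor>
```

Because the client is acknowledged immediately, the backend can be offline when the request arrives; the forwarding processor keeps retrying delivery until it succeeds.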
\ No newline at end of file
diff --git a/en/docs/integrate/integration-use-case/connectors.md b/en/docs/integrate/integration-use-case/connectors.md
deleted file mode 100644
index b520e9b3d3..0000000000
--- a/en/docs/integrate/integration-use-case/connectors.md
+++ /dev/null
@@ -1,112 +0,0 @@
# SaaS and B2B Integration

Connectors are a means of interacting with various SaaS applications on the cloud, databases, and popular B2B protocols. See [Connectors Overview]({{base_path}}/reference/connectors/connectors-overview) for more information.

The following are documented connectors available from the [connector store](https://store.wso2.com/store/assets/esbconnector/list). Click the link of the connector to view the documentation for each connector.

!!! Info
    For details on connectors not mentioned in this documentation, you can find more information in the [WSO2 ESB Connectors documentation](https://docs.wso2.com/display/ESBCONNECTORS/WSO2+ESB+Connectors+Documentation) or in the [GitHub repository of the connector](https://github.com/wso2-extensions) you are looking for.

## SaaS Connectors
| Connector | Description |
|-----------|-------------|
| Amazon DynamoDB | The Amazon DynamoDB Connector allows you to access the Amazon DynamoDB REST API. |
| Amazon Lambda | The AmazonLambda Connector allows you to access the REST API of Amazon Web Service Lambda (AWS Lambda), which lets you run code without provisioning or managing servers. |
| Amazon S3 | The AmazonS3 Connector allows you to access the REST API of Amazon Storage Service S3, which lets you store your information and retrieve it when needed. |
| Amazon SQS | This connector enables you to perform CRUD operations for queues in an Amazon SQS instance, update permissions, and work with messages through the Amazon SQS API. |
| Ceridian Dayforce | The Ceridian Dayforce connector allows you to access the REST API of Ceridian Dayforce HCM. |
| Gmail | The Gmail Connector allows you to integrate with the Gmail REST API. |
| Google Firebase | The Google Firebase Connector is useful for integrating Google Firebase with other enterprise applications, on-premise or cloud, using the Google Firebase API. |
| Google Spreadsheet | The WSO2 Google Spreadsheet Connector allows you to access the Google Spreadsheet API Version v4. |
| Microsoft Azure Storage | The Microsoft Azure Storage Connector allows you to access the Azure Storage services using the Microsoft Azure Storage Java SDK. |
| Salesforce REST | The connector uses the Salesforce REST API to interact with Salesforce. |
| Salesforce Streaming | The Salesforce Streaming Inbound Endpoint allows you to perform various operations on Salesforce streaming data via the WSO2 integration server. The Salesforce streaming API receives notifications based on the changes that happen to Salesforce data with respect to an SOQL (Salesforce Object Query Language) query you define, in a secure and scalable way. |
| ServiceNow | Using the ServiceNow connector, you can work with the Aggregate API, Import Set API, and Table API in ServiceNow. |
## Technology Connectors
| Connector | Description |
|-----------|-------------|
| DB Event Listener | The DB Event Inbound Endpoint is the DB event listener. You can configure it with popular database systems such as `MySQL` and `Oracle`. |
| FHIR | This connector uses the HAPI FHIR APIs to connect with a Test Server, which is an open-source server licensed under the Apache Software License 2.0 (a Java-based implementation of the FHIR specification). |
| File | The File Connector uses the Apache Commons VFS I/O functionalities to execute operations related to the file system and allows you to easily manipulate files based on your requirements. |
| ISO8583 | The ISO8583 message format is used for financial transactions such as ATM, POS, Credit Card, Mobile Banking, Internet Banking, KIOSK, and e-commerce. |
| ISO8583 Inbound Endpoint | The ISO8583 inbound endpoint is a listening inbound endpoint that can consume ISO8583 standard messages. |
| Kafka Producer | This connector enables you to send messages to a Kafka broker via Kafka topics. It uses the Producer API. |
| Kafka Inbound Endpoint | The Kafka inbound endpoint in the integration server acts as a message consumer. It creates a connection to ZooKeeper and requests messages for a topic, a set of topics, or topic filters. |
| LDAP | The LDAP connector allows you to connect to any LDAP server through a simple web services interface and perform CRUD (Create, Read, Update, Delete) operations on LDAP entries. |
| SMPP | The SMPP (Short Message Peer-to-Peer Protocol) Connector allows you to send an SMS through the integration runtime. It uses the jsmpp API to communicate with an SMSC (Short Message Service Center). |
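As a rough sketch of how a connector operation is invoked from a mediation flow, the following API calls the Salesforce REST connector's `query` operation. The connection values are placeholders, and the exact `init` parameters vary by connector and version, so check the documentation of the connector you are using.

```xml
<api name="SalesforceQueryAPI" context="/accounts" xmlns="http://ws.apache.org/ns/synapse">
    <resource methods="GET">
        <inSequence>
            <!-- Connection details for the connector; values below are placeholders -->
            <salesforcerest.init>
                <accessToken>REPLACE_WITH_ACCESS_TOKEN</accessToken>
                <apiUrl>https://ap2.salesforce.com</apiUrl>
                <apiVersion>v44.0</apiVersion>
            </salesforcerest.init>
            <!-- Run a SOQL query and return the result to the caller -->
            <salesforcerest.query>
                <queryString>SELECT Id, Name FROM Account</queryString>
            </salesforcerest.query>
            <respond/>
        </inSequence>
    </resource>
</api>
```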
diff --git a/en/docs/integrate/integration-use-case/data-integration-overview.md b/en/docs/integrate/integration-use-case/data-integration-overview.md
deleted file mode 100644
index 58306cef76..0000000000
--- a/en/docs/integrate/integration-use-case/data-integration-overview.md
+++ /dev/null
@@ -1,47 +0,0 @@
# Data Integration

Data integration is an important part of an integration process. For example, consider a typical integration process that is managed using the Micro Integrator: data stored in various, disparate datasources is required in order to complete the integration use case.

The data services functionality that is embedded in the Micro Integrator can decouple the data from the datasource layer and expose it as data services. The main integration flow defined in the Integrator will then have the capability of managing the data through the data service. Once the data service is defined, you can manipulate the data stored in the datasources by invoking the relevant operation defined in the data service. For example, you can perform basic CRUD operations as well as other advanced operations.
**Tutorials**

**Examples**
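The following is a minimal data service sketch that exposes a single SELECT query as an operation. The datasource values, table, and column names are placeholders, assuming a MySQL database.

```xml
<data name="EmployeeDataService">
    <!-- Datasource definition; credentials and URL are placeholders -->
    <config id="default">
        <property name="driverClassName">com.mysql.jdbc.Driver</property>
        <property name="url">jdbc:mysql://localhost:3306/employees</property>
        <property name="username">db_user</property>
        <property name="password">db_password</property>
    </config>
    <!-- The query decouples the SQL from the exposed operation -->
    <query id="selectEmployeesQuery" useConfig="default">
        <sql>SELECT id, name FROM employee</sql>
        <result element="employees" rowName="employee">
            <element column="id" name="id" xsdType="integer"/>
            <element column="name" name="name" xsdType="string"/>
        </result>
    </query>
    <!-- Invoking this operation runs the query and returns the result set -->
    <operation name="getEmployees">
        <call-query href="selectEmployeesQuery"/>
    </operation>
</data>
```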
\ No newline at end of file
diff --git a/en/docs/integrate/integration-use-case/file-processing-overview.md b/en/docs/integrate/integration-use-case/file-processing-overview.md
deleted file mode 100644
index 9370e88f23..0000000000
--- a/en/docs/integrate/integration-use-case/file-processing-overview.md
+++ /dev/null
@@ -1,42 +0,0 @@
# File Processing

In many business domains, there are different use cases related to managing files. There are also file-based legacy systems that are tightly coupled with other systems. These files contain huge amounts of data, and processing them manually requires significant effort and does not scale as the system load increases. This leads to the requirement of automating the processing of files. WSO2 Micro Integrator enables the following file processing capabilities:

- Reading, writing, and updating files

    Files can be located in the local file system or in a remote location that can be accessed over protocols such as FTP, FTPS, SFTP, and SMB. Therefore, the system used to process those files should be capable of communicating over those protocols.

- Processing data

    The system should be capable of extracting the relevant information from the file. For example, if it is required to process XML files, the system should be capable of executing an XPath expression on the file content and extracting the relevant information.

- Executing business logic

    The system should be capable of performing the actions that are required to construct a business use case. It should be capable of making decisions and sending the processed information to other systems over different communication protocols.
**Tutorials**

**Examples**
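A minimal sketch of a file-polling proxy service using the VFS transport is shown below. The file paths and poll interval are placeholders; in a real flow, the in-sequence would contain the processing logic for each file that is picked up.

```xml
<proxy name="FileMoverProxy" transports="vfs" startOnLoad="true"
       xmlns="http://ws.apache.org/ns/synapse">
    <!-- Poll a local directory every 15 seconds for XML files -->
    <parameter name="transport.vfs.FileURI">file:///data/in</parameter>
    <parameter name="transport.vfs.ContentType">text/xml</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <!-- Move each file after successful or failed processing -->
    <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterProcess">file:///data/processed</parameter>
    <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterFailure">file:///data/failed</parameter>
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
</proxy>
```

The same proxy can point at a remote location by changing the `FileURI` to, for example, an `sftp://` URL.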
\ No newline at end of file
diff --git a/en/docs/integrate/integration-use-case/message-routing-overview.md b/en/docs/integrate/integration-use-case/message-routing-overview.md
deleted file mode 100644
index 478716d05d..0000000000
--- a/en/docs/integrate/integration-use-case/message-routing-overview.md
+++ /dev/null
@@ -1,90 +0,0 @@
# Message Routing and Transformation

## Message routing

Message routing is one of the most fundamental requirements when integrating systems or services. Common approaches include content-based routing, header-based routing, rules-based routing, and policy-based routing. WSO2 Micro Integrator enables these routing capabilities using the concepts of mediators and endpoints.

The following image depicts a form of message routing where a message is routed through the Micro Integrator to the appropriate service. In this case, the Switch and Send mediators can be used.

Message Routing
**Tutorials**
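The following fragment sketches content-based routing with the Switch mediator, assuming an order message whose `type` element decides the route. The endpoint keys are placeholders.

```xml
<switch source="//order/type" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Route wholesale orders to the wholesale backend -->
    <case regex="wholesale">
        <send><endpoint key="WholesaleServiceEP"/></send>
    </case>
    <!-- Route retail orders to the retail backend -->
    <case regex="retail">
        <send><endpoint key="RetailServiceEP"/></send>
    </case>
    <!-- Anything else falls through to a default service -->
    <default>
        <send><endpoint key="DefaultServiceEP"/></send>
    </default>
</switch>
```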
## Message transformation

The integration of systems that communicate in various message formats is a common business case in enterprise integration. WSO2 Micro Integrator facilitates this use case as the intermediary system, bridging the communication gap among the systems.

The following image depicts a typical message transformation scenario using the Transform mediator.

Message Transformation
**Tutorials**
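The mediation sketch below shows one common case of this, rebuilding an incoming XML customer record as a JSON payload with the PayloadFactory mediator; the element names are illustrative.

```xml
<payloadFactory media-type="json" xmlns="http://ws.apache.org/ns/synapse">
    <!-- The new JSON payload; $1 and $2 are filled from the args below -->
    <format>{"customer": {"id": "$1", "name": "$2"}}</format>
    <args>
        <arg evaluator="xml" expression="//customer/id"/>
        <arg evaluator="xml" expression="//customer/name"/>
    </args>
</payloadFactory>
```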
For example, consider a service that returns data in XML format and a mobile client that accepts messages only in JSON format. To allow these two systems to communicate, the intermediary system needs to convert message formats during the communication. This allows the systems to communicate with each other without depending on the message formats supported by each system.
\ No newline at end of file
diff --git a/en/docs/integrate/integration-use-case/protocol-switching-overview.md b/en/docs/integrate/integration-use-case/protocol-switching-overview.md
deleted file mode 100644
index 26c3193993..0000000000
--- a/en/docs/integrate/integration-use-case/protocol-switching-overview.md
+++ /dev/null
@@ -1,48 +0,0 @@
# Protocol Switching

The Micro Integrator offers a wide range of integration capabilities, from simple message routing to complex integrated solutions. Different applications typically use different protocols for communication. Therefore, for two systems to communicate successfully, it is necessary to switch the protocol of a message coming from one system to the protocol compatible with the receiving application.

For example, messages that are received via HTTP may need to be sent to a JMS queue. Further, you can couple the protocol switching feature with the message transformation feature to handle use cases where the content of a message received via one protocol (such as HTTP) is first processed, and then sent out in a completely different message format and protocol.
**Examples**
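A minimal protocol-switching sketch is shown below: a proxy service receives a message over HTTP and forwards it to a JMS queue. The broker here is assumed to be ActiveMQ, and the queue name and connection URL are placeholders.

```xml
<proxy name="HttpToJmsProxy" transports="http" startOnLoad="true"
       xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <!-- No response is expected back from the queue -->
            <property name="OUT_ONLY" value="true"/>
            <send>
                <endpoint>
                    <address uri="jms:/OrderQueue?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>
```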
\ No newline at end of file
diff --git a/en/docs/integrate/integration-use-case/scheduled-task-overview.md b/en/docs/integrate/integration-use-case/scheduled-task-overview.md
deleted file mode 100644
index b5a7fdec6e..0000000000
--- a/en/docs/integrate/integration-use-case/scheduled-task-overview.md
+++ /dev/null
@@ -1,23 +0,0 @@
# Periodic Execution of Integration Processes

Executing an integration process at a specified time is another common requirement in enterprise integration. For example, in an organization, there can be a need to run an integration process that synchronizes two systems every day at the end of the day.

In the Micro Integrator, the execution of a message mediation process can be automated to run periodically by using a 'Scheduled task'. You can schedule a task to run at a given interval 't' for 'n' number of times, or to run once the Micro Integrator starts.

Furthermore, you can use cron expressions for more advanced execution-time configurations.
**Examples**
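The following sketch schedules a task that injects a trigger message into a sequence every day at 11.00 PM using a cron expression. The task, sequence, and message names are placeholders.

```xml
<task name="DailySyncTask" group="synapse.simple.quartz"
      class="org.apache.synapse.startup.tasks.MessageInjector"
      xmlns="http://ws.apache.org/ns/synapse">
    <!-- Fire at 23:00 every day -->
    <trigger cron="0 0 23 * * ?"/>
    <!-- Inject the trigger message into the SyncSequence mediation flow -->
    <property name="injectTo" value="sequence"/>
    <property name="sequenceName" value="SyncSequence"/>
    <property name="message">
        <sync xmlns=""/>
    </property>
</task>
```

For a simple interval-based schedule instead of cron, the trigger can be written as `<trigger interval="3600" count="10"/>`, which runs the task every hour, ten times.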
diff --git a/en/docs/integrate/integration-use-case/service-orchestration-overview.md b/en/docs/integrate/integration-use-case/service-orchestration-overview.md
deleted file mode 100644
index bbbc3bb8f3..0000000000
--- a/en/docs/integrate/integration-use-case/service-orchestration-overview.md
+++ /dev/null
@@ -1,34 +0,0 @@
# Service Orchestration

Service orchestration is the process of exposing multiple fine-grained services using a single coarse-grained service. The service client only has access to a single coarse-grained service, which encapsulates the multiple fine-grained services that are invoked in the process flow.

There are two distinct types of service orchestration:

- Synchronous service orchestration
- Asynchronous service orchestration

In both of the above orchestration approaches, the WSO2 Micro Integrator can interact with services using two different patterns (depending on the use case):

**Service chaining**

Multiple services that need to be orchestrated are invoked one after the other in a synchronous manner. The input to one service is dependent on the output of another service. The invocation of services and the input-output mapping are handled by the service orchestration layer (which is the WSO2 Micro Integrator).

**Parallel service invocations**

Multiple services are invoked simultaneously, without blocking until a response is received from another service.
**Tutorials**
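The fragment below sketches synchronous service chaining with the Call mediator: the response from the first service is used to build the request for the second. The endpoint keys and JSON paths are placeholders.

```xml
<inSequence xmlns="http://ws.apache.org/ns/synapse">
    <!-- Invoke the first service and wait for its response -->
    <call>
        <endpoint key="ChannelingFeeEP"/>
    </call>
    <!-- Map the first response into the request for the second service -->
    <payloadFactory media-type="json">
        <format>{"payment": {"fee": $1}}</format>
        <args>
            <arg evaluator="json" expression="$.actualFee"/>
        </args>
    </payloadFactory>
    <call>
        <endpoint key="SettlePaymentEP"/>
    </call>
    <!-- Return the final response to the client -->
    <respond/>
</inSequence>
```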
    \ No newline at end of file From 63f9c4fcd092f9bf715d4d1e3620165eba97b8ac Mon Sep 17 00:00:00 2001 From: DinithiDiaz Date: Tue, 5 Mar 2024 09:08:39 +0530 Subject: [PATCH 03/23] Remove MI pages from Analytics section --- en/docs/mi-analytics/mi-elk-dashboards.md | 174 ------------ .../mi-analytics/mi-elk-installation-guide.md | 246 ----------------- .../mi-analytics/setting-up-mi-analytics.md | 250 ------------------ .../using-the-analytics-dashboard.md | 240 ----------------- 4 files changed, 910 deletions(-) delete mode 100644 en/docs/mi-analytics/mi-elk-dashboards.md delete mode 100644 en/docs/mi-analytics/mi-elk-installation-guide.md delete mode 100644 en/docs/mi-analytics/setting-up-mi-analytics.md delete mode 100644 en/docs/mi-analytics/using-the-analytics-dashboard.md diff --git a/en/docs/mi-analytics/mi-elk-dashboards.md b/en/docs/mi-analytics/mi-elk-dashboards.md deleted file mode 100644 index 6d6b40f8a5..0000000000 --- a/en/docs/mi-analytics/mi-elk-dashboards.md +++ /dev/null @@ -1,174 +0,0 @@ -# ELK Dashboards for Micro Integrator - -## Dashboards - -### Overall Dashboard (wso2-mi-overall) - -Gives you an idea about overall analytics. - -Mi Overall 01 - -Mi Overall 02 - -|Total Requests|Total number of requests handled by the Micro Integrator| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses. (Example HTTP status code 500)| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Overall Message Count|Total number of requests received within the time span| -|Top Proxy Services by Request Count|Top Proxy services that served the highest number of requests| -|Top APIs by Request Count|Top APIs that served the highest number of requests| -|Top Inbound Endpoints by Request Count|Top Inbound Endpoints that served the highest number of requests| -|Top Endpoints by Request Count|Top Endpoints that served the highest number of requests| -|Top Sequences by Request Count|Top Endpoints that served the highest number of requests| - - -### API Dashboard (wso2-mi-api) - -Gives you an idea about API analytics. - -Mi API - -|Total Requests|Total number of requests handled by the APIs| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses. (Example HTTP status code 500)| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Maximum Latency|Maximum latency recorded by a single request| -|Average Latency|Average latency for a request| -|Top APIs by Message Count|Top APIs that served the highest number of requests| -|Message Latency|Maximum, Minimum and Average latency for the messages in the time span| -|Message Count|Total number of requests received within the time span| - - -### Endpoints Dashboard (wso2-mi-endpoints) - -Gives you an idea about Endpoints analytics. - -Mi Endpoints - -|Total Requests|Total number of requests handled by the Endpoints| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses. 
(Example HTTP status code 500)| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Maximum Latency|Maximum latency recorded by a single request| -|Average Latency|Average latency for a request| -|Top Endpoints by Message Count|Top Endpoints that served the highest number of requests| -|Message Latency|Maximum, Minimum and Average latency for the messages in the time span| -|Message Count|Total number of requests received within the time span| - - -### Inbound Endpoints Dashboard (wso2-mi-inbound-endpoints) - -Gives you an idea about Inbound Endpoints analytics. - -Mi Inbound Endpoints - -|Total Requests|Total number of requests handled by the Inbound Endpoints| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses. (Example HTTP status code 500)| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Maximum Latency|Maximum latency recorded by a single request| -|Average Latency|Average latency for a request| -|Top Inbound Endpoints by Message Count|Top Inbound Endpoints that served the highest number of requests| -|Message Latency|Maximum, Minimum and Average latency for the messages in the time span| -|Message Count|Total number of requests received within the time span| - -### Sequences Dashboard (wso2-mi-sequences) - -Gives you an idea about Sequences analytics. - -Mi Sequences - -|Total Requests|Total number of requests handled by the Sequences| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses.| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Maximum Latency|Maximum latency recorded by a single request| -|Average Latency|Average latency for a request| -|Top Sequences by Message Count|Top Sequences that served the highest number of requests| -|Message Latency|Maximum, Minimum and Average latency for the messages in the time span| -|Message Count|Total number of requests received within the time span| - -### Proxy Services Dashboard (wso2-mi-proxy-services) - -Gives you an idea about Proxy Services analytics. - -Mi Proxy Services - -|Total Requests|Total number of requests handled by the Proxy Services| -|:----|:----| -|Fault Response Rate|Fault response percentage| -|Failure Rate|Number of failure requests| -|Fault Responses|Total Number of fault responses. (Example HTTP status code 500)| -|Failure Requests|Total Number of requests that failed| -|Success Requests|Total Number of requests that were successful| -|Maximum Latency|Maximum latency recorded by a single request| -|Average Latency|Average latency for a request| -|Top Proxy Services by Message Count|Top Proxy Services which served the highest number of requests| -|Message Latency|Maximum, Minimum and Average latency for the messages in the time span| -|Message Count|Total number of requests received within the time span| - - -## Creating Advanced Dashboards - -This section will help you to setup advanced dashboards using custom metadata. Use [this documentation](https://www.elastic.co/kibana/kibana-dashboard) by Elastic to explore what kind of dashboard widgets you can create. 
Assume you have a user registration API deployed through a WSO2 Micro Integrator, which accepts JSON POST requests whose body contains the user's age and country. You can publish the age and country with the API analytics and create a visualization in the Kibana dashboard.

Example API (a minimal sketch; the resource path and JSON expressions are illustrative):

```xml
<api name="userRegistrationAPI" context="/user">
    <resource methods="POST" uri-template="/signup">
        <inSequence>
            <!-- Publish selected request fields with the analytics event -->
            <property name="newUserAge" scope="ANALYTICS" expression="json-eval($.age)"/>
            <property name="newUserLocation" scope="ANALYTICS" expression="json-eval($.country)"/>
            <respond/>
        </inSequence>
    </resource>
</api>
```

To add data to the published analytics, you can use the Property mediator. Any data that you add to the ANALYTICS scope will be published to the Elastic Stack. In this example, the age and country are added to the analytics scope as `newUserAge` and `newUserLocation`.

```json
{
  "timestamp": "2022-07-20T14:03:16.190979Z",
  "payload": {
    "httpMethod": "POST",
    "entityType": "API",
    "metadata": {
      "newUserAge": 28,
      "newUserLocation": "AU"
    },
    "latency": 1,
    "apiDetails": {
      "method": "POST",
      "apiContext": "/user",
      "subRequestPath": "/signup",
      "api": "userRegistrationAPI"
    },
    "faultResponse": false
  },
  "serverInfo": { … },
  "schemaVersion": 1
}
```

These custom data will be published under `metadata` inside the `payload`.

diff --git a/en/docs/mi-analytics/mi-elk-installation-guide.md b/en/docs/mi-analytics/mi-elk-installation-guide.md
deleted file mode 100644
index 919fe02658..0000000000
--- a/en/docs/mi-analytics/mi-elk-installation-guide.md
+++ /dev/null
@@ -1,246 +0,0 @@
# Elastic Stack-Based Operational Analytics for Micro Integrator

As an alternative to WSO2 EI Analytics, from version 4.2.0 onwards, the Micro Integrator supports publishing operational analytics to the Elastic Stack. As a part of this feature, WSO2 provides dashboards that include operational data from the following entities:

- APIs
- Sequences
- Endpoints
- Inbound Endpoints
- Proxy Services

!!! note
    Enabling Elasticsearch analytics will disable data publishing to WSO2 EI Analytics.

## Required components from Elastic Stack

The following components are required from the Elastic Stack to enable operational analytics on WSO2 Micro Integrator.

- Kibana
- Elasticsearch
- Logstash
- Filebeat

## Analytics dataflow

The Micro Integrator publishes the analytics data to a log file (`synapse-analytics.log`). This log file is read using Filebeat, and the data required for analytics is sent as JSON to Elasticsearch through Logstash.

Example analytics log line:

``` log
12:30:57,396 [-] [message-flow-reporter-1-tenant--1234] INFO ElasticStatisticsPublisher SYNAPSE_ANALYTICS_DATA {"serverInfo":{"hostname":"sulochana","serverName":"localhost","ipAddress":"192.168.1.5","id":"localhost"},"timestamp":"2022-08-18T07:00:57.346Z","schemaVersion":1,"payload":{"metadata":{},"entityType":"API","failure":false,"latency":0,"messageId":"urn:uuid:0541cbe6-0424-4b91-9461-7550b278673c","correlation_id":"b187ecca-100c-4af5-854e-6759a364f6c7","apiDetails":{"method":"POST","apiContext":"/hotels","api":"hotels","transport":"http","subRequestPath":"/cancel"},"faultResponse":false,"entityClassName":"org.apache.synapse.api.API"}}
```

## Setup procedure

The setup procedure consists of three main stages:

- Set up the Elastic Stack
- Configure the Micro Integrator
- Configure Integration Projects to support custom analytics (optional)

### Setup Elastic Stack

In this stage, we download and install the required components from the Elastic Stack.

!!!
note - Note that this guide is to set up the ELK stack at an entry-level, and in a production environment, it is highly recommended to configure each component separately for security, performance, and high availability. - -### Install Elasticsearch - -1. [Install Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/8.3/install-elasticsearch.html) according to your operating system. - -2. Make sure Elasticsearch is [up and running](https://www.elastic.co/guide/en/elasticsearch/reference/8.3/starting-elasticsearch.html). - - -### Install Kibana. - -1. [Install Kibana](https://www.elastic.co/guide/en/kibana/8.3/install.html) according to your operating system. - -2. [Launch the Kibana web interface](https://www.elastic.co/guide/en/kibana/8.3/start-stop.html). - -3. Log in to the Kibana dashboards. - -4. [Create a user](https://www.elastic.co/guide/en/kibana/8.3/tutorial-secure-access-to-kibana.html#_users) with cluster privileges :manage_index_templates, monitor and index privileges: create_index, create, write to wso2-mi-analytics-* indices pattern.The credentials of this user need to be included in the Logstash configuration. - -### Installing Logstash - -1. [Install Logstash](https://www.elastic.co/guide/en/logstash/8.3/installing-logstash.html) according to your operating system. - -2. Use the following [configuration file]({{base_path}}/assets/attachments/mi-elk/config.conf) when starting Logstash. Update the `logstash_internal_user_password` and `elasticsearch_home` placeholders in the configuration file. - - ``` conf - input { - beats { - port => 5044 - } - } - - filter { - grok { - match => ["message", "%{GREEDYDATA:UNWANTED}\ SYNAPSE_ANALYTICS_DATA %{GREEDYDATA:analyticJson}"] - } - json { - source => "analyticJson" - target => "analytic" - } - - mutate { - copy => {"[analytic][payload][entityType]" => "[@metadata][appNameIndex]"} - } - - mutate { - remove_field => [ "UNWANTED", "analyticJson", "message" ] - } - - mutate { - lowercase => ["[@metadata][appNameIndex]"] - } - } - output { - elasticsearch { - hosts => ["https://localhost:9200"] - user => "logstash_username" - password => "" - index => "wso2-mi-analytics-%{[@metadata][appNameIndex]}" - ssl => true - ssl_certificate_verification => true - cacert => "/config/certs/http_ca.crt" - } - } - - ``` - -### Installing Filebeat - -1. [Install Filebeat](https://www.elastic.co/guide/en/beats/filebeat/8.3/filebeat-installation-configuration.html) according to your operating system. - -2. Download the [sample configuration file]({{base_path}}/assets/attachments/mi-elk/filebeat.yml) and replace with the Micro Integrator’s home directory and with Logstash URL. - - ``` yaml - filebeat.inputs: - - type: filestream - id: wso2mi-analytics - enabled: true - paths: - - /repository/logs/synapse-analytics.log - - output.logstash: - # The Logstash hosts - hosts: [":5044"] - - ``` - -3. Make sure Filebeat is [up and running](https://www.elastic.co/guide/en/beats/filebeat/8.3/filebeat-installation-configuration.html#start). - -### Import Dashboards and DataViews - -1. Download the [dashboards.ndjson]({{base_path}}/assets/attachments/mi-elk/dashboards.ndjson) file. - -2. On Kibana UI go to **Stack Management** → **Saved objects** and **Import** the downloaded file. This should import the following objects into Kibana. - -### Configure Security in ELK - -ElasticSearch supports basic authentication via an internal user store. 
To set up basic authentication in ElasticSearch and Kibana, refer to the [ElasticSearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html). - -## Configure Micro Integrator - -To enable operational analytics, you must update the deployment.toml file with the required configurations and add a new log appender so that the analytics data will be stored in a dedicated log file. - -### Enabling statistics for artifacts - -You must enable statistics for the integration artifacts you wish to monitor. If you want to collect statistics for all your integration artifacts, be sure to add the `flow.statistics.capture_all` parameter to the deployment.toml file - -``` toml -[mediation] -flow.statistics.enable=true -flow.statistics.capture_all=true -``` - -### Enable Analytics - -Add the following configuration to the deployment.toml file to enable analytics, which includes custom analytics. - -``` toml -[analytics] -enabled=true -``` - -You can have more control over the analytics data with the following additional configurations. - -``` toml -[analytics] -enabled = true -publisher = "log" -id = "wso2mi_server_1234" -prefix = "SYNAPSE_ANALYTICS_DATA" -api_analytics.enabled = true -proxy_service_analytics.enabled = true -sequence_analytics.enabled = true -endpoint_analytics.enabled = true -inbound_endpoint_analytics.enabled = true - -``` - -|Config Key|Data Type|Default Value|Description| -|:----|:----|:----|:----| -|api_analytics.enabled|bool|TRUE|If set to false, analytics for APIs will not be published| -|proxy_service_analytics.enabled|bool|TRUE|If set to false, analytics for Proxy Services will not be published| -|sequence_analytics.enabled|bool|TRUE|If set to false, analytics for Sequences will not be published| -|endpoint_analytics.enabled|bool|TRUE|If set to false, analytics for Endpoints will not be published| -|inbound_endpoint_analytics.enabled|bool|TRUE|If set to false, analytics for Inbound Endpoints will not be published| -|prefix|string|SYNAPSE_ANALYTICS_DATA|This string will be used as a prefix when Elasticsearch analytics are being published. The purpose of this prefix is to distinguish log lines that hold analytics data from others. If you override this default value, you will have to update the Logstash and Filebeat configuration files accordingly.| -|enabled|bool|FALSE|If set to true, Elasticsearch service will be enabled| -|id|string|hostname|An identifier that will be published with the analytic| - -### Creating Log Appender - -Open the `/repository/conf` directory and edit the `log4j2.properties` file following the instructions given below. - -1. Add `ELK_ANALYTICS_APPENDER` to the appenders list. - - ``` - appenders = ELK_ANALYTICS_APPENDER,.... (list of other available appenders) - ``` - -2. Add the following configuration after the appenders: - - !!! note - - Any changes to the layout pattern may require changes in the Logstash configuration file. - - The `synapse-analytics.log` file is rolled each day or when the log size reaches the limit of 1000 MB by default. Furthermore, only ten revisions will be kept, and older revisions will be deleted automatically. You can change these configurations by updating the configurations provided in step 2 above. 
- - ``` log - appender.ELK_ANALYTICS_APPENDER.type = RollingFile - appender.ELK_ANALYTICS_APPENDER.name = ELK_ANALYTICS_APPENDER - appender.ELK_ANALYTICS_APPENDER.fileName = ${sys:carbon.home}/repository/logs/synapse-analytics.log - appender.ELK_ANALYTICS_APPENDER.filePattern = ${sys:carbon.home}/repository/logs/synapse-analytics-%d{MM-dd-yyyy}-%i.log - appender.ELK_ANALYTICS_APPENDER.layout.type = PatternLayout - appender.ELK_ANALYTICS_APPENDER.layout.pattern = %d{HH:mm:ss,SSS} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n - appender.ELK_ANALYTICS_APPENDER.policies.type = Policies - appender.ELK_ANALYTICS_APPENDER.policies.time.type = TimeBasedTriggeringPolicy - appender.ELK_ANALYTICS_APPENDER.policies.time.interval = 1 - appender.ELK_ANALYTICS_APPENDER.policies.time.modulate = true - appender.ELK_ANALYTICS_APPENDER.policies.size.type = SizeBasedTriggeringPolicy - appender.ELK_ANALYTICS_APPENDER.policies.size.size=1000MB - appender.ELK_ANALYTICS_APPENDER.strategy.type = DefaultRolloverStrategy - appender.ELK_ANALYTICS_APPENDER.strategy.max = 10 - ``` - -3. Add ELKAnalytics to the loggers list: - - ``` log - loggers = ELKAnalytics, ...(list of other available loggers) - ``` - -4. Add the following configurations after the loggers. - - ``` log - logger.ELKAnalytics.name = org.wso2.micro.integrator.analytics.messageflow.data.publisher.publish.elasticsearch.ElasticStatisticsPublisher - logger.ELKAnalytics.level = DEBUG - logger.ELKAnalytics.additivity = false - logger.ELKAnalytics.appenderRef.ELK_ANALYTICS_APPENDER.ref = ELK_ANALYTICS_APPENDER - ``` - - - diff --git a/en/docs/mi-analytics/setting-up-mi-analytics.md b/en/docs/mi-analytics/setting-up-mi-analytics.md deleted file mode 100644 index 327d3c08fc..0000000000 --- a/en/docs/mi-analytics/setting-up-mi-analytics.md +++ /dev/null @@ -1,250 +0,0 @@ -# Set up MI Analytics - -!!! note - - - The MI Analytics feature has been deprecated. This solution is recommended only for users who are using WSO2 EI 7.0.0 and want to migrate in to a newer version while retaining the already existing analytics data. - -## How it works - -[![MI Analytics]({{base_path}}/assets/img/integrate/mi-analytics/analytics-architecture.jpg)]({{base_path}}/assets/img/integrate/mi-analytics/analytics-architecture.jpg) - - -MI Analytics consists of two components: **Server** and **Portal**. The server processes the data streams that are sent from the Micro Integrator and publishes the statistics to a database. The portal reads the statistics published by the worker and displays the statistics. The server and portal are connected through the database. - -Follow the instructions given below to enable Analytics in the Micro Integrator profile. - -## System requirements - -You will be running three servers (Analytics server, MI Analytics portal, and the Micro Integrator) for this solution. Be sure that you have the required system specifications to run each server. - -??? info "More information on the system requirements" - - - For the Analytics **Server**: - - - - - - - - - - - - -
        - Memory:
            - ~ 4 GB per worker node
            - It is recommended to allocate 4 cores.
            - ~ 2 GB is the initial heap (-Xms) required for the server startup. The maximum heap size is 4 GB (-Xmx).
        - Disk:
            - ~ 480 MB, excluding space allocated for log files and databases.
    - For the Analytics **Portal**:
        - Memory:
            - ~ 2 GB minimum, 4 GB maximum
            - 2 CPU cores minimum. It is recommended to allocate 4 cores.
            - ~ 512 MB heap size. This is generally sufficient to process typical SOAP messages, but the requirements vary with larger message sizes and the number of messages processed concurrently.
        - Disk:
            - ~ 480 MB, excluding space allocated for log files and databases.
  • - - - For the Micro Integrator, see the [installation prerequsites]({{base_path}}/install-and-setup/install/installation-prerequisites). - -## Step 1 - Download the servers - -- **Download Integrator Analytics**. - 1. Go to the WSO2 Enterprise Integrator product page, click TRY IT NOW, and then go to the Other Resources section. - 2. Click Integration Analytics to download the distribution. - - Integration Analytics download menu - - !!! Info - The location of your Analytics installation will be referred to as ``. - -- **Download and [install the Micro Integrator]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi)**. - -## Step 2 - Configure the Micro Integrator - -### Step 2.1 - Enable statistics monitoring - -To enable statistics monitoring for the Micro Integrator, add the following parameters in the `deployment.toml` file of your Micro Integrator. This file is stored in the `/conf`. - -```toml -[mediation] -flow.statistics.enable=true -stat.tracer.collect_payloads=true -stat.tracer.collect_mediation_properties=true -``` - -### Step 2.2 - Enable data publishing to MI Analytics - -Follow the instructions below to configure the Micro Integrator to publish data to MI Analytics. Analytics publishing can be configured in the `[monitoring]` section of the `/conf/deployment.toml` file as shown below. - -!!! Note - By default, the Micro Integrator is internally configured (with the following) to connect with an Integrator Analytics server running on the same Virtual Machine (VM). To change the default setup, you need to add the following to the `deployment.toml` file and update the values. - -```toml -[monitoring] -ei_analytics.server_url = "tcp://localhost:7612" -ei_analytics.auth_server_url = "ssl://localhost:7712" -ei_analytics.username = "admin" -ei_analytics.password = "admin" -``` - -If the Analytics nodes run in cluster mode or in different VMs, you can configure the `ei_analytics.server_url` and the `ei-analytics.auth_server_url` parameters in a load balancing manner. For more information, see, [Set up load balancing](#step-25-optionally-set-up-load-balancing). - -### Step 2.3 - Optionally, enable statistics for ALL artifacts - -If you want to collect statistics for **all** your integration artifacts, be sure to add the following parameter under the `[mediation]` header in the `deployment.toml` file in addition the [parameters explained above](#step-2-configure-the-micro-integrator): - -```toml -flow.statistics.capture_all=true -``` - -Alternatively, you can enable statistics for selected artifacts as explained below. - -### Step 2.4 - Optionally, enable statistics for specific artifacts - -Let's use the integration artifacts from the [service chaining]({{base_path}}/tutorials/integration-tutorials/exposing-several-services-as-a-single-service) tutorial. - -!!! Warning - It is **not recommended to enable tracing in production environments** as it generates a large number of events that reduces the performance of the analytics profile. Therefore, tracing should only be enabled in development environments. - -!!! info "If you do not have the integration artifacts from the service chaining tutorial" - If you did not try the [service chaining]({{base_path}}/tutorials/integration-tutorials/exposing-several-services-as-a-single-service) tutorial yet: - - 1. 
Download the [pre-packaged project](https://github.com/wso2-docs/WSO2_EI/blob/master/Integration-Tutorial-Artifacts/Integration-Tutorial-Artifacts-EI7.1.0/service-orchestration-tutorial.zip) for the **service chaining** use case. - 2. [Open WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio) and [import the pre-packaged project]({{base_patgh}}/integrate/develop/importing-projects). - -#### REST API artifact - -Follow the steps below to enable statistics and tracing for the **REST API** artifact: - -1. Select `HealthcareAPI` in the canvas of WSO2 Integration Studio to open the **Properties** tab. -2. Select **Statistics Enabled** and (if required) **Trace Enabled** as shown below. - - rest api properties - -#### Endpoint artifacts - -Follow the steps below to enable statistics for the **endpoint** artifacts: - -1. Select the required endpoint artifacts from the project explorer. -2. Select **Statistics Enabled** and (if required) **Trace Enabled** as shown below. - [![endpoint properties]({{base_path}}/assets/img/integrate/mi-analytics/endpoint-properties.png){: style="width:80%"}]({{base_path}}/assets/img/integrate/mi-analytics/endpoint-properties.png) - -### Step 2.5 - Optionally, set up load balancing - -You can send events to multiple Analytics servers either by sending the same event to many Analytics servers or by load balancing events among a set of servers. This handles the failover problem. When events are load balanced within a set of servers and if one receiver cannot be reached, events are automatically sent to the other available and active Analytics servers. - -#### Load balancing across a group of servers - -To configure this setup, configure the Analytics receiver URL specified in the Micro Integrator as a comma-separated list of Analytics servers. - -The format of the receiver URL should be as follows: - -``` -tcp://:,tcp://:,tcp://: -``` - -Example configuration in the `deployment.toml` file of the Micro Integrator: - -```toml -[monitoring] -ei_analytics.server_url = "tcp://10.100.2.32:7611, tcp://10.100.2.33:7611, tcp://10.100.2.34:7611" -ei_analytics.auth_server_url = "tcp://10.100.2.32:7612, tcp://10.100.2.33:7612, tcp://10.100.2.34:7612" -ei_analytics.username = "admin" -ei_analytics.password = "admin" -``` - -[![lb events to servers]({{base_path}}/assets/img/integrate/mi-analytics/ob-lb-events-to-servers.jpg){: style="width:70%"}]({{base_path}}/assets/img/integrate/mi-analytics/ob-lb-events-to-servers.jpg) - -This handles failover as follows: - -- If Analytics Receiver-1 is marked as down, then the Micro Integrator will send the data only to Analytics Receiver-2 and Analytics Receiver-3 in a round robin manner. -- When the Analytics Receiver-1 becomes active after some time, the Micro Integrator automatically detects it, adds it to the operation, and again starts to load balance between all three receivers. This functionality significantly reduces the loss of data and provides more concurrency. - -#### Load balancing across multiple groups of servers - -In this setup, there are two sets of servers that are referred to as set-A and set-B. -You can send events to both the sets. You can also carry out load balancing for both sets as mentioned in [Load balancing across a group of servers](#load-balancing-across-a-group-of-servers). This scenario is a combination of load balancing between a set of servers and sending an event to several receivers. - -- An event is sent to both set-A and set-B. 
-- Within set-A, it is sent either to Analytics A1 or Analytics A2. -- Similarly within set-B, it is sent either to Analytics B1 or Analytics B2. -- In the setup, you can have any number of sets and any number of servers as required. - - [![lb events to set of servers]({{base_path}}/assets/img/integrate/mi-analytics/ob-lb-to-sets-of-servers.jpg){: style="width:70%"}]({{base_path}}/assets/img/integrate/mi-analytics/ob-lb-to-sets-of-servers.jpg) - -Similar to the other scenarios, you need to describe the server URLs as the receiver URL in the Micro Integrator configuration. The sets should be specified within curly braces separated by commas. Furthermore, each receiver that belongs to the set should be within the curly braces and with the receiver URLs in a comma-separated format. - -The format of the receiver URL should be as follows: - -``` -{tcp://Analytics-A1:port, tcp://Analytics-A2:port},{tcp://Analytics-B1:port, tcp://Analytics-B2:port} -``` - -Example configuration in the `deployment.toml` file of the Micro Integrator: - -```toml -[monitoring] -ei_analytics.server_url = "{tcp://10.100.2.32:7611, tcp://10.100.2.33:7611}, {tcp://10.100.2.34:7611, tcp://10.100.2.35:7611}" -ei_analytics.auth_server_url = "{tcp://10.100.2.32:7612, tcp://10.100.2.33:7612}, {tcp://10.100.2.34:7612, tcp://10.100.2.35:7612}" -ei_analytics.username = "admin" -ei_analytics.password = "admin" -``` - -#### Sending all events to several analytics servers - -This setup involves sending all the events to more than one Analytics server. -This approach is useful when you want to have multiple Analytics servers to analyze the same events simultaneously. -For example, as shown below, you can configure the Micro Integrator to publish the same event to both Analytics servers at the same time. - - [![all events to all servers]({{base_path}}/assets/img/integrate/mi-analytics/ob-all-events-to-all-servers.jpg){: style="width:70%"}]({{base_path}}/assets/img/integrate/mi-analytics/ob-all-events-to-all-servers.jpg) - -The Analytics receiver URL should be configured with the following format in the Micro Integrator: - -``` -{tcp://Analytics-1>:}, {tcp://Analytics-2>:}, {tcp://:} -``` - -Example configuration in the `deployment.toml` file of the Micro Integrator: - -```toml -[monitoring] -ei_analytics.server_url = "{tcp://10.100.2.32:7611},{ tcp://10.100.2.33:7611}, {tcp://10.100.2.34:7611}" -ei_analytics.auth_server_url = "{tcp://10.100.2.32:7612},{ tcp://10.100.2.33:7612}, {tcp://10.100.2.34:7612}" -ei_analytics.username = "admin" -ei_analytics.password = "admin" -``` - -#### Failover configuration - -When using the failover configuration in publishing events to Analytics, events are sent to multiple Analytics servers in a sequential order based on priority. -You can specify multiple Analytics servers so that events can be sent to the next server in the specified sequence (in a situation where they were not successfully sent to the first server). - -In the scenario depicted in the image below, -- The events are first sent to Analytics-1. -- If it is unavailable, then events are sent to Analytics-2. -- If Analytics-2 is also unavailable, then the events are sent to Analytics-3. 
- -[![fail over]({{base_path}}/assets/img/integrate/mi-analytics/ob-fail-over.jpg){: style="width:70%"}]({{base_path}}/assets/img/integrate/mi-analytics/ob-fail-over.jpg) - -The Analytics receiver URL should be configured with the following format in the Micro Integrator: - -``` -tcp://:|tcp://:|tcp://: -``` - -```toml -[monitoring] -ei_analytics.server_url = "tcp://10.100.2.32:7611|tcp://10.100.2.33:7611|tcp://10.100.2.34:7611" -ei_analytics.auth_server_url = "tcp://10.100.2.32:7612|tcp://10.100.2.33:7612|tcp://10.100.2.34:7612" -ei_analytics.username = "admin" -ei_analytics.password = "admin" -``` - -## What's Next? - -If you have successfully set up your analytics deployment, see the instructions on [using the analytics portal]({{base_path}}/observe/mi-observe/using-the-analytics-dashboard). diff --git a/en/docs/mi-analytics/using-the-analytics-dashboard.md b/en/docs/mi-analytics/using-the-analytics-dashboard.md deleted file mode 100644 index c68a251e40..0000000000 --- a/en/docs/mi-analytics/using-the-analytics-dashboard.md +++ /dev/null @@ -1,240 +0,0 @@ -# Access the MI Analytics Portal - -!!! note - - - The MI Analytics feature has been deprecated. This solution is recommended only for users who are using WSO2 EI 7.0.0 and want to migrate in to a newer version while retaining the already existing analytics data. - -Let's use **MI Analytics** to view and monitor **statistics** and **message tracing**. - -You can monitor the following statistics and more through the MI Analytics Portal: - -- Request Count -- Overall TPS -- Overall Message Count -- Top Proxy Services by Request Count -- Top APIs by Request Count -- Top Endpoints by Request Count -- Top Inbound Endpoints by Request Count -- Top Sequences by Request Count - -!!! Tip - Monitoring the usage of the integration runtime using statistical information is very important for understanding the overall health of a system that runs in production. Statistical data helps to do proper capacity planning, to keep the runtimes in a healthy state, and for debugging and troubleshooting problems. When it comes to troubleshooting, the ability to trace messages that pass through the mediation flows of the Micro Integrator is very useful. - -## Before you begin - -- Set up the [MI Analytics deployment]({{base_path}}/install-and-setup/setup/mi-setup/observability/setting-up-classic-observability-deployment). - -- Note the following server directory in your deployment. - - - - - - -
- `<ANALYTICS_HOME>`: This is the root folder of your MI Analytics installation.
    - -## Step 1 - Start the servers - -Let's start the servers in the given order. - -### Step 1.1 - Start the Analytics Server - -!!! Note - Be sure to start the **Analytics** server before [starting the Micro Integrator](#starting-the-micro-integrator). - -1. Open a terminal and navigate to the `/bin` directory. -2. Start the Analytics server by executing the following command: - - === "On MacOS/Linux/Centos" - ```bash - sh server.sh - ``` - - === "On Windows" - ```bash - server.bat - ``` - -### Step 1.2 - Start the Micro Integrator - -Once you have [started the Analytics Server](#starting-the-analytics-server), you can [start the Micro Integrator]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi/). - -### Step 1.3 - Start the Analytics Portal - -1. Open a terminal and navigate to the `/bin` directory. -2. Start the Analytics Portal's runtime by executing the following command: - - === "On MacOS/Linux/Centos" - ```bash - sh portal.sh - ``` - - === "On Windows" - ```bash - portal.bat - ``` - -In a new browser window or tab, open the Analytics Portal using the following URL: https://localhost:9645/analytics-dashboard. -Use `admin` for both the username and password. - - - -## Step 2 - Publish statistics to the Portal - -Let's **test this solution** by running the [service chaining]({{base_path}}/tutorials/integration-tutorials/exposing-several-services-as-a-single-service) tutorial. When the artifacts deployed in the Micro Integrator are invoked, the statistics will be available in the portal. - -Follow the steps given below. - -??? note "Step 1: Deploy integration artifacts" - - If you have already started the Micro Integrator server, let's deploy the artifacts. Let's use the integration artifacts from the [service chaining]({{base_path}}/tutorials/integration-tutorials/exposing-several-services-as-a-single-service) tutorial. - - 1. Download the [CAR file](https://github.com/wso2-docs/WSO2_EI/blob/master/Analytics/Integration-Artifacts/SampleServicesCompositeExporter_1.0.0.car). - 2. Copy the CAR file to the `/repository/deployment/server/carbonapps/` directory. - -??? note "Step 2: Start the backend" - - Let's start the hospital service that serves as the backend to the [service chaining]({{base_path}}/tutorials/integration-tutorials/exposing-several-services-as-a-single-service) use case: - - 1. Download the JAR file of the back-end service from [here](https://github.com/wso2-docs/WSO2_EI/blob/master/Back-End-Service/Hospital-Service-JDK11-2.0.0.jar). - 2. Open a terminal, navigate to the location where your saved the back-end service. - 3. Execute the following command to start the service: - ```bash - java -jar Hospital-Service-JDK11-2.0.0.jar - ``` - -??? note "Step 3: Sending messages" - - Let's send 8 requests to the Micro Integrator to invoke the integration artifacts: - - !!! Tip - For the purpose of demonstrating how successful messages and message failures are illustrated in the portal, let's send 2 of the requests while the back-end service is not running. This should generate a success rate of 75%. - - 1. Create a JSON file called `request.json` with the following request payload. - ```json - { - "name": "John Doe", - "dob": "1940-03-19", - "ssn": "234-23-525", - "address": "California", - "phone": "8770586755", - "email": "johndoe@gmail.com", - "doctor": "thomas collins", - "hospital": "grand oak community hospital", - "cardNo": "7844481124110331", - "appointment_date": "2025-04-02" - } - ``` - - 2. 
Open a command line terminal and execute the following command (**six times**) from the location where you save the - `request.json` file: - ```bash - curl -v -X POST --data @request.json http://localhost:8290/healthcare/categories/surgery/reserve --header "Content-Type:application/json" - ``` - If the messages are sent successfully, you will receive the following response for each request. - ```json - { - "appointmentNo": 1, - "doctorName": "thomas collins", - "patient": "John Doe", - "actualFee": 7000.0, - "discount": 20, - "discounted": 5600.0, - "paymentID": "e1a72a33-31f2-46dc-ae7d-a14a486efc00", - "status": "Settled" - } - ``` - - 3. Now, shut down the back-end service and send two more requests. - -## Step 3 - View the Analytics Portal - -!!! info - - The Analytics Portal has been renamed to Micro Integrator Analytics as it contains MI related analytics. - - You need to get the [latest product updates](https://updates.docs.wso2.com/en/latest/updates/overview/) for your product to view these changes in the current version of WSO2 API-M. This change is available as a product update in Integrator Analytics 7.1.0 from June 18, 2021 onwards. - - !!! note - You can deploy updates in a production environment only if you have a valid subscription with WSO2. Read more about [WSO2 Updates](https://wso2.com/updates). - -Once you have signed in to the Analytics Portal server, click the **Micro Integrator Analytics** icon shown below to open the portal. - -[![Opening the Analytics dashboard for the integration component]({{base_path}}/assets/img/integrate/mi-analytics/119132315/mi-dashboard.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/mi-dashboard.png) - -### Statistics overview - -View the statistics overview for all the integration artifacts that have published statistics: - -[![ESB total request count]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132316.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132316.png) - -### Transactions per second - -The number of transactions handled by the Micro Integrator per second is mapped on a graph as follows. - -[![ESB overall TPS]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132326.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132326.png) - -### Overall message count -The success rate and the failure rate of the messages received by the Micro Integrator during the last hour are mapped in a graph as follows. - -[![ESB overall message count]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132325.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132325.png) - -### Top APIs by request - -The `HealthcareAPI` REST API is displayed under **TOP APIS BY REQUEST COUNT** as follows. - -[![Top APIs by request count]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132324.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132324.png) - -### Endpoints by request - -The three endpoints used for the message mediation are displayed under **Top Endpoints by Request Count** as shown below. - -[![Top endpoints by request count]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132318.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132318.png) - -### Per API requests - -In the Top APIS BY Request COUNT gadget, click `HealthcareAPI` to open the **OVERVIEW/API/HealthcareAPI** page. The following is displayed. 
- The **API Request Count** gadget shows the total number of requests handled by the `HealthcareAPI` REST API during the last hour:
  [![Total request per API]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132323.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132323.png)
- The **API Message Count** gadget maps the number of successful messages as well as failed messages at different times within the last hour in a graph as shown below.
  [![API message count]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132322.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132322.png)
- The **API Message Latency** gadget shows the speed at which messages are processed by mapping the time taken per message at different times within the last hour, as shown below.
  [![API message latency]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132321.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132321.png)
- The **Messages** gadget lists all the messages handled by the `HealthcareAPI` REST API during the last hour, along with their property details.
  [![Message per API]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132320.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132320.png)
- The **Message Flow** gadget illustrates the order in which the messages handled by the `HealthcareAPI` REST API within the last hour passed through the mediation sequences, mediators, and endpoints included in the message flow, as shown below.
  [![Message flow per API]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132319.png)]({{base_path}}/assets/img/integrate/mi-analytics/119132315/119132319.png)

### Per endpoint requests

In the **Top Endpoints by Request Count** gadget, click one of the following endpoints to view similar statistics per endpoint.

- `ChannelingFeeEP`
- `SettlePaymentEP`
- `GrandOaksEP`

You can also navigate to any of the artifacts by using the top-left menu as shown below. For example, to view the statistics of a specific endpoint, click **Endpoint** and search for the required endpoint.
- -![Dashboard navigation menu]({{base_path}}/assets/img/integrate/mi-analytics/119132315/per-endpoint-requests.png "Dashboard navigation menu") - -### Message tracing - -When you go to the [Analytics Portal](#step-13-start-the-analytics-portal) the message details will be logged as follows: - -![Message tracing per API]({{base_path}}/assets/img/integrate/mi-analytics/119132315/message-tracing.png "Message tracing per API") From fea811982c38ece2db09bfb34678d963d77ad1a3 Mon Sep 17 00:00:00 2001 From: DinithiDiaz Date: Tue, 5 Mar 2024 09:19:43 +0530 Subject: [PATCH 04/23] Remove MI pages from Observe section --- .../configuring-log4j2-properties.md | 774 ------------------ .../enabling-logs-for-a-component.md | 66 -- .../monitoring-correlation-logs.md | 236 ------ .../monitoring-logs.md | 166 ---- .../monitoring-mi-audit-logs.md | 59 -- .../jmx-monitoring.md | 237 ------ .../snmp-monitoring.md | 256 ------ .../monitoring-with-opentelemetry-mi.md | 395 --------- .../cloud-native-observability-overview.md | 110 --- ...loud-native-observability-in-kubernetes.md | 212 ----- ...g-up-cloud-native-observability-on-a-vm.md | 417 ---------- ...g-cloud-native-observability-statistics.md | 210 ----- 12 files changed, 3138 deletions(-) delete mode 100644 en/docs/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-logs/enabling-logs-for-a-component.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-logs/monitoring-correlation-logs.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-logs/monitoring-logs.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-metrics/snmp-monitoring.md delete mode 100644 en/docs/observe/micro-integrator/classic-observability-traces/monitoring-with-opentelemetry-mi.md delete mode 100644 en/docs/observe/micro-integrator/cloud-native-observability-overview.md delete mode 100644 en/docs/observe/micro-integrator/setting-up-cloud-native-observability-in-kubernetes.md delete mode 100644 en/docs/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm.md delete mode 100755 en/docs/observe/micro-integrator/viewing-cloud-native-observability-statistics.md diff --git a/en/docs/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties.md b/en/docs/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties.md deleted file mode 100644 index cbf32db1e6..0000000000 --- a/en/docs/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties.md +++ /dev/null @@ -1,774 +0,0 @@ -# Configuring Logs - -## Introduction - -All WSO2 products are shipped with Log4j2 logging capabilities, which generate server-side logs. The `/conf/log4j2.properties` file governs how logging is performed by the server. - -??? note "Java logging and Log4j2 integration" - In addition to the logs from libraries that use Log4j2, all logs from libraries that use the Java logging framework are also visible in the same log files. That is, when Java logging is enabled in Carbon, only the Log4j2 appenders will write to the log files. If the Java Logging Handlers have logs, these logs will be delegated to the log events of the corresponding Log4j2 appenders. 
A Pub/Sub registry pattern implementation is used in this scenario to plug in the handlers and appenders. The following default Log4j2 appenders in the `log4j2.properties` file are used for this implementation:

    - `org.wso2.carbon.logging.appenders.CarbonConsoleAppender`
    - `org.wso2.carbon.logging.appenders.CarbonDailyRollingFileAppender`
There are three main components when configuring Log4j2: **Loggers**, **Appenders**, and **Layouts**.

### Log4j2 loggers

```xml
logger.<LOGGER_NAME>.name = <COMPONENT_NAME>
logger.<LOGGER_NAME>.level = INFO
logger.<LOGGER_NAME>.additivity = false
logger.<LOGGER_NAME>.appenderRef.<APPENDER_NAME>.ref = <APPENDER_NAME>
```

The logger attributes are described below.
| Attribute | Description |
|-----------|-------------|
| `name` | The name of the component (class) for which the logger is defined. That is, this logger is responsible for generating logs for the activities of the specified component. |
| `level` | The log level (threshold) of the logger. A log request for this logger is enabled only if its level is equal to or higher than the logger's level. If a logger is not assigned a level, it inherits one from its closest ancestor with an assigned level. See the descriptions of the available log levels below. |
| `additivity` | When set to `true`, the logger inherits all the appenders of its parent logger. |
| `appenderRef.APPENDER_NAME.ref` | Attaches appenders to the logger. |
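Putting these attributes together, a complete logger definition might look like the following sketch. The logger name and component class here are hypothetical; `CARBON_LOGFILE` is one of the default appenders described later on this page.

```properties
# Hypothetical logger that prints DEBUG logs for a custom mediator class
logger.CustomMediatorLogger.name = com.example.mediators.CustomMediator
logger.CustomMediatorLogger.level = DEBUG
logger.CustomMediatorLogger.additivity = false
logger.CustomMediatorLogger.appenderRef.CARBON_LOGFILE.ref = CARBON_LOGFILE
```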
The loggers are then listed in the `log4j2.properties` file using the logger names as shown below.

```xml
loggers = <LOGGER_1>, <LOGGER_2>, <LOGGER_3>
```

### Log4j2 Appenders

Log4j2 allows logging requests to print to multiple destinations. These output destinations are called 'Appenders'. All the defined appenders should be listed as shown below in the `log4j2.properties` file.

!!! Note
    If the output destination is in another environment (such as cloud storage), you need to [use custom log appenders](#using-custom-log-appenders).

```xml
appenders = CARBON_CONSOLE, CARBON_LOGFILE, AUDIT_LOGFILE, ATOMIKOS_LOGFILE, CARBON_TRACE_LOGFILE, osgi, SERVICE_LOGFILE, API_LOGFILE, ERROR_LOGFILE, CORRELATION
```

Once the appenders are defined, a logger can refer to an appender by using the `appenderRef.APPENDER_NAME.ref` element. You can also attach several appenders to one logger. For example, see how the [root logger](#root-logs) is linked to three appenders. Also, see how [other loggers](#configuring-log4j2-logs) in the `log4j2.properties` file are configured to use appenders.

## Configuring Log4j2 Logs

The list below shows some of the main loggers (excluding the [root logger](#root-logs)) that are configured by default in the Micro Integrator. Open the `log4j2.properties` file to see the complete list.

```xml
loggers = SERVICE_LOGGER, API_LOGGER, AUDIT_LOG, correlation, trace-messages, org.apache.synapse.transport.http.headers, org.apache.synapse.transport.http.wire, httpclient.wire.header, httpclient.wire.content,
```

The above logger configurations are explained below.

### Root Logs

Given below is the root logger that is configured by default for the Micro Integrator. All loggers that do not have specific appenders defined will refer to the appenders of the root logger.

This logger generates INFO-level logs and prints them to three destinations as per the appenders linked to the logger. The `appenderRef.<APPENDER_NAME>.ref` attribute is used for referring to the appenders.

```xml
rootLogger.level = INFO
rootLogger.appenderRef.CARBON_CONSOLE.ref = CARBON_CONSOLE
rootLogger.appenderRef.CARBON_LOGFILE.ref = CARBON_LOGFILE
rootLogger.appenderRef.ERROR_LOGFILE.ref = ERROR_LOGFILE
```

Listed below are the default log destinations (appenders) used by the root logger:

- `CARBON_CONSOLE`: This is the console appender that prints logs to the server's console. The logs are also written to the `wso2carbon.log` file and the `wso2error.log` file through the two appenders given below.

    === "CARBON_CONSOLE"
        ```xml
        # CARBON_CONSOLE is set to be a ConsoleAppender using a PatternLayout.
        appender.CARBON_CONSOLE.type = Console
        appender.CARBON_CONSOLE.name = CARBON_CONSOLE
        appender.CARBON_CONSOLE.layout.type = PatternLayout
        appender.CARBON_CONSOLE.layout.pattern = [%d] %5p {% raw %}{%c{1}}{% endraw %} - %m%ex%n
        appender.CARBON_CONSOLE.filter.threshold.type = ThresholdFilter
        appender.CARBON_CONSOLE.filter.threshold.level = DEBUG
        ```

- `CARBON_LOGFILE`: This is a RollingFile appender that generates the management logs of the server. Logs are printed to the `<MI_HOME>/repository/logs/wso2carbon.log` file.

    === "CARBON_LOGFILE"
        ```xml
        # CARBON_LOGFILE is set to be a RollingFile appender using a PatternLayout.
        appender.CARBON_LOGFILE.type = RollingFile
        appender.CARBON_LOGFILE.name = CARBON_LOGFILE
        appender.CARBON_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2carbon.log
        appender.CARBON_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2carbon-%d{MM-dd-yyyy}.log
        appender.CARBON_LOGFILE.layout.type = PatternLayout
        appender.CARBON_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
        appender.CARBON_LOGFILE.policies.type = Policies
        appender.CARBON_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
        appender.CARBON_LOGFILE.policies.time.interval = 1
        appender.CARBON_LOGFILE.policies.time.modulate = true
        appender.CARBON_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
        appender.CARBON_LOGFILE.policies.size.size=10MB
        appender.CARBON_LOGFILE.strategy.type = DefaultRolloverStrategy
        appender.CARBON_LOGFILE.strategy.max = 20
        appender.CARBON_LOGFILE.filter.threshold.type = ThresholdFilter
        appender.CARBON_LOGFILE.filter.threshold.level = DEBUG
        ```

- `ERROR_LOGFILE`: This is a RollingFile appender that prints the error logs to the `<MI_HOME>/repository/logs/wso2error.log` file.

    === "ERROR_LOGFILE"
        ```xml
        # Appender config to ERROR_LOGFILE
        appender.ERROR_LOGFILE.type = RollingFile
        appender.ERROR_LOGFILE.name = ERROR_LOGFILE
        appender.ERROR_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2error.log
        appender.ERROR_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2error-%d{MM-dd-yyyy}.log
        appender.ERROR_LOGFILE.layout.type = PatternLayout
        appender.ERROR_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
        appender.ERROR_LOGFILE.policies.type = Policies
        appender.ERROR_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
        appender.ERROR_LOGFILE.policies.time.interval = 1
        appender.ERROR_LOGFILE.policies.time.modulate = true
        appender.ERROR_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
        appender.ERROR_LOGFILE.policies.size.size=10MB
        appender.ERROR_LOGFILE.strategy.type = DefaultRolloverStrategy
        appender.ERROR_LOGFILE.strategy.max = 20
        appender.ERROR_LOGFILE.filter.threshold.type = ThresholdFilter
        appender.ERROR_LOGFILE.filter.threshold.level = WARN
        ```
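For example, if you want a quieter console and carbon log in a test environment, you could raise the root logger's level. This is a sketch; as noted later on this page, edits made directly to the `log4j2.properties` file require a server restart.

```properties
# Print only WARN and above through the root logger's appenders
rootLogger.level = WARN
```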
### Service Logs

This logger generates logs for services deployed in the Micro Integrator. It refers to the details in the `SERVICE_LOGFILE` appender and prints logs to the `<MI_HOME>/repository/logs/wso2-mi-service.log` file.

!!! Note
    If you want to have separate log files for individual services, you need to add a logger for each service and then specify an appender for each logger. Note that the service name has to be suffixed to `SERVICE_LOGGER` as follows:

    ```xml
    logger.SERVICE_LOGGER.name = SERVICE_LOGGER.TestProxy
    ```

    See the instructions on [monitoring per-service logs]({{base_path}}/integrate/develop/monitoring-service-level-logs).

=== "SERVICE_LOGGER"
    ```xml
    logger.SERVICE_LOGGER.name = SERVICE_LOGGER
    logger.SERVICE_LOGGER.level = INFO
    logger.SERVICE_LOGGER.appenderRef.SERVICE_LOGFILE.ref = SERVICE_LOGFILE
    logger.SERVICE_LOGGER.additivity = false
    ```

=== "APPENDER"
    ```xml
    # Appender config to SERVICE_LOGFILE
    appender.SERVICE_LOGFILE.type = RollingFile
    appender.SERVICE_LOGFILE.name = SERVICE_LOGFILE
    appender.SERVICE_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2-mi-service.log
    appender.SERVICE_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2-mi-service-%d{MM-dd-yyyy}.log
    appender.SERVICE_LOGFILE.layout.type = PatternLayout
    appender.SERVICE_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
    appender.SERVICE_LOGFILE.policies.type = Policies
    appender.SERVICE_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
    appender.SERVICE_LOGFILE.policies.size.size=10MB
    appender.SERVICE_LOGFILE.strategy.type = DefaultRolloverStrategy
    appender.SERVICE_LOGFILE.strategy.max = 20
    ```

### API Logs

This logger generates logs for APIs deployed in the Micro Integrator. It refers to the details in the `API_LOGFILE` appender and prints logs to the `<MI_HOME>/repository/logs/wso2-mi-api.log` file.

!!! Note
    If you want to have separate log files for individual APIs, you need to add a logger for each API and then specify an appender for each logger. Note that the API name has to be suffixed to `API_LOGGER` as follows:

    ```xml
    logger.API_LOG.name=API_LOGGER.TestAPI
    ```

    See the instructions on [monitoring per-API logs]({{base_path}}/install-and-setup/setup/mi-setup/observability/logs/enabling-logs-for-api). A complete example is sketched after this section.

=== "API_LOGGER"
    ```xml
    logger.API_LOGGER.name = API_LOGGER
    logger.API_LOGGER.level = INFO
    logger.API_LOGGER.appenderRef.API_LOGFILE.ref = API_LOGFILE
    logger.API_LOGGER.additivity = false
    ```

=== "APPENDER"
    ```xml
    # Appender config to API_LOGFILE
    appender.API_LOGFILE.type = RollingFile
    appender.API_LOGFILE.name = API_LOGFILE
    appender.API_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2-mi-api.log
    appender.API_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2-mi-api-%d{MM-dd-yyyy}.log
    appender.API_LOGFILE.layout.type = PatternLayout
    appender.API_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
    appender.API_LOGFILE.policies.type = Policies
    appender.API_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
    appender.API_LOGFILE.policies.size.size=10MB
    appender.API_LOGFILE.strategy.type = DefaultRolloverStrategy
    appender.API_LOGFILE.strategy.max = 20
    ```
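As an example, a complete per-API logger definition for a hypothetical API named `TestAPI` might look like the following sketch, reusing the default `API_LOGFILE` appender:

```properties
# Hypothetical dedicated logger for an API named TestAPI
logger.TestAPILogger.name = API_LOGGER.TestAPI
logger.TestAPILogger.level = INFO
logger.TestAPILogger.additivity = false
logger.TestAPILogger.appenderRef.API_LOGFILE.ref = API_LOGFILE
# Also append TestAPILogger to the existing 'loggers' list in the same file
```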
### Audit Logs

This is a `RollingFile` appender that writes logs to the `<MI_HOME>/repository/logs/audit.log` file. By default, the `AUDIT_LOG` logger is configured to write logs using this appender.

=== "AUDIT_LOGGER"
    ```xml
    logger.AUDIT_LOG.name = AUDIT_LOG
    logger.AUDIT_LOG.level = INFO
    logger.AUDIT_LOG.appenderRef.AUDIT_LOGFILE.ref = AUDIT_LOGFILE
    logger.AUDIT_LOG.additivity = false
    ```

=== "APPENDER"
    ```xml
    # Appender config to AUDIT_LOGFILE
    appender.AUDIT_LOGFILE.type = RollingFile
    appender.AUDIT_LOGFILE.name = AUDIT_LOGFILE
    appender.AUDIT_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/audit.log
    appender.AUDIT_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/audit-%d{MM-dd-yyyy}.log
    appender.AUDIT_LOGFILE.layout.type = PatternLayout
    appender.AUDIT_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
    appender.AUDIT_LOGFILE.policies.type = Policies
    appender.AUDIT_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
    appender.AUDIT_LOGFILE.policies.time.interval = 1
    appender.AUDIT_LOGFILE.policies.time.modulate = true
    appender.AUDIT_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
    appender.AUDIT_LOGFILE.policies.size.size=10MB
    appender.AUDIT_LOGFILE.strategy.type = DefaultRolloverStrategy
    appender.AUDIT_LOGFILE.strategy.max = 20
    appender.AUDIT_LOGFILE.filter.threshold.type = ThresholdFilter
    appender.AUDIT_LOGFILE.filter.threshold.level = INFO
    ```

### Correlations Logs

This logger generates correlation logs for monitoring individual HTTP requests from the point that a message is received by the Micro Integrator until the corresponding response message is sent back to the original message sender. It refers to the details in the `CORRELATION` appender and prints logs to the `<MI_HOME>/repository/logs/correlation.log` file.

!!! Note
    The maximum file size of the correlation log is set to 10MB in the following appender. That is, when the size of the file exceeds 10MB, a new log file is created. If required, you can change this file size.

=== "correlation"
    ```xml
    logger.correlation.name = correlation
    logger.correlation.level = INFO
    logger.correlation.appenderRef.CORRELATION.ref = CORRELATION
    logger.correlation.additivity = false
    ```

=== "APPENDER"
    ```xml
    # Appender config to put correlation Log.
    appender.CORRELATION.type = RollingFile
    appender.CORRELATION.name = CORRELATION
    appender.CORRELATION.fileName = ${sys:carbon.home}/repository/logs/correlation.log
    appender.CORRELATION.filePattern = ${sys:carbon.home}/repository/logs/correlation-%d{MM-dd-yyyy}.log
    appender.CORRELATION.layout.type = PatternLayout
    appender.CORRELATION.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS}|%X{Correlation-ID}|%t|%m%n
    appender.CORRELATION.policies.type = Policies
    appender.CORRELATION.policies.time.type = TimeBasedTriggeringPolicy
    appender.CORRELATION.policies.time.interval = 1
    appender.CORRELATION.policies.time.modulate = true
    appender.CORRELATION.policies.size.type = SizeBasedTriggeringPolicy
    appender.CORRELATION.policies.size.size=10MB
    appender.CORRELATION.strategy.type = DefaultRolloverStrategy
    appender.CORRELATION.strategy.max = 20
    appender.CORRELATION.filter.threshold.type = ThresholdFilter
    appender.CORRELATION.filter.threshold.level = INFO
    ```
Additional configurations:

If required, you can change the default HTTP header (which is `activity_id`) that carries the Correlation ID by adding the following property to the `deployment.toml` file (stored in the `<MI_HOME>/conf/` directory). Replace `<CUSTOM_HEADER_NAME>` with a value of your choice.

```toml
[passthru_properties]
correlation_header_name="<CUSTOM_HEADER_NAME>"
```

Once you have configured this logger, see the instructions on [monitoring correlation logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-correlation-logs).

### Message Tracing Logs

This is a `RollingFile` appender that writes logs to the `<MI_HOME>/repository/logs/wso2carbon-trace-messages.log` file. By default, the `trace.messages` logger is configured to write logs using this appender.

=== "trace-messages"
    ```xml
    logger.trace-messages.name = trace.messages
    logger.trace-messages.level = TRACE
    logger.trace-messages.appenderRef.CARBON_TRACE_LOGFILE.ref = CARBON_TRACE_LOGFILE
    ```

=== "APPENDER"
    ```xml
    # Appender config to CARBON_TRACE_LOGFILE
    appender.CARBON_TRACE_LOGFILE.type = RollingFile
    appender.CARBON_TRACE_LOGFILE.name = CARBON_TRACE_LOGFILE
    appender.CARBON_TRACE_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/wso2carbon-trace-messages.log
    appender.CARBON_TRACE_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2carbon-trace-messages-%d{MM-dd-yyyy}.log
    appender.CARBON_TRACE_LOGFILE.layout.type = PatternLayout
    appender.CARBON_TRACE_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
    appender.CARBON_TRACE_LOGFILE.policies.type = Policies
    appender.CARBON_TRACE_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
    appender.CARBON_TRACE_LOGFILE.policies.time.interval = 1
    appender.CARBON_TRACE_LOGFILE.policies.time.modulate = true
    appender.CARBON_TRACE_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
    appender.CARBON_TRACE_LOGFILE.policies.size.size=10MB
    appender.CARBON_TRACE_LOGFILE.strategy.type = DefaultRolloverStrategy
    appender.CARBON_TRACE_LOGFILE.strategy.max = 20
    ```

### Wire Logs and Header Logs

These logs are disabled by default by setting the log level to `OFF`. You can enable them by [changing the log level](#updating-the-log4j2-log-level) of the loggers to `DEBUG`.

!!! Info
    It is not recommended to use these logs in production environments. Developers can enable them for testing and troubleshooting purposes. Note that appenders are not specified for these loggers; therefore, the logs are printed as specified for the [root logger](#root-logs).

- The following loggers configure wire logs for the PassThrough HTTP transport:

    !!! Tip
        The PassThrough HTTP transport is the main transport that handles HTTP/HTTPS messages in the Micro Integrator.

    === "Synapse HTTP Headers"
        ```xml
        # The following loggers are used to log HTTP headers and messages.
        logger.synapse-transport-http-headers.name=org.apache.synapse.transport.http.headers
        logger.synapse-transport-http-headers.level=OFF
        ```

    === "Synapse Wire Logs"
        ```xml
        logger.synapse-transport-http-wire.name=org.apache.synapse.transport.http.wire
        logger.synapse-transport-http-wire.level=OFF
        ```

- The following loggers configure wire logs for the Callout mediator/Message Processor.

    === "Client Headers"
        ```xml
        logger.httpclient-wire-header.name=httpclient.wire.header
        logger.httpclient-wire-header.level=OFF
        ```

    === "Client Wire Content"
        ```xml
        logger.httpclient-wire-content.name=httpclient.wire.content
        logger.httpclient-wire-content.level=OFF
        ```

See the instructions on [using wire logs to debug]({{base_path}}/integrate/develop/using-wire-logs) your integration solution during development.
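For example, during development you might flip the two PassThrough transport loggers to `DEBUG`. This is a sketch of the resulting `log4j2.properties` entries; remember to set them back to `OFF` before going to production.

```properties
# Development-only: print PassThrough transport header and wire logs
logger.synapse-transport-http-headers.level = DEBUG
logger.synapse-transport-http-wire.level = DEBUG
```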
## Configuring HTTP Access Logs

Access logs related to service/API invocations are enabled by default in the Micro Integrator. Access logs for the PassThrough transport record the request and the response on **two** separate log lines.

By default, access logs are printed to the `http_access_.log` file (stored in the `<MI_HOME>/repository/logs` folder). If required, you can use the Log4j2 configurations to print the access logs to other destinations. Simply apply the following [logger](#log4j2-loggers) with an [appender](#log4j2-appenders).

- Logger Name: `PassThroughAccess`
- Logger Class: `org.apache.synapse.transport.http.access`

### Customizing the Access Log format

You can customize the format of this access log by changing the following property values in the `<MI_HOME>/conf/access-log.properties` configuration file. If this file does not exist in the product by default, you can create it.

1. Open the `access-log.properties` file and add the following properties.
    | Property | Description |
    |----------|-------------|
    | `access_log_directory` | Add this property ONLY if you want to change the default location of the log file. By default, the product stores access logs in the `<MI_HOME>/repository/logs` directory. |
    | `access_log_prefix` | The prefix added to the log file's name. The default value is `access_log_prefix=http_access_`. |
    | `access_log_suffix` | The suffix added to the log file's name. The default value is `access_log_suffix=.log`. |
    | `access_log_file_date_format` | The date format used in access logs. The default value is `access_log_file_date_format=yyyy-MM-dd`. |
    | `access_log_pattern` | Defines the format of the log pattern, that is, the information fields from the requests and responses that should be logged. See the details below. |

    The pattern format is created using the following attributes:

    - A standard value that represents a particular string. For example, `%h` represents the remote host name in the request. Note that not all the string-replacement values supported by Tomcat are supported for the PassThrough transport's access logs. The list of supported values is given below.
    - `%{xxx}i` represents a header in the incoming request (`xxx` = header name).
    - `%{xxx}o` represents a header in the outgoing response (`xxx` = header name).

    While you can use the above attributes to define a custom pattern, the standard patterns shown below can also be used. By default, a modified version of the Apache combined log format is enabled, as shown below. Note that the `X-Forwarded-For` header is appended to the beginning of the usual combined log format. This correctly identifies the original node that sent the request (in situations where requests go through a proxy such as a load balancer). The `X-Forwarded-For` header must be present in the incoming request for this to be logged.

    `access_log_pattern=%{X-Forwarded-For}i %h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"`

2. Restart the server.
3. Invoke a proxy service or REST API that is deployed in the Micro Integrator. The access log file for the service/API will be created in the `<MI_HOME>/repository/logs` directory. The default name of the log file is `http_access_.log`.

    !!! Tip
        Note that there will be a delay in printing the logs to the log file.
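For instance, a stripped-down pattern that records only the request method, URL path, status code, and time taken (all drawn from the supported values listed below) could be defined as follows. This is a sketch; any combination of the supported values works.

```properties
# Minimal custom access log pattern: method, URL path, status code, time taken
access_log_pattern=%m %U %s %T
```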
### Supported log pattern formats

| Attribute | Description |
|-----------|-------------|
| `%a` | Remote IP address |
| `%A` | Local IP address |
| `%b` | Bytes sent, excluding HTTP headers, or '-' if zero |
| `%B` | Bytes sent, excluding HTTP headers |
| `%c` | Cookie value |
| `%C` | Accept header |
| `%e` | Accept Encoding |
| `%E` | Transfer Encoding |
| `%h` | The remote host name (or IP address if `enableLookups` for the connector is false) |
| `%l` | Remote logical username from identd (always returns '-') |
| `%L` | Accept Language |
| `%k` | Keep Alive |
| `%m` | Request method (GET, POST, etc.) |
| `%n` | Content Encoding |
| `%r` | Request Element |
| `%s` | HTTP status code of the response |
| `%S` | Accept Charset |
| `%t` | Date and time, in Common Log Format |
| `%T` | The time taken to process the request, in seconds |
| `%u` | The remote user that was authenticated (if any), else '-' |
| `%U` | Requested URL path |
| `%v` | Local server name |
| `%V` | Vary Header |
| `%x` | Connection Header |
| `%Z` | Server Header |
## Updating the Log4j2 Log level

You can dynamically update the log level for a specific logger by using the Micro Integrator [dashboard](#viewing-logs-via-the-dashboard) or [CLI](#viewing-logs-via-the-cli). If you change the log configuration directly in the `log4j2.properties` file (without using the dashboard or CLI), the Micro Integrator needs to be restarted for the changes to become effective.

??? Info "Log Levels"
    The following table explains the Log4j2 log levels you can use. Refer to the Log4j2 documentation for more information.

    | Level | Description |
    |-------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
    | OFF   | The highest possible log level. This is intended for disabling logging. |
    | FATAL | Indicates server errors that cause premature termination. These logs are expected to be immediately visible on the command line that you used for starting the server. |
    | ERROR | Indicates other runtime errors or unexpected conditions. These logs are expected to be immediately visible on the command line that you used for starting the server. |
    | WARN  | Indicates the use of deprecated APIs, poor use of APIs, possible errors, and other runtime situations that are undesirable or unexpected but not necessarily wrong. These logs are expected to be immediately visible on the command line that you used for starting the server. |
    | INFO  | Indicates important runtime events, such as server startup/shutdown. These logs are expected to be immediately visible on the command line that you used for starting the server. It is recommended to keep these logs to a minimum. |
    | DEBUG | Provides detailed information on the flow through the system. This information is expected to be written to logs only. Generally, most lines logged by your application should be written as DEBUG logs. |
    | TRACE | Provides additional details on the behavior of events and services. This information is expected to be written to logs only. |

### Viewing logs via the dashboard

1. Sign in to the [Micro Integrator dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard).
2. Click **Log Configs** on the left-hand navigator to open the **Logging Management** window.

    *(Screenshot: change log level from dashboard)*

3. Use the **Search** option to find the required logger, and change the log level as shown above.

### Viewing logs via the CLI

1. Download and set up the [API Controller]({{base_path}}/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller).

2. Issue commands to view logs for the required Micro Integrator artifacts. For more information, see [Managing Integrations with apictl]({{base_path}}/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl).

## Updating the threshold Level

The threshold value filters log entries based on the [log level](#updating-the-log4j2-log-level). This value is set per log appender in the `log4j2.properties` file. For example, a threshold set to 'WARN' allows a log entry to pass into the appender only if its level is 'WARN', 'ERROR', or 'FATAL'; other entries are discarded. This is the minimum log level at which you can log a message.
Shown below is how the threshold level is set to DEBUG for the `CARBON_LOGFILE` appender:

```bash
appender.CARBON_LOGFILE.filter.threshold.level = DEBUG
```

## Updating the Log4j2 Log pattern

The log pattern defines the output format of the log file, that is, the layout pattern that describes the log message format.

**Identifying forged messages**:
The conversion character 'u' can be used in the pattern layout to log a UUID. For example, the log pattern can be `[%u] [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n`, where `[%u]` is the UUID.

## Hiding current parameters in the printed log

By default, when an error occurs while invoking a data service, the Micro Integrator logs a set of parameters in the error message.

For example:

```bash
DS Code: INCOMPATIBLE_PARAMETERS_ERROR
Source Data Service:-
Name: RDBMSSample
Location: /RDBMSSample.dbs
Description: N/A
Default Namespace: http://ws.wso2.org/dataservice
Current Request Name: _addEmployee
Current Params: {% raw %}{firstName=Will, lastName=Smith, salary=1200, email=will@abc.com}{% endraw %}
```

You can hide the 'Current Params' in the printed logs by passing the following system property:

```bash
-Ddss.disable.current.params=true
```

## Using Custom Log appenders

Custom log appenders for Log4j2 can be used to store application logs in various environments/systems such as cloud storage.

However, since WSO2 Micro Integrator works in an OSGi environment, such Log4j2 extensions cannot be used as they are. Therefore, you need to modify those extensions to be compatible with WSO2 Micro Integrator. Follow the steps given below to modify an existing Log4j2 extension. Note that the original build configuration on this page was damaged during extraction; the XML element names in the snippet below are a reconstruction around the surviving values.

1. In the custom log appender, open the `pom.xml` file of the module that contains the `Log4j2Appender` class.

2. Under the `build` section, add the `maven-compiler-plugin` and `maven-bundle-plugin` as follows.

    ```xml
    <build>
        ...
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <executions>
                    <execution>
                        <id>log4j-plugin-processor</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <phase>process-classes</phase>
                        <configuration>
                            <proc>only</proc>
                            <annotationProcessors>
                                <annotationProcessor>org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor</annotationProcessor>
                            </annotationProcessors>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
                        <Bundle-Name>${project.artifactId}</Bundle-Name>
                        <Fragment-Host>org.ops4j.pax.logging.pax-logging-log4j2</Fragment-Host>
                        <Export-Package>
                            *
                        </Export-Package>
                        <Include-Resource>
                            ${project.build.directory}/classes/
                        </Include-Resource>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>
    ```

3. Rebuild the related module and copy the built JAR file from the `target` directory to the `<MI_HOME>/dropins` directory.

4. Configure the custom appender in the `log4j2.properties` file as follows.

    ```properties
    appender.log4j2Custom.type = Log4j2Appender
    appender.log4j2Custom.name = log4j2Custom
    appender.log4j2Custom.layout.type = PatternLayout
    appender.log4j2Custom.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
    ```

5. The custom appender should be added to the list of registered appenders in the `log4j2.properties` file as shown below.

    ```properties
    appenders = log4j2Custom, ....
    ```

6. Restart the server.

## What's Next?

Once you have configured the logs, you can start [using the logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-logs).
diff --git a/en/docs/observe/micro-integrator/classic-observability-logs/enabling-logs-for-a-component.md b/en/docs/observe/micro-integrator/classic-observability-logs/enabling-logs-for-a-component.md
deleted file mode 100644
index d2ccd49588..0000000000
--- a/en/docs/observe/micro-integrator/classic-observability-logs/enabling-logs-for-a-component.md
+++ /dev/null
@@ -1,66 +0,0 @@
# Enabling Logs for a Component

Follow the instructions given below to enable logs for a specific component in the Micro Integrator.

## Enabling Logs

There are two ways to enable logs for a component: using the Micro Integrator [dashboard](#using-the-dashboard) or using the [CLI](#using-the-cli).

!!! Info
    Alternatively, you can directly update the [log configurations]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties) in the `log4j2.properties` file (stored in the `<MI_HOME>/conf` directory).

### Using the Dashboard

1. Sign in to the [Micro Integrator dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard).
2. Click **Log Configs** on the left-hand navigator to open the **Logging Management** window.
3. Go to the **Add Loggers** tab and define the new logger.

    *(Screenshot: add new loggers using dashboard)*

    | Field | Description |
    |-------|-------------|
    | Logger Name | A name for the logger. |
    | Class | The class implementation of the component for which the logger is defined. |
    | Log Level | The log level for the logger. |

### Using the CLI

1. Download and set up the [API Controller]({{base_path}}/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller).

2. Use the commands for [adding a new logger]({{base_path}}/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl/#add-a-new-logger) to the Micro Integrator.
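For example, adding a DEBUG logger for the Synapse REST API component with apictl might look like the following sketch. The logger name, class, and environment name `dev` are illustrative; check the linked apictl documentation for the exact syntax of your apictl version.

```bash
# Hypothetical example: add a DEBUG logger for the org.apache.synapse.rest.API class
apictl mi add log-level synapse-api-logger org.apache.synapse.rest.API DEBUG -e dev
```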
## Printing Logs

By default, when you enable logs for a component, the logs get printed to the server console and the Carbon log file. When there are error logs, these are also printed to the error log file. These log files are stored in the `<MI_HOME>/repository/logs/` directory.

Every HTTP message that flows between the Micro Integrator and external clients undergoes several state changes. A new log entry is created in the `correlation.log` file for each state change in the round trip of a single HTTP request. The Correlation ID assigned to the incoming HTTP request is attached to all the log entries corresponding to that request. Therefore, you can use this Correlation ID to easily locate the logs relevant to the round trip of a specific HTTP request and, thereby, analyze the behavior of the message flow.

## Enabling Correlation logs

### At the server start-up

You can enable Correlation logging by passing a system property.

- If you want Correlation logs to be enabled every time the server starts, add the following system property to the product start-up script (stored in the `<MI_HOME>/bin/` directory) and set it to `true`.

    ```bash
    -DenableCorrelationLogs=true \
    ```

- Alternatively, you can pass the system property at the time of starting the server by executing the following command:

    - On **Linux/MacOS/CentOS**: `sh micro-integrator.sh -DenableCorrelationLogs=true`
    - On **Windows**: `micro-integrator.bat -DenableCorrelationLogs=true`

Now when you start the Micro Integrator, the `correlation.log` file is created in the `<MI_HOME>/repository/logs/` directory.

### During the runtime

- You can enable correlation logging by invoking the configs resource of the Management API. For more information, see [enable/disable correlation logs using the Management API]({{base_path}}/observe/mi-observe/working-with-management-api/#enabledisable-correlation-logging-during-runtime).

- Alternatively, you can enable correlation logging using the MI dashboard.

- Note that you cannot disable the correlation logs during runtime if the correlation logs were enabled using the system property.

## Sending an HTTP request with a Correlation ID

When the client sends an HTTP request to the Micro Integrator, a Correlation ID for the request can be passed using the Correlation header that is configured in the Micro Integrator. By default, the Correlation header is `activity_id`. If you want to change the default Correlation header, see the topic on [configuring Correlation logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#correlations-logs). If the client does not pass a Correlation ID in the request, the Micro Integrator will generate an internal value and assign it to the request. The Correlation ID assigned to the incoming request is attached to all the log entries that are related to the same request.

Shown below is a POST request sent using the cURL client. Note that the Correlation ID is set in this request.
```bash
curl -X POST --data @request.json http://localhost:8280/healthcare/categories/surgery/reserve -H "Content-Type:application/json" -H "activity_id:correlationID"
```

## Accessing the Correlation logs

If you know the Correlation ID of the HTTP request that you want to analyze, you can isolate the relevant logs as explained below.

1. Open a terminal and navigate to the `<MI_HOME>/repository/logs/` directory where the `correlation.log` file is saved.

2. Execute the following command with the required Correlation ID. Replace `<CORRELATION_ID>` with the required value.

    ```bash
    cat correlation.log | grep "<CORRELATION_ID>"
    ```

Shown below is an example of Correlation log entries corresponding to the round trip of a single HTTP request.

```bash
2021-11-30 15:27:27,262|correlationID|HTTP-Listener I/O dispatcher-5|0|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|REQUEST_HEAD
2021-11-30 15:27:27,262|correlationID|HTTP-Listener I/O dispatcher-5|0|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|REQUEST_BODY
2021-11-30 15:27:27,263|correlationID|HTTP-Listener I/O dispatcher-5|1|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|REQUEST_DONE
2021-11-30 15:27:27,265|correlationID|HTTP-Sender I/O dispatcher-4|42173|HTTP State Transition|http-outgoing-4|POST|http://localhost:9090/grandoaks/categories/surgery/reserve|REQUEST_HEAD
2021-11-30 15:27:27,265|correlationID|HTTP-Sender I/O dispatcher-4|0|HTTP State Transition|http-outgoing-4|POST|http://localhost:9090/grandoaks/categories/surgery/reserve|REQUEST_DONE
2021-11-30 15:27:27,267|correlationID|HTTP-Sender I/O dispatcher-4|2|HTTP|http://localhost:9090/grandoaks/categories/surgery/reserve|BACKEND LATENCY
2021-11-30 15:27:27,267|correlationID|HTTP-Sender I/O dispatcher-4|2|HTTP State Transition|http-outgoing-4|POST|http://localhost:9090/grandoaks/categories/surgery/reserve|RESPONSE_HEAD
2021-11-30 15:27:27,267|correlationID|HTTP-Sender I/O dispatcher-4|0|HTTP State Transition|http-outgoing-4|POST|http://localhost:9090/grandoaks/categories/surgery/reserve|RESPONSE_BODY
2021-11-30 15:27:27,267|correlationID|HTTP-Sender I/O dispatcher-4|0|HTTP State Transition|http-outgoing-4|POST|http://localhost:9090/grandoaks/categories/surgery/reserve|RESPONSE_DONE
2021-11-30 15:27:27,269|correlationID|HTTP-Listener I/O dispatcher-5|6|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|RESPONSE_HEAD
2021-11-30 15:27:27,269|correlationID|HTTP-Listener I/O dispatcher-5|0|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|RESPONSE_BODY
2021-11-30 15:27:27,269|correlationID|HTTP-Listener I/O dispatcher-5|0|HTTP State Transition|http-incoming-17|POST|/healthcare/categories/surgery/reserve|RESPONSE_DONE
2021-11-30 15:27:27,269|correlationID|HTTP-Listener I/O dispatcher-5|7|HTTP|http-incoming-17|POST|/healthcare/categories/surgery/reserve|ROUND-TRIP LATENCY
```

## Reading Correlation logs

The pattern/format of a Correlation log is shown below along with an example log entry.

=== "Log Pattern"
    ```bash
    Time Stamp|Correlation ID|Thread name|Duration|Call type|Connection name|Method type|Connection URL|HTTP state
    ```

=== "Example Log"
    ```bash
    2021-10-26 17:34:40,464|de461a83-fc74-4660-93ed-1b609ecfac23|HTTP-Listener I/O dispatcher-3|535|HTTP|http-incoming-3|GET|/api/querydoctor/surgery|ROUND-TRIP LATENCY
    ```

The detail recorded in a log entry is described below.
- **Time Stamp**: The time at which the log entry is created. Example: `2021-10-26 17:34:40,464`
- **Correlation ID**: Each log contains a Correlation ID, which is unique to the HTTP request. A client can send the Correlation ID in the header of the HTTP request. If this Correlation ID is missing in the incoming request, the Micro Integrator generates one for the request. The HTTP header that carries the Correlation ID is configured in the Micro Integrator. Example: `de461a83-fc74-4660-93ed-1b609ecfac23`
- **Thread name**: The identifier of the thread. Example: `HTTP-Listener I/O dispatcher-3`
- **Duration**: The duration (given in milliseconds) depends on the type of log entry. Example: `535`
    - If the state in the log entry is `ROUND-TRIP LATENCY`, the duration corresponds to the time gap between the `REQUEST_HEAD` state and the `ROUND-TRIP LATENCY` state, which is the total time of the round trip.
    - If the state in the log entry is `BACKEND LATENCY`, the duration corresponds to the total time taken by the backend to process the message.
    - For all other log entries, the duration corresponds to the time gap between the current log entry and the immediately previous log entry, which is the time taken for the HTTP request to move from one state to another.
- **Call type**: There are two possible call types:
    - `HTTP` identifies logs that correspond to either back-end latency or round-trip latency states. That is, for an individual request, one log is recorded to identify back-end latency and another log for round-trip latency. Since these logs relate to HTTP calls between the Micro Integrator and external clients, they are categorized under the `HTTP` call type.
    - `HTTP State Transition` identifies logs that correspond to the state transitions in the HTTP transport related to a particular message.
- **Connection name**: A name that is generated to identify the connection between the Micro Integrator and the external client (back-end service or message sender). Example: `http-incoming-3`
- **Method type**: The HTTP method used for the request. Example: `GET`
- **Connection URL**: The connection URL of the external client with which the message is being communicated. For example, if the message is being read from the client, the connection URL corresponds to the client sending the message; if the message is being written to the backend, the URL corresponds to the backend client. Example: `/api/querydoctor/surgery`
- **HTTP state**: The state changes that a message goes through when it flows through the Micro Integrator and between the Micro Integrator and external clients. Typically, a new log entry is generated for each of the states. However, there can be two separate log entries created for one particular state (except for `BACKEND LATENCY` and `ROUND-TRIP LATENCY`), depending on whether the message is being read or written. You can tell the two log entries apart from the connection URL explained above.
    - `REQUEST_HEAD`: All HTTP headers in the incoming request are being read/being written to the backend.
    - `REQUEST_BODY`: The body of the incoming request is being read/being written to the backend.
    - `REQUEST_DONE`: The request is completely read (content decoded)/completely written to the backend.
    - `BACKEND LATENCY`: The response message is received by the Micro Integrator. This state corresponds to the total time taken by the backend to process the message.
    - `RESPONSE_HEAD`: All HTTP headers in the response message are being read/being written to the client.
    - `RESPONSE_BODY`: The body of the response message is being read/being written to the client.
    - `RESPONSE_DONE`: The response is completely read/completely written to the client.
    - `ROUND-TRIP LATENCY`: The response message is completely written to the client. This state corresponds to the total time taken by the HTTP request to complete the round trip (from the point of receiving the HTTP request from a client until the response message is sent back to the client).
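For example, to pull only the back-end latency entry for the request shown earlier, you can chain two `grep` calls. This is a sketch using the sample Correlation ID from above:

```bash
# Show how long the backend took for a given request
grep "de461a83-fc74-4660-93ed-1b609ecfac23" correlation.log | grep "BACKEND LATENCY"
```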
## Configuring Correlation Logs (Optional)

For information, see [Configuring Correlation Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#correlations-logs).
diff --git a/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-logs.md b/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-logs.md
deleted file mode 100644
index 2ab457393a..0000000000
--- a/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-logs.md
+++ /dev/null
@@ -1,166 +0,0 @@
# Monitoring Logs

Logging is one of the most important aspects of a production-grade server. A properly configured logging system is vital for identifying errors, security threats, and usage patterns.

By default, the Micro Integrator is configured to generate the basic log files that are required for monitoring the server. These log files are stored in the `<MI_HOME>/repository/logs` directory by default.

## Before you begin

The following topics explain how you can use the default logs that are configured in the Micro Integrator. If you have additional logs configured, you will be able to access those logs as well.

See [Configuring Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties) for details on how logs are configured in the Micro Integrator.

## Downloading Log Files

You can easily download the log files from the [Micro Integrator Dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard).

!!! Info
    Alternatively, you can open the log files from the `<MI_HOME>/repository/logs` directory.

1. Sign in to the dashboard.
2. Click **Log Files** as shown below to view the complete list.

    *(Screenshot: download log files)*

3. Use the **Search** option to find a specific log file.
4. Click the log file to download it.

The default log files available on the dashboard are explained below.

## Monitoring Carbon logs

The Carbon log file (`wso2carbon.log`) covers all the management features of a product. These logs are printed to the console as defined in the Log4j2 configurations.

Shown below is a sample log that is printed when you start the Micro Integrator with some integration artifacts deployed.

```bash
TID: [2020-09-24 23:00:04,634] INFO {org.wso2.config.mapper.ConfigParser} - Initializing configurations with deployment configurations
[2020-09-24 23:00:09,292] INFO {org.ops4j.pax.logging.spi.support.EventAdminConfigurationNotifier} - Logging configuration changed. (Event Admin service unavailable - no notification sent).
[2020-09-24 23:00:12,071] INFO {org.apache.synapse.rest.API} - {api:HelloWorld} Initializing API: HelloWorld
[2020-09-24 23:00:12,075] INFO {org.apache.synapse.deployers.APIDeployer} - API named 'HelloWorld' has been deployed from file : /Applications/IntegrationStudio.app/Contents/Eclipse/runtime/microesb/tmp/carbonapps/-1234/1600968612042TestCompositeApplication_1.0.0.car/HelloWorld_1.0.0/HelloWorld-1.0.0.xml
[2020-09-24 23:00:12,076] INFO {org.wso2.micro.integrator.initializer.deployment.application.deployer.CappDeployer} - Successfully Deployed Carbon Application : helloworldCompositeExporter_1.0.0{super-tenant}
[2020-09-24 23:00:12,110] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through HTTP Listener started on 0.0.0.0:8290
[2020-09-24 23:00:12,113] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through HTTPS Listener started on 0.0.0.0:8253
[2020-09-24 23:00:12,114] INFO {org.wso2.micro.integrator.initializer.StartupFinalizer} - WSO2 Micro Integrator started in 7.49 seconds
[2020-09-24 23:00:12,229] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTP_INBOUND_ENDPOINT Listener started on 0.0.0.0:9201
[2020-09-24 23:00:12,240] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTPS_INBOUND_ENDPOINT Listener started on 0.0.0.0:9164
[2020-09-24 23:00:14,616] INFO {org.wso2.micro.integrator.management.apis.security.handler.AuthenticationHandlerAdapter} - User admin logged in successfully
```

## Monitoring API Logs

The API log file covers the logs related to APIs deployed in the Micro Integrator. By default, all APIs in the server print logs to this common log file (`wso2-mi-api.log`). Shown below are some sample logs printed when the `HealthcareAPI` and the `UserInfoRestAPI` are being used.

If you have [individual log files]({{base_path}}/integrate/develop/monitoring-api-level-logs/) configured for APIs, you can download the log file that is specific to the API.

```bash
[2020-11-10 08:44:15,258] INFO {API_LOGGER.UserInfoRestAPI} - Initializing API: UserInfoRestAPI
[2020-11-10 08:45:59,419] INFO {API_LOGGER.UserInfoRestAPI} - MESSAGE = Request received to /users resource.
[2020-11-10 08:50:45,351] INFO {API_LOGGER.UserInfoRestAPI} - Destroying API: UserInfoRestAPI
[2020-11-10 08:50:45,373] INFO {API_LOGGER.HealthcareAPI} - Initializing API: HealthcareAPI
[2020-11-10 08:52:35,607] INFO {API_LOGGER.HealthcareAPI} - Log Property message = "Welcome to HealthcareService"
[2020-11-10 08:57:45,457] INFO {API_LOGGER.HealthcareAPI} - Destroying API: HealthcareAPI
[2020-11-10 08:57:45,477] INFO {API_LOGGER.StockQuoteAPI} - Initializing API: StockQuoteAPI
[2020-11-10 08:57:49,400] INFO {API_LOGGER.StockQuoteAPI} - Destroying API: StockQuoteAPI
```

## Monitoring Service Logs

The service log file covers the logs related to proxy services deployed in the Micro Integrator. By default, all services in the server print logs to this common log file (`wso2-mi-service.log`). Shown below are some sample logs printed when HL7 proxy services are being deployed and used.

If you have [individual log files]({{base_path}}/integrate/develop/monitoring-service-level-logs/) configured for services, you can download the log file that is specific to the service.
- -```bash -[2020-10-14 10:16:15,399] INFO {SERVICE_LOGGER.hl7testproxy} - Building Axis service for Proxy service : hl7testproxy -[2020-10-14 10:16:15,401] INFO {SERVICE_LOGGER.hl7testproxy} - Adding service hl7testproxy to the Axis2 configuration -[2020-10-14 10:16:15,401] INFO {SERVICE_LOGGER.hl7testproxy} - Successfully created the Axis2 service for Proxy service : hl7testproxy -[2020-10-14 10:26:16,335] INFO {SERVICE_LOGGER.hl7testproxy} - Stopped the proxy service : hl7testproxy -[2020-10-14 10:37:21,790] INFO {SERVICE_LOGGER.HL7Proxy1} - Building Axis service for Proxy service : HL7Proxy1 -[2020-10-14 10:37:21,791] INFO {SERVICE_LOGGER.HL7Proxy1} - Adding service HL7Proxy1 to the Axis2 configuration -[2020-10-14 10:37:21,791] INFO {SERVICE_LOGGER.HL7Proxy1} - Successfully created the Axis2 service for Proxy service : HL7Proxy1 -``` - -## Monitoring Error Logs - -The Error log file (`wso2error.log`) contains the error logs that are generated when the server is running. Note that these logs are also printed to the console of the Micro Integrator. - -Shown below is an example server error that is printed in the error log file. - -```bash -[2020-10-14 10:26:16,361] ERROR {org.apache.synapse.deployers.ProxyServiceDeployer} - ProxyService named : HL7Proxy already exists -[2020-10-14 10:26:16,363] ERROR {org.apache.synapse.deployers.ProxyServiceDeployer} - ProxyService Deployment from the file : /Applications/IntegrationStudio.app/Contents/Eclipse/runtime/microesb/tmp/carbonapps/-1234/1602651376337TestCompositeApplication_1.0.0.car/HL7Proxy2_1.0.0/HL7Proxy2-1.0.0.xml : Failed. org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService named : HL7Proxy already exists - at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:482) - at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:66) - at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:204) - at org.wso2.micro.integrator.initializer.deployment.synapse.deployer.SynapseAppDeployer.deployArtifactType(SynapseAppDeployer.java:1106) - at org.wso2.micro.integrator.initializer.deployment.synapse.deployer.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:134) - at org.wso2.micro.integrator.initializer.deployment.application.deployer.CappDeployer.deployCarbonApps(CappDeployer.java:141) - at org.wso2.micro.integrator.initializer.deployment.application.deployer.CappDeployer.deploy(CappDeployer.java:99) - at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136) - at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807) - at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:153) - at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377) - at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254) - at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371) - at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59) - at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67) - at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) - at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) - at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
-    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
-    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
-    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
-    at java.lang.Thread.run(Thread.java:748)
-```
-
-## Monitoring Audit Logs
-
-Audit logs are used for tracking the sequence of actions that affect a particular task carried out on the server.
-
-For more information, see [Monitoring MI Audit Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs).
-
-## Monitoring Service/Event Tracing Logs
-
-These are logs that are enabled in the Micro Integrator for tracing services and events using a separate log file (`wso2carbon-trace-messages.log`).
-
-## Monitoring HTTP Access Logs
-
-HTTP access logs (requests and responses) help you monitor information such as the clients that access the product, how many hits are received, and what the errors are. This information is useful for troubleshooting errors.
-
-In the Micro Integrator, access logs are generated for the PassThrough transport, which listens on ports 8290/8253 and is used for API/service invocations. By default, all access logs from the PassThrough transport are written to a common access log file - `http_access_.log`.
-
-!!! Note
-    See [Configuring Access Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/#configuring-http-access-logs) for instructions on configuring access logs.
-
-```bash
-[10/Nov/2020:08:52:35.604 +0530] "GET /healthcare/querydoctor/surgery HTTP/1.1" - - "-" "curl/7.64.1"
-[10/Nov/2020:08:52:35.610 +0530] "GET /healthcare/surgery HTTP/1.1" - - "-" "Synapse-PT-HttpComponents-NIO"
-[10/Nov/2020:08:52:35.610 +0530] "- - " 200 - "-" "-"
-[10/Nov/2020:08:52:35.604 +0530] "- - " 200 - "-" "-"
-```
-
-## Monitoring Patch Logs
-
-The patch log file contains details related to patches applied to the product. Patch logs cannot be customized.
-
-```bash
-[2020-09-24 23:00:05,319]  FINE {org.wso2.micro.integrator.server.util.PatchUtils processPatches} - Checking for patch changes ...
-[2020-09-24 23:00:05,322]  FINE {org.wso2.micro.integrator.server.util.PatchUtils processPatches} - No new patch or service pack detected, server will start without applying patches
-[2020-09-24 23:00:05,323]  FINE {org.wso2.micro.integrator.server.util.PatchUtils checkMD5Checksum} - Patch verification started
-[2020-09-24 23:00:05,323]  FINE {org.wso2.micro.integrator.server.util.PatchUtils checkMD5Checksum} - Patch verification successfully completed
-[2020-10-14 10:16:07,812]  FINE {org.wso2.micro.integrator.server.util.PatchUtils processPatches} - Checking for patch changes ...
-[2020-10-14 10:16:07,815]  FINE {org.wso2.micro.integrator.server.util.PatchUtils processPatches} - No new patch or service pack detected, server will start
-```
-
-## Monitoring Correlation Logs
-
-Correlation logs are used for monitoring the round trip of a message that is sent to the Micro Integrator.
-
-For more information, see [Monitoring Correlation Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-correlation-logs).
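-
-As a quick sketch, correlation logging is typically switched on with a system property at server startup (shown below as an assumption; the page linked above has the authoritative steps). Each request is then assigned a correlation ID that is printed with every log entry belonging to that round trip.
-
-```bash
-# Assumed startup flag; verify against the correlation-log documentation linked above.
-sh micro-integrator.sh -DenableCorrelationLogs=true
-```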
-
-## Monitoring Console Logs
-
-When you run the Micro Integrator, the console will print logs from the [Carbon log file](#monitoring-carbon-logs) as well as the [Error log file](#monitoring-error-logs).
-
-If you have enabled wire logs, these will also be printed on the console. See the instructions on how to [enable and use Wire Logs]({{base_path}}/integrate/develop/using-wire-logs/).
\ No newline at end of file
diff --git a/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs.md b/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs.md
deleted file mode 100644
index deac28ecf4..0000000000
--- a/en/docs/observe/micro-integrator/classic-observability-logs/monitoring-mi-audit-logs.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: Monitoring Audit Logs - WSO2 API Manager 4.2.0
----
-
-# Monitoring Audit Logs in Micro Integrator
-
-Auditing is a primary requirement when it comes to monitoring production servers. For example, DevOps needs to have a clear mechanism to identify who did what and to filter possible system violations or breaches. Audit logs (audit trails) contain a set of log entries that describe a sequence of actions that occurred over a period of time. Audit logs allow you to trace all the actions of a single user, or all the actions and changes introduced to a particular module in the system, over a period of time. For example, they capture all the actions of a single user from the first point of logging in to the server.
-
-By default, the audit logs that get created when running WSO2 Micro Integrator are stored in the `audit.log` file, which is located in the `/repository/logs` directory.
-
-## Configuring Audit Logs
-
-Audit logs are enabled by default in WSO2 Micro Integrator (WSO2 MI) via the following configurations, which are in the `/conf/log4j2.properties` file.
-
-```
-appender.AUDIT_LOGFILE.type = RollingFile
-appender.AUDIT_LOGFILE.name = AUDIT_LOGFILE
-appender.AUDIT_LOGFILE.fileName = ${sys:carbon.home}/repository/logs/audit.log
-appender.AUDIT_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/audit-%d{MM-dd-yyyy}.log
-appender.AUDIT_LOGFILE.layout.type = PatternLayout
-appender.AUDIT_LOGFILE.layout.pattern = [%d] %5p {% raw %}{%c}{% endraw %} - %m%ex%n
-appender.AUDIT_LOGFILE.policies.type = Policies
-appender.AUDIT_LOGFILE.policies.time.type = TimeBasedTriggeringPolicy
-appender.AUDIT_LOGFILE.policies.time.interval = 1
-appender.AUDIT_LOGFILE.policies.time.modulate = true
-appender.AUDIT_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
-appender.AUDIT_LOGFILE.policies.size.size=10MB
-appender.AUDIT_LOGFILE.strategy.type = DefaultRolloverStrategy
-appender.AUDIT_LOGFILE.strategy.max = 20
-appender.AUDIT_LOGFILE.filter.threshold.type = ThresholdFilter
-appender.AUDIT_LOGFILE.filter.threshold.level = INFO
-```
-
-The log growth of audit logs can be managed by using the configurations that are discussed in the [Managing Log Growth]({{base_path}}/administer/logging-and-monitoring/logging/managing-log-growth) guide.
-
-## Audit Log actions
-
-In WSO2 MI, audit logs are available for the operations performed through the Management API, so you can monitor the changes made to the MI instance.
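-
-For example, adding a user through the Management API is one operation that produces an audit entry. The following is a minimal sketch; the login endpoint, the `AccessToken` response field, and the user payload fields are assumptions based on the samples in the table below, so check the Management API documentation for the exact contract.
-
-```bash
-# Obtain an access token from the Management API (the default HTTPS port 9164 is assumed).
-TOKEN=$(curl -k -s -u admin:admin https://localhost:9164/management/login | jq -r .AccessToken)
-
-# Add a user; this should be recorded in audit.log as a "created"/"user" action.
-curl -k -X POST https://localhost:9164/management/users \
-  -H "Authorization: Bearer $TOKEN" \
-  -H "Content-Type: application/json" \
-  -d '{"userId": "user4", "password": "pass123", "isAdmin": "true"}'
-```
-
-The following table lists the audited actions and their sample log formats.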
- - -| Action | Sample Format | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sign in | `[2017-06-07 22:26:22,506] INFO - admin logged in at [2017-06-07 22:26:22,501+0530]`| -| Sign out | `[2017-06-07 22:26:22,506] INFO - admin logged out at [2017-06-07 22:26:22,501+0530]`| -| Add user | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: “created”, “type”: “user”, “info” : “{\“userId\” : \“user4\”,\“isAdmin\”: \“true\”}”}` | -| Remove user | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: “deleted”, “type”: “user”, “info” : “{\“userId\”: \“user4\”}”}`| -| Activate/Deactivate proxy service | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: “enabled”, “type”: “proxy_service”, “info” : “{\“proxyName\”: \“proxy1\”}”}` | -| Enable/Disable message tracing for proxy service | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: “enabled”, “type”: “proxy_service_trace”, “info” : “{\“proxyName\”: \“proxy1\”}”}` | -| Add Carbon application | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: “created”, “type”: "carbon_application", “info” : “{\“cAppfileName\”: \“abc.car\”}”}` | -| Remove Carbon application | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "deleted", “type”: "carbon_application", “info” : “{\“cAppfileName\”: \“abc.car\”}”}` | -| Activate/Deactivate endpoint | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "endpoint", “info” : “{\“endpointName\”: \“httpEP\”}”}` | -| Enable/Disable message tracing for endpoint | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "endpoint_trace", “info” : “{\“endpointName\”: \“httpEP\”}”}` | -| Enable/Disable message tracing for API | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "api_trace", “info” : “{\“apiName\”: \“helloAPI\”}”}` | -| Enable/Disable message tracing for sequence | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "sequence_trace", “info” : “{\“sequenceName\”: \“helloSequence\”}”}` | -| Activate/Deactivate message processor | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "message_processor", “info” : “{\“messageProcessorName\”: \“processor1\”}”}` | -| Enable/Disable message tracing for inbound endpoint | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "inbound_endpoint_trace", “info” : “{\“inboundEndpointName\”: \“httpIEP\”}”}` | -| Enable/Disable message tracing for sequence template | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "enabled", “type”: "sequence_template_trace", “info” : “{\“sequenceName\”: \“sequenceTemplate\”,\“sequenceType\”: “sequence\”}”}` | -| Update log level | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: “admin”, “action”: "updated", “type”: "log_level", “info” : “{\“loggerName\”: \“org-apache-hive\”,\“loggingLevel\”: “WARN}”}` | -| Add new logger | `[2021-09-07 12:42:59,249] INFO {AUDIT_LOG} - { “performedBy”: 
“admin”, “action”: "created", “type”: "logger", “info” : {“loggerName”: “synapse-api”,\“loggingLevel\”: \“DEBUG\”,\“loggerClass\”: \“org.apache.rest.API\”}}` |
diff --git a/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md b/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md
deleted file mode 100644
index 7af4c1f3c3..0000000000
--- a/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md
+++ /dev/null
@@ -1,237 +0,0 @@
----
-title: JMX Monitoring - WSO2 API Manager 4.2.0
----
-
-# JMX Monitoring
-
-Java Management Extensions (JMX) is a technology that lets you implement management interfaces for Java applications. A management interface, as defined by JMX, is composed of named objects called MBeans (Management Beans). MBeans are registered with a name (an ObjectName) in an MBeanServer. To manage a resource or many resources in your application, you can write an MBean defining its management interface and register that MBean in your MBeanServer. The content of the MBeanServer can then be exposed through various protocols, implemented by protocol connectors or protocol adaptors.
-
-!!! note
-    Prometheus-based monitoring is recommended for remote monitoring in more recent versions of the Micro Integrator.
-
-## Configuring JMX in Micro Integrator
-
-With [**JConsole**](#monitoring-with-jconsole), you can attach the Micro Integrator as a local process and monitor the MBeans that are provided. There is nothing that you need to explicitly enable.
-
-## Monitoring with JConsole
-
-JConsole is a JMX-compliant monitoring tool that comes with the Java Development Kit (JDK). You can find this tool inside your `/bin` directory.
-
-### Starting JConsole
-
-Once the **product server is started**, you can start the `JConsole` tool as follows:
-
-1. Open a command prompt and navigate to the `/bin` directory.
-2. Execute the `jconsole` command to open the log-in screen of the **Java Monitoring & Management Console** as shown below.
-
-    [![jconsole_process]({{base_path}}/assets/img/integrate/jmx/jconsole-new-connection.png){: style="width:50%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-new-connection.png)
-
-3. Click on the `org.wso2.micro.integrator.bootstrap.Bootstrap` process (which is the Micro Integrator) under **Local Process**.
-4. Click **Connect** to open the **Java Monitoring & Management Console**.
-
-    See the **Oracle** documentation on [using JConsole](http://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html).
-
-    The following tabs will be available:
-
-    - **Overview**
-
-        [![jconsole overview]({{base_path}}/assets/img/integrate/jmx/jconsole-overview.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-overview.png)
-
-    - **Memory**
-
-        [![jconsole memory]({{base_path}}/assets/img/integrate/jmx/jconsole-memory.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-memory.png)
-
-    - **Threads**
-
-        [![jconsole threads]({{base_path}}/assets/img/integrate/jmx/jconsole-threads.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-threads.png)
-
-    - **Classes**
-
-        [![jconsole classes]({{base_path}}/assets/img/integrate/jmx/jconsole-classes.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-classes.png)
-
-    - **VM**
-
-        [![jconsole VM]({{base_path}}/assets/img/integrate/jmx/jconsole-vm-summary.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-vm-summary.png)
-
-    - **MBeans**
-
-        [![jconsole MBeans]({{base_path}}/assets/img/integrate/jmx/jconsole-mbeans.png){: style="width:90%"}]({{base_path}}/assets/img/integrate/jmx/jconsole-mbeans.png)
-
-See the list of [Micro Integrator MBeans](#mbeans-for-the-micro-integrator) that you can monitor.
-
-## Monitoring a WSO2 product with Jolokia
-
-[Jolokia](https://jolokia.org) is a JMX-HTTP bridge, which is an alternative to JSR-160 connectors. It is an agent-based approach that supports many platforms. In addition to basic JMX operations, it enhances JMX monitoring with unique features like bulk requests and fine-grained security policies.
-
-Follow the steps below to use Jolokia to monitor a WSO2 product using a JVM agent. The agent can be dynamically attached to (and detached from) an already running Java process. This universal agent uses the JVM agent API and is available for every Sun/Oracle JVM 1.6 and later.
-
-1. Download the [JVM-Agent](https://jolokia.org/download.html). (These instructions are tested with the Jolokia JVM-Agent version 1.7.1 by downloading the `jolokia-jvm-1.7.1.jar` file.)
-2. Add it to the `/dropins/` directory.
-3. Start the WSO2 product server.
-4. Get the process ID (PID) of the running WSO2 server.
-5. Start the JVM agent against that process. For example: `java -jar jolokia-jvm-1.7.1.jar --host=localhost --port=9764 start <PID>`
-6. You can also run the agent JAR with `--help` to get short usage information.
-
-    Once the server starts, you can read MBeans using Jolokia APIs. The following are a few examples.
-
-    - List all available MBeans: `http://localhost:9763/jolokia/list` (Change the appropriate hostname and port accordingly.)
-    - WSO2 ESB MBean:
-      ```
-      http://localhost:9763/jolokia/read/org.apache.synapse:Name=https-sender,Type=PassThroughConnections/ActiveConnections
-      ```
-    - Reading Heap Memory: `http://localhost:9763/jolokia/read/java.lang:type=Memory/HeapMemoryUsage`
-
-Follow the steps below to use Jolokia to monitor a WSO2 product using the OSGi agent.
-
-1. Download the [Osgi-Agent (full bundle)](https://jolokia.org/download.html). (These instructions are tested with the Jolokia OSGI Agent version 1.7.1 by downloading the `jolokia-osgi-bundle-1.7.1.jar` file.)
-2. Add it to the `/dropins/` directory.
-3. Start the WSO2 product server.
-4. You can define the port with system variables. E.g., `./micro-integrator.sh -Dorg.osgi.service.http.port=9763`
-
-    Once the server starts, you can read MBeans using Jolokia APIs. The following are a few examples.
-
-    - List all available MBeans: `http://localhost:9763/jolokia/list` (Change the appropriate hostname and port accordingly.)
- - WSO2 ESB MBean: - ``` - http://localhost:9763/jolokia/read/org.apache.synapse:Name=https-sender,Type=PassThroughConnections/ActiveConnections - ``` - - - Reading Heap Memory: `http://localhost:9763/jolokia/read/java.lang:type=Memory/HeapMemoryUsage` - -## MBeans for the Micro Integrator - -When JMX is enabled, the Micro Integrator exposes a number of management resources as -JMX Management Beans (MBeans) that can be used for managing and -monitoring the running server.  When you start JConsole, you can monitor -these MBeans from the **MBeans** tab. Most of the MBeans are exposed from the underlying Synapse mediation engine. - -[![micro integrator mbeans]({{base_path}}/assets/img/integrate/jmx/mi-mbeans.png){: style="width:90%")}]({{base_path}}/assets/img/integrate/jmx/mi-mbeans.png) - -The following section summarizes the common MBeans for all WSO2 products: - -### Connection MBeans - -These MBeans provide connection statistics for the HTTP and HTTPS -transports. - -You can view the following Connection MBeans: - -- `org.apache.synapse/PassThroughConnections/http-listener` -- `org.apache.synapse/PassThroughConnections/http-sender` -- `org.apache.synapse/PassThroughConnections/https-listener` -- `org.apache.synapse/PassThroughConnections/https-sender` - -**Attributes** - -| Attribute Name | Description | -|-----------------------------|--------------------------------------------------------------------| -| `ActiveConnections` | Number of currently active connections. | -| `ActiveConnectionsPerHosts` | A map of the number of connections against hosts. | -| `LastXxxConnections` | The number of connections created during last Xxx time period. | -| `RequestSizesMap` | A map of the number of requests against their sizes. | -| `ResponseSizesMap` | A map of the number of responses against their sizes. | -| `LastResetTime` | The time when the connection-statistic recordings were last reset. | - -**Operations** - -| Operation Name | Description | -|------------------------------------|-------------------------------------------------------------| -| `reset()` | Clear recorded connection statistics and restart recording. | - -### Latency MBeans - -This view provides statistics of the latencies from all backend services -connected through the HTTP  and HTTPS transports. These statistics are -provided as an aggregate value. - -You can view the following Latency MBeans: - -- `org.apache.synapse/PassthroughLatencyView/nio-http-http` -- `org.apache.synapse/PassthroughLatencyView/nio-https-https` - -**Attributes** - -| Attribute Name | Description | -|----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------| -| `AllTimeAvgLatency` | Average latency since the latency recording was last reset. | -| `LastxxxAvgLatency` | Average latency for last xxx time period. For example, LastHourAvgLatency returns the average latency for the last hour. | -| `LastResetTime` | The time when the latency-statistic recordings were last reset. | - -**Operations** - -| Operation Name | Description | -|------------------------------------|---------------------------------| -| `reset()` | Clear recorded latency statistics and restart recording. | - -### Transport MBeans - -For each transport listener and sender enabled in the Micro Integrator, there will be an MBean under the `org.apache.axis2/Transport` domain. 
-For example, when the JMS transport is enabled, the following MBean will be exposed: - -- `org.apache.axis2/Transport/jms-sender-n` - -You can also view the following Transport MBeans: - -- `org.apache.synapse/Transport/passthru-http-receiver` -- `org.apache.synapse/Transport/passthru-http-sender` -- `org.apache.synapse/Transport/passthru-https-receiver` -- `org.apache.synapse/Transport/passthru-https-sender` - -**Attributes** - -| Attribute Name | Description | -|----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------| -| `ActiveThreadCount` | Threads active in this transport listener/sender. | -| `AvgSizeReceived` | The average size of received messages. | -| `AvgSizeSent` | The average size of sent messages. | -| `BytesReceived` | The number of bytes received through this transport. | -| `BytesSent` | The number of bytes sent through this transport. | -| `FaultsReceiving` | The number of faults encountered while receiving. | -| `FaultsSending` | The number of faults encountered while sending. | -| `LastResetTime` | The time when the last transport listener/sender statistic recording was reset. | -| `MaxSizeReceived` | Maximum message size of received messages. | -| `MaxSizeSent` | Maximum message size of sent messages. | -| `MetricsWindow` | Time difference between current time and last reset time in milliseconds. | -| `MinSizeReceived` | Minimum message size of received messages. | -| `MinSizeSent` | Minimum message size of sent messages. | -| `MessagesReceived` | The total number of messages received through this transport. | -| `MessagesSent` | The total number of messages sent through this transport. | -| `QueueSize` | The number of messages currently queued. Messages get queued if all the worker threads in this transport thread pool are busy. | -| `ResponseCodeTable` | The number of messages sent against their response codes. | -| `TimeoutsReceiving` | Message receiving timeout. | -| `TimeoutsSending` | Message sending timeout. | - -**Operations** - -| Operation Name | Description | -|------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------| -| `start()` | Start this transport listener/sender. | -| `stop()` | Stop this transport listener/sender. | -| `resume()` | Resume this transport listener/sender which is currently paused. | -| `resetStatistics()` | Clear recorded transport listener/sender statistics and restart recording. | -| `pause()` | Pause this transport listener/sender which has been started. | -| `maintenenceShutdown(long gracePeriod)` | Stop processing new messages, and wait the specified maximum time for in-flight requests to complete before a controlled shutdown for maintenance. | - - diff --git a/en/docs/observe/micro-integrator/classic-observability-metrics/snmp-monitoring.md b/en/docs/observe/micro-integrator/classic-observability-metrics/snmp-monitoring.md deleted file mode 100644 index 38a09b412e..0000000000 --- a/en/docs/observe/micro-integrator/classic-observability-metrics/snmp-monitoring.md +++ /dev/null @@ -1,256 +0,0 @@ -# SNMP Monitoring - -Simple Network Management Protocol (SNMP) is an Internet-standard protocol for managing devices on IP networks. Follow the instructions given below to configure SNMP in the Micro Integrator, which exposes various MBeans via SNMP. 
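-
-Once SNMP is enabled as described below, the exposed OID tree can be read with standard Net-SNMP tools. For example (the host, port, and community string here are placeholders that depend on your agent configuration):
-
-```bash
-# Walk the Micro Integrator's OID branch (1.3.6.1.4.1.18060.14) on an assumed local agent.
-snmpwalk -v 2c -c public localhost 1.3.6.1.4.1.18060.14
-```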
-
-## Enabling SNMP
-
-1. Download the following jar files from [http://www.snmp4j.org](http://www.snmp4j.org/) and add them to the `/lib` directory.
-
-    - **snmp4j-2.1.0.jar**
-    - **snmp4j-agent-2.0.6.jar**
-
-2. Enable SNMP in the `deployment.toml` file (located in the `/conf/` directory) by adding the following entry:
-
-    ```toml
-    [synapse_properties]
-    'synapse.snmp.enabled'=true
-    ```
-
-The Micro Integrator now exposes its MBeans over SNMP. For example:
-
-```java
-Monitoring Info : OID branch "1.3.6.1.4.1.18060.14" with the following sub-branches:
-
-1 - ServerManager MBean
-2 - Transport MBeans
-3 - NHttpConnections MBeans
-4 - NHTTPLatency MBeans
-5 - NHTTPS2SLatency MBeans
-```
-
-## MBean OID mappings
-
-Following are the OID equivalents of the server manager and transport MBeans, which are described in [JMX Monitoring]({{base_path}}/observe/micro-integrator/classic-observability-metrics/jmx-monitoring):
-
-```
-Name=ServerManager@ServerState as OID:
-1.3.6.1.4.1.18060.14.1.21.1.0
-
-Name=passthru-http-sender@ActiveThreadCount as OID:
-1.3.6.1.4.1.18060.14.2.17.1.0
-
-Name=passthru-http-sender@AvgSizeReceived as OID:
-1.3.6.1.4.1.18060.14.2.17.2.0
-
-Name=passthru-http-sender@AvgSizeSent as OID:
-1.3.6.1.4.1.18060.14.2.17.3.0
-
-Name=passthru-http-sender@BytesReceived as OID:
-1.3.6.1.4.1.18060.14.2.17.4.0
-
-Name=passthru-http-sender@BytesSent as OID:
-1.3.6.1.4.1.18060.14.2.17.5.0
-
-Name=passthru-http-sender@FaultsReceiving as OID:
-1.3.6.1.4.1.18060.14.2.17.6.0
-
-Name=passthru-http-sender@FaultsSending as OID:
-1.3.6.1.4.1.18060.14.2.17.7.0
-
-Name=passthru-http-sender@LastResetTime as OID:
-1.3.6.1.4.1.18060.14.2.17.8.0
-
-Name=passthru-http-sender@MaxSizeReceived as OID:
-1.3.6.1.4.1.18060.14.2.17.9.0
-
-Name=passthru-http-sender@MaxSizeSent as OID:
-1.3.6.1.4.1.18060.14.2.17.10.0
-
-Name=passthru-http-sender@MessagesReceived as OID:
-1.3.6.1.4.1.18060.14.2.17.11.0
-
-Name=passthru-http-sender@MessagesSent as OID:
-1.3.6.1.4.1.18060.14.2.17.12.0
-
-Name=passthru-http-sender@MetricsWindow as OID:
-1.3.6.1.4.1.18060.14.2.17.13.0
-
-Name=passthru-http-sender@MinSizeReceived as OID:
-1.3.6.1.4.1.18060.14.2.17.14.0
-
-Name=passthru-http-sender@MinSizeSent as OID:
-1.3.6.1.4.1.18060.14.2.17.15.0
-
-Name=passthru-http-sender@QueueSize as OID:
-1.3.6.1.4.1.18060.14.2.17.16.0
-
-Name=passthru-http-sender@TimeoutsReceiving as OID:
-1.3.6.1.4.1.18060.14.2.17.18.0
-
-Name=passthru-http-sender@TimeoutsSending as OID:
-1.3.6.1.4.1.18060.14.2.17.19.0
-
-Name=passthru-https-sender@ActiveThreadCount as OID:
-1.3.6.1.4.1.18060.14.2.18.1.0
-
-Name=passthru-https-sender@AvgSizeReceived as OID:
-1.3.6.1.4.1.18060.14.2.18.2.0
-
-Name=passthru-https-sender@AvgSizeSent as OID:
-1.3.6.1.4.1.18060.14.2.18.3.0
-
-Name=passthru-https-sender@BytesReceived as OID:
-1.3.6.1.4.1.18060.14.2.18.4.0
-
-Name=passthru-https-sender@BytesSent as OID:
-1.3.6.1.4.1.18060.14.2.18.5.0
-
-Name=passthru-https-sender@FaultsReceiving as OID:
-1.3.6.1.4.1.18060.14.2.18.6.0
-
-Name=passthru-https-sender@FaultsSending as OID:
-1.3.6.1.4.1.18060.14.2.18.7.0
-
-Name=passthru-https-sender@LastResetTime as OID:
-1.3.6.1.4.1.18060.14.2.18.8.0
-
-Name=passthru-https-sender@MaxSizeReceived as OID:
-1.3.6.1.4.1.18060.14.2.18.9.0
-
-Name=passthru-https-sender@MaxSizeSent as OID:
-1.3.6.1.4.1.18060.14.2.18.10.0
-
-Name=passthru-https-sender@MessagesReceived as OID:
-1.3.6.1.4.1.18060.14.2.18.11.0
-
-Name=passthru-https-sender@MessagesSent as OID:
-1.3.6.1.4.1.18060.14.2.18.12.0
-
-Name=passthru-https-sender@MetricsWindow as OID:
-1.3.6.1.4.1.18060.14.2.18.13.0 - -Name=passthru-https-sender@MinSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.18.14.0 - -Name=passthru-https-sender@MinSizeSent as OID: -1.3.6.1.4.1.18060.14.2.18.15.0 - -Name=passthru-https-sender@QueueSize as OID: -1.3.6.1.4.1.18060.14.2.18.16.0 - -Name=passthru-https-sender@TimeoutsReceiving as OID: -1.3.6.1.4.1.18060.14.2.18.18.0 - -Name=passthru-https-sender@TimeoutsSending as OID: -1.3.6.1.4.1.18060.14.2.18.19.0 - -Name=passthru-http-receiver@ActiveThreadCount as OID: -1.3.6.1.4.1.18060.14.2.19.1.0 - -Name=passthru-http-receiver@AvgSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.19.2.0 - -Name=passthru-http-receiver@AvgSizeSent as OID: -1.3.6.1.4.1.18060.14.2.19.3.0 - -Name=passthru-http-receiver@BytesReceived as OID: -1.3.6.1.4.1.18060.14.2.19.4.0 - -Name=passthru-http-receiver@BytesSent as OID: -1.3.6.1.4.1.18060.14.2.19.5.0 - -Name=passthru-http-receiver@FaultsReceiving as OID: -1.3.6.1.4.1.18060.14.2.19.6.0 - -Name=passthru-http-receiver@FaultsSending as OID: -1.3.6.1.4.1.18060.14.2.19.7.0 - -Name=passthru-http-receiver@LastResetTime as OID: -1.3.6.1.4.1.18060.14.2.19.8.0 - -Name=passthru-http-receiver@MaxSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.19.9.0 - -Name=passthru-http-receiver@MaxSizeSent as OID: -1.3.6.1.4.1.18060.14.2.19.10.0 - -Name=passthru-http-receiver@MessagesReceived as OID: -1.3.6.1.4.1.18060.14.2.19.11.0 - -Name=passthru-http-receiver@MessagesSent as OID: -1.3.6.1.4.1.18060.14.2.19.12.0 - -Name=passthru-http-receiver@MetricsWindow as OID: -1.3.6.1.4.1.18060.14.2.19.13.0 - -Name=passthru-http-receiver@MinSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.19.14.0 - -Name=passthru-http-receiver@MinSizeSent as OID: -1.3.6.1.4.1.18060.14.2.19.15.0 - -Name=passthru-http-receiver@QueueSize as OID: -1.3.6.1.4.1.18060.14.2.19.16.0 - -Name=passthru-http-receiver@TimeoutsReceiving as OID: -1.3.6.1.4.1.18060.14.2.19.18.0 - -Name=passthru-http-receiver@TimeoutsSending as OID: -1.3.6.1.4.1.18060.14.2.19.19.0 - -Name=passthru-https-receiver@ActiveThreadCount as OID: -1.3.6.1.4.1.18060.14.2.20.1.0 - -Name=passthru-https-receiver@AvgSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.20.2.0 - -Name=passthru-https-receiver@AvgSizeSent as OID: -1.3.6.1.4.1.18060.14.2.20.3.0 - -Name=passthru-https-receiver@BytesReceived as OID: -1.3.6.1.4.1.18060.14.2.20.4.0 - -Name=passthru-https-receiver@BytesSent as OID: -1.3.6.1.4.1.18060.14.2.20.5.0 - -Name=passthru-https-receiver@FaultsReceiving as OID: -1.3.6.1.4.1.18060.14.2.20.6.0 - -Name=passthru-https-receiver@FaultsSending as OID: -1.3.6.1.4.1.18060.14.2.20.7.0 - -Name=passthru-https-receiver@LastResetTime as OID: -1.3.6.1.4.1.18060.14.2.20.8.0 - -Name=passthru-https-receiver@MaxSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.20.9.0 - -Name=passthru-https-receiver@MaxSizeSent as OID: -1.3.6.1.4.1.18060.14.2.20.10.0 - -Name=passthru-https-receiver@MessagesReceived as OID: -1.3.6.1.4.1.18060.14.2.20.11.0 - -Name=passthru-https-receiver@MessagesSent as OID: -1.3.6.1.4.1.18060.14.2.20.12.0 - -Name=passthru-https-receiver@MetricsWindow as OID: -1.3.6.1.4.1.18060.14.2.20.13.0 - -Name=passthru-https-receiver@MinSizeReceived as OID: -1.3.6.1.4.1.18060.14.2.20.14.0 - -Name=passthru-https-receiver@MinSizeSent as OID: -1.3.6.1.4.1.18060.14.2.20.15.0 - -Name=passthru-https-receiver@QueueSize as OID: -1.3.6.1.4.1.18060.14.2.20.16.0 - -Name=passthru-https-receiver@TimeoutsReceiving as OID: -1.3.6.1.4.1.18060.14.2.20.18.0 - -Name=passthru-https-receiver@TimeoutsSending as OID: -1.3.6.1.4.1.18060.14.2.20.19.0 -``` diff --git 
a/en/docs/observe/micro-integrator/classic-observability-traces/monitoring-with-opentelemetry-mi.md b/en/docs/observe/micro-integrator/classic-observability-traces/monitoring-with-opentelemetry-mi.md
deleted file mode 100644
index 8f9ca8aa76..0000000000
--- a/en/docs/observe/micro-integrator/classic-observability-traces/monitoring-with-opentelemetry-mi.md
+++ /dev/null
@@ -1,395 +0,0 @@
-# Monitoring with OpenTelemetry
-
-Tracing a message in MI is important to debug, observe, and identify possible bottlenecks in a message path. This is known as distributed tracing. OpenTelemetry allows you to enable distributed tracing for WSO2 MI.
-
-OpenTelemetry is a single set of APIs and libraries that standardize how telemetry data such as traces, metrics, and logs are collected, transmitted, and managed. It provides a secure, vendor-neutral specification for instrumentation and offers a way for developers to follow the thread to trace requests from beginning to end across touchpoints and understand distributed systems at scale. OpenTelemetry also helps to trace the message and identify the latencies that took place in each process or method, so you can carry out a time-related analysis.
-
-## OpenTelemetry Configurations for MI
-
-Add the following configurations to enable and view traces through OpenTelemetry.
-
-WSO2 MI supports the following ways to retrieve instrumented data:
-
- - Jaeger
- - Zipkin
- - Log
- - [OTLP](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md) - This type can support APMs such as NewRelic, Elastic, etc.
-
-First, add the configurations below to the `/repository/conf/deployment.toml` file to enable statistics collection.
-
-```toml
-[mediation]
-flow.statistics.capture_all= true
-stat.tracer.collect_payloads= true
-stat.tracer.collect_mediation_properties= true
-```
-
-Then, add the configurations for the specific type of tracing in order to enable OpenTelemetry.
-
-!!! note
-    WSO2 MI no longer supports OpenTracing, as it is deprecated; OpenTelemetry is supported instead for distributed tracing. The `[opentracing]` section that was present in the `deployment.toml` file of WSO2 MI 4.1.0, which denoted OpenTracing-related configurations, has been replaced by the `[opentelemetry]` section.
-
-## Enabling Jaeger Tracing
-
-1. Copy the following configuration into the `deployment.toml` file.
-
-    === "Format"
-        ```toml
-        [opentelemetry]
-        enable = true
-        logs = true
-        type = "jaeger"
-        host = ""
-        port = ""
-
-        # instead of 'host' and 'port', 'url' can be used directly in the following way.
-        url = ""
-        ```
-
-    === "Example"
-        ```toml
-        [opentelemetry]
-        enable = true
-        logs = true
-        type = "jaeger"
-        host = "localhost"
-        port = 14250
-
-        # or
-
-        url = "http://localhost:14250"
-        ```
-
-2. Start the server. Once that is done, [Download Jaeger](https://www.jaegertracing.io/download/) and start it as mentioned in its [Quick Start Guide](https://www.jaegertracing.io/docs/1.15/#quick-start). Then the traces can be viewed from the [Jaeger UI](http://localhost:16686).
-
-    [![Distributed tracing jaeger]({{base_path}}/assets/img/administer/opentelemetry-jaeger-mi.png)]({{base_path}}/assets/img/administer/opentelemetry-jaeger-mi.png)
-
-
-## Enabling Zipkin Tracing
-
-1. Copy the following configuration into the `deployment.toml` file.
- - === "Format" - ```toml - [opentelemetry] - enable = true - logs = true - type = "zipkin" - host = "" - port = "" - - # instead of host and port, ‘url’ can be used directly in the following way. - url = "" - ``` - - === "Example" - ```toml - [opentelemetry] - enable = true - logs = true - type = "zipkin" - host = "localhost" - port = 9411 - - # or - url = "http://localhost:9411" - ``` - -2. Start the server. Once that is done, Download Zipkin and start it as mentioned in its Quick Start Guide. Then the traces can be viewed from Zipkin UI (http://localhost:9411). - - [![Distributed tracing zipkin]({{base_path}}/assets/img/administer/opentelemetry-zipkin-mi.png)]({{base_path}}/assets/img/administer/opentelemetry-zipkin-mi.png) - - -## Enabling Log Tracing - -Log reporter records all the information related to the trace in the form of logs, and appends them to a log file. This is different from Jaeger or Zipkin, as there are no traces visualized, and no need to install anything in order to view the traces. - -```toml -[opentelemetry] -enable = true -logs = true -type = "log" -``` - -After you invoke an artifact you will be able to see tracing data in the `wso2-mi-open-telemetry.log` in the `/repository/logs` folder. - -```log -07-01-2022-11:31:01,716 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"f960a2c19e696aa7","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"CloneMediator","Latency":"60ms","Tags":"AttributesMap{data={componentType=Mediator, componentId=HealthcareAPI@3:CloneMediator, threadId=103, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Connection=Keep-Alive, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, User-Agent=Synapse-PT-HttpComponents-NIO, X-B3-Sampled=1, X-B3-SpanId=f960a2c19e696aa7, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=CloneMediator}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,716 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"a1ad135b0f7aff6a","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"/healthcare/doctor/{doctorType}","Latency":"74ms","Tags":"AttributesMap{data={componentType=API Resource, componentId=HealthcareAPI@1:Resource, threadId=103, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Connection=Keep-Alive, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, User-Agent=Synapse-PT-HttpComponents-NIO, X-B3-Sampled=1, X-B3-SpanId=f960a2c19e696aa7, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=/healthcare/doctor/{doctorType}}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,717 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"c271d0cce18fcb84","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"PayloadFactoryMediator","Latency":"30ms","Tags":"AttributesMap{data={componentType=Mediator, componentId=HealthcareAPI@6:PayloadFactoryMediator, threadId=105, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, 
Connection=Keep-Alive, Content-Type=application/json, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, User-Agent=Synapse-PT-HttpComponents-NIO, X-B3-Sampled=1, X-B3-SpanId=c271d0cce18fcb84, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=PayloadFactoryMediator}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,717 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"bf4ae2557f179d2e","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"GrandOakEndpoint","Latency":"104ms","Tags":"AttributesMap{data={componentType=Endpoint, componentId=GrandOakEndpoint@0:GrandOakEndpoint, hashcode=-1832964342, threadId=104, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=bf4ae2557f179d2e, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, Endpoint={\"method\":\"GET\",\"advanced\":{\"suspendState\":{\"errorCodes\":[],\"maxDuration\":9223372036854775807,\"initialDuration\":-1},\"timeoutState\":{\"errorCodes\":[],\"reties\":0}},\"uriTemplate\":\"http://localhost:9090/grandOak/doctors/\",\"name\":\"GrandOakEndpoint\",\"type\":\"HTTP Endpoint\"}, componentName=GrandOakEndpoint}, capacity=128, totalAddedValues=7}"} -07-01-2022-11:31:01,717 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"fa85a4860aa84740","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"CallMediator","Latency":"121ms","Tags":"AttributesMap{data={componentType=Mediator, componentId=HealthcareAPI@4:CallMediator, threadId=104, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=bf4ae2557f179d2e, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=CallMediator}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,718 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"a61e1a81316aebe5","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"AnonymousSequence","Latency":"122ms","Tags":"AttributesMap{data={componentType=Sequence, componentId=HashCodeNullComponent, threadId=104, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=bf4ae2557f179d2e, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=AnonymousSequence}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,718 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"47b80a3714e433eb","Trace 
Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"PineValleyEndpoint","Latency":"85ms","Tags":"AttributesMap{data={componentType=Endpoint, componentId=PineValleyEndpoint@0:PineValleyEndpoint, hashcode=1891138122, threadId=105, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Content-Type=application/json, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=47b80a3714e433eb, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, Endpoint={\"method\":\"POST\",\"advanced\":{\"suspendState\":{\"errorCodes\":[],\"maxDuration\":9223372036854775807,\"initialDuration\":-1},\"timeoutState\":{\"errorCodes\":[],\"reties\":0}},\"uriTemplate\":\"http://localhost:9091/pineValley/doctors\",\"name\":\"PineValleyEndpoint\",\"type\":\"HTTP Endpoint\"}, componentName=PineValleyEndpoint}, capacity=128, totalAddedValues=7}"} -07-01-2022-11:31:01,718 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"57f993da8ffe148d","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"CallMediator","Latency":"87ms","Tags":"AttributesMap{data={componentType=Mediator, componentId=HealthcareAPI@7:CallMediator, threadId=105, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Content-Type=application/json, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=47b80a3714e433eb, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=CallMediator}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,718 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"8b723871e1d1dfcd","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"AnonymousSequence","Latency":"118ms","Tags":"AttributesMap{data={componentType=Sequence, componentId=HashCodeNullComponent, threadId=105, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Content-Type=application/json, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, X-B3-Sampled=1, X-B3-SpanId=47b80a3714e433eb, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=AnonymousSequence}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"dd4ce497799a0fb2","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"AggregateMediator","Latency":"2ms","Tags":"AttributesMap{data={componentType=Mediator, Status code=200, componentId=HealthcareAPI@9:AggregateMediator, threadId=107, Transport Headers={Connection=keep-alive, Content-Encoding=gzip, Content-Type=application/json, Transfer-Encoding=chunked, X-B3-Sampled=1, X-B3-SpanId=dd4ce497799a0fb2, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=AggregateMediator}, capacity=128, totalAddedValues=6}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"40ba700629e978dc","Trace 
Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"RespondMediator","Latency":"16ms","Tags":"AttributesMap{data={componentType=Mediator, Status code=200, componentId=HealthcareAPI@10:RespondMediator, threadId=106, Transport Headers={Access-Control-Allow-Headers=, Access-Control-Allow-Methods=GET, Content-Encoding=gzip, Content-Type=application/json, Origin=https://localhost:9443, X-B3-Sampled=1, X-B3-SpanId=40ba700629e978dc, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=RespondMediator}, capacity=128, totalAddedValues=6}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"9ef4457c70927019","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"AnonymousSequence","Latency":"17ms","Tags":"AttributesMap{data={componentType=Sequence, Status code=200, componentId=HashCodeNullComponent, threadId=106, Transport Headers={Access-Control-Allow-Headers=, Access-Control-Allow-Methods=GET, Content-Encoding=gzip, Content-Type=application/json, Origin=https://localhost:9443, X-B3-Sampled=1, X-B3-SpanId=40ba700629e978dc, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=AnonymousSequence}, capacity=128, totalAddedValues=6}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"8fac5fc65b6f6369","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"AggregateMediator","Latency":"117ms","Tags":"AttributesMap{data={componentType=Mediator, Status code=200, componentId=HealthcareAPI@9:AggregateMediator, threadId=106, Transport Headers={Connection=keep-alive, Content-Encoding=gzip, Content-Type=application/json, Transfer-Encoding=chunked, X-B3-Sampled=1, X-B3-SpanId=8fac5fc65b6f6369, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=AggregateMediator}, capacity=128, totalAddedValues=6}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"b867784122bbcddb","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"API_INSEQ","Latency":"963ms","Tags":"AttributesMap{data={componentType=Sequence, componentId=HealthcareAPI@2:API_INSEQ, threadId=106, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Connection=Keep-Alive, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, User-Agent=Synapse-PT-HttpComponents-NIO, X-B3-Sampled=1, X-B3-SpanId=f960a2c19e696aa7, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, componentName=API_INSEQ}, capacity=128, totalAddedValues=5}"} -07-01-2022-11:31:01,719 [-] [BatchSpanProcessor_WorkerThread-1] TRACE {"Span Id":"5692e999f708a67d","Trace Id":"9f3d860546d053aa06d55844a7d209a4","Operation":"HealthcareAPI","Latency":"971ms","Tags":"AttributesMap{data={componentType=API, componentId=HealthcareAPI@0:HealthcareAPI, hashcode=1400154233, threadId=106, Transport Headers={Accept=*/*, Accept-Encoding=gzip, deflate, br, Accept-Language=en-US,en;q=0.5, activityid=24b17154-8f82-4a0e-8ac9-81ae61d95762, Connection=Keep-Alive, Host=localhost:8290, Origin=https://localhost:9443, Referer=https://localhost:9443/, Sec-Fetch-Dest=empty, Sec-Fetch-Mode=cors, Sec-Fetch-Site=same-site, uber-trace-id=12091cfc04f002deb631dee437ef4479:5024662f6c48b6fa:0:1, User-Agent=Synapse-PT-HttpComponents-NIO, X-B3-Sampled=1, -X-B3-SpanId=f960a2c19e696aa7, X-B3-TraceId=9f3d860546d053aa06d55844a7d209a4}, 
componentName=HealthcareAPI}, capacity=128, totalAddedValues=6}"}
-```
-
-## Enabling OTLP Tracing
-
-The OpenTelemetry Protocol (OTLP) is a general-purpose telemetry data delivery protocol used to exchange data between the client and the server. This type can support APMs such as NewRelic, Elastic, etc.
-
-1. Copy the following configuration into the `deployment.toml` file to use OTLP.
-
-    === "Format"
-        ```toml
-        [opentelemetry]
-        enable = true
-        logs = true
-        type = "otlp"
-        url = "endpoint-url"
-
-        [[opentelemetry.properties]]
-        name = "name-of-the-header"
-        value = "key-value-of-the-header"
-        ```
-
-    === "Example"
-        ```toml
-        [opentelemetry]
-        enable = true
-        logs = true
-        type = "otlp"
-        url = "https://otlp.nr-data.net:4317/v1/traces"
-
-        [[opentelemetry.properties]]
-        name = "api-key"
-        value = ""
-        ```
-
-    !!! note
-        The above example illustrates the OpenTelemetry configurations for the New Relic APM.
-
-OTLP is the recommended way to publish traces to the New Relic APM. However, you can also publish Zipkin-format traces to New Relic in the following way.
-
-=== "Format"
-    ```toml
-    [opentelemetry]
-    enable = true
-    logs = true
-    type = "zipkin"
-    url = "https://trace-api.newrelic.com/trace/v1?Api-Key=&Data-Format=zipkin&Data-Format-Version=2"
-    ```
-
-!!! note
-    To configure the API key in New Relic:
-
-    - Go to **Profile -> API keys -> Insights Insert key -> Insert keys** to create an API key in New Relic.
-    - For other vendors, please consult the respective documentation.
-
-
-## Using the Custom Tracer Implementation
-
-By using a custom tracer implementation in WSO2 MI, you can publish tracing data from WSO2 MI to any tracing server. Let's implement a custom tracer that simply prints the logs via `System.out` in WSO2 MI using the instructions given below:
-
-- Implement the `org.apache.synapse.aspects.flow.statistics.tracing.opentelemetry.management.OpenTelemetryManager` interface and add your implementation.
-
-- The `init` method should contain the generation of the `Tracer` instance by configuring the endpoint URL, headers, `SdkTracerProvider`, and `OpenTelemetry` instance.
-
-- Then the `handler` attribute can be defined using the initialized tracer and OpenTelemetry instances.
-
-- The `getTelemetryTracer` method should return the tracer with the given instrumentation name.
-
-- The `close` method should close the initialized `SdkTracerProvider` instance to shut down the SDK cleanly at JVM exit.
-
-- The `getServiceName` method should return the service name.
-
-- Finally, the `getHandler` method should return the above initialized handler.
-
-The following are the components involved in building your custom tracer:
-
-- An implementation of `SpanExporter` - Publishes the spans.
-
-- An implementation of `OpenTelemetryManager` - Coordinates the span, and the relevant `SpanExporter`.
-
-If you are building without an already available `SpanExporter`, then you should create one. In the below example, let's create a `SysOutExporter` by implementing the `SpanExporter` interface, which will simply output the logs to the standard output.
-
-```java
-public class SysOutExporter implements SpanExporter {
-
-    private final Log log = LogFactory.getLog(TelemetryConstants.TRACER);
-    private final JsonFactory jsonFactory = new JsonFactory();
-
-    public static SysOutExporter create() {
-
-        return new SysOutExporter();
-    }
-
-    @Override
-    public CompletableResultCode export(Collection<SpanData> spans) {
-
-        Iterator<SpanData> iterator = spans.iterator();
-        while (iterator.hasNext()) {
-            String traceId = null;
-            String spanId = null;
-            SpanData span = iterator.next();
-            try {
-                StringWriter writer = new StringWriter();
-                JsonGenerator generator = this.jsonFactory.createGenerator(writer);
-                generator.writeStartObject();
-                traceId = span.getTraceId();
-                spanId = span.getSpanId();
-                generator.writeStringField(TelemetryConstants.SPAN_ID, spanId);
-                generator.writeStringField(TelemetryConstants.TRACE_ID, traceId);
-                generator.writeStringField(TelemetryConstants.SPAN_NAME, span.getName());
-                generator.writeStringField(TelemetryConstants.LATENCY,
-                        ((int) (span.getEndEpochNanos() - span.getStartEpochNanos()) / 1000000) + "ms");
-                generator.writeStringField(TelemetryConstants.ATTRIBUTES, String.valueOf(span.getAttributes()));
-                generator.writeEndObject();
-                generator.close();
-                writer.close();
-                System.out.println(writer.toString());
-            } catch (IOException e) {
-                log.error("Error while structuring the log message when exporting Trace ID: " + traceId + ", Span ID:" +
-                        " " + spanId, e);
-            }
-        }
-
-        return CompletableResultCode.ofSuccess();
-    }
-
-    @Override
-    public CompletableResultCode flush() {
-
-        return CompletableResultCode.ofSuccess();
-    }
-
-    @Override
-    public CompletableResultCode shutdown() {
-
-        return CompletableResultCode.ofSuccess();
-    }
-}
-```
-
-Then you can create the class as below.
-
-```java
-public class SysoutTelemetryManager implements OpenTelemetryManager {
-
-    private static final Log logger = LogFactory.getLog(SysoutTelemetryManager.class);
-    private SdkTracerProvider sdkTracerProvider;
-    private OpenTelemetry openTelemetry;
-    private TelemetryTracer tracer;
-    private SpanHandler handler;
-
-    @Override
-    public void init() {
-        SysOutExporter sysoutExporter = SysOutExporter.create();
-
-        if (logger.isDebugEnabled()) {
-            logger.debug("Exporter: " + sysoutExporter + " is configured");
-        }
-
-        Resource serviceNameResource = Resource.create(Attributes.of(ResourceAttributes.SERVICE_NAME,
-                TelemetryConstants.SERVICE_NAME));
-
-        sdkTracerProvider = SdkTracerProvider.builder()
-                .addSpanProcessor(BatchSpanProcessor.builder(sysoutExporter).build())
-                .setResource(Resource.getDefault().merge(serviceNameResource))
-                .build();
-
-        openTelemetry = OpenTelemetrySdk.builder()
-                .setTracerProvider(sdkTracerProvider)
-                .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
-                .build();
-
-        this.tracer = new TelemetryTracer(getTelemetryTracer());
-        if (logger.isDebugEnabled()) {
-            logger.debug("Tracer: " + this.tracer + " is configured");
-        }
-        this.handler = new SpanHandler(tracer, openTelemetry, new TracingScopeManager());
-    }
-
-    @Override
-    public Tracer getTelemetryTracer() {
-
-        return openTelemetry.getTracer(TelemetryConstants.OPENTELEMETRY_INSTRUMENTATION_NAME);
-    }
-
-    @Override
-    public void close() {
-
-        if (sdkTracerProvider != null) {
-            sdkTracerProvider.close();
-        }
-    }
-
-    @Override
-    public String getServiceName() {
-
-        return TelemetryConstants.SERVICE_NAME;
-    }
-
-    @Override
-    public OpenTelemetrySpanHandler getHandler() {
-
-        return this.handler;
-    }
-}
-```
-
-1. 
Build the Apache Maven project and add the JAR file to the `/dropins` directory.
-
-2. Edit the `infer.json` file in the `/repository/resources/conf` folder in the following way under `opentelemetry.type`.
-
-    ```
-    "sysout": {
-        "synapse_properties.'opentelemetry.class'": "org.apache.synapse.aspects.flow.statistics.tracing.opentelemetry.management.SysoutTelemetryManager"
-    }
-    ```
-
-3. Add the following configuration to the `deployment.toml` file.
-
-    === "Format"
-        ```toml
-        [opentelemetry]
-        enable = true
-        logs = true
-        type = "sysout"
-        ```
-
-If you need `opentelemetry.properties` beyond the ones provided, you can edit the `for` loop of the `synapse.properties.j2` file in the `/repository/resources/conf/templates/conf` folder in the following way.
-
-{% raw %}
-```
-{%for property in opentelemetry.properties %}
-opentelemetry.properties.{{property.header}} = {{property.key}}
-{% endfor %}
-```
-{% endraw %}
-
-The `deployment.toml` file entry will be as follows:
-
-=== "Format"
-    ```toml
-    [opentelemetry]
-    enable = true
-    logs = true
-    type = "type-name"
-    url = "endpoint-url"
-
-    [[opentelemetry.properties]]
-    header = "header"
-    key = "key-of-the-header"
-    ```
-
-Also, in the custom tracer class, implement a method that returns these properties, similar to the `getHeaderKeyProperty` method in the `OTLPTelemetryManager` class; the corresponding constant in the `org.apache.synapse.aspects.flow.statistics.tracing.opentelemetry.management.TelemetryConstants` class also needs to be changed according to the name given. For more information, view the manually instrumented [OTLP tracer](https://github.com/wso2/wso2-synapse/blob/master/modules/core/src/main/java/org/apache/synapse/aspects/flow/statistics/tracing/opentelemetry/management/OTLPTelemetryManager.java).
-
-Backward compatibility between OpenTelemetry and OpenTracing has been verified for Jaeger and Zipkin with the following versions:
-
-- Zipkin 2.12.9
-- Jaeger 1.14.0
-- Jaeger 1.10.0
-
-Therefore, the existing versions can be used without any issue.
diff --git a/en/docs/observe/micro-integrator/cloud-native-observability-overview.md b/en/docs/observe/micro-integrator/cloud-native-observability-overview.md
deleted file mode 100644
index b2498116cf..0000000000
--- a/en/docs/observe/micro-integrator/cloud-native-observability-overview.md
+++ /dev/null
@@ -1,110 +0,0 @@
-# Micro Integrator Observability Overview
-
-The following diagram depicts the complete **cloud native** observability solution for your Micro Integrator deployment, which includes **metrics monitoring**, **log monitoring**, and **message tracing** capabilities.
-
-[![Cloud Native Deployment Architecture]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-deployment-architecture.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-deployment-architecture.png)
-
-## Minimum cloud native observability
-
-The basic deployment offers you metrics capabilities. You can set up the basic deployment with only Prometheus and Grafana to view and explore the available Prometheus metrics.
-
-[![Cloud Native Deployment - Minimum]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-metrics.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-metrics.png)
-
-## Cloud native observability add-ons
-
-You can also set up different flavors of the observability solution depending on your requirement.
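-
-For orientation, the metrics side of this solution boils down to Prometheus scraping the Micro Integrator's metrics endpoint. A minimal `prometheus.yml` scrape job might look like the following sketch; the job name, target, and metrics path are placeholder assumptions, so use the values from your own metrics setup:
-
-```yaml
-scrape_configs:
-  - job_name: 'wso2-micro-integrator'
-    metrics_path: '/metric-service/metrics'   # assumed path; confirm in your setup
-    static_configs:
-      - targets: ['localhost:9201']           # assumed MI metrics host:port
-```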
### Log processing add-on

Once you set up the basic deployment, you can integrate log-processing capabilities. To use this, you need to install **Fluent-Bit** as the logging agent and **Grafana Loki** as the log aggregator.

[![Cloud Native Deployment with Logs]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-logs.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-logs.png)

### Message tracing add-on

Once you set up the basic deployment, you can integrate message tracing capabilities. To use this, you need to install **Jaeger**.

[![Cloud Native Deployment with Tracing]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-tracing.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/cloud-native-observability-tracing.png)

## Observability solutions

There are two cloud native observability solutions for the Micro Integrator: the Kubernetes-based deployment and the VM-based deployment.

These solutions are suitable for the following combinations of operations.
<table>
    <tr>
        <th>Observability solution</th>
        <th>Operations</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>Kubernetes cloud native solution</td>
        <td>
            <ul>
                <li>Metrics only</li>
                <li>Metrics + Logging</li>
                <li>Metrics + Tracing</li>
                <li>Metrics + Logging + Tracing</li>
            </ul>
        </td>
        <td>
            <ul>
                <li>The default Kubernetes cloud native solution comes with metrics enabled.</li>
                <li>You can also configure logging and tracing in combination with this.</li>
                <li>This solution is ideal in the following situations:
                    <ul>
                        <li>If you want a complete cloud native solution for observability.</li>
                        <li>If you already have Prometheus, Grafana, and Jaeger as your in-house monitoring and observability tools.</li>
                    </ul>
                </li>
                <li>For more information, see the Kubernetes Deployment Getting Started Guide.</li>
            </ul>
        </td>
    </tr>
    <tr>
        <td>VM cloud native deployment</td>
        <td>
            <ul>
                <li>Metrics only</li>
                <li>Logging (add-on)</li>
                <li>Tracing (add-on)</li>
            </ul>
        </td>
        <td>
            <ul>
                <li>The default VM cloud native solution comes with metrics enabled.</li>
                <li>You can additionally set up logging or tracing separately as part of this solution later.</li>
                <li>This solution is ideal if you want a complete cloud native solution for observability, but you need to set it up on a VM. Ideally, you would already have Prometheus, Grafana, and Jaeger as your in-house monitoring and observability tools.</li>
                <li>For more information, see the VM Deployment Getting Started Guide.</li>
            </ul>
        </td>
    </tr>
</table>
## Technologies

The cloud native observability solution is based on proven projects from the **Cloud Native Computing Foundation**, which makes the solution cloud native and future-proof. The following are the technologies used in the current solution:

| **Feature** | **Technology** |
|---------------|-----------------------------|
| Metrics | Prometheus |
| Visualization | Grafana |
| Logging | Log4j2, Fluent-Bit, and Grafana Loki |
| Tracing | Jaeger |

## What's Next?

- Set up cloud-native observability on a VM.
- Set up cloud-native observability on Kubernetes.
\ No newline at end of file
diff --git a/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-in-kubernetes.md b/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-in-kubernetes.md
deleted file mode 100644
index bb8623e4e2..0000000000
--- a/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-in-kubernetes.md
+++ /dev/null
@@ -1,212 +0,0 @@
# Setting up Cloud Native Observability on Kubernetes

Follow the instructions given below to set up a cloud native observability solution in a Kubernetes environment.

To streamline the deployment of the cloud native observability solution in Kubernetes, the Micro Integrator provides a Helm chart via which you can deploy the solution to your Kubernetes cluster. The deployment installs the relevant products and adds the required configurations. After the installation, you can directly use the observability solution with a few additional configurations.

## Prerequisites

- Set up a Kubernetes cluster. For instructions, see the [Kubernetes Documentation](https://kubernetes.io/docs/home/).
- Install Helm on the client machine.

## Setting up the observability deployment

When you deploy the solution on a VM, you first set up the minimum deployment (with metrics monitoring capability) and then add log processing and message tracing capabilities as add-ons. However, when you deploy on Kubernetes, you must first select the required observability capabilities and then deploy all the related technologies and configurations in one step.

Select the required deployment option from the following list and follow the instructions.

### Option 1 - Metrics Monitoring

The basic observability stack allows you to view metrics by installing and configuring Prometheus and Grafana. To install it, follow the steps below:

1. Clone the [Helm repository](https://github.com/wso2/observability-ei).

2. Navigate to the home directory of the cloned repository.

3. To install the basic deployment with the `wso2-observability` release name, issue the following command.

    `helm install wso2-observability . --render-subchart-notes`

4. Make changes to the default settings of the chart if required. For information about configurable parameters, see [Integration Observability - Configuration](https://github.com/wso2/observability-ei#configuration).

The above steps deploy the basic deployment and display instructions to access the dashboards. This deployment allows you to access both the Prometheus and Grafana UIs and provides you with the ability to view and analyze metrics.
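As a quick sanity check, you could verify the release before moving on. The commands below are standard Helm/kubectl usage; the exact pod labels and service names depend on the chart, so check `kubectl get svc` for the actual Grafana service name.

```bash
# Confirm that the Helm release was deployed
helm status wso2-observability

# Check that the Prometheus and Grafana pods have started
kubectl get pods

# Forward the Grafana service port to access the dashboards locally
# (replace the service name with the one reported by `kubectl get svc`)
kubectl port-forward svc/wso2-observability-grafana 3000:80
```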
### Option 2 - Metrics + Log Monitoring

This deployment involves deploying Prometheus, Grafana, Loki, and a Fluent-bit DaemonSet with all the required configurations to integrate the deployed products. To install the deployment using Helm, follow the steps below:

1. Clone the [Helm repository](https://github.com/wso2/observability-ei).

2. Navigate to the home directory of the cloned repository.

3. Open the `values.yaml` file and set the `enabled` parameter to `true` for Loki-stack as shown in the extract below.

    ```yaml
    loki-stack:
      enabled: true
    ```

4. To install the observability deployment including log processing capabilities with the `wso2-observability` release name, issue the following command.

    `helm install wso2-observability . --render-subchart-notes`

5. Make changes to the default settings of the chart if required. For information about configurable parameters, see [Integration Observability - Configuration](https://github.com/wso2/observability-ei#configuration).

The above steps deploy the observability solution with log processing capabilities and display instructions to access the dashboards. With this deployment, you can access the Prometheus and Grafana UIs.

### Option 3 - Metrics Monitoring + Message Tracing

This involves deploying Prometheus, Grafana, and the Jaeger Operator with all the required configurations to integrate the deployed products. To install the deployment using Helm, follow the steps below:

1. Clone the [Helm repository](https://github.com/wso2/observability-ei).

2. Navigate to the home directory of the cloned repository.

3. Open the `values.yaml` file and set the `enabled` parameter to `true` for Jaeger as shown in the extract below.

    ```yaml
    jaeger:
      enabled: true
    ```

4. To install the observability deployment including tracing capabilities with the `wso2-observability` release name, issue the following command.

    `helm install wso2-observability . --render-subchart-notes`

5. Make changes to the default settings of the chart if required. For information about configurable parameters, see [Integration Observability - Configuration](https://github.com/wso2/observability-ei#configuration).

The above steps deploy the observability solution with tracing capabilities and display instructions to access the dashboards. With this deployment, you are able to access the Prometheus, Grafana, and Jaeger UIs.

This deployment installs the Jaeger Operator. To create a Jaeger instance, follow the steps in [Jaeger Operator documentation - Creating a new instance](https://github.com/jaegertracing/helm-charts/tree/master/charts/jaeger-operator#creating-a-new-jaeger-instance) and deploy the preferred Jaeger deployment.
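For example, the simplest possible instance (the all-in-one strategy with in-memory storage, as described in the Jaeger Operator documentation) can be created with a resource definition like the following; the instance name is illustrative:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
```

You could then apply it with `kubectl apply -f jaeger-instance.yaml` (file name illustrative), and the operator creates the corresponding Jaeger deployment.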
!!! Note
    - There are some limitations because the Jaeger client, by default, uses a UDP sender as mentioned in [the Jaeger documentation](https://www.jaegertracing.io/docs/1.22/client-libraries/). If the payload size exceeds 65 KB, spans might get lost in the Jaeger console.
    - Jaeger [sampler types](https://www.jaegertracing.io/docs/1.22/sampling/) can also play a major role in tracing. Depending on the TPS, the sampler type should be carefully chosen.
    - Be sure to check the performance tests and scaling requirements before including tracing in production deployments. For details on how to achieve better performance, see the [Jaeger performance tuning guide](https://www.jaegertracing.io/docs/1.22/performance-tuning/).

##### Configuring Grafana to visualize tracing information

The Helm chart configures the Jaeger data source automatically. Therefore, unlike in [Setting up Cloud Native Observability on a VM]({{base_path}}/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm), it is not required to add it manually. However, to configure the links into the Jaeger UI from the service-level dashboards, you need to perform the following steps:

1. Access Grafana via `localhost:3000` and sign in.

2. Navigate to the settings section of the service-level dashboard by clicking the cog wheel icon in the top right corner.

3. Click **Variables**. This opens the following view.

    [![Variables view]({{base_path}}/assets/img/integrate/monitoring-dashboard/variables.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/variables.png)

4. Edit the `JaegerHost` variable and provide your Jaeger query component hostname and port in the `host:port` syntax as shown below.

    [![constant options]({{base_path}}/assets/img/integrate/monitoring-dashboard/constant-options.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/constant-options.png)

5. Click **Save**.

You need to perform the above steps for all the service-level dashboards (i.e., the Proxy Service dashboard, API Service dashboard, and Inbound Endpoint dashboard).

Once Grafana is successfully configured to visualize statistics, you should be correctly redirected to the Jaeger UI from the Response Time widget of each service-level dashboard as shown below.

[![jaeger ui]({{base_path}}/assets/img/integrate/monitoring-dashboard/jaeger-ui.png){: style="width:50%"}]({{base_path}}/assets/img/integrate/monitoring-dashboard/jaeger-ui.png)

### Option 4 - Metrics + Logs + Message Tracing

To install the cloud native observability solution with logging and tracing capabilities in your Kubernetes cluster, follow the steps below:

1. Clone the [Helm repository](https://github.com/wso2/observability-ei).

2. Navigate to the home directory of the cloned repository.

3. Open the `values.yaml` file and set the `enabled` parameter to `true` for both Loki-stack and Jaeger as shown in the extract below.

    ```yaml
    loki-stack:
      enabled: true
    jaeger:
      enabled: true
    ```

4. To install the complete deployment with the `wso2-observability` release name, issue the following command.

    `helm install wso2-observability . --render-subchart-notes`

5. Make changes to the default settings of the chart if required. For information about configurable parameters, see [Integration Observability - Configuration](https://github.com/wso2/observability-ei#configuration).

The above steps deploy the complete deployment and display instructions to access the dashboards. This deployment allows you to access the Prometheus, Grafana, and Jaeger UIs.

## Setting up the Micro Integrator deployment

To integrate with the observability deployment, you need to perform the following three main tasks in the Micro Integrator containers:

### Enabling observability for the Micro Integrator

- **Enabling the statistics publishing handler**

    Add the following lines to the `deployment.toml` file in the Kubernetes project *before* creating your Micro Integrator image.

    ```toml
    [[synapse_handlers]]
    name="MetricHandler"
    class="org.wso2.micro.integrator.observability.metric.handler.MetricHandler"
    ```

    For more information about the Micro Integrator Kubernetes development flow, see the [MI Kubernetes guide]({{base_path}}/install-and-setup/setup/mi-setup/deployment/kubernetes_deployment_patterns/).

- **Enabling the metrics endpoint**

    Set an environment variable in the Kubernetes resource definition. You can add it at the time of creating the project using the wizard.
    Alternatively, you can open the `integration_cr.yaml` file in the Kubernetes project and add the following under the `spec` tag.

    ```yaml
    env:
    - name: "JAVA_OPTS"
      value: "-DenablePrometheusApi=true"
    ```

- **Enabling discovery for Prometheus**

    This allows Prometheus to discover Micro Integrator targets through service discovery methods. To achieve this, set the following pod-level annotations on the Micro Integrator pod.

    - `prometheus.io.wso2/path: /metric-service/metrics`
    - `prometheus.io.wso2/port: "9201"`
    - `prometheus.io.wso2/scrape: "true"`

### Configuring the Micro Integrator to publish logs

!!! Tip
    This step is only required if you have log processing capabilities in your observability deployment.

Once the above tasks are completed, the container that is deployed through the integration Kubernetes resource emits metric data, and the observability deployment can discover it and start monitoring without further configuration.

**Configuring pods to parse logs through Fluent-bit**

To do this, set the following pod-level annotation on the Micro Integrator pod.

`fluentbit.io/parser: wso2`
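Taken together, the Prometheus discovery annotations and the Fluent-bit parser annotation above would sit under the pod template metadata of your Kubernetes resource. The following is a minimal, illustrative sketch; the surrounding `Deployment` fields, names, and image are placeholders and not part of any WSO2-generated file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mi-integration            # illustrative name
spec:
  selector:
    matchLabels:
      app: mi-integration
  template:
    metadata:
      labels:
        app: mi-integration
      annotations:
        prometheus.io.wso2/path: /metric-service/metrics
        prometheus.io.wso2/port: "9201"
        prometheus.io.wso2/scrape: "true"
        fluentbit.io/parser: wso2
    spec:
      containers:
        - name: mi
          image: my-mi-image:latest   # illustrative image
          env:
            - name: JAVA_OPTS
              value: "-DenablePrometheusApi=true"
```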
### Configuring the Micro Integrator to publish tracing information

!!! Tip
    This step is only required if you have message tracing capabilities in your observability deployment.

To configure the Micro Integrator to publish tracing information, add the following lines to the `deployment.toml` file in the Kubernetes project *before* creating your Micro Integrator container image.

```toml
[mediation]
flow.statistics.capture_all= true
stat.tracer.collect_payloads= true
stat.tracer.collect_mediation_properties= true

[opentracing]
enable = true
jaeger.sampler.manager_host="hostname"
jaeger.sender.agent_host="hostname"
```

!!! tip
    Enter the hostname of your Jaeger service as the value for the `manager_host` and `agent_host` parameters.

These settings enable tracing data instrumentation and publishing to a Jaeger instance.

For more information about the Micro Integrator Kubernetes development flow, see the [MI Kubernetes guide]({{base_path}}/install-and-setup/setup/mi-setup/deployment/kubernetes_deployment_patterns).

## What's Next?

If you have successfully set up your analytics deployment, see the instructions on [viewing cloud native observability statistics]({{base_path}}/observe/micro-integrator/viewing-cloud-native-observability-statistics/).
diff --git a/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm.md b/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm.md
deleted file mode 100644
index bca0ed6dd3..0000000000
--- a/en/docs/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm.md
+++ /dev/null
@@ -1,417 +0,0 @@
# Setting up Cloud Native Observability on a VM

Follow the instructions given below to set up a cloud native observability solution for your Micro Integrator (MI) deployment in a VM environment.

You need to start with the [minimum deployment](#step-1-set-up-the-minimum-deployment), which enables metric monitoring. Once you have set up the minimum deployment, you can add [log processing](#step-2-optionally-integrate-the-log-processing-add-on) and [message tracing](#step-3-optionally-integrate-the-message-tracing-add-on) capabilities to your solution.

## Step 1 - Set up the minimum deployment

The minimum cloud native observability deployment requires Prometheus and Grafana. The Micro Integrator uses Prometheus to expose its statistics, and Grafana is used to visualize them.

### Step 1.1 - Set up Prometheus

Follow the instructions below to set up the Prometheus server:

1. Download Prometheus from the [Prometheus site](https://prometheus.io/download/).

    !!! tip
        Select the appropriate operating system and the architecture based on your operating system and requirements.

2. Extract the downloaded file and navigate to that directory.

    !!! info
        This directory is referred to as `<PROMETHEUS_HOME>` from hereon.

3. Open the `<PROMETHEUS_HOME>/prometheus.yml` file, and in the `scrape_configs` section, add a configuration as follows:

    ```yaml
    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: esb_stats
        metrics_path: /metric-service/metrics
        static_configs:
          - targets: ['localhost:9201']
    ```

    !!! note
        - Do not add or remove spaces when you copy the above configuration to the `prometheus.yml` file.
        - In the `targets` section, you need to add your IP address and the port on which you are running the Micro Integrator server.

4. To start the Prometheus server, open a terminal, navigate to `<PROMETHEUS_HOME>`, and execute the following command:

    `./prometheus`

    When the Prometheus server is successfully started, you will see the following log:

    *Server is ready to receive web requests.*

### Step 1.2 - Set up Grafana

Follow the instructions below to set up the Grafana server:

1. Download and install [Grafana](https://grafana.com/grafana/download/7.1.1).

    !!! tip
        Follow the instructions (for your operating system) on the Grafana website.

2. Start your Grafana server.

    !!! Tip
        The procedure to start Grafana depends on your operating system and the installation process. For example, if your operating system is Mac OS and you have installed Grafana via Homebrew, you start Grafana by issuing the `brew services start grafana` command.

3. Access the Grafana UI from the `localhost:3000` URL.
4. Sign in using `admin` as both the username and the password.

### Step 1.3 - Import dashboards to Grafana

The Micro Integrator provides pre-configured Grafana dashboards in which you can visualize MI statistics.

You can directly import the required dashboards to Grafana using the dashboard ID:

1. Go to [Grafana labs](https://grafana.com/orgs/wso2/dashboards).
2. Select the required dashboard and copy the dashboard ID.
3. Provide this ID to Grafana and import the dashboard.
4. Repeat the above steps to import all other Micro Integrator dashboards.

Alternatively, these dashboards are provided as JSON files that can be manually imported to Grafana. To import the dashboards as JSON files:

1. Go to [Grafana labs](https://grafana.com/orgs/wso2/dashboards), select the required dashboard, and download the JSON file.
2. Sign in to Grafana, click the **+** icon in the left pane, and then click **Import**.

    The **Import** dialog box opens as follows.

    [![Import Dashboards dialog box]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-import-dialog-box.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-import-dialog-box.png)

3. Click **Upload.json file**. Then browse for one of the dashboards that you downloaded as a JSON file.

4. Repeat the above two steps to import all the required dashboards that you downloaded and saved.

### Step 1.4 - Set up the Micro Integrator

To enable observability for the Micro Integrator servers, add the following Synapse handler to the `deployment.toml` file (stored in the `<MI_HOME>/conf` folder).

```toml
[[synapse_handlers]]
name="CustomObservabilityHandler"
class="org.wso2.micro.integrator.observability.metric.handler.MetricHandler"
```

After applying the above change, you can start the Micro Integrator with the following JVM property:

```
-DenablePrometheusApi=true
```
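As a quick check (assuming default ports and a server running locally), you could confirm that the metrics endpoint Prometheus scrapes is live:

```bash
# Start the Micro Integrator with the Prometheus metrics API enabled
# (the startup script name and path depend on your installation)
./bin/micro-integrator.sh -DenablePrometheusApi=true

# In another terminal, fetch the exposed metrics
curl http://localhost:9201/metric-service/metrics
```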
## Step 2 - Optionally, integrate the Log Processing add-on

Once you have successfully set up the [minimum deployment](#step-1-set-up-the-minimum-deployment), you can set up the log processing add-on to process logs. To achieve this, you can use a Grafana Loki-based logging stack.

A Loki-based logging stack consists of three components:

- **Fluent Bit** is the agent that gathers logs and sends them to Loki.
- **Loki** is the main server that stores logs and processes queries.
- **Grafana** queries and displays the logs.

Follow the steps below to set up Fluent Bit and Grafana Loki:

### Step 2.1 - Set up Fluent Bit

Follow the instructions below to set up Fluent Bit:

1. Download [Fluent Bit](https://fluentbit.io/download/).

2. Extract the downloaded file.

    !!! Tip
        The directory is referred to as `<FLUENT_BIT_HOME>` from hereon.

3. Create the following files and save them with the given extension in a preferred location. You can use any text editor of your choice. (The named capture groups in the `wso2` parser regex below follow the labels used in the `labelmap.json` file.)

    !!! info
        In this example, the files are saved in the `<FLUENT_BIT_HOME>/conf` directory.

    - **`labelmap.json`** file

        ```
        {
            "instance": "instance",
            "log_level": "log_level",
            "service": "service"
        }
        ```

    - **`parsers.conf`** file

        ```
        [PARSER]
            Name   observability
            Format json
            Time_Key time
            Time_Format %Y-%m-%dT%H:%M:%S.%L
        [PARSER]
            Name   wso2
            Format regex
            Regex \[(?<date>\d{2,4}\-\d{2,4}\-\d{2,4} \d{2,4}\:\d{2,4}\:\d{2,4}\,\d{1,6})\] (?<log_level>[^\s]+) \{(?<service>[\s\S]*)\} ([-]) (?<properties>\{[\s\S]*\})?(?<message>.*)
            Time_Key date
            Time_Format %Y-%m-%d %H:%M:%S,%L
        ```

    - **`fluentBit.conf`** file

        ```
        [SERVICE]
            Flush        1
            Daemon       Off
            Log_Level    info
            Parsers_File <path-to-the-parsers.conf-file>

        [INPUT]
            Name   tail
            Path   <MI_HOME>/repository/logs/*.log
            Mem_Buf_Limit 500MB
            Parser wso2

        [OUTPUT]
            Name   loki
            Match  *
            Url    http://localhost:3100/loki/api/v1/push
            BatchWait 1
            BatchSize 30720
            Labels {job="fluent-bit"}
            LineFormat json
            LabelMapPath <path-to-the-labelmap.json-file>
        ```

4. Follow the instructions below to build the Fluent Bit output plugin before starting Fluent Bit:

    1. Clone the [grafana/loki git repository](https://github.com/grafana/loki).
    2. To build the Fluent Bit plugin, execute the following command.

        `make fluent-bit-plugin`

        For more details, see the [Fluent Bit Output Plugin readme file](https://github.com/grafana/loki/blob/main/clients/cmd/fluent-bit/README.md#fluent-bit-output-plugin).

    3. Copy and save the path of the `out_loki.so` file.

5. Open a new terminal and navigate to the `<FLUENT_BIT_HOME>` directory.
6. Execute the following command:

    !!! tip
        Replace `<path-to-out_loki.so>` with the path that you copied and saved in the previous step.

    `fluent-bit -e <path-to-out_loki.so> -c <path-to-the-fluentBit.conf-file>`

    When Fluent Bit starts successfully, a log message is printed.

### Step 2.2 - Set up the Loki server

Grafana Loki aggregates and processes the logs from Fluent Bit.

Follow the instructions below to set up Grafana Loki:

1. Download Loki v1.6.1 from the [`grafana/loki` Git repository](https://github.com/grafana/loki/blob/v1.5.0/docs/installation/local.md).

    !!! tip
        Be sure to select the appropriate OS version before downloading.

2. Create a configuration file named `loki-local-config.yaml` for Loki, similar to the sample given below, and save it in a preferred location.

    !!! tip
        - You can use a text editor of your choice for this purpose.
        - You can change the given parameter values based on your requirement.

    ```
    auth_enabled: false

    server:
      http_listen_port: 3100

    ingester:
      lifecycler:
        address: 127.0.0.1
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
        final_sleep: 0s
      chunk_idle_period: 5m
      chunk_retain_period: 30s
      max_transfer_retries: 0

    schema_config:
      configs:
        - from: 2018-04-15
          store: boltdb
          object_store: filesystem
          schema: v11
          index:
            prefix: index_
            period: 168h

    storage_config:
      boltdb:
        directory: /tmp/loki/index

      filesystem:
        directory: /tmp/loki/chunks

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h

    chunk_store_config:
      max_look_back_period: 0s

    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
    ```

3. Unzip the file you downloaded in step 1. The directory that is created as a result is referred to as `<LOKI_HOME>` from hereon.

4. Open a new terminal and navigate to `<LOKI_HOME>`.

5. Execute the following command:

    ```
    ./loki-darwin-amd64 -config.file=./loki-local-config.yaml
    ```

### Step 2.3 - Configure Grafana to visualize logs

Follow the instructions below to add Loki as a data source in Grafana. You need to do this in order to configure Grafana to display logs.

1. Start your Grafana server.

    !!! Tip
        The procedure to start Grafana depends on your operating system and the installation process. For example, if your operating system is Mac OS and you have installed Grafana via Homebrew, you start Grafana by issuing the `brew services start grafana` command.

2. Access Grafana via `http://localhost:3000/`.

3. In the **Data Sources** section, click **Add your first data source**. In the **Add data source** page that appears, click **Select** for **Loki**.

    [![Select Loki as Data Source]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-select-datasource.png){: style="width:80%"}]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-select-datasource.png)

4. In the **Add data source** page -> **Settings** tab, update the configurations for Loki.

5. Click **Save & Test**.

    If the data source is successfully configured, it is indicated via a message.
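Once the Loki data source is working, you can query the Micro Integrator logs from Grafana's **Explore** view. For example, with the `{job="fluent-bit"}` label set in the `fluentBit.conf` file above, a LogQL query such as the following would filter for error entries:

```
{job="fluent-bit"} |= "ERROR"
```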
## Step 3 - Optionally, integrate the Message Tracing add-on

Once you have successfully set up the [minimum deployment](#step-1-set-up-the-minimum-deployment), you need to set up the message tracing add-on using Jaeger.

### Step 3.1 - Set up Jaeger

Download and install [Jaeger](https://www.jaegertracing.io/download/).

!!! Note
    - There are some limitations in the Jaeger client, which by default uses a UDP sender as mentioned in [the Jaeger documentation](https://www.jaegertracing.io/docs/1.22/client-libraries/). If the payload size exceeds 65 KB, spans might get lost in the Jaeger console.
    - Jaeger [sampler types](https://www.jaegertracing.io/docs/1.22/sampling/) can also play a major role in tracing. Depending on the TPS, the sampler type should be carefully chosen.
    - In general, before including tracing in production deployments, it is essential to look into performance tests and scaling requirements. For details on how to achieve better performance, see the [Jaeger performance tuning guide](https://www.jaegertracing.io/docs/1.22/performance-tuning/).

### Step 3.2 - Set up the Micro Integrator

Follow the instructions below to configure the Micro Integrator to publish tracing information:

1. Add the following configurations to the `deployment.toml` file (stored in the `<MI_HOME>/conf` folder).

    ```toml
    [mediation]
    flow.statistics.capture_all= true
    stat.tracer.collect_payloads= true
    stat.tracer.collect_mediation_properties= true

    [opentracing]
    enable = true
    logs = true
    manager_host = "localhost"
    agent_host = "localhost"
    ```

2. Add the following entries to the `<MI_HOME>/repository/resources/conf/keyMappings.json` file.

    ```json
    "opentracing.enable": "synapse_properties.'opentracing.enable'",
    "opentracing.logs": "synapse_properties.'jaeger.reporter.log.spans'",
    "opentracing.manager_host": "synapse_properties.'jaeger.sampler.manager.host'",
    "opentracing.agent_host": "synapse_properties.'jaeger.sender.agent.host'"
    ```

!!! note
    The service name used to initialize the JaegerTracer can be configured using the environment variable `SERVICE_NAME` as shown below.
    ```
    export SERVICE_NAME=customServiceName
    ```
    `SERVICE_NAME` is set to `WSO2-SYNAPSE` by default.

### Step 3.3 - Configure Grafana to visualize tracing data

In order to configure Grafana to display tracing information, follow the steps given below.

1. Add Jaeger as a data source:

    1. Access Grafana via `localhost:3000` and sign in.

    2. Click the **Configuration** icon in the left menu and then click **Data Sources**.

        [![Open Data sources]({{base_path}}/assets/img/integrate/monitoring-dashboard/open-datasources.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/open-datasources.png)

    3. Click **Add data source** to open the **Add data source** page, where all the available data source types are displayed. Here, click **Jaeger**.

        [![Select Jaeger]({{base_path}}/assets/img/integrate/monitoring-dashboard/select-jaeger.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/select-jaeger.png)

        This opens the **Data Sources/Jaeger** dialog box.

    4. In the **Data Sources/Jaeger** dialog box, enter the URL of the Jaeger query component in the **URL** field in the `http://host:port` format as shown below.

        [![Enter Basic Jaeger Information]({{base_path}}/assets/img/integrate/monitoring-dashboard/enter-basic-jaeger-information.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/enter-basic-jaeger-information.png)

    5. Click **Save and Test**. If the data source is successfully configured, it is indicated via a message.

2. Set up drill-down links to the Jaeger UI in service-level dashboards.

    1. Navigate to the settings section of the service-level dashboard by clicking the cogwheel icon in the upper-right corner.

    2. Click **Variables**. This opens the following view.

        [![Variables view]({{base_path}}/assets/img/integrate/monitoring-dashboard/variables.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/variables.png)

    3. Edit the `JaegerHost` variable and provide your Jaeger query component hostname and port in the `host:port` syntax as shown below.

        [![Constant options]({{base_path}}/assets/img/integrate/monitoring-dashboard/constant-options.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/constant-options.png)
    4. Click **Save**.

You need to perform the above steps for all the service-level dashboards (i.e., the Proxy Service dashboard, API Service dashboard, and Inbound Endpoint dashboard).

Once Grafana is successfully configured to visualize statistics, you should be correctly redirected to the Jaeger UI from the Response Time widget of each service-level dashboard as shown below.

[![jaeger ui]({{base_path}}/assets/img/integrate/monitoring-dashboard/jaeger-ui.png){: style="width:40%"}]({{base_path}}/assets/img/integrate/monitoring-dashboard/jaeger-ui.png)

## What's Next?

If you have successfully set up your analytics deployment, see the instructions on [viewing cloud native observability statistics]({{base_path}}/observe/micro-integrator/viewing-cloud-native-observability-statistics/).
diff --git a/en/docs/observe/micro-integrator/viewing-cloud-native-observability-statistics.md b/en/docs/observe/micro-integrator/viewing-cloud-native-observability-statistics.md
deleted file mode 100755
index 952d2e8af9..0000000000
--- a/en/docs/observe/micro-integrator/viewing-cloud-native-observability-statistics.md
+++ /dev/null
@@ -1,210 +0,0 @@
# Viewing Cloud Native Observability Statistics

Let's use the **dashboards** from the cloud-native observability deployment to monitor **statistics** from your integration artifacts.

## Before you begin

Set up the suitable cloud-native observability deployment. The dashboards described in this section apply to all the cloud-native deployment strategies.

See the following topics for information and instructions:

!!! Tip
    If you do not know which dashboard to download when setting up cloud-native observability, check the "Downloading the dashboard" section in the respective sub-sections below for details on the dashboard.

- Setting up [cloud-native observability for a VM environment]({{base_path}}/observe/micro-integrator/setting-up-cloud-native-observability-on-a-vm).
- Setting up [cloud-native observability for a Kubernetes environment]({{base_path}}/observe/micro-integrator/setting-up-cloud-native-observability-in-kubernetes).

## Cluster dashboard

The Cluster dashboard visualizes the overall statistics of your Micro Integrator cluster.

[![Cluster Dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-cluster-dashboard.jpg)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-cluster-dashboard.jpg)

### Downloading the dashboard

You can download the dashboard from [Grafana Labs - WSO2 Integration Cluster Metrics](https://grafana.com/grafana/dashboards/12783).

### Statistic types

The following is the list of widgets displayed in this dashboard.

| **Widget** | **Description** |
|---------------------------|-----------------|
| **Node Count** | The total number of nodes in the cluster. |
| **Service Count** | The total number of services deployed in the cluster. |
| **Node List** | The list of nodes in the cluster. The time at which the node started is displayed together with the node name.<br/><br/>You can click on a node to open the **MI Node Metrics** dashboard, which displays statistics specific to the selected node. |
| **Service List** | The list of services deployed in the cluster. The service type and the deployment time are displayed for each service. The service can be a proxy service or a REST API.<br/><br/>You can click on a proxy service to view statistics specific to it in the **WSO2 Proxy Service Metrics** dashboard.<br/><br/>You can click on a REST API service to view statistics specific to it in the **WSO2 API Metrics** dashboard. |
| **All Time Request Count** | The total number of requests handled by the cluster. |
| **All Time Error Count** | The total number of errors that have occurred for requests handled by the cluster. |
| **Request Rate** | This is a graphical representation of the number of requests handled by the cluster against time. |
| **Error Rate** | This is a graphical representation of the number of errors that have occurred for the cluster against time. |
| **Response Time** | The amount of time taken by the cluster to respond to a request, against time. |

### Purpose

This dashboard serves the following purposes:

- It provides an overview of how the cluster as a whole performs in terms of the successful execution of requests and the response time.

- It also provides the basic details of the nodes and services deployed in the cluster. This can indicate how each node/service affects the overall cluster performance. For example, if the **Error Rate** widget indicates a surge in the error rate at a particular time, you can identify a node/service that started at around the same time (as shown by the **Node List** and **Service List** widgets) as a possible cause of it.

- It provides access to other dashboards that display statistics related to specific nodes and services so that you can carry out further analysis relating to the performance of your Micro Integrator setup.

### Recommended action

- Identify the times at which the error rate and/or the response time has been rising. Depending on the time, you can investigate the cause of it (e.g., a node/service that started around the same time).

- Click on the nodes/services that you have identified as nodes/services to be further analyzed to improve the performance of your Micro Integrator setup, and view the visualizations specific to them.

- Based on the request count, make the appropriate decisions with regard to resource allocation (i.e., whether to add or reduce the number of nodes, or to leave the present number unchanged).

- Identify the popular services and make business decisions accordingly. For example, if there is a surge in the request rate, you can identify the services that were active during that time. You can analyze such services in more detail by viewing information specific to them and decide whether to invest more in them.

## Node dashboard

This displays statistics specific to a selected node.

[![Node Dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-node-metrics.jpg)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-node-metrics.jpg)

### Downloading the dashboard

You can download the dashboard from [Grafana Labs - WSO2 Integration Node Metrics](https://grafana.com/grafana/dashboards/12887).

### Statistic types

The following is the list of widgets displayed in this dashboard.

| **Widget** | **Description** |
|--------------------------------|-------------------------------------------------------------------------------------------------------------------|
| **Up Time** | The time duration that has elapsed since the node became active for the current session. |
| **Service Count** | The number of services (i.e., proxy services and REST API services) that are currently deployed in the node. |
| **All Time Request Count** | The total number of requests received by the node after it became a member of the current Micro Integrator setup. |
| **All Time Error Count** | The total number of requests handled by the node that have resulted in errors. |
| **CPU Utilization** | A visualization of the node's CPU consumption over time. |
| **JVM Heap Memory** | A visualization of the amount of JVM heap memory consumed by the node over time. |
| **Thread Count** | A visualization of the number of threads allocated to the node over time. |
| **Open File Descriptor Count** | A visualization of the number of open file descriptors held by the node over time. |
| **Services List** | The complete list of services (i.e., proxy services and REST API services) that are currently deployed in the node. |
| **Request Rate** | A visualization of the total number of requests received by the node over time. |
| **Error Rate** | A visualization of the total number of requests handled by the node that have resulted in errors over time. |
| **Response Time** | A visualization of the amount of time taken by the node to respond to requests over time. |

### Purpose

The purposes of this dashboard are as follows:

- It shows the performance of individual nodes in terms of the error count and the response time.

- It allows you to track the resource consumption of individual nodes and make decisions accordingly (e.g., to allocate more CPU cores, or to undeploy services with a high throughput if the node does not have sufficient system resources to run them).

- By clicking on the name of a service deployed in the selected node, you can open the **Proxy Service Dashboard** or the **API Dashboard** (depending on the type of the service) to view statistics specific to the selected service.

### Recommended action

- Evaluate whether the resources allocated to the node (i.e., system memory, CPU cores, etc.) are sufficient/excessive in proportion to the throughput it handles (i.e., the number of requests within a specific duration of time), and make changes accordingly. For example, suppose the number of requests that are handled is low in proportion to the node's capacity in terms of system resources. In that case, you can either reduce the resources to reduce your cost or deploy more services in the node to utilize the existing resources in a more optimal manner.

- Click on the services deployed in the node to view statistics specific to those services. This allows you to evaluate the throughput of each service to analyze further and make decisions on how to deploy the available services in the available nodes in a manner that optimizes the use of resources. It also allows you to identify the services that contribute to the total error count of the node and take appropriate action.

## WSO2 Proxy Service Metrics dashboard

In the Proxy Service dashboard, you can view information related to a specific proxy service.
- -[![Proxy Service Metrics Dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-proxy-services-dashboard.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-proxy-services-dashboard.png) - -### Downloading the dashboard - -You can download the dashboard from the [Grafana Labs - WSO2 Proxy Service Metrics](https://grafana.com/grafana/dashboards/12889). - -### Statistic types - -The following is the list of widgets displayed in this dashboard. - -| **Widget** | **Description** | -|-------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **Up Time** | The time duration that has elapsed since the proxy service started running during the current session. | -| **All Request Count** | The total number of requests received and handled by the proxy service during the selected time interval. | -| **Successful Request Count** | The total number of requests that were successfully executed by the proxy service during the selected time interval. | -| **Error Count** | The total number of requests handled by the proxy service during the selected time interval that have resulted in errors. | -| **Error Percentage** | The requests handled by the proxy service during the selected time interval that have resulted in errors, as a percentage of the total number of requests received by the proxy service during that same time interval.| -| **Deployed Node Count** | The number of nodes in which the proxy service is deployed. | -| **Request Rate** | A visualization of the total number of requests handled by the proxy service during the selected time interval. | -| **Error Rate** | A visualization of the total number of errors that have occurred for the proxy service during the selected time interval. | -| **Response Time** | A visualization of the time taken by the proxy service to respond to requests during the selected time interval. | - -### Purpose - -The purposes of this dashboard are as follows: - -- To understand the performance of a selected proxy service in terms of the number of requests it processes within a given time duration, the number/percentage of errors that have resulted, and the time taken by the proxy service to respond to requests. - -- To understand the client demand for the related business based on the number of requests received by the proxy service. - -### Recommended action - -- If the number of requests/response time is too high, deploy the proxy service in more nodes in the cluster so that the throughput is divided. - -- If there are errors, check the mediation flow of the proxy service and make changes to prevent the errors. - -## WSO2 API Metrics dashboard - -This dashboard displays overall statistics related to a specific API. - -[![API Metrics Dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-api-services-dashboard.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-api-services-dashboard.png) - -### Downloading the dashboard - -You can download the dashboard from the [Grafana Labs - WSO2 API Metrics](https://grafana.com/grafana/dashboards/12888). - -### Statistic types - -The following is the list of widgets displayed in this dashboard. 
| **Widget** | **Description** |
|-------------------------------|------------------------------------------------------------------------------------------------------|
| **Up Time** | The time duration that has elapsed since the API service started running during the current session. |
| **All Request Count** | The total number of requests received and handled by the API during the selected time interval. |
| **Successful Request Count** | The total number of requests that were successfully executed by the API during the selected time interval. |
| **Error Count** | The total number of requests handled by the API during the selected time interval that have resulted in errors. |
| **Error Percentage** | The requests handled by the API during the selected time interval that have resulted in errors, as a percentage of the total number of requests received by the API during that same time interval. |
| **Deployed Node Count** | The number of nodes in which the API service is deployed. |
| **Request Rate** | A visualization of the total number of requests handled by the API service during the selected time interval. |
| **Error Rate** | A visualization of the total number of errors that have occurred for the API service during the selected time interval. |
| **Response Time** | A visualization of the time taken by the API service to respond to requests during the selected time interval. |

### Purpose

- To understand the performance of a selected API service in terms of the number of requests it processes within a given time duration, the number/percentage of errors that have resulted, and the time taken by the API service to respond to requests.

- To understand the client demand for the related business based on the number of requests received by the API service.

### Recommended action

- If the number of requests/response time is too high, deploy the API service in more nodes in the cluster so that the throughput is divided.

- If there are errors, check the mediation flow of the API service and make changes to prevent the errors.

## WSO2 Inbound Endpoint Metrics dashboard

The Inbound Endpoint dashboard displays the overall statistics related to a selected inbound endpoint at a given time. It displays the following widgets: Up Time, All Request Count, Successful Request Count, Error Count, Error Percentage, Deployed Node Count, Request Rate, Error Rate, and Response Time.

[![Inbound Endpoint Metrics Dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-inbound-endpoint-metrics-dashboard.png)]({{base_path}}/assets/img/integrate/monitoring-dashboard/grafana-inbound-endpoint-metrics-dashboard.png)

### Downloading the dashboard

You can download the dashboard from [Grafana Labs - WSO2 Inbound Endpoint Metrics](https://grafana.com/grafana/dashboards/12890).

### Statistic types

The following is the list of widgets displayed in this dashboard.

| **Widget** | **Description** |
|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| **Up Time** | The time duration that has elapsed since the inbound endpoint became active during the current session. |
| **All Request Count** | The total number of requests received and handled by the inbound endpoint during the selected time interval. |
| **Successful Request Count** | The total number of requests that were successfully executed by the inbound endpoint during the selected time interval. |
| **Error Count** | The total number of requests handled by the inbound endpoint during the selected time interval that have resulted in errors. |
| **Error Percentage** | The requests handled by the inbound endpoint during the selected time interval that have resulted in errors, as a percentage of the total number of requests received by the endpoint during that same time interval. |
| **Deployed Node Count** | The number of nodes in which the inbound endpoint is deployed. |
| **Request Rate** | A visualization of the total number of requests handled by the inbound endpoint during the selected time interval. |
| **Error Rate** | A visualization of the total number of errors that have occurred for the inbound endpoint during the selected time interval. |
| **Response Time** | A visualization of the time taken by the inbound endpoint to respond to requests during the selected time interval. |
From aa1e0af69047f5ff0ba8796c04b345c6d7639cf2 Mon Sep 17 00:00:00 2001
From: DinithiDiaz
Date: Tue, 5 Mar 2024 10:10:07 +0530
Subject: [PATCH 05/23] Remove MI pages from Reference section

---
 .../reference/config-catalog-mi-dashboard.md | 855 -
 en/docs/reference/config-catalog-mi.md | 11552 ----------------
 .../amazondynamodb-connector-configuration.md | 57 -
 .../amazondynamodb-connector-example.md | 940 --
 .../amazondynamodb-connector-overview.md | 33 -
 .../amazondynamodb-connector-reference.md | 1865 ---
 .../amazonlambda-connector-config.md | 1225 --
 .../amazonlambda-connector-example.md | 242 -
 .../amazonlambda-connector-overview.md | 33 -
 .../setting-up-amazonlambda.md | 106 -
 .../1.x/amazons3-connector-1.x-config.md | 62 -
 .../1.x/amazons3-connector-1.x-example.md | 326 -
 .../1.x/amazons3-connector-1.x-reference.md | 4431 ------
 .../amazons3-connector-config.md | 101 -
 .../amazons3-connector-example.md | 288 -
 .../amazons3-connector-overview.md | 36 -
 .../amazons3-connector-reference.md | 3535 -----
 ...nbound-endpoint-reference-configuration.md | 113 -
 .../amazonsqs-connector-config.md | 612 -
 .../amazonsqs-connector-example.md | 219 -
 .../amazonsqs-connector-overview.md | 41 -
 .../amazonsqs-inbound-endpoint-example.md | 105 -
 ...nbound-endpoint-reference-configuration.md | 116 -
 .../as400-pcml-connector-configuration.md | 65 -
 .../as400-pcml-connector-reference.md | 393 -
 .../bigquery-connector-configuration.md | 135 -
 .../bigquery-connector-example.md | 719 -
 .../bigquery-connector-overview.md | 31 -
 .../bigquery-connector-reference.md | 1131 --
 .../cerediandayforce-overview.md | 35 -
 .../ceridiandayforce-connector-config.md | 19 -
 .../ceridiandayforce-connector-example.md | 235 -
 .../ceridiandayforce-connector-reference.md | 30 -
 .../configuration/orgunitdetails.md | 290 -
 .../configuration/orgunits.md | 784 --
 .../employee-documents/documentdetails.md | 143 -
 .../employee-documents/listofdocuments.md | 171 -
 .../employeeclockdevicegroups.md | 141 -
 .../employeecompensationsummary.md | 236 -
 .../employeecourses.md | 159 -
 .../employeeemploymentagreements.md | 354 -
 .../employeeemploymentstatuses.md | 633 -
 .../employeeemploymenttypes.md | 149 -
 .../employeehighlycompensatedemployees.md | 137 -
 .../employeehrincidents.md | 267 -
 .../employeelabordefaults.md | 157 -
 .../employeeonboardingpolicies.md | 253 -
 .../employeeorginfo.md | 351 -
 .../employeepayadjustmentcodegroups.md | 137 -
 .../employeepaygraderates.md | 165 -
.../employeeperformanceratings.md | 169 - .../employeeproperties.md | 293 - .../employeeskills.md | 149 - .../employeetrainingprograms.md | 135 - .../employeeunionmemberships.md | 150 - .../employeeworkassignments.md | 653 - .../employeeworkcontracts.md | 248 - .../employeeaddresses.md | 329 - .../employeecantaxes.md | 313 - .../employeecontacts.md | 392 - .../employeedirectdeposits.md | 155 - .../employeeemergencycontacts.md | 389 - .../employeeethnicities.md | 164 - .../employeehealthandwellness.md | 141 - .../employeemaritalstatuses.md | 250 - .../employeeustaxes.md | 310 - .../employee-time-management/availability.md | 317 - .../employeepunches.md | 442 - .../employeerawpunches.md | 324 - .../employee-time-management/schedules.md | 136 - .../timeawayfromwork.md | 174 - .../ceridiandayforce-connector/employee.md | 586 - .../i9order.md | 128 - .../recruiting/jobpostings.md | 556 - .../reportmetadataforalistofreports.md | 139 - .../reportmetadataforaspecificreport.md | 291 - .../reporting/reports.md | 171 - .../documentmanagementsecuritygroups.md | 137 - .../employeelocations.md | 298 - .../employeemanagers.md | 147 - .../employeeroles.md | 250 - .../employeessoaccounts.md | 222 - .../employeeworkassignmentmanagers.md | 267 - .../userpayadjustmentcodegroups.md | 137 - .../reference/connectors/connector-usage.md | 155 - .../connectors/connectors-overview.md | 147 - .../csv-module/csv-module-config.md | 635 - .../db-event-inbound-endpoint-config.md | 113 - .../db-event-inbound-endpoint-example.md | 143 - .../db-event-inbound-endpoint-overview.md | 33 - .../connectors/develop-connectors.md | 1254 -- .../documentum/documentum-example.md | 228 - .../documentum/documentum-overview.md | 53 - .../documentum/documentum-reference.md | 648 - .../email-connector/email-connector-config.md | 684 - .../email-connector-example.md | 272 - .../email-connector-overview.md | 31 - .../fhir-connector/fhir-connector-config.md | 886 -- .../fhir-connector/fhir-connector-example.md | 425 - .../fhir-connector/fhir-connector-overview.md | 33 - .../3.x/file-connector-3.x-config.md | 1595 --- .../3.x/file-connector-3.x-example.md | 184 - .../file-connector/file-connector-config.md | 2936 ---- .../file-connector/file-connector-example.md | 253 - .../file-connector/file-connector-overview.md | 58 - .../gmail-connector/configuring-gmail-api.md | 50 - .../gmail-connector/gmail-connector-config.md | 1456 -- .../gmail-connector-example.md | 121 - .../gmail-connector-overview.md | 33 - .../google-firebase-configuration.md | 326 - .../google-firebase-connector-example.md | 276 - .../google-firebase-overview.md | 35 - .../google-firebase-setup.md | 11 - .../googlepubsub-connector-configuration.md | 65 - .../googlepubsub-connector-example.md | 362 - .../googlepubsub-connector-overview.md | 42 - .../googlepubsub-connector-reference.md | 305 - .../get-credentials-for-google-spreadsheet.md | 56 - .../google-spreadsheet-connector-config.md | 2453 ---- .../google-spreadsheet-connector-example.md | 393 - .../google-spreadsheet-overview.md | 33 - .../iso8583-connector-configuration.md | 27 - .../iso8583-connector-example.md | 94 - .../iso8583-connector-overview.md | 49 - .../iso8583-connector-reference.md | 61 - .../iso8583-inbound-endpoint-config.md | 71 - .../iso8583-inbound-endpoint-example.md | 107 - .../jira-connector/jira-connector-config.md | 4317 ------ .../jira-connector/jira-connector-example.md | 343 - .../jira-connector/jira-connector-overview.md | 29 - .../3.0.x/kafka-connector-config.md | 385 - 
.../enabling-security-for-kafka.md | 182 - .../kafka-connector-avro-producer-example.md | 172 - .../kafka-connector/kafka-connector-config.md | 501 - .../kafka-connector-overview.md | 62 - .../kafka-connector-producer-example.md | 105 - .../kafka-inbound-endpoint-config.md | 379 - .../kafka-inbound-endpoint-example.md | 155 - .../kafka-connector/setting-up-kafka.md | 75 - .../ldap-connector/ldap-connector-example.md | 217 - .../ldap-connector/ldap-connector-overview.md | 35 - .../ldap-server-configuration.md | 424 - .../ldap-connector/setting-up-ldap.md | 16 - ...crosoft-azure-storage-connector-example.md | 334 - .../1.x/microsoft-azure-storage-reference.md | 264 - ...crosoft-azure-storage-connector-example.md | 338 - .../2.x/microsoft-azure-storage-reference.md | 459 - .../microsoft-azure-overview.md | 43 - .../microsoft-azure-storage-configuration.md | 130 - .../microsoft-dynamics365-configuration.md | 165 - .../mongodb-connector-config.md | 1837 --- .../mongodb-connector-example.md | 263 - .../mongodb-connector-overview.md | 35 - .../1.0.1/redis-connector-reference.md | 1987 --- .../2.1.x/redis-connector-reference.md | 2102 --- .../2.2.x/redis-connector-reference.md | 2136 --- .../2.4.x/redis-connector-reference.md | 2199 --- .../2.7.x/redis-connector-reference.md | 2204 --- .../redis-connector-configuration.md | 25 - .../redis-connector-example.md | 413 - .../redis-connector-overview.md | 37 - .../salesforce-soap-reference.md | 1881 --- .../salesforcebulk-connector-configuration.md | 36 - .../salesforcebulk-connector-example.md | 339 - .../salesforcebulk-reference.md | 663 - .../salesforcebulk-v2-connector-example.md | 760 - .../salesforcebulk-v2-reference.md | 552 - .../sf-inbound-endpoint-configuration.md | 135 - .../sf-inbound-endpoint-example.md | 124 - ...nbound-endpoint-reference-configuration.md | 96 - .../salesforce-connectors/sf-overview.md | 85 - .../sf-rest-connector-config.md | 4060 ------ .../sf-rest-connector-example.md | 247 - .../sf-soap-connector-config.md | 53 - .../sf-soap-connector-example.md | 270 - .../salesforce-soap-reference.md | 1121 -- .../sf-soap-connector-config.md | 59 - .../sf-soap-connector-example.md | 270 - .../salesforcebulk-connector-configuration.md | 36 - .../salesforcebulk-connector-example.md | 341 - .../salesforcebulk-reference.md | 628 - .../servicenow-connector-config.md | 268 - .../servicenow-connector-example.md | 264 - .../servicenow-overview.md | 35 - .../settingup-servicenow-instance.md | 28 - .../smpp-connector/smpp-connector-config.md | 1035 -- .../smpp-connector-configuration.md | 37 - .../smpp-connector/smpp-connector-example.md | 206 - .../smpp-connector/smpp-connector-overview.md | 44 - .../smpp-inbound-endpoint-config.md | 149 - .../smpp-inbound-endpoint-example.md | 139 - .../twitter-connector-configuration.md | 24 - .../twitter-connector-credentials.md | 35 - .../twitter-connector-example.md | 112 - .../twitter-connector-overview.md | 38 - .../twitter-connector-reference.md | 3462 ----- .../utility-module/utility-module-config.md | 521 - .../utility-module/utility-module-overview.md | 39 - .../reference/connectors/why-connectors.md | 45 - .../reference/mediators/about-mediators.md | 63 - .../reference/mediators/aggregate-mediator.md | 146 - .../reference/mediators/builder-mediator.md | 21 - en/docs/reference/mediators/cache-mediator.md | 293 - en/docs/reference/mediators/call-mediator.md | 417 - .../mediators/call-template-mediator.md | 192 - .../reference/mediators/callout-mediator.md | 210 - 
en/docs/reference/mediators/class-mediator.md | 138 - en/docs/reference/mediators/clone-mediator.md | 124 - .../data-mapper-json-schema-specification.md | 427 - .../mediators/data-mapper-mediator.md | 652 - .../reference/mediators/db-report-mediator.md | 449 - .../reference/mediators/dblookup-mediator.md | 305 - en/docs/reference/mediators/drop-mediator.md | 48 - en/docs/reference/mediators/dss-mediator.md | 537 - en/docs/reference/mediators/ejb-mediator.md | 77 - .../reference/mediators/enrich-mediator.md | 495 - .../mediators/entitlement-mediator.md | 167 - .../reference/mediators/fastxslt-mediator.md | 167 - en/docs/reference/mediators/fault-mediator.md | 221 - .../reference/mediators/filter-mediator.md | 120 - .../reference/mediators/foreach-mediator.md | 121 - .../reference/mediators/header-mediator.md | 176 - .../reference/mediators/iterate-mediator.md | 199 - .../mediators/json-transform-mediator.md | 302 - en/docs/reference/mediators/log-mediator.md | 127 - .../reference/mediators/loopback-mediator.md | 69 - en/docs/reference/mediators/ntlm-mediator.md | 88 - en/docs/reference/mediators/oauth-mediator.md | 48 - .../mediators/payloadfactory-mediator.md | 1280 -- .../mediators/property-group-mediator.md | 67 - .../reference/mediators/property-mediator.md | 323 - .../accessing-properties-with-xpath.md | 634 - .../property-reference/axis2-properties.md | 558 - .../property-reference/generic-properties.md | 959 -- .../http-transport-properties.md | 609 - .../message-context-properties.md | 108 - .../property-reference/soap-headers.md | 102 - .../reference/mediators/respond-mediator.md | 70 - .../reference/mediators/script-mediator.md | 503 - en/docs/reference/mediators/send-mediator.md | 170 - .../reference/mediators/sequence-mediator.md | 98 - .../reference/mediators/smooks-mediator.md | 125 - en/docs/reference/mediators/store-mediator.md | 136 - .../reference/mediators/switch-mediator.md | 82 - .../reference/mediators/throttle-mediator.md | 284 - .../mediators/transaction-mediator.md | 36 - .../mediators/urlrewrite-mediator.md | 151 - .../reference/mediators/validate-mediator.md | 381 - .../reference/mediators/xquery-mediator.md | 230 - en/docs/reference/mediators/xslt-mediator.md | 319 - .../customizing-secure-vault.md | 133 - .../security-implementation.md | 113 - .../mi-security-reference/using_keystores.md | 96 - .../about-message-stores-processors.md | 143 - .../synapse-properties/data-services.md | 113 - .../datasource-configuration-parameters.md | 40 - .../elements-of-a-data-service.md | 530 - .../data-services/input-validators.md | 28 - .../data-services/mapping-data-types.md | 97 - .../data-services/query-parameters.md | 320 - .../data-services/sample-queries.md | 219 - .../data-services/using-namespaces.md | 135 - .../synapse-properties/endpoint-properties.md | 1181 -- .../about-inbound-endpoints.md | 127 - .../custom-inbound-endpoint-properties.md | 82 - .../mqtt-inbound-endpoint-properties.md | 180 - .../rabbitmq-inbound-endpoint-properties.md | 347 - .../cxf-ws-rm-inbound-endpoint-properties.md | 72 - .../hl7-inbound-endpoint-properties.md | 106 - .../http-inbound-endpoint-properties.md | 208 - .../websocket-inbound-endpoint-properties.md | 219 - .../file-inbound-endpoint-properties.md | 35 - .../jms-inbound-endpoint-properties.md | 225 - .../kafka-inbound-endpoint-properties.md | 268 - .../msg-sampling-processor-properties.md | 78 - ...ailover-forwarding-processor-properties.md | 137 - ...g-sched-forwarding-processor-properties.md | 213 - 
.../custom-msg-store-properties.md | 24 - .../in-memory-msg-store-properties.md | 26 - .../jdbc-msg-store-properties.md | 160 - .../jms-msg-store-properties.md | 128 - .../rabbitmq-msg-store-properties.md | 170 - .../resequence-msg-store-properties.md | 123 - .../wso2mb-msg-store-properties.md | 52 - .../proxy-service-properties.md | 301 - .../pull/proxy-service-add-properties-pull.md | 10 - .../synapse-properties/rest-api-properties.md | 120 - .../scheduled-task-properties.md | 257 - .../synapse-properties/sequence-properties.md | 4 - .../synapse-properties/template-properties.md | 190 - .../fix-transport-parameters.md | 133 - .../hl7-transport-parameters.md | 203 - .../jms-transport-parameters.md | 503 - .../mailto-transport-parameters.md | 252 - .../mqtt-transport-parameters.md | 88 - .../rabbitmq-transport-parameters.md | 364 - .../vfs-transport-parameters.md | 509 - .../configuring-xslt-mediation-with-xalan.md | 19 - en/docs/troubleshooting/error-handling-mi.md | 106 - .../troubleshooting/troubleshooting-jms.md | 121 - 300 files changed, 123904 deletions(-) delete mode 100644 en/docs/reference/config-catalog-mi-dashboard.md delete mode 100644 en/docs/reference/config-catalog-mi.md delete mode 100644 en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration.md delete mode 100644 en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-example.md delete mode 100644 en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-overview.md delete mode 100644 en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md delete mode 100644 en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-config.md delete mode 100644 en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-example.md delete mode 100644 en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-overview.md delete mode 100644 en/docs/reference/connectors/amazonlambda-connector/setting-up-amazonlambda.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-config.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-example.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/amazons3-connector-config.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/amazons3-connector-example.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/amazons3-connector-overview.md delete mode 100644 en/docs/reference/connectors/amazons3-connector/amazons3-connector-reference.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazon-inbound-endpoint-1.0.x/amazonsqs-inbound-endpoint-reference-configuration.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-example.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-overview.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md delete mode 100644 en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-configuration.md 
delete mode 100644 en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-reference.md delete mode 100644 en/docs/reference/connectors/bigquery-connector/bigquery-connector-configuration.md delete mode 100644 en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md delete mode 100644 en/docs/reference/connectors/bigquery-connector/bigquery-connector-overview.md delete mode 100644 en/docs/reference/connectors/bigquery-connector/bigquery-connector-reference.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/cerediandayforce-overview.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-example.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunits.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/documentdetails.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/listofdocuments.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeclockdevicegroups.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecompensationsummary.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecourses.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentagreements.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentstatuses.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymenttypes.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehighlycompensatedemployees.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehrincidents.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeelabordefaults.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeonboardingpolicies.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeorginfo.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepayadjustmentcodegroups.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepaygraderates.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeperformanceratings.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeproperties.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeskills.md delete mode 100644 
en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeetrainingprograms.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeunionmemberships.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkassignments.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkcontracts.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeaddresses.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecantaxes.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecontacts.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeedirectdeposits.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeemergencycontacts.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeethnicities.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeehealthandwellness.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeemaritalstatuses.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeustaxes.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/availability.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeepunches.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeerawpunches.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/schedules.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/timeawayfromwork.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employee.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/employment-eligibility-verification/i9order.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/recruiting/jobpostings.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforalistofreports.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforaspecificreport.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/reporting/reports.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/documentmanagementsecuritygroups.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeelocations.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeemanagers.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeroles.md delete mode 100644 
en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeessoaccounts.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeworkassignmentmanagers.md delete mode 100644 en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/userpayadjustmentcodegroups.md delete mode 100644 en/docs/reference/connectors/connector-usage.md delete mode 100644 en/docs/reference/connectors/connectors-overview.md delete mode 100644 en/docs/reference/connectors/csv-module/csv-module-config.md delete mode 100644 en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config.md delete mode 100644 en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-overview.md delete mode 100644 en/docs/reference/connectors/develop-connectors.md delete mode 100644 en/docs/reference/connectors/documentum/documentum-example.md delete mode 100644 en/docs/reference/connectors/documentum/documentum-overview.md delete mode 100644 en/docs/reference/connectors/documentum/documentum-reference.md delete mode 100644 en/docs/reference/connectors/email-connector/email-connector-config.md delete mode 100644 en/docs/reference/connectors/email-connector/email-connector-example.md delete mode 100644 en/docs/reference/connectors/email-connector/email-connector-overview.md delete mode 100644 en/docs/reference/connectors/fhir-connector/fhir-connector-config.md delete mode 100644 en/docs/reference/connectors/fhir-connector/fhir-connector-example.md delete mode 100644 en/docs/reference/connectors/fhir-connector/fhir-connector-overview.md delete mode 100644 en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-config.md delete mode 100644 en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-example.md delete mode 100644 en/docs/reference/connectors/file-connector/file-connector-config.md delete mode 100644 en/docs/reference/connectors/file-connector/file-connector-example.md delete mode 100644 en/docs/reference/connectors/file-connector/file-connector-overview.md delete mode 100644 en/docs/reference/connectors/gmail-connector/configuring-gmail-api.md delete mode 100644 en/docs/reference/connectors/gmail-connector/gmail-connector-config.md delete mode 100644 en/docs/reference/connectors/gmail-connector/gmail-connector-example.md delete mode 100644 en/docs/reference/connectors/gmail-connector/gmail-connector-overview.md delete mode 100644 en/docs/reference/connectors/google-firebase-connector/google-firebase-configuration.md delete mode 100644 en/docs/reference/connectors/google-firebase-connector/google-firebase-connector-example.md delete mode 100644 en/docs/reference/connectors/google-firebase-connector/google-firebase-overview.md delete mode 100644 en/docs/reference/connectors/google-firebase-connector/google-firebase-setup.md delete mode 100644 en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration.md delete mode 100644 en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md delete mode 100644 en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-overview.md delete mode 100644 en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-reference.md delete mode 100644 
en/docs/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet.md delete mode 100644 en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config.md delete mode 100644 en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-example.md delete mode 100644 en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-overview.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-connector-configuration.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-connector-example.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-connector-overview.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-connector-reference.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-config.md delete mode 100644 en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/jira-connector/jira-connector-config.md delete mode 100644 en/docs/reference/connectors/jira-connector/jira-connector-example.md delete mode 100644 en/docs/reference/connectors/jira-connector/jira-connector-overview.md delete mode 100644 en/docs/reference/connectors/kafka-connector/3.0.x/kafka-connector-config.md delete mode 100644 en/docs/reference/connectors/kafka-connector/enabling-security-for-kafka.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-connector-avro-producer-example.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-connector-config.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-connector-overview.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-connector-producer-example.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-config.md delete mode 100644 en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/kafka-connector/setting-up-kafka.md delete mode 100644 en/docs/reference/connectors/ldap-connector/ldap-connector-example.md delete mode 100644 en/docs/reference/connectors/ldap-connector/ldap-connector-overview.md delete mode 100644 en/docs/reference/connectors/ldap-connector/ldap-server-configuration.md delete mode 100644 en/docs/reference/connectors/ldap-connector/setting-up-ldap.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-connector-example.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-reference.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-connector-example.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-overview.md delete mode 100644 en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration.md delete mode 100644 en/docs/reference/connectors/microsoft-dynamics365-connector/microsoft-dynamics365-configuration.md delete mode 100644 en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md delete mode 100644 
en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md delete mode 100644 en/docs/reference/connectors/mongodb-connector/mongodb-connector-overview.md delete mode 100644 en/docs/reference/connectors/redis-connector/1.0.1/redis-connector-reference.md delete mode 100644 en/docs/reference/connectors/redis-connector/2.1.x/redis-connector-reference.md delete mode 100644 en/docs/reference/connectors/redis-connector/2.2.x/redis-connector-reference.md delete mode 100644 en/docs/reference/connectors/redis-connector/2.4.x/redis-connector-reference.md delete mode 100644 en/docs/reference/connectors/redis-connector/2.7.x/redis-connector-reference.md delete mode 100644 en/docs/reference/connectors/redis-connector/redis-connector-configuration.md delete mode 100644 en/docs/reference/connectors/redis-connector/redis-connector-example.md delete mode 100644 en/docs/reference/connectors/redis-connector/redis-connector-overview.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforce-soap-reference.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-configuration.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforcebulk-reference.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-reference.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-overview.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-config.md delete mode 100644 en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforce-soap-connector/salesforce-soap-reference.md delete mode 100644 en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-config.md delete mode 100644 en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-configuration.md delete mode 100644 en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-example.md delete mode 100644 en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-reference.md delete mode 100644 en/docs/reference/connectors/servicenow-connector/servicenow-connector-config.md delete mode 100644 en/docs/reference/connectors/servicenow-connector/servicenow-connector-example.md delete mode 100644 en/docs/reference/connectors/servicenow-connector/servicenow-overview.md delete mode 100644 en/docs/reference/connectors/servicenow-connector/settingup-servicenow-instance.md delete mode 100644 en/docs/reference/connectors/smpp-connector/smpp-connector-config.md delete mode 100644 
en/docs/reference/connectors/smpp-connector/smpp-connector-configuration.md delete mode 100644 en/docs/reference/connectors/smpp-connector/smpp-connector-example.md delete mode 100644 en/docs/reference/connectors/smpp-connector/smpp-connector-overview.md delete mode 100644 en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-config.md delete mode 100644 en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md delete mode 100644 en/docs/reference/connectors/twitter-connector/twitter-connector-configuration.md delete mode 100644 en/docs/reference/connectors/twitter-connector/twitter-connector-credentials.md delete mode 100644 en/docs/reference/connectors/twitter-connector/twitter-connector-example.md delete mode 100644 en/docs/reference/connectors/twitter-connector/twitter-connector-overview.md delete mode 100644 en/docs/reference/connectors/twitter-connector/twitter-connector-reference.md delete mode 100644 en/docs/reference/connectors/utility-module/utility-module-config.md delete mode 100644 en/docs/reference/connectors/utility-module/utility-module-overview.md delete mode 100644 en/docs/reference/connectors/why-connectors.md delete mode 100644 en/docs/reference/mediators/about-mediators.md delete mode 100644 en/docs/reference/mediators/aggregate-mediator.md delete mode 100644 en/docs/reference/mediators/builder-mediator.md delete mode 100644 en/docs/reference/mediators/cache-mediator.md delete mode 100644 en/docs/reference/mediators/call-mediator.md delete mode 100644 en/docs/reference/mediators/call-template-mediator.md delete mode 100644 en/docs/reference/mediators/callout-mediator.md delete mode 100644 en/docs/reference/mediators/class-mediator.md delete mode 100644 en/docs/reference/mediators/clone-mediator.md delete mode 100644 en/docs/reference/mediators/data-mapper-json-schema-specification.md delete mode 100644 en/docs/reference/mediators/data-mapper-mediator.md delete mode 100644 en/docs/reference/mediators/db-report-mediator.md delete mode 100644 en/docs/reference/mediators/dblookup-mediator.md delete mode 100644 en/docs/reference/mediators/drop-mediator.md delete mode 100644 en/docs/reference/mediators/dss-mediator.md delete mode 100644 en/docs/reference/mediators/ejb-mediator.md delete mode 100644 en/docs/reference/mediators/enrich-mediator.md delete mode 100644 en/docs/reference/mediators/entitlement-mediator.md delete mode 100644 en/docs/reference/mediators/fastxslt-mediator.md delete mode 100644 en/docs/reference/mediators/fault-mediator.md delete mode 100644 en/docs/reference/mediators/filter-mediator.md delete mode 100644 en/docs/reference/mediators/foreach-mediator.md delete mode 100644 en/docs/reference/mediators/header-mediator.md delete mode 100644 en/docs/reference/mediators/iterate-mediator.md delete mode 100644 en/docs/reference/mediators/json-transform-mediator.md delete mode 100644 en/docs/reference/mediators/log-mediator.md delete mode 100644 en/docs/reference/mediators/loopback-mediator.md delete mode 100644 en/docs/reference/mediators/ntlm-mediator.md delete mode 100644 en/docs/reference/mediators/oauth-mediator.md delete mode 100644 en/docs/reference/mediators/payloadfactory-mediator.md delete mode 100644 en/docs/reference/mediators/property-group-mediator.md delete mode 100644 en/docs/reference/mediators/property-mediator.md delete mode 100644 en/docs/reference/mediators/property-reference/accessing-properties-with-xpath.md delete mode 100644 en/docs/reference/mediators/property-reference/axis2-properties.md delete mode 
100644 en/docs/reference/mediators/property-reference/generic-properties.md delete mode 100644 en/docs/reference/mediators/property-reference/http-transport-properties.md delete mode 100644 en/docs/reference/mediators/property-reference/message-context-properties.md delete mode 100644 en/docs/reference/mediators/property-reference/soap-headers.md delete mode 100644 en/docs/reference/mediators/respond-mediator.md delete mode 100644 en/docs/reference/mediators/script-mediator.md delete mode 100644 en/docs/reference/mediators/send-mediator.md delete mode 100644 en/docs/reference/mediators/sequence-mediator.md delete mode 100644 en/docs/reference/mediators/smooks-mediator.md delete mode 100644 en/docs/reference/mediators/store-mediator.md delete mode 100644 en/docs/reference/mediators/switch-mediator.md delete mode 100644 en/docs/reference/mediators/throttle-mediator.md delete mode 100644 en/docs/reference/mediators/transaction-mediator.md delete mode 100644 en/docs/reference/mediators/urlrewrite-mediator.md delete mode 100644 en/docs/reference/mediators/validate-mediator.md delete mode 100644 en/docs/reference/mediators/xquery-mediator.md delete mode 100644 en/docs/reference/mediators/xslt-mediator.md delete mode 100644 en/docs/reference/mi-security-reference/customizing-secure-vault.md delete mode 100644 en/docs/reference/mi-security-reference/security-implementation.md delete mode 100644 en/docs/reference/mi-security-reference/using_keystores.md delete mode 100644 en/docs/reference/synapse-properties/about-message-stores-processors.md delete mode 100644 en/docs/reference/synapse-properties/data-services.md delete mode 100644 en/docs/reference/synapse-properties/data-services/datasource-configuration-parameters.md delete mode 100644 en/docs/reference/synapse-properties/data-services/elements-of-a-data-service.md delete mode 100644 en/docs/reference/synapse-properties/data-services/input-validators.md delete mode 100644 en/docs/reference/synapse-properties/data-services/mapping-data-types.md delete mode 100644 en/docs/reference/synapse-properties/data-services/query-parameters.md delete mode 100644 en/docs/reference/synapse-properties/data-services/sample-queries.md delete mode 100644 en/docs/reference/synapse-properties/data-services/using-namespaces.md delete mode 100644 en/docs/reference/synapse-properties/endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/about-inbound-endpoints.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/custom-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/event-based-inbound-endpoints/mqtt-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/event-based-inbound-endpoints/rabbitmq-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/cxf-ws-rm-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/hl7-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/http-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/listening-inbound-endpoints/websocket-inbound-endpoint-properties.md delete mode 100644 
en/docs/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/file-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/jms-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/inbound-endpoints/polling-inbound-endpoints/kafka-inbound-endpoint-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-processors/msg-sampling-processor-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-processors/msg-sched-failover-forwarding-processor-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-processors/msg-sched-forwarding-processor-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/custom-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/in-memory-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/jdbc-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/jms-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/rabbitmq-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/resequence-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/message-stores/wso2mb-msg-store-properties.md delete mode 100644 en/docs/reference/synapse-properties/proxy-service-properties.md delete mode 100644 en/docs/reference/synapse-properties/pull/proxy-service-add-properties-pull.md delete mode 100644 en/docs/reference/synapse-properties/rest-api-properties.md delete mode 100644 en/docs/reference/synapse-properties/scheduled-task-properties.md delete mode 100644 en/docs/reference/synapse-properties/sequence-properties.md delete mode 100644 en/docs/reference/synapse-properties/template-properties.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/fix-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/hl7-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/jms-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/mailto-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/mqtt-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/rabbitmq-transport-parameters.md delete mode 100644 en/docs/reference/synapse-properties/transport-parameters/vfs-transport-parameters.md delete mode 100644 en/docs/troubleshooting/configuring-xslt-mediation-with-xalan.md delete mode 100644 en/docs/troubleshooting/error-handling-mi.md delete mode 100644 en/docs/troubleshooting/troubleshooting-jms.md diff --git a/en/docs/reference/config-catalog-mi-dashboard.md b/en/docs/reference/config-catalog-mi-dashboard.md deleted file mode 100644 index b4976aaf42..0000000000 --- a/en/docs/reference/config-catalog-mi-dashboard.md +++ /dev/null @@ -1,855 +0,0 @@ -# Micro Integrator Dashboard Configuration Catalog - -All the server-level configurations of your Micro Integrator Dashboard can be applied using a single configuration file, which is the `deployment.toml` file (stored in the `MI_DASHBOARD_HOME/conf` directory). 
The complete list of configuration parameters that you can use in the `deployment.toml` file is given below, along with descriptions.

## Instructions for use

To update the product configurations:

1. Open the `deployment.toml` file (stored in the `MI_DASHBOARD_HOME/conf` directory).
2. Select the required configuration headers and parameters from the list given below and apply them to the `deployment.toml` file.

The **default** `deployment.toml` file of the Micro Integrator Dashboard is as follows:

```toml
[server_config]
port = 9743

[heartbeat_config]
pool_size = 15

[mi_user_store]
username = "admin"
password = "admin"

[keystore]
file_name = "conf/security/dashboard.jks"
password = "wso2carbon"
key_password = "wso2carbon"
```

## Deployment
```toml
[server_config]
port = 9743
```

**[server_config]** (Required)

This configuration header is required for configuring the deployment parameters that are used for identifying a Micro Integrator Dashboard server.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `port` | integer (required) | `9743` | The port of the Micro Integrator Dashboard. |
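If the default port is already in use, it can be changed as in the following sketch (the value `9744` is only illustrative; pick any free port):

```toml
[server_config]
port = 9744  # illustrative value; the dashboard is then served on this port
```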

## Heart beat

```toml
[heartbeat_config]
pool_size = 15
```

**[heartbeat_config]** (Required)

This configuration header is required for the Micro Integrator Dashboard server to listen to the Micro Integrator runtimes.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `pool_size` | integer (required) | `15` | The Micro Integrator Dashboard uses a thread pool executor to create threads and to handle incoming requests from Micro Integrator runtimes. This parameter controls the number of threads used by the executor pool. |
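For example, a deployment with many Micro Integrator nodes reporting heartbeats may need a larger executor pool; a sketch with an illustrative value:

```toml
[heartbeat_config]
pool_size = 30  # illustrative value; size this to the number of reporting runtimes
```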

## Micro Integrator User Store

```toml
[mi_user_store]
username = "admin"
password = "admin"
```

**[mi_user_store]** (Required)

This configuration header is required for the Micro Integrator Dashboard server to connect with the Micro Integrator instances.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `username` | string (required) | `"admin"` | The user name for signing in to the Micro Integrator runtimes. |
| `password` | string (required) | `"admin"` | The user password for signing in to the Micro Integrator runtimes. |
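The default `admin`/`admin` credentials should be replaced in any real deployment; a sketch with hypothetical values:

```toml
[mi_user_store]
username = "dashboard_admin"    # hypothetical user that can sign in to the MI runtimes
password = "a-strong-password"  # hypothetical; avoid committing plain-text passwords
```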

## Keystore

```toml
[keystore]
file_name = "conf/security/dashboard.jks"
password = "wso2carbon"
key_password = "wso2carbon"
```

**[keystore]** (Required)

This configuration header is used for SSL handshaking when the server communicates with the web browser.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `file_name` | string (required) | `conf/security/dashboard.jks` | The name of the keystore file that is used for SSL communication. |
| `password` | string (required) | `wso2carbon` | The password of the keystore file that is used for SSL communication. The keystore password is used when accessing the keys in the keystore. |
| `key_password` | string (required) | `wso2carbon` | The password of the private key that is included in the keystore. |
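To serve the dashboard with your own certificate instead of the default one, point this configuration at a custom keystore; a sketch assuming a hypothetical `custom-dashboard.jks` placed under `conf/security`:

```toml
[keystore]
file_name = "conf/security/custom-dashboard.jks"  # hypothetical keystore file
password = "custom-store-pass"                    # hypothetical password
key_password = "custom-key-pass"                  # hypothetical password
```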

## Truststore

```toml
[truststore]
file_name = "conf/security/wso2truststore.jks"
password = "wso2carbon"
```

**[truststore]**

This configuration header is required for configuring the parameters that connect the Micro Integrator Dashboard to the keystore file (trust store) that is used to store the digital certificates that the server trusts for SSL communication.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `file_name` | string (required) | - | The path of the keystore file that is used for storing the trusted digital certificates. |
| `password` | string (required) | - | The password of the keystore file that is used as the trust store. |
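When the dashboard must trust certificates from runtimes that use custom keys, a separate trust store can be configured; a sketch with hypothetical values:

```toml
[truststore]
file_name = "conf/security/custom-truststore.jks"  # hypothetical trust store holding the runtimes' certificates
password = "custom-trust-pass"                     # hypothetical password
```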

## Single Sign-On

!!! note
    Upgrade the Micro Integrator Dashboard to version 4.0.1 or above to enable this feature.

    This feature was tested with WSO2 IS 5.10.0 and Shibboleth 4.1.2. There may be compatibility issues when using other vendors.

```toml
[sso]
enable = true
client_id = "abcqet54mfD6t5d7"
base_url = "https://localhost/oauth2"
jwt_issuer = "https://localhost/oauth2"
resource_server_URLs = ["https://localhost:9743"]
sign_in_redirect_URL = "https://localhost:9743/sso"
admin_group_attribute = "groups"
admin_groups = ["admin", "tester"]

[[sso.authorization_request.params]]
key = "app_id"
value = "C123d"
```

**[sso]** (Required)

This configuration header is required for configuring Single Sign-On with OpenID Connect.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enable` | boolean (required) | `false` | Use this parameter to enable Single Sign-On. Possible values: `true` or `false`. |
| `client_id` | string (required) | - | Specify the client ID generated from the Identity Provider. |
| `base_url` | string (required) | - | The URL of the Identity Provider. |
| `well_known_endpoint` | string | - | The well-known endpoint that is used to get the OpenID Connect metadata of your Identity Provider. |
| `jwt_issuer` | string (required) | - | The Identity Provider's issuer identifier. |
| `override_well_known_endpoint` | boolean | `false` | Use this parameter to manually define the OpenID Connect endpoints of the Identity Provider. When overriding is enabled, you need to define the authorization, token, user-info, token-revocation, introspection, and logout endpoints. |
| `jwks_endpoint` | string | - | The JWKS endpoint URL. |
| `authorization_endpoint` | string | `"/oauth2/authorize"` | The authorization endpoint URL. |
| `token_endpoint` | string | `"/oauth2/token"` | The token endpoint URL. |
| `user_info_endpoint` | string | - | The user info endpoint URL. |
| `revocation_endpoint` | string | `"/oauth2/revoke"` | The token revocation endpoint URL. |
| `introspection_endpoint` | string | - | The introspection endpoint URL. |
| `end_session_endpoint` | string | `"/oidc/logout"` | The logout endpoint URL. |
| `resource_server_URLs` | array (required) | `["https://localhost:9743"]` | The URL of the Micro Integrator Dashboard, in the form `["https://{hostname/ip}:{port}"]`. Be sure to replace {hostname/ip} and {port} with the relevant values. |
| `sign_in_redirect_URL` | string (required) | `"https://localhost:9743/sso"` | The Sign In redirect URL of the Micro Integrator Dashboard, in the form `"https://{hostname/ip}:{port}/sso"`. Be sure to replace {hostname/ip} and {port} with the relevant values. |
| `sign_out_redirect_URL` | string | `"https://localhost:9743"` | The Sign Out redirect URL of the Micro Integrator Dashboard, in the form `"https://{hostname/ip}:{port}"`. Be sure to replace {hostname/ip} and {port} with the relevant values. |
| `admin_group_attribute` | string | - | The claim name used by the Identity Provider to determine the group of the user. |
| `admin_groups` | array | - | The groups that are used to grant admin privileges to users, e.g., `["publisher", "tester"]` or any group assigned to the users. If the user belongs to any of the defined groups, that user is considered an admin user. |
| `enable_PKCE` | boolean | `true` | Use this parameter to specify whether PKCE should be used with the request for the authorization code. |
| `scope` | array | `["openid"]` | Use this parameter to specify the requested scopes. |
| `user_name_attribute` | string | `"sub"` | Use this parameter to specify the attribute to be used as the user name in the dashboard. |
| `additional_trusted_audience` | array | - | Any additional audiences, apart from the `client_id` configured in the SSO configurations, e.g., `["account", "finance"]`. |

**[[sso.authorization_request.params]]**

This configuration header is required for defining custom parameters that need to be sent with the authorization request to the Identity Provider.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `key` | string | - | Use this parameter to specify the key of the parameter you want to send with the authorization request. |
| `value` | string | - | Use this parameter to specify the value of the parameter you want to send with the authorization request. |
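Where the Identity Provider does not expose a usable well-known endpoint, the individual endpoints can be set manually. The following is a minimal sketch, assuming a hypothetical identity provider at `https://is.example.com:9443`; the client ID and any endpoint paths beyond the documented defaults are illustrative and must match your provider:

```toml
[sso]
enable = true
client_id = "exampleClientId"                    # hypothetical client ID issued by the IdP
base_url = "https://is.example.com:9443/oauth2"  # hypothetical IdP URL
jwt_issuer = "https://is.example.com:9443/oauth2/token"
resource_server_URLs = ["https://localhost:9743"]
sign_in_redirect_URL = "https://localhost:9743/sso"

# Define the OpenID Connect endpoints manually instead of
# reading them from the well-known endpoint.
override_well_known_endpoint = true
authorization_endpoint = "/oauth2/authorize"
token_endpoint = "/oauth2/token"
user_info_endpoint = "/oauth2/userinfo"          # illustrative path
revocation_endpoint = "/oauth2/revoke"
introspection_endpoint = "/oauth2/introspect"    # illustrative path
end_session_endpoint = "/oidc/logout"
```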
diff --git a/en/docs/reference/config-catalog-mi.md b/en/docs/reference/config-catalog-mi.md
deleted file mode 100644
index 974d53e0f8..0000000000
--- a/en/docs/reference/config-catalog-mi.md
+++ /dev/null
@@ -1,11552 +0,0 @@
# Integration Server Configurations

All the server-level configurations of your Micro Integrator instance can be applied using a single configuration file, which is the `deployment.toml` file (stored in the `MI_HOME/conf` directory).

The complete list of configuration parameters that you can use in the `deployment.toml` file is given below, along with descriptions. You can also see the documentation on product [installation and setup](../install-and-setup/install-and-setup-overview.md) for details on applying product configurations to your Micro Integrator deployment.

## Instructions for use

To update the product configurations:

1. Open the `deployment.toml` file (stored in the `MI_HOME/conf` directory).
2. Select the required configuration headers and parameters from the list given below and apply them to the `deployment.toml` file.

The **default** `deployment.toml` file of the Micro Integrator is as follows:

```toml
[server]
hostname = "localhost"

[keystore.primary]
file_name = "wso2carbon.jks"
password = "wso2carbon"
alias = "wso2carbon"
key_password = "wso2carbon"

[truststore]
file_name = "client-truststore.jks"
password = "wso2carbon"
alias = "symmetric.key.value"
algorithm = "AES"
```

## Deployment
```toml
[server]
hostname = "localhost"
node_ip = "127.0.0.1"
enable_mtom = false
enable_swa = false
```

**[server]** (Required)

This configuration header is required for configuring the deployment parameters that are used for identifying a Micro Integrator server node. You need to update these values when you deploy WSO2 Micro Integrator. The required and optional parameters for this configuration are listed below.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `hostname` | string (required) | `"localhost"` | The hostname of the Micro Integrator instance. Possible values: `"127.0.0.1"`, `"localhost"`, or any IP address. |
| `offset` | integer | `0` | Port offset allows you to run multiple WSO2 products, multiple instances of a WSO2 product, or multiple WSO2 product clusters on the same server or virtual machine (VM). Port offset defines the number by which all ports defined in the runtime, such as the HTTP/S ports, will be offset. For example, if the default HTTP port is 9443 and the port offset is 1, the effective HTTP port will be 9444. Therefore, for each additional WSO2 product instance, set the port offset to a unique value so that they can all run on the same server without any port conflicts. |
| `enable_mtom` | boolean (required) | `false` | Use this parameter to enable MTOM (Message Transmission Optimization Mechanism) for the product server. Possible values: `true` or `false`. |
| `enable_swa` | boolean (required) | `false` | Use this parameter to enable SwA (SOAP with Attachments) for the product server. When SwA is enabled, the Micro Integrator will process the files attached to SOAP messages. Possible values: `true` or `false`. |
| `userAgent` | string (required) | `WSO2 ${product.key} ${product.version}` | - |
| `serverDetails` | string (required) | `WSO2 ${product.key} ${product.version}` | - |
| `synapse_config_file_path` | string (required) | `repository/deployment/server/synapse-configs` | - |
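As an example of the `offset` parameter, the following sketch runs a second Micro Integrator instance alongside a first one; with an offset of 1, a default port such as 9443 becomes 9444:

```toml
[server]
hostname = "localhost"
offset = 1  # shifts every default port by 1 to avoid conflicts with another instance
```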

## Service Catalog Client

```toml
[[service_catalog]]
apim_host = "https://localhost:9443"
enable = true
username = "$secret{username}"
password = "$secret{password}"
```

**[[service_catalog]]** (Required)

This configuration header is required if you want the Micro Integrator to publish integration services to the Service Catalog in the API Publisher. This allows you to generate an API proxy for the integrations deployed in the Micro Integrator.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `apim_host` | string (required) | `"https://127.0.0.1:9443"` | The hostname of the API Manager runtime, in the form `"https://{hostname/ip}:{port}"`. Be sure to replace {hostname/ip} and {port} with the relevant values. |
| `enable` | boolean (required) | `false` | The Service Catalog client in the Micro Integrator is enabled when this parameter is set to `true`. |
| `username` | string (required) | `admin` | The user name for signing in to the API Manager runtime. |
| `password` | string (required) | `admin` | The user password for signing in to the API Manager runtime. |
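As the `$secret{...}` placeholders in the example above suggest, the credentials can be resolved through the secure vault rather than stored in plain text. A sketch, assuming `apim_username` and `apim_password` have already been defined as secure-vault aliases and the API Manager runs on a hypothetical host:

```toml
[[service_catalog]]
apim_host = "https://apim.example.com:9443"  # hypothetical API Manager host
enable = true
username = "$secret{apim_username}"          # hypothetical secure-vault alias
password = "$secret{apim_password}"          # hypothetical secure-vault alias
```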

## Micro Integrator Dashboard

```toml
[dashboard_config]
dashboard_url = "https://localhost:9743/dashboard/api/"
heartbeat_interval = 5
group_id = "mi_dev"
node_id = "dev_node_2"
```

**[dashboard_config]** (Required)

This configuration header is required for the Micro Integrator server to connect with the dashboard server.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `dashboard_url` | string (required) | `"https://localhost:9743/dashboard/api/"` | The URL for accessing the dashboard server, in the form `https://{hostname/ip}:{port}/dashboard/api/`. Be sure to replace {hostname/ip} and {port} with the relevant values from your environment. |
| `heartbeat_interval` | integer | `5` | The time interval (in seconds) between two consecutive heartbeats that are sent from the Micro Integrator to the dashboard server. |
| `group_id` | string (required) | `default` | The server group to which the Micro Integrator instance belongs. Specify the same group ID in all the Micro Integrator servers that should belong to a single group. By default, a `group_id` named `default` is assigned to every Micro Integrator server that connects to the dashboard. When you sign in to the dashboard, you can view data per server group. |
| `node_id` | string (required) | A random UUID or the node ID used for cluster coordination. | The dashboard identifies the Micro Integrator node by this ID. If you have already specified a node ID when you set up the Micro Integrator cluster, the same node ID applies here by default. However, if a node ID is not defined in your clustering configurations, a random UUID is used here by default. |
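For instance, two nodes that should be monitored as one group would share a `group_id` but carry unique `node_id` values; a sketch with hypothetical names:

```toml
# Node 1 (use node_id = "staging_node_2" on the second node)
[dashboard_config]
dashboard_url = "https://localhost:9743/dashboard/api/"
group_id = "staging"        # hypothetical group shared by both nodes
node_id = "staging_node_1"  # hypothetical unique node ID
```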

## Primary keystore

    -
    -
    -
    - - - -
    -
    -
    [keystore.primary]
    -file_name = "wso2carbon.jks"
    -type = "JKS"
    -password = "wso2carbon"
    -alias = "wso2carbon"
    -key_password = "wso2carbon"
    -
    -
    -
    -
    -
    - [keystore.primary] - Required -

    - This configuration header is required for configuring the parameters that connect the Micro Integrator to the primary keystore. This keystore is used for SSL handshaking (when the server communicates with another server) and for encrypting plain text information in configuration files. By default, this keystore is also used for encrypted data in internal datastores, unless you have configured a separate keystore for internal data encryption. -

    -
    -
    -
    -
    - file_name -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the keystore file that is used for SSL communication and for encrypting/decrypting data in configuration files.

    -
    -
    -
    -
    - type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS", "PKCS12" -
    -
    -
    -

    The type of the keystore file.

    -
    -
    -
    -
    - password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    - -
    -
    -

    The password of the keystore file that is used for SSL communication and for encrypting/decrypting data in configuration files. The keystore password is used when accessing the keys in the keystore.

    -
    -
    -
    -
    - alias -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    - -
    -
    -

    The alias of the public key corresponding to the private key that is included in the keystore. The public key is used for encrypting data in the Micro Integrator server, which only the corresponding private key can decrypt. The public key is embedded in a digital certificate, and this certificate can be shared over the internet by storing it in a separate trust store file.

    -
    -
    -
    -
    - key_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    - -
    -
    -

    The password of the private key that is included in the keystore. The private key is used to decrypt the data that has been encrypted using the keystore's public key.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
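If you package your keys as PKCS12 instead of the default JKS, the same header applies with a different `type`. The following is a minimal sketch; the file name, alias, and passwords are placeholder values.

```toml
# Hedged sketch: primary keystore backed by a PKCS12 file.
# "mi_primary.p12", "mi_key", and "changeit" are placeholders.
[keystore.primary]
file_name = "mi_primary.p12"
type = "PKCS12"
password = "changeit"
alias = "mi_key"
key_password = "changeit"
```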
## Internal keystore

```toml
[keystore.internal]
file_name = "wso2carbon.jks"
type = "JKS"
password = "wso2carbon"
alias = "wso2carbon"
key_password = "wso2carbon"
```

**[keystore.internal]** (required)

This configuration header is required for configuring the parameters that connect the Micro Integrator to the keystore used for encrypting/decrypting data in internal data stores. You may sometimes choose to configure a separate keystore for this purpose because the primary keystore needs to renew certificates frequently. However, for encrypting information in internal data stores, the keystore certificates should not be changed frequently because the data that is already encrypted will become unusable every time the certificate changes. Read more about configuring the internal keystore.

**file_name** (string, required)
Default: `wso2carbon.jks`

The name of the keystore file that is used for data encryption/decryption in internal data stores. By default, the keystore file of the primary keystore is used for this purpose.

**type** (string, required)
Default: `JKS`. Possible values: `JKS`, `PKCS12`.

The type of the keystore file. By default, the keystore type of the primary keystore is used for this purpose.

**password** (string, required)
Default: `wso2carbon`

The password of the keystore file that is used for data encryption/decryption in internal data stores. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is used for this purpose.

**alias** (string, required)
Default: `wso2carbon`

The alias of the public key corresponding to the private key that is included in the keystore. The public key is used for encrypting data in the Micro Integrator server, which only the corresponding private key can decrypt. The public key is embedded in a digital certificate, and this certificate can be shared over the internet by storing it in a separate trust store file. By default, the alias of the primary keystore is used for this purpose.

**key_password** (string, required)
Default: `wso2carbon`

The password of the private key that is included in the keystore. The private key is used to decrypt the data that has been encrypted using the keystore's public key. By default, the private key password of the primary keystore is used for this purpose.
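As a sketch of the separation described above, the following dedicates a keystore to internal data encryption so the primary keystore's certificates can be renewed independently. The file name, alias, and passwords are placeholder values.

```toml
# Hedged sketch: a dedicated keystore for internal data encryption.
# "internal.jks", "internal_key", and "changeit" are placeholders.
[keystore.internal]
file_name = "internal.jks"
type = "JKS"
password = "changeit"
alias = "internal_key"
key_password = "changeit"
```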
## System Parameters

```toml
[system.parameter]
org.wso2.SecureVaultPasswordRegEx = "any_valid_regex"
```

**[system.parameter]** (required)

This configuration header is required for configuring system parameters for the server.

**org.wso2.SecureVaultPasswordRegEx** (string)
Default: `^[\S]{5,30}$`. Possible values: any valid regex.

A regex pattern that specifies the password length and character composition for passwords in a synapse configuration.
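For example, the default pattern accepts any 5 to 30 non-whitespace characters; a stricter policy could raise the minimum length, as sketched below. The pattern shown is an illustrative choice, not a recommended default.

```toml
# Hedged sketch: require 8 to 30 non-whitespace characters for
# secure-vault passwords. The pattern is an illustrative choice.
[system.parameter]
org.wso2.SecureVaultPasswordRegEx = "^[\\S]{8,30}$"
```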
## Truststore

```toml
[truststore]
file_name = "wso2truststore.jks"
type = "JKS"
password = "wso2carbon"
alias = "symmetric.key.value"
```

**[truststore]** (required)

This configuration header is required for configuring the parameters that connect the Micro Integrator to the keystore file (trust store) that is used to store the digital certificates that the server trusts for SSL communication. Read more about configuring the truststore.

**file_name** (string, required)
Default: `wso2truststore.jks`

The name of the keystore file that is used for storing the trusted digital certificates. The product is shipped with a default trust store (wso2truststore.jks), which contains the self-signed digital certificate of the default keystore.

**type** (string, required)
Default: `JKS`. Possible values: `JKS`, `PKCS12`.

The type of the keystore file that is used as the trust store.

**password** (string, required)
Default: `wso2carbon`

The password of the keystore file that is used as the trust store.

**alias** (string, required)
Default: `symmetric.key.value`

The alias of the digital certificate (which holds the public key) that is included in the truststore.
## Default File-based User Store

```toml
[internal_apis.file_user_store]
enable = true

[[internal_apis.users]]
user.name = "user-1"
user.password = "pwd-1"
user.is_admin = true

[[internal_apis.users]]
user.name = "user-2"
user.password = "pwd-2"
```

**[internal_apis.file_user_store]** (required)

This configuration header is required for disabling the default file-based user store of the Micro Integrator's Management API. Read more about configuring user stores.

**enable** (boolean)
Default: `true`. Possible values: `true` or `false`.

Set this parameter to 'false' if you want to disable the default file-based user store. This allows you to use an external user store for user authentication in the Management API.

**[[internal_apis.users]]** (required)

This configuration header is required for defining the user name and password for the Management API. Reuse this header when you want to add more users. The user credentials are stored in the default file-based user store of the Management API. Read more about configuring user stores.

**user.name** (string)
Default: `admin`

Enter a user name. Note that this will overwrite the default 'admin' user that is stored in the user store.

**user.password** (string)
Default: `admin`

Enter a password for the user specified by 'user.name'. Note that this will overwrite the default 'admin' password that is stored in the user store.

**user.is_admin** (boolean)
Default: `false`. Possible values: `true` or `false`.

Specifies whether or not the user has admin privileges.
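A common pairing, sketched below, is to switch off the file-based store so that Management API authentication is delegated to an external user store (see the External User Store section that follows).

```toml
# Hedged sketch: disable the built-in file-based user store and rely
# on an external [user_store] for Management API authentication.
[internal_apis.file_user_store]
enable = false
```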
## External User Store

```toml
[user_store]
type = "read_only_ldap"
class = "org.wso2.micro.integrator.security.user.core.ldap.ReadOnlyLDAPUserStoreManager"
connection_url = "ldap://localhost:10389"
connection_name = "uid=admin,ou=system"
connection_password = "admin"
anonymous_bind = false
user_search_base = "ou=Users,dc=wso2,dc=org"
user_name_attribute = "uid"
user_name_search_filter = "(&(objectClass=person)(uid=?))"
user_name_list_filter = "(objectClass=person)"
read_groups = true
group_search_base = "ou=Groups,dc=wso2,dc=org"
group_name_attribute = "cn"
group_name_search_filter = "(&(objectClass=groupOfNames)(cn=?))"
group_name_list_filter = "(objectClass=groupOfNames)"
membership_attribute = "member"
back_links_enabled = false
username_java_regex = "[a-zA-Z0-9._\\-|//]{3,30}$"
rolename_java_regex = "[a-zA-Z0-9._\\-|//]{3,30}$"
password_java_regex = "^[\\S]{5,30}$"
scim_enabled = false
password_hash_method = "PLAIN_TEXT"
multi_attribute_separator = ","
max_user_name_list_length = 100
max_role_name_list_length = 100
user_roles_cache_enabled = true
connection_pooling_enabled = true
ldap_connection_timeout = 5000
read_timeout = ''
retry_attempts = ''
connection_retry_delay = "120000"
```

**[user_store]** (required)

This configuration header is required for connecting the Micro Integrator to an external user store.

**type** (string, required)
Default: `read_only_ldap`. Possible values: `read_only_ldap`, `read_write_ldap`, `database`.

This parameter specifies the type of user store. The following options are available:

- `read_only_ldap`: The Micro Integrator connects to a read-only LDAP.
- `read_write_ldap`: The Micro Integrator connects to an LDAP with write permissions.
- `database`: The Micro Integrator connects to an RDBMS user store.

When you set this parameter, all of the remaining parameters (listed below) are inferred with default values. You can override the defaults by giving specific values to these parameters.

**class** (string)
Default: `org.wso2.micro.integrator.security.user.core.ldap.ReadOnlyLDAPUserStoreManager`

The implementation class that enables the read-only LDAP user store. If the type parameter is not used, you need to specify a value for this parameter.

**read_only** (boolean, required)
Default: `true`. Possible values: `true` or `false`.

Specifies whether or not the user store is read only.

**connection_url** (string, required)
Default: `ldap://localhost:10389`

The URL for connecting to the LDAP. Override the default URL for your setup. If you are connecting over ldaps (secured LDAP), you need to import the certificate of the user store to the truststore (wso2truststore.jks by default). See the instructions on how to [add certificates to the truststore]({{base_path}}/install-and-setup/setup/mi-setup/setup/security/importing_ssl_certificate).

**connection_name** (string, required)
Default: `uid=admin,ou=system`

The username used to connect to the user store and perform various operations. This user does not need to be an administrator in the user store. However, the user requires permission to read the user list and user attributes, and to perform search operations on the user store. The value you specify is used as the DN (Distinguished Name) attribute of the user who has sufficient permissions to perform operations on users and roles in LDAP.

**connection_password** (string, required)
Default: `admin`

The password for the connection user name.

**user_search_base** (string)
Default: `ou=system`

The DN of the context or object under which the user entries are stored in the user store. When the user store searches for users, it will start from this location of the directory.

**user_name_attribute** (string)
Default: `uid`

The attribute used for uniquely identifying a user entry. Users can be authenticated using their email address, UID, etc. The name of the attribute is considered as the username. Note that the email address is treated as a special case in WSO2 products. Read more about using the email address as user name.

**user_name_search_filter** (string)
Default: `(&(objectClass=person)(uid=?))`

Filtering criteria used to search for a particular user entry.

**user_name_list_filter** (string)
Default: `(objectClass=person)`

Filtering criteria for searching user entries in the user store. This query or filter is used when doing search operations on users with different search attributes. According to the default configuration, the search operation only provides the objects created from the person object class.

**read_groups** (boolean)
Default: `true`. Possible values: `true` or `false`.

This indicates whether groups should be read from the user store. If this is set to 'false', none of the groups in the user store can be read, and the following group configurations are NOT mandatory: 'group_search_base', 'group_name_list_filter', or 'group_name_attribute'.

**group_search_base** (string)
Default: `ou=system`

The DN of the context or object under which the group entries are stored in the user store. When the user store searches for groups, it will start from this location of the directory.

**group_name_attribute** (string)
Default: `cn`

The attribute used for uniquely identifying a group entry. This attribute is to be treated as the group name.

**group_name_search_filter** (string)
Default: `(&(objectClass=groupOfNames)(cn=?))`

The filtering criteria used to search for a particular group entry.

**group_name_list_filter** (string)
Default: `(objectClass=groupOfNames)`

The filtering criteria for searching group entries in the user store. This query or filter is used when doing search operations on groups with different search attributes.

**membership_attribute** (string)
Default: `member`

Defines the attribute that contains the distinguished names (DNs) of the user objects that are in a group.

**back_links_enabled** (boolean)
Default: `false`. Possible values: `true` or `false`.

Defines whether backlink support is enabled.

**username_java_regex** (string)
Default: `[a-zA-Z0-9._\-|//]{3,30}$`

The regular expression used by the back-end components for username validation. By default, strings of 3 to 30 non-empty characters are allowed. You can provide ranges of alphabets, numbers, and also ranges of ASCII values in the RegEx properties.

**rolename_java_regex** (string)
Default: `[a-zA-Z0-9._\-|//]{3,30}$`

The regular expression used by the back-end components for role name validation. By default, strings of 3 to 30 non-empty characters are allowed. You can provide ranges of alphabets, numbers, and also ranges of ASCII values in the RegEx properties.

**password_java_regex** (string)
Default: `^[\S]{5,30}$`

The regular expression used by the back-end components for password validation. By default, strings of 5 to 30 non-empty characters are allowed. You can provide ranges of alphabets, numbers, and also ranges of ASCII values in the RegEx properties.

**scim_enabled** (boolean)
Default: `true`. Possible values: `true` or `false`.

Specifies whether SCIM provisioning is enabled for the user store.

**password_hash_method** (string)
Default: `PLAIN_TEXT`. Possible values: `SHA`, `MD5`, `PLAIN_TEXT`.

Specifies the password hashing algorithm used for hashing the password before storing it in the user store. You can use the SHA digest method (SHA-1, SHA-256), the MD5 digest method, or plain text passwords.

**multi_attribute_separator** (string)
Default: `,`

This parameter is used to define a character to separate multiple attributes. This ensures that it will not appear as part of a claim value. Normally ',' is used to separate multiple attributes, but you can define ',,,', '...', or a similar character sequence.

**max_user_name_list_length** (integer)
Default: `100`

Controls the number of users listed in the user store. This is useful when you have a large number of users and you don't want to list them all. Setting this property to 0 displays all users. In some user stores, there are policies to limit the number of records that can be returned from the query; in that case, setting the value to 0 lists the maximum results returned by the user store. To increase that value, you need to set it at the user store level. Active Directory has the 'MaxPageSize' property, with the default value set to 1000.

**max_role_name_list_length** (integer)
Default: `100`

Controls the number of roles listed in the user store. This is useful when you have a large number of roles and you don't want to list them all. Setting this property to 0 displays all roles. In some user stores, there are policies to limit the number of records that can be returned from the query; in that case, setting the value to 0 lists the maximum results returned by the user store. To increase that value, you need to set it at the user store level. Active Directory has the 'MaxPageSize' property, with the default value set to 1000.

**user_roles_cache_enabled** (boolean)
Default: `true`. Possible values: `true` or `false`.

This parameter indicates whether the list of roles for a user should be cached. Set this to 'false' if the user roles are changed by external means and the changes should be instantly reflected in the product instance.

**connection_pooling_enabled** (boolean)
Default: `true`. Possible values: `true` or `false`.

Defines whether LDAP connection pooling is enabled. Connection performance improves when this parameter is enabled.

**ldap_connection_timeout** (integer)
Default: `5000`

This is the connection timeout period (in milliseconds) when the initial connection is created.

**read_timeout** (integer)

The read timeout in milliseconds for LDAP operations. If the LDAP provider cannot get an LDAP response within that period, it aborts the read attempt. The integer should be greater than zero. An integer less than or equal to zero means no read timeout is specified, which is equivalent to waiting infinitely for the response.

**retry_attempts** (integer)

The number of retry attempts for the authentication request if a timeout occurs.
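Because the `type` parameter infers defaults for everything else, switching to a writable LDAP can be as small as the sketch below. The host, port, and credentials shown are placeholder values.

```toml
# Hedged sketch: a writable LDAP user store. Setting
# type = "read_write_ldap" infers matching defaults; only the
# connection details (placeholders here) are overridden.
[user_store]
type = "read_write_ldap"
connection_url = "ldap://ldap.example.com:389"
connection_name = "uid=admin,ou=system"
connection_password = "admin"
```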
## Database Connection

```toml
[[datasource]]
id = "WSO2_CARBON_DB"
url = "jdbc:h2:./repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000"
username = "username"
password = "password"
driver = "org.h2.Driver"
pool_options.maxActive = 50
pool_options.maxWait = 60000
pool_options.testOnBorrow = true
```

**[[datasource]]** (required)

This configuration header is required for connecting to a database from the Micro Integrator. Databases are only required if you are connecting the Micro Integrator to an RDBMS user store.

**id** (string, required)

The name of the database.

**url** (string, required)

The connection URL for your database. Note that the URL depends on the type of database you use.

**username** (string, required)

The user name for connecting to the database.

**password** (string, required)

The password for connecting to the database.

**driver** (string, required)

The driver class of your database.

**pool_options.maxActive** (integer)
Default: `50`

The maximum number of active connections that can be allocated from this pool at the same time. If you set this value too low, the response times for some requests might slow down as they have to wait for connections to become free. A value that is too high might cause too much memory/resource utilization, and the system may slow down or become unresponsive.

**pool_options.maxWait** (integer)
Default: `60000`

The maximum number of milliseconds that the pool waits (when there are no available connections) for a connection to be returned before throwing an exception.

**pool_options.testOnBorrow** (boolean)
Default: `true`. Possible values: `true` or `false`.

Indicates whether objects are validated before being borrowed from the pool. If the object fails to validate, it is dropped from the pool, and the pool attempts to borrow another one.

**pool_options.maxIdle** (integer)
Default: `8`

The maximum number of connections that can remain idle in the pool, without extra ones being released. Set a negative value for unlimited. Idle connections are checked periodically (if enabled), and connections that have been idle for longer than minEvictableIdleTimeMillis are released.

**pool_options.minIdle** (integer)
Default: `0`

The minimum number of connections that can remain idle in the pool, without extra ones being created. The connection pool can shrink below this number if validation queries fail.

**pool_options.validationInterval** (integer)
Default: `30000`

This parameter controls how frequently a given validation query is executed (time in milliseconds). That is, if a connection is due for validation but has been validated previously within this interval, it is not validated again.

**pool_options.validationQuery** (string)
Default: null

The SQL query used to validate connections from this pool before returning them to the caller. If specified, this query does not have to return any data; it just can't throw an SQLException. Example values are `SELECT 1` (MySQL), `select 1 from dual` (Oracle), and `SELECT 1` (MS SQL Server).

**pool_options.MaxPermSize** (string)

The memory size allocated for WSO2 Micro Integrator.

**pool_options.removeAbandoned** (boolean)
Default: `false`. Possible values: `true` or `false`.

If this property is set to 'true', a connection is considered abandoned and eligible for removal if it has been in use for longer than the removeAbandonedTimeout value explained below.

**pool_options.removeAbandonedTimeout** (integer)
Default: `60`

The time in seconds that should pass before a connection that is in use can be removed. This is the time period after which the connection is declared abandoned. This value should be set to the longest running query that the applications might have.

**pool_options.logAbandoned** (boolean)
Default: `false`. Possible values: `true` or `false`.

Set this property to 'true' if you wish to log when the connection was abandoned. If this option is set to 'true', a stack trace is recorded during the dataSource.getConnection call and is printed when a connection is not returned.

**pool_options.initialSize** (integer)
Default: `0`

The initial number of connections created when the pool is started.

**pool_options.defaultTransactionIsolation** (string)
Default: `TRANSACTION_NONE`. Possible values: `TRANSACTION_NONE`, `TRANSACTION_UNKNOWN`, `TRANSACTION_READ_COMMITTED`, `TRANSACTION_READ_UNCOMMITTED`, `TRANSACTION_REPEATABLE_READ`, `TRANSACTION_SERIALIZABLE`.

The default TransactionIsolation state of connections created by this pool.

**pool_options.validationQueryTimeout** (integer)
Default: `-1`

The timeout in seconds before a connection validation query fails. This works by calling java.sql.Statement.setQueryTimeout(seconds) on the statement that executes the validationQuery. The pool itself does not time out the query; it is still up to the JDBC driver to enforce query timeouts. A value less than or equal to zero disables this feature.

**pool_options.timeBetweenEvictionRunsMillis** (integer)
Default: `5000`

The number of milliseconds to sleep between runs of the idle connection validation/cleaner thread. This value should not be set under 1 second. It dictates how often the pool checks for idle, abandoned connections, and how often it validates idle connections.

**pool_options.numTestsPerEvictionRun** (integer)

The number of objects to examine during each run of the idle object evictor thread.

**pool_options.minEvictableIdleTimeMillis** (integer)
Default: `60000`

The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.

**pool_options.defaultCatalog** (string)

The default catalog of connections created by this pool.

**pool_options.validatorClassName** (string)

The name of a class that implements the org.apache.tomcat.jdbc.pool.Validator interface and provides a no-arg constructor (may be implicit). If specified, the class is used to create a Validator instance, which is then used instead of any validation query to validate connections. The default value is null. An example value is com.mycompany.project.SimpleValidator.

**pool_options.connectionProperties** (string)
Default: null

The connection properties that are sent to the JDBC driver when establishing new connections. The format of the string must be [propertyName=property;]*. NOTE: The 'user' and 'password' properties are passed explicitly, so they do not need to be included here.

**pool_options.initSQL** (string)

A SQL statement that is run exactly once, when the connection is first created.

**pool_options.jdbcInterceptors** (string)

Flexible and pluggable interceptors to create any customizations around the pool, the query execution, and the result set handling.

**pool_options.abandonWhenPercentageFull** (integer)
Default: `0`

Connections that have been abandoned (timed out) won't get closed and reported unless the number of connections in use is above the percentage defined by abandonWhenPercentageFull. The value should be between 0 and 100. The default value of 0 implies that connections are eligible for closure as soon as removeAbandonedTimeout has been reached.

**pool_options.maxAge** (integer)
Default: `0`

Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool checks whether (now - time-when-connected) > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value of 0 implies that connections are left open and no age check is done upon returning the connection to the pool.

**pool_options.suspectTimeout** (integer)
Default: `0`

Timeout value in seconds. Similar to the removeAbandonedTimeout value, but instead of treating the connection as abandoned (and potentially closing it), this simply logs a warning if logAbandoned is set to true. If this value is less than or equal to 0, no suspect checking is performed. Suspect checking only takes place if the timeout value is larger than 0 and the connection was not abandoned, or if the abandon check is disabled. If a connection is suspected, a WARN message is logged and a JMX notification is sent once.
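As a sketch of the RDBMS user store case mentioned above, the following pairs a `database`-type user store with a MySQL datasource. The database name, host, credentials, driver, and datasource id are placeholder values; consult the user store documentation for how the two are bound in your product version.

```toml
# Hedged sketch: RDBMS user store backed by a MySQL datasource.
# All connection details below are placeholders.
[user_store]
type = "database"

[[datasource]]
id = "WSO2_USER_DB"
url = "jdbc:mysql://localhost:3306/userdb"
username = "mi_user"
password = "mi_password"
driver = "com.mysql.cj.jdbc.Driver"
pool_options.maxActive = 50
pool_options.maxWait = 60000
pool_options.testOnBorrow = true
```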
## Management API - JWT Handler

```toml
[management_api.jwt_token_security_handler]
enable = true
token_store_config.max_size = "200"
token_store_config.clean_up_interval = "600"
token_store_config.remove_oldest_token_on_overflow = "true"
token_config.expiry = "3600"
token_config.size = "2048"
```

**[management_api.jwt_token_security_handler]** (required)

This configuration header is required for configuring the default JWT token store configurations of the Micro Integrator's Management API. Read more about securing the Management API.

**enable** (boolean)
Default: `true`. Possible values: `true` or `false`.

Set this parameter to 'false' if you want to disable JWT authentication for the Management API.

**token_store_config.max_size** (integer)
Default: `200`

The number of tokens stored in the in-memory token store. You can increase or decrease this value as needed.

**token_store_config.clean_up_interval** (integer)
Default: `600`

Token cleanup is handled by a separate thread, and this setting configures the frequency (in seconds) of the cleanup task. The task removes all expired and revoked security tokens. The thread runs only when there are tokens in the store; if the store is empty, the cleanup thread stops automatically.

**token_store_config.remove_oldest_token_on_overflow** (boolean)
Default: `true`. Possible values: `true` or `false`.

If set to 'true', the oldest accessed token is removed when the token store is full. If set to 'false', you should either wait until other tokens expire or increase the token store max size accordingly.

**token_config.expiry** (integer)
Default: `3600`

The expiry time of the token (specified in seconds).

**token_config.size** (integer)
Default: `2048`

Specifies the key size of the token.
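For instance, a busier deployment might trade shorter-lived tokens for a larger store, as sketched below. The values shown are illustrative choices, not recommendations.

```toml
# Hedged sketch: shorter token lifetime, larger in-memory store.
# The values shown are illustrative.
[management_api.jwt_token_security_handler]
enable = true
token_store_config.max_size = "500"
token_config.expiry = "900"
```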
## Management API - Authorization Handler

```toml
[management_api.authorization_handler]
enable = false

[[management_api.authorization_handler.resources]]
path = "/users"

[[management_api.authorization_handler.resources]]
path = "/apis"
```

**[management_api.authorization_handler]** (required)

This configuration header is required for disabling authorization for the Micro Integrator's Management API. Authorization only applies when an external user store is used. Read more about securing the Management API.

**enable** (boolean)
Default: `true`. Possible values: `true` or `false`.

Set this parameter to 'false' if you want to disable authorization for the Management API.

**[[management_api.authorization_handler.resources]]** (required)

This configuration header is required for enabling authorization for additional resources (other than 'users') of the Micro Integrator's Management API. Read more about securing the Management API.

**path** (string)
Possible values: `/resource_name`

Use this parameter to specify the resources in the Management API for which you want to enable authorization.
## Management API - CORS

```toml
[management_api.cors]
enabled = true
allowed_origins = "*"
allowed_headers = "Authorization"
```

**[management_api.cors]** (required)

This configuration header is required for configuring CORS for the Management API of the Micro Integrator. Read more about securing the Management API.

**enabled** (boolean)
Default: `true`

Set this parameter to 'false' if you want to disable CORS for the Management API.

**allowed_origins** (string)
Default: `*`. Possible values: any string.

Specify the allowed origins. By default, '*' indicates that all origins are allowed.

**allowed_headers** (string)
Default: `Authorization`

Specify the allowed authorization headers.
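Rather than the permissive default, a locked-down deployment would typically list an explicit origin, as in the sketch below; the origin URL is a placeholder.

```toml
# Hedged sketch: allow only one origin to call the Management API.
# "https://mi-dashboard.example.com" is a placeholder value.
[management_api.cors]
enabled = true
allowed_origins = "https://mi-dashboard.example.com"
allowed_headers = "Authorization"
```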
## Message Builders (non-blocking mode)

```toml
[message_builders]
application_xml = "org.apache.axis2.builder.ApplicationXMLBuilder"
form_urlencoded = "org.apache.synapse.commons.builders.XFormURLEncodedBuilder"
multipart_form_data = "org.apache.axis2.builder.MultipartFormDataBuilder"
text_plain = "org.apache.axis2.format.PlainTextBuilder"
application_json = "org.wso2.micro.integrator.core.json.JsonStreamBuilder"
json_badgerfish = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
text_javascript = "org.apache.axis2.json.JSONBuilder"
octet_stream = "org.wso2.carbon.relay.BinaryRelayBuilder"
application_binary = "org.apache.axis2.format.BinaryBuilder"
```

**[message_builders]** (required)

This configuration header is required for configuring the message builder implementation that is used to build messages that are received by the Micro Integrator in the default non-blocking mode. If you are using the Micro Integrator in blocking mode, see the message builder configurations for blocking mode.

**application_xml** (string)
Default: `org.apache.axis2.builder.ApplicationXMLBuilder`

The message builder implementation that builds messages with the 'application_xml' content type. If required, you can change the default builder class.

**form_urlencoded** (string)
Default: `org.apache.synapse.commons.builders.XFormURLEncodedBuilder`

The message builder implementation that builds messages with the 'form_urlencoded' content type. If required, you can change the default builder class.

**multipart_form_data** (string)
Default: `org.apache.axis2.builder.MultipartFormDataBuilder`

The message builder implementation that builds messages with the 'multipart_form_data' content type. If required, you can change the default builder class.

**text_plain** (string)
Default: `org.apache.axis2.format.PlainTextBuilder`

The message builder implementation that builds messages with the 'text_plain' content type. If required, you can change the default builder class.

**application_json** (string)
Default: `org.wso2.micro.integrator.core.json.JsonStreamBuilder`

The message builder implementation that builds messages with the 'application_json' content type. If required, you can change the default builder class.

**json_badgerfish** (string)
Default: `org.apache.axis2.json.JSONBadgerfishOMBuilder`

The message builder implementation that builds messages with the 'json_badgerfish' content type. If required, you can change the default builder class.

**text_javascript** (string)
Default: `org.apache.axis2.json.JSONBuilder`

The message builder implementation that builds messages with the 'text_javascript' content type. If required, you can change the default builder class.

**octet_stream** (string)
Default: `org.wso2.carbon.relay.BinaryRelayBuilder`

The message builder implementation that builds messages with the 'octet_stream' content type. If required, you can change the default builder class.

**application_binary** (string)
Default: `org.apache.axis2.format.BinaryBuilder`

The message builder implementation that builds messages with the 'application_binary' content type. If required, you can change the default builder class.
## Message Builders (blocking mode)

```toml
[blocking.message_builders]
application_xml = "org.apache.axis2.builder.ApplicationXMLBuilder"
form_urlencoded = "org.apache.synapse.commons.builders.XFormURLEncodedBuilder"
multipart_form_data = "org.apache.axis2.builder.MultipartFormDataBuilder"
text_plain = "org.apache.axis2.format.PlainTextBuilder"
application_json = "org.wso2.micro.integrator.core.json.JsonStreamBuilder"
json_badgerfish = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
text_javascript = "org.apache.axis2.json.JSONBuilder"
octet_stream = "org.wso2.carbon.relay.BinaryRelayBuilder"
application_binary = "org.apache.axis2.format.BinaryBuilder"
```

**[blocking.message_builders]** (required)

This configuration header is required for configuring the message builder implementation that is used to build messages that are received by the Micro Integrator in blocking mode. You can use the same list of parameters that are available for message builders in non-blocking mode.
## Message Formatters (non-blocking mode)

```toml
[message_formatters]
form_urlencoded = "org.apache.synapse.commons.formatters.XFormURLEncodedFormatter"
multipart_form_data = "org.apache.axis2.transport.http.MultipartFormDataFormatter"
application_xml = "org.apache.axis2.transport.http.ApplicationXMLFormatter"
text_xml = "org.apache.axis2.transport.http.SOAPMessageFormatter"
soap_xml = "org.apache.axis2.transport.http.SOAPMessageFormatter"
text_plain = "org.apache.axis2.format.PlainTextFormatter"
application_json = "org.wso2.micro.integrator.core.json.JsonStreamFormatter"
json_badgerfish = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
text_javascript = "org.apache.axis2.json.JSONMessageFormatter"
octet_stream = "org.wso2.carbon.relay.ExpandingMessageFormatter"
application_binary = "org.apache.axis2.format.BinaryFormatter"
```

**[message_formatters]** (required)

This configuration header is required for configuring the message formatting implementation that is used for formatting messages that are sent out of the Micro Integrator in non-blocking mode. If you are using the Micro Integrator in blocking mode, see the message formatter configurations for blocking mode.

**application_xml** (string)
Default: `org.apache.axis2.transport.http.ApplicationXMLFormatter`

The message formatting implementation that formats messages with the 'application_xml' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**form_urlencoded** (string)
Default: `org.apache.synapse.commons.formatters.XFormURLEncodedFormatter`

The message formatting implementation that formats messages with the 'form_urlencoded' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**multipart_form_data** (string)
Default: `org.apache.axis2.transport.http.MultipartFormDataFormatter`

The message formatting implementation that formats messages with the 'multipart_form_data' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**text_plain** (string)
Default: `org.apache.axis2.format.PlainTextFormatter`

The message formatting implementation that formats messages with the 'text_plain' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**application_json** (string)
Default: `org.wso2.micro.integrator.core.json.JsonStreamFormatter`

The message formatting implementation that formats messages with the 'application_json' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**json_badgerfish** (string)
Default: `org.apache.axis2.json.JSONBadgerfishMessageFormatter`

The message formatting implementation that formats messages with the 'json_badgerfish' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**text_javascript** (string)
Default: `org.apache.axis2.json.JSONMessageFormatter`

The message formatting implementation that formats messages with the 'text_javascript' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**octet_stream** (string)
Default: `org.wso2.carbon.relay.ExpandingMessageFormatter`

The message formatting implementation that formats messages with the 'octet_stream' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**application_binary** (string)
Default: `org.apache.axis2.format.BinaryFormatter`

The message formatting implementation that formats messages with the 'application_binary' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**text_xml** (string)
Default: `org.apache.axis2.transport.http.SOAPMessageFormatter`

The message formatting implementation that formats messages with the 'text_xml' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.

**soap_xml** (string)
Default: `org.apache.axis2.transport.http.SOAPMessageFormatter`

The message formatting implementation that formats messages with the 'soap_xml' content type before they are sent out of the Micro Integrator. If required, you can change the default formatting class.
## Message Formatters (blocking mode)

```toml
[blocking.message_formatters]
form_urlencoded = "org.apache.synapse.commons.formatters.XFormURLEncodedFormatter"
multipart_form_data = "org.apache.axis2.transport.http.MultipartFormDataFormatter"
application_xml = "org.apache.axis2.transport.http.ApplicationXMLFormatter"
text_xml = "org.apache.axis2.transport.http.SOAPMessageFormatter"
soap_xml = "org.apache.axis2.transport.http.SOAPMessageFormatter"
text_plain = "org.apache.axis2.format.PlainTextFormatter"
application_json = "org.wso2.micro.integrator.core.json.JsonStreamFormatter"
json_badgerfish = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
text_javascript = "org.apache.axis2.json.JSONMessageFormatter"
octet_stream = "org.wso2.carbon.relay.ExpandingMessageFormatter"
application_binary = "org.apache.axis2.format.BinaryFormatter"
```

**[blocking.message_formatters]** (required)

This configuration header is required for configuring the message formatter implementations that are used to format messages that are sent out from the Micro Integrator in blocking mode. You can use the same list of parameters that are available for message formatters in non-blocking mode.
## Custom Message Builders (non-blocking mode)

```toml
[[custom_message_builders]]
content_type = "application/json/badgerfish"
class = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
```

**[[custom_message_builders]]** (required)

This configuration header is required for configuring the custom message builder implementation class and the selected content types to which the builder should apply in non-blocking mode. See the instructions on configuring custom message builders and formatters.

**content_type** (string, required)

The content types to which the custom message builder implementation should apply. You can specify the list of content types as follows: application/json/badgerfish.

**class** (string, required)

The custom message builder implementation that should apply to the given content types.
## Custom Message Builders (blocking mode)

```toml
[[blocking.custom_message_builders]]
content_type = "application/json/badgerfish"
class = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
```

**[[blocking.custom_message_builders]]** (required)

This configuration header is required for configuring the custom message builder implementation class and the selected content types to which the builder should apply in blocking mode. See the instructions on configuring custom message builders and formatters. You can use the same list of parameters that are available for custom message builders in non-blocking mode.
## Custom Message Formatters (non-blocking mode)

```toml
[[custom_message_formatters]]
content_type = "application/json/badgerfish"
class = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
```

**[[custom_message_formatters]]** (required)

This configuration header is required for configuring the custom message formatter implementation class and the selected content types to which the formatter should apply in non-blocking mode. See the instructions on configuring custom message builders and formatters.

**content_type** (string, required)

The content types to which the custom message formatter implementation should apply. You can specify the list of content types as follows: application/json/badgerfish.

**class** (string, required)

The custom message formatter implementation that should apply to the given content types.
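In practice, a custom content type usually needs both directions covered: a builder to parse it on the way in and a formatter to serialize it on the way out. The sketch below simply registers the two documented classes for the same content type.

```toml
# Sketch: matching builder/formatter pair for one content type,
# using the classes from the examples above.
[[custom_message_builders]]
content_type = "application/json/badgerfish"
class = "org.apache.axis2.json.JSONBadgerfishOMBuilder"

[[custom_message_formatters]]
content_type = "application/json/badgerfish"
class = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
```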
    - - -## Custom Message Formatters (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[blocking.custom_message_formatters]]
    -content_type = "application/json/badgerfish"
    -class = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
    -
    -
    -
    -
    -
    - [[blocking.custom_message_formatters]] - Required -

    - This configuration header is required for configuring the custom message formatter implementation class and the selected content types to which the formatter should apply in blocking mode. See the instructions on configuring custom message builders and formatters. You can use the same list of parameters that are available for custom message formatters in non-blocking mode. -
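
    As a quick usage sketch, these four headers are typically configured in matching pairs so that the same content type can be both parsed (builder) and serialized (formatter). The following example simply combines the Badgerfish builder and formatter classes shown in the blocks above for both modes; the content type is only an example:

    -[[custom_message_builders]]
    -content_type = "application/json/badgerfish"
    -class = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
    -
    -[[blocking.custom_message_builders]]
    -content_type = "application/json/badgerfish"
    -class = "org.apache.axis2.json.JSONBadgerfishOMBuilder"
    -
    -[[custom_message_formatters]]
    -content_type = "application/json/badgerfish"
    -class = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"
    -
    -[[blocking.custom_message_formatters]]
    -content_type = "application/json/badgerfish"
    -class = "org.apache.axis2.json.JSONBadgerfishMessageFormatter"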

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## Server Request Processor - -
    -
    -
    -
    - - - -
    -
    -
    [[server.get_request_processor]]
    -item = "swagger.yaml"
    -class = "org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerYamlProcessor"
    -
    -[[server.get_request_processor]]
    -item = "swagger.json"
    -class = "org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerJsonProcessor"
    -
    -
    -
    -
    -
    - [[server.get_request_processor]] - Required -

    - This configuration header is required for configuring the parameters that specify how special HTTP GET requests (such as '?wsdl', '?policy', etc.) are processed. This is an array-type header, which you can reuse depending on the number of processors you want to enable. -

    -
    -
    -
    -
    - item -
    -
    -
    -

    - string - Required -

    -
    - Default: "swagger.yaml" and "swagger.json" -
    -
    - Possible Values: - -
    -
    -
    -

    The item represents the first parameter of the query string (e.g., ?wsdl) that requires special processing.

    -
    -
    -
    -
    - class -
    -
    -
    -

    - string - Required -

    -
    - Default: "org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerYamlProcessor" and "org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerYamlProcessor" -
    -
    - Possible Values: - -
    -
    -
    -

    This is the class that implements the org.wso2.carbon.transport.HttpGetRequestProcessor processor. By default, the following two classes are used for handling the two default request items: org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerYamlProcessor (for swagger.yaml) and org.wso2.micro.integrator.transport.handlers.requestprocessors.swagger.format.SwaggerJsonProcessor (for swagger.json).

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## HTTP/S transport (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.http]
    -socket_timeout = "3m"
    -core_worker_pool_size = 400
    -max_worker_pool_size = 400
    -worker_pool_queue_length = -1
    -io_buffer_size = 16384
    -max_http_connection_per_host_port = 32767
    -preserve_http_user_agent = false
    -preserve_http_server_name = true
    -preserve_http_headers = ["Content-Type"]
    -disable_connection_keepalive = false
    -enable_message_size_validation = false
    -max_message_size_bytes = 81920
    -max_open_connections = -1
    -force_xml_validation = false
    -force_json_validation = false
    -listener.port = 8280    #inferred  default: 8280
    -listener.wsdl_epr_prefix ="$ref{server.hostname}"
    -listener.bind_address = "$ref{server.hostname}"
    -listener.secured_port = 8243
    -listener.secured_wsdl_epr_prefix = "$ref{server.hostname}"
    -listener.secured_bind_address = "$ref{server.hostname}"
    -listener.secured_protocols = "TLSv1,TLSv1.1,TLSv1.2"
    -listener.verify_client = "require"
    -listener.ssl_profile.file_path = "conf/sslprofiles/listenerprofiles.xml"
    -listener.ssl_profile.read_interval = "1h"
    -listener.preferred_ciphers = "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
    -listener.keystore.location ="$ref{keystore.tls.file_name}"
    -listener.keystore.type = "$ref{keystore.tls.type}"
    -listener.keystore.password = "$ref{keystore.tls.password}"
    -listener.keystore.key_password = "$ref{keystore.tls.key_password}"
    -listener.truststore.location = "$ref{truststore.file_name}"
    -listener.truststore.type = "$ref{truststore.type}"
    -listener.truststore.password = "$ref{truststore.password}"
    -sender.warn_on_http_500 = "*"
    -sender.proxy_host = "$ref{server.hostname}"
    -sender.proxy_port = 3128
    -sender.non_proxy_hosts = ["$ref{server.hostname}"]
    -sender.hostname_verifier = "AllowAll"
    -sender.keystore.location ="$ref{keystore.tls.file_name}"
    -sender.keystore.type = "$ref{keystore.tls.type}"
    -sender.keystore.password = "$ref{keystore.tls.password}"
    -sender.keystore.key_password = "$ref{keystore.tls.key_password}"
    -sender.truststore.location = "$ref{truststore.file_name}"
    -sender.truststore.type = "$ref{truststore.type}"
    -sender.truststore.password = "$ref{truststore.password}"
    -sender.ssl_profile.file_path = "conf/sslprofiles/senderprofiles.xml"
    -sender.ssl_profile.read_interval = "30s"
    -enable_message_size_validation = false
    -max_message_size_bytes = 2147483647
    -max_open_connections = -1
    -force_xml_validation = false
    -force_json_validation = false
    -
    -
    -
    -
    -
    - [transport.http] - Required -

    - This configuration header is required for configuring the parameters that are used for tuning the default HTTP/S passthrough transport of the Micro Integrator in non-blocking mode. -

    -
    -
    -
    -
    - socket_timeout -
    -
    -
    -

    - integer - Required -

    -
    - Default: 180000 -
    -
    - Possible Values: - -
    -
    -
    -

    This is the maximum period of inactivity between two consecutive data packets, specified in milliseconds.

    -
    -
    -
    -
    - core_worker_pool_size -
    -
    -
    -

    - integer - Required -

    -
    - Default: 400 -
    -
    - Possible Values: - -
    -
    -
    -

    The Micro Integrator uses a thread pool executor to create threads and to handle incoming requests. This parameter controls the number of core threads used by the executor pool. If you increase this parameter value, the number of received requests that can be processed by the integrator increases, and hence, the throughput also increases. The nature of the integration scenario and the number of concurrent requests received by the integrator are the main factors that help determine this parameter value.

    -
    -
    -
    -
    - max_worker_pool_size -
    -
    -
    -

    - integer - Required -

    -
    - Default: 400 -
    -
    - Possible Values: - -
    -
    -
    -

    This is the maximum number of threads in the worker thread pool. Specifying a maximum limit avoids performance degradation that can occur due to context switching. If the specified value is reached, you will see the error 'SYSTEM ALERT - HttpServerWorker threads were in BLOCKED state during last minute'. This can occur due to an extraordinarily high number of requests sent at a time when all the threads in the pool are busy, and the maximum number of threads is already reached.

    -
    -
    -
    -
    - worker_pool_queue_length -
    -
    -
    -

    - integer - Required -

    -
    - Default: -1 -
    -
    - Possible Values: - -
    -
    -
    -

    This defines the length of the queue that is used to hold runnable tasks to be executed by the worker pool. The thread pool starts queuing jobs when all the existing threads are busy, and the pool has reached the maximum number of threads. The value for this parameter should be -1 to use an unbounded queue. If a bounded queue is used and the queue gets filled to its capacity, any further attempts to submit jobs fail, causing some messages to be dropped by Synapse.
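
    As a tuning sketch, a high-concurrency deployment might raise both pool sizes while keeping the queue unbounded. The values below are illustrative assumptions, not recommendations:

    -[transport.http]
    -core_worker_pool_size = 800
    -max_worker_pool_size = 800
    -worker_pool_queue_length = -1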

    -
    -
    -
    -
    - io_buffer_size -
    -
    -
    -

    - integer - Required -

    -
    - Default: 16384 -
    -
    - Possible Values: - -
    -
    -
    -

    This is the size (in bytes) of the memory buffer that is allocated when reading data into memory from the underlying socket/file channels. You should leave this property set to the default value.

    -
    -
    -
    -
    - max_http_connection_per_host_port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 32767 -
    -
    - Possible Values: - -
    -
    -
    -

    This defines the maximum number of connections allowed per host port.

    -
    -
    -
    -
    - preserve_http_user_agent -
    -
    -
    -

    - boolean - Required -

    -
    - Default: "true" or "false" -
    -
    - Possible Values: - -
    -
    -
    -

    If this parameter is set to true, the user-agent HTTP header of messages passing through the integrator is preserved and printed in the outgoing message.

    -
    -
    -
    -
    - preserve_http_headers -
    -
    -
    -

    - string - Required -

    -
    - Default: Content-Type -
    -
    - Possible Values: - -
    -
    -
    -

    This parameter allows you to specify the header fields of messages passing through the Micro Integrator that need to be preserved and printed in the outgoing message, such as Location, CommonsHTTPTransportSenderKeep-Alive, Date, Server, User-Agent, and Host, as shown in the sketch below.
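
    A minimal sketch (the header names below are only examples):

    -[transport.http]
    -preserve_http_headers = ["Location", "Date", "Server"]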

    -
    -
    -
    -
    - disable_connection_keepalive -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    If this parameter is set to true, the HTTP connections with the back-end service are closed soon after the request is served. It is recommended to set this property to false so that the integrator does not have to create a new connection every time it sends a request to a back-end service. However, you may need to close connections after they are used if the back-end service does not provide sufficient support for keep-alive connections.

    -
    -
    -
    -
    - listener.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 8290 -
    -
    - Possible Values: - -
    -
    -
    -

    The port on which this transport receiver should listen for incoming messages.

    -
    -
    -
    -
    - listener.wsdl_epr_prefix -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The URL prefix that is added to all service EPRs and to EPRs in WSDLs, etc.

    -
    -
    -
    -
    - listener.secured_port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 8253 -
    -
    - Possible Values: - -
    -
    -
    -

    The secured port on which this transport receiver should listen for incoming messages.

    -
    -
    -
    -
    - listener.keystore.location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for securing the HTTP passthrough connection. By default, the keystore file of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file. By default, the keystore type of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used for securing the HTTP passthrough connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.key_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the private key that is used for securing the HTTP passthrough connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.truststore.location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2truststore.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for storing the trusted digital certificates. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - listener.truststore.type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - listener.truststore.password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.
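
    As a usage sketch, the listener keystore and trust store defaults can be overridden with explicit values instead of the $ref{...} references shown in the main example. All file names and passwords below are placeholders:

    -[transport.http]
    -listener.keystore.location = "custom.jks"
    -listener.keystore.type = "JKS"
    -listener.keystore.password = "customPassword"
    -listener.keystore.key_password = "customKeyPassword"
    -listener.truststore.location = "customTruststore.jks"
    -listener.truststore.type = "JKS"
    -listener.truststore.password = "customPassword"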

    -
    -
    -
    -
    - sender.warn_on_http_500 -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies the type of messages for which a warning should be logged when the back-end service responds with an HTTP 500 status code. The default value '*' logs a warning for all messages.

    -
    -
    -
    -
    - sender.proxy_host -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    If the outgoing messages should be sent through an HTTP proxy server, use this parameter to specify the target proxy.

    -
    -
    -
    -
    - sender.proxy_port -
    -
    -
    -

    - integer - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The port through which the target proxy (specified by the 'sender.proxy_host' parameter) accepts HTTP traffic.

    -
    -
    -
    -
    - sender.proxy_username -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The username for authenticating the HTTP proxy server.

    -
    -
    -
    -
    - sender.proxy_password -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The password for authenticating the HTTP proxy server.
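
    Putting the sender proxy parameters together, the following sketch routes outgoing HTTP traffic through a proxy while sending traffic to one internal host directly. All host names and credentials below are placeholders:

    -[transport.http]
    -sender.proxy_host = "proxy.example.com"
    -sender.proxy_port = 3128
    -sender.proxy_username = "proxyUser"
    -sender.proxy_password = "proxyPassword"
    -sender.non_proxy_hosts = ["internal.example.com"]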

    -
    -
    -
    -
    - sender.secured_proxy_host -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    If the outgoing messages should be sent through an HTTPS proxy server, use this parameter to specify the target proxy.

    -
    -
    -
    -
    - sender.secured_proxy_port -
    -
    -
    -

    - integer - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The port through which the target proxy (specified by the 'sender.secured_proxy_host' parameter) accepts HTTPS traffic.

    -
    -
    -
    -
    - sender.secured_proxy_username -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The username for authenticating the HTTPS proxy server.

    -
    -
    -
    -
    - sender.secured_proxy_password -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The password for authenticating the HTTPS proxy server.

    -
    -
    -
    -
    - sender.non_proxy_hosts -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The list of hosts to which the HTTP traffic should be sent directly without going through the proxy. When adding multiple host names along with an asterisk to define a set of sub-domains for non-proxy hosts, you need to add a period before the asterisk when configuring the proxy server.

    -
    -
    -
    -
    - sender.hostname_verifier -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The hostname verification policy that is applied when the Micro Integrator establishes SSL connections. Possible values include 'Strict', 'AllowAll', and 'DefaultAndLocalhost'.

    -
    -
    -
    -
    - sender.keystore.location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for securing the HTTP passthrough connection. By default, the keystore file of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - sender.keystore.type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file. By default, the keystore type of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - sender.keystore.password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used for securing the HTTP passthrough connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - sender.keystore.key_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the private key that is used for securing the HTTP passthrough connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - sender.truststore.location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2truststore.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for storing the trusted digital certificates. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - sender.truststore.type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - sender.truststore.password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - enable_message_size_validation -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    If this property is enabled and the payload exceeds the size specified by the 'max_message_size_bytes' property, the Micro Integrator will discontinue reading the input stream. This will prevent out-of-memory issues.

    -
    -
    -
    -
    - max_message_size_bytes -
    -
    -
    -

    - integer - -

    -
    - Default: 2147483647 -
    -
    - Possible Values: - -
    -
    -
    -

    If the size of the payload exceeds this value, the Micro Integrator will discontinue reading the input stream. Only applicable if the ‘enable_message_size_validation’ property is enabled.
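
    For example, the following sketch caps inbound payloads at roughly 10 MB; the limit is an illustrative assumption:

    -[transport.http]
    -enable_message_size_validation = true
    -max_message_size_bytes = 10485760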

    -
    -
    -
    -
    - max_open_connections -
    -
    -
    -

    - integer - -

    -
    - Default: -1 -
    -
    - Possible Values: - -
    -
    -
    -

    This property allows connection throttling to restrict the number of simultaneously opened connections. That is, simultaneously opened incoming connections will be restricted by the specified value. To disable throttling, delete the ‘max_open_connections’ setting or set it to -1.

    -
    -
    -
    -
    - force_xml_validation -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This property validates badly formed XML messages by building the whole XML document. This validation ensures that erroneous XML messages will trigger the fault sequence in the Micro Integrator.

    -
    -
    -
    -
    - force_json_validation -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This property validates JSON messages by parsing the input message. This validation ensures that erroneous JSON messages will trigger the fault sequence in the Micro Integrator.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## HTTP/S Transport (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.blocking.http]
    -
    -listener.enable = true
    -listener.port = 8200
    -listener.hostname = ""
    -listener.origin_server = ""
    -listener.request_timeout = ""
    -listener.request_tcp_no_delay = ""
    -listener.request_core_thread_pool_size = ""
    -listener.request_max_thread_pool_size = ""
    -listener.thread_keepalive_time = ""
    -listener.thread_keepalive_time_unit = ""
    -
    -sender.enable = true
    -sender.enable_client_caching = true
    -sender.transfer_encoding = ""
    -sender.default_connections_per_host = 200
    -sender.omit_soap12_action = true
    -sender.so_timeout = 60000
    -
    -
    -
    -
    -
    - [transport.blocking.http] - Required -

    - This configuration header is required for configuring the default HTTP/S passthrough transport of the Micro Integrator in blocking mode. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This parameter is used for enabling the HTTP passthrough transport listener in blocking mode.

    -
    -
    -
    -
    - listener.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 8200 -
    -
    - Possible Values: - -
    -
    -
    -

    The port on which this transport receiver should listen for incoming messages.

    -
    -
    -
    -
    - listener.hostname -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.origin_server -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.request_timeout -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.request_tcp_no_delay -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.request_core_thread_pool_size -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.request_max_thread_pool_size -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.thread_keepalive_time -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - listener.thread_keepalive_time_unit -
    -
    -
    -

    - string - -

    -
    - Default: - -
    - -
    -
    -

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This parameter is used for enabling the HTTP passthrough transport sender in blocking mode.

    -
    -
    -
    -
    - sender.enable_client_caching -
    -
    -
    -

    - boolean - Required -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This parameter is used to specify whether the HTTP client should save cache entries and the cached responses in the JVM memory or not.

    -
    -
    -
    -
    - sender.transfer_encoding -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: "chunked" or "true" -
    -
    -
    -

    This parameter enables you to specify whether the data sent should be chunked. It can be used instead of the Content-Length header if you want to upload data without having to know the amount of data to be uploaded in advance.

    -
    -
    -
    -
    - sender.default_connections_per_host -
    -
    -
    -

    - integer - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of connections that will be created per host server by the client. If the backend server is slow, the connections in use at a given time will take a long time to be released and added back to the connection pool. As a result, connections may not be available for some requests. In such situations, it is recommended to increase the value for this parameter.
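
    For instance, the following sketch raises the per-host connection limit for a slow back-end; the value is an illustrative assumption:

    -[transport.blocking.http]
    -sender.default_connections_per_host = 500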

    -
    -
    -
    -
    - sender.omit_soap12_action -
    -
    -
    -

    - boolean - Required -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    If this parameter is set to 'true', the optional action part of the Content-Type header is not added to SOAP 1.2 messages.

    -
    -
    -
    -
    - sender.so_timeout -
    -
    -
    -

    - integer - Required -

    -
    - Default: 60000 -
    -
    - Possible Values: - -
    -
    -
    -

    The socket timeout value (in milliseconds) of the blocking HTTP sender. This is the maximum period of inactivity between two consecutive data packets.

    -
    -
    -
    -
    - sender.proxy_host -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    If the outgoing messages should be sent through an HTTP proxy server (in blocking mode), use this parameter to specify the target proxy.

    -
    -
    -
    -
    - sender.proxy_port -
    -
    -
    -

    - integer - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The port through which the target proxy (specified by the 'sender.proxy_host' parameter) accepts HTTP traffic (in blocking mode).

    -
    -
    -
    -
    - sender.proxy_username -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The username for authenticating the proxy server.

    -
    -
    -
    -
    - sender.proxy_password -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The password for authenticating the proxy server.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## HTTP proxy profile - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.http.proxy_profile]]
    -target_hosts = ["example.com", ".*.sample.com"]
    -proxy_host = "localhost"
    -proxy_port = "3128"
    -proxy_username = "squidUser"
    -proxy_password = "password"
    -bypass_hosts = ["xxx.sample.com"]
    -
    -
    -
    -
    -
    - [[transport.http.proxy_profile]] - Required -

    - This configuration header is required for configuring HTTP proxy profiles when you use multiple proxy servers to route messages to different endpoints. -

    -
    -
    -
    -
    - target_hosts -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: "*", "example.com", "<any-ip-address>" -
    -
    -
    -

    A host name or a comma-separated list of host names for a target endpoint. Host names can be specified as regular expressions that match a pattern. When an asterisk (*) is specified as the target host name, it matches all the hosts in the profile.

    -
    -
    -
    -
    - proxy_host -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The host name of the proxy server.

    -
    -
    -
    -
    - proxy_port -
    -
    -
    -

    - integer - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The port number of the proxy server.

    -
    -
    -
    -
    - proxy_username -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The user name for authenticating the proxy server.

    -
    -
    -
    -
    - bypass_hosts -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    A host name or a comma-separated list of host names that should not be sent via the proxy server. For example, if you want all requests sent to *.sample.com to be sent via a proxy server, while you need to directly send requests to hello.sample.com (without going through the proxy server), you can add hello.sample.com as a bypass host name.
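
    Because this is an array-type header, it can be repeated to route different hosts through different proxies. A sketch with placeholder hosts and ports:

    -[[transport.http.proxy_profile]]
    -target_hosts = ["example.com"]
    -proxy_host = "localhost"
    -proxy_port = "3128"
    -
    -[[transport.http.proxy_profile]]
    -target_hosts = [".*.sample.com"]
    -proxy_host = "proxy2.local"
    -proxy_port = "8080"
    -bypass_hosts = ["hello.sample.com"]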

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## HTTP secured proxy profile - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.http.secured_proxy_profile]]
    -target_hosts = ["example.com", ".*.sample.com"]
    -proxy_host = "localhost"
    -proxy_port = "3128"
    -proxy_username = "squidUser"
    -proxy_password = "password"
    -bypass_hosts = ["xxx.sample.com"]
    -
    -
    -
    -
    -
    - [[transport.http.secured_proxy_profile]] - Required -

    - This configuration header is required for configuring secured HTTP proxy profiles when you use multiple (secured) proxy servers to route messages to different endpoints. -

    -
    -
    -
    -
    - target_hosts -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: "*", "example.com", "<any-ip-address>" -
    -
    -
    -

    A host name or a comma-separated list of host names for a target endpoint. Host names can be specified as regular expressions that match a pattern. When an asterisk (*) is specified as the target host name, it matches all the hosts in the profile.

    -
    -
    -
    -
    - proxy_host -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The host name of the proxy server.

    -
    -
    -
    -
    - proxy_port -
    -
    -
    -

    - integer - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The port number of the proxy server.

    -
    -
    -
    -
    - proxy_username -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The user name for authenticating the proxy server.

    -
    -
    -
    -
    - proxy_password -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The password for authenticating the proxy server.

    -
    -
    -
    -
    - bypass_hosts -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    A host name or a comma-separated list of host names that should not be sent via the proxy server. For example, if you want all requests sent to *.sample.com to be sent via a proxy server, while you need to directly send requests to hello.sample.com (without going through the proxy server), you can add hello.sample.com as a bypass host name.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## VFS Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.vfs]
    -
    -listener.enable = true
    -listener.keystore.file_name = "$ref{keystore.tls.file_name}" 
    -listener.keystore.type = "$ref{keystore.tls.type}"
    -listener.keystore.password = "$ref{keystore.tls.password}"
    -listener.keystore.key_password = "$ref{keystore.tls.key_password}"
    -listener.keystore.alias = "$ref{keystore.tls.alias}"
    -
    -listener.parameter.customParameter = ""
    -
    -sender.enable = true
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.vfs] - Required -

    - This configuration header is required for configuring how the Micro Integrator communicates through the VFS transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the VFS transport listener.

    -
    -
    -
    -
    - listener.keystore.file_name -
    -
    -
    -

    - string - -

    -
    - Default: MI_HOME/repository/resources/security/wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for securing a VFS connection. By default, the keystore file of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.type -
    -
    -
    -

    - string - -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file. By default, the keystore type of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.password -
    -
    -
    -

    - string - -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used for securing a VFS connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.alias -
    -
    -
    -

    - string - -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The alias of the public key corresponding to the private key that is included in the keystore. The public key is used for encrypting data in the Micro Integrator server, which only the corresponding private key can decrypt. The public key is embedded in a digital certificate, and this certificate can be shared over the internet by storing it in a separate trust store file. By default, the alias of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - listener.keystore.key_password -
    -
    -
    -

    - string - -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the private key that is included in the keystore. The private key is used to decrypt the data that has been encrypted using the keystore's public key. By default, the public key password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the VFS transport sender.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## MAIL Transport Listener (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.mail.listener]
    -enable = true   
    -name = "mailto"
    -parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.mail.listener] - Required -

    - This configuration header is required for configuring the MailTo transport listener implementation of the Micro Integrator in non-blocking mode. Note that the list of parameters given below can be used for the non-blocking transport listener as well as the blocking transport listener. -

    -
    -
    -
    -
    - enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the MAIL transport listener in the Micro Integrator.

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the transport receiver.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## MAIL Transport Listener (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.blocking.mail.listener]
    -enable = true
    -name = "mailto"
    -parameter.customParameter = "value"
    -
    -
    -
    -
    -
    - [transport.blocking.mail.listener] - Required -

    - This configuration header groups the parameters that are used to configure the MailTo transport listener in blocking mode. You can use the same list of parameters that are available for the non-blocking mail listener. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## MAIL Transport Sender (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.mail.sender]]
    -name = "mailto"
    -parameter.hostname = "smtp.gmail.com"
    -parameter.port = "587"
    -parameter.enable_tls = true
    -parameter.auth = true
    -parameter.username = "demo_user"
    -parameter.password = "mailpassword"
    -parameter.from = "demo_user@wso2.com"
    -
    -
    -
    -
    -
    - [[transport.mail.sender]] - Required -

    - This configuration header groups the parameters that are used to configure the MailTo transport sender implementation of the Micro Integrator in non-blocking mode. Note that the list of parameters given below can be used for the non-blocking transport sender as well as the blocking transport sender. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: mailto -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the MAIL transport sender in the Micro Integrator.

    -
    -
    -
    -
    - parameter.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: smtp.gmail.com -
    -
    - Possible Values: - -
    -
    -
    -

    The mail server that serves outgoing mails from the Micro Integrator.

    -
    -
    -
    -
    - parameter.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 587 -
    -
    - Possible Values: - -
    -
    -
    -

    The port of the mail server.

    -
    -
    -
    -
    - parameter.enable_tls -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    This parameter specifies whether TLS is enabled for the MailTo transport.

    -
    -
    -
    -
    - parameter.username -
    -
    -
    -

    - string - Required -

    -
    - Default: demo_user -
    -
    - Possible Values: - -
    -
    -
    -

    The user name of the email account (mail sender). Note that in some email service providers, the user name is the same as the email address specified for 'parameter.from'.

    -
    -
    -
    -
    - parameter.password -
    -
    -
    -

    - string - Required -

    -
    - Default: mailpassword -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the email account (mail sender).

    -
    -
    -
    -
    - parameter.from -
    -
    -
    -

    - string - Required -

    -
    - Default: demo_user@wso2.com -
    -
    - Possible Values: - -
    -
    -
    -

    The email address from which mails will be sent.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## MAIL Transport Sender (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.blocking.mail.sender]]
    -enable = true
    -name = "mailto"
    -parameter.customParameter = "value"
    -
    -
    -
    -
    -
    - [[transport.blocking.mail.sender]] - Required -

    - This configuration header groups the parameters that are used to configure the MailTo transport sender in blocking mode. You can use the same list of parameters that are available for the non-blocking mail sender. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## JMS Transport Listener (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.jms.listener]]
    -
    -name = "myTopicListener"
    -parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"
    -parameter.broker_name = "artemis"
    -parameter.provider_url = "tcp://localhost:61616"
    -parameter.connection_factory_name = "TopicConnectionFactory"
    -parameter.connection_factory_type = "topic"
    -parameter.cache_level = "consumer"
    -
    -parameter.naming_security_principal = ""
    -parameter.naming_security_credential = ""
    -parameter.transactionality = ""
    -parameter.transaction_jndi_name = ""
    -parameter.cache_user_transaction = true
    -parameter.session_transaction = true
    -parameter.session_acknowledgement = "AUTO_ACKNOWLEDGE"
    -parameter.jms_spec_version = "1.1"
    -parameter.username = ""
    -parameter.password = ""
    -parameter.destination = ""
    -parameter.destination_type = "queue"
    -parameter.default_reply_destination = ""
    -parameter.default_destination_type = "queue"
    -parameter.message_selector = ""
    -parameter.subscription_durable = false
    -parameter.durable_subscriber_client_id = ""
    -parameter.durable_subscriber_name = ""
    -parameter.pub_sub_local = false
    -parameter.receive_timeout = "1000"
    -parameter.concurrent_consumer = 1
    -parameter.max_concurrent_consumer = 1
    -parameter.idle_task_limit = 10
    -parameter.max_message_per_task = -1
    -parameter.initial_reconnection_duration = "10000"
    -parameter.reconnect_progress_factor = 2
    -parameter.max_reconnect_duration = "3600000"
    -parameter.reconnect_interval = "3600000"
    -parameter.max_jsm_connection = 10
    -parameter.max_consumer_error_retrieve_before_delay = 20
    -parameter.consume_error_delay = "100"         
    -parameter.consume_error_progression = "2.0"
    -
    -
    -
    -
    -
    - [[transport.jms.listener]] - Required -

    - This configuration header groups the parameters that are used to configure the JMS transport listener implementation of the Micro Integrator in non-blocking mode. Note that the list of parameters given below can be used for the non-blocking transport listener as well as the blocking transport listener. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The user-defined name of the JMS listener.

    -
    -
    -
    -
    - parameter.initial_naming_factory -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI initial context factory class. The class must implement the javax.naming.spi.InitialContextFactory interface.

    -
    -
    -
    -
    - parameter.provider_url -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    URL of the JNDI provider.

    -
    -
    -
    -
    - parameter.connection_factory_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name of the connection factory.

    -
    -
    -
    -
    - parameter.cache_level -
    -
    -
    -

    - string - -

    -
    - Default: consumer -
    -
    - Possible Values: consumer -
    -
    -
    -

    The cache level that should apply when JMS objects start up. When the Micro Integrator produces JMS messages, you need to specify this cache level in the deployment.toml file. If the Micro Integrator works as a JMS listener, you need to specify the JMS cache level in the proxy service. See the list of service-level JMS parameters.

    -
    -
    -
    -
    - parameter.naming_security_principal -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI Username.

    -
    -
    -
    -
    - parameter.naming_security_credential -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI password.

    -
    -
    -
    -
    - parameter.transactionality -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The preferred mode of transactionality. Note that JMS transactions only work with either the Callout mediator or the Call mediator in blocking mode.

    -
    -
    -
    -
    - parameter.transaction_jndi_name -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name to be used to acquire a user transaction.

    -
    -
    -
    -
    - parameter.cache_user_transaction -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not caching should be enabled for user transactions.

    -
    -
    -
    -
    - parameter.session_transaction -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the JMS session should be transacted.

    -
    -
    -
    -
    - parameter.session_acknowledgement -
    -
    -
    -

    - string - -

    -
    - Default: AUTO_ACKNOWLEDGE -
    -
    - Possible Values: - -
    -
    -
    -

    JMS session acknowledgment mode.

    -
    -
    -
    -
    - parameter.jms_spec_version -
    -
    -
    -

    - string - -

    -
    - Default: 1.1 -
    -
    - Possible Values: - -
    -
    -
    -

    JMS API version.

    -
    -
    -
    -
    - parameter.username -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JMS connection username.

    -
    -
    -
    -
    - parameter.password -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JMS connection password.

    -
    -
    -
    -
    - parameter.destination -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name of the destination.

    -
    -
    -
    -
    - parameter.destination_type -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: "queue" or "topic" -
    -
    -
    -

    The type of the destination.

    -
    -
    -
    -
    - parameter.message_selector -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The message selector implementation.

    -
    -
    -
    -
    - parameter.subscription_durable -
    -
    -
    -

    - boolean - -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the connection factory is subscription durable.

    -
    -
    -
    -
    - parameter.durable_subscriber_client_id -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The ClientId parameter when using durable subscriptions.

    -
    -
    -
    -
    - parameter.durable_subscriber_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the durable subscriber.

    -
    -
    -
    -
    - parameter.pub_sub_local -
    -
    -
    -

    - boolean - -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the messages should be published by the same connection in which they were received.

    -
    -
    -
    -
    - parameter.receive_timeout -
    -
    -
    -

    - integer - -

    -
    - Default: 1000 -
    -
    - Possible Values: - -
    -
    -
    -

    Time to wait for a JMS message during polling. Set this parameter value to a negative integer to wait indefinitely. Set to zero to prevent waiting.

    -
    -
    -
    -
    - parameter.concurrent_consumer -
    -
    -
    -

    - integer - -

    -
    - Default: 1 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of concurrent threads to be started to consume messages when polling.

    -
    -
    -
    -
    - parameter.max_concurrent_consumer -
    -
    -
    -

    - integer - -

    -
    - Default: 1 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of concurrent threads to use during polling.

    -
    -
    -
    -
    - parameter.idle_task_limit -
    -
    -
    -

    - integer - -

    -
    - Default: 10 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of idle runs per thread before it dies out.

    -
    -
    -
    -
    - parameter.max_message_per_task -
    -
    -
    -

    - integer - -

    -
    - Default: -1 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of successful message receipts per thread.

    -
    -
    -
    -
    - parameter.initial_reconnection_duration -
    -
    -
    -

    - integer - -

    -
    - Default: 10000 -
    -
    - Possible Values: - -
    -
    -
    -

    The initial reconnection attempt duration in milliseconds.

    -
    -
    -
    -
    - parameter.reconnect_progress_factor -
    -
    -
    -

    - integer - -

    -
    - Default: 2 -
    -
    - Possible Values: - -
    -
    -
    -

    The factor by which the reconnection duration will be increased.

    -
    -
    -
    -
    - parameter.max_reconnect_duration -
    -
    -
    -

    - integer - -

    -
    - Default: 3600000 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum reconnection duration in milliseconds.

    -
    -
    -
    -
    - parameter.reconnect_interval -
    -
    -
    -

    - integer - -

    -
    - Default: 3600000 -
    -
    - Possible Values: - -
    -
    -
    -

    The reconnection interval in milliseconds.

    -
    -
    -
    -
    - parameter.max_jsm_connection -
    -
    -
    -

    - integer - -

    -
    - Default: 10 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of JMS connections that are cached at the producer level.

    -
    -
    -
    -
    - parameter.max_consumer_error_retrieve_before_delay -
    -
    -
    -

    - integer - -

    -
    - Default: 20 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of retries on consume errors before sleep delay becomes effective.

    -
    -
    -
    -
    - parameter.consume_error_delay -
    -
    -
    -

    - integer - -

    -
    - Default: 100 -
    -
    - Possible Values: - -
    -
    -
    -

    The sleep delay when a consume error is encountered (in milliseconds).

    -
    -
    -
    -
    - parameter.consume_error_progression -
    -
    -
    -

    - integer - -

    -
    - Default: 2.0 -
    -
    - Possible Values: - -
    -
    -
    -

    The factor by which the consume error retry sleep will be increased.
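
    As a usage sketch, a queue listener looks similar to the topic listener in the main example; only the connection factory and destination settings change. The listener name and the connection factory JNDI name below are placeholders that must match the JNDI setup of your broker:

    -[[transport.jms.listener]]
    -name = "myQueueListener"
    -parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"
    -parameter.provider_url = "tcp://localhost:61616"
    -parameter.connection_factory_name = "QueueConnectionFactory"
    -parameter.connection_factory_type = "queue"
    -parameter.destination_type = "queue"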

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## JMS Transport Listener (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.blocking.jms.listener]]
    -
    -name = "myTopicListener"
    -parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"
    -parameter.provider_url = "tcp://localhost:61616"
    -parameter.connection_factory_name = "TopicConnectionFactory"
    -parameter.connection_factory_type = "topic"
    -parameter.cache_level = "consumer"
    -
    -parameter.naming_security_principal = ""
    -parameter.naming_security_credential = ""
    -parameter.transactionality = ""
    -parameter.transaction_jndi_name = ""
    -parameter.cache_user_transaction = true
    -parameter.session_transaction = true
    -parameter.session_acknowledgement = "AUTO_ACKNOWLEDGE"
    -parameter.jms_spec_version = "1.1"
    -parameter.username = ""
    -parameter.password = ""
    -parameter.destination = ""
    -parameter.destination_type = "queue"
    -parameter.default_reply_destination = ""
    -parameter.default_destination_type = "queue"
    -parameter.message_selector = ""
    -parameter.subscription_durable = false
    -parameter.durable_subscriber_client_id = ""
    -parameter.durable_subscriber_name = ""
    -parameter.pub_sub_local = false
    -parameter.receive_timeout = "1000"
    -parameter.concurrent_consumer = 1
    -parameter.max_concurrent_consumer = 1
    -parameter.idle_task_limit = 10
    -parameter.max_message_per_task = -1
    -parameter.initial_reconnection_duration = "10000"
    -parameter.reconnect_progress_factor = 2
    -parameter.max_reconnect_duration = "3600000"
    -parameter.reconnect_interval = "3600000"
    -parameter.max_jsm_connection = 10
    -parameter.max_consumer_error_retrieve_before_delay = 20
    -parameter.consume_error_delay = "100"        
    -parameter.consume_error_progression = "2.0"
    -
    -
    -
    -
    -
    - [[transport.blocking.jms.listener]] - Required -

    - This configuration header groups the parameters that are used to configure the JMS transport listener in blocking mode. You can use the same list of parameters that are available for the non-blocking JMS listener. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## JMS Transport Sender (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.jms.sender]]
    -
    -name = "myTopicSender"
    -parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"
    -parameter.broker_name = "artemis"
    -parameter.provider_url = "tcp://localhost:61616"
    -parameter.connection_factory_name = "TopicConnectionFactory"
    -parameter.connection_factory_type = "topic"
    -parameter.cache_level = "producer"
    -
    -parameter.naming_security_principal = ""
    -parameter.naming_security_credential = ""
    -parameter.transactionality = ""
    -parameter.transaction_jndi_name = ""
    -parameter.cache_user_transaction = true
    -parameter.session_transaction = true
    -parameter.session_acknowledgement = "AUTO_ACKNOWLEDGE"
    -parameter.jms_spec_version = "1.1"
    -parameter.username = ""
    -parameter.password = ""
    -parameter.destination = ""
    -parameter.destination_type = "queue"
    -parameter.default_reply_destination = ""
    -parameter.default_destination_type = "queue"
    -parameter.message_selector = ""
    -parameter.subscription_durable = false
    -parameter.durable_subscriber_client_id = ""
    -parameter.durable_subscriber_name = ""
    -parameter.pub_sub_local = false
    -parameter.receive_timeout = "1000"
    -parameter.concurrent_consumer = 1
    -parameter.max_concurrent_consumer = 1
    -parameter.idle_task_limit = 10
    -parameter.max_message_per_task = -1
    -parameter.initial_reconnection_duration = "10000"
    -parameter.reconnect_progress_factor = 2
    -parameter.max_reconnect_duration = "3600000"
    -parameter.reconnect_interval = "3600000"
    -parameter.max_jsm_connection = 10
    -parameter.max_consumer_error_retrieve_before_delay = 20
    -parameter.consume_error_delay = "100"
    -parameter.consume_error_progression = "2.0"
    -
    -parameter.vender_class_loader = false
    -
    -
    -
    -
    -
    - [[transport.jms.sender]] - Required -

    - This configuration header groups the parameters that are used to configure the JMS transport sender implementation of the Micro Integrator in non-blocking mode. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The user-defined name of the JMS sender.

    -
    -
    -
    -
    - parameter.initial_naming_factory -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    JNDI initial context factory class. The class must implement the java.naming.spi.InitialContextFactory interface.

    -
    -
    -
    -
    - parameter.broker_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the JMS broker.

    -
    -
    -
    -
    - parameter.provider_url -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    URL of the JNDI provider.

    -
    -
    -
    -
    - parameter.connection_factory_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name of the connection factory.

    -
    -
    -
    -
    - parameter.cache_level -
    -
    -
    -

    - string - -

    -
    - Default: producer -
    -
    - Possible Values: producer -
    -
    -
    -

    The cache level that should apply when JMS objects start up. When the Micro Integrator produces JMS messages, you need to specify this cache level in the deployment.toml file. If the Micro Integrator works as a JMS listener, you need to specify the JMS cache level in the proxy service. See the list of service-level JMS parameters.

    -
    -
    -
    -
    - parameter.naming_security_principal -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI Username.

    -
    -
    -
    -
    - parameter.naming_security_credential -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI password.

    -
    -
    -
    -
    - parameter.transactionality -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The preferred mode of transactionality. Note that JMS transactions only work with either the Callout mediator or the Call mediator in blocking mode.

    -
    -
    -
    -
    - parameter.transaction_jndi_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name used to acquire a user transaction.

    -
    -
    -
    -
    - parameter.cache_user_transaction -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not caching should be enabled for user transactions.

    -
    -
    -
    -
    - parameter.session_transaction -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the JMS session should be transacted.

    -
    -
    -
    -
    - parameter.session_acknowledgement -
    -
    -
    -

    - string - -

    -
    - Default: AUTO_ACKNOWLEDGE -
    -
    - Possible Values: - -
    -
    -
    -

    JMS session acknowledgment mode.

    -
    -
    -
    -
    - parameter.jms_spec_version -
    -
    -
    -

    - string - -

    -
    - Default: 1.1 -
    -
    - Possible Values: - -
    -
    -
    -

    JMS API version.

    -
    -
    -
    -
    - parameter.username -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JMS connection username.

    -
    -
    -
    -
    - parameter.password -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JMS connection password.

    -
    -
    -
    -
    - parameter.destination -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name of the destination.

    -
    -
    -
    -
    - parameter.destination_type -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: "queue" or "topic" -
    -
    -
    -

    The type of the destination.

    -
    -
    -
    -
    - parameter.default_reply_destination -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The JNDI name of the default reply destination.

    -
    -
    -
    -
    - parameter.default_destination_type -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: "queue" or "topic" -
    -
    -
    -

    The type of the reply destination.

    -
    -
    -
    -
    - parameter.message_selector -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The message selector implementation.
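For example, a standard JMS selector expression (the JMSPriority header is defined by the JMS specification; the threshold is illustrative):

```toml
# consume only messages with an above-default priority
parameter.message_selector = "JMSPriority > 4"
```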

    -
    -
    -
    -
    - parameter.subscription_durable -
    -
    -
    -

    - boolean - -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the connection factory is subscription durable.

    -
    -
    -
    -
    - parameter.durable_subscriber_client_id -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The ClientId parameter when using durable subscriptions.

    -
    -
    -
    -
    - parameter.durable_subscriber_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the durable subscriber.
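The durable-subscription parameters are typically set together. A minimal sketch (the client ID and subscription name are illustrative values):

```toml
parameter.subscription_durable = true
parameter.durable_subscriber_client_id = "mi-client-1"   # illustrative value
parameter.durable_subscriber_name = "mySubscription"     # illustrative value
```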

    -
    -
    -
    -
    - parameter.pub_sub_local -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not the messages should be published by the same connection in which the messages were received.

    -
    -
    -
    -
    - parameter.receive_timeout -
    -
    -
    -

    - integer - -

    -
    - Default: 1000 -
    -
    - Possible Values: - -
    -
    -
    -

    Time to wait for a JMS message during polling. Set this parameter value to a negative integer to wait indefinitely. Set to zero to prevent waiting.
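For example (the timeout value is illustrative):

```toml
parameter.receive_timeout = "5000"   # wait up to 5 seconds per poll
# "-1" waits indefinitely; "0" returns immediately without waiting
```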

    -
    -
    -
    -
    - parameter.concurrent_consumer -
    -
    -
    -

    - integer - -

    -
    - Default: 1 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of concurrent threads to be started to consume messages when polling.

    -
    -
    -
    -
    - parameter.max_concurrent_consumer -
    -
    -
    -

    - integer - -

    -
    - Default: 1 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of concurrent threads to use during polling.

    -
    -
    -
    -
    - parameter.idle_task_limit -
    -
    -
    -

    - integer - -

    -
    - Default: 10 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of idle runs per thread before it dies out.

    -
    -
    -
    -
    - parameter.max_message_per_task -
    -
    -
    -

    - integer - -

    -
    - Default: -1 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of successful message receipts per thread.

    -
    -
    -
    -
    - parameter.initial_reconnection_duration -
    -
    -
    -

    - integer - -

    -
    - Default: 10000 -
    -
    - Possible Values: - -
    -
    -
    -

    The initial reconnection attempts duration in milliseconds.

    -
    -
    -
    -
    - parameter.reconnect_progress_factor -
    -
    -
    -

    - integer - -

    -
    - Default: 2 -
    -
    - Possible Values: - -
    -
    -
    -

    The factor by which the reconnection duration will be increased.

    -
    -
    -
    -
    - parameter.max_reconnect_duration -
    -
    -
    -

    - integer - -

    -
    - Default: 3600000 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum reconnection duration in milliseconds.

    -
    -
    -
    -
    - parameter.reconnect_interval -
    -
    -
    -

    - integer - -

    -
    - Default: 3600000 -
    -
    - Possible Values: - -
    -
    -
    -

    The reconnection interval in milliseconds.

    -
    -
    -
    -
    - parameter.max_jsm_connection -
    -
    -
    -

    - integer - -

    -
    - Default: 10 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of JMS connections cached at the producer level.

    -
    -
    -
    -
    - parameter.max_consumer_error_retrieve_before_delay -
    -
    -
    -

    - integer - -

    -
    - Default: 20 -
    -
    - Possible Values: - -
    -
    -
    -

    The number of retries on consume errors before sleep delay becomes effective.

    -
    -
    -
    -
    - parameter.consume_error_delay -
    -
    -
    -

    - integer - -

    -
    - Default: 100 -
    -
    - Possible Values: - -
    -
    -
    -

    The sleep delay when a consume error is encountered (in milliseconds).

    -
    -
    -
    -
    - parameter.consume_error_progression -
    -
    -
    -

    - integer - -

    -
    - Default: 2.0 -
    -
    - Possible Values: - -
    -
    -
    -

    The factor by which the consume error retry sleep will be increased.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## JMS Transport Sender (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.blocking.jms.sender]]
    -
    -name = "myTopicSender"
    -parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"
    -parameter.provider_url = "tcp://localhost:61616"
    -parameter.connection_factory_name = "TopicConnectionFactory"
    -parameter.connection_factory_type = "topic"
    -parameter.cache_level = "producer"
    -
    -parameter.naming_security_principal = ""
    -parameter.naming_security_credential = ""
    -parameter.transactionality = ""
    -parameter.transaction_jndi_name = ""
    -parameter.cache_user_transaction = true
    -parameter.session_transaction = true
    -parameter.session_acknowledgement = "AUTO_ACKNOWLEDGE"
    -parameter.jms_spec_version = "1.1"
    -parameter.username = ""
    -parameter.password = ""
    -parameter.destination = ""
    -parameter.destination_type = "queue"
    -parameter.default_reply_destination = ""
    -parameter.default_destination_type = "queue"
    -parameter.message_selector = ""
    -parameter.subscription_durable = false
    -parameter.durable_subscriber_client_id = ""
    -parameter.durable_subscriber_name = ""
    -parameter.pub_sub_local = false
    -parameter.receive_timeout = "1000"
    -parameter.concurrent_consumer = 1
    -parameter.max_concurrent_consumer = 1
    -parameter.idle_task_limit = 10
    -parameter.max_message_per_task = -1
    -parameter.initial_reconnection_duration = "10000"
    -parameter.reconnect_progress_factor = 2
    -parameter.max_reconnect_duration = "3600000"
    -parameter.reconnect_interval = "3600000"
    -parameter.max_jsm_connection = 10
    -parameter.max_consumer_error_retrieve_before_delay = 20
    -parameter.consume_error_delay = "100"
    -parameter.consume_error_progression = "2.0"
    -parameter.vender_class_loader = false
    -
    -
    -
    -
    -
    - [[transport.blocking.jms.sender]] - Required -

    - This configuration header groups the parameters that are used to configure the JMS transport sender in blocking mode. You can use the same list of parameters that are available for the non-blocking JMS sender. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## JNDI Connection Factories - -
    -
    -
    -
    - - - -
    -
    -
    [transport.jndi.connection_factories]
    -QueueConnectionFactory = "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675'"
    -TopicConnectionFactory = "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675'"
    -
    -
    -
    -
    -
    - [transport.jndi.connection_factories] - Required -

    - This configuration header groups the parameters used for specifying the JNDI connection factory classes. -

    -
    -
    -
    -
    - TopicConnectionFactory -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675' -
    -
    -
    -

    The connection factory URL for connecting to a JMS topic.

    -
    -
    -
    -
    - QueueConnectionFactory -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675' -
    -
    -
    -

    The connection factory URL for connecting to a JMS queue.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## JNDI Queues - -
    -
    -
    -
    - - - -
    -
    -
    [transport.jndi.queue]
    -JMSMS = "JMSMS"
    -StockQuotesQueue = "StockQuotesQueue"
    -
    -
    -
    -
    -
    - [transport.jndi.queue] - Required -

    - This configuration header is used to specify the list of queues that are defined in your JMS broker. The JNDI name of the queue and the actual queue name should be specified as a key-value pair as follows: jndi_name = queue_name. -

    -
    -
    -
    -
    - <jndi_queue_name> -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: <queue_name> -
    -
    -
    -

    The JNDI queue name and the actual queue name as a key-value pair.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## JNDI Topics - -
    -
    -
    -
    - - - -
    -
    -
    [transport.jndi.topic]
    -MyTopic = "example.MyTopic"
    -
    -
    -
    -
    -
    - [transport.jndi.topic] - Required -

    - This configuration header is used to specify the list of topics that are defined in your JMS broker. The JNDI name of the topic and the actual topic name should be specified as a key-value pair as follows: jndi_name = topic_name. -

    -
    -
    -
    -
    - <jndi_topic_name> -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: <topic_name> -
    -
    -
    -

    The JNDI topic name and the actual topic name as a key-value pair.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## RabbitMQ Listener - -
    -
    -
    -
    - - - -
    -
    -
    [[transport.rabbitmq.listener]]
    -
    -name = "rabbitMQListener"
    -parameter.hostname = "localhost"
    -parameter.port = 5672
    -parameter.username = "guest"
    -parameter.password = "guest"
    -parameter.connection_factory = ""
    -parameter.exchange_name = "amq.direct"
    -parameter.queue_name = "MyQueue"
    -parameter.queue_auto_ack = false
    -parameter.consumer_tag = ""
    -parameter.channel_consumer_qos = ""
    -parameter.durable = ""
    -parameter.queue_exclusive = ""
    -parameter.queue_auto_delete = ""
    -parameter.queue_routing_key = ""
    -parameter.queue_auto_declare = ""
    -parameter.exchange_auto_declare = ""
    -parameter.exchange_type = ""
    -parameter.exchange_durable = ""
    -parameter.exchange_auto_delete = ""
    -parameter.message_content_type = ""
    -
    -parameter.retry_interval = "10s"
    -parameter.retry_count = 5
    -parameter.connection_pool_size = 25
    -
    -parameter.ssl_enable = true
    -parameter.ssl_version = "SSL"
    -parameter.keystore_location ="$ref{keystore.tls.file_name}"
    -parameter.keystore_type = "$ref{keystore.tls.type}"
    -parameter.keystore_password = "$ref{keystore.tls.password}"
    -parameter.truststore_file_name ="$ref{truststore.file_name}"
    -parameter.truststore_type = "$ref{truststore.type}"
    -parameter.truststore_password = "$ref{truststore.password}"
    -
    -
    -
    -
    -
    - [[transport.rabbitmq.listener]] - Required -

    - This configuration header is required if you are configuring WSO2 Micro Integrator to receive messages from a RabbitMQ client. Read more about connecting the Micro Integrator with RabbitMQ. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the broker.

    -
    -
    -
    -
    - parameter.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: localhost -
    -
    - Possible Values: - -
    -
    -
    -

    The IP address of the server node.

    -
    -
    -
    -
    - parameter.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 5672 -
    -
    - Possible Values: - -
    -
    -
    -

    The port on which the RabbitMQ broker can be accessed.

    -
    -
    -
    -
    - parameter.username -
    -
    -
    -

    - string - Required -

    -
    - Default: guest -
    -
    - Possible Values: - -
    -
    -
    -

    The user name for connecting to the RabbitMQ broker.

    -
    -
    -
    -
    - parameter.password -
    -
    -
    -

    - string - Required -

    -
    - Default: guest -
    -
    - Possible Values: - -
    -
    -
    -

    The password for connecting to the RabbitMQ broker.

    -
    -
    -
    -
    - parameter.connection_factory -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: org.apache.axis2.transport.rabbitmq.RabbitMQListener -
    -
    -
    -

    The name of the connection factory.

    -
    -
    -
    -
    - parameter.exchange_name -
    -
    -
    -

    - string - Required -

    -
    - Default: amq.direct -
    -
    - Possible Values: - -
    -
    -
    -

    Name of the RabbitMQ exchange to which the queue is bound. Use this parameter instead of parameter.queue_routing_key if you need to use the default exchange and publish to a queue.

    -
    -
    -
    -
    - parameter.queue_name -
    -
    -
    -

    - string - Required -

    -
    - Default: MyQueue -
    -
    - Possible Values: - -
    -
    -
    -

    The queue name to send or consume messages. If you do not specify this parameter, you need to specify the parameter.queue_routing_key parameter.

    -
    -
    -
    -
    - parameter.queue_auto_ack -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether to automatically acknowledge messages as they are consumed from the queue.

    -
    -
    -
    -
    - parameter.consumer_tag -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The client-generated consumer tag to establish context.

    -
    -
    -
    -
    - parameter.channel_consumer_qos -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The consumer qos value. You need to specify this parameter only if the parameter.queue_auto_ack parameter is set to false.
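A minimal sketch combining the two acknowledgement-related parameters (the prefetch count of 10 is an illustrative value):

```toml
parameter.queue_auto_ack = false       # acknowledge messages manually
parameter.channel_consumer_qos = "10"  # at most 10 unacknowledged messages per consumer
```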

    -
    -
    -
    -
    - parameter.durable -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether the queue should remain declared even if the broker restarts.

    -
    -
    -
    -
    - parameter.queue_exclusive -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether the queue should be exclusive or should be consumable by other connections.

    -
    -
    -
    -
    - parameter.queue_auto_delete -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to keep the queue even if it is not being consumed anymore.

    -
    -
    -
    -
    - parameter.queue_routing_key -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The routing key of the queue.

    -
    -
    -
    -
    - parameter.queue_auto_declare -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to create queues if they are not present. However, you should set this parameter only if the queues are not already declared on the broker. Setting this parameter to false improves RabbitMQ transport performance.

    -
    -
    -
    -
    - parameter.exchange_auto_declare -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to create exchanges if they are not present. However, you should set this parameter only if the exchanges are not already declared on the broker. Setting this parameter to false improves RabbitMQ transport performance.

    -
    -
    -
    -
    - parameter.exchange_type -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The type of the exchange.

    -
    -
    -
    -
    - parameter.exchange_durable -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether the exchange should remain declared even if the broker restarts.

    -
    -
    -
    -
    - parameter.exchange_auto_delete -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to keep the exchange even if it is not bound to any queue anymore.

    -
    -
    -
    -
    - parameter.message_content_type -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: text/xml -
    -
    -
    -

    The content type of the consumer. Note that if the content type is specified in the message, this parameter does not override the specified content type.

    -
    -
    -
    -
    - parameter.retry_interval -
    -
    -
    -

    - integer - Required -

    -
    - Default: 30000 -
    -
    - Possible Values: - -
    -
    -
    -

    In the case of a network failure or broker shutdown, the Micro Integrator will attempt to reconnect a number of times (as specified by the parameter.retry_count parameter) with an interval (specified by this parameter) between the retry attempts.

    -
    -
    -
    -
    - parameter.retry_count -
    -
    -
    -

    - integer - Required -

    -
    - Default: 3 -
    -
    - Possible Values: - -
    -
    -
    -

    In the case of a network failure or broker shutdown, the Micro Integrator will attempt to reconnect as many times as specified by this parameter, with an interval (specified by the parameter.retry_interval parameter) between the retry attempts.
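For example, with the values from the sample configuration above, the listener retries five times with ten seconds between attempts:

```toml
parameter.retry_interval = "10s"
parameter.retry_count = 5
```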

    -
    -
    -
    -
    - parameter.ssl_enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: - -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether or not SSL is enabled for RabbitMQ connection. If you set this to 'true', be sure to update the keystore and trust store parameters given below.

    -
    -
    -
    -
    - parameter.ssl_version -
    -
    -
    -

    - string - Required -

    -
    - Default: SSL -
    -
    - Possible Values: - -
    -
    -
    -

    The SSL version to use.

    -
    -
    -
    -
    - parameter.keystore_location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for securing a RabbitMQ connection. By default, the keystore file of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - parameter.keystore_type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file. By default, the keystore type of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - parameter.keystore_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used for securing a RabbitMQ connection. This keystore password is used when accessing the keys in the keystore. By default, the keystore password of the primary keystore is enabled for this purpose.

    -
    -
    -
    -
    - parameter.truststore_location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2truststore.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The path to the keystore file that is used for storing the trusted digital certificates. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - parameter.truststore_type -
    -
    -
    -

    - string - Required -

    -
    - Default: JKS -
    -
    - Possible Values: "JKS" or "PKCS12" -
    -
    -
    -

    The type of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - parameter.truststore_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -

    The password of the keystore file that is used as the trust store. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## RabbitMQ Sender - -
    -
    -
    -
    - - - -
    -
    -
    [transport.rabbitmq]
    -sender_enable = true
    -
    -[[transport.rabbitmq.sender]]
    -name = "rabbitMQSender"
    -parameter.hostname = "localhost"
    -parameter.port = 5672
    -parameter.username = "guest"
    -parameter.password = "guest"
    -parameter.exchange_name = "amq.direct"
    -parameter.routing_key = "MyQueue"
    -parameter.reply_to_name = ""
    -parameter.queue_delivery_mode = 1 # 1/2
    -parameter.exchange_type = ""
    -parameter.queue_name = "MyQueue"
    -parameter.queue_durable = false
    -parameter.queue_exclusive = false
    -parameter.queue_auto_delete = false
    -parameter.exchange_durable = ""
    -parameter.queue_auto_declare = ""
    -parameter.exchange_auto_declare = ""
    -parameter.connection_pool_size = 10
    -
    -
    -
    -
    -
    - [transport.rabbitmq] - -

    - This configuration header is required for enabling the RabbitMQ sender in the Micro Integrator. Read more about connecting the Micro Integrator with RabbitMQ. -

    -
    -
    -
    -
    - sender_enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Set this parameter to 'true' if you want to configure the Micro Integrator to send messages to a RabbitMQ client.

    -
    -
    -
    -
    - [[transport.rabbitmq.sender]] - -

    - This configuration header is optional when you have the RabbitMQ sender enabled ([transport.rabbitmq]). Read more about connecting the Micro Integrator with RabbitMQ. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the broker.

    -
    -
    -
    -
    - parameter.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: localhost -
    -
    - Possible Values: - -
    -
    -
    -

    The IP address of the server node.

    -
    -
    -
    -
    - parameter.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 5672 -
    -
    - Possible Values: - -
    -
    -
    -

    The port on which the RabbitMQ broker can be accessed.

    -
    -
    -
    -
    - parameter.username -
    -
    -
    -

    - string - Required -

    -
    - Default: guest -
    -
    - Possible Values: - -
    -
    -
    -

    The user name for connecting to the RabbitMQ broker.

    -
    -
    -
    -
    - parameter.password -
    -
    -
    -

    - string - Required -

    -
    - Default: guest -
    -
    - Possible Values: - -
    -
    -
    -

    The password for connecting to the RabbitMQ broker.

    -
    -
    -
    -
    - parameter.exchange_name -
    -
    -
    -

    - string - Required -

    -
    - Default: amq.direct -
    -
    - Possible Values: - -
    -
    -
    -

    Name of the RabbitMQ exchange to which the queue is bound. Use this parameter instead of parameter.routing_key if you need to use the default exchange and publish to a queue.

    -
    -
    -
    -
    - parameter.routing_key -
    -
    -
    -

    - string - Required -

    -
    - Default: MyQueue -
    -
    - Possible Values: - -
    -
    -
    -

    The routing key of the queue.

    -
    -
    -
    -
    - parameter.queue_name -
    -
    -
    -

    - string - Required -

    -
    - Default: MyQueue -
    -
    - Possible Values: - -
    -
    -
    -

    The queue name to send or consume messages. If you do not specify this parameter, you need to specify the parameter.routing_key parameter.

    -
    -
    -
    -
    - parameter.reply_to_name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the callback queue. Specify this parameter if you expect a response.

    -
    -
    -
    -
    - parameter.queue_delivery_mode -
    -
    -
    -

    - integer - Required -

    -
    - Default: - -
    -
    - Possible Values: 1 (non-persistent) or 2 (persistent) -
    -
    -
    -

    The delivery mode of the queue: 1 (non-persistent) or 2 (persistent).
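For example, to ask the broker to persist published messages:

```toml
parameter.queue_delivery_mode = 2   # 1 = non-persistent, 2 = persistent
```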

    -
    -
    -
    -
    - parameter.exchange_type -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The type of the exchange.


    -
    -
    -
    -
    - parameter.queue_durable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether the queue should remain declared even if the broker restarts.

    -
    -
    -
    -
    - parameter.queue_exclusive -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Whether the queue should be exclusive or should be consumable by other connections. The default value is false.

    -
    -
    -
    -
    - parameter.queue_auto_delete -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Specifies whether to keep the queue even if it is not being consumed anymore.

    -
    -
    -
    -
    - parameter.exchange_auto_declare -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to create exchanges if they are not present. However, you should set this parameter only if the exchanges are not already declared on the broker. Setting this parameter to false improves RabbitMQ transport performance.

    -
    -
    -
    -
    - parameter.queue_auto_declare -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether to create queues if they are not present. However, you should set this parameter only if the queues are not already declared on the broker. Setting this parameter to false improves RabbitMQ transport performance.

    -
    -
    -
    -
    - parameter.exchange_durable -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specifies whether the exchange should remain declared even if the broker restarts.


    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## FIX Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.fix]
    -
    -listener.enable = false
    -listener.parameter.customParameter = ""
    -sender.enable = false
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.fix] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the FIX transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the FIX transport listener.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the FIX transport sender.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## MQTT Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.mqtt]
    -
    -listener.enable = false
    -listener.hostname = "$ref{server.hostname}"
    -listener.connection_factory = "mqttConFactory"
    -listener.server_port = 1883
    -listener.client_id = "client-id-1234"
    -listener.topic_name = "esb.test"
    -
    -# optional parameters
    -listener.subscription_qos = 0
    -listener.session_clean = false
    -listener.enable_ssl = false
    -listener.subscription_username = ""
    -listener.subscription_password = ""
    -listener.temporary_store_directory = ""
    -listener.blocking_sender = false
    -listener.connect_type = "text/plain"
    -listener.message_retained = false
    -
    -listener.parameter.customParameter = ""
    -
    -sender.enable = false
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.mqtt] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the MQTT transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the MQTT transport listener.

    -
    -
    -
    -
    - listener.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: $ref{server.hostname} -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the host. By default, the hostname of the Micro Integrator server is used.

    -
    -
    -
    -
    - listener.connection_factory -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the MQTT connection factory.

    -
    -
    -
    -
    - listener.server_port -
    -
    -
    -

    - integer - Required -

    -
    - Default: - -
    -
    - Possible Values: "1883" or "1885" -
    -
    -
    -

    The port on which the MQTT broker is accessed.

    -
    -
    -
    -
    - listener.client_id -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The client ID.

    -
    -
    -
    -
    - listener.topic_name -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The name of the topic.

    -
    -
    -
    -
    - listener.parameter.customParameter -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Replace 'customParameter' with a required parameter name.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the MQTT transport sender.

    -
    -
    -
    -
    - sender.parameter.customParameter -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Replace 'customParameter' with a required parameter name.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## SAP Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.sap]
    -
    -listener.idoc.enable = true
    -listener.bapi.enable = true
    -listener.idoc.class = "org.wso2.carbon.transports.sap.SAPTransportListener"
    -listener.idoc.parameter.customParameter = ""
    -listener.bapi.class = "org.wso2.carbon.transports.sap.SAPTransportListener"
    -listener.bapi.parameter.customParameter = ""
    -sender.idoc.enable = true
    -sender.bapi.enable = true
    -sender.idoc.class = "org.wso2.carbon.transports.sap.SAPTransportSender"
    -sender.idoc.parameter.customParameter = ""
    -sender.bapi.class = "org.wso2.carbon.transports.sap.SAPTransportSender"
    -sender.bapi.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.sap] - -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate with SAP. -

    -
    -
    -
    -
    - listener.idoc.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling SAP idoc transport listener.

    -
    -
    -
    -
    - listener.bapi.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling SAP bapi transport listener.

    -
    -
    -
    -
    - listener.idoc.class -
    -
    -
    -

    - string - Required -

    -
    - Default: org.wso2.carbon.transports.sap.SAPTransportListener -
    -
    - Possible Values: - -
    -
    -
    -

    The class that implements the SAP transport listener for the SAP IDoc library.

    -
    -
    -
    -
    - listener.bapi.class -
    -
    -
    -

    - string - Required -

    -
    - Default: org.wso2.carbon.transports.sap.SAPTransportListener -
    -
    - Possible Values: - -
    -
    -
    -

    The class that implements the SAP transport listener for the SAP BAPI library.

    -
    -
    -
    -
    - sender.idoc.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the SAP idoc transport sender.

    -
    -
    -
    -
    - sender.bapi.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the SAP bapi transport sender.

    -
    -
    -
    -
    - sender.idoc.class -
    -
    -
    -

    - string - -

    -
    - Default: org.wso2.carbon.transports.sap.SAPTransportSender -
    -
    - Possible Values: - -
    -
    -
    -

    The class that implements the SAP transport sender for the SAP IDoc library.

    -
    -
    -
    -
    - sender.bapi.class -
    -
    -
    -

    - string - -

    -
    - Default: org.wso2.carbon.transports.sap.SAPTransportSender -
    -
    - Possible Values: - -
    -
    -
    -

    The class that implements the SAP transport sender for the SAP BAPI library.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## MSMQ Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.msmq]
    -
    -listener.enable = false
    -listener.hostname = "$ref{server.hostname}"
    -listener.parameter.customParameter = ""
    -
    -sender.enable = false
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.msmq] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the MSMQ transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling MSMQ transport listener.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling MSMQ transport sender.

    -
    -
    -
    -
    - listener.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: $ref{server.hostname} -
    -
    - Possible Values: - -
    -
    -
    -

    The hostname.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## TCP Transport (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.tcp]
    -
    -listener.enable = false
    -listener.port = 8000
    -listener.hostname = "$ref{server.hostname}"
    -listener.content_type = ["application/xml"]
    -listener.response_client = true
    -listener.parameter.customParameter = ""
    -
    -sender.enable = true
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.tcp] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the TCP transport. Note that the list of parameters given below can be used for the non-blocking transport as well as the blocking transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the TCP transport listener.

    -
    -
    -
    -
    - listener.port -
    -
    -
    -

    - integer - Required -

    -
    - Default: 8000 -
    -
    - Possible Values: A positive integer less than 65535 -
    -
    -
    -

    The port on which the TCP server should listen for incoming messages.

    -
    -
    -
    -
    - listener.hostname -
    -
    -
    -

    - string - Required -

    -
    - Default: $ref{server.hostname} -
    -
    - Possible Values: - -
    -
    -
    -

    The host name of the server to be displayed in WSDLs, etc.

    -
    -
    -
    -
    - listener.content_type -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: "application/xml", "application/json", or "text/html" -
    -
    -
    -

    The content type of the input message.

    -
    -
    -
    -
    - listener.response_client -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Whether or not the client needs to get the response.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the TCP transport sender.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## TCP Transport (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.blocking.tcp]
    -
    -listener.enable = false
    -listener.port = 8000
    -listener.hostname = "$ref{server.hostname}"
    -listener.content_type = ["application/xml"]
    -listener.response_client = true
    -listener.parameter.customParameter = ""
    -
    -sender.enable = false
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.blocking.tcp] - Required -

    - This configuration header groups the parameters that are used to configure the TCP transport in blocking mode. You can use the same list of parameters that are available for the non-blocking TCP transport. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## Websocket Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.ws]
    -
    -sender.enable = false
    -sender.outflow_dispatch_sequence = "outflowDispatchSeq"
    -sender.outflow_dispatch_fault_sequence = "outflowFaultSeq"      
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.ws] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the Websocket transport. -

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the websocket transport sender.

    -
    -
    -
    -
    - sender.outflow_dispatch_sequence -
    -
    -
    -

    - string - -

    -
    - Default: outflowDispatchSeq -
    -
    - Possible Values: - -
    -
    -
    -

    The sequence for the back-end to client mediation.

    -
    -
    -
    -
    - sender.outflow_dispatch_fault_sequence -
    -
    -
    -

    - string - -

    -
    - Default: outflowFaultSeq -
    -
    - Possible Values: - -
    -
    -
    -

    The fault sequence for the back-end to client mediation path.

    -
    -
    -
    -
    - sender.parameter.customParameter -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Replace 'customParameter' with a required parameter name.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## Secure Websocket Transport - -
    -
    -
    -
    - - - -
    -
    -
    [transport.wss]
    -
    -sender.enable = false
    -sender.outflow_dispatch_sequence = "outflowDispatchSeq"
    -sender.outflow_dispatch_fault_sequence = "outflowFaultSeq"
    -sender.parameter.customParameter = ""
    -
    -sender.truststore_location = "$ref{truststore.file_name}"
    -sender.truststore_password = "$ref{truststore.password}"
    -
    -
    -
    -
    -
    - [transport.wss] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the secured Websocket transport. -

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the websocket secured transport sender.

    -
    -
    -
    -
    - sender.outflow_dispatch_sequence -
    -
    -
    -

    - string - -

    -
    - Default: outflowDispatchSeq -
    -
    - Possible Values: - -
    -
    -
    -

    The sequence for the back-end to client mediation.

    -
    -
    -
    -
    - sender.outflow_dispatch_fault_sequence -
    -
    -
    -

    - string - -

    -
    - Default: outflowFaultSeq -
    -
    - Possible Values: - -
    -
    -
    -

    The fault sequence for the back-end to client mediation path.

    -
    -
    -
    -
    - sender.truststore_location -
    -
    -
    -

    - string - Required -

    -
    - Default: MI_HOME/repository/resources/security/wso2truststore.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The file path to the truststore that stores the trusted digital certificates for websocket use cases. By default, the product's trust store is configured for this purpose.

    -
    -
    -
    -
    - sender.truststore_password -
    -
    -
    -

    - string - Required -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used as the trust store.

    -
    -
    -
    -
    - sender.parameter.customParameter -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Replace 'customParameter' with a required parameter name.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## UDP Transport (non-blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.udp]
    -
    -listener.enable = false
    -listener.parameter.customParameter = ""
    -
    -sender.enable =false               
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.udp] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to communicate through the UDP transport. Note that the list of parameters given below can be used for the non-blocking transport as well as the blocking transport. -

    -
    -
    -
    -
    - listener.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the UDP transport listener.

    -
    -
    -
    -
    - sender.enable -
    -
    -
    -

    - boolean - Required -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    The parameter for enabling the UDP transport sender.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## UDP Transport (blocking mode) - -
    -
    -
    -
    - - - -
    -
    -
    [transport.blocking.udp]
    -
    -listener.enable = false
    -listener.parameter.customParameter = ""
    -
    -sender.enable = false        
    -sender.parameter.customParameter = ""
    -
    -
    -
    -
    -
    - [transport.blocking.udp] - Required -

    - This configuration header groups the parameters that are used to configure the UDP transport in blocking mode. You can use the same list of parameters that are available for the non-blocking UDP transport. -

    -
    -
    - -
    -
    -
    -
    -
    -
    -
    - - -## Custom Transport Listener - -
    -
    -
    -
    - - - -
    -
    -
    [[custom_transport.listener]]
    -class = "org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportListener"
    -protocol = "hl7"
    -
    -
    -
    -
    -
    - [[custom_transport.listener]] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to receive messages through a custom transport. -

    -
    -
    -
    -
    - class -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The class implementing the custom transport. For example, if you are using an HL7 transport listener, use the following class: org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportListener.

    -
    -
    -
    -
    - protocol -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The transport protocol for the custom implementation. For example: hl7.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - - -## Custom Transport Sender - -
    -
    -
    -
    - - - -
    -
    -
    [[custom_transport.sender]]
    -class = "org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportSender"
    -protocol = "hl7"
    -
    -
    -
    -
    -
    - [[custom_transport.sender]] - Required -

    - This configuration header groups the parameters that configure the Micro Integrator to send messages through a custom transport. -

    -
    -
    -
    -
    - class -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The class implementing the custom transport. For example, if you are using an HL7 transport listener, use the following class: org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportSender.

    -
    -
    -
    -
    - protocol -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The transport protocol for the custom implementation. For example: hl7.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
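    A custom transport is typically registered with both a listener and a sender so that the Micro Integrator can receive and send messages over the same protocol. The following sketch simply combines the two HL7 snippets shown above into one deployment.toml file:

    ```toml
    [[custom_transport.listener]]
    class = "org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportListener"
    protocol = "hl7"

    [[custom_transport.sender]]
    class = "org.wso2.micro.integrator.business.messaging.hl7.transport.HL7TransportSender"
    protocol = "hl7"
    ```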
    - - -## Message Mediation - -
    -
    -
    -
    - - - -
    -
    -
    [mediation]
    -synapse.core_threads = 20
    -synapse.max_threads = 100
    -synapse.threads_queue_length = 10
    -
    -synapse.global_timeout_interval = "120000ms"
    -
    -synapse.enable_xpath_dom_failover=true
    -synapse.temp_data_chunk_size=3072
    -
    -synapse.command_debugger_port=9005
    -synapse.event_debugger_port=9006
    -
    -synapse.script_mediator_pool_size=15
    -synapse.enable_xml_nil=false
    -synapse.disable_auto_primitive_regex = "^-?(0|[1-9][0-9]*)(\\.[0-9]+)?([eE][+-]?[0-9]+)?$"
    -synapse.disable_custom_replace_regex = "@@@"
    -synapse.enable_namespace_declaration = false
    -synapse.build_valid_nc_name = false
    -synapse.enable_auto_primitive = false
    -synapse.json_out_auto_array = false
    -synapse.preserve_namespace_on_xml_to_json=false
    -flow.statistics.enable=false
    -flow.statistics.capture_all=false
    -statistics.enable_clean=true
    -statistics.clean_interval = "1000ms"
    -stat.tracer.collect_payloads=false
    -stat.tracer.collect_mediation_properties=false
    -inbound.core_threads = 20
    -inbound.max_threads = 100
    -
    -
    -
    -
    -
    - [mediation] - Required -

    - This configuration header groups the parameters used for tuning the mediation process (Synapse engine) of the Micro Integrator. These parameters are mainly used when mediators such as Iterate and Clone (which use internal thread pools) are used. -

    -
    -
    -
    -
    - synapse.core_threads -
    -
    -
    -

    - integer - -

    -
    - Default: 20 -
    -
    - Possible Values: - -
    -
    -
    -

    The initial number of synapse threads in the pool. This parameter is applicable only if the Iterate and Clone mediators are used to handle a higher load. These mediators use a thread pool to create new threads when processing and sending messages in parallel; this parameter configures the initial size of that pool. Increase the number of threads specified here as required to balance an increased load. Increasing the value specified for this parameter results in higher performance of the Iterate and Clone mediators.

    -
    -
    -
    -
    - synapse.max_threads -
    -
    -
    -

    - integer - -

    -
    - Default: 100 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of synapse threads in the pool. This parameter is applicable only if the Iterate and Clone mediators are used to handle a higher load. The number of threads specified for this parameter should be increased as required to balance an increased load.

    -
    -
    -
    -
    - synapse.threads_queue_length -
    -
    -
    -

    - integer - -

    -
    - Default: 10 -
    -
    - Possible Values: - -
    -
    -
    -

    The length of the queue that is used to hold the runnable tasks to be executed by the pool. This parameter is applicable only if the Iterate and Clone mediators are used to handle a higher load. You can specify a finite queue length by giving any positive number. If this parameter is set to -1, the task queue length is infinite. If the queue length is finite, requests may be rejected when the task queue is full and all the cores are occupied. If the queue length is infinite and thread locking occurs, the server can run out of memory. Therefore, you need to decide on an optimal value based on the actual load.

    -
    -
    -
    -
    - synapse.global_timeout_interval -
    -
    -
    -

    - integer - -

    -
    - Default: 120000 -
    -
    - Possible Values: - -
    -
    -
    -

    The maximum number of milliseconds within which a response for the request should be received. A response that arrives after the specified number of milliseconds cannot be correlated with the request. Hence, a warning is logged and the request is dropped. This parameter is also referred to as the time-out handler.

    -
    -
    -
    -
    - synapse.enable_xpath_dom_failover -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    If this parameter is set to true, the Micro Integrator can switch to XPath 2.0. This parameter can be set to false if XPath 2.0 evaluations cause performance degradation. The Micro Integrator uses the Saxon Home Edition when implementing XPath 2.0 functionalities, and thus supports all the functions that are shipped with it. For more information on the supported functions, see the Saxon documentation.

    -
    -
    -
    -
    - synapse.temp_data_chunk_size -
    -
    -
    -

    - integer - -

    -
    - Default: 3072 -
    -
    - Possible Values: - -
    -
    -
    -

    The message size that can be processed by the Micro Integrator.

    -
    -
    -
    -
    - synapse.script_mediator_pool_size -
    -
    -
    -

    - integer - -

    -
    - Default: 15 -
    -
    - Possible Values: - -
    -
    -
    -

    When using externally referenced scripts, this parameter specifies the size of the script engine pool that should be used per script mediator. The script engines from this pool are used for externally referenced script execution where updates to external scripts on an engine currently in use may otherwise not be thread safe. It is recommended to keep this value at a reasonable size since there will be a pool per externally referenced script.

    -
    -
    -
    -
    - synapse.preserve_namespace_on_xml_to_json -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Preserves the namespace declarations in the JSON output during XML to JSON message transformations.

    -
    -
    -
    -
    - flow.statistics.enable -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Set this property to true and enable statistics for the required integration artifact to record information such as the following: <ul><li>The time spent on each mediator.</li><li>The time spent on processing each message.</li><li>The fault count of a single message flow.</li></ul>

    -
    -
    -
    -
    - flow.statistics.capture_all -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Set this property to 'true' (together with setting the flow.statistics.enable property to 'true') to enable mediation statistics for all integration artifacts by default. If you set this property to 'false', you need to set the flow.statistics.enable property to 'true' and manually enable statistics for the required integration artifact.

    -
    -
    -
    -
    - statistics.enable_clean -
    -
    -
    -

    - boolean - -

    -
    - Default: true -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    If this parameter is set to true, all the existing statistics are cleared before processing a request. This is recommended if you want to increase the processing speed.

    -
    -
    -
    -
    - stat.tracer.collect_payloads -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Set this property to true and enable tracing for the required integration artifact to record the message payload before and after the message mediation performed by individual mediators.

    -
    -
    -
    -
    - stat.tracer.collect_mediation_properties -
    -
    -
    -

    - boolean - -

    -
    - Default: false -
    -
    - Possible Values: "true" or "false" -
    -
    -
    -

    Set this property to true and enable tracing for the required integration artifact to record the following information:<ul><li>Message context properties.</li><li>Message transport-scope properties.</li></ul>

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
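    As an illustrative sketch only (the values below are arbitrary and should be derived from load testing), tuning the Iterate/Clone thread pool and the global timeout described above could look as follows:

    ```toml
    [mediation]
    # Larger pool for heavy Iterate/Clone fan-out; illustrative values only.
    synapse.core_threads = 50
    synapse.max_threads = 200
    # -1 makes the task queue unbounded; monitor memory consumption (see above).
    synapse.threads_queue_length = -1
    # Allow slow backends up to three minutes before the time-out handler drops the request.
    synapse.global_timeout_interval = "180000ms"
    ```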
    - - -## Synapse Handlers - -
    -
    -
    -
    - - - -
    -
    -
    [[synapse_handlers]]
    -name = 
    -class = 
    -
    -
    -
    -
    -
    -
    - [[synapse_handlers]] - Required -

    - This configuration header is required for configuring a synapse handler with the name and the implementation class. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Name of the synapse handler.

    -
    -
    -
    -
    - class -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The fully qualified class name of the synapse handler implementation.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
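    For example, a filled-in sketch of the configuration above (the handler name and class below are hypothetical; substitute your own implementation):

    ```toml
    [[synapse_handlers]]
    # Hypothetical handler name and implementation class.
    name = "CustomLoggingHandler"
    class = "org.example.handlers.CustomLoggingHandler"
    ```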
    - - -## External Vault Configurations - -
    -
    -
    -
    - - - -
    -
    -
    #Static Token Authentication
    -
    -[[external_vault]]
    -name = "hashicorp" # required
    -address = "http://127.0.0.1:8200" # required
    -rootToken = "ROOT_TOKEN" # required
    -cachableDuration = "15000"
    -engineVersion = "2"
    -# If namespace is used, apply the namespace value:
    -namespace = "NAMESPACE"
    -# If HashiCorp vault server is hosted in HTTPS protocol, apply below fields
    -trustStoreFile = "${carbon.home}/repository/resources/security/client-truststore.jks"
    -keyStoreFile = "${carbon.home}/repository/resources/security/wso2carbon.jks"
    -keyStorePassword = "KEY_STORE_PASSWORD"
    -
    -#AppRole Authentication
    -
    -[[external_vault]]
    -name = "hashicorp" # required
    -address = "http://127.0.0.1:8200" # required
    -roleId = "ROLE_ID" # required
    -secretId = "SECRET_ID" # required
    -cachableDuration = "15000"
    -engineVersion = "2"
    -# If namespace is used, apply the namespace value:
    -namespace = "NAMESPACE"
    -# If HashiCorp vault server is hosted in HTTPS protocol, apply below fields
    -trustStoreFile = "${carbon.home}/repository/resources/security/client-truststore.jks"
    -keyStoreFile = "${carbon.home}/repository/resources/security/wso2carbon.jks"
    -keyStorePassword = "KEY_STORE_PASSWORD"
    -
    -#LDAP Authentication
    -
    -[[external_vault]]
    -name = "hashicorp" # required
    -address = "http://127.0.0.1:8200" # required
    -ldapUsername = "USERNAME" # required
    -ldapPassword = "PASSWORD" # required
    -cachableDuration = "15000"
    -engineVersion = "2"
    -# If HashiCorp vault server is hosted in HTTPS protocol, apply below fields
    -trustStoreFile = "${carbon.home}/repository/resources/security/client-truststore.jks"
    -keyStoreFile = "${carbon.home}/repository/resources/security/wso2carbon.jks"
    -keyStorePassword = "KEY_STORE_PASSWORD"
    -
    -
    -
    -
    -
    - [[external_vault]] - Required -

    - This configuration header is required for configuring an external vault for secrets. Read more about using HashiCorp secrets. -

    -
    -
    -
    -
    - name -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: hashicorp -
    -
    -
    -

    The name of the vault. For example, specify 'hashicorp' when connecting to the HashiCorp vault.

    -
    -
    -
    -
    - address -
    -
    -
    -

    - string - Required -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    The URL for connecting to the vault.

    -
    -
    -
    -
    - rootToken -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specify the root token generated from the HashiCorp server. This is only applicable if static token authentication is used when connecting the Micro Integrator to the HashiCorp server.

    -
    -
    -
    -
    - roleId -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specify the role ID generated from HashiCorp. The secret ID and role ID you specify in the deployment.toml file are used internally to generate a token and authenticate the HashiCorp server connection. The role ID is only applicable if AppRole Pull authentication is used when connecting the Micro Integrator to the HashiCorp server.

    -
    -
    -
    -
    - secretId -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Specify the secret ID generated from HashiCorp. The secret ID and role ID you specify in the deployment.toml file are used internally to generate a token and authenticate the HashiCorp server connection. The secret ID you generate in HashiCorp may expire. If that happens, you can renew the security token. The secret ID is only applicable if AppRole Pull authentication is used when connecting the Micro Integrator to the HashiCorp server.

    -
    -
    -
    -
    - cachableDuration -
    -
    -
    -

    - string - -

    -
    - Default: 15000 -
    -
    - Possible Values: - -
    -
    -
    -

    All resources fetched from the HashiCorp vault are cached for this number of milliseconds.

    -
    -
    -
    -
    - engineVersion -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: 2 -
    -
    -
    -

    The version of the HashiCorp secret engine.

    -
    -
    -
    -
    - namespace -
    -
    -
    -

    - string - -

    -
    - Default: - -
    -
    - Possible Values: - -
    -
    -
    -

    Namespace support is available only in the Enterprise edition of HashiCorp. The namespace value specified here applies globally to HashiCorp secrets in all synapse configurations.

    -
    -
    -
    -
    - trustStoreFile -
    -
    -
    -

    - string - -

    -
    - Default: ${carbon.home}/repository/resources/security/client-truststore.jks -
    -
    - Possible Values: - -
    -
    -
    -

    The keystore file (trust store) that is used to store the digital certificates that the Micro Integrator trusts for SSL communication.

    -
    -
    -
    -
    - keyStoreFile -
    -
    -
    -

    - string - -

    -
    - Default: ${carbon.home}/repository/resources/security/wso2carbon.jks -
    -
    - Possible Values: - -
    -
    -
    -

    This keystore is used for SSL handshaking when the Micro Integrator communicates with the HashiCorp server.

    -
    -
    -
    -
    - keyStorePassword -
    -
    -
    -

    - string - -

    -
    - Default: wso2carbon -
    -
    - Possible Values: - -
    -
    -
    -

    The password of the keystore file that is used for SSL communication. If you are using the default keystore file in the Micro Integrator, the default password is 'wso2carbon'.

    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
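    For instance, when only static token authentication is used against a vault served over plain HTTP, the configuration above reduces to the following minimal sketch (the address and token below are placeholders):

    ```toml
    [[external_vault]]
    name = "hashicorp"
    address = "http://127.0.0.1:8200" # placeholder vault address
    rootToken = "ROOT_TOKEN"          # placeholder static token
    cachableDuration = "15000"
    engineVersion = "2"
    ```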
    - diff --git a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration.md b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration.md deleted file mode 100644 index 28abfacc50..0000000000 --- a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration.md +++ /dev/null @@ -1,57 +0,0 @@ -# Setting up the Amazon DynamoDB Connector - -Amazon DynamoDB Connector allows you to access the [Amazon DynamoDB REST API](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html) from integration sequence. - -Amazon DynamoDB makes it simple and cost-effective to store and retrieve any amount of data, as well as serve any level of request traffic. It uses a NoSQL database model, which is non-relational, allowing documents, graphs, and columnar among its data models. - -## Configuring message builders/formatters - -Before you start configuring the Amazon DynamoDB connector, you also need to configure the integration runtime, and we refer to that location as ``. - -Specific message builders/formatters configuration needs to be enabled in the product as shown below before starting the integration service. - -If you are using the Micro Integrator of **EI7** or **APIM 4.0.0**, you need to enable this property by adding the following to the **/conf/deployment.toml** file. You can further refer to the [Working with Message Builders and Formatters]({{base_path}}/reference/config-catalog/#http-transport) and [Product Configurations]({{base_path}}/install-and-setup/message_builders_formatters/message-builders-and-formatters/) documentations. - -```toml -[[custom_message_formatters]] -class="org.apache.synapse.commons.json.JsonStreamFormatter" -content_type = "application/x-amz-json-1.0" - -[[custom_message_builders]] -class="org.apache.synapse.commons.json.JsonStreamBuilder" -content_type = "application/x-amz-json-1.0" -``` - -If you are using **EI 6**, you can enable this property by doing the following Axis2 configurations in the **\repository\conf\axis2\axis2.xml** file. - -**messageFormatters** - -```xml - -``` -**messageBuilders** - -```xml - -``` - -> **Note**: If you want to perform blocking invocations, ensure that the above builder and formatter are added and enabled in the **\repository\conf\axis2\axis2_blocking_client.xml** file. - -## Setting up the AWS Account and DynamoDB Environment - -Please follow the steps mentioned in the [Setting up the Amazon Lambda Environment]({{base_path}}/reference/connectors/amazonlambda-connector/setting-up-amazonlambda/) document in order to create an Amazon account and obtain the access key id and secret access key. - -Please find the following steps to navigate in to the Amazon DynamoDB using the AWS account. - -1. Sign in to the AWS Management Console and search **Database** section under **Services**. - - Amazon Dynamodb aws console - -2. You can see the following operations and sub operations. The output in the AWS DynamoDB console are shown below. 
- - - Working with Items in Amazon DynamoDB - - Working with Tables in Amazon DynamoDB - -Amazon Dynamodb Table view \ No newline at end of file diff --git a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-example.md b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-example.md deleted file mode 100644 index ce62343372..0000000000 --- a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-example.md +++ /dev/null @@ -1,940 +0,0 @@ -# Amazon DynamoDB Connector Example - - Amazon DynamoDB Connector allows you to access the [Amazon DynamoDB REST API](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html) from an integration sequence. - -## What you'll build - -Given below is a sample scenario that demonstrates how to work with the Amazon DynamoDB Connector and how to perform various `table` and `items` operations with Amazon DynamoDB. - -This example explains how to use Amazon DynamoDB Connector to: - -1. Create a table (a location for storing employee details) in Amazon DynamoDB. -2. Insert employee details (items) in to the created table. -3. Update employee details table. -4. Retrieve information about the inserted employee details (items). -5. Remove inserted employee details (items). -6. Retrieve list of tables. -7. Remove created employee details table. - -All seven operations are exposed via an API. The API with the context `/resources` has seven resources - -* `/addtable` : Creates a new table in the Amazon DynamoDB with the specified table name to store employee details. -* `/insertdetails` : Insert employee data (items) and store in the specified table. -* `/updatetable` : Update specified table (provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a specified table). -* `/listdetails` : Retrieve information about the added employee details (items). -* `/deletedetails` : Remove added employee details from the specified table (items). -* `/listtable` : Retrieve information about the created tables. -* `/deletetable` : Remove created table in the Amazon DynamoDB. - -For more information about these operations, please refer to the [Amazon DynamoDB connector reference guide]({{base_path}}/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration/). - -> **Note**: Before invoking the API, you need to configure message builders/formatters in deployment.toml. See [Setting up the Amazon DynamoDB Connector](amazondynamodb-connector-configuration/) documentation for more information. - -The following diagram shows the overall solution. The user creates a table, stores some employee details (items) into the table, and then receives it back. To invoke each operation, the user uses the same API. - -Amazon DynamoDB connector example - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Configure the connector in WSO2 Integration Studio - -Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources. - -### Import the connector - -Follow these steps to set up the Integration Project and the Connector Exporter Project. 
- -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -### Add integration logic - -First create an API, which will be where we configure the integration logic. Right click on the created Integration Project and select, **New** -> **Rest API** to create the REST API. Specify the API name as `amazonDynamoDBAPI` and API context as `/resources`. - -Adding a Rest API - -#### Configuring the API - -Now follow the steps below to add resources to the API. - -#### Configure a resource for the addtable operation - -1. Initialize the connector. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `init` operation into the Design pane. - - Drag and drop init operation - - 2. Add the property values into the `init` operation as shown below. Replace the `region`, `accessKeyId`, `secretAccessKey`, `blocking` with your values. - - - **region** : The region of the application access. - - **accessKeyId** : The AWS secret access key. - - **secretAccessKey** : The AWS accessKeyId of the user account to generate the signature. - - **blocking** : Boolean type, this property helps the connector perform blocking invocations to AmazonDynamoDB. - - Add values to the init operation - -2. Set up the createTable operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `createTable` operation into the Design pane. - - Drag and drop create table operation - - 2. The createTable operation creates a new table. Table names must be unique within each region. The `createTable` operation parameters are listed here. - - - **attributeDefinitions** : A list of attributes that describe the key schema for the table and indexes. If you are adding a new global secondary index to the table, AttributeDefinitions should include the key element(s) of the new index. - - **tableName** : The name of the table to create. - - **keySchema** : Specifies the attributes that make up the primary key for a table or an index. The attributes in keySchema must also be defined in attributeDefinitions. - - **localSecondaryIndexes** : One or more local secondary indexes (the maximum is five) to be created on the table. Each index is scoped to a given partition key value. There is a 10 GB size limit per partition key value. Alternately, the size of a local secondary index is unconstrained. - - **provisionedThroughput** : Represents the provisioned throughput setting for a specified table or index. - - While invoking the API, the above five parameter values come as a user inputs. - - Drag and drop create table operation - - 3. To get the input values in to the API we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below. - - Add property mediators - - The parameters available for configuring the Property mediator are as follows: - - > **Note**: That the properties should be add to the pallet before create the operation. - - 4. Add the property mediator to capture the `attributeDefinitions` value. - - - **name** : attributeDefinitions - - **expression** : json-eval($.attributeDefinitions) - - Add property mediators attributeDefinitions - - 5. 
Add the property mediator to capture the `tableName` values. - - - **name** : tableName - - **expression** : json-eval($.tableName) - - Add values to capture tableName - - 6. Add the property mediator to capture the `keySchema` values. - - - **name** : keySchema - - **expression** : json-eval($.keySchema) - - Add values to capture keySchema - - 7. Add the property mediator to capture the `localSecondaryIndexes` values. - - - **name** : localSecondaryIndexes - - **expression** : json-eval($.localSecondaryIndexes) - - Add values to capture localSecondaryIndexes - - 8. Add the property mediator to capture the `provisionedThroughput` values. - - - **name** : provisionedThroughput - - **expression** : json-eval($.provisionedThroughput) - - Add values to capture provisionedThroughput - -#### Configure a resource for the insertdetails operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the putItem operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `putItem` operation into the Design pane. - - Drag and drop put items operation - - 2. The putItem operation use to insert new items to the tables. `putItem` operation parameters listed here. - - - **item** : A map of attribute name/value pairs, one for each attribute. Only the primary key attributes are required, but you can optionally provide other attribute name-value pairs for the item - - **tableName** : The name of the table to contain the item. - - While invoking the API, the above two parameter values come as a user inputs. - - Drag and drop put items table operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in `addtable` operation. The parameters available for configuring the Property mediator are as follows. - - Add the property mediator to capture the `item` value. - - - **name** : item - - **expression** : json-eval($.item) - - Add property mediators to capture item - - 4. Add the property mediator to capture the `tableName` values. Please follow the steps given in `addtable` operation. - - - **name** : tableName - - **expression** : json-eval($.tableName) - -#### Configure a resource for the updatetable operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the updateTable operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `updateTable` operation into the Design pane. - - Drag and drop put items operation - - 2. The updateTable operation is used to update the created tables. The `updateTable` operation parameters are listed here. - - - **provisionedThroughput** : The new provisioned throughput setting for the specified table or index. - - **tableName** : The name of the table to contain the item. - - While invoking the API, the above two parameter values come as a user inputs. - - Drag and drop put items table operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in the `addtable` operation. The parameters available for configuring the Property mediator are as follows. 
- - Add the property mediator to capture the `provisionedThroughput` value. - - - **name** : provisionedThroughput - - **expression** : json-eval($.provisionedThroughput) - - Add property mediators to capture provisionedThroughput - - 4. Add the property mediator to capture the `tableName` values. Please follow the steps given in `addtable` operation. - - - **name** : tableName - - **expression** : json-eval($.tableName) - -#### Configure a resource for the listdetails operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the getItem operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `getItem` operation into the Design pane. - - Drag and drop get items operation - - 2. The getItem operation is used to retrieve inserted items to the tables. The `getItem` operation parameters are listed here. - - - **key** : An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. - - **tableName** : The name of the table to contain the item. - - While invoking the API, the above two parameter values come as a user inputs. - - Drag and drop put items table operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in `addtable` operation. The parameters available for configuring the Property mediator are as follows. - - Add the property mediator to capture the `key` value. - - - **name** : key - - **expression** : json-eval($.key) - - Add property mediators to capture key - - 4. Add the property mediator to capture the `tableName` values. Please follow the steps given in `addtable` operation. - - - **name** : tableName - - **expression** : json-eval($.tableName) - -#### Configure a resource for the deletedetails operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the deleteItem operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `deleteItem` operation into the Design pane. - - Drag and drop get items operation - - 2. The deleteItem operation is used to remove inserted items from the table. The `deleteItem` operation parameters are listed here. - - - **key** : An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. - - **tableName** : The name of the table to contain the item. - - **returnConsumedCapacity** : Determines the level of detail about provisioned throughput consumption that is returned in the response. - - **returnValues** : Use returnValues if you want to get the item attributes as they appeared before they were deleted. - - While invoking the API, the above two parameter values (key, tableName) come as a user inputs. - - Drag and drop put items table operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in the `addtable` operation. The parameters available for configuring the Property mediator are as follows. - - Add the property mediator to capture the `key` value. 
Please follow the steps given in `listdetails` operation. - - - **name** : key - - **expression** : json-eval($.key) - - 4. Add the property mediator to capture the `tableName` values. Please follow the steps given in `listdetails` operation. - - - **name** : tableName - - **expression** : json-eval($.tableName) - -#### Configure a resource for the listtable operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the listTables operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `listTables` operation into the Design pane. - - Drag and drop list table operation - - 2. The listTables operation use to retrieve information about the created tables. `listTables` operation parameters listed here. - - - **exclusiveStartTableName** : The first table name that the listTables operation evaluates. Use the value returned for LastEvaluatedTableName. - - **limit** : The maximum number of table names to retrieve. If this parameter is not specified, the limit is 100. - - While invoking the API, the above two parameter values come as a user inputs. - - Drag and drop list table parameter operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in `addtable` operation. The parameters available for configuring the Property mediator are as follows. - - Add the property mediator to capture the `exclusiveStartTableName` value. Please follow the steps given in the `listTables` operation. - - - **name** : exclusiveStartTableName - - **expression** : json-eval($.exclusiveStartTableName) - - Add property mediators to capture exclusiveStartTableName - - 4. Add the property mediator to capture the `limit` values. Please follow the steps given in the `listTables` operation. - - - **name** : limit - - **expression** : json-eval($.limit) - - Add property mediators to capture limit - -#### Configure a resource for the deletetable operation - -1. Initialize the connector. - - You can use the same configuration to initialize the connector. Please follow the steps given in 1.1 for setting up the `init` operation to the addtable operation. - -2. Set up the deleteTable operation. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **Amazondynamodb Connector** section. Then drag and drop the `deleteTable` operation into the Design pane. - - Drag and drop list table operation - - 2. The listTables operation is used to retrieve information about the created tables. The `deleteTable` operation parameters are listed here. - - - **exclusiveStartTableName** : The first table name that the listTables operation evaluates. Use the value returned for LastEvaluatedTableName. - - **limit** : The maximum number of table names to retrieve. If this parameter is not specified, the limit is 100. - - While invoking the API, the above two parameter values come as user inputs. - - Drag and drop list table parameter operation - - 3. Then drag and drop the `Property` mediators into the Design pane as mentioned in `addtable` operation. The parameters available for configuring the Property mediator are as follows: - - Add the property mediator to capture the `tableName` value. Please follow the steps given in the `deleteTable` and `listdetails` operations. 
- - - **name** : tableName - - **expression** : json-eval($.tableName) - -#### Get a response. - -When you are invoking the created API, the request of the message is going through the each resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond Mediator stops the processing on the current message and sends the message back to the client as a response. - -Drag and drop **respond mediator** to the **Design view**. - -Add Respond mediator - -Now you can switch into the Source view and check the XML configuration files of the created API. - -??? note "amazonDynamoDBAPI.xml" - ``` - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:attributeDefinitions} - {$ctx:tableName} - {$ctx:keySchema} - {$ctx:localSecondaryIndexes} - {$ctx:provisionedThroughput} - - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:item} - {$ctx:tableName} - - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:key} - {$ctx:tableName} - TOTAL - ALL_OLD - - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:key} - {$ctx:tableName} - - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:exclusiveStartTableName} - {$ctx:limit} - - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:tableName} - {$ctx:provisionedThroughput} - - - - - - - - - - - - - us-east-2 - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - false - - - {$ctx:tableName} - - - - - - - - - - ``` -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -1. Creating a new table in the Amazon DynamoDB with the specified table name for store employee details. - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "attributeDefinitions":[ - { - "AttributeName":"employee_id", - "AttributeType":"S" - }, - { - "AttributeName":"name", - "AttributeType":"S" - }, - { - "AttributeName":"department", - "AttributeType":"S" - } - ], - "tableName":"Employee_Details", - "keySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"name", - "KeyType":"RANGE" - } - ], - "localSecondaryIndexes":[ - { - "IndexName":"department", - "KeySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"department", - "KeyType":"RANGE" - } - ], - "Projection":{ - "ProjectionType":"KEYS_ONLY" - } - } - ], - "provisionedThroughput":{ - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - } - } - ``` - - Invoke the API as shown below using the curl command. 
- ``` - curl -v POST -d @data.json "http://localhost:8290/resources/addtable" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "TableDescription":{ - "AttributeDefinitions":[ - { - "AttributeName":"department", - "AttributeType":"S" - }, - { - "AttributeName":"employee_id", - "AttributeType":"S" - }, - { - "AttributeName":"name", - "AttributeType":"S" - } - ], - "CreationDateTime":1590068547.564, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"name", - "KeyType":"RANGE" - } - ], - "LocalSecondaryIndexes":[ - { - "IndexArn":"arn:aws:dynamodb:us-east-2:610968236798:table/Employee_Details/index/department", - "IndexName":"department", - "IndexSizeBytes":0, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"department", - "KeyType":"RANGE" - } - ], - "Projection":{ - "ProjectionType":"KEYS_ONLY" - } - } - ], - "ProvisionedThroughput":{ - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - }, - "TableArn":"arn:aws:dynamodb:us-east-2:610968236798:table/Employee_Details", - "TableId":"10520308-ae1e-4742-b9d4-fc6aae67191e", - "TableName":"Employee_Details", - "TableSizeBytes":0, - "TableStatus":"CREATING" - } - } - - ``` - -2. Insert employee details (items) and stored into the specified table. - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "tableName":"Employee_Details", - "item":{ - "employee_id":{ - "S":"001" - }, - "name":{ - "S":"Jhone Fedrick" - }, - "department":{ - "S":"Engineering" - } - } - } - ``` - - Invoke the API as shown below using the curl command. - - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/insertdetails" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ``` - {} - ``` -3. Update specified table. - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "tableName":"Employee_Details", - "provisionedThroughput":{ - "ReadCapacityUnits":12, - "WriteCapacityUnits":12 - } - } - ``` - - Invoke the API as shown below using the curl command. 
- - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/updatetable" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "TableDescription":{ - "AttributeDefinitions":[ - { - "AttributeName":"department", - "AttributeType":"S" - }, - { - "AttributeName":"employee_id", - "AttributeType":"S" - }, - { - "AttributeName":"name", - "AttributeType":"S" - } - ], - "CreationDateTime":1590068547.564, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"name", - "KeyType":"RANGE" - } - ], - "LocalSecondaryIndexes":[ - { - "IndexArn":"arn:aws:dynamodb:us-east-2:610968236798:table/Employee_Details/index/department", - "IndexName":"department", - "IndexSizeBytes":0, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"employee_id", - "KeyType":"HASH" - }, - { - "AttributeName":"department", - "KeyType":"RANGE" - } - ], - "Projection":{ - "ProjectionType":"KEYS_ONLY" - } - } - ], - "ProvisionedThroughput":{ - "LastIncreaseDateTime":1590071461.81, - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - }, - "TableArn":"arn:aws:dynamodb:us-east-2:610968236798:table/Employee_Details", - "TableId":"10520308-ae1e-4742-b9d4-fc6aae67191e", - "TableName":"Employee_Details", - "TableSizeBytes":0, - "TableStatus":"UPDATING" - } - } - ``` - -4. Retrieve information about the added employee details (items). - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "tableName":"Employee_Details", - "key":{ - "employee_id":{ - "S":"001" - }, - "name":{ - "S":"Jhone Fedrick" - } - } - } - ``` - - Invoke the API as shown below using the curl command. - - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/listdetails" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "Item":{ - "department":{ - "S":"Engineering" - }, - "name":{ - "S":"Jhone Fedrick" - }, - "employee_id":{} - } - } - - ``` -5. Remove added employee details from the specified table (items). - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "tableName":"Employee_Details", - "key":{ - "employee_id":{ - "S":"001" - }, - "name":{ - "S":"Jhone Fedrick" - } - } - } - ``` - - Invoke the API as shown below using the curl command. - - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/deletedetails" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "Attributes":{ - "department":{ - "S":"Engineering" - }, - "name":{ - "S":"Jhone Fedrick" - }, - "employee_id":{ - "S":"001" - } - }, - "ConsumedCapacity":{ - "CapacityUnits":2, - "TableName":"Employee_Details" - } - } - ``` -6. Retrieve information about the created tables. - - **Sample request** - - Save a file called data.json with the following payload. - - ```json - { - "exclusiveStartTableName":"Employee_Details", - "limit":4 - } - ``` - - Invoke the API as shown below using the curl command. - - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/listtable" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "LastEvaluatedTableName":"TTestMyTablehread", - "TableNames":[ - "Results", - "Results1", - "Results123", - "TTestMyTablehread" - ] - } - ``` -7. Remove created table in the Amazon DynamoDB. - - **Sample request** - - Save a file called data.json with the following payload. 
- - ```json - { - "tableName":"Employee_Details" - } - - ``` - - Invoke the API as shown below using the curl command. - - ``` - curl -v POST -d @data.json " http://localhost:8290/resources/deletetable" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "TableDescription":{ - "ItemCount":0, - "ProvisionedThroughput":{ - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":12, - "WriteCapacityUnits":12 - }, - "TableArn":"arn:aws:dynamodb:us-east-2:610968236798:table/Employee_Details", - "TableId":"10520308-ae1e-4742-b9d4-fc6aae67191e", - "TableName":"Employee_Details", - "TableSizeBytes":0, - "TableStatus":"DELETING" - } - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-overview.md b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-overview.md deleted file mode 100644 index e2d6582f2d..0000000000 --- a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-overview.md +++ /dev/null @@ -1,33 +0,0 @@ -# Amazon DynamoDB Connector Overview - -Amazon DynamoDB Connector allows you to access the [Amazon DynamoDB REST API](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html) from an integration sequence. - -Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. AmazonDynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, so they do not have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. - -To see the Amazon DynamoDB connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "DynamoDB". - -Amazon DynamoDB Connector Store - -## Compatibility - -| Connector version | Supported product versions | -| ------------- |------------- | -| [1.0.1](https://github.com/wso2-extensions/esb-connector-amazondynamodb/tree/1.0.1) | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0 | - -For older versions, see the details in the connector store. - -## Amazon DynamoDB Connector documentation - -* **[Amazon DynamoDB Connector Example]({{base_path}}/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-example/)**: This example explains how to perform various `table` and `items` operations with Amazon DynamoDB. - -* **[Amazon DynamoDB Connector Reference]({{base_path}}/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-configuration/)**: This documentation provides a reference guide for the Amazon DynamoDB Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Amazon DynamoDB Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-amazondynamodb) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. 
diff --git a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md deleted file mode 100644 index ff6ba7cd46..0000000000 --- a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md +++ /dev/null @@ -1,1865 +0,0 @@ -# Amazon DynamoDB Connector Reference - -The following operations allow you to work with the Amazon DynamoDB Connector. Click an operation name to see parameter details and samples on how to use it. - ---- - -## Initialize the connector - -To use the Amazon DynamoDB connector, add the element in your configuration before carrying out any other operations. To authenticate, it uses the Signature Version 4 signing specification, which describes how to construct signed requests to AWS. Whenever you send a request to AWS, you must include authorization information with your request so that AWS can verify the authenticity of the request. AWS uses the authorization information from your request to recreate your signature and then compares that signature with the one that you sent. These two signatures must match for you to successfully access AWS. Click here for further reference on the signing process. - -??? note "init" - The init operation is used to initialize the connection to Amazon S3. - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Parameter Name | Description | Required
    region | The region of the application access. | Yes
    secretAccessKey | The secret access key. | Yes
    accessKeyId | The accessKeyId of the user account to generate the signature. | Yes
    blocking | Boolean type, this property helps the connector perform blocking invocations to Amazon DynamoDB. | Yes
    - - **Sample configuration** - - ```xml - - {$ctx:region} - {$ctx:secretAccessKey} - {$ctx:accessKeyId} - {$ctx:blocking} - - ``` - - Ensure that the following Axis2 configurations are added and enabled in the `\conf\axis2\axis2.xml` file. - - ```xml - - ... - - ``` - - > **Note**: If you want to perform blocking invocations, ensure that the above builder and formatter are added and enabled in the `\conf\axis2\axis2_blocking_client.xml` file. - ---- - -### Items - -??? note "batchGetItem" - The batchGetItem operation returns the attributes of one or more items from one or more tables. The requested items are identified by the primary key. A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. - - The batchGetItem operation returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get. For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so that the16 MB limit is not exceeded) and an appropriate UnprocessedKeys value, so that you can get the next page of results. If required, your application can include its own logic to assemble the pages of results into one dataset. - - If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, batchGetItem will throw an exception. If at least one of the items is successfully processed, batchGetItem completes successfully while returning the keys of the unread items in UnprocessedKeys. By default, batchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables. To minimize response latency, batchGetItem retrieves items in parallel. See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html) for more information. - - - - - - - - - - - - - - - - -
    Parameter Name | Description | Required
    requestItems | A map of one or more table names and, for each table, the corresponding primary keys for the items to retrieve. Each table name can be invoked only once. Each element in the map consists of the following: -
      -
    • Keys - Required - An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
    • -
    • AttributesToGet - Optional - One or more attributes to be retrieved from the table. By default, all attributes are returned. If a specified attribute is not found, it does not appear in the result. Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on the item size, not on the amount of data that is returned to an application.
    • -
    • ConsistentRead - Optional - If true, a strongly consistent read is used; if false (the default), an eventually consistent read is used.
    • -
    • ExpressionAttributeNames - Optional - One or more substitution tokens for attribute names in the ProjectionExpression property.
    • -
    • ProjectionExpression - Optional - A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas. If attribute names are not specified, then all attributes are returned. If any of the specified attributes are not found, they do not appear in the result.
    • -
    -
    Yes
    returnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes the consumed capacity for indexes. If set to NONE (the default), the consumed capacity is not included in the response. | Optional
    - - **Sample configuration** - - ```xml - - {$ctx:requestItems} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIAxxxxxxxxxxxx", - "secretAccessKey":"id4qxxxxxxxx", - "region":"us-east-1", - "blocking":"false", - "requestItems": { - "Thread": { - "Keys": [ - { - "ForumName": { - "S": "Amazon Dynamo" - }, - "Subject": { - "S": "How do I update multiple items?" - } - } - ], - "AttributesToGet": [ - "Tags", - "Message" - ] - } - }, - "returnConsumedCapacity":"TOTAL" - } - ``` - - **Sample response** - - ```json - { - "Responses":{ - "Test4782":[ - { - "Message":{ - "S":"I want to update multiple items in a single call. What's the best way to do that?" - }, - "Tags":{ - "SS":[ - "HelpMe", - "Multiple Items", - "Update" - ] - } - } - ] - }, - "UnprocessedKeys":{ - } - - } - ``` - -??? note "batchWriteItem" - The batchWriteItem operation puts or deletes multiple items in one or more tables. A single call to batchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB. This operation cannot update items. - - The individual PutRequest and DeleteRequest operations specified in batchWriteItem are atomic, but batchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response property. You can investigate and optionally resend the requests. Typically, you would call batchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new batchWriteItem request with those unprocessed items until all items have been processed. - - Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, batchWriteItem will throw an exception. With batchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. To improve performance with these large-scale operations, batchWriteItem does not behave in the same way as individual PutRequest and DeleteRequest calls would. For example, you cannot specify conditions on individual put and delete requests, and batchWriteItem does not return deleted items in the response. - - If one or more of the following is true, DynamoDB rejects the entire batch write operation: - - * One or more tables specified in the batchWriteItem request does not exist. - * Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema. - * You try to perform multiple operations on the same item in the same batchWriteItem request. For example, you cannot put and delete the same item in the same batchWriteItem request. - * There are more than 25 requests in the batch. - * The total request size exceeds 16 MB. - * Any individual item in a batch exceeds 400 KB. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html) for more information. - - - - - - - - - - - - - - - - -
**Parameters**
requestItems (Required): A map of one or more table names and, for each table, a list of write operations (PutRequest or DeleteRequest) to perform on that table. Each element in the map consists of one of the following:
• PutRequest - Performs a put operation on the specified item. The item to be put is identified by an Item subelement: a map of attribute name/value pairs. All of the table's primary key attributes must be specified, and their data types must match those of the table's key schema.
• DeleteRequest - Performs a delete operation on the specified item. The item to be deleted is identified by a Key subelement: a map of primary key attribute values that uniquely identify the item. For each primary key, you must provide all of the key attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes the consumed capacity for indexes. If set to NONE (the default), the consumed capacity is not included in the response.
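As described above, batchWriteItem is typically called in a loop that resubmits whatever comes back in UnprocessedItems. The following is a minimal sketch of that loop in Python with boto3, using hypothetical table and attribute names, only to illustrate the pattern:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

pending = {
    "Thread": [  # hypothetical table: five put requests in one batch
        {"PutRequest": {"Item": {
            "ForumName": {"S": "Amazon Dynamo"},
            "Subject": {"S": "Thread %d" % i},
        }}}
        for i in range(5)
    ]
}
while pending:
    response = client.batch_write_item(RequestItems=pending)
    # Resubmit only the writes DynamoDB could not process this round.
    pending = response.get("UnprocessedItems", {})
```

Without backoff, repeated immediate retries can keep hitting the same throughput limit, so a delay between iterations is advisable.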
    - - **Sample configuration** - - ```xml - - {$ctx:requestItems} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIAxxxxxxxxxxxx", - "secretAccessKey":"id4qxxxxxxxx", - "region":"us-east-1", - "blocking":"false", - "requestItems": { - "Thread": { - "Keys": [ - { - "ForumName": { - "S": "Amazon Dynamo" - }, - "Subject": { - "S": "How do I update multiple items?" - } - } - ], - "AttributesToGet": [ - "Tags", - "Message" - ] - } - }, - "returnConsumedCapacity":"TOTAL" - } - ``` - - **Sample response** - - ```json - { - "Responses":{ - "Test4782":[ - { - "Message":{ - "S":"I want to update multiple items in a single call. What's the best way to do that?" - }, - "Tags":{ - "SS":[ - "HelpMe", - "Multiple Items", - "Update" - ] - } - } - ] - }, - "UnprocessedKeys":{ - } - - } - ``` - -??? note "deleteItem" - The deleteItem operation deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value. In addition to deleting an item, you can also return the item's attribute values in the same operation, using the returnValues property. Unless you specify conditions, deleteItem is an idempotent operation, and running it multiple times on the same item or attribute does not result in an error response. Conditional deletes are useful for only deleting items if specific conditions are met. If those conditions are met, DynamoDB performs the delete operation. Otherwise, the item is not deleted. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
expected (Optional): A map of attribute/condition pairs. This is the conditional block for the deleteItem operation. Each element of this property consists of an attribute name, a comparison operator, and one or more values. DynamoDB uses the comparison operator to compare the attribute with the value(s) you supply. For each element of this property, the result of the evaluation is either true or false. (For example, {"ForumName":{"ComparisonOperator": "EQ", "AttributeValueList": [ {"S":"Amazon DynamoDB" }]}}.)
tableName (Required): The name of the table from which to delete the item. (Minimum length of 3. Maximum length of 255.)
returnValues (Optional): Use returnValues if you want to get the item attributes as they appeared before they were deleted. For deleteItem, the valid values are:
• NONE - If returnValues is not specified or if its value is NONE (the default), nothing is returned.
• ALL_OLD - The content of the old item is returned.
Although the API defines the values NONE | ALL_OLD | UPDATED_OLD | ALL_NEW | UPDATED_NEW, deleteItem recognizes only NONE and ALL_OLD.
returnItemCollectionMetrics (Optional): Determines whether item collection metrics are returned. If set to SIZE, statistics about the item collections, if any, that were modified during the operation are returned in the response. If set to NONE (the default), no statistics are returned.
conditionalOperator (Optional): A logical operator to apply to the conditions in the expected map:
• AND - If all of the conditions evaluate to true, the entire map evaluates to true (default).
• OR - If at least one of the conditions evaluates to true, the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
conditionExpression (Optional): A condition that must be satisfied in order for a conditional deleteItem operation to succeed. An expression can contain any of the following:
• Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size. These function names are case-sensitive.
• Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
• Logical operators: AND | OR | NOT
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
expressionAttributeValues (Optional): One or more values that can be substituted in an expression. (For example, { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }.)
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
key (Required): A map of attribute names to AttributeValue objects, representing the primary key of the item to delete. For the primary key, you must provide all of the attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
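The conditional-delete behavior described above is easiest to see with a concrete guard. The sketch below, written in Python with boto3 to illustrate the underlying API call, deletes a thread only if it has no Replies attribute (a hypothetical attribute) and prints the old item on success:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

try:
    response = client.delete_item(
        TableName="Thread",  # hypothetical table
        Key={"ForumName": {"S": "Amazon DynamoDB"},
             "Subject": {"S": "How do I update multiple items?"}},
        # Hypothetical guard: only delete threads that have no replies.
        ConditionExpression="attribute_not_exists(Replies)",
        ReturnValues="ALL_OLD",
    )
    print(response.get("Attributes", {}))  # the item as it was before deletion
except client.exceptions.ConditionalCheckFailedException:
    print("Condition not met; the item was not deleted.")
```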
    - - **Sample configuration** - - ```xml - - {$ctx:expected} - {$ctx:tableName} - {$ctx:returnValues} - {$ctx:returnItemCollectionMetrics} - {$ctx:conditionalOperator} - {$ctx:conditionExpression} - {$ctx:expressionAttributeNames} - {$ctx:expressionAttributeValues} - {$ctx:returnConsumedCapacity} - {$ctx:key} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "key": { - "ForumName": { - "S": "Amazon DynamoDB" - }, - "Subject": { - "S": "How do I update multiple items?" - } - }, - "conditionExpression":"attribute_not_exists", - "returnValues": "ALL_OLD", - "returnConsumedCapacity":"TOTAL" - } - ``` - - **Sample response** - - ```json - { - "Attributes":{ - "LastPostedBy":{ - "S":"fred@example.com" - }, - "ForumName":{ - "S":"Amazon DynamoDB" - }, - "LastPostDateTime":{ - "S":"201303201023" - }, - "Tags":{ - "SS":[ - "Update", - "Multiple Items", - "HelpMe" - ] - }, - "Subject":{ - "S":"How do I update multiple items?" - }, - "Message":{ - "S":"I want to update multiple items in a single call. What's the best way to do that?" - } - } - - } - ``` - -??? note "getItem" - The getItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, getItem does not return any data. This operation provides an eventually consistent read by default. If your application requires a strongly consistent read, set consistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
attributesToGet (Optional): The names of one or more attributes to retrieve. If no attribute names are specified, all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. Note that attributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application. (For example, ["ForumName", "Subject"].)
tableName (Required): The name of the table containing the requested item. (Minimum length of 3. Maximum length of 255.)
consistentRead (Optional): Determines the read consistency model. If set to true, the operation uses strongly consistent reads. Otherwise, the operation uses eventually consistent reads.
key (Required): A map of attribute names to AttributeValue objects, representing the primary key of the item to retrieve. For the primary key, you must provide all of the attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
projectionExpression (Optional): A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas. If attribute names are not specified, then all attributes are returned. If any of the specified attributes are not found, they do not appear in the result.
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity data for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
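The consistentRead, projectionExpression, and expressionAttributeNames parameters combine as in the sketch below, a boto3-based Python illustration of the same call that the sample request makes; the names mirror the samples and are hypothetical:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

response = client.get_item(
    TableName="Thread",  # hypothetical table
    Key={"ForumName": {"S": "Amazon DynamoDB"},
         "Subject": {"S": "How do I update multiple items?"}},
    ConsistentRead=True,  # strongly consistent: reflects the latest completed write
    ProjectionExpression="#LP, Message, Tags",
    ExpressionAttributeNames={"#LP": "LastPostDateTime"},  # substitution token
)
item = response.get("Item")  # absent when no matching item exists
```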
    - - **Sample configuration** - - ```xml - - {$ctx:attributesToGet} - {$ctx:tableName} - {$ctx:consistentRead} - {$ctx:key} - {$ctx:expressionAttributeNames} - {$ctx:projectionExpression} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "key": { - "ForumName": { - "S": "Amazon DynamoDB" - }, - "Subject": { - "S": "How do I update multiple items?" - } - }, - "projectionExpression":"#LP, Message, Tags", - "consistentRead": true, - "returnConsumedCapacity": "TOTAL", - "expressionAttributeNames":{"#LP":"LastPostDateTime"} - } - ``` - - **Sample response** - - ```json - { - "ConsumedCapacity":{ - "CapacityUnits":1, - "TableName":"Thread" - }, - "Item":{ - "Tags":{ - "SS":[ - "Update", - "Multiple Items", - "HelpMe" - ] - }, - "LastPostDateTime":{ - "S":"201303190436" - }, - "Message":{ - "S":"I want to update multiple items in a single call. What's the best way to do that?" - } - } - } - ``` - -??? note "putItem" - The putItem operation creates a new item, or replaces an old item with a new item. If an item already exists in the specified table with the same primary key, the new item completely replaces the existing item. You can perform a conditional put (insert a new item if one with the specified primary key does not exist), or replace an existing item if it has certain attribute values. - - In addition to creating an item, you can also return the attribute values of the item in the same operation using the returnValues property. When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and binary type attributes must have a length greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a validation exception. You can request that the putItem operation should return either a copy of the old item (before the update) or a copy of the new item (after the update). - - To prevent a new item from replacing an existing item, use a conditional expression with the putItem operation. The conditional expression should contain the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists. For more information about using this API, see Working with Items. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
expected (Optional): A map of attribute/condition pairs. This is the conditional block for the putItem operation. Each element of this property consists of an attribute name, a comparison operator, and one or more values. DynamoDB uses the comparison operator to compare the attribute with the value(s) you supply. For each element of this property, the result of the evaluation is either true or false. (For example, {"ForumName":{"ComparisonOperator": "EQ", "AttributeValueList": [ {"S":"Amazon DynamoDB" }]}}.)
tableName (Required): The name of the table in which to put the item. (Minimum length of 3. Maximum length of 255.)
item (Required): A map of attribute name/value pairs, one for each attribute. Only the primary key attributes are required, but you can optionally provide other attribute name-value pairs for the item. You must provide all of the attributes for the primary key. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute. If you specify any attributes that are part of an index key, the data types for those attributes must match those of the schema in the table's attribute definition. Each element in the item map is an AttributeValue object.
returnValues (Optional): Use returnValues if you want to get the item attributes as they appeared before they were updated with the putItem request. The possible values are:
• NONE - If returnValues is not specified or if its value is NONE (the default), nothing is returned.
• ALL_OLD - If putItem overwrote an attribute name-value pair, the content of the old item is returned.
Although the API defines the values NONE | ALL_OLD | UPDATED_OLD | ALL_NEW | UPDATED_NEW, putItem recognizes only NONE and ALL_OLD.
returnItemCollectionMetrics (Optional): Determines whether item collection metrics are returned. If set to SIZE, statistics about the item collections, if any, that were modified during the operation are returned in the response. If set to NONE (the default), no statistics are returned.
conditionalOperator (Optional): A logical operator to apply to the conditions in the expected map:
• AND - If all of the conditions evaluate to true, the entire map evaluates to true (default).
• OR - If at least one of the conditions evaluates to true, the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
key (Required): A map of attribute names to AttributeValue objects, representing the primary key of the item. For the primary key, you must provide all of the attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
conditionExpression (Optional): A condition that must be satisfied in order for a conditional putItem operation to succeed. An expression can contain any of the following:
• Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size. These function names are case-sensitive.
• Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
• Logical operators: AND | OR | NOT
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
expressionAttributeValues (Optional): One or more values that can be substituted in an expression. (For example, { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }.)
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
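The attribute_not_exists guard described above is the standard way to make putItem insert-only. Below is a minimal sketch in Python with boto3 (hypothetical table and attributes), illustrating the call the connector ultimately issues:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

try:
    client.put_item(
        TableName="Thread",  # hypothetical table
        Item={"ForumName": {"S": "Amazon Dynamo"},
              "Subject": {"S": "How do I update multiple items?"},
              "Message": {"S": "First post!"}},
        # ForumName is the partition key here, so this succeeds only
        # when no item with the same primary key already exists.
        ConditionExpression="attribute_not_exists(ForumName)",
    )
except client.exceptions.ConditionalCheckFailedException:
    print("An item with this key already exists; nothing was overwritten.")
```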
    - - **Sample configuration** - - ```xml - - {$ctx:expected} - {$ctx:tableName} - {$ctx:item} - {$ctx:returnValues} - {$ctx:returnItemCollectionMetrics} - {$ctx:conditionalOperator} - {$ctx:conditionExpression} - {$ctx:expressionAttributeNames} - {$ctx:expressionAttributeValues} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "item": { - "LastPostDateTime": { - "S": "201303190422" - }, - "Tags": { - "SS": ["Update","Multiple Items","HelpMe"] - }, - "ForumName": { - "S": "Amazon Dynamo" - }, - "Message": { - "S": "I want to update multiple items in a single call. What's the best way to do that?" - }, - "Subject": { - "S": "How do I update multiple items?" - }, - "LastPostedBy": { - "S": "fred@example.com" - } - }, - "returnValues":"ALL_OLD", - "returnConsumedCapacity":"TOTAL", - "returnItemCollectionMetrics":"SIZE", - "expected":{ - "Message":{ - "ComparisonOperator": "EQ", - "AttributeValueList": [ {"S":"I want to update multiple item." }] - } - - } - } - ``` - - **Sample response** - - ```json - { - - } - ``` - -??? note "query" - The query operation uses the primary key of a table or a secondary index, to directly access items from that table or index. You can use the KeyConditionExpression property to provide a specific value for the partition key, and the query operation returns all of the items from the table or index with that partition key value. Optionally, you can narrow the scope of the query operation by specifying a sort key value and a comparison operator in KeyConditionExpression. You can use the ScanIndexForward property to get results in forward or reverse order, by sort key. - - Queries that do not return results consume the minimum read capacity units according to the type of read. If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with a LastEvaluatedKey to continue the query in a subsequent operation. Unlike a scan operation, a query operation never returns an empty result set and a LastEvaluatedKey . The LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used limit. - - You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set consistentRead to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify consistentRead when querying a global secondary index. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
limit (Optional): The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a LastEvaluatedKey to apply in a subsequent operation, so that you can pick up from where you left off. Also, if the processed data set size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a LastEvaluatedKey to apply in a subsequent operation to continue the operation. For more information, see Query and Scan in the Amazon DynamoDB Developer Guide.
exclusiveStartKey (Optional): The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation. The data type for exclusiveStartKey must be String, Number, or Binary. No set data types are allowed. See exclusiveStartKey in the Amazon DynamoDB API documentation for more information.
keyConditions (Optional): The selection criteria for the query. (For example, "SongTitle": {ComparisonOperator: "BETWEEN", AttributeValueList: ["A", "M"]}.) See keyConditions in the Amazon DynamoDB Developer Guide for more information.
attributesToGet (Optional): The names of one or more attributes to retrieve. If no attribute names are specified, all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. Note that attributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application. (For example, ["ForumName", "Subject"].) You cannot use both attributesToGet and select (see below) together in a query request, unless the value for select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying attributesToGet without any value for select.) If you are querying a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. If you are querying a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table. (Minimum of 1 item in the list.) See attributesToGet in the Amazon DynamoDB Developer Guide.
tableName (Required): The name of the table containing the requested items. (Minimum length of 3. Maximum length of 255.)
select (Optional): The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index. Set to SPECIFIC_ATTRIBUTES if you are also using attributesToGet (see above). Possible values: ALL_ATTRIBUTES | ALL_PROJECTED_ATTRIBUTES | SPECIFIC_ATTRIBUTES | COUNT.
scanIndexForward (Optional): Specifies ascending (true) or descending (false) traversal of the index. DynamoDB returns results reflecting the requested order, determined by the range key. If the data type is Number, the results are returned in numeric order. For String, the results are returned in the order of ASCII character code values. For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values. Defaults to ascending order.
queryFilter (Optional): Evaluates the query results and returns only the desired values. If you specify more than one condition in the queryFilter map, by default all of the conditions must evaluate to true. (You can use the conditionalOperator property described below to OR the conditions instead. If you do this, at least one of the conditions must evaluate to true, rather than all of them.) Each queryFilter element consists of an attribute name to compare, along with the following:
• AttributeValueList - One or more values to evaluate against the supplied attribute. The number of values in the list depends on the ComparisonOperator that is used. For the type Number, value comparisons are numeric. String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a is greater than A, and aa is greater than B. For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values, for example when evaluating query expressions.
• ComparisonOperator - A comparator for evaluating attributes, for example: equals, greater than, less than, and so on. The following comparison operators are available: EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN. For complete descriptions of all comparison operators, see conditions. For example, "LastPostDateTime": {ComparisonOperator: "GT", AttributeValueList: [ 201303190421 ]}.
consistentRead (Optional): Determines the read consistency model. If set to true, the operation uses strongly consistent reads. Otherwise, eventually consistent reads are used. Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with consistentRead set to true, you will receive an error message.
indexName (Optional): The name of an index to query. This can be any local secondary index or global secondary index on the table. (Minimum length of 3. Maximum length of 255.)
conditionalOperator (Optional): A logical operator to apply to the conditions in the queryFilter map:
• AND - If all of the conditions evaluate to true, the entire map evaluates to true (default).
• OR - If at least one of the conditions evaluates to true, the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
expressionAttributeValues (Optional): One or more values that can be substituted in an expression. (For example, { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }.)
filterExpression (Optional): A string that contains conditions that DynamoDB applies after the query operation, but before the data is returned. Items that do not satisfy the filterExpression criteria are not returned. (For example, "LastPostDateTime > :LP".) For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.
keyConditionExpression (Optional): The condition that specifies the key value(s) for items to be retrieved by the query operation.
projectionExpression (Optional): A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas. If attribute names are not specified, then all attributes are returned. If any of the specified attributes are not found, those attributes do not appear in the result.
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
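Because results stop at 1 MB (or at limit), a complete query is usually a pagination loop driven by LastEvaluatedKey and exclusiveStartKey, as described above. The following is a minimal sketch in Python with boto3, with hypothetical names, to illustrate the pattern:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

kwargs = {
    "TableName": "Thread",  # hypothetical table
    "KeyConditionExpression": "ForumName = :v1",
    "ExpressionAttributeValues": {":v1": {"S": "Amazon Dynamo"}},
    "Limit": 25,
}
items = []
while True:
    page = client.query(**kwargs)
    items.extend(page["Items"])
    last_key = page.get("LastEvaluatedKey")
    if not last_key:  # no further pages
        break
    kwargs["ExclusiveStartKey"] = last_key  # resume where the last page stopped
```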
    - - **Sample configuration** - - ```xml - - {$ctx:limit} - {$ctx:exclusiveStartKey} - {$ctx:keyConditions} - {$ctx:attributesToGet} - {$ctx:tableName} - - {$ctx:scanIndexForward} - {$ctx:queryFilter} - {$ctx:consistentRead} - {$ctx:indexName} - {$ctx:conditionalOperator} - {$ctx:expressionAttributeNames} - {$ctx:expressionAttributeValues} - {$ctx:filterExpression} - {$ctx:keyConditionExpression} - {$ctx:projectionExpression} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "indexName": "LastPostIndex", - "limit": 3, - "consistentRead": true, - "projectionExpression": "ForumName, #LP", - "keyConditionExpression": "ForumName = :v1 AND #LP BETWEEN :v2a AND :v2b", - "expressionAttributeNames":{"#LP":"LastPostDateTime"}, - "expressionAttributeValues": { - ":v1": {"S": "Amazon Dynamo"}, - ":v2a": {"S": "201303190421"}, - ":v2b": {"S": "201303190425"} - }, - "returnConsumedCapacity": "TOTAL" - } - ``` - - **Sample response** - - ```json - { - "Count":2, - "ScannedCount":2 - } - ``` - -??? note "scan" - The scan operation returns one or more items and item attributes by accessing every item in the table. To have DynamoDB return fewer items, you can provide a scanFilter. - - If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user with a LastEvaluatedKey to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria. The result set is eventually consistent. - - By default, scan operations proceed sequentially. For faster performance on large tables, applications can request a parallel scan by specifying the segment and totalSegments properties. For more information, see [Parallel Scan](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#QueryAndScanParallelScan). - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
limit (Optional): The maximum number of items to evaluate (not necessarily the number of matching items). If DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, and a LastEvaluatedKey to apply in a subsequent operation, so that you can pick up from where you left off. Also, if the processed data set size exceeds 1 MB before DynamoDB reaches this limit, it stops the operation and returns the matching values up to the limit, and a LastEvaluatedKey to apply in a subsequent operation to continue the operation. For more information, see Query and Scan in the Amazon DynamoDB Developer Guide.
totalSegments (Optional): For a parallel scan request, totalSegments represents the total number of segments into which the scan operation will be divided. The value of totalSegments corresponds to the number of application workers that will perform the parallel scan. For example, if you want to scan a table using four application threads, you would specify a totalSegments value of 4. The value for totalSegments must be greater than or equal to 1, and less than or equal to 1000000. If you specify a totalSegments value of 1, the scan will be sequential rather than parallel. If you specify totalSegments, you must also specify segment.
exclusiveStartKey (Optional): The primary key of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedKey in the previous operation. The data type for exclusiveStartKey must be String, Number, or Binary. No set data types are allowed. See exclusiveStartKey in the Amazon DynamoDB API documentation for more information.
attributesToGet (Optional): The names of one or more attributes to retrieve. If no attribute names are specified, all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. Note that attributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application. (For example, ["ForumName", "Subject"].) You cannot use both attributesToGet and select (see below) together in a scan request, unless the value for select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying attributesToGet without any value for select.) If you are scanning a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB fetches each of these attributes from the parent table. If you are scanning a global secondary index, you can only request attributes that are projected into the index. Global secondary index scans cannot fetch attributes from the parent table. (Minimum of 1 item in the list.) See attributesToGet in the Amazon DynamoDB Developer Guide.
select (Optional): The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index. Set to SPECIFIC_ATTRIBUTES if you are also using attributesToGet (see above). Possible values: ALL_ATTRIBUTES | ALL_PROJECTED_ATTRIBUTES | SPECIFIC_ATTRIBUTES | COUNT.
segment (Optional): For a parallel scan request, segment identifies an individual segment to be scanned by an application worker. Segment IDs are zero-based, so the first segment is always 0. For example, if you want to use four application threads to scan a table, the first thread specifies a segment value of 0, the second thread specifies 1, and so on. The value for segment must be greater than or equal to 0, and less than the value provided for totalSegments. If you specify segment, you must also specify totalSegments.
tableName (Required): The name of the table containing the requested items. (Minimum length of 3. Maximum length of 255.)
scanFilter (Optional): Evaluates the scan results and returns only the desired values. If you specify more than one condition in the scanFilter map, by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the conditionalOperator property to OR the conditions instead. If you do this, at least one of the conditions must evaluate to true, rather than all of them.)
conditionalOperator (Optional): A logical operator to apply to the conditions in the scanFilter map:
• AND - If all of the conditions evaluate to true, the entire map evaluates to true (default).
• OR - If at least one of the conditions evaluates to true, the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
consistentRead (Optional): Determines the read consistency model. If set to true, the operation uses strongly consistent reads. Otherwise, eventually consistent reads are used. Strongly consistent reads are not supported on global secondary indexes. If you scan a global secondary index with consistentRead set to true, you will receive an error message.
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
expressionAttributeValues (Optional): One or more values that can be substituted in an expression. (For example, { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }.)
filterExpression (Optional): A string that contains conditions that DynamoDB applies after the scan operation, but before the data is returned. Items that do not satisfy the filterExpression criteria are not returned. (For example, "LastPostDateTime > :LP".) For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.
indexName (Optional): The name of an index to scan. This can be any local secondary index or global secondary index on the table. (Minimum length of 3. Maximum length of 255.)
keyConditionExpression (Optional): The condition that specifies the key value(s) for items to be retrieved by the operation.
projectionExpression (Optional): A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas. If attribute names are not specified, then all attributes are returned. If any of the specified attributes are not found, those attributes do not appear in the result.
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
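The segment and totalSegments parameters enable the parallel scan described above. Below is a sketch in Python with boto3 (hypothetical table, four worker threads) showing how the two parameters cooperate; each worker also paginates its own segment:

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

client = boto3.client("dynamodb")  # boto3 clients are safe to share across threads

TOTAL_SEGMENTS = 4  # one segment per worker thread

def scan_segment(segment):
    items = []
    kwargs = {"TableName": "Thread",  # hypothetical table
              "Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
    while True:
        page = client.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    all_items = [item for seg in pool.map(scan_segment, range(TOTAL_SEGMENTS))
                 for item in seg]
```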
    - - **Sample configuration** - - ```xml - - {$ctx:limit} - {$ctx:totalSegments} - {$ctx:exclusiveStartKey} - {$ctx:attributesToGet} - - {$ctx:segment} - {$ctx:tableName} - {$ctx:scanFilter} - {$ctx:conditionalOperator} - {$ctx:consistentRead} - {$ctx:expressionAttributeNames} - {$ctx:expressionAttributeValues} - {$ctx:filterExpression} - {$ctx:indexName} - {$ctx:projectionExpression} - {$ctx:returnConsumedCapacity} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "expressionAttributeNames":{"#LP":"LastPostDateTime"}, - "filterExpression": "#LP = :val", - "expressionAttributeValues": {":val": {"S": "201303190422"}}, - "returnConsumedCapacity": "TOTAL" - } - ``` - - **Sample response** - - ```json - { - "ConsumedCapacity":{ - "CapacityUnits":0.5, - "TableName":"Reply" - }, - "Count":4, - "Items":[ - { - "PostedBy":{ - "S":"joe@example.com" - }, - "ReplyDateTime":{ - "S":"20130320115336" - }, - "Id":{ - "S":"Amazon DynamoDB#How do I update multiple items?" - }, - "Message":{ - "S":"Have you looked at BatchWriteItem?" - } - }, - { - "PostedBy":{ - "S":"fred@example.com" - }, - "ReplyDateTime":{ - "S":"20130320115342" - }, - "Id":{ - "S":"Amazon DynamoDB#How do I update multiple items?" - }, - "Message":{ - "S":"No, I didn't know about that. Where can I find more information?" - } - }, - { - "PostedBy":{ - "S":"joe@example.com" - }, - "ReplyDateTime":{ - "S":"20130320115347" - }, - "Id":{ - "S":"Amazon DynamoDB#How do I update multiple items?" - }, - "Message":{ - "S":"BatchWriteItem is documented in the Amazon DynamoDB API Reference." - } - }, - { - "PostedBy":{ - "S":"fred@example.com" - }, - "ReplyDateTime":{ - "S":"20130320115352" - }, - "Id":{ - "S":"Amazon DynamoDB#How do I update multiple items?" - }, - "Message":{ - "S":"OK, I'll take a look at that. Thanks!" - } - } - ], - "ScannedCount":4 - } - ``` - -??? note "updateItem" - The updateItem operation edits an existing item's attributes, or inserts a new item if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). In addition to updating an item, you can also return the item's attribute values in the same operation using the returnValues property. - - See the [related API documentation](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
expected (Optional): A map of attribute/condition pairs. This is the conditional block for the updateItem operation. Each element of expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB uses the comparison operator to compare the attribute with the value(s) you supplied. For each expected element, the result of the evaluation is either true or false. If you specify more than one element in the expected map, by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the conditionalOperator property to OR the conditions instead. If you do this, at least one of the conditions must evaluate to true, rather than all of them.) If the expected map evaluates to true, the conditional operation succeeds. If it evaluates to false, it fails.
tableName (Required): The name of the table containing the item to update. (Minimum length of 3. Maximum length of 255.)
returnValues (Optional): Use returnValues if you want to get the item attributes as they appeared either before or after they were updated with the updateItem request. The possible values are:
• NONE - If returnValues is not specified or if its value is NONE (the default), nothing is returned.
• ALL_OLD - Returns all of the attributes of the item, as they appeared before the update.
• UPDATED_OLD - Returns only the updated attributes, as they appeared before the update.
• ALL_NEW - Returns all of the attributes of the item, as they appear after the update.
• UPDATED_NEW - Returns only the updated attributes, as they appear after the update.
returnItemCollectionMetrics (Optional): Determines whether item collection metrics are returned. If set to SIZE, statistics about the item collections, if any, that were modified during the operation are returned in the response. If set to NONE (the default), no statistics are returned.
conditionalOperator (Optional): A logical operator to apply to the conditions in the expected map:
• AND - If all of the conditions evaluate to true, the entire map evaluates to true (default).
• OR - If at least one of the conditions evaluates to true, the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
attributeUpdates (Optional): The names of attributes to be modified, the action to perform on each, and the new value for each. If you are updating an attribute that is an index key attribute for any indexes on that table, the attribute type must match the index key type defined in the AttributesDefinition of the table description. You can use updateItem to update any non-key attributes. Attribute values cannot be null. String and binary type attributes must have lengths greater than zero. Set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException.
key (Required): A map of attribute names to AttributeValue objects, representing the primary key of the item to update. For the primary key, you must provide all of the attributes. For example, with a hash type primary key, you only need to specify the hash attribute. For a hash-and-range type primary key, you must specify both the hash attribute and the range attribute.
conditionExpression (Optional): A condition that must be satisfied in order for a conditional updateItem operation to succeed. An expression can contain any of the following:
• Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size. These function names are case-sensitive.
• Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
• Logical operators: AND | OR | NOT
expressionAttributeNames (Optional): One or more substitution tokens for attribute names in an expression. (For example, {"#LP":"LastPostDateTime"}.)
expressionAttributeValues (Optional): One or more values that can be substituted in an expression. (For example, { ":avail":{"S":"Available"}, ":back":{"S":"Backordered"}, ":disc":{"S":"Discontinued"} }.)
returnConsumedCapacity (Optional): Determines the level of detail about provisioned throughput consumption that is returned in the response. If set to TOTAL, the response includes the consumed capacity for tables and indexes. If set to INDEXES, the response includes consumed capacity for indexes. If set to NONE (the default), consumed capacity is not included in the response.
updateExpression (Optional): An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them:
• SET - Adds one or more attributes and values to an item. If any of these attributes already exist, they are replaced by the new values. You can also use SET to add or subtract from an attribute that is of type Number.
• REMOVE - Removes one or more attributes from an item.
• ADD - Adds the specified value to the item, if the attribute does not already exist.
• DELETE - Deletes an element from a set.
For more information on update expressions, see Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.
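An update expression combined with a condition expression gives the optimistic-concurrency pattern that the first sample request below also uses. The following is a boto3-based Python sketch with hypothetical names, purely to illustrate the call shape:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

response = client.update_item(
    TableName="Thread",  # hypothetical table
    Key={"ForumName": {"S": "Amazon Dynamo"},
         "Subject": {"S": "How do I update multiple items?"}},
    # SET replaces (or creates) LastPostedBy; the condition guards
    # against clobbering a concurrent update by someone else.
    UpdateExpression="SET LastPostedBy = :new",
    ConditionExpression="LastPostedBy = :old",
    ExpressionAttributeValues={":new": {"S": "alice@example.com"},
                               ":old": {"S": "fred@example.com"}},
    ReturnValues="ALL_NEW",  # return the item as it appears after the update
)
print(response["Attributes"])
```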
    - - **Sample configuration** - - ```xml - - {$ctx:expected} - {$ctx:tableName} - {$ctx:returnValues} - {$ctx:returnItemCollectionMetrics} - {$ctx:conditionalOperator} - {$ctx:attributeUpdates} - {$ctx:key} - {$ctx:conditionExpression} - {$ctx:expressionAttributeNames} - {$ctx:expressionAttributeValues} - {$ctx:returnConsumedCapacity} - {$ctx:updateExpression} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "key": { - "ForumName": { - "S": "Amazon Dynamo" - }, - "Subject": { - "S": "How do I update multiple items?" - } - }, - "updateExpression": "set LastPostedBy = :val1", - "conditionExpression": "LastPostedBy = :val2", - "expressionAttributeValues": { - ":val1": {"S": "alice@example.com"}, - ":val2": {"S": "fred@example.com"} - }, - "returnValues": "ALL_NEW" - } - ``` - - ```json - "accessKeyId":"AKIxxxxxxxxxx", - "secretAccessKey":"id4xxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "Thread", - "key": { - "ForumName": { - "S": "Amazon Dynamo" - }, - "Subject": { - "S": "How do I update multiple items?" - } - }, - "expected":{ - "ForumName":{ - "ComparisonOperator":"EQ", - "AttributeValueList":[ - { - "S":"Amazon DynamoDB" - } - ] - } - }, - "attributeUpdates":{ - "Message":{ - "Action":"PUT", - "Value":{ - "S":"The new Message." - } - } - }, - "returnValues": "ALL_NEW" - ``` - - **Sample response** - - ```json - { - - } - ``` - -### Tables - -??? note "createTable" - The createTable operation creates a new table. Table names must be unique within each region. See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
**Parameters**
attributeDefinitions (Required): A list of attributes that describe the key schema for the table and indexes.
tableName (Required): The name of the table to create. (Minimum length of 3. Maximum length of 255.)
keySchema (Required): Specifies the attributes that make up the primary key for a table or an index. The attributes in keySchema must also be defined in attributeDefinitions. Each KeySchemaElement in the array is composed of:
• AttributeName - The name of the key attribute.
• KeyType - The role that the key attribute will assume. Possible values are HASH (partition key) and RANGE (sort key). Note: the partition key of an item is also known as its hash attribute, and the sort key of an item is also known as its range attribute. For a simple primary key (partition key), you must provide exactly one element with a KeyType of HASH. For a composite primary key (partition key and sort key), you must provide exactly two elements, in the following order: the first element must have a KeyType of HASH, and the second element must have a KeyType of RANGE.
localSecondaryIndexes (Optional): One or more local secondary indexes (the maximum is five) to be created on the table. Each index is scoped to a given partition key value. There is a 10 GB size limit per partition key value; otherwise, the size of a local secondary index is unconstrained. Each local secondary index in the array includes the following:
• IndexName - The name of the local secondary index. Should be unique for this table.
• KeySchema - Specifies the key schema for the local secondary index. The key schema should begin with the same partition key as the table.
• Projection - Specifies attributes that are copied (projected) from the table into the index. It is composed of:
    • ProjectionType - Possible values are as follows:
        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes is in NonKeyAttributes.
        • ALL - All of the table attributes are projected into the index.
    • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, should not exceed 20.
provisionedThroughput (Required): Represents the provisioned throughput settings for the specified table or index.
StreamSpecification (Optional): The settings for DynamoDB Streams on the table. These settings consist of:
• StreamEnabled - Indicates whether DynamoDB Streams is enabled (true) or disabled (false) on the table.
• StreamViewType - When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Possible values for StreamViewType are:
    • KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
    • NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
    • OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
    • NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
globalSecondaryIndexes (Optional): One or more global secondary indexes (the maximum is five) to be created on the table. Each global secondary index in the array includes the following:
• IndexName - The name of the global secondary index. Should be unique for this table.
• KeySchema - Specifies the key schema for the global secondary index.
• Projection - Specifies attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Each attribute specification is composed of:
    • ProjectionType - Possible values are as follows:
        • KEYS_ONLY - Only the index and primary keys are projected into the index.
        • INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes is in NonKeyAttributes.
        • ALL - All of the table attributes are projected into the index.
    • NonKeyAttributes - A list of one or more non-key attribute names that are projected into the secondary index. The total count of attributes provided in NonKeyAttributes, summed across all of the secondary indexes, should not exceed 20.
• ProvisionedThroughput - Specifies the provisioned throughput settings for the global secondary index.
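Table creation is asynchronous: as the sample response below shows, the new table comes back with TableStatus CREATING, and it cannot serve traffic until it is ACTIVE. The following is a boto3-based Python sketch (hypothetical table and schema) of creating a table and blocking until it is ready:

```python
import boto3

client = boto3.client("dynamodb")  # assumes AWS credentials are already configured

client.create_table(
    TableName="Thread",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},
        {"AttributeName": "Subject", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
# createTable returns immediately with TableStatus=CREATING;
# the built-in waiter polls describe_table until the status is ACTIVE.
client.get_waiter("table_exists").wait(TableName="Thread")
```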
    - - **Sample configuration** - - ```xml - - {$ctx:attributeDefinitions} - {$ctx:tableName} - {$ctx:keySchema} - {$ctx:localSecondaryIndexes} - {$ctx:provisionedThroughput} - {$ctx:StreamSpecification} - {$ctx:globalSecondaryIndexes} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIAxxxxxxxxxxxx", - "secretAccessKey":"id4qxxxxxxxx", - "region":"us-east-1", - "blocking":"false", - "attributeDefinitions": [ - { - "AttributeName": "ForumName", - "AttributeType": "S" - }, - { - "AttributeName": "Subject", - "AttributeType": "S" - }, - { - "AttributeName": "LastPostDateTime", - "AttributeType": "S" - } - ], - "tableName": "Thread", - "keySchema": [ - { - "AttributeName": "ForumName", - "KeyType": "HASH" - }, - { - "AttributeName": "Subject", - "KeyType": "RANGE" - } - ], - "localSecondaryIndexes": [ - { - "IndexName": "LastPostIndex", - "KeySchema": [ - { - "AttributeName": "ForumName", - "KeyType": "HASH" - }, - { - "AttributeName": "LastPostDateTime", - "KeyType": "RANGE" - } - ], - "Projection": { - "ProjectionType": "KEYS_ONLY" - } - } - ], - "provisionedThroughput": { - "ReadCapacityUnits": 5, - "WriteCapacityUnits": 5 - } - } - ``` - - **Sample response** - - ```json - { - "TableDescription":{ - "TableArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Thread", - "AttributeDefinitions":[ - { - "AttributeName":"ForumName", - "AttributeType":"S" - }, - { - "AttributeName":"LastPostDateTime", - "AttributeType":"S" - }, - { - "AttributeName":"Subject", - "AttributeType":"S" - } - ], - "CreationDateTime":1.36372808007E9, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"ForumName", - "KeyType":"HASH" - }, - { - "AttributeName":"Subject", - "KeyType":"RANGE" - } - ], - "LocalSecondaryIndexes":[ - { - "IndexArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Thread/index/LastPostIndex", - "IndexName":"LastPostIndex", - "IndexSizeBytes":0, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"ForumName", - "KeyType":"HASH" - }, - { - "AttributeName":"LastPostDateTime", - "KeyType":"RANGE" - } - ], - "Projection":{ - "ProjectionType":"KEYS_ONLY" - } - } - ], - "ProvisionedThroughput":{ - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - }, - "TableName":"Thread", - "TableSizeBytes":0, - "TableStatus":"CREATING" - } - } - ``` - -??? note "deleteTable" - The deleteTable operation deletes a table and all of its items. See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteTable.html) for more information. - - - - - - - - - - - -
**Parameters**
tableName (Required): The name of the table to delete. (Minimum length of 3. Maximum length of 255.)
    - - **Sample configuration** - - ```xml - - {$ctx:tableName} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIAxxxxxxxxxxxx", - "secretAccessKey":"id4qxxxxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "TestTable" - } - ``` - - **Sample response** - - ```json - { - "TableDescription":{ - "TableArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Reply", - "ItemCount":0, - "ProvisionedThroughput":{ - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - }, - "TableName":"Reply", - "TableSizeBytes":0, - "TableStatus":"DELETING" - } - } - ``` - -??? note "describeTable" - The describeTable operation retrieves information about a table, such as the current status of the table, when it was created, the primary key schema, and any indexes on the table. See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) for more information. - - - - - - - - - - - -
**Parameters**
tableName (Required): The name of the table for which information is to be retrieved. (Minimum length of 3. Maximum length of 255.)
    - - **Sample configuration** - - ```xml - - {$ctx:tableName} - - ``` - - **Sample request** - - ```json - { - "accessKeyId":"AKIAxxxxxxxxxxxx", - "secretAccessKey":"id4qxxxxxxxx", - "region":"us-east-1", - "blocking":"false", - "tableName": "TestTable" - } - ``` - - **Sample response** - - ```json - { - "Table":{ - "TableArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Thread", - "AttributeDefinitions":[ - { - "AttributeName":"ForumName", - "AttributeType":"S" - }, - { - "AttributeName":"LastPostDateTime", - "AttributeType":"S" - }, - { - "AttributeName":"Subject", - "AttributeType":"S" - } - ], - "CreationDateTime":1.363729002358E9, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"ForumName", - "KeyType":"HASH" - }, - { - "AttributeName":"Subject", - "KeyType":"RANGE" - } - ], - "LocalSecondaryIndexes":[ - { - "IndexArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Thread/index/LastPostIndex", - "IndexName":"LastPostIndex", - "IndexSizeBytes":0, - "ItemCount":0, - "KeySchema":[ - { - "AttributeName":"ForumName", - "KeyType":"HASH" - }, - { - "AttributeName":"LastPostDateTime", - "KeyType":"RANGE" - } - ], - "Projection":{ - "ProjectionType":"KEYS_ONLY" - } - } - ], - "ProvisionedThroughput":{ - "NumberOfDecreasesToday":0, - "ReadCapacityUnits":5, - "WriteCapacityUnits":5 - }, - "TableName":"Thread", - "TableSizeBytes":0, - "TableStatus":"ACTIVE" - } - } - ``` - -??? note "listTables" - The listTables operation retrieves the tables that you own in the current AWS region. The output from listTables is paginated, with each page returning a maximum of 100 table names. See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | exclusiveStartTableName | The first table name that the listTables operation evaluates. Use the value returned for `LastEvaluatedTableName` in the previous operation (the name of the last table in the current page of results) so that you can obtain the next page of results. | Optional |
    | limit | The maximum number of table names to retrieve. If this parameter is not specified, the limit is 100. | Optional |
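    Since each page returns at most 100 names, a complete listing requires feeding `LastEvaluatedTableName` from one response into `exclusiveStartTableName` of the next call. The following is a minimal sketch of that hand-off, assuming the previous listTables response is still the message payload:

    ```xml
    <!-- Sketch: carry the pagination cursor from the previous response into the next call -->
    <property name="exclusiveStartTableName" expression="json-eval($.LastEvaluatedTableName)"/>
    <amazondynamodb.listTables>
        <exclusiveStartTableName>{$ctx:exclusiveStartTableName}</exclusiveStartTableName>
        <limit>{$ctx:limit}</limit>
    </amazondynamodb.listTables>
    ```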
    **Sample configuration**

    ```xml
    <amazondynamodb.listTables>
        <exclusiveStartTableName>{$ctx:exclusiveStartTableName}</exclusiveStartTableName>
        <limit>{$ctx:limit}</limit>
    </amazondynamodb.listTables>
    ```

    **Sample request**

    ```json
    {
        "accessKeyId":"AKIAxxxxxxxxxxxx",
        "secretAccessKey":"id4qxxxxxxxx",
        "region":"us-east-1",
        "blocking":"false",
        "exclusiveStartTableName":"Music",
        "limit":4
    }
    ```

    **Sample response**

    ```json
    {
        "LastEvaluatedTableName":"Thread",
        "TableNames":[
            "Forum",
            "Reply",
            "Thread"
        ]
    }
    ```

??? note "updateTable"
    The updateTable operation updates provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table. You can only perform one of the following operations at a time:

    * Modify the provisioned throughput settings of the table.
    * Enable or disable streams on the table.
    * Remove a global secondary index from the table.
    * Create a new global secondary index on the table. Once the index begins backfilling, you can use updateTable to perform other operations.

    See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) for more information.
    | Parameter Name | Description | Required |
    |---|---|---|
    | tableName | The name of the table to be updated. (Should be of minimum length 3, and maximum length 255.) | Yes |
    | attributeDefinitions | A list of attributes that describe the key schema for the table and indexes. If you are adding a new global secondary index to the table, `AttributeDefinitions` should include the key element(s) of the new index. | Optional |
    | globalSecondaryIndexUpdates | An array of one or more global secondary indexes for the table. For each index in the array, you can request one of the following actions: `Create` - to add a new global secondary index to the table, `Update` - to modify the provisioned throughput settings of an existing global secondary index, or `Delete` - to remove a global secondary index from the table. | Optional |
    | StreamSpecification | Represents the DynamoDB Streams configuration for the table. | Optional |
    | provisionedThroughput | The new provisioned throughput setting for the specified table or index. | Optional |
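    For example, enabling a stream on a table is a matter of including a `StreamSpecification` value in the request payload. The payloadFactory sketch below builds such a request body; the JSON shape follows the AWS UpdateTable API and is shown here as an illustrative assumption:

    ```xml
    <!-- Sketch: construct an updateTable payload that turns on DynamoDB Streams -->
    <payloadFactory media-type="json">
        <format>
            {
                "tableName": "Thread",
                "StreamSpecification": {
                    "StreamEnabled": true,
                    "StreamViewType": "NEW_AND_OLD_IMAGES"
                }
            }
        </format>
        <args/>
    </payloadFactory>
    ```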
    **Sample configuration**

    ```xml
    <amazondynamodb.updateTable>
        <tableName>{$ctx:tableName}</tableName>
        <attributeDefinitions>{$ctx:attributeDefinitions}</attributeDefinitions>
        <globalSecondaryIndexUpdates>{$ctx:globalSecondaryIndexUpdates}</globalSecondaryIndexUpdates>
        <StreamSpecification>{$ctx:StreamSpecification}</StreamSpecification>
        <provisionedThroughput>{$ctx:provisionedThroughput}</provisionedThroughput>
    </amazondynamodb.updateTable>
    ```

    **Sample request**

    ```json
    {
        "accessKeyId":"AKIAxxxxxxxxxxxx",
        "secretAccessKey":"id4qxxxxxxxx",
        "region":"us-east-1",
        "blocking":"false",
        "tableName":"Thread",
        "provisionedThroughput":{
            "ReadCapacityUnits":12,
            "WriteCapacityUnits":12
        }
    }
    ```

    **Sample response**

    ```json
    {
        "TableDescription":{
            "TableArn":"arn:aws:dynamodb:us-west-2:123456789012:table/Thread",
            "AttributeDefinitions":[
                {
                    "AttributeName":"ForumName",
                    "AttributeType":"S"
                },
                {
                    "AttributeName":"LastPostDateTime",
                    "AttributeType":"S"
                },
                {
                    "AttributeName":"Subject",
                    "AttributeType":"S"
                }
            ],
            "CreationDateTime":1.363801528686E9,
            "ItemCount":0,
            "KeySchema":[
                {
                    "AttributeName":"ForumName",
                    "KeyType":"HASH"
                },
                {
                    "AttributeName":"Subject",
                    "KeyType":"RANGE"
                }
            ],
            "LocalSecondaryIndexes":[
                {
                    "IndexName":"LastPostIndex",
                    "IndexSizeBytes":0,
                    "ItemCount":0,
                    "KeySchema":[
                        {
                            "AttributeName":"ForumName",
                            "KeyType":"HASH"
                        },
                        {
                            "AttributeName":"LastPostDateTime",
                            "KeyType":"RANGE"
                        }
                    ],
                    "Projection":{
                        "ProjectionType":"KEYS_ONLY"
                    }
                }
            ],
            "ProvisionedThroughput":{
                "LastIncreaseDateTime":1.363801701282E9,
                "NumberOfDecreasesToday":0,
                "ReadCapacityUnits":5,
                "WriteCapacityUnits":5
            },
            "TableName":"Thread",
            "TableSizeBytes":0,
            "TableStatus":"UPDATING"
        }
    }
    ```

??? note "describeLimits"
    The describeLimits operation retrieves the current provisioned-capacity limits allowed in a region. See the [related API documentation](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeLimits.html) for more information.

    **Sample configuration**

    ```xml
    <amazondynamodb.describeLimits/>
    ```

    **Sample request**

    ```json
    {
        "accessKeyId":"AKIAxxxxxxxxxxxx",
        "secretAccessKey":"id4qxxxxxxxx",
        "region":"us-east-1",
        "blocking":"false"
    }
    ```

    **Sample response**

    ```json
    {
        "AccountMaxReadCapacityUnits":20000,
        "AccountMaxWriteCapacityUnits":20000,
        "TableMaxReadCapacityUnits":10000,
        "TableMaxWriteCapacityUnits":10000
    }
    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-config.md b/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-config.md
deleted file mode 100644
index d51981822e..0000000000
--- a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-config.md
+++ /dev/null
@@ -1,1225 +0,0 @@
# Amazon Lambda Connector Reference

The following operations allow you to work with the Amazon Lambda Connector. Click an operation name to see parameter details and samples on how to use it.

### Accounts

??? note "getAccountSettings"
    The getAccountSettings operation retrieves details about your account's limits and usage in an AWS Region. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_GetAccountSettings.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionGetAccountSettings | API version for GetAccountSettings method. | Yes |
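    As with the other operations on this page, getAccountSettings is called after the connector's `init` operation, which sets the credentials and region. A minimal sketch follows; the `amazonlambda.init` parameter names are inferred from the sample requests and should be treated as assumptions:

    ```xml
    <!-- Sketch: init (assumed element names) followed by the operation -->
    <amazonlambda.init>
        <region>{$ctx:region}</region>
        <accessKeyId>{$ctx:accessKeyId}</accessKeyId>
        <secretAccessKey>{$ctx:secretAccessKey}</secretAccessKey>
        <blocking>{$ctx:blocking}</blocking>
    </amazonlambda.init>
    <amazonlambda.getAccountSettings>
        <apiVersionGetAccountSettings>{$ctx:apiVersionGetAccountSettings}</apiVersionGetAccountSettings>
    </amazonlambda.getAccountSettings>
    ```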
    **Sample configuration**

    ```xml
    <amazonlambda.getAccountSettings>
        <apiVersionGetAccountSettings>{$ctx:apiVersionGetAccountSettings}</apiVersionGetAccountSettings>
    </amazonlambda.getAccountSettings>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "apiVersionGetAccountSettings": "2016-08-19"
    }
    ```

    **Sample response**

    ```json
    {
        "AccountLimit": {
            "CodeSizeUnzipped": 262144000,
            "CodeSizeZipped": 52428800,
            "ConcurrentExecutions": 1000,
            "TotalCodeSize": 80530636800,
            "UnreservedConcurrentExecutions": 1000,
            "UnreservedConcurrentExecutionsMinimum": null
        },
        "AccountUsage": {
            "FunctionCount": 1,
            "TotalCodeSize": 176268666
        },
        "DeprecatedFeaturesAccess": null,
        "HasFunctionWithDeprecatedRuntime": false,
        "PreviewFeatures": null
    }
    ```

### Aliases

??? note "createAlias"
    The createAlias implementation of the POST operation creates an alias for a Lambda function version. Use aliases to provide clients with a function identifier that you can update to invoke a different version. You can also map an alias to split invocation requests between two versions. Use the RoutingConfig parameter to specify a second version and the percentage of invocation requests that it receives. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateAlias.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionCreateAlias | API version for CreateAlias method. | Yes |
    | functionName | The name of the Lambda function that the alias invokes. | Yes |
    | createAliasDescription | The description of the alias. | Yes |
    | functionVersion | The function version that the alias invokes. | Yes |
    | aliasName | The name of the alias. | Yes |
    | aliasAdditionalVersionWeights | The second function version, and the percentage of traffic that's routed to it. | Yes |
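    To split traffic between two versions, `aliasAdditionalVersionWeights` maps the second version to the fraction of requests it should receive. The payloadFactory sketch below builds a request that sends 5% of invocations to version 2; the JSON shape mirrors the AWS RoutingConfig model and is an illustrative assumption:

    ```xml
    <!-- Sketch: alias on version 1, with 5% of traffic routed to version 2 -->
    <payloadFactory media-type="json">
        <format>
            {
                "functionName": "test",
                "aliasName": "alias2",
                "functionVersion": "1",
                "aliasAdditionalVersionWeights": {"2": 0.05}
            }
        </format>
        <args/>
    </payloadFactory>
    ```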
    **Sample configuration**

    ```xml
    <amazonlambda.createAlias>
        <functionName>{$ctx:functionName}</functionName>
        <createAliasDescription>{$ctx:createAliasDescription}</createAliasDescription>
        <functionVersion>{$ctx:functionVersion}</functionVersion>
        <aliasName>{$ctx:aliasName}</aliasName>
        <aliasAdditionalVersionWeights>{$ctx:aliasAdditionalVersionWeights}</aliasAdditionalVersionWeights>
        <apiVersionCreateAlias>{$ctx:apiVersionCreateAlias}</apiVersionCreateAlias>
    </amazonlambda.createAlias>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"test",
        "functionVersion":"$LATEST",
        "aliasName":"alias2",
        "apiVersionCreateAlias":"2015-03-31"
    }
    ```

    **Sample response**

    ```json
    {
        "AliasArn": "arn:aws:lambda:us-east-2:********:function:test:alias2",
        "Description": "",
        "FunctionVersion": "$LATEST",
        "Name": "alias2",
        "RevisionId": "be8925ae-a634-4303-92e2-5364d0724406",
        "RoutingConfig": null
    }
    ```

??? note "deleteAlias"
    The deleteAlias implementation deletes a Lambda function alias. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_DeleteAlias.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionDeleteAlias | API version for DeleteAlias method. | Yes |
    | functionName | The name of the Lambda function that the alias invokes. | Yes |
    | aliasName | The name of the alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.deleteAlias>
        <functionName>{$ctx:functionName}</functionName>
        <aliasName>{$ctx:aliasName}</aliasName>
        <apiVersionDeleteAlias>{$ctx:apiVersionDeleteAlias}</apiVersionDeleteAlias>
    </amazonlambda.deleteAlias>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"test",
        "aliasName":"alias2",
        "apiVersionDeleteAlias":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 204 No Content
    ```

??? note "getAlias"
    The getAlias implementation of the GET operation returns details about a Lambda function alias. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_GetAlias.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionGetAlias | API version for getAlias method. | Yes |
    | functionName | The name of the Lambda function that the alias invokes. | Yes |
    | aliasName | The name of the alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.getAlias>
        <functionName>{$ctx:functionName}</functionName>
        <aliasName>{$ctx:aliasName}</aliasName>
        <apiVersionGetAlias>{$ctx:apiVersionGetAlias}</apiVersionGetAlias>
    </amazonlambda.getAlias>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"test",
        "aliasName":"alias2",
        "apiVersionGetAlias":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 200 OK
    ```

    ```json
    {
        "AliasArn": "arn:aws:lambda:us-east-2:********:function:test:alias2",
        "Description": "",
        "FunctionVersion": "$LATEST",
        "Name": "alias2",
        "RevisionId": "be8925ae-a634-4303-92e2-5364d0724406",
        "RoutingConfig": null
    }
    ```

??? note "updateAlias"
    The updateAlias method implementation updates the configuration of a Lambda function alias. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_UpdateAlias.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionUpdateAlias | API version for updateAlias method. | Yes |
    | functionName | The name of the Lambda function that the alias invokes. | Yes |
    | aliasName | The name of the alias. | Yes |
    | updatedAliasDescription | The description of the alias. | Yes |
    | updatedAliasAdditionalVersionWeight | The second function version, and the percentage of traffic that's routed to it. | Yes |
    | functionVersion | The function version that the alias invokes. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.updateAlias>
        <functionName>{$ctx:functionName}</functionName>
        <updatedAliasDescription>{$ctx:updatedAliasDescription}</updatedAliasDescription>
        <functionVersion>{$ctx:functionVersion}</functionVersion>
        <aliasName>{$ctx:aliasName}</aliasName>
        <updatedAliasAdditionalVersionWeight>{$ctx:updatedAliasAdditionalVersionWeight}</updatedAliasAdditionalVersionWeight>
        <apiVersionUpdateAlias>{$ctx:apiVersionUpdateAlias}</apiVersionUpdateAlias>
    </amazonlambda.updateAlias>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-1",
        "blocking":"false",
        "functionName":"test",
        "aliasName":"alias2",
        "functionVersion":"$LATEST",
        "apiVersionUpdateAlias":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 200 OK
    ```

    ```json
    {
        "AliasArn": "arn:aws:lambda:us-east-2:*********:function:test:alias2",
        "Description": "",
        "FunctionVersion": "$LATEST",
        "Name": "alias2",
        "RevisionId": "6d8d089b-c632-4a4b-91ba-ee1ce706c50a",
        "RoutingConfig": null
    }
    ```
### Functions

??? note "addPermission"
    The addPermission method implementation grants an AWS service or another account permission to use a function. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionAddPermission | API version for AddPermission method. | Yes |
    | functionName | Name of the Lambda function, version, or alias. | Yes |
    | permissionAction | The action that the principal can use on the function. For example, `lambda:InvokeFunction` or `lambda:GetFunction`. | Yes |
    | permissionStatementId | A statement identifier that differentiates the statement from others in the same policy. | Yes |
    | permissionPrincipal | The AWS service or account that invokes the function. If you specify a service, use `SourceArn` or `SourceAccount` to limit who can invoke the function through that service. | Yes |
    | permissionQualifier | Specify a version or alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.addPermission>
        <functionName>{$ctx:functionName}</functionName>
        <permissionAction>{$ctx:permissionAction}</permissionAction>
        <permissionStatementId>{$ctx:permissionStatementId}</permissionStatementId>
        <permissionPrincipal>{$ctx:permissionPrincipal}</permissionPrincipal>
        <permissionQualifier>{$ctx:permissionQualifier}</permissionQualifier>
        <apiVersionAddPermission>{$ctx:apiVersionAddPermission}</apiVersionAddPermission>
    </amazonlambda.addPermission>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M55z8I*****************",
        "accessKeyId":"AKIAJHJX************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"testFunction",
        "permissionAction":"lambda:addPermission",
        "permissionPrincipal":"s3.amazonaws.com",
        "permissionStatementId":"Permisssion_Added182p",
        "apiVersionAddPermission":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 201 Created
    ```

    ```json
    {
        "Statement": "{\"Sid\":\"Permisssion_Added182p\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"s3.amazonaws.com\"},\"Action\":\"lambda:addPermission\",\"Resource\":\"arn:aws:lambda:us-east-2:*******:function:testFunction\"}"
    }
    ```
??? note "createFunction"
    The createFunction method implementation creates a new function. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionCreateFunction | The API version for the CreateFunction method. | Yes |
    | functionName | The name of the Lambda function. | Yes |
    | functionDescription | The description of the function. | Yes |
    | s3Bucket | An Amazon S3 bucket name in the same region as your function. | Yes |
    | s3Key | The Amazon S3 key of the deployment package. | Yes |
    | s3ObjectVersion | For versioned objects, the version of the deployment package object to use. | Yes |
    | zipFile | The base64-encoded contents of the ZIP file containing your deployment package. AWS SDK and AWS CLI clients handle the encoding for you. | Yes |
    | targetArn | The Amazon Resource Name (ARN) of an Amazon SQS queue or Amazon SNS topic. | Yes |
    | environmentVariables | Environment variable key-value pairs. | Yes |
    | kmsKeyArn | The ARN of the KMS key used to encrypt your function's environment variables. If not provided, AWS Lambda will use a default service key. | Yes |
    | layers | A list of function layers to add to the function's execution environment. | Yes |
    | memorySize | The amount of memory that your function has access to. Increasing the function's memory also increases its CPU allocation. The default value is 128 MB. The value must be a multiple of 64 MB. | Yes |
    | publish | Set to true to publish the first version of the function during creation. | Yes |
    | role | The Amazon Resource Name (ARN) of the function's execution role. | Yes |
    | runtime | The runtime version for the function. Valid values: nodejs \| nodejs4.3 \| nodejs6.10 \| nodejs8.10 \| java8 \| python2.7 \| python3.6 \| python3.7 \| dotnetcore1.0 \| dotnetcore2.0 \| dotnetcore2.1 \| nodejs4.3-edge \| go1.x \| ruby2.5 | Yes |
    | tags | The list of tags (key-value pairs) assigned to the new function. For more information, see Tagging Lambda Functions in the AWS Lambda Developer Guide. | Yes |
    | timeout | The amount of time that Lambda allows a function to run before terminating it. The default is 3 seconds. The maximum allowed value is 900 seconds. | Yes |
    | mode | The tracing mode. Set to `Active` to sample and trace a subset of incoming requests with AWS X-Ray. | Yes |
    | securityGroupIds | A list of VPC security group IDs. | Yes |
    | subnetIds | A list of VPC subnet IDs. | Yes |
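    Note that the deployment package can be supplied in one of two ways: inline, as the base64-encoded `zipFile` value, or by reference, through `s3Bucket`/`s3Key` (optionally pinned with `s3ObjectVersion`). The payloadFactory sketch below builds a minimal S3-based request; the field values are placeholders, not working resources:

    ```xml
    <!-- Sketch: create a function from a package already uploaded to S3 -->
    <payloadFactory media-type="json">
        <format>
            {
                "functionName": "createdFunc",
                "s3Bucket": "my-bucket",
                "s3Key": "fnc.zip",
                "handler": "lambda_function.handler",
                "role": "arn:aws:iam::123456789012:role/lambda-role",
                "runtime": "python3.7",
                "apiVersionCreateFunction": "2015-03-31"
            }
        </format>
        <args/>
    </payloadFactory>
    ```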
    **Sample configuration**

    ```xml
    <amazonlambda.createFunction>
        <functionName>{$ctx:functionName}</functionName>
        <functionDescription>{$ctx:functionDescription}</functionDescription>
        <apiVersionCreateFunction>{$ctx:apiVersionCreateFunction}</apiVersionCreateFunction>
        <s3Bucket>{$ctx:s3Bucket}</s3Bucket>
        <s3Key>{$ctx:s3Key}</s3Key>
        <s3ObjectVersion>{$ctx:s3ObjectVersion}</s3ObjectVersion>
        <zipFile>{$ctx:zipFile}</zipFile>
        <targetArn>{$ctx:targetArn}</targetArn>
        <environmentVariables>{$ctx:environmentVariables}</environmentVariables>
        <handler>{$ctx:handler}</handler>
        <kmsKeyArn>{$ctx:kmsKeyArn}</kmsKeyArn>
        <layers>{$ctx:layers}</layers>
        <memorySize>{$ctx:memorySize}</memorySize>
        <publish>{$ctx:publish}</publish>
        <role>{$ctx:role}</role>
        <runtime>{$ctx:runtime}</runtime>
        <tags>{$ctx:tags}</tags>
        <timeout>{$ctx:timeout}</timeout>
        <mode>{$ctx:mode}</mode>
        <securityGroupIds>{$ctx:securityGroupIds}</securityGroupIds>
        <subnetIds>{$ctx:subnetIds}</subnetIds>
    </amazonlambda.createFunction>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M55z8I*****************",
        "accessKeyId":"AKIAJHJX************",
        "region":"us-east-2",
        "blocking":"false",
        "s3Bucket":"ajbuck8",
        "s3Key":"fnc.zip",
        "s3ObjectVersion":"null",
        "functionName":"createdFunc",
        "handler":"mdhandler",
        "role":"arn:aws:iam::14*****:role/service-role/yfuj",
        "runtime":"python3.7",
        "apiVersionCreateFunction":"2015-03-31"
    }
    ```

    **Sample response**

    ```json
    {
        "CodeSha256": "tp34ACQUVOU5YVe84VQUQHsHWdfixrnP/mkMdtt6gEc=",
        "CodeSize": 338,
        "DeadLetterConfig": null,
        "Description": "",
        "Environment": null,
        "FunctionArn": "arn:aws:lambda:us-east-2:*********:function:createdFunc",
        "FunctionName": "createdFunc",
        "Handler": "mdhandler",
        "KMSKeyArn": null,
        "LastModified": "2019-03-05T09:36:27.074+0000",
        "Layers": null,
        "MasterArn": null,
        "MemorySize": 128,
        "RevisionId": "acdf452b-5bf0-4203-9e22-728c200aa42a",
        "Role": "arn:aws:iam::**********:role/service-role/yfuj",
        "Runtime": "python3.7",
        "Timeout": 3,
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "Version": "$LATEST",
        "VpcConfig": null
    }
    ```
??? note "deleteFunction"
    The deleteFunction method implementation deletes a Lambda function. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_DeleteFunction.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionDeleteFunction | API version for DeleteFunction method. | Yes |
    | functionName | The name of the Lambda function. | Yes |
    | deleteFunctionQualifier | Specify a version to delete. You can't delete a version that's referenced by an alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.deleteFunction>
        <functionName>{$ctx:functionName}</functionName>
        <deleteFunctionQualifier>{$ctx:deleteFunctionQualifier}</deleteFunctionQualifier>
        <apiVersionDeleteFunction>{$ctx:apiVersionDeleteFunction}</apiVersionDeleteFunction>
    </amazonlambda.deleteFunction>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJX************",
        "region":"us-east-1",
        "blocking":"false",
        "functionName":"func",
        "apiVersionDeleteFunction":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 204 No Content
    ```
??? note "getFunction"
    The getFunction method implementation returns information about the function or function version. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_GetFunction.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionGetFunction | API version for GetFunction method. | Yes |
    | functionName | The name of the Lambda function. | Yes |
    | qualifier | Specify a version or alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.getFunction>
        <functionName>{$ctx:functionName}</functionName>
        <qualifier>{$ctx:qualifier}</qualifier>
        <apiVersionGetFunction>{$ctx:apiVersionGetFunction}</apiVersionGetFunction>
    </amazonlambda.getFunction>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"Fn",
        "qualifier":"$LATEST",
        "apiVersionGetFunction":"2015-03-31"
    }
    ```

    **Sample response**

    ```json
    {
        "Code": {
            "Location": "https://awslambda-us-east-2-tasks.s3.us-east-2.amazonaws.com/snapshots/1*****6/test-9f25e193-f604-4d9e-83f1-1254f57e92bc?versionId=wGTdzzK2xtmCGZdt_kgFyy4dlBV8qr1N&X-Amz-Security-Token=FQoGZXIvYXdzEFoaDGu12sbFFNlw0JI6rCK3A6sbM%2FoxC7a2gKuwHXuKoacmpYJa0L%2FtR%2B52PUf9Pbxh2K4OOg5iffmAhfRV%2BpdhyLs32zWlkiYXRpZseDeZPAbofXMZSoLDWhtLVB0EmLTwz33gX8EQfrsvAJa2xWyM9bsebmNwHe9jTa56DvfaQzPEEa4QXpzWEKH8i5%2FSz9iNCrQhbRP%2B5dvclV%2FULql2gMPlxbwPIZNIYdF1xZuddIGcZInkrEHL3956%2B0kHag%2FL%2FoWzN81IGkySbjKNgRFeLxlDEn9ZpDiC%2FdrnNqJ%2FuBdgben7T1ZV3ck5ra0aT7XKaZhDtEN4jHv0sw3O9rORxvlne50TZ56aVePW%2FpUekHjTUiMgrwG%2B2J4uXl2ht2lTJQW3heAFFCoo1DawPlSG%2Fszht8Mt%2BhkHOrE7Re2GRTlnj0jEzEtqgp3JjuaYZU7dtbU4PhbvavF2LtxWFin9p0hWGkcMjKWuWDTaHLdj%2FzTSkS3qifkD9k34B6P%2BaQE1liduGSwK4CgNGNIP5PISt%2Fyoq2Gii1A3yIKyFgeL1W3cJ%2FuhVL9iC%2FsAN6AMkGMsNNjO%2BxvlclQ0YNK10sGhsc7A0z0Cvsgo0O344wU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20190305T090047Z&X-Amz-SignedHeaders=host&X-Amz-Expires=599&X-Amz-Credential=ASIARQRML75E7CY33SUO%2F20190305%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=e82c9ea475e1ba363b6e061c2eebeded0dfd8f275ad8313e16f42430a4f4819b",
            "RepositoryType": "S3"
        },
        "Concurrency": null,
        "Configuration": {
            "CodeSha256": "pETr5sslHxypYmc5mm3M8j3RFMB2G5f5y8lQM/7ZVIs=",
            "CodeSize": 262,
            "DeadLetterConfig": null,
            "Description": "",
            "Environment": null,
            "FunctionArn": "arn:aws:lambda:us-east-2:********:function:test:$LATEST",
            "FunctionName": "test",
            "Handler": "index.handler",
            "KMSKeyArn": null,
            "LastModified": "2019-03-05T08:43:52.123+0000",
            "Layers": [
                {
                    "Arn": "arn:aws:lambda:us-east-2:*******:layer:ballerina-09903:1",
                    "CodeSize": 177304793,
                    "UncompressedCodeSize": 207173983
                }
            ],
            "MasterArn": null,
            "MemorySize": 128,
            "RevisionId": "1da07f2e-469d-4981-a350-38bb01f19167",
            "Role": "arn:aws:iam::**********:role/test-role",
            "Runtime": "nodejs8.10",
            "Timeout": 3,
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "Version": "$LATEST",
            "VpcConfig": {
                "SecurityGroupIds": [],
                "SubnetIds": [],
                "VpcId": "",
                "VpcSetupStatus": null,
                "VpcSetupStatusReason": null
            }
        },
        "Tags": null
    }
    ```
??? note "getFunctionConfiguration"
    The getFunctionConfiguration method implementation returns the version-specific settings of a Lambda function or version. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_GetFunctionConfiguration.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionGetFunctionConfiguration | API version for GetFunctionConfiguration method. | Yes |
    | functionName | The name of the Lambda function. | Yes |
    | qualifier | Specify a version or alias. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.getFunctionConfiguration>
        <functionName>{$ctx:functionName}</functionName>
        <qualifier>{$ctx:qualifier}</qualifier>
        <apiVersionGetFunctionConfiguration>{$ctx:apiVersionGetFunctionConfiguration}</apiVersionGetFunctionConfiguration>
    </amazonlambda.getFunctionConfiguration>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "functionName":"test",
        "qualifier":"$LATEST",
        "apiVersionGetFunctionConfiguration":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 200 OK
    ```

    ```json
    {
        "CodeSha256": "pETr5sslHxypYmc5mm3M8j3RFMB2G5f5y8lQM/7ZVIs=",
        "CodeSize": 262,
        "DeadLetterConfig": null,
        "Description": "",
        "Environment": null,
        "FunctionArn": "arn:aws:lambda:us-east-2:*********:function:test:$LATEST",
        "FunctionName": "test",
        "Handler": "index.handler",
        "KMSKeyArn": null,
        "LastModified": "2019-03-05T08:43:52.123+0000",
        "Layers": [
            {
                "Arn": "arn:aws:lambda:us-east-2:***********:layer:ballerina-09903:1",
                "CodeSize": 177304793,
                "UncompressedCodeSize": 207173983
            }
        ],
        "MasterArn": null,
        "MemorySize": 128,
        "RevisionId": "1da07f2e-469d-4981-a350-38bb01f19167",
        "Role": "arn:aws:iam::*********:role/test-role",
        "Runtime": "nodejs8.10",
        "Timeout": 3,
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "Version": "$LATEST",
        "VpcConfig": {
            "SecurityGroupIds": [],
            "SubnetIds": [],
            "VpcId": "",
            "VpcSetupStatus": null,
            "VpcSetupStatusReason": null
        }
    }
    ```
??? note "invoke"
    The invoke method implementation invokes a Lambda function. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionInvoke | API version for Invoke method. | Yes |
    | functionName | The name of the Lambda function. | Yes |
    | qualifier | Specify a version or alias. | Yes |
    | x-amz-invocation-type | Specifies how to invoke the function: `RequestResponse` (default) - invoke the function synchronously, keeping the connection open until the function returns a response or times out (the API response includes the function response and additional data); `Event` - invoke the function asynchronously, sending events that fail multiple times to the function's dead-letter queue if it's configured (the API response only includes a status code); `DryRun` - validate parameter values and verify that the user or role has permission to invoke the function. | Yes |
    | x-amz-log-type | Specifies whether to include the execution log in the response. Set to `Tail` to include it in the response. Valid values are `None` and `Tail`. | Yes |
    | x-amz-client-context | Base64-encoded data about the invoking client, passed to the function in the context object. It can be up to 3583 bytes. | Yes |
    | payload | The JSON that you want to provide to your Lambda function as input. | Yes |
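    For a fire-and-forget call, the invocation type can be switched to `Event`, in which case only a status code comes back. A minimal sketch follows, using assumed property names that match the parameter table above:

    ```xml
    <!-- Sketch: asynchronous invocation; the response carries only a status code -->
    <property name="x-amz-invocation-type" value="Event"/>
    <property name="payload" value="{&quot;key1&quot;:&quot;value1&quot;}"/>
    <amazonlambda.invoke>
        <functionName>{$ctx:functionName}</functionName>
        <apiVersionInvoke>{$ctx:apiVersionInvoke}</apiVersionInvoke>
        <x-amz-invocation-type>{$ctx:x-amz-invocation-type}</x-amz-invocation-type>
        <payload>{$ctx:payload}</payload>
    </amazonlambda.invoke>
    ```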
    **Sample configuration**

    ```xml
    <amazonlambda.invoke>
        <functionName>{$ctx:functionName}</functionName>
        <apiVersionInvoke>{$ctx:apiVersionInvoke}</apiVersionInvoke>
        <qualifier>{$ctx:qualifier}</qualifier>
        <x-amz-invocation-type>{$ctx:x-amz-invocation-type}</x-amz-invocation-type>
        <x-amz-log-type>{$ctx:x-amz-log-type}</x-amz-log-type>
        <x-amz-client-context>{$ctx:x-amz-client-context}</x-amz-client-context>
        <payload>{$ctx:payload}</payload>
    </amazonlambda.invoke>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7m****************",
        "accessKeyId":"AKIAJHJXWUY*********",
        "region":"us-east-1",
        "blocking":"false",
        "functionName":"LambdawithLayer",
        "apiVersionInvoke":"2015-03-31"
    }
    ```

    **Sample response**

    ```
    Status: 200 OK
    ```

    ```json
    {
        "body": "Hello from Lambda Layers!",
        "statusCode": 200
    }
    ```
??? note "listFunctions"
    The listFunctions method implementation returns a list of Lambda functions, with the version-specific configuration of each. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_ListFunctions.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionListFunctions | API version for ListFunctions method. | Yes |
    | functionVersion | Specifies the version to include in entries for each function. Set to `ALL` to include entries for all published versions of each function. | Yes |
    | marker | The pagination token returned by a previous request, used to retrieve the next page of results. | Yes |
    | masterRegion | For Lambda@Edge functions, the AWS Region of the master function. For example, `us-east-2` or `ALL`. If specified, you must set `functionVersion` to `ALL`. | Yes |
    | maxItems | Limits the number of functions in the response; a value ranging from 1 to 10000. | Yes |
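    Paging works the same way as in the other AWS list APIs: a truncated response carries a `NextMarker`, which becomes the `marker` of the following call. A rough sketch, assuming the previous listFunctions response is still the message payload:

    ```xml
    <!-- Sketch: pass the NextMarker of the previous response as the next marker -->
    <property name="marker" expression="json-eval($.NextMarker)"/>
    <amazonlambda.listFunctions>
        <functionVersion>{$ctx:functionVersion}</functionVersion>
        <apiVersionListFunctions>{$ctx:apiVersionListFunctions}</apiVersionListFunctions>
        <marker>{$ctx:marker}</marker>
        <masterRegion>{$ctx:masterRegion}</masterRegion>
        <maxItems>{$ctx:maxItems}</maxItems>
    </amazonlambda.listFunctions>
    ```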
    **Sample configuration**

    ```xml
    <amazonlambda.listFunctions>
        <functionVersion>{$ctx:functionVersion}</functionVersion>
        <apiVersionListFunctions>{$ctx:apiVersionListFunctions}</apiVersionListFunctions>
        <marker>{$ctx:marker}</marker>
        <masterRegion>{$ctx:masterRegion}</masterRegion>
        <maxItems>{$ctx:maxItems}</maxItems>
    </amazonlambda.listFunctions>
    ```
    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-1",
        "blocking":"false",
        "functionVersion":"ALL",
        "marker":"1",
        "masterRegion":"us-east-1",
        "maxItems":"3",
        "apiVersionListFunctions":"2015-03-31"
    }
    ```

??? note "removePermission"
    The removePermission method implementation revokes function-use permission from an AWS service or another account. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_RemovePermission.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionRemovePermission | API version for RemovePermission method. | Yes |
    | functionName | Name of the Lambda function. | Yes |
    | permissionStatementId | Statement ID of the permission to remove. | Yes |
    | permissionQualifier | Specifies a version or alias to remove permission from a published version of the function. | Yes |
    | permissionRevisionId | Updates the policy only if the revision ID matches the one specified. Use this option to avoid modifying a policy that has changed since you last read it. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.removePermission>
        <functionName>{$ctx:functionName}</functionName>
        <apiVersionRemovePermission>{$ctx:apiVersionRemovePermission}</apiVersionRemovePermission>
        <permissionStatementId>{$ctx:permissionStatementId}</permissionStatementId>
        <permissionQualifier>{$ctx:permissionQualifier}</permissionQualifier>
        <permissionRevisionId>{$ctx:permissionRevisionId}</permissionRevisionId>
    </amazonlambda.removePermission>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-1",
        "blocking":"false",
        "functionName":"Fn",
        "permissionStatementId":"Permisssion_Added1443p",
        "apiVersionRemovePermission":"2015-03-31"
    }
    ```
    **Sample response**

    ```
    Status: 204 No Content
    ```
### Layers

??? note "addLayerVersionPermission"
    The addLayerVersionPermission method implementation adds permission to the resource-based policy of a version of an AWS Lambda layer. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_AddLayerVersionPermission.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionAddLayerVersionPermission | API version for AddLayerVersionPermission method. | Yes |
    | layerName | The name or Amazon Resource Name (ARN) of the layer. | Yes |
    | layerVersionNumber | The version number of the layer. | Yes |
    | layerRevisionId | Only update the policy if the revision ID matches the ID specified. Use this option to avoid modifying a policy that has changed since you last read it. | Yes |
    | layerAction | The API action that grants access to the layer. For example, `lambda:GetLayerVersion`. | Yes |
    | layerOrganizationId | With the principal set to `*`, grant permission to all accounts in the specified organization. | Yes |
    | layerPrincipal | An account ID, or `*` to grant permission to all AWS accounts. | Yes |
    | layerStatementId | An identifier that distinguishes the policy from others on the same layer version. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.addLayerVersionPermission>
        <layerName>{$ctx:layerName}</layerName>
        <layerVersionNumber>{$ctx:layerVersionNumber}</layerVersionNumber>
        <layerRevisionId>{$ctx:layerRevisionId}</layerRevisionId>
        <layerAction>{$ctx:layerAction}</layerAction>
        <layerOrganizationId>{$ctx:layerOrganizationId}</layerOrganizationId>
        <layerPrincipal>{$ctx:layerPrincipal}</layerPrincipal>
        <layerStatementId>{$ctx:layerStatementId}</layerStatementId>
        <apiVersionAddLayerVersionPermission>{$ctx:apiVersionAddLayerVersionPermission}</apiVersionAddLayerVersionPermission>
    </amazonlambda.addLayerVersionPermission>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"0b+fcboKq87Nf7mH6M**********************",
        "accessKeyId":"AKIAJHJ*************",
        "region":"us-east-2",
        "blocking":"false",
        "layerVersionNumber":"1",
        "layerPrincipal":"*",
        "layerStatementId":"Permisssion_Added",
        "layerAction":"lambda:GetLayerVersion",
        "layerName":"CustomFunction",
        "apiVersionAddLayerVersionPermission":"2018-10-31"
    }
    ```

    **Sample response**

    ```json
    {
        "RevisionId": "632d9fdb-a063-4309-99f5-023762923216",
        "Statement": "{\"Sid\":\"Layer_Version_Permisssion_Added\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"lambda:GetLayerVersion\",\"Resource\":\"arn:aws:lambda:us-east-2:**********:layer:CustomFunction:1\"}"
    }
    ```
??? note "removeLayerVersionPermission"
    The removeLayerVersionPermission method implementation revokes permission from the resource-based policy of a version of an AWS Lambda layer. See the [related API documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_RemoveLayerVersionPermission.html).
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiVersionRemoveLayerVersionPermission | API version for RemoveLayerVersionPermission method. | Yes |
    | layerName | The name or Amazon Resource Name (ARN) of the layer. | Yes |
    | layerVersionNumber | The version number of the layer. | Yes |
    | layerRevisionId | Only update the policy if the revision ID matches the ID specified. Use this option to avoid modifying a policy that has changed since you last read it. | Yes |
    | layerStatementId | An identifier that distinguishes the policy from others on the same layer version. | Yes |
    **Sample configuration**

    ```xml
    <amazonlambda.removeLayerVersionPermission>
        <layerName>{$ctx:layerName}</layerName>
        <layerVersionNumber>{$ctx:layerVersionNumber}</layerVersionNumber>
        <layerStatementId>{$ctx:layerStatementId}</layerStatementId>
        <layerRevisionId>{$ctx:layerRevisionId}</layerRevisionId>
        <apiVersionRemoveLayerVersionPermission>{$ctx:apiVersionRemoveLayerVersionPermission}</apiVersionRemoveLayerVersionPermission>
    </amazonlambda.removeLayerVersionPermission>
    ```

    **Sample request**

    ```json
    {
        "secretAccessKey":"ZvLi************************************",
        "accessKeyId":"AKIAIZ**************",
        "region":"us-east-2",
        "blocking":"false",
        "layerVersionNumber":"1",
        "layerStatementId":"Layer_Version_Permisssion_Added",
        "layerName":"CustomFunction",
        "apiVersionRemoveLayerVersionPermission":"2018-10-31"
    }
    ```

    **Sample response**

    ```
    Status: 204 No Content
    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-example.md b/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-example.md
deleted file mode 100644
index ade98ff2bf..0000000000
--- a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-example.md
+++ /dev/null
@@ -1,242 +0,0 @@
# Amazon Lambda Connector Example

Given below is a sample scenario that demonstrates how to create an Amazon Lambda function in the AWS Lambda Service using the WSO2 Amazon Lambda Connector.

## What you'll build
To use the Amazon Lambda connector, add the `<amazonlambda.init>` element in your configuration before carrying out any Amazon Lambda operations. This configuration authenticates with Amazon Lambda by specifying the AWS access key ID and secret access key, which are used for every operation. The signature is used with every request and thus differs based on the request the user makes.

This example demonstrates how to use the Amazon Lambda Connector's `createFunction` operation.

Here we expose the `createFunction` operation via an API. The API has one resource with the context `/createFunction`.

* `/createFunction` : The `createFunction` operation creates a Lambda function.

To create a function, you need a deployment package and an execution role. The deployment package contains your function code. The execution role grants the function permission to use AWS services.

The following diagram illustrates all the required functionality of the Amazon Lambda Service that you are going to build.

Amazon Lambda Connector

This example demonstrates how to create an Amazon Lambda function easily using the WSO2 Amazon Lambda Connector. Before creating an Amazon Lambda function inside the AWS Lambda service, you need to implement the required deployment package (ZIP archive) locally.

As a next step, simply create an AWS S3 bucket and upload the deployment package into that bucket. This sample API contains a service that can be invoked through an HTTP POST request. Once the service is invoked, it creates a Lambda function inside the AWS Lambda service. When the created Lambda function is invoked, it is able to run without provisioning or managing servers.

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Configure the connector in WSO2 Integration Studio

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

1. Right-click on the created Integration Project and select **New** -> **Rest API** to create the REST API.

2. Specify the API name as `createFunction` and API context as `/createFunction`.
You can go to the XML configuration of the API (source view) and copy the following configuration. - - ``` - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - {$ctx:region} - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:blocking} - - - {$ctx:functionName} - {$ctx:functionDescription} - {$ctx:apiVersionCreateFunction} - {$ctx:s3Bucket} - {$ctx:s3Key} - {$ctx:s3ObjectVersion} - {$ctx:zipFile} - {$ctx:targetArn} - {$ctx:environmentVariables} - {$ctx:handler} - {$ctx:kmsKeyArn} - {$ctx:layers} - {$ctx:memorySize} - {$ctx:publish} - {$ctx:role} - {$ctx:runtime} - {$ctx:tags} - {$ctx:timeout} - {$ctx:mode} - {$ctx:securityGroupIds} - {$ctx:subnetIds} - - - - - - - - ``` -3. Now we can export the imported connector and the API into a single CAR application. The CAR application is what we are going to deploy during server runtime. - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Create Amazon Lambda Deployment Package (Lambda function) -In this scenario we created sample AWS Deployment Package (Lambda function) in Python. - -1. Our sample Deployment Package would look similar to the following (source view : addingNumbers.py). - ``` - import json - - print('Loading function') - - def addingNumbers(event, context): - #print("Received event: " + json.dumps(event, indent=2)) - value1 = event['key1'] - value2 = event['key2'] - print("value1 = " + value1) - print("value2 = " + value2) - return float(value1) + float(value2) # Echo back the addition of two keys - #raise Exception('Something went wrong') - ``` - -2. Create a ZIP archive. - -Please use command line terminal or shell to run following commands. Commands are shown in listings preceded by a prompt symbol ($) and the name of the current directory, when appropriate: - -``` -~/Documents$ zip addingNumbers.zip addingNumbers.py - adding: addingNumbers.py.py (deflated 17%) -``` - -## Upload Amazon Lambda Deployment Package (ZIP archive) in to the AWS S3 bucket - -1. Log in to the AWS Management Console. -2. Navigate to the created S3 bucket (e.g., eiconnectortest). -3. Click **Upload**. -4. Select created Amazon Lambda Deployment Package (ZIP archive) and Upload. - -## Create Execution Role - -You need to create an Execution Role by referring to the [Setting up the Amazon Lambda Environment]({{base_path}}/reference/connectors/amazonlambda-connector/setting-up-amazonlambda/) documentation. - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -!!! tip - You may need to update the value of the access key and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -1. Log in to the Micro Integrator CLI tool. - ``` - ./mi remote login - ``` -2. Provide default credentials admin for both username and password. - -3. In order to view the proxy services deployed, execute the following command. - ``` - ./mi api show - ``` -4. Send a POST request using a CURL command or sample client. 
- ``` - curl -v POST -d - '{ - "secretAccessKey":"xxxx", - "accessKeyId":"xxxx", - "region":"us-east-2", - "blocking":"false", - "s3Bucket":"eiconnectortest", - "s3Key":"addingNumbers.zip", - "s3ObjectVersion":"null", - "functionName":"eiLambdaConnector", - "handler":"addingNumbers.addingNumbers", - "role":"arn:aws:iam::610968236798:role/EIConnectorTestRole", - "runtime":"python3.7", - "apiVersionCreateFunction":"2015-03-31" - }' "http://localhost:8290/createFunction" -H "Content-Type:application/json" - ``` -5. See the following message content. - ``` - { - "Description": "", - "TracingConfig": { - "Mode": "PassThrough" - }, - { - "VpcConfig": null, - "RevisionId": "4b6e5fdd-cbfa-4ba2-9f6e-528cccdb333f", - "LastModified": "2020-03-13T05:33:54.900+0000", - "FunctionName": "eiLambdaConnector", - "Runtime": "python3.7", - "Version": "$LATEST", - "LastUpdateStatus": "Successful", - "Layers": null, - "FunctionArn": "arn:aws:lambda:us-east-2:610968236798:function:eiLambdaConnector", - "KMSKeyArn": null, - "MemorySize": 128, - "LastUpdateStatusReason": null, - "DeadLetterConfig": null, - "Timeout": 3, - "Handler": "addingNumbers.addingNumbers", - "CodeSha256": "VAISY9lY/a7DvxZNOSKCj+q/fsbfUaJjKhCsCVG3yzU=", - "Role": "arn:aws:iam::610968236798:role/EIConnectorTestRole", - "MasterArn": null, - "CodeSize": 405, - "State": "Active", - "StateReason": null, - "Environment": null, - "StateReasonCode": null, - "LastUpdateStatusReasonCode": null - } - ``` -6. Log in to the AWS Management Console. - -7. Navigate to the AWS Lambda and Functions tab. - Amazon Lambda Function - -8. Next you need to execute the function. Navigate to **Configure test events**.
    - Configure Test Event - -9. Click **Create new test event**. - Create Test Event - -10. Navigate and select the created test event from the dropdown in the top right corner. Click the **Test** button and execute the test event. - Execute Test Event - -## What's next - -* To customize this example for your own scenario, see [Amazon Lambda Connector Configuration]({{base_path}}/reference/connectors/amazonlambda-connector/amazonlambda-connector-config/) documentation. \ No newline at end of file diff --git a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-overview.md b/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-overview.md deleted file mode 100644 index 6f50b2a0f3..0000000000 --- a/en/docs/reference/connectors/amazonlambda-connector/amazonlambda-connector-overview.md +++ /dev/null @@ -1,33 +0,0 @@ -# Amazon Lambda Connector Overview - -AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. - -The Amazon Lambda Connector allows you to access the REST API of [Amazon Web Service Lambda (AWS Lambda)](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html), which lets you run code without provisioning or managing servers. AWS Lambda is one such serverless compute service. Therefore you do not need to worry about which AWS resources to launch, or how will they manage them. Instead, you need to put the code on Lambda, and it runs. - -To see the Amazon Lambda connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Amazon". - -Amazon Lambda Connector Store - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 1.0.0 | EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0, EI 6.4.0 | - -For older versions, see the details in the connector store. - -## Amazon Lambda Connector documentation - -* **[Amazon Lambda Connector Example]({{base_path}}/reference/connectors/amazonlambda-connector/amazonlambda-connector-example/)**: This example demonstrates how to use Amazon Lambda Connector to use `createFunction` operation. - -* **[Amazon Lambda Connector Reference]({{base_path}}/reference/connectors/amazonlambda-connector/amazonlambda-connector-config/)**: This documentation provides a reference guide for the Amazon Lambda Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Amazon Lambda Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-amazonlambda) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/amazonlambda-connector/setting-up-amazonlambda.md b/en/docs/reference/connectors/amazonlambda-connector/setting-up-amazonlambda.md deleted file mode 100644 index 4fb2d4f3ba..0000000000 --- a/en/docs/reference/connectors/amazonlambda-connector/setting-up-amazonlambda.md +++ /dev/null @@ -1,106 +0,0 @@ -# Setting up the Amazon Lambda Environment - -To use the AmazonLambda service, you must have an AWS account. If you do not already have an account, you are prompted to create one when you sign up. You are not charged for any AWS services that you sign up for unless you use them. 
- -## Signing Up for AWS - -**To sign up for AWS:** - -1. Navigate to [Amazon AWS website](https://aws.amazon.com/) and select **Create an AWS Account**. - > **Note**: If you previously signed in to the AWS Management Console using AWS account root user credentials, select **Sign in to a different account**. If you previously signed in to the console using IAM credentials, choose Sign-in using root account credentials. Then select **Create a new AWS account**. - -2. Follow the online instructions. - -Part of the sign-up procedure involves receiving a phone call and entering a verification code using the phone keypad. AWS will notify you by email when your account is active and available for you to use. - -## Obtaining user credentials - -You can access the Amazon Lambda service using the root user credentials but these credentials allow full access to all resources in the account as you cannot restrict permission for root user credentials. If you want to restrict certain resources and allow controlled access to AWS services then you can create IAM (Identity and Access Management) users in your AWS account for that scenario. - -## Steps to get an AWS Access Key for your AWS root account - - 1. Go to the AWS Management Console. - - AWS Management Console - - 2. Hover over your company name in the right top menu and click "My Security Credentials". - - My security credentials - - 3. Scroll to the "Access Keys" section. - - Create accesskey using root account - - 4. Click on "Create New Access Key". - 5. Copy both the Access Key ID (YOUR_AMAZON_LAMBDA_KEY) and Secret Access Key (YOUR_AMAZON_LAMBDA_SECRET). - -## Steps to get an AWS Access Key for an IAM user account - - 1. Sign in to the AWS Management Console and open the IAM console. - - IAM - - 2. In the navigation pane, choose Users. - - IAM users - - 3. Add a checkmark next to the name of the desired user, and then choose User Actions from the top. - 4. Click on Manage Access Keys. - - Security credentials - - 5. Click on Create Access Key. - - Create access key using IAM - - 6. Click on Show User Security Credentials. Copy and paste the Access Key ID and Secret Access Key values, or click on Download Credentials to download the credentials in a CSV (file). - - Download access key - -## Create Amazon S3 Bucket - - 1. Navigate to the created **AWS** account. - 2. Click **Services** tab on left top of the screen. - 3. Select **Storage** and click **S3**. - - Select amazon services - - 4. Create a bucket. - - Create S3 bucket - -## Create Deployment Package - - Your function's code consists of scripts or compiled programs and their dependencies. When you author functions in the Lambda console or a toolkit, the client creates a ZIP archive of your code called a [deployment package](https://docs.aws.amazon.com/lambda/latest/dg/deployment-package-v2.html). - - This sample explains how to create a sample Python program as a deployment package. - - 1. Create a sample Python function (e.g., lambda_function.py) file (on Linux and MacOS, use your preferred shell and package manager. On Windows 10, you can [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) to get a Windows-integrated version of Ubuntu and Bash). - 2. Create a ZIP archive. - ``` - ~/my-function$ zip function.zip lambda_function.py - - adding: lambda_function.py (deflated 17%) - ``` - 3. Upload the ZIP archive you created into the S3 bucket that you created. 
- - Upload deployment package - -## Create Execution Role - -You can use AWS Identity and Access Management (IAM) to manage access to the Lambda API and resources like functions and layers. For users and applications in your account that use Lambda, you manage permissions in a permissions policy that you can apply to IAM users, groups, or roles. To grant permissions to other accounts or AWS services that use your Lambda resources, you use a policy that applies to the resource itself. - -Creating an [Execution Role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html#lambda-intro-execution-role) in the IAM Console. - - 1. Open the [roles page](https://console.aws.amazon.com/iam/home#/roles) in the IAM console. - - Create IAM roles - - 2. Choose Create role. - 3. Under Common use cases, choose Lambda. - 4. Choose Next: Permissions. - 5. Under Attach permissions policies, choose the AWSLambdaBasicExecutionRole and AWSXrayWriteOnlyAccess managed policies. - 6. Choose Next: Tags. - 7. Choose Next: Review. - 8. For Role name, enter lambda-role.(Please copy and save the created role and role name to configure the connector) - 7. Choose Create role. \ No newline at end of file diff --git a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-config.md b/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-config.md deleted file mode 100644 index 7ed564a200..0000000000 --- a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-config.md +++ /dev/null @@ -1,62 +0,0 @@ -# Setting up the Amazon S3 Environment - -To use the AmazonS3 service, you must have an AWS account. If you do not already have an account, you are prompted to create one when you sign up. You are not charged for any AWS services that you sign up for unless you use them. - -## Signing Up for AWS - -* **To sign up for AWS:** - - 1. Navigate to [Amazon AWS website](https://aws.amazon.com/) and select **Create an AWS Account**. - - > **Note**: If you previously signed in to the AWS Management Console using AWS account root user credentials, select **Sign in to a different account**. If you previously signed in to the console using IAM credentials, choose Sign-in using root account credentials. Then select **Create a new AWS account**. - - 2. Follow the online instructions. - -Part of the sign-up procedure involves receiving a phone call and entering a verification code using the phone keypad. AWS will notify you by email when your account is active and available for you to use. - -## Obtaining user credentials - -You can access the Amazon S3 service using the root user credentials but these credentials allow full access to all resources in the account as you cannot restrict permission for root user credentials. If you want to restrict certain resources and allow controlled access to AWS services then you can create IAM (Identity and Access Management) users in your AWS account for that scenario. - -## Steps to get an AWS Access Key for your AWS root account - - 1. Go to the AWS Management Console. - - AWS Management Console - - 2. Hover over your company name in the right top menu and click "My Security Credentials". - - My security credentials - - 3. Scroll to the "Access Keys" section. - - Create accesskey using root account - - 4. Click on "Create New Access Key". - 5. Copy both the Access Key ID (YOUR_AMAZON_S3_KEY) and Secret Access Key (YOUR_AMAZON_S3_SECRET). - -## Steps to get an AWS Access Key for an IAM user account - - 1. 
Sign in to the AWS Management Console and open the IAM console. - - IAM - - 2. In the navigation pane, choose Users. - - IAM users - - 3. Add a checkmark next to the name of the desired user, and then choose User Actions from the top. - 4. Click on Manage Access Keys. - - Security credentials - - 5. Click on Create Access Key. - - Create access key using IAM - - 6. Click on Show User Security Credentials. Copy and paste the Access Key ID and Secret Access Key values, or click on Download Credentials to download the credentials in a CSV (file). - - Download access key - - -The Access Key ID (e.g., AKIAJA3J6GE646JWVA9C) and Secret Access Key (e.g., H/P/G3Tey1fQOKPAU1GBbl/NhL/WpSaEvxbvUlp4) will be required to configure the Amazon S3 connector. You can manage S3 buckets logging into S3 console. \ No newline at end of file diff --git a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-example.md b/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-example.md deleted file mode 100644 index 70f1291d3b..0000000000 --- a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-example.md +++ /dev/null @@ -1,326 +0,0 @@ -# Amazon S3 Connector Example - -The AmazonS3 Connector allows you to access the REST API of Amazon Simple Storage Service (Amazon S3). - -## What you'll build - -This example depicts how to use AmazonS3 connector to: - -1. Create a S3 bucket (a location for storing your data) in Amazon cloud. -2. Upload a message into the created bucket as a text file. -3. Retrieve created text file back and convert into a message in the integration runtime. - -All three operations are exposed via an API. The API with the context `/s3connector` has three resources: - -* `/createbucket` - Once invoked, it will create a bucket in Amazon with the specified name -* `/addobject` - The incoming message will be stored into the specified bucket with the specified name -* `/info` - Once invoked, it will read the specified file from the specified bucket and respond with the content of the file - -Following diagram shows the overall solution. The user creates a bucket, stores some message into the bucket, and then receives it back. - -To invoke each operation, the user uses the same API. - -Amazon S3 use case - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Setting up the environment - -Please follow the steps mentioned at [Setting up Amazon S3]({{base_path}}/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-config) document in order to create a Amazon S3 account and obtain credentials you need to access the Amazon APIs. Keep them saved to be used in the next steps. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and import AmazonS3 connector into it. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -1. Right click on the created Integration Project and select, -> **New** -> **Rest API** to create the REST API. - Adding a Rest API - -2. Specify the API name as `S3ConnectorTestAPI` and API context as `/s3connector`. You can go to the source view of the XML configuration file of the API and copy the following configuration. 
- -```xml - - - - - - - - - - - - - - AKICJA4J6GE6D6JSVB7B - H/P/H6Tey2fQODHAU1JBbl/NhL/WpSaEkebvLlp4 - us-east-2 - {$ctx:REST_METHOD} - false - s3.us-east-2.amazonaws.com - true - {$ctx:bucketName} - 100-continue - public-read - - - {$ctx:bucketUrl} - {$ctx:bucketRegion} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:region} - {$ctx:methodType} - {$ctx:ContentType} - {$ctx:addCharset} - {$ctx:host} - {$ctx:isXAmzDate} - 100-continue - - - {$ctx:bucketUrl} - {$ctx:objectName} - - - - - - - - - - - - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:region} - {$ctx:methodType} - {$ctx:ContentType} - {$ctx:addCharset} - {$ctx:host} - {$ctx:isXAmzDate} - 100-continue - - - - $1 - - - - - - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:uploadId} - {$ctx:partNumber} - - - - - - - - - - 1 - $1 - - - - - - - - - - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentType} - {$ctx:addCharset} - {$ctx:isXAmzDate} - {$ctx:bucketName} - - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:uploadId} - {//partDetails/*} - - - - - - - - - - - - - - - - - - AKIAJA3J6GE646JSVA7A - H/P/G3Tey1fQOKPAU1GBbl/NhL/WpSaEvxbvUlp4 - us-east-2 - GET - {$ctx:contentType} - false - {$ctx:host} - true - - - {$ctx:bucketUrl} - {$ctx:objectName} - - - - - - - -``` - -**Note**: - -* As `accessKeyId` use the access key obtained from Amazon S3 setup and update the above API configuration. -* As `secretAccessKey` use the secret key obtained from Amazon S3 setup and update the above API configuration. -* Note that When you configure the `addobject` resource, there are three parts to it. You need to use three operations of the connector in order. - * initMultipartUpload - initialize the upload to the bucket. In the response of this operation you will receive generated `uploadId` by amazon S3 - * uploadPart - upload message part. There can be multiple parts to the same object. When you invoke the operation, feed `uploadId` and the correct `partNumber`. - * completeMultipartUpload - once all parts are done uploading, call this operation. It will add up all the parts and create the object in the requested bucket. -* Note that `region` at `host` and `bucketUrl` properties are hard coded. Please change them as per the requirement. -* For more information please refer the [reference guide]({{base_path}}/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference) for Amazon S3 connector. - -Now we can export the imported connector and the API into a single CAR application. CAR application is the one we are going to deploy to server runtime. - -{!includes/reference/connectors/exporting-artifacts.md!} - -Now the exported CApp can be deployed in the integration runtime so that we can run it and test. - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -!!! tip - You may need to update the value of the access key and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -We can use Curl or Postman to try the API. The testing steps are provided for curl. Steps for Postman should be straightforward and can be derived from the curl requests. - -### Creating a bucket in Amazon S3 - -1. Create a file called data.xml with the following content. 
Note that the bucket region is `us-east-2`. If you need to create the bucket in a different region, modify the hard coded region of the API configuration accordingly. - ``` - - wso2engineers - us-east-2 - - ``` -2. Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - ``` - curl -H "Content-Type: application/xml" --request PUT --data @data.xml http://127.0.0.1:8290/s3connector/createbucket - ``` -**Expected Response**: - - You should receive 200OK response. Please navigate to Amazon AWS S3 console[s3.console.aws.amazon.com] and see if a bucket called `wso2engineers` is created. If you tried to create a bucket with a name that already exists, it will reply back with a message indicating the conflict. - - Creating Amazon S3 bucket - -### Post a message into Amazon S3 bucket - -1. Create a file called data.xml with the following content. - ``` - - Julian.txt - wso2engineers - Julian Garfield, Software Engineer, Integration Group - - ``` -2. Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - ``` - curl -H "Content-Type: application/xml" --request POST --data @data.xml http://127.0.0.1:8290/s3connector/addobject - ``` -**Expected Response**: - You will receive a response like below containing the details of the object created. - ``` - - - http://wso2engineers.s3.amazonaws.com/Julian.txt - wso2engineers - Julian.txt - "2b492c33895569c5c06cd7942f42914f-1" - - ``` - Navigate to AWS S3 console and click on the bucket `wso2engineers`. You will note that a file has been created with the name `Julian.txt`. - Upload object to Amazon S3 bucket - -### Read objects from Amazon S3 bucket - -Now let us read the information on `wso2engineers` that we stored in the Amazon S3 bucket. - -1. Create a file called data.xml with the following content. It specifies which bucket to read from and what the filename is. This example assumes that the object is stored at root level inside the bucket. You can also read a object stored in a folder inside the bucket. - ``` - - Julian.txt - wso2engineers - - ``` -2. Invoke the API as shown below using the curl command. - ``` - curl -H "Content-Type: application/xml" --request POST --data @data.xml http://127.0.0.1:8290/s3connector/info - ``` -**Expected Response**: - You will receive a response like below containing the details of the engineer requested. - - ``` - Julian Garfield, Software Engineer, Integration Group - ``` - -In this example Amazon S3 connector is used to perform operations with Amazon S3 storage. You can receive details of the errors that occur when invoking S3 operations using the S3 responses itself. Please read the [Amazon S3 connector reference guide]({{base_path}}/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference) to learn more about the operations you can perform with the Amazon S3 connector. diff --git a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference.md b/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference.md deleted file mode 100644 index 5fc2817bb2..0000000000 --- a/en/docs/reference/connectors/amazons3-connector/1.x/amazons3-connector-1.x-reference.md +++ /dev/null @@ -1,4431 +0,0 @@ -# Amazon S3 Connector Reference - -The following operations allow you to work with the Amazon S3 Connector. Click an operation name to see parameter details and samples on how to use it. 
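Every operation on this page follows the same two-step pattern: call `init` first to set the connection properties, and then call the operation itself within the same sequence. Below is a minimal sketch of that pattern, assuming the credentials and region arrive as message context properties; the sequence name and property wiring are illustrative, not part of the connector.

```xml
<sequence name="s3ListBucketsSequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Initialize the connection before any Amazon S3 operation -->
    <amazons3.init>
        <accessKeyId>{$ctx:accessKeyId}</accessKeyId>
        <secretAccessKey>{$ctx:secretAccessKey}</secretAccessKey>
        <region>{$ctx:region}</region>
        <methodType>GET</methodType>
        <contentType>application/xml</contentType>
        <isXAmzDate>true</isXAmzDate>
        <host>s3.amazonaws.com</host>
    </amazons3.init>
    <!-- List all buckets owned by the authenticated sender -->
    <amazons3.getBuckets>
        <apiUrl>https://s3.amazonaws.com</apiUrl>
        <region>{$ctx:region}</region>
    </amazons3.getBuckets>
    <respond/>
</sequence>
```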
- ---- - -## Initialize the connector - -To use the Amazon S3 connector, add the element in your configuration before carrying out any Amazon S3 operations. This Amazon S3 configuration authenticates with Amazon S3 by specifying the AWS access key ID and secret access key ID, which are used for every operation. The signature is used with every request and thus differs based on the request the user makes. - -??? note "init" - The init operation is used to initialize the connection to Amazon S3. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | methodType | Type of the HTTP method. | Yes |
    | contentLength | Length of the message without the headers, according to RFC 2616. | Yes |
    | contentType | The content type of the resource, in case the request content is in the body. | Yes |
    | addCharset | Whether to add the character set to the Content-Type header. Set to true to add the charset to the Content-Type header of POST and HEAD methods. | Yes |
    | host | For path-style requests, the value is s3.amazonaws.com. For virtual-style requests, the value is BucketName.s3.amazonaws.com. | Yes |
    | isXAmzDate | The current date and time according to the requester. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | blocking | Helps the connector perform blocking invocations to Amazon S3. | Yes |
    | privateKeyFilePath | Path of the AWS private key file. | Yes |
    | keyPairId | Key pair ID of AWS CloudFront. | Yes |
    | policyType | Policy for the URL signing. It can be a custom or canned policy. | Yes |
    | urlSign | Specifies whether to create a signed URL. It can be true or false. | Yes |
    | dateLessThan | The object can be accessed only before this specific date. | Yes |
    | dateGreaterThan | The object can be accessed only after this specific date. | Yes |
    | ipAddress | IP address for creating the policy. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message without the headers, according to RFC 1864. | Yes |
    | expect | This header can be used only if a body is sent, so that the request body is not sent until an acknowledgment is received. | Yes |
    | xAmzSecurityToken | The security token, based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | xAmzAcl | Sets the ACL of the bucket using the specified canned ACL. | Yes |
    | xAmzGrantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Yes |
    | xAmzGrantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Yes |
    | xAmzGrantReadAcp | Allows the specified grantee or grantees to read the bucket ACL. | Yes |
    | xAmzGrantWriteAcp | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Yes |
    | xAmzGrantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Yes |
    | xAmzMeta | Field names prefixed with x-amz-meta- contain user-specified metadata. | Yes |
    | xAmzServeEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates an object. | Yes |
    | xAmzStorageClass | Storage class to use for storing the object. | Yes |
    | xAmzWebsiteLocation | Amazon S3 stores the value of this header in the object metadata. | Yes |
    | xAmzMfa | The value is the concatenation of the authentication device's serial number, a space, and the value displayed on your authentication device. | Yes |
    | xAmzCopySource | The name of the source bucket and the key name of the source object, separated by a slash. | Yes |
    | xAmzCopySourceRange | The range of bytes to copy from the source object. | Yes |
    | xAmzMetadataDirective | Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request. | Yes |
    | xAmzCopySourceIfMatch | Copies the object if its entity tag (ETag) matches the specified tag. | Yes |
    | xAmzCopySourceIfNoneMatch | Copies the object if its entity tag (ETag) is different from the specified ETag. | Yes |
    | xAmzCopySourceIfUnmodifiedSince | Copies the object if it hasn't been modified since the specified time. | Yes |
    | xAmzCopySourceIfModifiedSince | Copies the object if it has been modified since the specified time. | Yes |
    | xAmzServerSideEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Yes |
    - - > **Note**: You need to pass the bucketName within init configuration only if you use the bucketURL in path-style (e.g., BucketName.s3.amazonaws.com). For the virtual-style bucketUrl (e.g., s3.amazonaws.com) you should not pass the bucketName. - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:region} - {$ctx:contentType} - {$ctx:addCharset} - {$ctx:bucketName} - {$ctx:isXAmzDate} - {$ctx:expect} - {$ctx:contentMD5} - {$ctx:xAmzSecurityToken} - {$ctx:contentLength} - {$ctx:host} - {$ctx:xAmzAcl} - {$ctx:xAmzGrantRead} - {$ctx:xAmzGrantWrite} - {$ctx:xAmzGrantReadAcp} - {$ctx:xAmzGrantWriteAcp} - {$ctx:xAmzGrantFullControl} - {$ctx:uriRemainder} - {$ctx:xAmzCopySource} - {$ctx:xAmzCopySourceRange} - {$ctx:xAmzCopySourceIfMatch} - {$ctx:xAmzCopySourceIfNoneMatch} - {$ctx:xAmzCopySourceIfUnmodifiedSince} - {$ctx:xAmzCopySourceIfModifiedSince} - {$ctx:cacheControl} - {$ctx:contentEncoding} - {$ctx:expires} - {$ctx:xAmzMeta} - {$ctx:xAmzServeEncryption} - {$ctx:xAmzStorageClass} - {$ctx:xAmzWebsiteLocation} - - ``` - ---- - -### Buckets - -??? note "getBuckets" - The getBuckets implementation of the GET operation returns a list of all buckets owned by the authenticated sender of the request. To authenticate a request, use a valid AWS Access Key ID that is registered with Amazon S3. Anonymous requests cannot list buckets, and a user cannot list buckets that were not created by that particular user. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | apiUrl | Amazon S3 API URL, e.g., http://s3.amazonaws.com | Yes |
    | region | Amazon S3 region, e.g., us-east-1 | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:apiUrl} - {$ctx:region} - - ``` - - **Sample request** - - ```xml - - AKIAXXXXXXXXXXQM7G5EA - qHZBBzXXXXXXXXXXDYQc4oMQMnAOj+34XXXXXXXXXXO2s - GET - - application/xml - - 100-continue - s3.amazonaws.com - us-east-1 - true - - https://s3.amazonaws.com - - ``` - - -??? note "createBucket" - The createBucket implementation of the PUT operation creates a new bucket. To create a bucket, the user should be registered with Amazon S3 and have a valid AWS Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, the user becomes the owner of the bucket. Not every string is an acceptable bucket name. For information on bucket naming restrictions, see [Working with Amazon S3 Buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html). By default, the bucket is created in the US Standard region. The user can optionally specify a region in the request body. For example, if the user resides in Europe, the user will probably find it advantageous to create buckets in the EU (Ireland) region. For more information, see [How to Select a Region for Your Buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro). See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | bucketRegion | Region for the created bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:bucketRegion} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXXXXXXA - qHZXXXXXXQc4oMQMnAOj+340XXxO2s - us-east-2 - PUT - 256 - application/xml - - - s3.us-east-2.amazonaws.com - true - - signv4test - us-east-2 - http://s3.us-east-2.amazonaws.com/signv4test - - - index2.html - - - Error2.html - - - - - docs/ - - - documents/ - - - - - images/ - - - documents/ - - - - - - ``` - - -??? note "createBucketWebsiteConfiguration" - Sets the configuration of the website that is specified in the website subresource. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | websiteConfig | Website configuration information. For information on the elements you use in the request to specify the website configuration, see [here](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTwebsite.html). | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:websiteConfig} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXXXXXXA - qHZXXXXXXQc4oMQMnAOj+340XXxO2s - us-east-2 - PUT - 256 - application/xml - - - s3.us-east-2.amazonaws.com - true - - signv4test - http://s3.us-east-2.amazonaws.com/signv4test - - - index2.html - - - Error2.html - - - - - docs/ - - - documents/ - - - - - images/ - - - documents/ - - - - - - ``` - -??? note "createBucketPolicy" - The createBucketPolicy implementation of the PUT operation adds or replaces a policy on a bucket. If the bucket already has a policy, the one in this request completely replaces it. To perform this operation, you must be the bucket owner. - - If you are not the bucket owner but have PutBucketPolicy permissions on the bucket, Amazon S3 returns a 405 Method Not Allowed. In all other cases, for a PUT bucket policy request that is not from the bucket owner, Amazon S3 returns 403 Access Denied. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to. - - When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTpolicy.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | bucketPolicy | Policy of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:bucketPolicy} - - ``` - - **Sample request** - - ```json - { - "accessKeyId": "AKXXXXXXXXX5EAS", - "secretAccessKey": "qHXXXXXXNMDYadDdsQMnAOj+3XXXXPs", - "region":"us-east-2", - "methodType": "PUT", - "contentType": "application/json", - "bucketName": "signv4test", - "isXAmzDate": "true", - "bucketUrl": "http://s3.us-east-2.amazonaws.com/signv4test", - "contentMD5":"", - "xAmzSecurityToken":"", - "host":"s3.us-east-2.amazonaws.com", - "expect":"", - "contentLength":"", - "bucketPolicy": { - "Version":"2012-10-17", - "Statement":[{ - "Sid":"AddPerm", - "Effect":"Allow", - "Principal": { - "AWS": "*" - }, - "Action":["s3:GetObject"], - "Resource":["arn:aws:s3:::signv4test/*"] - }] - } - } - ``` - -??? note "createBucketACL" - The createBucketACL operation uses the ACL sub-resource to set the permissions on an existing bucket using access control lists (ACL). See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | ownerId | The ID of the bucket owner. | Yes |
    | ownerDisplayName | The screen name of the bucket owner. | Yes |
    | accessControlList | Container for ACL information, which includes the following:<br>• Grant: container for the grantee and permissions.<br>&nbsp;&nbsp;• Grantee: the subject whose permissions are being set, with ID (ID of the grantee) and DisplayName (screen name of the grantee).<br>&nbsp;&nbsp;• Permission: specifies the permission to give to the grantee. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:ownerId} - {$ctx:ownerDisplayName} - {$ctx:accessControlList} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXXXXXG5EA - qHZXXXXXXXDYQc4oMQXXXOj+340pXXX23s - PUT - application/xml - false - - - signv4test - true - - s3.us-east-2.amazonaws.com - us-east-2 - - http://s3.us-east-2.amazonaws.com/signv4test - 9a48e6b16816cc75df306d35bb5d0bd0778b61fbf49b8ef4892143197c84a867 - admin+aws+connectors+secondary - - - - - 9a48e6b16816cc75df306d35bb5d0bd0778b61fbf49b8ef4892143197c84a867 - admin+aws+connectors+secondary - - FULL_CONTROL - - - - http://acs.amazonaws.com/groups/global/AllUsers - - READ - - - - - ``` - -??? note "createBucketLifecycle" - The createBucketLifecycle operation uses the acl subresource to set the permissions on an existing bucket using access control lists (ACL). See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlifecycle.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | lifecycleConfiguration | Container for lifecycle rules, which includes the following:<br>• Rule: container for a lifecycle rule.<br>&nbsp;&nbsp;• ID: unique identifier for the rule. The value cannot be longer than 255 characters.<br>&nbsp;&nbsp;• Prefix: object key prefix identifying one or more objects to which the rule applies.<br>&nbsp;&nbsp;• Status: if Enabled, Amazon S3 executes the rule as scheduled. If Disabled, Amazon S3 ignores the rule.<br>&nbsp;&nbsp;• Transition: specifies a period in the objects' lifetime when Amazon S3 should transition them to the STANDARD_IA or the GLACIER storage class, with Days (number of days after object creation when the action takes effect) and StorageClass (the Amazon S3 storage class to which you want the object to transition).<br>&nbsp;&nbsp;• Expiration: specifies a period in an object's lifetime when Amazon S3 should take the appropriate expiration action, with Days (number of days after object creation when the action takes effect). | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:lifecycleConfiguration} - - ``` - - **Sample request** - - ```xml - - AKXXXXXXXXXXX5EA - qHXXXXXXXXXXXqQc4oMQMnAOj+33XXXXXDPO2s - us-east-2 - PUT - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - - - id1 - documents/ - Enabled - - 30 - GLACIER - - - - id2 - logs/ - Enabled - - 365 - - - - - ``` - -??? note "createBucketReplication" - The createBucketReplication operation uses the acl subresource to set the permissions on an existing bucket using access control lists (ACL). See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) for more information. - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | role | Amazon Resource Name (ARN) of an IAM role for Amazon S3 to assume when replicating the objects. | Yes |
    | rules | Container for replication rules, which includes the following:<br>• Rule: container for information about a particular replication rule.<br>&nbsp;&nbsp;• ID: unique identifier for the rule. The value cannot be longer than 255 characters.<br>&nbsp;&nbsp;• Prefix: object key prefix identifying one or more objects to which the rule applies.<br>&nbsp;&nbsp;• Status: the rule is ignored if the status is not Enabled.<br>&nbsp;&nbsp;• Destination: container for destination information, with Bucket (Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to store replicas of the object identified by the rule). | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:role} - {$ctx:bucketUrl} - {$ctx:rules} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - PUT - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - arn:aws:iam::35667example:role/CrossRegionReplicationRoleForS3 - - - id1 - documents/ - Enabled - - arn:aws:s3:::signv4testq23aa1 - - - - - ``` - -??? note "createBucketTagging" - The createBucketTagging operation uses the tagging subresource to add a set of tags to an existing bucket. Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTtagging.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | tagSet | Container for a set of tags, which includes the following:<br>• Tag: container for tag information, with Key (name of the tag) and Value (value of the tag). | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:tagSet} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - PUT - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - - - Project - Project One - - - User - jsmith - - - - ``` - -??? note "createBucketRequestPayment" - The createBucketRequestPayment operation uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTrequestPaymentPUT.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | payer | Specifies who pays for the download and request fees. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:payer} - - ``` - - **Sample request** - - ```xml - - AKXXXXXXXXXXX5EA - qHXXXXXXXXXXXqQc4oMQMnAOj+33XXXXXDPO2s - us-east-2 - PUT - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - Requester - - ``` - -??? note "createBucketVersioning" - The createBucketVersioning operation uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html) for more information. - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | status | Sets the versioning state of the bucket. | Yes |
    | mfaDelete | Specifies whether MFA Delete is enabled in the bucket versioning configuration. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:status} - {$ctx:mfaDelete} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - PUT - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - Enabled - - ``` - -??? note "deleteBucket" - The deleteBucket implementation of the DELETE operation deletes the bucket named in the URI. All objects (including all object versions and Delete Markers) in the bucket must be deleted before the bucket itself can be deleted. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZM7SDFGJ7TRO6KSFSQ - asAX8CJoDKzeOgfdgd0Ve5dMCFk4STUFDdfgdgRHkGX6m0CcY - DELETE - us-east-2 - - application/xml - - - true - - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "deleteBucketPolicy" - The deleteBucketPolicy implementation of the DELETE operation deletes the policy on a specified bucket. To use the operation, you must have DeletePolicy permissions on the specified bucket and be the bucket owner. If there are no DeletePolicy permissions, Amazon S3 returns a 403 Access Denied error. If there is the correct permission, but you are not the bucket owner, Amazon S3 returns a 405 Method Not Allowed error. If the bucket does not have a policy, Amazon S3 returns a 204 No Content error. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAQEIGURZSDFDM7GJ7TRO6KQ - asAX8CJoDvcvKzeOd0Ve5dMjkjCFk4STUFDRHkGX6m0CcY - DELETE - application/xml - 256 - - true - - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "deleteBucketCors" - The deleteBucketCors operation deletes the CORS configuration information set for the bucket. To use this operation, you must have permission to perform the s3:PutCORSConfiguration action. The bucket owner has this permission by default and can grant this permission to others. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEcors.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZMSDFD7GJ7TRO6KQDFD - asAX8CJoDKzeOd0Ve5dfgdgdfMCFk4STUFDRHSFSDkGX6m0CcY - PUT - application/xml - true - 0 - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - - public-read - - - - - - - ``` - -??? note "deleteBucketLifecycle" - The deleteBucketLifecycle operation deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration. To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETElifecycle.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZMSDFD7GJ7TRO6KQDFD - asAX8CJoDKzeOd0Ve5dfgdgdfMCFk4STUFDRHSFSDkGX6m0CcY - PUT - application/xml - true - 0 - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - - public-read - - - - - - - ``` - -??? note "deleteBucketReplication" - The deleteBucketReplication operation deletes the replication sub-resource associated with the specified bucket. This operation requires permission for the s3:DeleteReplicationConfiguration action. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEreplication.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZMSDFD7GJ7TRO6KQDFD - asAX8CJoDKzeOd0Ve5dfgdgdfMCFk4STUFDRHSFSDkGX6m0CcY - PUT - application/xml - true - 0 - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - - public-read - - - - - - - ``` - -??? note "deleteBucketTagging" - The deleteBucketTagging operation uses the tagging sub-resource to remove a tag set from the specified bucket. To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEtagging.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZMSDFD7GJ7TRO6KQDFD - asAX8CJoDKzeOd0Ve5dfgdgdfMCFk4STUFDRHSFSDkGX6m0CcY - PUT - application/xml - true - 0 - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - - public-read - - - - - - - ``` - -??? note "deleteBucketWebsiteConfiguration" - The deleteBucketWebsiteConfiguration operation removes the website configuration for a bucket. Amazon S3 returns a 207 OK response upon successfully deleting a website configuration on the specified bucket. It will give a 200 response if the website configuration you are trying to delete does not exist on the bucket, and a 404 response if the bucket itself does not exist. This DELETE operation requires the S3: DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3: DeleteBucketWebsite permission. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEwebsite.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZM7GDFDJ7TRO6KQDFD - asAdfsX8CJoDKzeOd0Ve5dMCdfsdFk4STUFDRHkdsfGX6m0CcY - DELETE - application/xml - - us-east-2 - signv4test - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - - true - - - - ``` - -??? note "getObjectsInBucket" - The getObjectsInBucket implementation of the GET operation returns some or all (up to 1000) of the objects in a bucket. The request parameters act as selection criteria to return a subset of the objects in a bucket. To use this implementation of the operation, the user must have READ access to the bucket. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html)) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
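    The `prefix` and `delimiter` parameters combine to emulate a folder-style listing: `prefix` narrows the result to keys under a given path, and a delimiter of `/` collapses anything nested more deeply into the `CommonPrefixes` element. Below is a minimal sketch of such a call; the bucket URL property and the `images/` prefix are illustrative.

    ```xml
    <!-- List up to 100 keys directly under images/, grouping deeper keys into CommonPrefixes -->
    <amazons3.getObjectsInBucket>
        <bucketUrl>{$ctx:bucketUrl}</bucketUrl>
        <prefix>images/</prefix>
        <delimiter>/</delimiter>
        <maxKeys>100</maxKeys>
    </amazons3.getObjectsInBucket>
    ```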
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | delimiter | A delimiter is a character used to group keys. All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If the prefix parameter is not specified, the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response. | Optional |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. An object key can contain any Unicode character. However, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, this parameter can be added to request Amazon S3 to encode the keys in the response. | Optional |
    | marker | Specifies the key to start with when listing objects in a bucket. Amazon S3 lists objects in alphabetical order. | Optional |
    | maxKeys | Sets the maximum number of keys returned in the response body. The response might contain fewer keys but will never contain more. | Optional |
    | prefix | Limits the response to keys that begin with the specified prefix. | Optional |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:delimiter} - {$ctx:encodingType} - {$ctx:marker} - {$ctx:maxKeys} - {$ctx:prefix} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - t - obj - - 3 - url - images - - ``` - -??? note "getBucketLifeCycle" - The getBucketLifeCycle operation returns the lifecycle configuration information set on the bucket. To use this operation, permissions should be given to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant this permission to others. There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlifecycle.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "createBucketCors" - The createBucketCors operation returns the CORS configuration information set for the bucket. To use this operation, you must have permission to perform the s3:CreateBucketCORS action. By default, the bucket owner has this permission and can grant it to others. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTcors.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | corsConfiguration | Container for up to 100 CORSRules elements. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:corsConfiguration} - - ``` - - **Sample request** - - ```xml - - AKXXXXXXXXXXX5EA - qHXXXXXXXXXXXqQc4oMQMnAOj+33XXXXXDPO2s - us-east-2 - PUT - 256 - application/xml - - s3.us-east-2.amazonaws.com - true - - signv4test - http://s3.us-east-2.amazonaws.com/signv4test - - - * - GET - * - 3000 - - - - ``` - -??? note "getBucketCors" - The getBucketCors operation returns the CORS configuration information set for the bucket. To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETcors.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - 256 - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketLocation" - The getBucketLocation operation returns the lifecycle configuration information set on the bucket. To use this operation, you must be the bucket owner. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketLogging" - The getBucketLogging operation returns the logging status of a bucket and the permissions users have to view and modify that status. To use this operation, you must be the bucket owner. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlogging.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketNotification" - The getBucketNotification operation returns the lifecycle configuration information set on the bucket. To use this operation, you must be the bucket owner to read the notification configuration of a bucket. However, the bucket owner can use a bucket policy to grant permission to other users to read this configuration with the s3:GetBucketNotification permission. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETnotification.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketTagging" - The getBucketTagging operation returns the lifecycle configuration information set on the bucket. To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETtagging.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketReplication" - The getBucketReplication operation returns the lifecycle configuration information set on the bucket. To use this operation, you must have permission to perform the s3:GetReplicationConfiguration action. For more information about permissions, go to Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETreplication.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketPolicy" - The getBucketPolicy implementation of the GET operation returns the policy of a specified bucket. To use this operation, the user must have GetPolicy permissions on the specified bucket, and the user must be the bucket owner. If the user does not have GetPolicy permissions, Amazon S3 returns a 403 Access Denied error. If the user has correct permissions, but the user is not the bucket owner, Amazon S3 returns a 405 Method Not Allowed error. If the bucket does not have a policy, Amazon S3 returns a 404 Policy Not found error. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETpolicy.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketObjectVersions" - The getBucketObjectVersions operation lists metadata about all of the versions of objects in a bucket. Request parameters can be used as selection criteria to return metadata about a subset of all the object versions. To use this operation, the user must have READ access to the bucket. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETVersion.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | delimiter | A delimiter is a character used to group keys. | Optional |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. | Optional |
    | keyMarker | Specifies the key in the bucket that you want to start listing from. See also versionIdMarker below. | Optional |
    | maxKeys | Sets the maximum number of keys returned in the response body. | Optional |
    | prefix | Limits the response to keys that begin with the specified prefix. | Optional |
    | versionIdMarker | Specifies the object version you want to start listing from. | Optional |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:delimiter} - {$ctx:encodingType} - {$ctx:keyMarker} - {$ctx:maxKeys} - {$ctx:prefix} - {$ctx:versionIdMarker} - - ``` - - **Sample request** - - ```xml - - AKXXXXXS3KJA - ieXXHXXTVh/12hL2VxxJJS - GET - application/xml - 256 - - testkeerthu1234 - true - - - - http://s3.amazonaws.com/testkeerthu1234 - / - - - 3 - images - - - ``` - -??? note "getBucketRequestPayment" - The getBucketRequestPayment implementation of the GET operation returns the request payment configuration of a bucket. To use this operation, the user must be the bucket owner. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETpolicy.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketVersioning" - The getBucketVersioning implementation of the GET operation returns the versioning state of a bucket. To retrieve the versioning state of a bucket, the user must be the bucket owner. This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETpolicy.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getWebSiteConfiguration" - The getWebSiteConfiguration implementation of the GET operation returns the website configuration associated with a bucket. To host the website on Amazon S3, a bucket can be configured as a website by adding a website configuration. This GET operation requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETwebsite.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "getBucketACL" - The getBucketACL implementation of the GET operation returns the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, the user must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header. When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETacl.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - 256 - - signv4test - true - - s3.us-east-2.amazonaws.com - - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "checkBucketPermission" - The checkBucketPermission operation determines whether a bucket exists and you have permission to access it. The operation returns a 200 OK if the bucket exists and you have permission to access it. Otherwise, the operation might return responses such as 404 Not Found and 403 Forbidden. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/bucket-permissions-check.html) for more information. - - - - - - - - - - - -
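    Because the outcome of this operation is conveyed purely through the HTTP status code, a caller typically branches on that code after invoking it. Below is a hedged sketch using the standard Synapse filter mediator; the operation call and the logging inside each branch are illustrative, not part of the connector.

    ```xml
    <amazons3.checkBucketPermission>
        <bucketUrl>{$ctx:bucketUrl}</bucketUrl>
    </amazons3.checkBucketPermission>
    <!-- Branch on the HTTP status code returned by Amazon S3 -->
    <filter source="get-property('axis2', 'HTTP_SC')" regex="200">
        <then>
            <log level="custom">
                <property name="bucketAccess" value="bucket exists and is accessible"/>
            </log>
        </then>
        <else>
            <log level="custom">
                <property name="bucketAccess" value="bucket missing or access denied (403/404)"/>
            </log>
        </else>
    </filter>
    ```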
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKXXXXXXXXXXX5EA - qHXXXXXXXXXXXqQc4oMQMnAOj+33XXXXXDPO2s - us-east-2 - HEAD - application/xml - - - - s3.us-east-2.amazonaws.com - - signv4test - true - http://s3.us-east-2.amazonaws.com/signv4test - - ``` - -??? note "setBucketACL" - The setBucketACL implementation of the PUT operation sets the permissions on an existing bucket using access control lists (ACL). You set the permissions by specifying the ACL in the request body. When calling init before this operation, the following headers should be removed: xAmzAcl, x AmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html) for more information. - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | accessControlPolicy | Contains the following elements that set the ACL permissions for an object per grantee:<br>• Owner: container for the bucket owner's ID and display name, with ID (ID of the bucket owner, or the ID of the grantee) and DisplayName (screen name of the bucket owner).<br>• AccessControlList: container for the grants.<br>&nbsp;&nbsp;• Grant: container for the grantee and the permissions of this grant.<br>&nbsp;&nbsp;&nbsp;&nbsp;• Grantee: the subject whose permissions are being set, with URI (granting permission to a predefined Amazon S3 group).<br>&nbsp;&nbsp;&nbsp;&nbsp;• Permission: specifies the permission given to the grantee. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:accessControlPolicy} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - PUT - 2000 - application/xml - - - s3.us-east-2.amazonaws.com - true - - signv4test - http://s3.us-east-2.amazonaws.com/signv4test - - - 9a48e6b16816cc75df306d35bb5d0bd0778b61fbf49b8ef4892143197c84a867 - admin+aws+connectors+secondary - - - - - 9a48e6b16816cc75df306d35bb5d0bd0778b61fbf49b8ef4892143197c84a867 - admin+aws+connectors+secondary - - FULL_CONTROL - - - - http://acs.amazonaws.com/groups/global/AllUsers - - READ - - - - http://acs.amazonaws.com/groups/s3/LogDelivery - - WRITE - - - - - ``` - -??? note "headBucket" - The headBucket operation is useful to determine if a bucket exists and you have permission to access it. The operation returns a 200 OK if the bucket exists and you have permission to access it. Otherwise, the operation might return responses such as 404 Not Found and 403 Forbidden. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html) for more information. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-west-2 - HEAD - application/xml - false - - - 1513162931643testconbkt2 - true - - s3-us-west-2.amazonaws.com - - http://s3-us-west-2.amazonaws.com/1513162931643testconbkt2 - - ``` - -??? note "listMultipartUploads" - The listMultipartUploads operation lists in-progress multipart uploads. A multipart upload is in progress when it has been initiated using the Initiate Multipart Upload request but has not yet been completed or aborted. It returns a default value of 1000 multipart uploads in the response. The number of uploads can be further limited in a response by specifying the maxUploads property. If additional multipart uploads satisfy the list criteria, the response will contain an "IsTruncated" element with the value "true". To list the additional multipart uploads, use the keyMarker and uploadIdMarker request parameters. - - In the response, the uploads are sorted by key. If the application has initiated more than one multipart upload using the same object key, uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time. - - When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
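    Because at most 1,000 uploads are returned per call, paging is driven by the response itself: while the response's `IsTruncated` element is `true`, copy its `NextKeyMarker` and `NextUploadIdMarker` values into the `keyMarker` and `uploadIdMarker` parameters of the next call. Below is a sketch of such a follow-up call; the marker values shown are illustrative.

    ```xml
    <!-- Fetch the next page of in-progress multipart uploads; the marker values
         would be taken from NextKeyMarker / NextUploadIdMarker in the previous
         response -->
    <amazons3.listMultipartUploads>
        <bucketUrl>{$ctx:bucketUrl}</bucketUrl>
        <maxUploads>100</maxUploads>
        <keyMarker>my-archive.zip</keyMarker>
        <uploadIdMarker>VXBsb2FkIElEIGV4YW1wbGU</uploadIdMarker>
    </amazons3.listMultipartUploads>
    ```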
    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketUrl | The URL of the bucket. | Yes |
    | delimiter | A delimiter is a character you use to group keys. All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If you do not specify the prefix parameter, the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response. | Yes |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. An object key can contain any Unicode character. However, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, you can add this parameter to request Amazon S3 to encode the keys in the response. | Yes |
    | maxUploads | Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body. 1,000 is the maximum number of uploads that can be returned in a response. | Yes |
    | keyMarker | Specifies the key to start with when listing objects in a bucket. Amazon S3 lists objects in alphabetical order. | Yes |
    | prefix | Limits the response to keys that begin with the specified prefix. | Yes |
    | uploadIdMarker | Together with keyMarker, specifies the multipart upload after which listing should begin. If keyMarker is not specified, the uploadIdMarker parameter is ignored. Otherwise, any multipart uploads for a key equal to the keyMarker might be included in the list only if they have an upload ID lexicographically greater than the specified uploadIdMarker. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:delimiter} - {$ctx:encodingType} - {$ctx:maxUploads} - {$ctx:keyMarker} - {$ctx:prefix} - {$ctx:uploadIdMarker} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - us-east-2 - GET - application/xml - true - http://s3.us-east-2.amazonaws.com/signv4test - 0 - - signv4test - s3.us-east-2.amazonaws.com - - - public-read - - - - - - - ``` - -### Objects - -??? note "deleteObject" - The deleteObject operation removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there is no null version, Amazon S3 does not remove any objects. - - If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the xAmzMfa header in the request. Requests that include xAmzMfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete . - - Following is the proxy configuration for init and deleteObject. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentType | Content type of the resource. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | isXAmzDate | Indicates whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | xAmzMfa | Required to permanently delete a versioned object if versioning is configured with MFA Delete enabled. The value is the concatenation of the authentication device's serial number, a space, and the value displayed on your authentication device. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object to be deleted. | Yes |
    | versionId | Version ID of an object, used to remove a specific object version. | Yes |

    - - > **Note**: To remove a specific version, the user must be the bucket owner and must use the versionId sub-resource, which permanently deletes the version. - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentType} - {$ctx:bucketName} - {$ctx:isXAmzDate} - {$ctx:contentMD5} - {$ctx:xAmzSecurityToken} - {$ctx:host} - {$ctx:region} - {$ctx:expect} - {$ctx:contentLength} - {$ctx:xAmzMfa} - - - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:versionId} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - DELETE - - application/xml - - 100-continue - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - true - - - testObject1 - FHbrL3xf2TK54hLNWWArYI79woSElvHf - - ``` - -??? note "deleteMultipleObjects" - The deleteMultipleObjects operation deletes multiple objects from a bucket using a single HTTP request. If object keys that need to be deleted are known, this operation provides a suitable alternative to sending individual delete requests (deleteObject). The deleteMultipleObjects request contains a list of up to 1000 keys that the user wants to delete. In the XML, you provide the object key names, and optionally provide version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that deletion, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted. - - The deleteMultipleObjects operation supports two modes for the response: verbose and quiet. By default, the operation uses the verbose mode in which the response includes the result of deletion of each key in your request. In the quiet mode, the response includes only keys where the delete operation encountered an error. For a successful deletion, the operation does not return any information about the deletion in the response body. - - When using the deleteMultipleObjects operation that attempts to delete a versioned object on an MFA Delete enabled bucket, you must include an MFA token. If you do not provide one, even if there are non-versioned objects you are attempting to delete. Additionally, f you provide an invalid token, the entire request will fail, regardless of whether there are versioned keys in the request. For more information about MFA Delete, see MFA Delete. - - Following is the proxy configuration for init and deleteMultipleObjects. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentType | Content type of the resource. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | isXAmzDate | Indicates whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | xAmzMfa | Required to permanently delete a versioned object if versioning is configured with MFA Delete enabled. The value is the concatenation of the authentication device's serial number, a space, and the value displayed on your authentication device. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | deleteConfig | The configuration for deleting the objects. It contains a Delete container for the request, with the following elements: Quiet (enables quiet mode for the request; when you add this element, you must set its value to true; the default is false) and Object (a container element that describes the delete request for each object, holding Key, the key name of the object to delete, and VersionId, the version ID for the specific version of the object to delete). See the sketch after this table. | Yes |

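    The nested deleteConfig structure described above maps to an XML payload. The following is a minimal sketch of what that payload could look like, assembled from the element descriptions in the table; the key and version ID values are placeholders taken from the sample request below:

    ```xml
    <Delete>
        <!-- Quiet mode: only keys that failed to delete are reported in the response -->
        <Quiet>true</Quiet>
        <!-- One Object element per key to delete; VersionId is optional -->
        <Object>
            <Key>testObject1</Key>
            <VersionId>PwbvPU.yn3YcHOCF8bntKeTdzfKQC6jN</VersionId>
        </Object>
    </Delete>
    ```
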
    - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentType} - {$ctx:bucketName} - {$ctx:isXAmzDate} - {$ctx:xAmzSecurityToken} - {$ctx:host} - {$ctx:region} - {$ctx:expect} - {$ctx:contentLength} - {$ctx:xAmzMfa} - - - - {$ctx:bucketUrl} - {$ctx:quiet} - {$ctx:deleteConfig} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - POST - application/xml - true - - - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - true - - - - testobject33 - M46OVgxl4lHBNCeZwBpEZvGhj0k5vvjK - - - testObject1 - PwbvPU.yn3YcHOCF8bntKeTdzfKQC6jN - - - - - ``` - -??? note "getObject" - The getObject operation retrieves objects from Amazon S3. To use this operation, the user must have READ access to the object. If the user grants READ access to the anonymous user, the object can be returned without using an authorization header. By default, this operation returns the latest version of the object. - - An Amazon S3 bucket has no directory hierarchy such as in a typical computer file system. However, a logical hierarchy can be created by using object key names that imply a folder structure. For example, instead of naming an object sample.jpg, it could be named photos/2006/February/sample.jpg. To retrieve an object from such a logical hierarchy, the full key name for the object should be specified. - - For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the resource as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the resource as /examplebucket/photos/2006/February/sample.jpg. If the object to be retrieved is a GLACIER storage class object, the object is archived in Amazon Glacier, and you must first restore a copy using the POST Object restore API before retrieving the object. Otherwise, this operation returns the "InvalidObjectStateError" error. - - When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object to retrieve. | Yes |
    | query | Query for search parameters. | Yes |
    | responseContentType | Sets the Content-Type header of the response. | Yes |
    | responseContentLanguage | Sets the Content-Language header of the response. | Yes |
    | responseExpires | Sets the Expires header of the response. | Yes |
    | responseCacheControl | Sets the Cache-Control header of the response. | Yes |
    | responseContentDisposition | Sets the Content-Disposition header of the response. | Yes |
    | responseContentEncoding | Sets the Content-Encoding header of the response. | Yes |
    | range | The HTTP Range header. | Yes |
    | ifModifiedSince | Return the object only if it has been modified since the specified time. | Yes |
    | ifUnmodifiedSince | Return the object only if it has not been modified since the specified time. | Yes |
    | ifMatch | Return the object only if its ETag is the same as the one specified. | Yes |
    | ifNoneMatch | Return the object only if its ETag is different from the one specified. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:responseContentType} - {$ctx:responseContentLanguage} - {$ctx:responseExpires} - {$ctx:responseCacheControl} - {$ctx:responseContentDisposition} - {$ctx:responseContentEncoding} - {$ctx:range} - {$ctx:ifModifiedSince} - {$ctx:ifUnmodifiedSince} - {$ctx:ifMatch} - {$ctx:ifNoneMatch} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - GET - application/xml - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - true - - - Tree2.png - - - - - - - - - - - - - ``` - -??? note "createObject" - The createObject operation adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it. Amazon S3 does not add partial objects, so if a success response is received, the entire object is added to the bucket. Because Amazon S3 is a distributed system, if it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. - - To ensure that data is not corrupted traversing the network, use the Content-MD5 header. When it is used, Amazon S3 checks the object against the provided MD5 value and, if they do not match, it returns an error. Additionally, you can calculate the MD5 value while putting an object to Amazon S3 and compare the returned ETag with the calculated MD5 value. - - When uploading an object, you can specify the accounts or groups that should be granted specific permissions on the object. There are two ways to grant the appropriate permissions using the request headers: either specify a canned (predefined) ACL using the "x-amz-acl" request header, or specify access permissions explicitly using the "x-amz-grant-read", "x-amz-grant-read-acp", "x-amz-grant-write-acp", and "x-amz-grant-full-control" headers. These headers map to the set of permissions Amazon S3 supports in an ACL. Use only one approach, not both. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) for more information. - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name to give the newly created object. | Yes |

    - - **Sample configuration** - - ```xml - - {$url:bucketUrl} - {$url:objectName} - - ``` - -??? note "createObjectACL" - The createObjectACL operation sets the access control list (ACL) permissions for an object that already exists in a bucket. You can specify the ACL in the request body or specify permissions using request headers, depending on the application needs. For example, if there is an existing application that updates an object ACL using the request body, you can continue to use that approach. - - The ACL of an object is set at the object version level. By default, createObjectACL sets the ACL of the latest version of an object. To set the ACL of a different version, use the versionId property. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | Name of the object whose ACL needs to be set. | Yes |
    | ownerId | ID of the bucket owner. | Yes |
    | ownerDisplayName | Screen name of the bucket owner. | Yes |
    | accessControlList | Container for ACL information. It holds Grant elements (one per grantee/permission pair), each containing Grantee (the subject whose permissions are being set, with ID, the ID of the grantee, and DisplayName, the screen name of the grantee) and Permission (specifies the permission to give to the grantee). | Yes |
    | versionId | Version ID of the object, used to set the ACL of a specific object version. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:objectName} - {$ctx:bucketUrl} - {$ctx:ownerId} - {$ctx:ownerDisplayName} - {$ctx:accessControlList} - {$ctx:versionId} - - ``` - - **Sample request** - - ```xml - - AKIAIGURZMDFG7TRO6KQ - asAX8CJoDKzdfg0Ve5dMCFk4STUFDRHkGX6m0CcY - PUT - 256 - application/xml - - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - true - - testObject2 - FHbrL3xf2TK54hLNWWArYI79woSElvHf - - - - - - - f422baefcd6a519ea3c43bec8874b6c3f71c83f72549f4fb4c0e23044efd2531 - rhettige@yahoo.com - - - - c6567b8c9274b78d6af4a3080c5e43e700f560f3517b7d9acc87251412044c35 - pe.chanaka.ck@gmail.com - - WRITE_ACP - - - - c6567b8c9274b78d6af4a3080c5e43e700f560f3517b7d9acc87251412044c35 - pe.chanaka.ck@gmail.com - - READ - - - - ``` - -??? note "createObjectCopy" - The createObjectCopy operation creates a copy of an object that is already stored in Amazon S3. This operation is the same as performing a GET and then a PUT. Adding the request header "x-amz-copy-source" enables the PUT operation to copy the source object into the destination bucket. - - When copying an object, most of the metadata (default) can be preserved, or new metadata can be specified. However, the ACL is not preserved and is set to "private" for the user making the request. All copy requests must be authenticated and cannot contain a message body. Additionally, the user must have the READ access to the source object and WRITE access to the destination bucket. To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, the request headers such as "x-amz-copy-source-if-match", "x-amz-copy-source-if-none-match", "x-amz-copy-source-if-unmodified-since", or "x-amz-copy-source-if-modified-since" must be used (all headers prefixed with "x-amz-" must be signed, including "x-amz-copy-source"). - - There are two instances when the copy request could return an error. One is when Amazon S3 receives the copy request, and the other can occur while Amazon S3 is copying the files. If the error occurs before the copy operation starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. If the request is an HTTP 1.1 request, the response is chunk encoded. Otherwise, it will not contain the content-length, and you will need to read the entire body. - - When copying an object, the accounts or groups that should be granted specific permissions on the object can be specified. There are two ways to grant the appropriate permissions using the request headers: one is to specify a canned (predefined) ACL using the "x-amz-acl" request header, and the other is to s pecify access permissions explicitly using the "x-amz-grant-read", "x-amz-grant-read-acp", "x-amz-grant-write-acp", and "x-amz-grant-full-control" headers. These headers map to the set of permissions Amazon S3 supports in an ACL. Use one approach, not both . - - Following is the proxy configuration for init and createObjectCopy. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) for more information. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | contentType | Content type of the resource. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | isXAmzDate | Specifies whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | xAmzAcl | The canned ACL to apply to the object. | Yes |
    | xAmzGrantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Yes |
    | xAmzGrantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Yes |
    | xAmzGrantReadAcp | Allows the specified grantee or grantees to read the bucket ACL. | Yes |
    | xAmzGrantWriteAcp | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Yes |
    | xAmzGrantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Yes |
    | xAmzCopySource | The name of the source bucket and key name of the source object, separated by a slash (/). | Yes |
    | xAmzMetadataDirective | Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request. | Optional |
    | xAmzCopySourceIfMatch | Copies the object if its entity tag (ETag) matches the specified tag. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | xAmzCopySourceIfNoneMatch | Copies the object if its entity tag (ETag) is different from the specified ETag. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | xAmzCopySourceIfUnmodifiedSince | Copies the object if it has not been modified since the specified time. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | xAmzCopySourceIfModifiedSince | Copies the object if it has been modified since the specified time. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | xAmzServeEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | xAmzStorageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | xAmzWebsiteLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | destinationObject | The destination where the source object will be copied. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentLength} - {$ctx:contentType} - {$ctx:contentMD5} - {$ctx:expect} - {$ctx:host} - {$ctx:region} - {$ctx:isXAmzDate} - {$ctx:xAmzSecurityToken} - {$ctx:bucketName} - {$ctx:xAmzAcl} - {$ctx:xAmzGrantRead} - {$ctx:xAmzGrantWrite} - {$ctx:xAmzGrantReadAcp} - {$ctx:xAmzGrantWriteAcp} - {$ctx:xAmzGrantFullControl} - {$ctx:xAmzCopySource} - {$ctx:xAmzMetadataDirective} - {$ctx:xAmzCopySourceIfMatch} - {$ctx:xAmzCopySourceIfNoneMatch} - {$ctx:xAmzCopySourceIfUnmodifiedSince} - {$ctx:xAmzCopySourceIfModifiedSince} - {$ctx:xAmzServeEncryption} - {$ctx:xAmzStorageClass} - {$ctx:xAmzWebsiteLocation} - - - - {$ctx:bucketUrl} - {$ctx:destinationObject} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - PUT - - application/xml - - - true - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - testObject5 - - - - - - - /imagesBucket5/testObject37 - - - - - - - - - - ``` - -??? note "getObjectMetaData" - The getObjectMetaData operation retrieves metadata from an object without returning the object itself. This operation is useful if you are interested only in an object's metadata. To use this operation, you must have READ access to the object. The response is identical to the GET response except that there is no response body. - - When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object to retrieve details for. | Yes |
    | range | Downloads the specified range bytes of an object. | Yes |
    | ifModifiedSince | Returns the object only if it has been modified since the specified time. Otherwise, returns 304. | Yes |
    | ifUnmodifiedSince | Returns the object only if it has not been modified since the specified time. Otherwise, returns 412. | Yes |
    | ifMatch | Returns the object only if its entity tag (ETag) is the same as the one specified. Otherwise, returns 412. | Yes |
    | ifNoneMatch | Returns the object only if its entity tag (ETag) is different from the one specified. Otherwise, returns 304. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:range} - {$ctx:ifModifiedSince} - {$ctx:ifUnmodifiedSince} - {$ctx:ifMatch} - {$ctx:ifNoneMatch} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEOj+343HD82s - HEAD - - application/xml - - - true - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - testObject2 - - - - - - - ``` - -??? note "uploadPart" - The uploadPart operation uploads a part in a multipart upload. In this operation, you provide part data in your request. However, you have an option to specify your existing Amazon S3 object as the data source for the part being uploaded. You must initiate a multipart upload (see initMultipartUpload) before you can upload any part. In response to your initiate request, Amazon S3 returns an upload ID, which is the unique identifier that must be included in the upload part request. - - Part numbers can be any number from 1 to 10,000 (inclusive). A part number uniquely identifies a part and also defines its position within the object being created. If a new part is uploaded using the same part number that was used with a previous part, the previously uploaded part is overwritten. Each part must be at least 5 MB in size, except the last part. There is no size limit on the last part of your multipart upload. - - To ensure that data is not corrupted when traversing the network, specify the Content-MD5 header in the upload part request. Amazon S3 checks the part data against the provided MD5 value. If they do not match, Amazon S3 returns an error. After the multipart upload is initiated and one or more parts are uploaded, you must either complete or abort multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort multipart upload will Amazon S3 free up the parts storage and stop charging you for the parts storage. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name to give the newly created object. | Yes |
    | uploadId | This specifies the ID of the initiated multipart upload. | Yes |
    | partNumber | Part number that identifies the part. | Yes |

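    As a rough sketch, a proxy configuration for this operation could look like the following. The `amazons3.uploadPart` wrapper element is assumed from the operation name, and the `{$ctx:...}` expressions follow the pattern used by the other operations in this reference:

    ```xml
    <amazons3.uploadPart>
        <bucketUrl>{$ctx:bucketUrl}</bucketUrl>
        <objectName>{$ctx:objectName}</objectName>
        <uploadId>{$ctx:uploadId}</uploadId>
        <partNumber>{$ctx:partNumber}</partNumber>
    </amazons3.uploadPart>
    ```
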
    - - **Sample request** - - ```xml - - AKIAIGUASDRZM7GJ7TRO6KQAD - asAX8CJoDKsdfzeOd0Ve5dMCFk4STUFDRHkGX6m0CSLKcY - true - text/plain - PUT - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - - - - - http://localhost:8889/services/multipart?objectName=testFile1.txt&uploadId=VSMdi3EgFYBq_DpBv6G0LWXydidqO9WIw90UIp81EripQrJNuxOo.jf3tkA.23aURwTOZPBD4iCfcogwtMc8_A--&partNumber=1&bucketUrl=http://sinhala.com.s3-us-west-2.amazonaws.com&accessKeyId=AKIAIGUASDRZM7GJ7TRO6KQAD&secretAccessKey=asAX8CJoDKsdfzeOd0Ve5dMCFk4STUFDRHkGX6m0CSLKcY&bucketName=sinhala.com&isXAmzDate=true&methodType=PUT - ``` - -??? note "completeMultipartUpload" - The completeMultipartUpload operation completes a multipart upload by assembling previously uploaded parts. You should first initiate the multipart upload using initMultipartUpload, and then upload all parts using uploadParts. After you successfully upload all relevant parts of an upload, call completeMultipartUpload to complete the upload. When you call completeMultipartUpload, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the completeMultipartUpload request, you must provide the complete parts list (see listParts). For each part in the list, the part number and the ETag header value must be provided. When the part is uploaded the part number and the ETag header value should be returned. - - Processing of a completeMultipartUpload request can take several minutes. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends whitespace characters to keep the connection from timing out. Because a request could fail after the initial 200 OK response has been sent, it is important that you check the response body to determine whether the request succeeded. If completeMultipartUpload fails, applications should be prepared to retry the failed requests. - - When calling init before this operation, the following headers should be removed: xAmzAcl, xAmzGrantRead, xAmzGrantWrite, xAmzGrantReadAcp, xAmzGrantWriteAcp, and xAmzGrantFullControl. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | partDetails | The container that holds the part details, i.e., all the part numbers and the corresponding ETags. It contains Part elements (one for each previously uploaded part), each holding PartNumber (the part number that identifies the part) and ETag (the entity tag returned when the part is uploaded). | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name to give the newly created object. | Yes |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Yes |
    | contentDisposition | This specifies presentational information for the object. | Yes |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Yes |
    | expires | This specifies the date and time at which the object is no longer cacheable. | Yes |
    | uploadId | This specifies the ID of the current multipart upload. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:partDetails} - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:uploadId} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - POST - application/xml - true - - myimage.png - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - VONszTPldyDo80ARdEMI2kVxEBLQYY1tncD7PpB54WDtLTACJIn.jWRIGo7iL_EkJYn9Z2BT3MM.kEqju9CgLyUveDtl6MgXzRYqjb8R4L.ZVpUhv25d56P2Tk1XnD0C - - - 1 - LKJLINTLNM9879NL7jNLk - - - - - - - - ``` - -??? note "abortMultipartUpload" - The abortMultipartUpload operation aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts. To verify that all parts have been removed so that you do not get charged for the part storage, call the listParts operation and ensure the parts list is empty. - - Following is the proxy configuration for init and abortMultipartUpload. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | contentType | Content type of the resource. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | isXAmzDate | Specifies whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | xAmzAcl | The canned ACL to apply to the object. | Yes |
    | xAmzGrantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Yes |
    | xAmzGrantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Yes |
    | xAmzGrantReadAcp | Allows the specified grantee or grantees to read the bucket ACL. | Yes |
    | xAmzGrantWriteAcp | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Yes |
    | xAmzGrantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Yes |
    | xAmzMeta | Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request. | Optional |
    | xAmzServeEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | xAmzStorageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | xAmzWebsiteLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. | Yes |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Yes |
    | contentDisposition | This specifies presentational information for the object. | Yes |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Yes |
    | expires | The Expires header of the response. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |
    | uploadId | This specifies the ID of the current multipart upload. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentLength} - {$ctx:contentType} - {$ctx:contentMD5} - {$ctx:expect} - {$ctx:host} - {$ctx:region} - {$ctx:isXAmzDate} - {$ctx:xAmzSecurityToken} - {$ctx:bucketName} - {$ctx:xAmzAcl} - {$ctx:xAmzGrantRead} - {$ctx:xAmzGrantWrite} - {$ctx:xAmzGrantReadAcp} - {$ctx:xAmzGrantWriteAcp} - {$ctx:xAmzGrantFullControl} - {$ctx:xAmzMeta} - {$ctx:xAmzServeEncryption} - {$ctx:xAmzStorageClass} - {$ctx:xAmzWebsiteLocation} - - - - {$ctx:cacheControl} - {$ctx:contentDisposition} - {$ctx:contentEncoding} - {$ctx:expires} - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:uploadId} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - DELETE - application/xml - true - - - myimage.png - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - VONszTPldyDo80ARdEMI2kVxEBLQYY1tncD7PpB54WDtLTACJIn.jWRIGo7iL_EkJYn9Z2BT3MM.kEqju9CgLyUveDtl6MgXzRYqjb8R4L.ZVpUhv25d56P2Tk1XnD0C - - - - - - - - - - - - Content-Language:enus - STANDARD - - - ``` - -??? note "listParts" - The listParts operation lists the parts that have been uploaded for a specific multipart upload. - - This operation must include the upload ID, which can be obtained using the initMultipartUpload operation. The listParts operation returns a maximum of 1,000 uploaded parts. The default number of parts returned is 1,000 parts, but you can restrict the number of parts using the maxParts property. If the multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value of true and a NextPartNumberMarker element. In subsequent listParts requests, you can include the partNumberMarker query string parameter and set its value to the NextPartNumberMarker field value from the previous response. - - Following is the proxy configuration for init and listParts. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentType | Content type of the resource. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | isXAmzDate | Specifies whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | uriRemainder | The URI syntax consists of a sequence of components separated by reserved characters, with the first component defining the semantics for the remainder of the URI string. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | maxParts | Maximum number of parts allowed in the response. | Yes |
    | partNumberMarker | Specifies the part after which listing should begin. Only parts with higher part numbers will be listed. | Yes |
    | contentEncoding | The Content-Encoding header of the request. | Yes |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |
    | uploadId | The ID of the upload. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentType} - {$ctx:bucketName} - {$ctx:isXAmzDate} - {$ctx:expect} - {$ctx:contentMD5} - {$ctx:xAmzSecurityToken} - {$ctx:host} - {$ctx:region} - {$ctx:uriRemainder} - {$ctx:contentLength} - - - - {$ctx:maxParts} - {$ctx:partNumberMarker} - {$ctx:contentEncoding} - {$ctx:encodingType} - {$ctx:uploadId} - {$ctx:bucketUrl} - {$ctx:objectName} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - oSDz22F2mwtR+qHXXBXXXXASYQc4oMCEXXX343HD82s - GET - - - url - application/xml - - KyxZ7yjpSSZM9f0bdRectMF5dPg2h08BqTsmWf.8OEIq2Z4YvYg01LmJL0kVDqVcz2utci2CDE2Cn7k647j_84GhExGAN9uer65jljH_oapI758RA_AmcyW4N2usGHH0 - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - myimage.png - true - 100 - 8 - - - ``` - -??? note "initMultipartUpload" - The initMultipartUpload operation initiates a multipart upload and returns an upload ID. This upload ID is used to associate all the parts in the specific multipart upload. You specify this upload ID in each of your subsequent uploadPart requests. You also include this upload ID in the final request to either complete or abort the multipart upload request. - - For request signing, multipart upload is just a series of regular requests: you initiate multipart upload, send one or more requests to upload parts (uploadPart), and finally complete multipart upload (completeMultipartUpload). You sign each request individually. After you initiate multipart upload and upload one or more parts, you must either complete or abort multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort multipart upload will Amazon S3 free up the parts storage and stop charging you for the parts storage. - - Following is the proxy configuration for init and initMultipartUpload. The init section has additional parameters and parameters that need to be removed apart from those mentioned in the Connecting to Amazon S3 section. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | accessKeyId | AWS access key ID. | Yes |
    | secretAccessKey | AWS secret access key. | Yes |
    | methodType | HTTP method type. | Yes |
    | contentType | Content type of the resource. | Yes |
    | bucketName | Name of the bucket. | Yes |
    | isXAmzDate | Specifies whether the current date and time are considered to calculate the signature. Valid values: true or false. | Yes |
    | expect | When this property is set to 100-continue, the request does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. | Yes |
    | contentMD5 | Base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Yes |
    | xAmzSecurityToken | The security token based on whether Amazon DevPay operations or temporary security credentials are used. | Yes |
    | host | The host to use: path-style requests (s3.amazonaws.com) or virtual-style requests (BucketName.s3.amazonaws.com). | Yes |
    | region | Region that is used to select a regional endpoint to make requests. | Yes |
    | uriRemainder | The URI syntax consists of a sequence of components separated by reserved characters, with the first component defining the semantics for the remainder of the URI string. | Yes |
    | contentLength | Length of the message without the headers according to RFC 2616. | Yes |
    | xAmzAcl | The canned ACL to apply to the object. | Yes |
    | xAmzGrantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Yes |
    | xAmzGrantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Yes |
    | xAmzGrantReadAcp | Allows the specified grantee or grantees to read the bucket ACL. | Yes |
    | xAmzGrantWriteAcp | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Yes |
    | xAmzGrantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Yes |
    | xAmzMeta | Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request. | Optional |
    | xAmzServeEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | xAmzStorageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | xAmzWebsiteLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. | Yes |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Yes |
    | contentDisposition | This specifies presentational information for the object. | Yes |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Yes |
    | expires | The date and time at which the object is no longer cacheable. | Yes |
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:accessKeyId} - {$ctx:secretAccessKey} - {$ctx:methodType} - {$ctx:contentType} - {$ctx:bucketName} - {$ctx:isXAmzDate} - {$ctx:expect} - {$ctx:contentMD5} - {$ctx:xAmzSecurityToken} - {$ctx:host} - {$ctx:region} - {$ctx:uriRemainder} - {$ctx:contentLength} - {$ctx:xAmzAcl} - {$ctx:xAmzGrantRead} - {$ctx:xAmzGrantWrite} - {$ctx:xAmzGrantReadAcp} - {$ctx:xAmzGrantWriteAcp} - {$ctx:xAmzGrantFullControl} - {$ctx:xAmzMeta} - {$ctx:xAmzServeEncryption} - {$ctx:xAmzStorageClass} - {$ctx:xAmzWebsiteLocation} - - - - {$ctx:cacheControl} - {$ctx:contentDisposition} - {$ctx:contentEncoding} - {$ctx:expires} - {$ctx:bucketUrl} - {$ctx:objectName} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - POST - application/xml - true - - - myImage.png - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - - - - - - - - - - - - Content-Language:enus - AES256 - STANDARD - - - ``` - -??? note "getObjectACL" - The getObjectACL operation uses the ACL subresource to return the access control list (ACL) of an object. To use this operation, you must have READ_ACP access to the object. - - Following is the proxy configuration for getObjectACL. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETacl.html) for more information. - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:objectName} - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - GET - application/xml; charset=UTF-8 - true - - - 100-continue - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - public-read - GrantRead - Grantwrite - GrantReadAcp - GrantWriteAcp - GrantFullControl - testFile.txt - - ``` - -??? note "getObjectTorrent" - The getObjectTorrent operation uses the torrent subresource to return torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files. - - You can get torrent only for objects that are less than 5 GB in size and that are not encrypted using server-side encryption with customer-provided encryption key. - - To use this operation, you must have READ access to the object. - - Following is the proxy configuration for getObjectTorrent. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtorrent.html) for more information. - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:objectName} - {$ctx:bucketUrl} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - GET - application/xml; charset=UTF-8 - true - - - 100-continue - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - public-read - GrantRead - Grantwrite - GrantReadAcp - GrantWriteAcp - GrantFullControl - testFile.txt - - ``` - -??? note "restoreObject" - The restoreObject operation restores a temporary copy of an archived object. You can optionally provide version ID to restore specific object version. If version ID is not provided, it will restore the current version. The number of days that you want the restored copy will be determined by numberOfDays. After the specified period, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy. - - An object in the Glacier storage class is an archived object. To access the object, you must first initiate a restore request, which restores a copy of the archived object. Restore jobs typically complete in three to five hours. - - Following is the proxy configuration for restoreObject. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOSTrestore.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object. | Yes |
    | numberOfDays | Lifetime of the restored (active) copy. | Yes |
    | versionId | Version ID of an object, used to restore a specific object version. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:objectName} - {$ctx:bucketUrl} - {$ctx:numberOfDays} - {$ctx:versionId} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - POST - application/xml; charset=UTF-8 - true - - - 100-continue - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - public-read - GrantRead - Grantwrite - GrantReadAcp - GrantWriteAcp - GrantFullControl - testFile.txt - 7 - - - ``` - -??? note "uploadPartCopy" - The uploadPartCopy operation uploads a part by copying data from an existing object as data source. You specify the data source by adding the x-amz-copy-source in your request and a byte range by adding the x-amz-copy-source-range in your request. The minimum allowable part size for a multipart upload is 5 MB. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name to give the newly created object. | Yes |
    | uploadId | This specifies the ID of the initiated multipart upload. | Yes |
    | partNumber | This specifies the number (index) of the uploaded part. | Yes |

    - - **Sample configuration** - - ```xml - - {$ctx:objectName} - {$ctx:bucketUrl} - {$ctx:uploadId} - {$ctx:partNumber} - - ``` - - **Sample request** - - ```xml - - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - PUT - application/xml; charset=UTF-8 - 256 - - testFile1.txt - true - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - - SsNUDqUklMaoV_IfePCpGAZHjaxJx.cGXEcX6TVW4I6WzOQFnAKomYevz5qi5LtkfTvlpwjY9M6QDGsIIvdGEQzBURo3MMU2Yh.ZEQDsk_lsnx3Z8m9jsglW6FIfKGQ_ - 2 - /testFile1.txt?partNumber=2&uploadId=SsNUDqUklMaoV_IfePCpGAZHjaxJx.cGXEcX6TVW4I6WzOQFnAKomYevz5qi5LtkfTvlpwjY9M6QDGsIIvdGEQzBURo3MMU2Yh.ZEQDsk_lsnx3Z8m9jsglW6FIfKGQ_ - /testBucket1/testFile.jpg - bytes=0-9 - - - - - - ``` - -??? note "headObject" - The headObject operation retrieves metadata from an object without returning the object itself. This operation is useful if you are interested only in an object's metadata. To use this operation, you must have READ access to that object. A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body. - - See the [related API documentation](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    | Parameter Name | Description | Required |
    |---|---|---|
    | bucketUrl | The URL of the bucket. | Yes |
    | objectName | The name of the object to retrieve metadata for. | Yes |
    | range | The specified range bytes of an object to download. | Optional |
    | ifModifiedSince | Return the object only if it has been modified since the specified time. | Optional |
    | ifUnmodifiedSince | Return the object only if it has not been modified since the specified time. | Optional |
    | ifMatch | Return the object only if its entity tag (ETag) is the same as the one specified. | Optional |
    | ifNoneMatch | Return the object only if its entity tag (ETag) is different from the one specified. | Optional |

    - - **Sample configuration** - - ```xml - - {$ctx:bucketUrl} - {$ctx:objectName} - {$ctx:range} - {$ctx:ifModifiedSince} - {$ctx:ifUnmodifiedSince} - {$ctx:ifMatch} - {$ctx:ifNoneMatch} - - ``` - - **Sample request** - - ```xml - AKIXXXXXHXQXXG5XX - qHXXBXXXXASYQc4oMCEXXX343HD82s - PUT - 256 - application/xml - - - us-east-2 - s3.us-east-2.amazonaws.com - http://s3.us-east-2.amazonaws.com/signv4test - signv4test - true - - testObject2 - - - - - - - - - - - - - ``` - diff --git a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-config.md b/en/docs/reference/connectors/amazons3-connector/amazons3-connector-config.md deleted file mode 100644 index d9304df7e8..0000000000 --- a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-config.md +++ /dev/null @@ -1,101 +0,0 @@ -# Setting up the Amazon S3 Environment - -To use the AmazonS3 service, you must have an AWS account. If you don't already have an account, you are prompted to create one when you sign up. You're not charged for any AWS services that you sign up for unless you use them. - -## Signing Up for AWS - -* **To sign up for AWS:** - - 1. Navigate to [Amazon AWS website](https://aws.amazon.com/) and select **Create an AWS Account**. - - > **Note**: If you previously signed in to the AWS Management Console using AWS account root user credentials, select **Sign in to a different account**. If you previously signed in to the console using IAM credentials, choose Sign-in using root account credentials. Then select **Create a new AWS account**. - - 2. Follow the online instructions. - -Part of the sign-up procedure involves receiving a phone call and entering a verification code using the phone keypad. AWS will notify you by email when your account is active and available for you to use. - -## Obtaining user credentials - -You can access the Amazon S3 service using the root user credentials but these credentials allow full access to all resources in the account as you cannot restrict permission for root user credentials. If you want to restrict certain resources and allow controlled access to AWS services then you can create IAM (Identity and Access Management) users in your AWS account for that scenario. - -## Steps to get an AWS Access Key for your AWS root account - - 1. Go to the AWS Management Console. - - AWS Management Console - - 2. Hover over your company name in the right top menu and click "My Security Credentials". - - My security credentials - - 3. Scroll to the "Access Keys" section. - - Create accesskey using root account - - 4. Click on "Create New Access Key". - 5. Copy both the Access Key ID (YOUR_AMAZON_S3_KEY) and Secret Access Key (YOUR_AMAZON_S3_SECRET). - -## Steps to get an AWS Access Key for an IAM user account - - 1. Sign in to the AWS Management Console and open the IAM console. - - IAM - - 2. In the navigation pane, choose Users. - - IAM users - - 3. Add a checkmark next to the name of the desired user, and then choose User Actions from the top. - 4. Click on Manage Access Keys. - - Security credentials - - 5. Click on Create Access Key. - - Create access key using IAM - - 6. Click on Show User Security Credentials. Copy and paste the Access Key ID and Secret Access Key values, or click on Download Credentials to download the credentials in a CSV (file). - - Download access key - - -The Access Key ID (e.g., AKIAJA3J6GE646JWVA9C) and Secret Access Key (e.g., H/P/G3Tey1fQOKPAU1GBbl/NhL/WpSaEvxbvUlp4) will be required to configure the Amazon S3 connector. 
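As a rough sketch of where these values end up, the connector's `init` configuration (called before each operation throughout the reference above) takes the credentials as follows. The element names come from the parameter tables earlier in this document, and the placeholder values are illustrative only:

```xml
<amazons3.init>
    <!-- Credentials obtained from the AWS console (placeholders, not real keys) -->
    <accessKeyId>YOUR_AMAZON_S3_KEY</accessKeyId>
    <secretAccessKey>YOUR_AMAZON_S3_SECRET</secretAccessKey>
    <!-- Region used to select a regional endpoint, e.g. us-east-2 -->
    <region>us-east-2</region>
</amazons3.init>
```
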
You can manage S3 buckets by logging in to the S3 console.

## Deploying the client libraries

Finally, download the following client libraries and place them in the `<MI_HOME>/lib` directory (for S3 connector 2.0.5 and above).

* [auth-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/auth/2.20.26)
* [aws-core-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-core/2.20.26)
* [aws-query-protocol-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-query-protocol/2.20.26)
* [aws-xml-protocol-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-xml-protocol/2.20.26)
* [endpoints-spi-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/endpoints-spi/2.20.26)
* [http-client-spi-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/http-client-spi/2.20.26)
* [json-utils-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/json-utils/2.20.26)
* [metrics-spi-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/metrics-spi/2.20.26)
* [profiles-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/profiles/2.20.26)
* [protocol-core-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/protocol-core/2.20.26)
* [reactive-streams-1.0.0.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/reactive-streams/1.0.0)
* [regions-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/regions/2.20.26)
* [s3-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/s3/2.20.26)
* [sdk-core-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/sdk-core/2.20.26)
* [third-party-jackson-core-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/third-party-jackson-core/2.20.26)
* [url-connection-client-2.1.2.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/url-connection-client/2.1.2)
* [utils-2.20.26.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/utils/2.20.26)

If you are using S3 connector version 2.0.4 or below, place the following client libraries instead of the ones above.
* [auth-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/auth/2.14.12)
* [aws-core-2.13.71.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-core/2.13.71)
* [aws-query-protocol-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-query-protocol/2.14.12)
* [aws-xml-protocol-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/aws-xml-protocol/2.14.12)
* [http-client-spi-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/http-client-spi/2.14.12)
* [metrics-spi-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/metrics-spi/2.14.12)
* [profiles-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/profiles/2.14.12)
* [protocol-core-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/protocol-core/2.14.12)
* [reactive-streams-1.0.0.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/reactive-streams/1.0.0)
* [regions-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/regions/2.14.12)
* [s3-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/s3/2.14.12)
* [sdk-core-2.14.12.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/sdk-core/2.14.12)
* [url-connection-client-2.1.2.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/url-connection-client/2.1.2)
* [utils-2.14.27.jar](https://mvnrepository.com/artifact/software.amazon.awssdk/utils/2.14.27)
diff --git a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-example.md b/en/docs/reference/connectors/amazons3-connector/amazons3-connector-example.md
deleted file mode 100644
index 40640f2bf0..0000000000
--- a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-example.md
+++ /dev/null
@@ -1,288 +0,0 @@
# Amazon S3 Connector Example

The Amazon S3 Connector allows you to access the Amazon Simple Storage Service (Amazon S3) via the AWS [SDK](https://aws.amazon.com/sdk-for-java/).

## What you'll build

This example shows how to use the Amazon S3 connector to:

1. Create an S3 bucket (a location for storing your data) in the Amazon cloud.
2. Upload a message into the created bucket as a text file.
3. Retrieve the created text file and convert it back into a message in the integration runtime.

All three operations are exposed via an API. The API with the context `/s3connector` has three resources:

* `/createbucket` - Once invoked, it will create a bucket in Amazon S3 with the specified name.
* `/addobject` - The incoming message will be stored in the specified bucket with the specified name.
* `/info` - Once invoked, it will read the specified file from the specified bucket and respond with the content of the file.

The user creates a bucket, stores a message in the bucket, and then receives it back. To invoke each operation, the user uses the same API.

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Setting up the environment

Follow the steps in the [Setting up Amazon S3]({{base_path}}/reference/connectors/amazons3-connector/amazons3-connector-config) document to create an Amazon S3 account and obtain the credentials you need to access the Amazon APIs. Keep them saved for the next steps.
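
Optionally, if you have the [AWS CLI](https://aws.amazon.com/cli/) installed, you can run a quick sanity check to confirm that the credentials work before wiring them into the connector. This is an extra step, not part of the connector setup itself, and the profile name `s3-connector-test` below is just an example.

```bash
# Store the access key pair under a named profile (the CLI prompts for the values).
aws configure --profile s3-connector-test

# List the buckets visible to these credentials; an empty list is still a successful call.
aws s3 ls --profile s3-connector-test
```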
## Configure the connector in WSO2 Integration Studio

Follow these steps to set up the Integration Project and import the Amazon S3 connector into it.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

## Creating the Integration Logic

1. Specify the API name as `S3ConnectorTestAPI` and the API context as `/s3connector`.

2. First, create the `/createbucket` resource. This API resource will retrieve the bucket name from the incoming HTTP PUT request and create a bucket in Amazon S3. Right-click the API Resource and go to the **Properties** view. We use a URL template called `/createbucket` as we have three API resources inside a single API. The method will be PUT.

3. Next, drag and drop the 'createBucket' operation of the S3 Connector to the Design View. Here, you will receive the following input from the user.

    - bucketName - Name of the bucket

4. Create a connection from the properties window by clicking on the '+' icon.

    In the popup window, provide the following parameters.

    - Connection Name - Unique name to identify the connection by.
    - Connection Type - Type of the connection that specifies the protocol to be used.
    - AWS Access Key ID - Access key associated with your Amazon user account.
    - AWS Secret Access Key - Secret access key associated with your Amazon user account.
    - Region - Region that is used to select a regional endpoint to make requests.

    !!! note
        1. You can either define the credentials or allow the AWS SDK to manage the credentials. The SDK will look for AWS credentials in system/user environment variables or use the IAM role for authentication if the application is running in an EC2 instance.
        2. The [IAM role for authentication](https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) is available only with Amazon S3 connector v2.0.2 and above.

5. After the connection is successfully created, select the created connection as 'Connection' from the drop-down menu in the properties window.

6. Next, configure the following parameters in the properties window.

    - Bucket Name - json-eval($.bucketName)
    - Bucket Region - Select a region from the drop-down menu. Here we are using us-east-2.

7. Drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from creating the bucket.

8. Create the next API resource, `/addobject`, by dragging and dropping another API resource to the design view. This API resource will retrieve information about the object from the incoming HTTP POST request, such as the bucketName, objectKey, and the file content, and upload it to Amazon S3.

9. Drag and drop the 'putObject' operation of the S3 Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop-down menu and provide the following expressions for the properties below.

    - Bucket Name - json-eval($.bucketName)
    - Object Key - json-eval($.objectKey)
    - File Content - json-eval($.message)

10. Drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from uploading the object.

11. Create the next API resource, `/info`, by dragging and dropping another API resource to the design view. This API resource will retrieve information from the incoming HTTP POST request, such as the bucketName and objectKey, and get the object from Amazon S3.

12. Next, drag and drop the 'getObject' operation of the S3 Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop-down menu and provide the following expressions for the properties below.

    - Bucket Name - json-eval($.bucketName)
    - Object Key - json-eval($.objectKey)

13. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from the getObject operation.

14. You can find the complete API XML configuration below. You can go to the source view and copy-paste the following config.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<api context="/s3connector" name="S3ConnectorTestAPI" xmlns="http://ws.apache.org/ns/synapse">
    <!-- The connection local entry name (AMAZON_S3_CONNECTION_1) is a placeholder; use the name of the connection you created. -->
    <resource methods="PUT" uri-template="/createbucket">
        <inSequence>
            <amazons3.createBucket configKey="AMAZON_S3_CONNECTION_1">
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <bucketRegion>us-east-2</bucketRegion>
            </amazons3.createBucket>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
    <resource methods="POST" uri-template="/addobject">
        <inSequence>
            <amazons3.putObject configKey="AMAZON_S3_CONNECTION_1">
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <objectKey>{json-eval($.objectKey)}</objectKey>
                <fileContent>{json-eval($.message)}</fileContent>
            </amazons3.putObject>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
    <resource methods="POST" uri-template="/info">
        <inSequence>
            <amazons3.getObject configKey="AMAZON_S3_CONNECTION_1">
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <objectKey>{json-eval($.objectKey)}</objectKey>
            </amazons3.getObject>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
</api>
```

**Note**:

* As `awsAccessKeyId`, use the access key obtained from the Amazon S3 setup and update the configuration accordingly.
* As `awsSecretAccessKey`, use the secret key obtained from the Amazon S3 setup and update the configuration accordingly.
* Note that `region`, `connectionName`, and the credentials are hard coded. Please change them as per your requirements.
* For more information, please refer to the [reference guide]({{base_path}}/reference/connectors/amazons3-connector/amazons3-connector-reference) for the Amazon S3 connector.

Now we can export the imported connector and the API into a single CAR application. The CAR application is what we are going to deploy to the server runtime.

{!includes/reference/connectors/exporting-artifacts.md!}

Now the exported CApp can be deployed in the integration runtime so that we can run it and test it.

## Get the project

You can download the ZIP file and extract the contents to get the project code.

Download ZIP

!!! tip
    You may need to update the value of the access key and make other such changes before deploying and running this project.

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

We can use curl or Postman to try the API. The testing steps are provided for curl. The steps for Postman are similar and can be derived from the curl requests.

### Creating a bucket in Amazon S3

1. Create a file called `data.json` with the following content. Note that the bucket region is `us-east-2`. If you need to create the bucket in a different region, modify the hard-coded region of the API configuration accordingly.
    ```json
    {
        "bucketName":"wso2engineers"
    }
    ```
2. Invoke the API as shown below using the curl command. Curl can be downloaded from [here](https://curl.haxx.se/download.html).
    ```
    curl -H "Content-Type: application/json" --request PUT --data @data.json http://127.0.0.1:8290/s3connector/createbucket
    ```

**Expected Response**:

    You will receive a response like the following, containing the details of the bucket created.
    ```json
    {
        "createBucketResult": {
            "success": true,
            "Response": {
                "Status": "200:Optional[OK]",
                "Location": "http://wso2engineers.s3.amazonaws.com/"
            }
        }
    }
    ```

    Navigate to the [Amazon AWS S3 console](https://s3.console.aws.amazon.com/) and check whether a bucket called `wso2engineers` has been created. If you tried to create a bucket with a name that already exists, the response indicates the conflict.

### Post a message into the Amazon S3 bucket

1. Create a file called `data.json` with the following content.
    ```json
    {
        "bucketName":"wso2engineers",
        "objectKey":"Julian.txt",
        "message":"Julian Garfield, Software Engineer, Integration Group"
    }
    ```
2. Invoke the API as shown below using the curl command. Curl can be downloaded from [here](https://curl.haxx.se/download.html).
    ```
    curl -H "Content-Type: application/json" --request POST --data @data.json http://127.0.0.1:8290/s3connector/addobject
    ```

**Expected Response**:

    You will receive a response like the following, containing the details of the object created.

    ```json
    {
        "putObjectResult": {
            "success": true,
            "PutObjectResponse": {
                "ETag": "\"359a77e8b4a63a637df3e63d16fd0e34\""
            }
        }
    }
    ```
    Navigate to the AWS S3 console and click the bucket `wso2engineers`. You will see that a file has been created with the name `Julian.txt`.

### Read objects from the Amazon S3 bucket

Now let us read the `Julian.txt` object that we stored in the Amazon S3 bucket.

1. Create a file called `data.json` with the following content. It specifies which bucket to read from and what the filename is. This example assumes that the object is stored at the root level inside the bucket. You can also read an object stored in a folder inside the bucket.

    ```json
    {
        "bucketName":"wso2engineers",
        "objectKey":"Julian.txt"
    }
    ```
2. Invoke the API as shown below using the curl command.
    ```
    curl -H "Content-Type: application/json" --request POST --data @data.json http://127.0.0.1:8290/s3connector/info
    ```

**Expected Response**:

    You receive a response similar to the following. The `Content` element contains the contents of the file requested.

    !!! note
        The `Content` element is available only with Amazon S3 connector v2.0.1 and above.

    ```json
    {
        "getObjectResult": {
            "success": true,
            "GetObjectResponse": {
                "AcceptRanges": "bytes",
                "Content": "Julian Garfield, Software Engineer, Integration Group",
                "ContentLength": 45,
                "ContentType": "text/plain; charset=UTF-8",
                "DeleteMarker": false,
                "ETag": "\"359a77e8b4a63a637df3e63d16fd0e34\"",
                "LastModified": null,
                "metadata": null,
                "MissingMeta": 0,
                "PartsCount": 0,
                "TagCount": 0
            }
        }
    }
    ```

In this example, the Amazon S3 connector is used to perform operations on Amazon S3 storage. You can find details of errors that occur when invoking S3 operations in the S3 responses themselves. Please read the [Amazon S3 connector reference guide]({{base_path}}/reference/connectors/amazons3-connector/amazons3-connector-reference) to learn more about the operations you can perform with the Amazon S3 connector.
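
For convenience, the three curl invocations above can also be chained into a single smoke-test script. This is only a sketch: it assumes the integration runtime is listening on `127.0.0.1:8290` as in the examples above, and that the `wso2engineers` bucket name is available in your account.

```bash
#!/usr/bin/env bash
# Smoke test for the /s3connector API: create a bucket, store an object, read it back.
set -euo pipefail

BASE="http://127.0.0.1:8290/s3connector"

# 1. Create the bucket.
curl -s -H "Content-Type: application/json" --request PUT \
     --data '{"bucketName":"wso2engineers"}' "$BASE/createbucket"

# 2. Upload a text object into the bucket.
curl -s -H "Content-Type: application/json" --request POST \
     --data '{"bucketName":"wso2engineers","objectKey":"Julian.txt","message":"Julian Garfield, Software Engineer, Integration Group"}' \
     "$BASE/addobject"

# 3. Read the object back; the Content field of the response should contain the message.
curl -s -H "Content-Type: application/json" --request POST \
     --data '{"bucketName":"wso2engineers","objectKey":"Julian.txt"}' "$BASE/info"
```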
diff --git a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-overview.md b/en/docs/reference/connectors/amazons3-connector/amazons3-connector-overview.md
deleted file mode 100644
index 119cccab12..0000000000
--- a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-overview.md
+++ /dev/null
@@ -1,36 +0,0 @@
# Amazon S3 Connector Overview

Amazon S3 is a web-based storage service that can be used to store and retrieve data at any time, from anywhere on the web. Amazon uses the same service to run its own network, which demonstrates its scalability, reliability, and security.

The Amazon S3 Connector versions 1.0.10 and below allow you to access the REST API of the [Amazon Storage Service S3](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html). This lets you store your information and retrieve it when needed. The Amazon S3 Connector is useful for moving your on-premises data to the cloud; the advantage is that you do not need to worry about managing and replicating data on premises.

Versions 2.0.0 and above allow you to access the AWS component via the AWS [SDK](https://aws.amazon.com/sdk-for-java/). The SDK makes it easy to call AWS services using idiomatic Java APIs.

To see the Amazon S3 connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Amazon".

## Compatibility

| Connector Version | Supported product versions | Supported API |
| ------------- |-------------|-------------|
| 2.0.2 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0 | AWS SDK |
| 1.0.10 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 | REST |

For older versions, see the details in the connector store.

## Amazon S3 Connector documentation (latest - 2.x version)

* **[Amazon S3 Connector Example]({{base_path}}/reference/connectors/amazons3-connector/amazons3-connector-example/)**: This example demonstrates how to use the Amazon S3 Connector to create an S3 bucket, upload a text message into the bucket, retrieve it, and convert it into a message in the integration runtime.

* **[Amazon S3 Connector Reference]({{base_path}}/reference/connectors/amazons3-connector/amazons3-connector-reference/)**: This documentation provides a reference guide for the Amazon S3 Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, create a pull request in the following repository.

* [Amazon S3 Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-amazons3)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-reference.md b/en/docs/reference/connectors/amazons3-connector/amazons3-connector-reference.md
deleted file mode 100644
index 03293a2168..0000000000
--- a/en/docs/reference/connectors/amazons3-connector/amazons3-connector-reference.md
+++ /dev/null
@@ -1,3535 +0,0 @@
# Amazon S3 Connector Reference

The following operations allow you to work with the Amazon S3 Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the Amazon S3 connector, add the `<amazons3.init>` element in your configuration before carrying out any Amazon S3 operations.
This Amazon S3 configuration authenticates with Amazon S3 by specifying the AWS access key ID and secret access key, which are used for every operation.

??? note "init"
    The init operation is used to initialize the connection to Amazon S3.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | awsAccessKeyId | AWS access key ID. | Optional |
    | awsSecretAccessKey | AWS secret access key. | Optional |
    | name | Unique name to identify the connection by. | Yes |
    | region | Region that is used to select a regional endpoint to make requests, e.g., us-east-1. | Yes |
    | host | The AWS API endpoint hostname to which you need to connect. | Optional |

    > **Note**: You can either pass credentials within the init configuration or set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables. The AWS SDK uses provider chains to look for AWS credentials in system/user environment variables.

    To set these environment variables on Linux, macOS, or Unix, use export:

    ```
    export AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXA
    export AWS_SECRET_ACCESS_KEY=qHZXXXXXXQc4oMQMnAOj+340XXxO2s
    ```

    To set these environment variables on Windows, use set:

    ```
    set AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXA
    set AWS_SECRET_ACCESS_KEY=qHZXXXXXXQc4oMQMnAOj+340XXxO2s
    ```

    > **Note**: If the application is running in an EC2 instance and credentials are not defined in the init configuration, the credentials are obtained from the [IAM role](https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) assigned to the Amazon EC2 instance. This option is available only with Amazon S3 connector v2.0.2 and above.

    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <name>{$ctx:connectionName}</name>
        <region>{$ctx:region}</region>
    </amazons3.init>
    ```

---

### Buckets

??? note "listBuckets"
    The listBuckets implementation returns a list of all buckets owned by the authenticated sender of the request. To authenticate a request, use a valid AWS Access Key ID that is registered with Amazon S3. Anonymous requests cannot list buckets, and a user cannot list buckets that were not created by that particular user.

    **Sample configuration**

    ```xml
    <amazons3.listBuckets/>
    ```

    **Sample request**

    ```xml
    <amazons3.listBuckets/>
    ```


??? note "createBucket"
    The createBucket operation creates a new bucket. To create a bucket, the user should be registered with Amazon S3 and have a valid AWS Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, the user becomes the owner of the bucket. Not every string is an acceptable bucket name. For information on bucket naming restrictions, see [Working with Amazon S3 Buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html). By default, the bucket is created in the US Standard region. The user can optionally specify a region in the request body. For example, if the user resides in Europe, the user will probably find it advantageous to create buckets in the EU (Ireland) region. For more information, see [How to Select a Region for Your Buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro). See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CreateBucketRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | bucketRegion | Region for the created bucket. | Yes |
    | acl | The canned ACL to apply to the object. | Optional |
    | grantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Optional |
    | grantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Optional |
    | grantReadACP | Allows the specified grantee or grantees to read the bucket ACL. | Optional |
    | grantWriteACP | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Optional |
    | grantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Optional |
    | objectLockEnabledForBucket | Specifies whether you want S3 Object Lock to be enabled for the new bucket. | Optional |

    **Sample configuration**

    ```xml
    <amazons3.createBucket>
        <bucketName>{$ctx:bucketName}</bucketName>
        <bucketRegion>{$ctx:bucketRegion}</bucketRegion>
        <acl>{$ctx:acl}</acl>
        <grantFullControl>{$ctx:grantFullControl}</grantFullControl>
        <grantRead>{$ctx:grantRead}</grantRead>
        <grantReadACP>{$ctx:grantReadACP}</grantReadACP>
        <grantWrite>{$ctx:grantWrite}</grantWrite>
        <grantWriteACP>{$ctx:grantWriteACP}</grantWriteACP>
        <objectLockEnabledForBucket>{$ctx:objectLockEnabledForBucket}</objectLockEnabledForBucket>
    </amazons3.createBucket>
    ```

    **Sample request**

    ```xml
    <createBucket>
        <bucketName>signv4test</bucketName>
        <bucketRegion>us-east-2</bucketRegion>
        <objectLockEnabledForBucket>false</objectLockEnabledForBucket>
    </createBucket>
    ```


??? note "putBucketWebsite"
    Sets the configuration of the website that is specified in the website subresource.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | websiteConfig | Website configuration information. For information on the elements you use in the request to specify the website configuration, see [here](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketWebsiteRequest.html). | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketWebsite>
        <bucketName>{$ctx:bucketName}</bucketName>
        <websiteConfig>{$ctx:websiteConfig}</websiteConfig>
    </amazons3.putBucketWebsite>
    ```

    **Sample request**

    ```xml
    <putBucketWebsite>
        <bucketName>signv4test</bucketName>
        <websiteConfig>
            <IndexDocument>
                <Suffix>index2.html</Suffix>
            </IndexDocument>
            <ErrorDocument>
                <Key>Error2.html</Key>
            </ErrorDocument>
            <RoutingRules>
                <RoutingRule>
                    <Condition>
                        <KeyPrefixEquals>docs/</KeyPrefixEquals>
                    </Condition>
                    <Redirect>
                        <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
                    </Redirect>
                </RoutingRule>
                <RoutingRule>
                    <Condition>
                        <KeyPrefixEquals>images/</KeyPrefixEquals>
                    </Condition>
                    <Redirect>
                        <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
                    </Redirect>
                </RoutingRule>
            </RoutingRules>
        </websiteConfig>
    </putBucketWebsite>
    ```

??? note "putBucketPolicy"
    The putBucketPolicy operation adds or replaces a policy on a bucket. If the bucket already has a policy, the one in this request completely replaces it. To perform this operation, you must be the bucket owner.

    If you are not the bucket owner but have PutBucketPolicy permissions on the bucket, Amazon S3 returns a 405 Method Not Allowed. In all other cases, for a PUT bucket policy request that is not from the bucket owner, Amazon S3 returns 403 Access Denied. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketPolicyRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | bucketPolicy | Policy of the bucket. | Yes |
    | confirmRemoveSelfBucketAccess | Set this to confirm that you want to remove your ability to change this bucket policy in the future. | Optional |

    **Sample configuration**

    ```xml
    <amazons3.putBucketPolicy>
        <bucketName>{$ctx:bucketName}</bucketName>
        <bucketPolicy>{$ctx:bucketPolicy}</bucketPolicy>
        <confirmRemoveSelfBucketAccess>{$ctx:confirmRemoveSelfBucketAccess}</confirmRemoveSelfBucketAccess>
    </amazons3.putBucketPolicy>
    ```

    **Sample request**

    ```json
    {
        "awsAccessKeyId": "AKXXXXXXXXX5EAS",
        "awsSecretAccessKey": "qHXXXXXXNMDYadDdsQMnAOj+3XXXXPs",
        "region":"us-east-2",
        "connectionName": "amazonS3",
        "bucketName": "signv4test",
        "bucketPolicy": {
            "Version":"2012-10-17",
            "Statement":[{
                "Sid":"AddPerm",
                "Effect":"Allow",
                "Principal": {
                    "AWS": ["*"]
                },
                "Action":["s3:*"],
                "Resource":["arn:aws:s3:::signv4test/*"]
            }]
        },
        "confirmRemoveSelfBucketAccess":""
    }
    ```

??? note "putBucketACL"
    The putBucketACL operation uses the ACL subresource to set the permissions on an existing bucket using access control lists (ACL). See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketAclRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | accessControlList | Container for ACL information, which includes one or more Grant elements. Each Grant is a container for the grantee and permissions: Grantee (the subject whose permissions are being set) with ID (ID of the grantee) and DisplayName (screen name of the grantee), and Permission (specifies the permission to give to the grantee). | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketACL>
        <bucketName>{$ctx:bucketName}</bucketName>
        <accessControlList>{$ctx:accessControlList}</accessControlList>
    </amazons3.putBucketACL>
    ```

    **Sample request**

    ```xml
    <putBucketACL>
        <bucketName>signv4test</bucketName>
        <accessControlList>
            <Grant>
                <Grantee>
                    <ID>9a48e6b16816cc75df306d35bb5d0bd0778b61fbf49b8ef4892143197c84a867</ID>
                    <DisplayName>admin+aws+connectors+secondary</DisplayName>
                </Grantee>
                <Permission>FULL_CONTROL</Permission>
            </Grant>
            <Grant>
                <Grantee>
                    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
                </Grantee>
                <Permission>READ</Permission>
            </Grant>
        </accessControlList>
    </putBucketACL>
    ```

??? note "putBucketLifecycleConfiguration"
    The putBucketLifecycleConfiguration operation creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketLifecycleConfigurationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | lifecycleConfiguration | Container for lifecycle rules. Each Rule is a container for a lifecycle rule and includes: ID (unique identifier for the rule; the value cannot be longer than 255 characters), Prefix (object key prefix identifying one or more objects to which the rule applies), Status (if Enabled, Amazon S3 executes the rule as scheduled; if Disabled, Amazon S3 ignores the rule), Transition (specifies a period in the objects' lifetime when Amazon S3 should transition them to the STANDARD_IA or the GLACIER storage class, with Days, the number of days after object creation when the action takes effect, and StorageClass, the Amazon S3 storage class to which you want the object to transition), and Expiration (specifies a period in an object's lifetime when Amazon S3 should take the appropriate expiration action, with Days, the number of days after object creation when the action takes effect). | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketLifecycleConfiguration>
        <bucketName>{$ctx:bucketName}</bucketName>
        <lifecycleConfiguration>{$ctx:lifecycleConfiguration}</lifecycleConfiguration>
    </amazons3.putBucketLifecycleConfiguration>
    ```

    **Sample request**

    ```xml
    <putBucketLifecycleConfiguration>
        <bucketName>signv4test</bucketName>
        <lifecycleConfiguration>
            <Rule>
                <ID>id1</ID>
                <Prefix>documents/</Prefix>
                <Status>Enabled</Status>
                <Transition>
                    <Days>30</Days>
                    <StorageClass>GLACIER</StorageClass>
                </Transition>
            </Rule>
            <Rule>
                <ID>id2</ID>
                <Prefix>logs/</Prefix>
                <Status>Enabled</Status>
                <Expiration>
                    <Days>365</Days>
                </Expiration>
            </Rule>
        </lifecycleConfiguration>
    </putBucketLifecycleConfiguration>
    ```

??? note "putBucketReplication"
    The putBucketReplication operation creates a replication configuration for the bucket or replaces an existing one. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketReplicationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | replicationConfiguration | Container for the replication configuration: the Amazon Resource Name (ARN) of an IAM role and a set of replication rules. Each Rule is a container for information about a particular replication rule and includes: ID (unique identifier for the rule; the value cannot be longer than 255 characters), Prefix (object key prefix identifying one or more objects to which the rule applies), Status (the rule is ignored if status is not Enabled), and Destination (container for destination information, with Bucket, the Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to store replicas of the objects identified by the rule). Role is the Amazon Resource Name (ARN) of an IAM role for Amazon S3 to assume when replicating the objects. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketReplication>
        <bucketName>{$ctx:bucketName}</bucketName>
        <replicationConfiguration>{$ctx:replicationConfiguration}</replicationConfiguration>
    </amazons3.putBucketReplication>
    ```

    **Sample request**

    ```xml
    <putBucketReplication>
        <bucketName>signv4test</bucketName>
        <replicationConfiguration>
            <Rule>
                <ID>id1</ID>
                <Prefix>documents/</Prefix>
                <Status>Enabled</Status>
                <Destination>
                    <Bucket>arn:aws:s3:::signv4testq23aa1</Bucket>
                </Destination>
            </Rule>
        </replicationConfiguration>
    </putBucketReplication>
    ```

??? note "putBucketTagging"
    The putBucketTagging operation uses the tagging subresource to add a set of tags to an existing bucket. Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketTaggingRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | tagSet | Container for a set of tags. Each Tag is a container for tag information, with Key (name of the tag) and Value (value of the tag). | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketTagging>
        <bucketName>{$ctx:bucketName}</bucketName>
        <tagSet>{$ctx:tagSet}</tagSet>
    </amazons3.putBucketTagging>
    ```

    **Sample request**

    ```xml
    <putBucketTagging>
        <bucketName>signv4test</bucketName>
        <tagSet>
            <Tag>
                <Key>Project</Key>
                <Value>Project One</Value>
            </Tag>
            <Tag>
                <Key>User</Key>
                <Value>jsmith</Value>
            </Tag>
        </tagSet>
    </putBucketTagging>
    ```

??? note "putBucketRequestPayment"
    The putBucketRequestPayment operation uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketRequestPaymentRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | payer | Specifies who pays for the download and request fees. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketRequestPayment>
        <bucketName>{$ctx:bucketName}</bucketName>
        <payer>{$ctx:payer}</payer>
    </amazons3.putBucketRequestPayment>
    ```

    **Sample request**

    ```xml
    <putBucketRequestPayment>
        <bucketName>signv4test</bucketName>
        <payer>Requester</payer>
    </putBucketRequestPayment>
    ```

??? note "putBucketVersioning"
    The putBucketVersioning operation sets the versioning state of an existing bucket. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketVersioningRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | status | Sets the versioning state of the bucket. | Optional |
    | mfa | The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. | Optional |
    | mfaDelete | Specifies whether MFA Delete is enabled in the bucket versioning configuration. | Optional |

    **Sample configuration**

    ```xml
    <amazons3.putBucketVersioning>
        <bucketName>{$ctx:bucketName}</bucketName>
        <status>{$ctx:status}</status>
        <mfaDelete>{$ctx:mfaDelete}</mfaDelete>
    </amazons3.putBucketVersioning>
    ```

    **Sample request**

    ```xml
    <putBucketVersioning>
        <bucketName>signv4test</bucketName>
        <status>Enabled</status>
    </putBucketVersioning>
    ```

??? note "deleteBucket"
    The deleteBucket operation deletes the bucket named in the URI. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucket>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucket>
    ```

    **Sample request**

    ```xml
    <deleteBucket>
        <bucketName>signv4test</bucketName>
    </deleteBucket>
    ```

??? note "deleteBucketPolicy"
    The deleteBucketPolicy operation deletes the policy on a specified bucket. To use the operation, you must have DeletePolicy permissions on the specified bucket and be the bucket owner. If you do not have DeletePolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions but are not the bucket owner, Amazon S3 returns a 405 Method Not Allowed error. If the bucket does not have a policy, Amazon S3 returns a 204 No Content error. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketPolicyRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketPolicy>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketPolicy>
    ```

    **Sample request**

    ```xml
    <deleteBucketPolicy>
        <bucketName>signv4test</bucketName>
    </deleteBucketPolicy>
    ```

??? note "deleteBucketCORS"
    The deleteBucketCORS operation deletes the CORS configuration information set for the bucket. To use this operation, you must have permission to perform the s3:PutCORSConfiguration action. The bucket owner has this permission by default and can grant this permission to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketCorsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketCORS>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketCORS>
    ```

    **Sample request**

    ```xml
    <deleteBucketCORS>
        <bucketName>signv4test</bucketName>
    </deleteBucketCORS>
    ```

??? note "deleteBucketLifecycle"
    The deleteBucketLifecycle operation deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration. To use this operation, you must have permission to perform the s3:PutLifecycleConfiguration action. By default, the bucket owner has this permission and the bucket owner can grant this permission to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketLifecycleRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketLifecycle>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketLifecycle>
    ```

    **Sample request**

    ```xml
    <deleteBucketLifecycle>
        <bucketName>signv4test</bucketName>
    </deleteBucketLifecycle>
    ```

??? note "deleteBucketReplication"
    The deleteBucketReplication operation deletes the replication subresource associated with the specified bucket. This operation requires permission for the s3:DeleteReplicationConfiguration action. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketReplicationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketReplication>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketReplication>
    ```

    **Sample request**

    ```xml
    <deleteBucketReplication>
        <bucketName>signv4test</bucketName>
    </deleteBucketReplication>
    ```

??? note "deleteBucketTagging"
    The deleteBucketTagging operation uses the tagging subresource to remove a tag set from the specified bucket. To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketTaggingRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketTagging>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketTagging>
    ```

    **Sample request**

    ```xml
    <deleteBucketTagging>
        <bucketName>signv4test</bucketName>
    </deleteBucketTagging>
    ```

??? note "deleteBucketWebsiteConfiguration"
    The deleteBucketWebsiteConfiguration operation removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. It also returns a 200 response if the website configuration you are trying to delete does not exist on the bucket, and a 404 response if the bucket itself does not exist. This operation requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteBucketWebsiteRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.deleteBucketWebsiteConfiguration>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.deleteBucketWebsiteConfiguration>
    ```

    **Sample request**

    ```xml
    <deleteBucketWebsiteConfiguration>
        <bucketName>signv4test</bucketName>
    </deleteBucketWebsiteConfiguration>
    ```

??? note "listObjects"
    The listObjects operation returns some or all (up to 1000) of the objects in a bucket. The request parameters act as selection criteria to return a subset of the objects in a bucket. To use this implementation of the operation, the user must have READ access to the bucket. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/ListObjectsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | delimiter | A delimiter is a character used to group keys. All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If the prefix parameter is not specified, the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response. | Optional |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. An object key can contain any Unicode character. However, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, this parameter can be added to request Amazon S3 to encode the keys in the response. | Optional |
    | marker | Specifies the key to start with when listing objects in a bucket. Amazon S3 lists objects in alphabetical order. | Optional |
    | maxKeys | Sets the maximum number of keys returned in the response body. The response might contain fewer keys but will never contain more. | Optional |
    | prefix | Limits the response to keys that begin with the specified prefix. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |

    **Sample configuration**

    ```xml
    <amazons3.listObjects>
        <bucketName>{$ctx:bucketName}</bucketName>
        <delimiter>{$ctx:delimiter}</delimiter>
        <encodingType>{$ctx:encodingType}</encodingType>
        <marker>{$ctx:marker}</marker>
        <maxKeys>{$ctx:maxKeys}</maxKeys>
        <prefix>{$ctx:prefix}</prefix>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.listObjects>
    ```

    **Sample request**

    ```xml
    <listObjects>
        <bucketName>signv4test</bucketName>
        <maxKeys>3</maxKeys>
        <prefix>images</prefix>
    </listObjects>
    ```

??? note "getBucketLifecycleConfiguration"
    The getBucketLifecycleConfiguration operation returns the lifecycle configuration information set on the bucket. To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default and can grant this permission to others. There is usually some time lag before lifecycle configuration deletion is fully propagated to all the Amazon S3 systems. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketLifecycleConfigurationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketLifecycleConfiguration>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketLifecycleConfiguration>
    ```

    **Sample request**

    ```xml
    <getBucketLifecycleConfiguration>
        <bucketName>signv4test</bucketName>
    </getBucketLifecycleConfiguration>
    ```

??? note "putBucketCORS"
    The putBucketCORS operation sets the CORS configuration for the bucket. To use this operation, you must have permission to perform the s3:PutBucketCORS action. By default, the bucket owner has this permission and can grant it to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutBucketCorsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | corsConfiguration | Container for up to 100 CORSRules elements. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.putBucketCORS>
        <bucketName>{$ctx:bucketName}</bucketName>
        <corsConfiguration>{$ctx:corsConfiguration}</corsConfiguration>
    </amazons3.putBucketCORS>
    ```

    **Sample request**

    ```xml
    <putBucketCORS>
        <bucketName>signv4test</bucketName>
        <corsConfiguration>
            <CORSRule>
                <AllowedOrigin>*</AllowedOrigin>
                <AllowedMethod>GET</AllowedMethod>
                <AllowedHeader>*</AllowedHeader>
                <MaxAgeSeconds>3000</MaxAgeSeconds>
            </CORSRule>
        </corsConfiguration>
    </putBucketCORS>
    ```

??? note "getBucketCORS"
    The getBucketCORS operation returns the CORS configuration information set for the bucket. To use this operation, you must have permission to perform the s3:getBucketCORS action. By default, the bucket owner has this permission and can grant it to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketCorsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketCORS>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketCORS>
    ```

    **Sample request**

    ```xml
    <getBucketCORS>
        <bucketName>signv4test</bucketName>
    </getBucketCORS>
    ```

??? note "getBucketLocation"
    The getBucketLocation operation returns the Region in which the bucket resides. To use this operation, you must be the bucket owner. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketLocationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketLocation>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketLocation>
    ```

    **Sample request**

    ```xml
    <getBucketLocation>
        <bucketName>signv4test</bucketName>
    </getBucketLocation>
    ```

??? note "getBucketLogging"
    The getBucketLogging operation returns the logging status of a bucket and the permissions users have to view and modify that status. To use this operation, you must be the bucket owner. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketLoggingRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketLogging>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketLogging>
    ```

    **Sample request**

    ```xml
    <getBucketLogging>
        <bucketName>signv4test</bucketName>
    </getBucketLogging>
    ```

??? note "getBucketNotificationConfiguration"
    The getBucketNotificationConfiguration operation returns the notification configuration of a bucket. By default, you must be the bucket owner to read the notification configuration of a bucket. However, the bucket owner can use a bucket policy to grant permission to other users to read this configuration with the s3:getBucketNotificationConfiguration permission. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketNotificationConfigurationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketNotificationConfiguration>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketNotificationConfiguration>
    ```

    **Sample request**

    ```xml
    <getBucketNotificationConfiguration>
        <bucketName>signv4test</bucketName>
    </getBucketNotificationConfiguration>
    ```

??? note "getBucketTagging"
    The getBucketTagging operation returns the tag set associated with the bucket. To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketTaggingRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketTagging>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketTagging>
    ```

    **Sample request**

    ```xml
    <getBucketTagging>
        <bucketName>signv4test</bucketName>
    </getBucketTagging>
    ```

??? note "getBucketReplication"
    The getBucketReplication operation returns the replication configuration set on the bucket. To use this operation, you must have permission to perform the s3:GetReplicationConfiguration action. For more information about permissions, go to Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketReplicationRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketReplication>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketReplication>
    ```

    **Sample request**

    ```xml
    <getBucketReplication>
        <bucketName>signv4test</bucketName>
    </getBucketReplication>
    ```

??? note "getBucketPolicy"
    The getBucketPolicy operation returns the policy of a specified bucket. To use this operation, the user must have GetPolicy permissions on the specified bucket and must be the bucket owner. If the user does not have GetPolicy permissions, Amazon S3 returns a 403 Access Denied error. If the user has the correct permissions but is not the bucket owner, Amazon S3 returns a 405 Method Not Allowed error. If the bucket does not have a policy, Amazon S3 returns a 404 Policy Not Found error. There are restrictions about who can create bucket policies and which objects in a bucket they can apply to. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketPolicyRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketPolicy>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketPolicy>
    ```

    **Sample request**

    ```xml
    <getBucketPolicy>
        <bucketName>signv4test</bucketName>
    </getBucketPolicy>
    ```

??? note "listObjectVersions"
    The listObjectVersions operation lists metadata about all of the versions of objects in a bucket. Request parameters can be used as selection criteria to return metadata about a subset of all the object versions. To use this operation, the user must have READ access to the bucket. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/ListObjectVersionsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | delimiter | A delimiter is a character used to group keys. | Optional |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. | Optional |
    | keyMarker | Specifies the key in the bucket that you want to start listing from. See also versionIdMarker below. | Optional |
    | maxKeys | Sets the maximum number of keys returned in the response body. | Optional |
    | prefix | Limits the response to keys that begin with the specified prefix. | Optional |
    | versionIdMarker | Specifies the object version you want to start listing from. | Optional |

    **Sample configuration**

    ```xml
    <amazons3.listObjectVersions>
        <bucketName>{$ctx:bucketName}</bucketName>
        <delimiter>{$ctx:delimiter}</delimiter>
        <encodingType>{$ctx:encodingType}</encodingType>
        <keyMarker>{$ctx:keyMarker}</keyMarker>
        <maxKeys>{$ctx:maxKeys}</maxKeys>
        <prefix>{$ctx:prefix}</prefix>
        <versionIdMarker>{$ctx:versionIdMarker}</versionIdMarker>
    </amazons3.listObjectVersions>
    ```

    **Sample request**

    ```xml
    <listObjectVersions>
        <bucketName>testkeerthu1234</bucketName>
        <delimiter>/</delimiter>
        <maxKeys>3</maxKeys>
        <prefix>images</prefix>
    </listObjectVersions>
    ```

??? note "getBucketRequestPayment"
    The getBucketRequestPayment operation returns the request payment configuration of a bucket. To use this operation, the user must be the bucket owner. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketRequestPaymentRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketRequestPayment>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketRequestPayment>
    ```

    **Sample request**

    ```xml
    <getBucketRequestPayment>
        <bucketName>signv4test</bucketName>
    </getBucketRequestPayment>
    ```

??? note "getBucketVersioning"
    The getBucketVersioning operation returns the versioning state of a bucket. To retrieve the versioning state of a bucket, the user must be the bucket owner. This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketVersioningRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketVersioning>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketVersioning>
    ```

    **Sample request**

    ```xml
    <getBucketVersioning>
        <bucketName>signv4test</bucketName>
    </getBucketVersioning>
    ```

??? note "getBucketWebsite"
    The getBucketWebsite operation returns the website configuration associated with a bucket. To host the website on Amazon S3, a bucket can be configured as a website by adding a website configuration. This operation requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketWebsiteRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketWebsite>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketWebsite>
    ```

    **Sample request**

    ```xml
    <getBucketWebsite>
        <bucketName>signv4test</bucketName>
    </getBucketWebsite>
    ```

??? note "getBucketACL"
    The getBucketACL operation returns the access control list (ACL) of a bucket. To return the ACL of the bucket, the user must have READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without authorization. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetBucketAclRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.getBucketACL>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.getBucketACL>
    ```

    **Sample request**

    ```xml
    <getBucketACL>
        <bucketName>signv4test</bucketName>
    </getBucketACL>
    ```

??? note "headBucket"
    The headBucket operation is useful to determine if a bucket exists and you have permission to access it. The operation returns a 200 OK if the bucket exists and you have permission to access it. Otherwise, the operation might return responses such as 404 Not Found and 403 Forbidden. See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/HeadBucketRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |

    **Sample configuration**

    ```xml
    <amazons3.headBucket>
        <bucketName>{$ctx:bucketName}</bucketName>
    </amazons3.headBucket>
    ```

    **Sample request**

    ```xml
    <headBucket>
        <bucketName>1513162931643testconbkt2</bucketName>
    </headBucket>
    ```

??? note "listMultipartUploads"
    The listMultipartUploads operation lists in-progress multipart uploads. A multipart upload is in progress when it has been initiated using the Initiate Multipart Upload request but has not yet been completed or aborted. It returns a default of 1000 multipart uploads in the response. The number of uploads can be further limited in a response by specifying the maxUploads property. If additional multipart uploads satisfy the list criteria, the response will contain an "IsTruncated" element with the value "true". To list the additional multipart uploads, use the keyMarker and uploadIdMarker request parameters.

    In the response, the uploads are sorted by key. If the application has initiated more than one multipart upload using the same object key, uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/ListMultipartUploadsRequest.html) for more information.

    | Parameter Name | Description | Required |
    | --- | --- | --- |
    | bucketName | The name of the bucket. | Yes |
    | delimiter | A delimiter is a character you use to group keys. All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If you do not specify the prefix parameter, the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response. | Optional |
    | encodingType | Requests Amazon S3 to encode the response and specifies the encoding method to use. An object key can contain any Unicode character. However, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, you can add this parameter to request Amazon S3 to encode the keys in the response. | Optional |
    | maxUploads | Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body. 1,000 is the maximum number of uploads that can be returned in a response. | Optional |
    | keyMarker | Specifies the key to start with when listing objects in a bucket. Amazon S3 lists objects in alphabetical order. | Optional |
    | prefix | Limits the response to keys that begin with the specified prefix. | Optional |
    | uploadIdMarker | Specifies the multipart upload after which listing should begin. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.listMultipartUploads>
        <bucketName>{$ctx:bucketName}</bucketName>
        <delimiter>{$ctx:delimiter}</delimiter>
        <encodingType>{$ctx:encodingType}</encodingType>
        <maxUploads>{$ctx:maxUploads}</maxUploads>
        <keyMarker>{$ctx:keyMarker}</keyMarker>
        <prefix>{$ctx:prefix}</prefix>
        <uploadIdMarker>{$ctx:uploadIdMarker}</uploadIdMarker>
    </amazons3.listMultipartUploads>
    ```

    **Sample request**

    ```xml
    <listMultipartUploads>
        <bucketName>signv4test</bucketName>
    </listMultipartUploads>
    ```
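    When a listing is truncated, the markers from the previous response drive the next page. The following request payload is an illustrative sketch (the marker values below are hypothetical placeholders) that resumes listing after a given key and upload ID:

    ```xml
    <listMultipartUploads>
        <bucketName>signv4test</bucketName>
        <maxUploads>100</maxUploads>
        <keyMarker>photos/2006/February/sample.jpg</keyMarker>
        <uploadIdMarker>VONszTPldyDo80ARdEMI2kVxEBLQYY1t</uploadIdMarker>
    </listMultipartUploads>
    ```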
### Objects

??? note "deleteObject"
    The deleteObject operation removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there is no null version, Amazon S3 does not remove any objects.

    If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the mfa parameter in the request. For more information about MFA Delete, see Using MFA Delete.

    Following is the proxy configuration for init and deleteObject. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/clouddirectory/model/DeleteObjectRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object to be deleted. | Yes |
    | versionId | Version ID of an object, used to remove a specific object version. | Optional |
    | bypassGovernanceRetention | Indicates whether S3 Object Lock should bypass Governance-mode restrictions to process this operation. | Optional |
    | mfa | Required to permanently delete a versioned object if versioning is configured with MFA Delete enabled. The value is the concatenation of the authentication device's serial number, a space, and the value displayed on your authentication device. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    > **Note**: To remove a specific version, the user must be the bucket owner and must use the versionId sub-resource, which permanently deletes the version.

    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.deleteObject>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <versionId>{$ctx:versionId}</versionId>
        <bypassGovernanceRetention>{$ctx:bypassGovernanceRetention}</bypassGovernanceRetention>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.deleteObject>
    ```

    **Sample request**

    ```xml
    <deleteObject>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject1</objectKey>
        <versionId>FHbrL3xf2TK54hLNWWArYI79woSElvHf</versionId>
    </deleteObject>
    ```
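    When MFA Delete is enabled on the bucket, a versioned delete must also carry the mfa parameter described above. The following is a hedged sketch of such a payload; the serial number and token are hypothetical placeholders:

    ```xml
    <deleteObject>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject1</objectKey>
        <versionId>FHbrL3xf2TK54hLNWWArYI79woSElvHf</versionId>
        <!-- Device serial number, a space, then the current code from the device -->
        <mfa>20899872 301749</mfa>
    </deleteObject>
    ```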
??? note "deleteObjects"
    The deleteObjects operation deletes multiple objects from a bucket using a single HTTP request. If the object keys that need to be deleted are known, this operation provides a suitable alternative to sending individual delete requests (deleteObject). The deleteObjects request contains a list of up to 1,000 keys that the user wants to delete. In the XML, you provide the object key names, and optionally provide version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that deletion, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.

    The deleteObjects operation supports two modes for the response: verbose and quiet. By default, the operation uses the verbose mode, in which the response includes the result of the deletion of each key in the request. In quiet mode, the response includes only the keys for which the delete operation encountered an error; for a successful deletion, the operation does not return any information in the response body.

    When using the deleteObjects operation to delete a versioned object on an MFA Delete enabled bucket, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are attempting to delete. Similarly, if you provide an invalid token, the entire request will fail, regardless of whether there are versioned keys in the request. For more information about MFA Delete, see MFA Delete.

    Following is the proxy configuration for init and deleteObjects. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/DeleteObjectsRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | Name of the bucket. | Yes |
    | bypassGovernanceRetention | Indicates whether S3 Object Lock should bypass Governance-mode restrictions to process this operation. | Optional |
    | mfa | Required to permanently delete a versioned object if versioning is configured with MFA Delete enabled. The value is the concatenation of the authentication device's serial number, a space, and the value displayed on your authentication device. | Optional |
    | deleteConfig | The configuration for deleting the objects. It contains the following properties:<br/>• Delete: Container for the request.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Quiet: Enable quiet mode for the request. When you add this element, you must set its value to true. Default is false.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Object: Container element that describes the delete request for each object.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;• Key: Key name of the object to delete.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;• VersionId: Version ID for the specific version of the object to delete. | Yes |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.deleteObjects>
        <bucketName>{$ctx:bucketName}</bucketName>
        <bypassGovernanceRetention>{$ctx:bypassGovernanceRetention}</bypassGovernanceRetention>
        <deleteConfig>{$ctx:deleteConfig}</deleteConfig>
        <mfa>{$ctx:mfa}</mfa>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.deleteObjects>
    ```

    **Sample request**

    ```xml
    <deleteObjects>
        <bucketName>signv4test</bucketName>
        <deleteConfig>
            <Delete>
                <Object>
                    <Key>testobject33</Key>
                    <VersionId>M46OVgxl4lHBNCeZwBpEZvGhj0k5vvjK</VersionId>
                </Object>
                <Object>
                    <Key>testObject1</Key>
                    <VersionId>PwbvPU.yn3YcHOCF8bntKeTdzfKQC6jN</VersionId>
                </Object>
            </Delete>
        </deleteConfig>
    </deleteObjects>
    ```
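    For large batches, the quiet mode documented above cuts the response down to failures only. A sketch of such a payload, enabling Quiet inside the Delete container:

    ```xml
    <deleteObjects>
        <bucketName>signv4test</bucketName>
        <deleteConfig>
            <Delete>
                <!-- Only keys that fail to delete are reported back -->
                <Quiet>true</Quiet>
                <Object>
                    <Key>testobject33</Key>
                </Object>
                <Object>
                    <Key>testObject1</Key>
                </Object>
            </Delete>
        </deleteConfig>
    </deleteObjects>
    ```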
??? note "getObject"
    The getObject operation retrieves objects from Amazon S3. To use this operation, the user must have READ access to the object. If the user grants READ access to the anonymous user, the object can be returned without authorization. By default, this operation returns the latest version of the object.

    An Amazon S3 bucket has no directory hierarchy such as in a typical computer file system. However, a logical hierarchy can be created by using object key names that imply a folder structure. For example, instead of naming an object sample.jpg, it could be named photos/2006/February/sample.jpg. To retrieve an object from such a logical hierarchy, the full key name for the object should be specified.

    For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the resource as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the resource as /examplebucket/photos/2006/February/sample.jpg. If the object to be retrieved is a GLACIER storage class object, the object is archived in Amazon Glacier, and you must first restore a copy before retrieving the object. Otherwise, this operation returns the "InvalidObjectStateError" error.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/mediastoredata/model/GetObjectRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object to retrieve. | Yes |
    | responseContentType | Content-Type property of the response. | Optional |
    | responseContentLanguage | Content-Language property of the response. | Optional |
    | responseExpires | Expires property of the response. | Optional |
    | responseCacheControl | Cache-Control property of the response. | Optional |
    | responseContentDisposition | Content-Disposition property of the response. | Optional |
    | responseContentEncoding | Content-Encoding property of the response. | Optional |
    | range | HTTP range property. | Optional |
    | ifModifiedSince | Return the object only if it has been modified since the specified time. | Optional |
    | ifUnmodifiedSince | Return the object only if it has not been modified since the specified time. | Optional |
    | ifMatch | Return the object only if its ETag is the same as the one specified. | Optional |
    | ifNoneMatch | Return the object only if its ETag is not the same as the one specified. | Optional |
    | versionId | VersionId used to reference a specific version of the object. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | partNumber | Part number of the object being read. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    | destinationFilePath | Specifies the path to the local file where the contents of the response need to be written. If the destination file already exists, or if the file is not writable by the current user, an exception will be thrown.<br/>**Note**: This parameter is available only with Amazon S3 connector v2.0.1 and above. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.getObject>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <responseContentType>{$ctx:responseContentType}</responseContentType>
        <responseContentLanguage>{$ctx:responseContentLanguage}</responseContentLanguage>
        <responseExpires>{$ctx:responseExpires}</responseExpires>
        <responseCacheControl>{$ctx:responseCacheControl}</responseCacheControl>
        <responseContentDisposition>{$ctx:responseContentDisposition}</responseContentDisposition>
        <responseContentEncoding>{$ctx:responseContentEncoding}</responseContentEncoding>
        <range>{$ctx:range}</range>
        <ifModifiedSince>{$ctx:ifModifiedSince}</ifModifiedSince>
        <ifUnmodifiedSince>{$ctx:ifUnmodifiedSince}</ifUnmodifiedSince>
        <ifMatch>{$ctx:ifMatch}</ifMatch>
        <ifNoneMatch>{$ctx:ifNoneMatch}</ifNoneMatch>
        <versionId>{$ctx:versionId}</versionId>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <partNumber>{$ctx:partNumber}</partNumber>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.getObject>
    ```

    **Sample request**

    ```xml
    <getObject>
        <bucketName>signv4test</bucketName>
        <objectKey>Tree2.png</objectKey>
        <partNumber>1</partNumber>
    </getObject>
    ```
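    The range and conditional parameters let a client fetch only part of an object or skip unchanged content. For example, the following payload sketch downloads only the first kilobyte of the object; the byte range is an arbitrary illustration:

    ```xml
    <getObject>
        <bucketName>signv4test</bucketName>
        <objectKey>Tree2.png</objectKey>
        <!-- Standard HTTP Range syntax: first 1024 bytes -->
        <range>bytes=0-1023</range>
    </getObject>
    ```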
??? note "putObject"
    The putObject operation adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it. Amazon S3 does not add partial objects, so if a success response is received, the entire object is added to the bucket. Because Amazon S3 is a distributed system, if it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written.

    To ensure that data is not corrupted traversing the network, use the Content-MD5 parameter. When it is used, Amazon S3 checks the object against the provided MD5 value and, if they do not match, it returns an error. Additionally, you can calculate the MD5 value while putting an object to Amazon S3 and compare the returned ETag with the calculated MD5 value.

    When uploading an object, you can specify the accounts or groups that should be granted specific permissions on the object. There are two ways to grant the appropriate permissions using the request parameters: either specify a canned (predefined) ACL using the "acl" parameter, or specify access permissions explicitly using the "grantRead", "grantReadACP", "grantWriteACP", and "grantFullControl" parameters. These parameters map to the set of permissions that Amazon S3 supports in an ACL. Use only one approach, not both.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/clouddirectory/model/putObjectRequest.html) for more information.

    !!! note
        The `fileContent` parameter is available only with Amazon S3 connector v2.0.2 and above. Either the `filePath` or the `fileContent` parameter is mandatory.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name to give the newly created object. | Yes |
    | filePath | The path of the source file to be uploaded. | Optional |
    | fileContent | Content of the file. | Optional |
    | acl | The canned ACL to apply to the object. | Optional |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Optional |
    | contentDisposition | This specifies presentational information for the object. | Optional |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Optional |
    | contentLanguage | The language the content is in. | Optional |
    | contentType | A standard MIME type describing the format of the object data. | Optional |
    | contentMD5 | The base64-encoded 128-bit MD5 digest of the message according to RFC 1864. | Optional |
    | expires | This specifies the date and time at which the object is no longer cacheable. | Optional |
    | grantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Optional |
    | grantReadACP | Allows the specified grantee or grantees to read the bucket ACL. | Optional |
    | grantWriteACP | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Optional |
    | grantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Optional |
    | metadata | The metadata, as comma-separated key-value pairs. The key and value are separated by ':'. | Optional |
    | serverSideEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | storageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | websiteRedirectLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this parameter in the object metadata. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | ssekmsKeyId | Specifies the ID of the symmetric customer managed AWS KMS CMK to use for object encryption. | Optional |
    | ssekmsEncryptionContext | Specifies the AWS KMS Encryption Context to use for object encryption. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    | tagging | The tag-set for the object. The tag-set must be encoded as URL query parameters (for example, "Key1=Value1"). This must be used in conjunction with the TaggingDirective. | Optional |
    | objectLockMode | The object lock mode that you want to apply to the uploaded object. | Optional |
    | objectLockRetainUntilDate | Specifies the date and time when you want the Object Lock to expire. | Optional |
    | objectLockLegalHoldStatus | Specifies whether you want to apply a Legal Hold to the uploaded object. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.putObject>
        <bucketName>{$url:bucketName}</bucketName>
        <objectKey>{$url:objectKey}</objectKey>
        <filePath>{$url:filePath}</filePath>
        <acl>{$ctx:acl}</acl>
        <cacheControl>{$ctx:cacheControl}</cacheControl>
        <contentDisposition>{$ctx:contentDisposition}</contentDisposition>
        <contentEncoding>{$ctx:contentEncoding}</contentEncoding>
        <contentLanguage>{$ctx:contentLanguage}</contentLanguage>
        <contentType>{$ctx:contentType}</contentType>
        <expires>{$ctx:expires}</expires>
        <grantRead>{$ctx:grantRead}</grantRead>
        <grantReadACP>{$ctx:grantReadACP}</grantReadACP>
        <grantWriteACP>{$ctx:grantWriteACP}</grantWriteACP>
        <grantFullControl>{$ctx:grantFullControl}</grantFullControl>
        <metadata>{$ctx:metadata}</metadata>
        <serverSideEncryption>{$ctx:serverSideEncryption}</serverSideEncryption>
        <storageClass>{$ctx:storageClass}</storageClass>
        <websiteRedirectLocation>{$ctx:websiteRedirectLocation}</websiteRedirectLocation>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <ssekmsKeyId>{$ctx:ssekmsKeyId}</ssekmsKeyId>
        <ssekmsEncryptionContext>{$ctx:ssekmsEncryptionContext}</ssekmsEncryptionContext>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
        <tagging>{$ctx:tagging}</tagging>
        <objectLockMode>{$ctx:objectLockMode}</objectLockMode>
        <objectLockRetainUntilDate>{$ctx:objectLockRetainUntilDate}</objectLockRetainUntilDate>
        <objectLockLegalHoldStatus>{$ctx:objectLockLegalHoldStatus}</objectLockLegalHoldStatus>
    </amazons3.putObject>
    ```

    **Sample request**

    ```xml
    <putObject>
        <bucketName>signv4test</bucketName>
        <objectKey>s3_image.jpg</objectKey>
        <filePath>/Users/mine/Desktop/S3_img.jpg</filePath>
    </putObject>
    ```
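    With connector v2.0.2 and above, a small payload can be uploaded without a local file by passing fileContent instead of filePath. The following is a hedged sketch; the object key and content are illustrative only:

    ```xml
    <putObject>
        <bucketName>signv4test</bucketName>
        <objectKey>hello.txt</objectKey>
        <!-- Inline content replaces the filePath parameter -->
        <fileContent>Hello, Amazon S3!</fileContent>
        <contentType>text/plain</contentType>
    </putObject>
    ```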
??? note "putObjectAcl"
    The putObjectAcl operation sets the access control list (ACL) permissions for an object that already exists in a bucket. You can specify the ACL in the request body or specify permissions using request parameters, depending on the application's needs. For example, if there is an existing application that updates an object ACL using the request body, you can continue to use that approach.

    The ACL of an object is set at the object version level. By default, putObjectAcl sets the ACL of the latest version of an object. To set the ACL of a different version, use the versionId property.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/PutObjectAclRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | Name of the object whose ACL needs to be set. | Yes |
    | accessControlList | Container for ACL information, which includes the following:<br/>• Grant: Container for the grantee and permissions.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Grantee: The subject whose permissions are being set.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;• ID: ID of the grantee.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;• DisplayName: Screen name of the grantee.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Permission: Specifies the permission to give to the grantee. | Yes |
    | versionId | Version ID of an object, used to address a specific object version. | Optional |
    | acl | The canned ACL to apply to the object. | Optional |
    | grantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Optional |
    | grantWrite | Allows the specified grantee or grantees to create, overwrite, and delete any object in the bucket. | Optional |
    | grantReadACP | Allows the specified grantee or grantees to read the bucket ACL. | Optional |
    | grantWriteACP | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Optional |
    | grantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.putObjectAcl>
        <objectKey>{$ctx:objectKey}</objectKey>
        <bucketName>{$ctx:bucketName}</bucketName>
        <accessControlList>{$ctx:accessControlList}</accessControlList>
        <versionId>{$ctx:versionId}</versionId>
        <acl>{$ctx:acl}</acl>
        <grantRead>{$ctx:grantRead}</grantRead>
        <grantReadACP>{$ctx:grantReadACP}</grantReadACP>
        <grantWrite>{$ctx:grantWrite}</grantWrite>
        <grantWriteACP>{$ctx:grantWriteACP}</grantWriteACP>
        <grantFullControl>{$ctx:grantFullControl}</grantFullControl>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.putObjectAcl>
    ```

    **Sample request**

    ```xml
    <putObjectAcl>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject2</objectKey>
        <versionId>FHbrL3xf2TK54hLNWWArYI79woSElvHf</versionId>
        <acl>authenticated-read</acl>
        <accessControlList>
            <Owner>
                <ID>c6567b8c9274b78d6af4a3080c5e43e700f560f3517b7d9acc87251412044c35</ID>
            </Owner>
            <Grant>
                <Grantee>
                    <ID>c6567b8c9274b78d6af4a3080c5e43e700f560f3517b7d9acc87251412044c35</ID>
                    <DisplayName>pe.chanaka.ck@gmail.com</DisplayName>
                </Grantee>
                <Permission>WRITE_ACP</Permission>
            </Grant>
            <Grant>
                <Grantee>
                    <ID>c6567b8c9274b78d6af4a3080c5e43e700f560f3517b7d9acc87251412044c35</ID>
                    <DisplayName>pe.chanaka.ck@gmail.com</DisplayName>
                </Grantee>
                <Permission>READ</Permission>
            </Grant>
        </accessControlList>
    </putObjectAcl>
    ```
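    When a predefined grant is sufficient, a canned ACL alone can be sent without the accessControlList container (remember that the two approaches should not be mixed). A minimal sketch:

    ```xml
    <putObjectAcl>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject2</objectKey>
        <!-- Canned ACL replaces the explicit grant list -->
        <acl>public-read</acl>
    </putObjectAcl>
    ```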
??? note "copyBucketObject"
    The copyBucketObject operation creates a copy of an object that is already stored in Amazon S3. This operation is the same as performing a GET and then a PUT. Adding the "copySource" parameter enables you to copy the source object into the destination bucket.

    When copying an object, most of the metadata (default) can be preserved, or new metadata can be specified. However, the ACL is not preserved and is set to "private" for the user making the request. All copy requests must be authenticated and cannot contain a message body. Additionally, the user must have READ access to the source object and WRITE access to the destination bucket. To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the "copySourceIfMatch", "copySourceIfNoneMatch", "copySourceIfUnmodifiedSince", or "copySourceIfModifiedSince" parameters.

    There are two instances when the copy request could return an error. One is when Amazon S3 receives the copy request, and the other can occur while Amazon S3 is copying the files. If the error occurs before the copy operation starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. If the request is an HTTP 1.1 request, the response is chunk encoded. Otherwise, it will not contain the content-length, and you will need to read the entire body.

    When copying an object, the accounts or groups that should be granted specific permissions on the object can be specified. There are two ways to grant the appropriate permissions using the request: one is to specify a canned (predefined) ACL using the "acl" parameter, and the other is to specify access permissions explicitly using the "grantRead", "grantReadACP", "grantWriteACP", and "grantFullControl" parameters. These parameters map to the set of permissions that Amazon S3 supports in an ACL. Use only one approach, not both.

    Following is the proxy configuration for init and copyBucketObject. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CopyObjectRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | acl | The canned ACL to apply to the object. | Optional |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Optional |
    | contentDisposition | This specifies presentational information for the object. | Optional |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Optional |
    | contentLanguage | The language the content is in. | Optional |
    | contentType | A standard MIME type describing the format of the object data. | Optional |
    | grantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Optional |
    | grantReadACP | Allows the specified grantee or grantees to read the bucket ACL. | Optional |
    | grantWriteACP | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Optional |
    | grantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Optional |
    | copySource | The name of the source bucket and key name of the source object, separated by a slash (/). | Yes |
    | metadataDirective | Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request. | Optional |
    | metadata | New metadata to replace, as comma-separated key-value pairs. The key and value are separated by ':'. | Optional |
    | taggingDirective | Specifies whether the object tag-set is copied from the source object or replaced with the tag-set provided in the request. | Optional |
    | copySourceIfMatch | Copies the object if its entity tag (ETag) matches the specified tag. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | copySourceIfNoneMatch | Copies the object if its entity tag (ETag) is different from the specified ETag. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | copySourceIfUnmodifiedSince | Copies the object if it has not been modified since the specified time. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | copySourceIfModifiedSince | Copies the object if it has been modified since the specified time. Otherwise, the request returns a 412 HTTP status code error (failed precondition). | Optional |
    | expires | The date and time at which the object is no longer cacheable. | Optional |
    | serverSideEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | storageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | websiteRedirectLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this parameter in the object metadata. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | ssekmsKeyId | Specifies the ID of the symmetric customer managed AWS KMS CMK to use for object encryption. | Optional |
    | ssekmsEncryptionContext | Specifies the AWS KMS Encryption Context to use for object encryption. | Optional |
    | copySourceSSECustomerAlgorithm | Specifies the algorithm to use when decrypting the source object. | Optional |
    | copySourceSSECustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. | Optional |
    | copySourceSSECustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    | tagging | The tag-set for the object. The tag-set must be encoded as URL query parameters (for example, "Key1=Value1"). This must be used in conjunction with the TaggingDirective. | Optional |
    | objectLockMode | The object lock mode that you want to apply to the uploaded object. | Optional |
    | objectLockRetainUntilDate | Specifies the date and time when you want the Object Lock to expire. | Optional |
    | objectLockLegalHoldStatus | Specifies whether you want to apply a Legal Hold to the uploaded object. | Optional |
    | destinationBucket | Name of the destination bucket to copy the object to. | Yes |
    | destinationKey | The destination key where the source will be copied. | Yes |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.copyBucketObject>
        <copySource>{$ctx:copySource}</copySource>
        <acl>{$ctx:acl}</acl>
        <cacheControl>{$ctx:cacheControl}</cacheControl>
        <contentDisposition>{$ctx:contentDisposition}</contentDisposition>
        <contentEncoding>{$ctx:contentEncoding}</contentEncoding>
        <contentLanguage>{$ctx:contentLanguage}</contentLanguage>
        <contentType>{$ctx:contentType}</contentType>
        <copySourceIfMatch>{$ctx:copySourceIfMatch}</copySourceIfMatch>
        <copySourceIfModifiedSince>{$ctx:copySourceIfModifiedSince}</copySourceIfModifiedSince>
        <copySourceIfNoneMatch>{$ctx:copySourceIfNoneMatch}</copySourceIfNoneMatch>
        <copySourceIfUnmodifiedSince>{$ctx:copySourceIfUnmodifiedSince}</copySourceIfUnmodifiedSince>
        <expires>{$ctx:expires}</expires>
        <grantRead>{$ctx:grantRead}</grantRead>
        <grantReadACP>{$ctx:grantReadACP}</grantReadACP>
        <grantWriteACP>{$ctx:grantWriteACP}</grantWriteACP>
        <grantFullControl>{$ctx:grantFullControl}</grantFullControl>
        <metadataDirective>{$ctx:metadataDirective}</metadataDirective>
        <metadata>{$ctx:metadata}</metadata>
        <taggingDirective>{$ctx:taggingDirective}</taggingDirective>
        <tagging>{$ctx:tagging}</tagging>
        <serverSideEncryption>{$ctx:serverSideEncryption}</serverSideEncryption>
        <storageClass>{$ctx:storageClass}</storageClass>
        <websiteRedirectLocation>{$ctx:websiteRedirectLocation}</websiteRedirectLocation>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <ssekmsKeyId>{$ctx:ssekmsKeyId}</ssekmsKeyId>
        <ssekmsEncryptionContext>{$ctx:ssekmsEncryptionContext}</ssekmsEncryptionContext>
        <copySourceSSECustomerAlgorithm>{$ctx:copySourceSSECustomerAlgorithm}</copySourceSSECustomerAlgorithm>
        <copySourceSSECustomerKey>{$ctx:copySourceSSECustomerKey}</copySourceSSECustomerKey>
        <copySourceSSECustomerKeyMD5>{$ctx:copySourceSSECustomerKeyMD5}</copySourceSSECustomerKeyMD5>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
        <objectLockMode>{$ctx:objectLockMode}</objectLockMode>
        <objectLockRetainUntilDate>{$ctx:objectLockRetainUntilDate}</objectLockRetainUntilDate>
        <objectLockLegalHoldStatus>{$ctx:objectLockLegalHoldStatus}</objectLockLegalHoldStatus>
        <destinationBucket>{$ctx:destinationBucket}</destinationBucket>
        <destinationKey>{$ctx:destinationKey}</destinationKey>
    </amazons3.copyBucketObject>
    ```

    **Sample request**

    ```xml
    <copyBucketObject>
        <destinationBucket>signv4test</destinationBucket>
        <copySource>/imagesBucket5/testObject37</copySource>
        <destinationKey>testObject5</destinationKey>
    </copyBucketObject>
    ```
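    A copy can be made conditional on the source's ETag, so the operation proceeds only while the source still matches a known version; otherwise a 412 (failed precondition) is returned. A hedged sketch, where the ETag value is a hypothetical placeholder:

    ```xml
    <copyBucketObject>
        <copySource>/imagesBucket5/testObject37</copySource>
        <!-- Copy only if the source has not changed since this ETag was recorded -->
        <copySourceIfMatch>"9b2cf535f27731c974343645a3985328"</copySourceIfMatch>
        <destinationBucket>signv4test</destinationBucket>
        <destinationKey>testObject5</destinationKey>
    </copyBucketObject>
    ```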
??? note "uploadPart"
    The uploadPart operation uploads a part in a multipart upload. In this operation, you provide part data in your request. However, you have the option of specifying an existing Amazon S3 object as the data source for the part being uploaded. You must initiate a multipart upload (see createMultipartUpload) before you can upload any part. In response to your initiate request, Amazon S3 returns an upload ID, which is the unique identifier that must be included in the upload part request.

    Part numbers can be any number from 1 to 10,000 (inclusive). A part number uniquely identifies a part and also defines its position within the object being created. If a new part is uploaded using the same part number that was used with a previous part, the previously uploaded part is overwritten. Each part must be at least 5 MB in size, except the last part. There is no size limit on the last part of your multipart upload.

    To ensure that data is not corrupted when traversing the network, specify the Content-MD5 parameter in the upload part request. Amazon S3 checks the part data against the provided MD5 value. If they do not match, Amazon S3 returns an error. After the multipart upload is initiated and one or more parts are uploaded, you must either complete or abort the multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort the multipart upload will Amazon S3 free up the parts storage and stop charging you for it.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/UploadPartRequest.html) for more information.

    !!! note
        The `fileContent` parameter is available only with Amazon S3 connector v2.0.2 and above. Either the `filePath` or the `fileContent` parameter is mandatory.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name to give the newly created object. | Yes |
    | uploadId | This specifies the ID of the initiated multipart upload. | Yes |
    | partNumber | Part number that identifies the part. | Yes |
    | filePath | The path of the source file to be uploaded. | Optional |
    | fileContent | Content of the file. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.uploadPart>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <partNumber>{$ctx:partNumber}</partNumber>
        <filePath>{$url:filePath}</filePath>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.uploadPart>
    ```

    **Sample request**

    ```xml
    <uploadPart>
        <bucketName>signv4test</bucketName>
        <objectKey>testObj.jpg</objectKey>
        <uploadId>cI0BzCZ7cx69YP.dhqpwEZAhgH7IzLVuOYjZVZdrmR9LSYAnxPqyYXlzHWGG3hgyH_MuJkTO8cltkaOK.TeG_7zBjFrjJduFCuFLDwah.ZXK7pvlTTDPQAaTRLW_o4FR</uploadId>
        <partNumber>1</partNumber>
        <filePath>/Users/mine/Desktop/S3_img.jpg</filePath>
    </uploadPart>
    ```

??? note "completeMultipartUpload"
    The completeMultipartUpload operation completes a multipart upload by assembling previously uploaded parts. You should first initiate the multipart upload using createMultipartUpload, and then upload all parts using uploadPart. After you successfully upload all relevant parts of an upload, call completeMultipartUpload to complete the upload. When you call completeMultipartUpload, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the completeMultipartUpload request, you must provide the complete parts list (see listParts). For each part in the list, the part number and the ETag value must be provided; these are returned when each part is uploaded.

    Processing of a completeMultipartUpload request can take several minutes. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends whitespace characters to keep the connection from timing out. Because a request could fail after the initial 200 OK response has been sent, it is important that you check the response body to determine whether the request succeeded. If completeMultipartUpload fails, applications should be prepared to retry the failed requests.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glacier/model/CompleteMultipartUploadRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | completedPartDetails | The container that holds the completed part details. The part details are as follows:<br/>• Part: The container for elements related to a previously uploaded part.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• PartNumber: The part number that identifies the part.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• ETag: The entity tag returned when the part is uploaded. | Yes |
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name to give the newly created object. | Yes |
    | uploadId | This specifies the ID of the current multipart upload. | Yes |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.completeMultipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <completedPartDetails>{$ctx:completedPartDetails}</completedPartDetails>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.completeMultipartUpload>
    ```

    **Sample request**

    ```xml
    <completeMultipartUpload>
        <bucketName>signv4test</bucketName>
        <objectKey>myimage.png</objectKey>
        <uploadId>VONszTPldyDo80ARdEMI2kVxEBLQYY1tncD7PpB54WDtLTACJIn.jWRIGo7iL_EkJYn9Z2BT3MM.kEqju9CgLyUveDtl6MgXzRYqjb8R4L.ZVpUhv25d56P2Tk1XnD0C</uploadId>
        <completedPartDetails>
            <Part>
                <PartNumber>1</PartNumber>
                <ETag>LKJLINTLNM9879NL7jNLk</ETag>
            </Part>
        </completedPartDetails>
    </completeMultipartUpload>
    ```

??? note "abortMultipartUpload"
    The abortMultipartUpload operation aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts. To verify that all parts have been removed so that you do not get charged for the part storage, call the listParts operation and ensure the parts list is empty.

    Following is the proxy configuration for init and abortMultipartUpload. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glacier/model/AbortMultipartUploadRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | uploadId | This specifies the ID of the current multipart upload. | Yes |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.abortMultipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.abortMultipartUpload>
    ```

    **Sample request**

    ```xml
    <abortMultipartUpload>
        <bucketName>signv4test</bucketName>
        <objectKey>myimage.png</objectKey>
        <uploadId>VONszTPldyDo80ARdEMI2kVxEBLQYY1tncD7PpB54WDtLTACJIn.jWRIGo7iL_EkJYn9Z2BT3MM.kEqju9CgLyUveDtl6MgXzRYqjb8R4L.ZVpUhv25d56P2Tk1XnD0C</uploadId>
    </abortMultipartUpload>
    ```

??? note "listParts"
    The listParts operation lists the parts that have been uploaded for a specific multipart upload.

    This operation must include the upload ID, which can be obtained using the createMultipartUpload operation. The listParts operation returns a maximum of 1,000 uploaded parts. The default number of parts returned is 1,000, but you can restrict the number of parts using the maxParts property. If the multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value of true and a NextPartNumberMarker element. In subsequent listParts requests, you can include the partNumberMarker query string parameter and set its value to the NextPartNumberMarker field value from the previous response.

    Following is the proxy configuration for init and listParts. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glacier/model/ListPartsRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | uploadId | The ID of the upload. | Yes |
    | maxParts | Maximum number of parts allowed in the response. | Optional |
    | partNumberMarker | Specifies the part after which listing should begin. Only parts with higher part numbers will be listed. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.listParts>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <maxParts>{$ctx:maxParts}</maxParts>
        <partNumberMarker>{$ctx:partNumberMarker}</partNumberMarker>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.listParts>
    ```

    **Sample request**

    ```xml
    <listParts>
        <bucketName>signv4test</bucketName>
        <objectKey>myimage.png</objectKey>
        <uploadId>KyxZ7yjpSSZM9f0bdRectMF5dPg2h08BqTsmWf.8OEIq2Z4YvYg01LmJL0kVDqVcz2utci2CDE2Cn7k647j_84GhExGAN9uer65jljH_oapI758RA_AmcyW4N2usGHH0</uploadId>
        <maxParts>100</maxParts>
        <partNumberMarker>8</partNumberMarker>
    </listParts>
    ```

??? note "createMultipartUpload"
    The createMultipartUpload operation initiates a multipart upload and returns an upload ID. This upload ID is used to associate all the parts in the specific multipart upload. You specify this upload ID in each of your subsequent uploadPart requests. You also include this upload ID in the final request to either complete or abort the multipart upload.

    For request signing, multipart upload is just a series of regular requests: you initiate the multipart upload, send one or more requests to upload parts (uploadPart), and finally complete the multipart upload (completeMultipartUpload). You sign each request individually. After you initiate the multipart upload and upload one or more parts, you must either complete or abort the multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort the multipart upload will Amazon S3 free up the parts storage and stop charging you for it.

    Following is the proxy configuration for init and createMultipartUpload. The init section has connection parameters.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CreateMultipartUploadRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | acl | The canned ACL to apply to the object. | Optional |
    | grantRead | Allows the specified grantee or grantees to list the objects in the bucket. | Optional |
    | grantReadACP | Allows the specified grantee or grantees to read the bucket ACL. | Optional |
    | grantWriteACP | Allows the specified grantee or grantees to write the ACL for the applicable bucket. | Optional |
    | grantFullControl | Allows the specified grantee or grantees the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Optional |
    | metadata | The metadata to store with the object, as comma-separated key-value pairs. | Optional |
    | serverSideEncryption | Specifies the server-side encryption algorithm to use when Amazon S3 creates the target object. | Optional |
    | storageClass | RRS enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. | Optional |
    | websiteRedirectLocation | If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this parameter in the object metadata. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | ssekmsKeyId | Specifies the ID of the symmetric customer managed AWS KMS CMK to use for object encryption. | Optional |
    | ssekmsEncryptionContext | Specifies the AWS KMS Encryption Context to use for object encryption. | Optional |
    | cacheControl | This can be used to specify caching behavior along the request or reply chain. | Optional |
    | contentDisposition | This specifies presentational information for the object. | Optional |
    | contentEncoding | This specifies what content encodings have been applied to the object. | Optional |
    | contentLanguage | The language the content is in. | Optional |
    | contentType | A standard MIME type describing the format of the object data. | Optional |
    | expires | The date and time at which the object is no longer cacheable. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    | tagging | The tag-set for the object. The tag-set must be encoded as URL query parameters (for example, "Key1=Value1"). This must be used in conjunction with the TaggingDirective. | Optional |
    | objectLockMode | The object lock mode that you want to apply to the uploaded object. | Optional |
    | objectLockRetainUntilDate | Specifies the date and time when you want the Object Lock to expire. | Optional |
    | objectLockLegalHoldStatus | Specifies whether you want to apply a Legal Hold to the uploaded object. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.createMultipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <acl>{$ctx:acl}</acl>
        <cacheControl>{$ctx:cacheControl}</cacheControl>
        <contentDisposition>{$ctx:contentDisposition}</contentDisposition>
        <contentEncoding>{$ctx:contentEncoding}</contentEncoding>
        <contentLanguage>{$ctx:contentLanguage}</contentLanguage>
        <contentType>{$ctx:contentType}</contentType>
        <expires>{$ctx:expires}</expires>
        <grantRead>{$ctx:grantRead}</grantRead>
        <grantReadACP>{$ctx:grantReadACP}</grantReadACP>
        <grantWriteACP>{$ctx:grantWriteACP}</grantWriteACP>
        <grantFullControl>{$ctx:grantFullControl}</grantFullControl>
        <metadata>{$ctx:metadata}</metadata>
        <serverSideEncryption>{$ctx:serverSideEncryption}</serverSideEncryption>
        <storageClass>{$ctx:storageClass}</storageClass>
        <websiteRedirectLocation>{$ctx:websiteRedirectLocation}</websiteRedirectLocation>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <ssekmsKeyId>{$ctx:ssekmsKeyId}</ssekmsKeyId>
        <ssekmsEncryptionContext>{$ctx:ssekmsEncryptionContext}</ssekmsEncryptionContext>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
        <tagging>{$ctx:tagging}</tagging>
        <objectLockMode>{$ctx:objectLockMode}</objectLockMode>
        <objectLockRetainUntilDate>{$ctx:objectLockRetainUntilDate}</objectLockRetainUntilDate>
        <objectLockLegalHoldStatus>{$ctx:objectLockLegalHoldStatus}</objectLockLegalHoldStatus>
    </amazons3.createMultipartUpload>
    ```

    **Sample request**

    ```xml
    <createMultipartUpload>
        <bucketName>signv4test</bucketName>
        <objectKey>upload.png</objectKey>
        <metadata>Content-Language:enus</metadata>
        <serverSideEncryption>AES256</serverSideEncryption>
        <storageClass>STANDARD</storageClass>
    </createMultipartUpload>
    ```
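    The three multipart operations chain together: the upload ID returned by createMultipartUpload feeds each uploadPart call, and the collected part numbers and ETags feed completeMultipartUpload. The following Synapse fragment is an illustrative sketch of that sequence only; the property expressions for capturing the upload ID and ETag are hypothetical and depend on the actual response format:

    ```xml
    <!-- 1. Initiate the upload and capture the upload ID from the response -->
    <amazons3.createMultipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
    </amazons3.createMultipartUpload>
    <property name="uploadId" expression="//UploadId"/>

    <!-- 2. Upload one or more parts against that upload ID -->
    <amazons3.uploadPart>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <partNumber>1</partNumber>
        <filePath>{$ctx:filePath}</filePath>
    </amazons3.uploadPart>
    <property name="part1ETag" expression="//ETag"/>

    <!-- 3. Complete the upload with the accumulated part list -->
    <amazons3.completeMultipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <uploadId>{$ctx:uploadId}</uploadId>
        <completedPartDetails>{$ctx:completedPartDetails}</completedPartDetails>
    </amazons3.completeMultipartUpload>
    ```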
??? note "multipartUpload"
    The multipartUpload operation initializes and completes a multipart upload by uploading parts to that specific multipart upload.

    Following is the proxy configuration for init and multipartUpload. The init section has connection parameters.

    !!! note
        The `fileContent` parameter is available only with Amazon S3 connector v2.0.2 and above. Either the `filePath` or the `fileContent` parameter is mandatory.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | partDetails | This contains all the parts with the part numbers. | Yes |
    | filePath | The path of the source file to be uploaded. | Optional |
    | fileContent | Content of the file. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.init>
        <awsAccessKeyId>{$ctx:awsAccessKeyId}</awsAccessKeyId>
        <awsSecretAccessKey>{$ctx:awsSecretAccessKey}</awsSecretAccessKey>
        <connectionName>{$ctx:connectionName}</connectionName>
        <region>{$ctx:region}</region>
    </amazons3.init>

    <amazons3.multipartUpload>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <partDetails>{$ctx:partDetails}</partDetails>
        <filePath>{$ctx:filePath}</filePath>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.multipartUpload>
    ```

    **Sample request**

    ```xml
    <multipartUpload>
        <bucketName>signv4test</bucketName>
        <objectKey>myimage.png</objectKey>
        <filePath>/Users/mine/Desktop/10MB.mp4</filePath>
        <partDetails>
            <Part>
                <PartNumber>1</PartNumber>
            </Part>
            <Part>
                <PartNumber>2</PartNumber>
            </Part>
        </partDetails>
    </multipartUpload>
    ```

??? note "getObjectACL"
    The getObjectACL operation uses the ACL subresource to return the access control list (ACL) of an object. To use this operation, you must have READ_ACP access to the object.

    Following is the proxy configuration for getObjectACL.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetObjectAclRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | versionId | VersionId used to reference a specific version of the object. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.getObjectACL>
        <objectKey>{$ctx:objectKey}</objectKey>
        <bucketName>{$ctx:bucketName}</bucketName>
        <versionId>{$ctx:versionId}</versionId>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.getObjectACL>
    ```

    **Sample request**

    ```xml
    <getObjectACL>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile.txt</objectKey>
    </getObjectACL>
    ```

??? note "getObjectTagging"
    The getObjectTagging operation returns the tag-set of an object. You send the request against the tagging subresource associated with the object.

    By default, this operation returns information about the current version of an object. To retrieve tags of any other version, use the versionId parameter.

    To use this operation, you must have permission to perform the s3:GetObjectTagging action. To retrieve tags of a version, you need permission for the s3:GetObjectVersionTagging action.

    Following is the proxy configuration for getObjectTagging.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetObjectTaggingRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | versionId | The version ID of the object whose tags are to be retrieved. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.getObjectTagging>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <versionId>{$ctx:versionId}</versionId>
    </amazons3.getObjectTagging>
    ```

    **Sample request**

    ```xml
    <getObjectTagging>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile.txt</objectKey>
    </getObjectTagging>
    ```
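    To read the tags of a historical version rather than the latest one, include the versionId parameter; this requires the s3:GetObjectVersionTagging permission. The version ID below is a hypothetical placeholder:

    ```xml
    <getObjectTagging>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile.txt</objectKey>
        <versionId>FHbrL3xf2TK54hLNWWArYI79woSElvHf</versionId>
    </getObjectTagging>
    ```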
??? note "getObjectTorrent"
    The getObjectTorrent operation uses the torrent subresource to return torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files.

    You can get a torrent only for objects that are less than 5 GB in size and that are not encrypted using server-side encryption with a customer-provided encryption key.

    To use this operation, you must have READ access to the object.

    Following is the proxy configuration for getObjectTorrent.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/GetObjectTorrentRequest.html) for more information.

    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | torrentFilePath | The path of the torrent file to be created. | Yes |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.getObjectTorrent>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <torrentFilePath>{$ctx:torrentFilePath}</torrentFilePath>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.getObjectTorrent>
    ```

    **Sample request**

    ```xml
    <getObjectTorrent>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile.txt</objectKey>
        <torrentFilePath>/Users/mine/Desktop/testFile.torrent</torrentFilePath>
    </getObjectTorrent>
    ```

??? note "restoreObject"
    The restoreObject operation restores a temporary copy of an archived object. You can optionally provide a version ID to restore a specific object version; if a version ID is not provided, the current version is restored. The number of days the restored copy is kept is determined by the Days value in the restore request. After the specified period, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.

    An object in the Glacier storage class is an archived object. To access the object, you must first initiate a restore request, which restores a copy of the archived object. Restore jobs typically complete in three to five hours.

    Following is the proxy configuration for restoreObject.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/RestoreObjectRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | versionId | Version ID of an object, used to restore a specific object version. | Optional |
    | restoreRequest | Container for the RestoreRequest parameters (Days, Description, GlacierJobParameters, OutputLocation, SelectParameters, Tier, and Type). | Yes |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.restoreObject>
        <objectKey>{$ctx:objectKey}</objectKey>
        <bucketName>{$ctx:bucketName}</bucketName>
        <versionId>{$ctx:versionId}</versionId>
        <restoreRequest>{$ctx:restoreRequest}</restoreRequest>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.restoreObject>
    ```

    **Sample request**

    ```xml
    <restoreObject>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile.txt</objectKey>
        <restoreRequest>
            <Days>2</Days>
            <GlacierJobParameters>
                <Tier>Expedited</Tier>
            </GlacierJobParameters>
        </restoreRequest>
    </restoreObject>
    ```

??? note "uploadPartCopy"
    The uploadPartCopy operation uploads a part by copying data from an existing object as the data source. You specify the data source by adding the copySource parameter to your request, and a byte range by adding the copySourceRange parameter. The minimum allowable part size for a multipart upload is 5 MB.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/UploadPartCopyRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name to give the newly created object. | Yes |
    | uploadId | This specifies the ID of the initiated multipart upload. | Yes |
    | partNumber | This specifies the number or the index of the uploaded part. | Yes |
    | copySource | The name of the source bucket and key name of the source object, separated by a slash (/). | Yes |
    | copySourceRange | Copy the specified range of bytes of an object. | Optional |
    | ifModifiedSince | Return the object only if it has been modified. | Optional |
    | ifUnmodifiedSince | Return the object only if it has not been modified. | Optional |
    | ifMatch | Return the object only if its ETag is the same as the one specified. | Optional |
    | ifNoneMatch | Return the object only if its ETag is not the same as the one specified. | Optional |
    | copySourceSSECustomerAlgorithm | Specifies the algorithm to use when decrypting the source object. | Optional |
    | copySourceSSECustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. | Optional |
    | copySourceSSECustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.uploadPartCopy>
        <objectKey>{$ctx:objectKey}</objectKey>
        <bucketName>{$ctx:bucketName}</bucketName>
        <uploadId>{$ctx:uploadId}</uploadId>
        <partNumber>{$ctx:partNumber}</partNumber>
        <copySource>/imagesBucket5/testObject37</copySource>
        <copySourceRange>{$ctx:copySourceRange}</copySourceRange>
        <ifModifiedSince>{$ctx:ifModifiedSince}</ifModifiedSince>
        <ifUnmodifiedSince>{$ctx:ifUnmodifiedSince}</ifUnmodifiedSince>
        <ifMatch>{$ctx:ifMatch}</ifMatch>
        <ifNoneMatch>{$ctx:ifNoneMatch}</ifNoneMatch>
        <copySourceSSECustomerAlgorithm>{$ctx:copySourceSSECustomerAlgorithm}</copySourceSSECustomerAlgorithm>
        <copySourceSSECustomerKey>{$ctx:copySourceSSECustomerKey}</copySourceSSECustomerKey>
        <copySourceSSECustomerKeyMD5>{$ctx:copySourceSSECustomerKeyMD5}</copySourceSSECustomerKeyMD5>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.uploadPartCopy>
    ```

    **Sample request**

    ```xml
    <uploadPartCopy>
        <bucketName>signv4test</bucketName>
        <objectKey>testFile1.txt</objectKey>
        <uploadId>SsNUDqUklMaoV_IfePCpGAZHjaxJx.cGXEcX6TVW4I6WzOQFnAKomYevz5qi5LtkfTvlpwjY9M6QDGsIIvdGEQzBURo3MMU2Yh.ZEQDsk_lsnx3Z8m9jsglW6FIfKGQ_</uploadId>
        <partNumber>2</partNumber>
        <copySource>/testBucket1/testFile.jpg</copySource>
        <copySourceRange>bytes=0-9</copySourceRange>
    </uploadPartCopy>
    ```

??? note "headObject"
    The headObject operation retrieves metadata from an object without returning the object itself. This operation is useful if you are interested only in an object's metadata. To use this operation, you must have READ access to that object. A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body.

    See the [related API documentation](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/HeadObjectRequest.html) for more information.
    | Parameter Name | Description | Required |
    |----------------|-------------|----------|
    | bucketName | The name of the bucket. | Yes |
    | objectKey | The name of the object. | Yes |
    | range | The specified range of bytes of an object to download. | Optional |
    | ifModifiedSince | Return the object only if it has been modified since the specified time. | Optional |
    | ifUnmodifiedSince | Return the object only if it has not been modified since the specified time. | Optional |
    | ifMatch | Return the object only if its entity tag (ETag) is the same as the one specified. | Optional |
    | ifNoneMatch | Return the object only if its entity tag (ETag) is different from the one specified. | Optional |
    | versionId | VersionId used to reference a specific version of the object. | Optional |
    | sseCustomerAlgorithm | Specifies the algorithm to use when encrypting the object. | Optional |
    | sseCustomerKey | Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. | Optional |
    | sseCustomerKeyMD5 | Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. | Optional |
    | partNumber | Part number of the object being read. | Optional |
    | requestPayer | Confirms that the requester knows that they will be charged for the request. | Optional |
    **Sample configuration**

    ```xml
    <amazons3.headObject>
        <bucketName>{$ctx:bucketName}</bucketName>
        <objectKey>{$ctx:objectKey}</objectKey>
        <range>{$ctx:range}</range>
        <ifModifiedSince>{$ctx:ifModifiedSince}</ifModifiedSince>
        <ifUnmodifiedSince>{$ctx:ifUnmodifiedSince}</ifUnmodifiedSince>
        <ifMatch>{$ctx:ifMatch}</ifMatch>
        <ifNoneMatch>{$ctx:ifNoneMatch}</ifNoneMatch>
        <versionId>{$ctx:versionId}</versionId>
        <sseCustomerAlgorithm>{$ctx:sseCustomerAlgorithm}</sseCustomerAlgorithm>
        <sseCustomerKey>{$ctx:sseCustomerKey}</sseCustomerKey>
        <sseCustomerKeyMD5>{$ctx:sseCustomerKeyMD5}</sseCustomerKeyMD5>
        <partNumber>{$ctx:partNumber}</partNumber>
        <requestPayer>{$ctx:requestPayer}</requestPayer>
    </amazons3.headObject>
    ```

    **Sample request**

    ```xml
    <headObject>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject2</objectKey>
    </headObject>
    ```
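    Combined with ifNoneMatch, headObject supports cheap cache-validation checks: if the stored ETag still matches, S3 answers 304 Not Modified instead of returning the metadata again. A hedged sketch, where the ETag value is a hypothetical placeholder:

    ```xml
    <headObject>
        <bucketName>signv4test</bucketName>
        <objectKey>testObject2</objectKey>
        <!-- Expect 304 Not Modified if the object still has this ETag -->
        <ifNoneMatch>"9b2cf535f27731c974343645a3985328"</ifNoneMatch>
    </headObject>
    ```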
| Parameter | Description | Required | Possible Values | Default Value |
|---|---|---|---|---|
| waitTime | The time to wait when polling queues for messages. By default, there is no wait (short polling). Setting waitTime up to 20 seconds (the maximum wait time) enables long polling. | No | 0 - 20 | 0 |
| destination | URL of the Amazon SQS queue from which you want to consume messages. | Yes | N/A | N/A |
| secretKey | The secret key used to sign requests (a 40-character sequence). | Yes | N/A | N/A |
| accessKey | The access key that corresponds to the secret key that you used to sign the request (a 20-character sequence). | Yes | N/A | N/A |
| maxNoOfMessage | The maximum number of messages to return. Amazon SQS never returns more messages than this value, but might return fewer; not all messages in the queue are necessarily returned. | No | 1-10 | 1 |
| attributeNames | A comma-separated list of attributes you want to return along with the received message. | No | N/A | N/A |
| contentType | The content type of the message. | No | text/plain, application/json, application/xml | text/plain |
| autoRemoveMessage | Whether the message should be deleted from the queue after it is consumed. If this parameter is set to false, the message stays in the queue until the message retention period expires. | No | true, false | true |
| SET_ROLLBACK_ONLY | On failure, the mediation flow moves to the fault sequence specified in the configuration. If the SET_ROLLBACK_ONLY property is set to "true" in that fault sequence, the message is rolled back to the Amazon SQS queue. | No | `<property name="SET_ROLLBACK_ONLY" value="true"/>` | - |
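A complete inbound endpoint definition built from these parameters might look like the following sketch. The consumer class and the `sequence`/`onError` attributes follow the AmazonSQS inbound endpoint example elsewhere in this documentation; the queue URL and credentials are placeholders that you must replace with your own values.

```xml
<inboundEndpoint name="AmazonSQSInboundEP" sequence="request" onError="fault"
                 class="org.wso2.carbon.inbound.amazonsqs.AmazonSQSPollingConsumer" suspend="false">
    <parameters>
        <!-- Polling interval in milliseconds -->
        <parameter name="interval">2000</parameter>
        <!-- Placeholder queue URL; replace with your queue -->
        <parameter name="destination">https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue</parameter>
        <!-- Placeholder credentials -->
        <parameter name="accessKey">REPLACE_WITH_ACCESS_KEY</parameter>
        <parameter name="secretKey">REPLACE_WITH_SECRET_KEY</parameter>
        <!-- Long polling: wait up to 19 seconds for messages -->
        <parameter name="waitTime">19</parameter>
        <parameter name="maxNoOfMessage">10</parameter>
        <parameter name="attributeNames">attributeName1,contentType</parameter>
        <parameter name="contentType">application/json</parameter>
        <parameter name="autoRemoveMessage">true</parameter>
    </parameters>
</inboundEndpoint>
```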
**SET_ROLLBACK_ONLY Property**

If a failure occurs, the Amazon SQS message is rolled back. The following property is set to true in the fault handler in order to roll back the Amazon SQS queue messages when a failure occurs.

```
<property name="SET_ROLLBACK_ONLY" value="true"/>
```

??? note "Sample fault sequence"
    ```
    <sequence name="fault">
        <log level="full">
            <property name="MESSAGE" value="Executing default 'fault' sequence"/>
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
        </log>
        <property name="SET_ROLLBACK_ONLY" value="true"/>
        <drop/>
    </sequence>
    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md
deleted file mode 100644
index c7245e73d0..0000000000
--- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md
+++ /dev/null
@@ -1,612 +0,0 @@

# Amazon SQS Connector Reference

The following operations allow you to work with the Amazon SQS Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the Amazon SQS connector, add the `<amazonsqs.init>` element in your configuration before carrying out any other Amazon SQS operations. This uses the standard HTTP Authorization header to pass authentication information. Developers are issued an AWS access key ID and an AWS secret access key when they register. For request authentication, the secret access key and the access key ID elements are used to compute the signature. The authentication uses the "HmacSHA256" signature method and signature version "4". Click [here](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/RequestAuthenticationArticle.html) for further information on the authentication process. To use the HTTPS Amazon AWS URL, you need to import the certificate into the integration runtime's client keystore.

??? note "init"
    The init operation is used to initialize the connection to Amazon SQS.

    !!! note
        1. Either `secretAccessKey` and `accessKeyId` or `iamRole` is mandatory.
        2. When the server is running in an EC2 instance, you can use the [IAM role for authentication](https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). The `iamRole` parameter is available only with Amazon SQS connector v1.1.1 and above.
    | Parameter Name | Description | Required |
    |---|---|---|
    | secretAccessKey | The secret access key (a 40-character sequence). | Optional |
    | accessKeyId | The access key ID that corresponds to the secret access key that you used to sign the request (a 20-character, alphanumeric sequence). | Optional |
    | iamRole | The IAM role associated with the EC2 instance. | Optional |
    | enableIMDSv1 | Whether to use IMDSv1 to access EC2 instance metadata. By default, IMDSv2 will be used. | Optional |
    | version | The version of the API, which is "2012-11-05". | Yes |
    | region | The regional endpoint to make your requests (e.g., us-east-1). | Yes |
    | enableSSL | Whether the Amazon AWS URL should be HTTP or HTTPS. Set to true if you want the URL to be HTTPS. | Optional |
    | contentType | The content type that is used to generate the signature. | Optional |
    | blocking | Boolean type; this property helps the connector perform blocking invocations to Amazon SQS. | Yes |
    **Sample configuration using secretAccessKey and accessKeyId**

    ```xml
    <amazonsqs.init>
        <secretAccessKey>{$ctx:secretAccessKey}</secretAccessKey>
        <accessKeyId>{$ctx:accessKeyId}</accessKeyId>
        <version>{$ctx:version}</version>
        <region>{$ctx:region}</region>
        <enableSSL>{$ctx:enableSSL}</enableSSL>
        <contentType>{$ctx:contentType}</contentType>
        <blocking>{$ctx:blocking}</blocking>
    </amazonsqs.init>
    ```

    **Sample configuration using IAM role**

    ```xml
    <amazonsqs.init>
        <iamRole>{$ctx:iamRole}</iamRole>
        <version>{$ctx:version}</version>
        <region>{$ctx:region}</region>
        <enableSSL>{$ctx:enableSSL}</enableSSL>
        <contentType>{$ctx:contentType}</contentType>
        <blocking>{$ctx:blocking}</blocking>
    </amazonsqs.init>
    ```

---

### Messages

??? note "receiveMessage"
    This operation retrieves one or more messages, with a maximum limit of 10 messages, from the specified queue. The default behavior is short poll, where a weighted random set of machines is sampled. This means only the messages on the sampled machines are returned. If the number of messages in the queue is small (less than 1000), it is likely you will get fewer messages than you requested per call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular response. In this case, you should repeat the request. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) for more information.
    | Parameter Name | Description | Required |
    |---|---|---|
    | maxNumberOfMessages | The maximum number of messages to be returned. Values can be from 1 to 10. Default is 1. | Optional |
    | waitTimeSeconds | The duration (in seconds) for which the call will wait for a message to arrive in the queue before returning. If a message is available, the call will return sooner than WaitTimeSeconds. Long poll support is enabled by using this parameter. For more information, see Amazon SQS Long Poll. | Optional |
    | messageAttributeNames | The name of the message attribute. The message attribute name can contain the following characters: A-Z, a-z, 0-9, underscore (_), hyphen (-), and period (.). The name must not start or end with a period, and it should not have successive periods. The name is case sensitive and must be unique among all attribute names for the message. The name can be up to 256 characters long. The name cannot start with "AWS." or "Amazon." (including any case variations), because these prefixes are reserved for use by Amazon Web Services. When using the operation, you can send a list of attribute names to receive, or you can return all of the attributes by specifying "All" or ".*" in your request. You can also use "foo.*" to return all message attributes starting with the "foo" prefix. | Optional |
    | visibilityTimeout | The duration (in seconds) in which the received messages are hidden from subsequent retrieve requests after being retrieved by the request. | Optional |
    | attributes | A list of attributes that need to be returned along with each message. | Optional |
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    **Sample configuration**

    ```xml
    <amazonsqs.receiveMessage>
        <maxNumberOfMessages>{$ctx:maxNumberOfMessages}</maxNumberOfMessages>
        <waitTimeSeconds>{$ctx:waitTimeSeconds}</waitTimeSeconds>
        <messageAttributeNames>{$ctx:messageAttributeNames}</messageAttributeNames>
        <visibilityTimeout>{$ctx:visibilityTimeout}</visibilityTimeout>
        <attributes>{$ctx:attributes}</attributes>
        <queueId>{$ctx:queueId}</queueId>
        <queueName>{$ctx:queueName}</queueName>
    </amazonsqs.receiveMessage>
    ```

    **Sample request**

    ```xml
    <receiveMessage>
        <accessKeyId>AKIAJXHDKJWR2ZDDVPEBTQ</accessKeyId>
        <secretAccessKey>N9VT2P3MdfaL7Li1P3hJu1GTdtOO7Kd7NfPlyYG8f/6</secretAccessKey>
        <version>2009-02-01</version>
        <region>us-east-1</region>
        <queueId>899940420354</queueId>
        <queueName>Test</queueName>
        <maxNumberOfMessages>10</maxNumberOfMessages>
    </receiveMessage>
    ```

??? note "sendMessage"
    This operation delivers a message to the specified queue. You can send payload messages up to 256 KB (262,144 bytes) in size. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) for more information.

    > **Note**: The following list shows the characters (in Unicode) allowed in your message, according to the W3C XML specification. For more information, go to [http://www.w3.org/TR/REC-xml/#charsets](http://www.w3.org/TR/REC-xml/#charsets). If you send any characters not included in the list, your request will be rejected. #x9 | #xA | #xD | [#x20 to #xD7FF] | [#xE000 to #xFFFD] | [#x10000 to #x10FFFF].
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | messageBody | The message to be sent, a String that is a maximum of 256 KB in size. For a list of allowed characters, see the preceding note. | Yes |
    | delaySeconds | The number of seconds (0 to 900, which is 15 minutes) to delay a specific message. Messages with a positive delaySeconds value become available for processing after the delay time is finished. If you do not specify a value, the default value for the queue applies. | Optional |
    | messageAttributes | Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items. | Optional |
    | messageDeduplicationId | The ID used for deduplication of sent messages. If a message with a particular messageDeduplicationId is sent successfully, any messages sent with the same messageDeduplicationId are accepted successfully but aren't delivered during the 5-minute deduplication interval. See Using the MessageDeduplicationId Property. | Optional |
    | messageGroupId | The ID that specifies that a message belongs to a specific message group. Messages that belong to the same message group are processed in a FIFO manner. See Using the MessageGroupId Property. | Optional |
    - - > **Note**: The messageGroupId and messageDeduplicationId parameters apply only to FIFO (first-in-first-out) queues and valid values are alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). When you set FIFOQueue, you can't set delaySeconds per message. You can set this parameter only on a queue level. - - **Sample configuration** - - ```xml - - {$ctx:queueId} - {$ctx:queueName} - {$ctx:messageBody} - {$ctx:delaySeconds} - {$ctx:messageAttributes} - {$ctx:messageDeduplicationId} - {$ctx:messageGroupId} - - ``` - - **Sample request for sendMessage** - - ```xml - - AKIAJXHDKJWRDD2ZVPfghEBTQ - N9VT2P3MaL7LikjhyhJu1GTtOO7Kd7NfPlfghyYG8f/6 - 2009-02-01 - us-east-1 - 899940420354 - Test - Testing the operation - - ``` - - **Sample request for sendMessage to FIFOQueue** - - ```xml - - AKIAJXHxxxxxx - N9VT2P3xxxxxx - 2012-11-05 - us-west-2 - 899940420354 - test.fifo - MyMessageGroupId1234567890 - MyMessageDeduplicationId1234567890 - Testing the operation - - ``` - -??? note "sendMessageBatch" - This operation delivers batch messages to the specified queue. You can send payload messages up to 256 KB (262,144 bytes) in size. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatch.html) for more information. - - > **Note**: The following list shows the characters (in Unicode) allowed in your message, according to the W3C XML specification. For more information, go to [http://www.w3.org/TR/REC-xml/#charsets](http://www.w3.org/TR/REC-xml/#charsets). If you send any characters not included in the list, your request will be rejected. #x9 | #xA | #xD | [#x20 to #xD7FF] | [#xE000 to #xFFFD] | [#x10000 to #x10FFFF] - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | delaySeconds | The number of seconds (0 to 900, which is 15 minutes) to delay a specific message. Messages with a positive delaySeconds value become available for processing after the delay time is finished. If you do not specify a value, the default value for the queue applies. | Optional |
    | messageAttributes | List of SendMessageBatchRequestEntry items. | Yes |
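    Before the full samples below, here is a minimal sendMessageBatch configuration sketch, assuming each parameter maps to an element of the same name; note that the batch entries are passed through the `messageRequestEntry` property, matching the sample request that follows.

    ```xml
    <amazonsqs.sendMessageBatch>
        <queueId>{$ctx:queueId}</queueId>
        <queueName>{$ctx:queueName}</queueName>
        <delaySeconds>{$ctx:delaySeconds}</delaySeconds>
        <!-- URL-encoded SendMessageBatchRequestEntry.n.* parameter string -->
        <messageRequestEntry>{$ctx:messageRequestEntry}</messageRequestEntry>
    </amazonsqs.sendMessageBatch>
    ```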
    - - **Sample configuration** - - ```xml - - {$ctx:queueId} - {$ctx:queueName} - {$ctx:delaySeconds} - {$ctx:messageRequestEntry} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWRDD2ZVPfghEBTQ - N9VT2P3MaL7Li1P3hjgGTtOO7Kd7NfPlfghyYG8f/6 - 2009-02-01 - us-east-1 - 492228198692 - TestCo1n - SendMessageBatchRequestEntry.1.Id=test_msg_001&SendMessageBatchRequestEntry.1.MessageBody=test%20message%20body%201&SendMessageBatchRequestEntry.2.Id=test_msg_002&SendMessageBatchRequestEntry.2.MessageBody=test%20message%20body%202 - - ``` - -??? note "deleteMessage" - This operation deletes the specified message from the specified queue. You specify the message by using the message's receipt handle and not the message ID you received when you sent the message. Even if the message is locked by another reader due to the visibility timeout setting, it is still deleted from the queue. If you leave a message in the queue for longer than the queue's configured retention period, Amazon SQS automatically deletes it. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessage.html) for more information. - - > **Note**: The receipt handle is associated with a specific instance of receiving the message. If you receive a message more than once, the receipt handle you get every time you receive the message is different. When you use this operation, if you do not provide the most recently received receipt handle for the message, the request will still succeed, but the message might not be deleted. - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | receiptHandle | The receipt handle associated with the message to be deleted. | Yes |
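    A minimal deleteMessage configuration sketch, assuming each parameter maps to an element of the same name:

    ```xml
    <amazonsqs.deleteMessage>
        <queueId>{$ctx:queueId}</queueId>
        <queueName>{$ctx:queueName}</queueName>
        <!-- Use the most recently received receipt handle for the message -->
        <receiptHandle>{$ctx:receiptHandle}</receiptHandle>
    </amazonsqs.deleteMessage>
    ```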
    - - > **Note**: It is possible you will receive a message even after you have deleted it. This might happen on rare occasions if one of the servers storing a copy of the message is unavailable when you request to delete the message. The copy remains on the server and might be returned to you again on a subsequent receive request. You should create your system to be idempotent so that receiving a particular message more than once is not a problem. - - **Sample configuration** - - ```xml - - {$ctx:queueId} - {$ctx:queueName} - {$ctx:receiptHandle} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWR2ZVSDPEBTQ - N9VT2P3MaL7Li1PjkhGTtOO7Kddf7NfPlyYG8f/6 - 2009-02-01 - us-east-1 - 899940420354 - Test - ib8MCWgVft0d03wCmmzGU9b41lxRVMYIHLnfckXhkh/6DmqOhu+qHcsuzXUik5HvhGLa/A3tnTUTOXydKJoTOTlP3KUjOSOrwVxKoOi+bhLyLJuYAtkhfRMY/ZF1Jh4CzGSk3tLfPSfzOo3bqgf7mWklwM18BnufuWjSl8HjJQYnegs5yDDypAZZqtBuMv6gT/1aMbQbL15Vo8b0Fr06hFjSZzPpA0vxbb9NpksToMq4yPf8X3jt/Njn1sPZSG0OKqdgACiavmi0mzAT/4QLi+waSFnyG0h+wN1z9OdHsr1+4= - - ``` - -??? note "deleteMessageBatch" - This operation deletes multiple messages from the specified queue. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessageBatch.html) for more information. - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | messageRequestEntry | A list of receipt handles for the messages to be deleted. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:messageRequestEntry} - {$ctx:queueId} - {$ctx:queueName} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWR2ZVSDPEBTQ - N9VT2P3MaL7Li1PjkhGTtOO7Kddf7NfPlyYG8f/6 - 2009-02-01 - us-east-1 - 899940420354 - Test - DeleteMessageBatchRequestEntry.1.Id=msg1 - &DeleteMessageBatchRequestEntry.1.ReceiptHandle=gfk0T0R0waama4fVxIVNgeNP8ZEDcw7zZU1Zw%3D%3D&DeleteMessageBatchRequestEntry.2.Id=msg2&DeleteMessageBatchRequestEntry.2.ReceiptHandle=gfk0T0R0waama4fVFffkjKzmhMCymjQvfTFk2LxT33G4ms5subrE0deLKWSscPU1oD3J9zgeS4PQQ3U30qOumIE6AdAv3w%2F%2Fa1IXW6AqaWhGsEPaLm3Vf6IiWqdM8u5imB%2BNTwj3tQRzOWdTOePjOjPcTpRxBtXix%2BEvwJOZUma9wabv%2BSw6ZHjwmNcVDx8dZXJhVp16Bksiox%2FGrUvrVTCJRTWTLc59oHLLF8sEkKzRmGNzTDGTiV%2BYjHfQj60FD3rVaXmzTsoNxRhKJ72uIHVMGVQiAGgB%2BqAbSqfKHDQtVOmJJgkHug%3D%3D - - ``` - -??? note "changeMessageVisibility" - This operation changes the visibility timeout of a specified message in a queue to a new value. The maximum allowed timeout value you can set the value to is 12 hours. This means you can't extend the timeout of a message in an existing queue to more than a total visibility timeout of 12 hours. (For more information on visibility timeout, see [Visibility Timeout in the Amazon SQS Developer Guide](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html)). - - For example, let's say you have a message whose default message visibility timeout is 30 minutes. You could call this operation with a value of two hours, and the effective timeout would be two hours and 30 minutes. When that time is reached, you could again extend the time-out by calling changeMessageVisiblity; but this time, the maximum allowed timeout would be 9 hours and 30 minutes. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html) for more information. - - > **Note**: There is a 120,000 limit for the number of in-flight messages per queue. Messages are in flight after they have been received from the queue by a consuming component but have not yet been deleted from the queue. If you reach the 120,000 limit, you will receive an OverLimit error message from Amazon SQS. To help avoid reaching the limit, you should delete the messages from the queue after they have been processed. You can also increase the number of queues you use to process the messages. - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | receiptHandle | The receipt handle associated with the message whose visibility timeout you are changing. | Yes |
    | visibilityTimeout | The new value (in seconds from 0 to 43200, which is 12 hours) for the message's visibility timeout. | Yes |
    - - > **Note**: If you attempt to set visibilityTimeout to an amount more than the maximum time left, Amazon SQS returns an error. It will not automatically recalculate and increase the timeout to the maximum time remaining. - > - > Unlike with a queue, when you change the visibility timeout for a specific message, that timeout value is applied immediately but is not saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message the next time it is received reverts to the original timeout value, not the value you set with the changeMessageVisibility operation. - - **Sample configuration** - - ```xml - - {$ctx:receiptHandle} - {$ctx:queueId} - {$ctx:queueName} - {$ctx:visibilityTimeout} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWR2ZVPESSBTQ - N9VT2P3MaL7Lhgu1GTtOO7Kd7NfPlyYG8f/6 - 2009-02-01 - us-east-1 - 899940420354 - Test - ib8MCWgVft3IGz2EvDZBjzlBHi0rmXxJUcKbqlvkuH9WO9LaWQNQ8isW3IX8iCZBHovl8NQeC/EbbsLCSS2bMDGMZ5mxQ9C+UudaXRNxwj+VeLP4DQoTOMXEnw3V3Pk7GoVJ62YwrbnfH9U6c7qd8xCptVK1FIn6Pu4zNYRRiQmO8ENP3Tt0S81gHCz8sGdunXuro1tymIhxxliq29uPX8plYmvmkeCc9Fezib1cccpPpUkFhIHY8PkCXxI04i6zSM/o1o/wag2d0iDBVS20hBR2g8e6h8il1z9OdHsr1+4= - 10 - - ``` - -??? note "changeMessageVisibilityBatch" - This operation changes the visibility timeout of multiple messages. See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibilityBatch.html) for more information. - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    | messageRequestEntry | A list of receipt handles of the messages for which the visibility timeout must be changed. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:messageRequestEntry} - {$ctx:queueId} - {$ctx:queueName} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWR2ZVPESSBTQ - N9VT2P3MaL7Li1P3GjhgDNTtOO7Kd7NfPlyYG8f/6 - 2009-02-01 - us-east-1 - 899940420354 - Test - ChangeMessageVisibilityBatchRequestEntry.1.Id=change_visibility_msg_1&ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle=ib8MCWgVft3IGz2EvDZBjzlBHi0rmXxJUcKbqlvkuH9WO9LaWQNQ8isW3IX8iCZBHovl8NQeC/EbbsLCSS2b&ChangeMessageVisibilityBatchRequestEntry.1.VisibilityTimeout=10&ChangeMessageVisibilityBatchRequestEntry.2.Id=change_visibility_msg_2&ChangeMessageVisibilityBatchRequestEntry.2.ReceiptHandle=ib8MCWgVft3IGz2EvDZBjzlBHi0rmXxJUcKbqlvkuH9WO9LaWQNQ8isW3IX8iCZBHovl8NQeC/EbbsLCSS2b - - ``` - ---- - -### Permissions - -??? note "addPermission" - This operation adds a permission to a queue for a specific [principal](http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#P), enabling you to share access to the queue. When you create a queue, you have full control access rights for the queue. Only you (as owner of the queue) can grant or deny permissions to the queue. For more information about these permissions, see [Shared Queues in the Amazon SQS Developer Guide](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/acp-overview.html). See the [related API documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibilityBatch.html) for more information. - - > **Note**: - > - This operation writes an Amazon SQS-generated policy. If you want to write your own policy, use SetQueueAttributes to upload your policy. For more information about writing your own policy, see Using The Access Policy Language in the Amazon SQS Developer Guide. - > - Some API actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this: `&Attribute.1=this, &Attribute.2=that`. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | awsAccountNumbers | The AWS account number of the principal who will be given permission. The principal must have an AWS account but does not need to be signed up for Amazon SQS. For information about locating the AWS account identification, see Your AWS Identifiers in the Amazon SQS Developer Guide. | Yes |
    | actionName | The action the client wants to allow for the specified principal. The following are valid values: `*`, SendMessage, ReceiveMessage, DeleteMessage, ChangeMessageVisibility, GetQueueAttributes, GetQueueUrl. For more information about these actions, see Understanding Permissions in the Amazon SQS Developer Guide. | Yes |
    | label | The unique identification of the permission you are setting (e.g., AliceSendMessage). Constraints: Maximum 80 characters; alphanumeric characters, hyphens (-), and underscores (_) are allowed. | Yes |
    | queueId | The unique identifier of the queue. | Yes |
    | queueName | The name of the queue. | Yes |
    - - **Sample configuration** - - ```xml - - {$ctx:awsAccountNumbers} - {$ctx:actionNames} - - {$ctx:queueId} - {$ctx:queueName} - - ``` - - **Sample request** - - ```xml - - AKIAJXHDKJWDDR2ZVPEBTQ - N9VT2P3MaL7Li1P3hJu1GsdfTtOO7Kd7NfPlyYG8f/6 - AWSAccountId.1=899940420354&AWSAccountId.2=294598218081 - ActionName.1=SendMessage&ActionName.2=ReceiveMessage - - 899940420354 - Test - 2009-02-01 - us-east-1 - - ``` diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-example.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-example.md deleted file mode 100644 index 41fa22de22..0000000000 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-example.md +++ /dev/null @@ -1,219 +0,0 @@ -# AmazonSQS Connector Example - -The WSO2 Amazon SQS connector allows you to access the exposed Amazon SQS API from an integration sequence. - -## What you'll build - -This example explains how to use Amazon SQS Connector to create a queue in the Amazon SQS, send a message to the queue, forward it to Simple Stock Quote Service Backend and send the response to the user. - -It has a single HTTP API resource, which is `sendToQueue`. - -AmazonSQS-Connector - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Setting up the environment - -1. Please follow the steps mentioned in the [Setting up the Amazon S3 Environment ]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-connector-config) document in order to create a Amazon account and obtain access key id and secret access key. Keep them saved to be used in the next steps. - -2. In this example we will be using XPath 2.0 which needs to be enabled in the product as shown below before starting the integration service. - - If you are using the Micro Integrator of **EI 7** or **APIM 4.0.0**, you need to enable this property by adding the following to the PRODUCT-HOME/conf/deployment.toml file. You can further refer to the [Product Configurations]({{base_path}}/reference/config-catalog/#http-transport). - - ``` - [mediation] - synapse.enable_xpath_dom_failover="true" - ``` - - If you are using **EI 6**, you can enable this property by uncommenting **synapse.xpath.dom.failover.enabled=true** property in PRODUCT-HOME/conf/synapse.properties file. - -3. In this example we use the SimpleStockQuote service backend. Therefore, the SimpleStockQuote service needs to be started. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -1. First let's create the following sequences, which are buildMessage, createQueue, sendMessage and ReceiveAndForwardMessage. Right click on the created Integration Project and select, -> **New** -> **Sequence** to create the Sequence. - Adding a Sequence - -2. Provide the Sequence name as buildMessage. You can go to the source view of the XML configuration file of the API and copy the following configuration. In this sequence we are taking the user's input `companyName` and we build the message using a Payload Factory Mediator. - ``` - - - - - - - - $1 - - - - - - - -
    - - - - - - ``` -3. Create the createQueue sequence as shown below. In this sequence, we create a queue in the Amazon SQS instance. - ``` - - - - AKIAJRM3ROHOPXQ4V6QA - r7hfmtqVaLiRZSwnKxni4mq7MJ2kkUZ2GlcCkBNg - 2009-02-01 - us-east-2 - - - {$ctx:queueName} - - - - - - - - - - - ``` - - 4. Create sendMessage sequence as shown below. In this sequence, we send the message that we built in step 1 to the Amazon SQS Queue. - ``` - - - - AKIAJRM3ROJKJJXQ4V6QA - r7hfmtqVjdwieILi4mq7MJ2kkUZ2GlcCkBNg - 2009-02-01 - us-east-2 - - - {$ctx:queueId} - {$ctx:queueName} - {$ctx:target_property} - - - ``` - - 5. Create the ReceiveAndForwardMessage sequence as shown below. In this sequence, we will receive the message from the Amazon SQS queue and forward it into the StockQuote Endpoint. - ``` - - - - AKIAJRM3ROJKJJXQ4V6QA - r7hfmtqVjdwieILi4mq7MJ2kkUZ2GlcCkBNg - 2009-02-01 - us-east-2 - - - 5 - {$ctx:queueId} - {$ctx:queueName} - - - - - - $1 - - - - - - -
    - - - - - ``` - - 6. Now right click on the created Integration Project and select **New** -> **Rest API** to create the REST API. - - 7. Provide the API name as SQSAPI and the API context as `/sqs`. You can go to the source view of the XML configuration file of the API and copy the following configuration. - ``` - - - - - - - - - - - - - - - - ``` - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -!!! tip - You may need to update the value of the access key and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -1. Create a file called data.json with the following payload. - ``` - { - "companyName":"WSO2", - "queueName":"Queue1" - } - ``` -2. Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - ``` - curl -H "Content-Type: application/json" --request POST --data @body.json http://localhost:8290/sqs/sendToQueue - ``` - -**Expected Response**: - -You should get the following response with the 'sys_id' and keep it saved. - -``` - - - 4.233604086603518 - -8.707965767387106 - -150.5908765590026 - 153.98353327622493 - Wed Apr 08 10:38:56 IST 2020 - 158.9975778178183 - -565228.6001002677 - WSO2 Company - -151.38099715271312 - 23.761940918708092 - -2.8310759126772127 - -149.5404650806414 - WSO2 - 9834 - - -``` - - diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-overview.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-overview.md deleted file mode 100644 index 8501a7fb28..0000000000 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-overview.md +++ /dev/null @@ -1,41 +0,0 @@ -# Amazon SQS Connector Overview - -Amazon Simple Queue Service (SQS) is a fully managed message queuing service that allows you to run business applications and services so that the messaging is not dependent on the IT infrastructure itself. This means the messages can run and fail independently of each other in a way that does not cause slowdowns, system-wide faults, or a disturbance within the application. By using Amazon SQS, you can move data between distributed components of your applications that perform different tasks without losing messages or requiring each component to be always available. - -To see the Amazon SQS connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Amazon". - -Amazon SQS Connector Store - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 1.1.1 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 | - -For older versions, see the details in the connector store. - -## Amazon SQS Connector documentation - -The WSO2 Amazon SQS connector allows you to access the exposed API through the integration runtime. Through this connector, you can perform CRUD operations for queues in Amazon SQS instance, update permissions and can work with messages. For further reference, please refer to [Amazon SQS API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Welcome.html). 
- -* **[Amazon SQS Connector Example]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-connector-example/)**: This example explains how to use the Amazon SQS Connector to create a queue in the Amazon SQS, send a message to the queue, forward it to a backend service and send the response to the user. - -* **[Amazon SQS Connector Reference]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-connector-config/)**: This documentation provides a reference guide for the Amazon SQS Connector. - -## Amazon SQS Inbound Endpoint - -The AmazonSQS Inbound Endpoint allows you to connect to Amazon and consume messages form an Amazon SQS queue. The messages are then injected into the mediation engine for further processing and mediation. - -* **[Amazon SQS Inbound Endpoint Example]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-example/)**: This example demonstrates how the AmazonSQS inbound endpoint works as a message consumer. - -* **[Amazon SQS Inbound Endpoint Reference]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration/)**: This documentation provides a reference guide for the Amazon SQS Inbound Endpoint. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Amazon SQS Connector GitHub repository](https://github.com/wso2-extensions/esb-inbound-amazonsqs) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-example.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-example.md deleted file mode 100644 index c66dc41b3e..0000000000 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-example.md +++ /dev/null @@ -1,105 +0,0 @@ -# AmazonSQS Inbound Endpoint Example - -The AmazonSQS Inbound Endpoint allows you to connect to Amazon and consume messages form an Amazon SQS queue. The messages are then injected into the mediation engine for further processing and mediation. - -## What you'll build - -This scenario demonstrates how the AmazonSQS inbound endpoint works as a message consumer. In this scenario, you should have a connectivity with Amazon AWS account. Please follow the steps mentioned in the [Setting up the Amazon Lambda Environment]({{base_path}}/reference/connectors/amazonlambda-connector/setting-up-amazonlambda/) document in order to create an Amazon account and obtain access key id and secret access key. - -The Amazon SQS queue will receive messages from a third party system, while the integration runtime will keep listening to the messages from that queue. First you need to create a **Queue** inside the **Simple Queue Service** and send a message to the created Queue. The WSO2 AmazonSQS Inbound Endpoint will receive the message and notify. If you are extending this sample scenario, you can perform any kind of mediation using the [mediators]({{base_path}}/reference/mediators/about-mediators/). - -Following diagram shows the overall solution we are going to build. The Simple Queue Service will receive messages from the outside, while the AmazonSQS inbound endpoint will consume messages based on the updates. - -AmazonSQS Inbound Endpoint - -## Configure inbound endpoint using WSO2 Integration Studio - -1. 
Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as below. - - Creating a new Integration Project - -2. Right click on **Created Integration Project** -> **New** -> **Inbound Endpoint** -> **Create A New Inbound Endpoint** -> **Inbound Endpoint Creation Type**and select as **custom** -> Click **Next**. - - Creating inbound endpoint - -3. Click on **Inbound Endpoint** in design view and under `properties` tab, update class name to `org.wso2.carbon.inbound.amazonsqs.AmazonSQSPollingConsumer`. - -4. Navigate to the source view and update it with the following configuration as required. - - ```xml - - - - true - 2000 - true - 19 - 10 - https://sqs.us-east-2.amazonaws.com/610968236798/eiconnectortestSQS - AKIAY4QELOL7GF35XBW5 - SuQ4RsE/ZTf2H9VEXnMCvq8Pg8qSUHWpdyaV1QhJ - attributeName1,contentType - application/json - org.wso2.carbon.inbound.amazonsqs.AmazonSQSPollingConsumer - polling - - - ``` - **Sequence to process the message** - - In this example, for simplicity we will just log the message, but in a real world use case, this can be any type of message mediation. - - ```xml - - - - - ``` -> **Note**: To configure the `secretKey` and `accessKey` parameter value, please use the [Setting up the Amazon Lambda Environment]({{base_path}}/reference/connectors/amazonlambda-connector/setting-up-amazonlambda/) documentation. -> - **secretKey** : The secret key used to sign requests. -> - **accessKey** : The access key that corresponds to the secret key that you used to sign the request. -> - **destination** : URL of the Amazon SQS Queue from which you want to consume messages. - -## Exporting Integration Logic as a CApp - -**CApp (Carbon Application)** is the deployable artefact on the integration runtime. Let us see how we can export integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` needs to be created. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can specifically create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**. - -1. Right click on Composite Application Project and click on **Export Composite Application Project**.
    - Export as a Carbon Application - -2. Select an **Export Destination** where you want to save the .car file. - -3. In the next **Create a deployable CAR file** screen, select inbound endpoint and sequence artifacts and click **Finish**. The CApp will get created at the specified location provided in the previous step. - -## Deployment - -1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `AmazonSQS Connector`. Click on `AmazonSQS Inbound Endpoint` and download the .jar file by clicking on `Download Inbound Endpoint`. - > **Note**: Copy this .jar file into **/dropins** folder. - -2. Copy the exported carbon application to the **/repository/deployment/server/carbonapps** folder. - -3. [Start the integration server]({{base_path}}/get-started/quick-start-guide/integration-qsg/#start-the-micro-integrator). - -## Testing - -Please log in to the Amazon **Simple Queue Service**-> created **Queue**. Select the Queue and **right click**-> **Send a Message**-> enter **Message**, or you can even use [AmazonSQS Connector Example]({{base_path}}/reference/connectors/amazonsqs-connector/amazonsqs-connector-example) we have implemented before. - -**Sample Message** - -``` -{"Message":"Test Amazon SQS Service"} -``` -AmazonSQS Inbound Endpoint will consume message from the Simple Queue Service. - -**Expected response** - -You will see following message in the server log file (found at /repository/logs/wso2carbon.log). - -```bash -[2020-05-22 12:28:03,799] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:CB783799949CD049281590130683750, Direction: request, Payload: {"Message":"Test Amazon SQS Service"} -``` \ No newline at end of file diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md deleted file mode 100644 index 4fbd76b798..0000000000 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md +++ /dev/null @@ -1,116 +0,0 @@ -# AmazonSQS Inbound Endpoint Reference - -The following configurations allow you to configure AmazonSQS Inbound Endpoint for your scenario. - -!!! note - If your server is running on an EC2 instance, you can use [IAM role for authentication](https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) with Amazon SQS Inbound Endpoint v1.1.0 and above. Please note that both the `secretKey` and `accessKey` parameters should be excluded when using IAM Role authentication. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter | Description | Required | Possible Values | Default Value |
|---|---|---|---|---|
| waitTime | The time to wait when polling queues for messages. By default, there is no wait (short polling). Setting waitTime up to 20 seconds (the maximum wait time) enables long polling. | No | 0 - 20 | 0 |
| destination | URL of the Amazon SQS queue from which you want to consume messages. | Yes | N/A | N/A |
| secretKey | The secret key used to sign requests (a 40-character sequence). | Yes (not required when IAM Role authentication is used) | N/A | N/A |
| accessKey | The access key that corresponds to the secret key that you used to sign the request (a 20-character sequence). | Yes (not required when IAM Role authentication is used) | N/A | N/A |
| maxNoOfMessage | The maximum number of messages to return. Amazon SQS never returns more messages than this value, but might return fewer; not all messages in the queue are necessarily returned. | No | 1-10 | 1 |
| attributeNames | A comma-separated list of attributes you want to return along with the received message. | No | N/A | N/A |
| contentType | The content type of the message. | No | text/plain, application/json, application/xml | text/plain |
| autoRemoveMessage | Whether the message should be deleted from the queue after it is consumed. If this parameter is set to false, the message stays in the queue until the message retention period expires. | No | true, false | true |
| SET_ROLLBACK_ONLY | On failure, the mediation flow moves to the fault sequence specified in the configuration. If the SET_ROLLBACK_ONLY property is set to "true" in that fault sequence, the message is rolled back to the Amazon SQS queue. | No | `<property name="SET_ROLLBACK_ONLY" value="true"/>` | - |
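When the server runs on an EC2 instance with an IAM role (inbound endpoint v1.1.0 and above), the `secretKey` and `accessKey` parameters are simply omitted, as noted above. A minimal sketch, assuming the same polling consumer class and endpoint attributes used in the example elsewhere in this documentation, with a placeholder queue URL:

```xml
<inboundEndpoint name="AmazonSQSInboundEP" sequence="request" onError="fault"
                 class="org.wso2.carbon.inbound.amazonsqs.AmazonSQSPollingConsumer" suspend="false">
    <parameters>
        <parameter name="interval">2000</parameter>
        <!-- Placeholder queue URL; replace with your queue -->
        <parameter name="destination">https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue</parameter>
        <!-- No secretKey/accessKey: credentials come from the instance's IAM role -->
        <parameter name="waitTime">19</parameter>
        <parameter name="maxNoOfMessage">10</parameter>
        <parameter name="contentType">application/json</parameter>
        <parameter name="autoRemoveMessage">true</parameter>
    </parameters>
</inboundEndpoint>
```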
    - - - **SET_ROLLBACK_ONLY Property** - - If a failure occurs, the Amazon SQS message will roll back. In the following property is set to true in the fault handler, in order to roll back the Amazon SQS queue messages when a failure occurs. - - ``` - - ``` - -??? note "Sample fault sequence" - ``` - - - - - - - - - - - - - - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-configuration.md b/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-configuration.md deleted file mode 100644 index 3890fbd765..0000000000 --- a/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-configuration.md +++ /dev/null @@ -1,65 +0,0 @@ -# Setting up the AS400 PCML Environment - -The AS400 PCML connector allows you to access RPG programs that are available on AS400 (renamed as IBM iSeries). This is done using [Program Call Markup Language](https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/rzahh/pcml.htm) (PCML). - -you need to have access to an IBM iSeries server. If you do not have an on-premise IBM iSeries server, go to public IBM iSeries server, and create a [public IBM iSeries server](https://pub400.com/) account. - -The connector uses the IBM JTOpen library for all its operations. Copy the **jt400.jar** to `/repository/components/lib` folder. You can download the IBM JTOpen library from (here)[https://sourceforge.net/projects/jt400/]. - -### Setting up public IBM iSeries server - -Follow the steps below to create public IBM iSeries server. - -1. Go to [https://pub400.com](https://pub400.com/) and click the **Sign up** button. - - AS400-signup page - -1. Go to [https://pub400.com](https://pub400.com/) and click the **Sign up** button. - - AS400-signup details - -3. You will receive an email response with the client credentials to your email account (it will take a few hours). - -4. Navigate in to the [IBM i Access - Client Solutions](https://www.ibm.com/support/pages/ibm-i-access-client-solutions) page. IBM i Access Client Solutions consolidates the most commonly used tasks for managing your IBM i into one simplified location. - -5. Download the **IBM i Access Client Solutions**. - - IBM i Access Client Solutions - -6. Extract the downloaded .zip file and install it. - -7. Start the IBM I access client solution. - - IBM i Access Client Solutions start - -8. Click **System Configurations** section under **Management** section. - - IBM i Access system-configuration - -9. The following window appears system configuration parameters. Click the **New** button. - - IBM i Access add new system - -10. Add the required details. Add the **System name** and **Description** under General tab. In this sample we use `pub400.com` as the **System name**. - - IBM i Access add new system name - -11. Add the **Default username** and **port** number under the **Connection** tab. In this sample we need to use `port` number as `23`. Click **OK**. - - IBM i Access add port number - -12. After setting up the configuration we can use the **5250 Emulator**. Click the **5250 Emulator** link under the **General** section. - - IBM 5250-emulator - -13. When you click this emulator you need to add the username and password you got in step 3. Then you will see the following window. - - PUB400 server command line view - -14. You need to add the same user name and password you got in the step 3. - -15. Now you can see the following **IBM i Main Menu**. 
    IBM i Main Menu

To configure the AS400 PCML connector, you will need the AS400 public server `username` and `password`.
\ No newline at end of file
diff --git a/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-reference.md b/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-reference.md
deleted file mode 100644
index 3890fbd765..0000000000
--- a/en/docs/reference/connectors/as400-pcml-connector/as400-pcml-connector-reference.md
+++ /dev/null
@@ -1,393 +0,0 @@

# AS400 PCML Connector Reference

The following operations allow you to work with the AS400 PCML Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the AS400 PCML connector, add the `<pcml.init>` element in your configuration before carrying out any other PCML operations.

??? note "init"
    The init operation is used to initialize the connection to the AS400 server.

    **Sample configuration**

    ```xml
    <pcml.init>
        <systemName>AS400_SystemName</systemName>
        <userID>MyUserID</userID>
        <password>MyPassword</password>
    </pcml.init>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | systemName | The name of the AS400 system that you need to connect to. | Yes |
    | userID | The user ID to use when connecting to the AS400 system. | Yes |
    | password | The password to use when connecting to the AS400 system. | Yes |
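    In practice the connection details are usually supplied at runtime rather than hard-coded. A sketch using property expressions, assuming `systemName`, `userID`, and `password` properties have been set earlier in the mediation flow:

    ```xml
    <pcml.init>
        <!-- Values resolved from message-context properties set upstream -->
        <systemName>{$ctx:systemName}</systemName>
        <userID>{$ctx:userID}</userID>
        <password>{$ctx:password}</password>
    </pcml.init>
    ```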
    **Using an AS400 Connection Pool**

    The connector also supports creating AS400 connections using a connection pool, and this can be declared in the init operation.

    **init with connection pool declaration**

    ```xml
    <pcml.init>
        <systemName>AS400_SystemName</systemName>
        <userID>MyUserID</userID>
        <password>MyPassword</password>
        <pool.poolName>MyConnectionPool</pool.poolName>
        <pool.maxConnections>50</pool.maxConnections>
        <pool.maxInactivity>30000</pool.maxInactivity>
        <pool.maxLifetime>600000</pool.maxLifetime>
        <pool.maxUseCount>-1</pool.maxUseCount>
        <pool.maxUseTime>300000</pool.maxUseTime>
        <pool.runMaintenance>true</pool.runMaintenance>
        <pool.threadUsed>true</pool.threadUsed>
        <pool.cleanupInterval>300000</pool.cleanupInterval>
        <pool.pretestConnections>true</pool.pretestConnections>
    </pcml.init>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | poolName | The name used to uniquely identify a connection pool. | Yes |
    | maxConnections | The maximum number of connections. | Yes |
    | maxInactivity | The maximum time in milliseconds of inactivity before an available connection is closed. | Yes |
    | maxLifetime | The maximum life in milliseconds for an available connection. | Yes |
    | maxUseCount | The maximum number of times a connection can be used before it is replaced in the pool. | Yes |
    | maxUseTime | The maximum time in milliseconds a connection can be in use before it is closed and returned to the pool. | Yes |
    | runMaintenance | Indicates whether the maintenance thread is used to clean up expired connections. | Yes |
    | threadUsed | Indicates whether threads are used for communicating with the host servers and for running maintenance. The default value is true. | Yes |
    | cleanupInterval | The time interval in milliseconds for running the maintenance daemon. Default value is 300000 milliseconds. | Yes |
    | pretestConnections | Indicates whether connections are pretested before they are allocated to requesters. | Yes |
    Each AS400 connection pool is mapped against a given pool name and stored within the ESB memory. A new connection pool is created only if a connection pool with the given pool name does not exist; if it does exist, the connector uses the existing pool. Connection pools are stored within a single server node and are not distributed among the cluster. No pool-related parameters take effect unless `pool.poolName` is defined. When using connection pools in the WSO2 integration server, the first request that comes into the mediation flow creates the AS400 connection pool and uses it; every subsequent request obtains connections from the created pool. After using a connection from the connection pool, it is mandatory to return the connection to the pool. The connection can be returned to the pool by using a call operation or by using a returnPool operation.

    **Setting Socket Properties for the AS400 Connection**

    The connector allows setting socket properties for the AS400 connection. These properties can be used depending on the use case, for example to prevent the AS400 connection from timing out.

    ```xml
    <pcml.init>
        <systemName>AS400_SystemName</systemName>
        <userID>MyUserID</userID>
        <password>MyPassword</password>
        <keepAlive>false</keepAlive>
        <loginTimeout>10000</loginTimeout>
        <receiveBufferSize>87380</receiveBufferSize>
        <sendBufferSize>16384</sendBufferSize>
        <soLinger>0</soLinger>
        <soTimeout>15000</soTimeout>
        <tcpNoDelay>false</tcpNoDelay>
    </pcml.init>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | keepAlive | Value for the SO_KEEPALIVE socket option. | Yes |
    | loginTimeout | The timeout value in milliseconds when creating a new socket connection. | Yes |
    | receiveBufferSize | Value in bytes for the SO_RCVBUF socket option. | Yes |
    | sendBufferSize | Value in bytes for the SO_SNDBUF socket option. | Yes |
    | soLinger | Value in seconds for the SO_LINGER socket option. | Yes |
    | soTimeout | Value in milliseconds for the SO_TIMEOUT socket option. | Yes |
    | tcpNoDelay | Value for the TCP_NODELAY socket option. | Yes |
    - - All above properties are optional. When using socket properties with AS400 connection pools, the socket properties are applied to the connection pool directly. - - - ---- - -### call Operation - -??? note "call" - The call operation can be used to access a program in the AS400 server. - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | programName | The name of the program that you need to call. | Yes |
    | pcmlFileLocation | The location of the PCML file in the registry. | Yes |
    | pcmlInputs | The XML representation of the input parameters for the program. | Yes |
    | pool.returnPoolName | The name of the pool to which the AS400 connection should be returned once the program call is finished. | Yes |
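    As a quick orientation before the full samples, a minimal call configuration might look like the following sketch, assuming each parameter maps to an element of the same name:

    ```xml
    <pcml.call>
        <programName>MyProgram</programName>
        <!-- PCML source stored as a registry resource -->
        <pcmlFileLocation>conf:/pcml/my-pcml-file.pcml</pcmlFileLocation>
        <!-- Input values taken from the incoming payload -->
        <pcmlInputs>{//pcml:pcmlInputs}</pcmlInputs>
        <pool.returnPoolName>MyConnectionPool</pool.returnPoolName>
    </pcml.call>
    ```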
    - - **Sample configurations** - - ```xml - - MyProgram - conf:/pcml/my-pcml-file.pcml - {//pcml:pcmlInputs} - MyConnectionPool - - ``` - Let's assume that there is an RPG program in an AS400 server that performs an addition when two input values are provided. Following will be the PCML source file that is required for this program. This PCML file needs to be stored as /_system/config/pcml/PcmlNumberAddition.pcml resource in the ESB registry. - - ```xml - - - - - - - - ``` - - **Sample request** - - ```xml - - - 5 - 10 - - - ``` - - **Sample response** - - ```xml - < - - - - 5 - 10 - 15 - - - - ``` - - ---- - -### returnPool Operation - -??? note "returnPool" - The returnPool operation can be used to return the AS400 instance from the mediation flow back into the connection pool. - - - - - - - - - - - -
    | Parameter Name | Description | Required |
    |---|---|---|
    | returnPoolName | The name of the pool to which the AS400 connection needs to be returned. | Yes |
    ```xml
    <pcml.returnPool>
        <returnPoolName>MyConnectionPool</returnPoolName>
    </pcml.returnPool>
    ```

---

### trace Operation

??? note "trace"
    The trace operation can be used to enable/disable the trace logs that are generated by the JTOpen library. The Trace feature of the JTOpen library provides several levels of logging, and each of these logging levels can be enabled/disabled using this operation. The operation also allows enabling/disabling all log levels at once. This operation is meant for development and debugging purposes only. All log levels are disabled by default. The generated logs will be available in the `<PRODUCT_HOME>/repository/logs/pcml-connector-logs.log` file. The location of the log files can be changed only during startup by setting the `com.ibm.as400.access.Trace.file` system property.
    | Parameter Name | Description | Required |
    |---|---|---|
    | all | Whether all trace logs need to be enabled/disabled. | Yes |
    | conversion | Whether conversion trace logs need to be enabled/disabled. | Yes |
    | datastream | Whether datastream logs need to be enabled/disabled. | Yes |
    | diagnostics | Whether diagnostic trace logs need to be enabled/disabled. | Yes |
    | error | Whether error trace logs need to be enabled/disabled. | Yes |
    | information | Whether information logs need to be enabled/disabled. | Yes |
    | pcml | Whether PCML trace logs need to be enabled/disabled. | Yes |
    | warning | Whether warning trace logs need to be enabled/disabled. | Yes |
    | proxy | Whether proxy logs need to be enabled/disabled. | Yes |
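    As an illustration, a trace configuration that enables only the error, warning, diagnostics, and PCML levels might look like the following sketch, assuming each trace level maps to an element named after its parameter:

    ```xml
    <pcml.trace>
        <error>true</error>
        <warning>true</warning>
        <diagnostics>true</diagnostics>
        <pcml>true</pcml>
        <!-- Leave the noisier levels off -->
        <information>false</information>
        <datastream>false</datastream>
        <conversion>false</conversion>
        <proxy>false</proxy>
    </pcml.trace>
    ```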
    - - ```xml - - true - true - false - true - true - true - true - false - - ``` - Important! - - Enabling and disabling the log levels will effect all AS400 PCML connector based operations within the ESB instance. If there are two proxy services within ESB that use the connector, and trace logs for a specific level are enabled for one of them, the same level of logs will be enabled for the other proxy service as well. - - Trace Operation is not recommended for Production Environments. Enabling trace logs is not recommended for production environments as using the trace operation will generate logs in the log file but this file does not get cleared. \ No newline at end of file diff --git a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-configuration.md b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-configuration.md deleted file mode 100644 index 295fe44922..0000000000 --- a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-configuration.md +++ /dev/null @@ -1,135 +0,0 @@ -# Setting up the BigQuery Environment - -The BigQuery connector allows you to access the [BigQuery REST API](https://cloud.google.com/bigquery/docs/reference/rest) from an integration sequence. - -To work with the BigQuery connector, you need to have a Google Cloud Platform account. If you do not have a Google Cloud Platform account, go to [console.cloud.google.com](https://console.cloud.google.com/freetrial), and create a Google Cloud Platform trial account. - -BigQuery uses the OAuth 2.0 protocol for authorization. All requests to the BigQuery REST API will be authorized against a registered user. Developers can generate user credentials from the Google Cloud Platform using two different mechanisms. See the following sections for detail. - -### Obtaining user credentials - -Follow the steps below to generate user credentials. - -**Obtaining a client ID and client secret** - -1. Go to [https://accounts.google.com/SignUphttps://accounts.google.com/SignUp](https://accounts.google.com/SignUp) and create a Google account. - -2. Go to [https://console.developers.google.com/projectselector/apis/credentials](https://console.developers.google.com/apis/credentials), and sign in to your **Google account**. - - Bigquery-credentials-page - -3. If you do not already have a project, you can create a new project. Click **Create credentials** and then select **OAuth client ID**. - - Select OAuth client ID - -4. Next, **select** Web Application, and **create a client**. - - Select web application - -5. Add [https://www.google.com](https://www.google.com) as the redirect URI (you can add any URI that you need as redirect URI) under **Authorized redirect URIs**, and then click **Create**. This displays the **client ID** and **client secret**. - - Authorization-redirect-URI - -6. Make a note of the **client ID** and **client secret** that is displayed, and then **click OK**. - -7. Click **Library** on the left navigation pane. - - Select library - -8. Search **BigQuery API** under the **Big data category**. - - Pubsub API - -9. Click **Enable**. This enables the BigQuery API. - - Pubsub enable API - -10. Get the authorization code by sending a GET request to the following URL. Replace the `` and `` with the redirect URI and client ID values noted in the previous steps. Enter the following URL in your web browser. 
- - ``` - https://accounts.google.com/o/oauth2/auth?redirect_uri=&response_type=code&client_id=&scope=https://www.googleapis.com/auth/bigquery&approval_prompt=force&access_type=offline - ``` - Note the authorization code for future use. - - Get authorization code - -11. Get the `access token` and `refresh token` by sending a POST request to the URL given below. Be sure to use an **x-www-form-urlencoded** body with the ``, ``, ``, and `` values noted before, and also set the `grant_type` to **authorization_code**. You will need them to configure the WSO2 Big Query Connector. - - ``` - https://www.googleapis.com/oauth2/v3/token. - ``` - Bigquery get token using postman - -### Obtaining credentials using the service account - -1. Open the [Service Accounts](https://console.cloud.google.com/projectselector2/iam-admin/serviceaccounts) page in the GCP console. - - Bigquery service account - -2. Select your project and click **Open**. - -3. Click **Create Service Account**. - - Bigquery create service account - -4. Enter **Service account details**, and then click **Create**. - - Bigquery enter service account - -5. Select a **role** you wish to grant to the service account and click **Continue**. - - Bigquery enter service account role - -6. Grant users access to this service account (optional) and click **Done**. - - Bigquery enter service account grant user access - -7. Go to the service account for which you wish to create a key and **click** the created Service account in that row. - - Bigquery enter service account grant user access - -8. Click **Create key**. - - Bigquery service account create key - -9. Select the key type as **P12** and click **Create**. Then the created key will be downloaded. - -### Creating Project, Dataset and Table - -**Creating Project** - -1. Open the BigQuery console. - -2. Click **down arrow key** shown in the following image. - - Bigquery create project step1 - -3. Click **New Project**. - - Bigquery create project step2 - -3. Enter new project details. - - Bigquery create project step3 - -**Creating Dataset** - -1. After creating the Project, click the created **project**. You can see the following details. Then click Create **Dataset**. - - Bigquery create Dataset step1 - -2. Enter required Dataset details and click **Create Dataset**. - - Bigquery create Dataset step2 - -**Creating Table** - -1. After creating the Dataset, click the created **Dataset**. You can see the following details. Then click **Create Table**. - - Bigquery create Table step1 - -2. Enter required Table details and click **Create**. - - Bigquery create Table step2 - -For more information about these operations, please refer to the [BigQuery connector reference guide]({{base_path}}/reference/connectors/bigquery-connector/bigquery-connector-reference/). \ No newline at end of file diff --git a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md deleted file mode 100644 index 6bcbf5d03a..0000000000 --- a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md +++ /dev/null @@ -1,719 +0,0 @@ -# BigQuery Connector Example - -The WSO2 BigQuery connector is mostly comprised of operations that are useful for retrieving BigQuery data such as project details, datasets, tables, and jobs (it has one operation that can be used to insert data into BigQuery tables). - -In this example we are trying to build up a sample scenario based on the BigQuery Table operations. 
- -## What you'll build - -Given below is a sample scenario that demonstrates how to work with the WSO2 BigQuery Connector: - -1. The user sends the request to invoke an API to get created table details from the BigQuery. This REST call will retrieve schema level information and send it back to the API caller. -2. Insert data in to the created table. -3. Retrieve inserted details from the BigQuery table. -4. Run an SQL query (BigQuery) and retrieve details from BigQuery table. - -All four operations are exposed via an `bigquery-testAPI` API. The API with the context `/resources` has four resources. - -* `/gettabledetails`: This is used to get get created table details from the BigQuery table by ID. -* `/insertdetails` : This is used to inserts the data into the table. -* `/getdetails` : This is used to retrieves table data from a specified set of rows. -* `/runQuery` : The runQuery operation runs an SQL query (BigQuery) and returns results if the query completes within a specified timeout. - - > **Note**: Before starting this scenario, you need to create a **project** in BigQuery. Next, create a **Dataset** and under that Dataset you have to have **Table**. For more information about these operations, please refer to the [Setting up the BigQuery Environment]({{base_path}}/reference/connectors/bigquery-connector/bigquery-connector-configuration/). - -The following diagram shows the overall solution. User can invoke the table schema level details from the `gettabledetails` resource. Using the response details, the API caller can insert data into the created table. If users need to retrieve table data from a specified set of rows, they need to invoke the `getdetails` resource. Finally `/runQuery` resource runs an SQL query (BigQuery) and returns results back to the API caller. - -BigQuery connector example - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Configure the connector in WSO2 Integration Studio - -Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources. - -### Import the connector - -Follow these steps to set up the Integration Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -### Add integration logic - -First create an API, which will be where we configure the integration logic. Right click on the created Integration Project and select, **New** -> **Rest API** to create the REST API. Specify the API name as `bigquery-testAPI` and API context as `/resources`. - -Adding a Rest API - -#### Configuring the API - -##### Configure a resource for the gettabledetails operation - -Create a resource that to invoke an API to get created table details from the BigQuery. To achieve this, add the following components to the configuration. - -1. Initialize the connector. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **BigQuery Connector** section. Then drag and drop the `init` operation into the Design pane. - - Drag and drop init operation - - 2. Add the property values into the `init` operation as shown below. Replace the `apiUrl`, `accessToken`, `clientId`, `clientSecret`, `refreshToken`, `apiKey`, `callback`, and `prettyPrint` with your values. - - - **apiUrl**: The base endpoint URL of the BigQuery API. 
- - **accessToken**: The OAuth token for the BigQuery API. - - **clientId** : The client ID for the BigQuery API. - - **clientSecret** : The client Secret for the BigQuery API. - - **refreshToken** : The refresh token for the BigQuery API. - - **apiKey** : The API key. Required unless you provide an OAuth 2.0 token. - - **callback** : The name of the JavaScript callback function that handles the response. Used in JavaScript JSON-P requests. - - **prettyPrint** : Returns the response with indentations and line breaks. If the property is true, the response is returned in a human-readable format. - - Add values to the init operation - -2. Set up the getTable operation. This operation retrieves a table by ID. - - 1. Navigate into the **Palette** pane and select the graphical operations icons listed under **BigQuery Connector** section. Then drag and drop the `getTable` operation into the Design pane. - - Drag and drop getTable operation - - 2. In this operation we are going to get a BigQuery table details. - - - **datasetId** : The dataset ID of the requested table. - - **projectId** : The project ID of the requested table. - - **tableId** : The ID of the requested table. - - In this example, the above `datasetId`,`projectId` and `tableId` parameter values are populated as an input value for the BigQuery `getTable` operation. - - hSet parameters - -3. To get the input values in to the `getTable`, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under **Mediators** section. Then drag and drop the `Property` mediators onto the Design pane as shown below. - > **Note**: The properties should be added to the pallet before creating the operation. - - The parameters available for configuring the Property mediator are as follows: - - 1. Add the property mediator to capture the `tableId` value. The 'tableId' contains the ID of the requested table. - - - **name** : tableId - - **value expression** : json-eval($.tableId) - - Add property mediators to get tableId - - 2. Add the property mediator to capture the `datasetId` values. The 'volume' contains stock quote volume of the selected company. - - - **name** : datasetId - - **value expression** : json-eval($.datasetId) - - Add property mediators to get datasetId - - 3. Add the property mediator to capture the `projectId` values. The 'volume' contains stock quote volume of the selected company. - - - **name** : projectId - - **value expression** : json-eval($.projectId) - - Add property mediators to get projectId - -4. Forward the backend response to the API caller. - - When you are invoking the created resource, the request of the message is going through the `/gettabledetails` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond Mediator stops the processing on the current message and sends the message back to the client as a response. - - 1. Drag and drop **respond mediator** to the **Design view**. - - Add Respond mediator - - 2. Once you have setup the resource, you can see the `gettabledetails` resource as shown below. - - Resource design view - -##### Configure a resource for the insertdetails operation - -1. Initialize the connector. - You can use the same configuration to initialize the connector. Please follow the steps given in section 1 for setting up the `init` operation to the `gettabledetails` operation. - -2. Set up the insertAllTableData operation. 
- Navigate into the **Palette** pane and select the graphical operations icons listed under **BigQuery Connector** section. Then drag and drop the `insertAllTableData` operation into the Design pane. The `insertAllTableData` operation inserts the data into the table. - - - **datasetId** : The dataset ID of the requested table. - - **projectId** : The project ID of the requested table. - - **tableId** : The ID of the requested table. - - **skipInvalidRows** : A boolean value to check whether the row should be validated. - - **ignoreUnknownValues** : A boolean value to validate whether the values match the table schema. - - **jsonPay** : A JSON object that contains a row of data. - - Drag and drop insertAllTableData operation - -3. To get the input values in to the `getTable`, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under **Mediators** section. Then drag and drop the `Property` mediators onto the Design pane as shown below. - - The parameters available for configuring the Property mediator are as follows: - - 1. Add the property mediator to capture the `datasetId`, `projectId`, `tableId` values. Please follow the steps given in `gettabledetails` operation section 3. - - 2. Add the property mediator to capture the `datasetId` values. The 'volume' contains stock quote volume of the selected company. - - - **name** : jsonPay - - **value expression** : json-eval($.jsonPay) - - Add property mediators to get jsonPay - - In this example, `skipInvalidRows` value is configured as **true** and `ignoreUnknownValues` value is configured as **true**. - -4. Forward the backend response to the API caller. Please follow the steps given in section 4 in the `gettabledetails` operation. - -##### Configure a resource for the listTabledata operation - -1. Initialize the connector. - You can use the same configuration to initialize the connector. Please follow the steps given in section 1 for setting up the `init` operation to the `gettabledetails` operation. - -2. Set up the listTabledata operation. - Navigate into the **Palette** pane and select the graphical operations icons listed under **BigQuery Connector** section. Then drag and drop the `listTabledata` operation into the Design pane. The `listTabledata` operation retrieves table data from a specified set of rows. - - - **datasetId** : The dataset ID of the requested table. - - **projectId** : The project ID of the requested table. - - **tableId** : The ID of the requested table. - - Drag and drop insertAllTableData operation - -3. To get the input values in to the `listTabledata`, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under **Mediators** section. Then drag and drop the `Property` mediators onto the Design pane as shown below. - - The parameters available for configuring the Property mediator are as follows: - - 1. Add the property mediator to capture the `datasetId`, `projectId`, `tableId` values. Please follow the steps given in `gettabledetails` operation section 3. - -4. Forward the backend response to the API caller. Please follow the steps given in section 4 in the `gettabledetails` operation. - -##### Configure a resource for the /runQuery operation - -1. Initialize the connector. - You can use the same configuration to initialize the connector. 
Please follow the steps given in section 1 for setting up the `init` operation to the `gettabledetails` operation. - -2. Set up the /runQuery operation. - Navigate into the **Palette** pane and select the graphical operations icons listed under **BigQuery Connector** section. Then drag and drop the `/runQuery` operation into the Design pane. The `/runQuery` operation runs an SQL query (BigQuery) and returns results if the query completes within a specified timeout. - - - **projectId** : The project ID of the requested table. - - **kind** : The resource type of the request. - - **defaultProjectId** : The ID of the project that contains this dataset. - - **defaultDatasetId** : A unique ID (required) for this dataset without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters. - - **query** : A query string (required) that complies with the BigQuery query syntax. - - **maxResults** : The maximum number of rows of data (results) to return per page. Responses are also limited to 10 MB. By default, there is no maximum row count and only the byte limit applies. - - **timeoutMs** : Specifies how long (in milliseconds) the system should wait for the query to complete before expiring and returning the request. - - **dryRun** : If set to true, BigQuery does not run the job. Instead, if the query is valid, BigQuery returns statistics about the job. If the query is invalid, an error returns. The default value is false. - - **useQueryCache** : Specifies whether to look for the result in the query cache. The default value is true. - - Drag and drop insertAllTableData operation - -3. To get the input values in to the `runQuery`, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator/). Navigate into the **Palette** pane and select the graphical mediators icons listed under **Mediators** section. Then drag and drop the `Property` mediators onto the Design pane as shown below. - - The parameters available for configuring the Property mediator are as follows: - - 1. Add the property mediator to capture the `projectId`, `defaultDatasetId` value. Please follow the steps given in `gettabledetails` operation section 3. - In this example, `kind` value is configured as **bigquery#tableDataInsertAllResponse**, `query` value is configured as **SELECT * FROM students**, `maxResults` value is configured as **1000**, `timeoutMs` value is configured as **1000**, `dryRun` value is configured as **false** and `useQueryCache` value is configured as **true**. - -4. Forward the backend response to the API caller. Please follow the steps given in section 4 in the `gettabledetails` operation. - -Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - -??? 
note "bigquery-testAPI.xml" - ``` - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - true - true - {$ctx:templateSuffix} - {$ctx:jsonPay} - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - {$ctx:maxResults} - {$ctx:pageToken} - {$ctx:startIndex} - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:projectId} - bigquery#tableDataInsertAllResponse - SELECT * FROM students - 10000 - 10000 - false - true - {$ctx:defaultDatasetId} - {$ctx:defaultProjectId} - {$ctx:useLegacySql} - - - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - - - - - - - - - - - - - - https://www.googleapis.com - 
ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - true - true - {$ctx:templateSuffix} - {$ctx:jsonPay} - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:datasetId} - {$ctx:projectId} - {$ctx:tableId} - {$ctx:maxResults} - {$ctx:pageToken} - {$ctx:startIndex} - - - - - - - - - - - - - https://www.googleapis.com - ya29.a0AfH6SMA6j0L_cGNi0BpxXLGaYlUQUbkHpGY31iFpjz4VOlbx3PlP5XBWW9E5bvdqW7cu8kjxMqJ7WShYGxOooXNc20cnNkHOkfesaun6NnhA3omK8ERWKSfICJGucG1tp3P0mVWNtQ6M2ZdDgigQ-3gmB0Xtphj3Ovw - 392276369305-pg6a4bq41r79gsv3mdmd8vesscf477sf.apps.googleusercontent.com - UgtzggStea3Xfd9q7TUMeyNo - 1//0gCwbRibyQinFCgYIARAAGBASNwF-L9IrO9590FKKiOro0UUEZEHD4DiG9or41nbIEmWOzsaM22btR4QLKXHfGMDDUWK2hrp5EBo - {$ctx:registryPath} - XXXX - callBackFunction - true - {$ctx:quotaUser} - {$ctx:userIp} - {$ctx:fields} - {$ctx:ifNoneMatch} - {$ctx:ifMatch} - - - {$ctx:projectId} - bigquery#tableDataInsertAllResponse - SELECT * FROM students - 10000 - 10000 - false - true - {$ctx:defaultDatasetId} - {$ctx:defaultProjectId} - {$ctx:useLegacySql} - - - - - - - - ``` -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -1. The user sends the request to invoke an API to get created table details from the BigQuery. - - **Sample request** - - Save a file called **data.json** with the following payload. 
- - ```json - { - "tableId":"students", - "datasetId":"Sample1", - "projectId":"ei-connector-improvement" - } - ``` - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/getTable" -H "Content-Type:application/json" - ``` - **Expected Response** - - ```json - // API callback - callBackFunction({ - "kind": "bigquery#table", - "etag": "G5Yv0gFoLTD2gSToi5YPwA==", - "id": "ei-connector-improvement:Sample1.students", - "selfLink": "https://www.googleapis.com/bigquery/v2/projects/ei-connector-improvement/datasets/Sample1/tables/students", - "tableReference": { - "projectId": "ei-connector-improvement", - "datasetId": "Sample1", - "tableId": "students" - }, - "schema": { - "fields": [ - { - "name": "name", - "type": "STRING", - "mode": "NULLABLE" - }, - { - "name": "age", - "type": "INTEGER", - "mode": "NULLABLE" - } - ] - }, - "numBytes": "0", - "numLongTermBytes": "0", - "numRows": "0", - "creationTime": "1592219906721", - "lastModifiedTime": "1592219906768", - "type": "TABLE", - "location": "US" - } - ); - ``` -2. Insert data in to the created table. - - **Sample request** - - Save a file called **data.json** with the following payload. - - ```json - { - "tableId":"students", - "datasetId":"Sample1", - "projectId":"ei-connector-improvement", - "jsonPay":{ - "json": - { - "name":"Jhone", - "age":"30" - } - } - } - ``` - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/insertAllTableData" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - { - "kind": "bigquery#tableDataInsertAllResponse" - } - ``` -3. Retrieve inserted details from the BigQuery table. - - **Sample request** - - Save a file called **data.json** with the following payload. - - ```json - { - "tableId":"students", - "datasetId":"Sample1", - "projectId":"ei-connector-improvement" - } - ``` - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/listTabledata" -H "Content-Type:application/json" - ``` - - **Expected Response** - - ```json - // API callback - callBackFunction({ - "kind": "bigquery#tableDataList", - "etag": "CddYdG3ttrhpWPEGTOpKKg==", - "totalRows": "0", - "rows": [ - { - "f": [ - { - "v": "Kasun" - }, - { - "v": "25" - } - ] - }, - { - "f": [ - { - "v": "Jhone" - }, - { - "v": "30" - } - ] - } - ] - } - ); - ``` -4. Run an SQL query (BigQuery) and retrieve details from BigQuery table. - - **Sample request** - - Save a file called **data.json** with the following payload. 
- - ```json - { - "defaultDatasetId":"Sample1", - "projectId":"ei-connector-improvement" - } - ``` - - ``` - curl -v POST -d @data.json "http://localhost:8290/resources/runQuery" -H "Content-Type:application/json" - ``` - **Expected Response** - - ```json - { - "kind": "bigquery#queryResponse", - "schema": { - "fields": [ - { - "name": "name", - "type": "STRING", - "mode": "NULLABLE" - }, - { - "name": "age", - "type": "INTEGER", - "mode": "NULLABLE" - } - ] - }, - "jobReference": { - "projectId": "ei-connector-improvement", - "jobId": "job_YQS1kmzYpfBT-wKvkLi5uVbSL_Mh", - "location": "US" - }, - "totalRows": "2", - "rows": [ - { - "f": [ - { - "v": "Kasun" - }, - { - "v": "25" - } - ] - }, - { - "f": [ - { - "v": "Jhone" - }, - { - "v": "30" - } - ] - } - ], - "totalBytesProcessed": "30", - "jobComplete": true, - "cacheHit": false - } - ``` diff --git a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-overview.md b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-overview.md deleted file mode 100644 index 04de3e9b91..0000000000 --- a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-overview.md +++ /dev/null @@ -1,31 +0,0 @@ -# BigQuery Connector Overview - -The BigQuery connector allows you to access the [BigQuery REST API](https://cloud.google.com/bigquery/docs/reference/rest) from an integration sequence. BigQuery is a tool that allows you to execute SQL-like queries on large amounts of data at outstanding speeds. It is a serverless Software as a Service that supports querying using ANSI SQL. - -To see the BigQuery Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Event". **BigQuery** is the name of the connector that has this functionality. - -BigQuery Connector Store - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 1.0.9 | APIM 4.0.0, EI 7.1.0, EI 7.0.x EI 6.6.0 EI 6.5.0 | - -For older versions, see the details in the connector store. - -## BigQuery Connector Documentation - -* **[BigQuery Connector Example]({{base_path}}/reference/connectors/bigquery-connector/bigquery-connector-example/)**: This example demonstrates how to work with the BigQuery Connector. - -* **[BigQuery Connector Reference]({{base_path}}/reference/connectors/bigquery-connector/bigquery-connector-reference/)**: This documentation provides a reference guide for the BigQuery Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [BigQuery Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-bigquery) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-reference.md b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-reference.md deleted file mode 100644 index 5674967b3e..0000000000 --- a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-reference.md +++ /dev/null @@ -1,1131 +0,0 @@ -# BigQuery Connector Reference - -The following operations allow you to work with the BigQuery Connector. Click an operation name to see parameter details and samples on how to use it. 
---

## Initialize the connector

To use the BigQuery connector, add the `<bigquery.init>` element (or one of the access token operations described below) in your configuration before carrying out any other BigQuery operations.

The BigQuery API requires all requests to be authenticated as a user or a service account. For more information, see the [BigQuery authentication guide](https://cloud.google.com/bigquery/authentication), the [service account authentication guide](https://developers.google.com/identity/protocols/OAuth2ServiceAccount), and the related [OAuth 2.0 web server flow documentation](https://developers.google.com/identity/protocols/OAuth2WebServer).

??? note "init"
    The init operation is used to initialize the connection to BigQuery.

    **Sample configuration**

    ```xml
    <bigquery.init>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
        <accessToken>{$ctx:accessToken}</accessToken>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <clientId>{$ctx:clientId}</clientId>
        <refreshToken>{$ctx:refreshToken}</refreshToken>
        <registryPath>{$ctx:registryPath}</registryPath>
        <fields>{$ctx:fields}</fields>
        <prettyPrint>{$ctx:prettyPrint}</prettyPrint>
        <quotaUser>{$ctx:quotaUser}</quotaUser>
        <userIp>{$ctx:userIp}</userIp>
    </bigquery.init>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiUrl | The base endpoint URL of the BigQuery API. | Yes |
    | accessToken | The OAuth token for the BigQuery API. | Yes |
    | clientId | The client ID for the BigQuery API. | Yes |
    | clientSecret | The client secret for the BigQuery API. | Yes |
    | refreshToken | The refresh token for the BigQuery API. | Yes |
    | registryPath | The registry path to save the access token. | Yes |
    | fields | List of fields to be returned in the response. | Yes |
    | callback | The name of the JavaScript callback function that handles the response. Used in JavaScript JSON-P requests. | Yes |
    | apiKey | The API key. Required unless you provide an OAuth 2.0 token. | Yes |
    | prettyPrint | Returns the response with indentations and line breaks. If the property is true, the response is returned in a human-readable format. | Yes |
    | quotaUser | Alternative to userIp. Lets you enforce per-user quotas from a server-side application even in cases where the user's IP address is unknown. | Yes |
    | userIp | IP address of the end user for whom the API call is being made. Lets you enforce per-user quotas when calling the API from a server-side application. | Yes |
    | ifMatch | Etag value to use for returning a page of list values if the values have not changed. | Yes |
    | ifNoneMatch | Etag value to use for returning a page of list values if the values have changed. | Yes |
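    For reference, the init properties can also be populated with literal values, as in the following sample request. Every token, key, and ID shown below is a placeholder rather than a working credential; substitute the values generated for your own Google Cloud project:

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "accessToken": "ya29.xxxxxxxxxxxxxxxxxxxx",
        "clientId": "xxxxxxxxxxxx.apps.googleusercontent.com",
        "clientSecret": "xxxxxxxxxxxxxxxx",
        "refreshToken": "1/xxxxxxxxxxxxxxxxxxxx",
        "registryPath": "connectors/bq",
        "fields": "id",
        "callback": "callBackFunction",
        "apiKey": "xxxxxxxxxxxxxxxx",
        "prettyPrint": "true",
        "quotaUser": "xxxxxxxxxxxxxxxx",
        "userIp": "192.0.2.10"
    }
    ```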
    Alternatively, you can use the getAccessTokenFromServiceAccount operation to obtain the access token before performing the other operations.

    ```xml
    <bigquery.getAccessTokenFromServiceAccount>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
        <keyStoreLocation>{$ctx:keyStoreLocation}</keyStoreLocation>
        <serviceAccount>{$ctx:serviceAccount}</serviceAccount>
        <scope>{$ctx:scope}</scope>
        <accessTokenRegistryPath>{$ctx:accessTokenRegistryPath}</accessTokenRegistryPath>
    </bigquery.getAccessTokenFromServiceAccount>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiUrl | The base endpoint URL of the BigQuery API. | Yes |
    | keyStoreLocation | The location where the p12 key file is located. | Yes |
    | serviceAccount | The value of the service account. | Yes |
    | scope | The space delimited scope to access the API. | Yes |
    | accessTokenRegistryPath | The registry path to store the access token. | No (optional) |
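    A sample request for this operation is given below. The key file path and service account address are illustrative placeholders; replace them with the values from your own Google Cloud project:

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "keyStoreLocation": "/home/user/keys/bigquery-service-account.p12",
        "serviceAccount": "my-service-account@my-project.iam.gserviceaccount.com",
        "scope": "https://www.googleapis.com/auth/bigquery",
        "accessTokenRegistryPath": "connectors/bq/accessToken"
    }
    ```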
    You can also use the getAccessTokenFromAuthorizationCode operation to obtain the access token before performing the other operations.

    ```xml
    <bigquery.getAccessTokenFromAuthorizationCode>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
        <authorizationCode>{$ctx:authorizationCode}</authorizationCode>
        <redirectUrl>{$ctx:redirectUrl}</redirectUrl>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <clientId>{$ctx:clientId}</clientId>
        <registryPath>{$ctx:registryPath}</registryPath>
    </bigquery.getAccessTokenFromAuthorizationCode>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiUrl | The base endpoint URL of the BigQuery API. | Yes |
    | authorizationCode | The authorization code to be used for obtaining the access token. | Yes |
    | redirectUrl | The redirect URL to be used in the OAuth 2.0 authorization flow. | Yes |
    | clientSecret | The client secret for the BigQuery API. | Yes |
    | clientId | The client ID for the BigQuery API. | Yes |
    | registryPath | The registry path to store the access token. | No (optional) |
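    A sample request for this operation is given below. The authorization code shown is an illustrative placeholder for the code returned by the OAuth 2.0 consent flow, and the client credentials are the sample values used elsewhere in this guide:

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "authorizationCode": "4/xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "redirectUrl": "https://www.google.com",
        "clientId": "504627865627-kdni8r2s10sjcgd4v6stthdaqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "registryPath": "connectors/bq"
    }
    ```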
    You can also use the getAccessTokenFromRefreshToken operation to obtain the access token before performing the other operations.

    ```xml
    <bigquery.getAccessTokenFromRefreshToken>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <clientId>{$ctx:clientId}</clientId>
        <refreshToken>{$ctx:refreshToken}</refreshToken>
    </bigquery.getAccessTokenFromRefreshToken>
    ```
    | Parameter Name | Description | Required |
    |---|---|---|
    | apiUrl | The base endpoint URL of the BigQuery API. | Yes |
    | clientSecret | The client secret for the BigQuery API. | Yes |
    | clientId | The client ID for the BigQuery API. | Yes |
    | refreshToken | The refresh token for the BigQuery API. | Yes |
    **Sample request**

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjcgd4v6stthdaqb4bvnba.apps.googleusercontent.com",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM"
    }
    ```

---

### Datasets

??? note "getDataset"
    The getDataset operation retrieves a dataset. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/datasets/get).
    | Parameter Name | Description | Required |
    |---|---|---|
    | projectId | The ID of the project to which the dataset belongs. | Yes |
    | datasetId | The ID of the dataset. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.getDataset>
        <projectId>{$ctx:projectId}</projectId>
        <datasetId>{$ctx:datasetId}</datasetId>
    </bigquery.getDataset>
    ```

    **Sample request**

    ```json
    {
        "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw",
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "projectId": "publicdata",
        "datasetId": "samples",
        "fields": "id",
        "callback": "callBackFunction",
        "apiKey": "154987fd5h4x6gh4",
        "prettyPrint": "true",
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#dataset",
        "etag": "1xuEK5ngZZ+fj0iioOa6Og==",
        "id": "testbig-235116:testData",
        "selfLink": "https://content.googleapis.com/bigquery/v2/projects/testbig-235116/datasets/testData",
        "datasetReference": {
            "datasetId": "testData",
            "projectId": "testbig-235116"
        },
        "defaultTableExpirationMs": "5184000000",
        "access": [
            {
                "role": "WRITER",
                "specialGroup": "projectWriters"
            },
            {
                "role": "OWNER",
                "specialGroup": "projectOwners"
            },
            {
                "role": "OWNER",
                "userByEmail": "iamkesan@gmail.com"
            },
            {
                "role": "READER",
                "specialGroup": "projectReaders"
            }
        ],
        "creationTime": "1553104741840",
        "lastModifiedTime": "1553104741840",
        "location": "US",
        "defaultPartitionExpirationMs": "5184000000"
    }
    ```

??? note "listDatasets"
    The listDatasets operation lists the datasets in the specified project. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/datasets/list).
    | Parameter Name | Description | Required |
    |---|---|---|
    | projectId | The ID of the project to which the dataset belongs. | Yes |
    | maxResults | The maximum number of results per page. | Yes |
    | pageToken | The page token value. | Yes |
    | isAll | A boolean value that determines whether to list all datasets, including hidden ones. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.listDatasets>
        <projectId>{$ctx:projectId}</projectId>
        <maxResults>{$ctx:maxResults}</maxResults>
        <pageToken>{$ctx:pageToken}</pageToken>
        <isAll>{$ctx:isAll}</isAll>
    </bigquery.listDatasets>
    ```

    **Sample request**

    ```json
    {
        "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw",
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "projectId": "publicdata",
        "maxResults": "1",
        "pageToken": "1",
        "isAll": "true",
        "fields": "datasets/datasetReference",
        "callback": "callBackFunction",
        "apiKey": "154987fd5h4x6gh4",
        "prettyPrint": "true",
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#datasetList",
        "etag": "5xsXo/uZ5RUfG49EzOV9Gg==",
        "datasets": [
            {
                "kind": "bigquery#dataset",
                "id": "testbig-235116:testData",
                "datasetReference": {
                    "datasetId": "testData",
                    "projectId": "testbig-235116"
                },
                "location": "US"
            }
        ]
    }
    ```

### Jobs

??? note "runQuery"
    The runQuery operation runs an SQL query (BigQuery) and returns results if the query completes within a specified timeout. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/jobs/query).
    | Parameter Name | Description | Required |
    |---|---|---|
    | useQueryCache | Specifies whether to look for the result in the query cache. The default value is true. | Yes |
    | timeoutMs | Specifies how long (in milliseconds) the system should wait for the query to complete before expiring and returning the request. | Yes |
    | query | A query string (required) that complies with the BigQuery query syntax. | Yes |
    | dryRun | If set to true, BigQuery does not run the job. Instead, if the query is valid, BigQuery returns statistics about the job. If the query is invalid, an error returns. The default value is false. | Yes |
    | defaultProjectId | The ID of the project that contains this dataset. | Yes |
    | defaultDatasetId | A unique ID (required) for this dataset without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters. | Yes |
    | projectId | The ID of the project that is billed for the query. | Yes |
    | maxResults | The maximum number of rows of data (results) to return per page. Responses are also limited to 10 MB. By default, there is no maximum row count and only the byte limit applies. | Yes |
    | kind | The resource type of the request. | Yes |
    | useLegacySql | Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery's standard SQL. For information on BigQuery's standard SQL, see https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.runQuery>
        <useQueryCache>{$ctx:useQueryCache}</useQueryCache>
        <timeoutMs>{$ctx:timeoutMs}</timeoutMs>
        <query>{$ctx:query}</query>
        <dryRun>{$ctx:dryRun}</dryRun>
        <defaultProjectId>{$ctx:defaultProjectId}</defaultProjectId>
        <defaultDatasetId>{$ctx:defaultDatasetId}</defaultDatasetId>
        <projectId>{$ctx:projectId}</projectId>
        <maxResults>{$ctx:maxResults}</maxResults>
        <kind>{$ctx:kind}</kind>
        <useLegacySql>{$ctx:useLegacySql}</useLegacySql>
    </bigquery.runQuery>
    ```

    **Sample request**

    ```json
    {
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "accessToken": "ya29.6QFjdRjTZyXmIjxkO6G6dJoLrch1Ktt1IzFm",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "prettyPrint": "true",
        "callback": "callBackFunction",
        "apiUrl": "https://www.googleapis.com",
        "fields": "id,etag",
        "useQueryCache": "true",
        "timeoutMs": "10000",
        "query": "SELECT count(*) FROM [publicdata:samples.github_nested]",
        "dryRun": "false",
        "defaultProjectId": "bigqueryproject-1092",
        "defaultDatasetId": "test_100",
        "projectId": "bigqueryproject-1092",
        "maxResults": "10000",
        "kind": "bigquery#queryRequest",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#queryResponse",
        "schema": {
            "fields": [
                {
                    "name": "Name",
                    "type": "STRING",
                    "mode": "NULLABLE"
                },
                {
                    "name": "Age",
                    "type": "INTEGER",
                    "mode": "NULLABLE"
                }
            ]
        },
        "jobReference": {
            "projectId": "testbig-235116",
            "jobId": "job_GECobzPaLdbBW-SqIG-WrfOzaqtQ",
            "location": "US"
        },
        "totalRows": "2",
        "rows": [
            {
                "f": [
                    {
                        "v": "John"
                    },
                    {
                        "v": "45"
                    }
                ]
            },
            {
                "f": [
                    {
                        "v": "Harry"
                    },
                    {
                        "v": "25"
                    }
                ]
            }
        ],
        "totalBytesProcessed": "670",
        "jobComplete": true,
        "cacheHit": false
    }
    ```

### Projects

??? note "listProjects"
    The listProjects operation retrieves all projects. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/projects/list).
    | Parameter Name | Description | Required |
    |---|---|---|
    | maxResults | The maximum number of results per page. | Yes |
    | pageToken | The page token value. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.listProjects>
        <maxResults>{$ctx:maxResults}</maxResults>
        <pageToken>{$ctx:pageToken}</pageToken>
    </bigquery.listProjects>
    ```

    **Sample request**

    ```json
    {
        "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw",
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "maxResults": "1",
        "pageToken": "1",
        "fields": "id",
        "callback": "callBackFunction",
        "apiKey": "154987fd5h4x6gh4",
        "prettyPrint": "true",
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#projectList",
        "etag": "jdhx8JpxmSC6iJhWFNchpw==",
        "projects": [
            {
                "kind": "bigquery#project",
                "id": "ascendant-lore-235117",
                "numericId": "719690246975",
                "projectReference": {
                    "projectId": "ascendant-lore-235117"
                },
                "friendlyName": "My First Project"
            },
            {
                "kind": "bigquery#project",
                "id": "true-kite-235118",
                "numericId": "911077124704",
                "projectReference": {
                    "projectId": "true-kite-235118"
                },
                "friendlyName": "My First Project"
            }
        ],
        "totalItems": 2
    }
    ```

### Table Data

??? note "listTabledata"
    The listTabledata operation retrieves table data from a specified set of rows. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/tabledata/list).
    | Parameter Name | Description | Required |
    |---|---|---|
    | datasetId | The dataset ID of the requested table. | Yes |
    | projectId | The ID of the project to which the dataset belongs. | Yes |
    | tableId | The ID of the table. | Yes |
    | maxResults | The maximum results per page. | Yes |
    | pageToken | The page token value. | Yes |
    | startIndex | Zero-based index of the starting row. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.listTabledata>
        <datasetId>{$ctx:datasetId}</datasetId>
        <projectId>{$ctx:projectId}</projectId>
        <tableId>{$ctx:tableId}</tableId>
        <maxResults>{$ctx:maxResults}</maxResults>
        <pageToken>{$ctx:pageToken}</pageToken>
        <startIndex>{$ctx:startIndex}</startIndex>
    </bigquery.listTabledata>
    ```

    **Sample request**

    ```json
    {
        "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw",
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "projectId": "publicdata",
        "datasetId": "samples",
        "tableId": "github_nested",
        "maxResults": "1",
        "pageToken": "1",
        "startIndex": "1",
        "fields": "id",
        "callback": "callBackFunction",
        "apiKey": "154987fd5h4x6gh4",
        "prettyPrint": "true",
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#tableDataList",
        "etag": "RRRjVfSIc2CcCrEaLPH6Dg==",
        "totalRows": "2",
        "rows": [
            {
                "f": [
                    {
                        "v": "John"
                    },
                    {
                        "v": null
                    }
                ]
            },
            {
                "f": [
                    {
                        "v": "Harry"
                    },
                    {
                        "v": "90"
                    }
                ]
            }
        ]
    }
    ```

??? note "insertAllTableData"
    The insertAllTableData operation inserts (streams) rows of data into the specified table. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/tabledata/insertAll).
    | Parameter Name | Description | Required |
    |---|---|---|
    | datasetId | The dataset ID of the requested table. | Yes |
    | projectId | The ID of the project to which the dataset belongs. | Yes |
    | tableId | The ID of the table. | Yes |
    | skipInvalidRows | A boolean value to check whether the row should be validated. | Yes |
    | ignoreUnknownValues | A boolean value to validate whether the values match the table schema. | Yes |
    | templateSuffix | If specified, the destination table is treated as a base template, and the rows are inserted into an instance table named after the suffix. | Yes |
    | jsonPay | A JSON object that contains a row of data. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.insertAllTableData>
        <datasetId>{$ctx:datasetId}</datasetId>
        <projectId>{$ctx:projectId}</projectId>
        <tableId>{$ctx:tableId}</tableId>
        <skipInvalidRows>{$ctx:skipInvalidRows}</skipInvalidRows>
        <ignoreUnknownValues>{$ctx:ignoreUnknownValues}</ignoreUnknownValues>
        <templateSuffix>{$ctx:templateSuffix}</templateSuffix>
        <jsonPay>{$ctx:jsonPay}</jsonPay>
    </bigquery.insertAllTableData>
    ```

    **Sample request**

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "keyStoreLocation": "/home/hariprasath/Desktop/bigQuery/p12/Non Production-232c0d8ac8f2.p12",
        "serviceAccount": "service-account.gserviceaccount.com",
        "scope": "https://www.googleapis.com/auth/bigquery",
        "datasetId": "zSta",
        "tableId": "ECOMM",
        "projectId": "dataservices",
        "kind": "bigquery#tableDataInsertAllRequest",
        "skipInvalidRows": true,
        "ignoreUnknownValues": true,
        "templateSuffix": "_20160315",
        "jsonPay": {
            "insertId": "xxxxx",
            "json": {
                "SOURCE_ID": "2",
                "DESTINATION_ID": "13",
                "SIGNAL_TYPE_ID": "13",
                "DATA": "hariprasath",
                "TRANSACTION_TIMESTAMP": "2014-03-01T22:12:22.000Z",
                "BQ_INSERT_TIMESTAMP": "2016-02-26 20:12:01"
            }
        }
    }
    ```

    Following is a sample request that inserts multiple records.

    ```json
    {
        "apiUrl": "https://www.googleapis.com",
        "keyStoreLocation": "/home/hariprasath/Desktop/bigQuery/p12/Non Production-232c0d8ac8f2.p12",
        "serviceAccount": "service-account.gserviceaccount.com",
        "scope": "https://www.googleapis.com/auth/bigquery",
        "datasetId": "zSta",
        "tableId": "Sample",
        "projectId": "dataservices",
        "kind": "bigquery#tableDataInsertAllRequest",
        "skipInvalidRows": true,
        "ignoreUnknownValues": true,
        "templateSuffix": "_20160315",
        "jsonPay": [
            {
                "insertId": "1014",
                "json": {
                    "Name": "John",
                    "Age": 25
                }
            },
            {
                "insertId": "1015",
                "json": {
                    "Name": "Vasan",
                    "Age": 45
                }
            }
        ]
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#tableDataInsertAllResponse"
    }
    ```

### Tables

??? note "getTable"
    The getTable operation retrieves a table by ID. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/tables/get).
    | Parameter Name | Description | Required |
    |---|---|---|
    | tableId | The ID of the table. | Yes |
    | datasetId | The dataset ID of the requested table. | Yes |
    | projectId | The project ID of the requested table. | Yes |
    **Sample configurations**

    ```xml
    <bigquery.getTable>
        <tableId>{$ctx:tableId}</tableId>
        <datasetId>{$ctx:datasetId}</datasetId>
        <projectId>{$ctx:projectId}</projectId>
    </bigquery.getTable>
    ```

    **Sample request**

    ```json
    {
        "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw",
        "apiUrl": "https://www.googleapis.com",
        "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com",
        "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM",
        "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA",
        "registryPath": "connectors/bq",
        "projectId": "publicdata",
        "datasetId": "samples",
        "tableId": "github_nested",
        "maxResults": "1",
        "pageToken": "1",
        "startIndex": "1",
        "fields": "id",
        "callback": "callBackFunction",
        "apiKey": "154987fd5h4x6gh4",
        "prettyPrint": "true",
        "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx",
        "userIp": "192.77.88.12",
        "ifNoneMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8",
        "ifMatch": "hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8"
    }
    ```

    **Sample response**

    ```json
    {
        "kind": "bigquery#tableList",
        "etag": "ASMRI9cY0t0ilhpaFI4OMA==",
        "tables": [
            {
                "kind": "bigquery#table",
                "id": "testbig-235116:testData.github_nested_copy",
                "tableReference": {
                    "projectId": "testbig-235116",
                    "datasetId": "testData",
                    "tableId": "github_nested_copy"
                },
                "type": "TABLE",
                "creationTime": "1553104818977",
                "expirationTime": "1558288818977"
            },
            {
                "kind": "bigquery#table",
                "id": "testbig-235116:testData.sample_20190322",
                "tableReference": {
                    "projectId": "testbig-235116",
                    "datasetId": "testData",
                    "tableId": "sample_20190322"
                },
                "type": "TABLE",
                "creationTime": "1553239767833",
                "expirationTime": "1558423767833"
            }
        ],
        "totalItems": 2
    }
    ```

??? note "listTables"
    The listTables operation retrieves all available tables in the specified dataset. For more information, see related [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/v2/tables/list).
    | Parameter Name | Description | Required |
    |---|---|---|
    | datasetId | The dataset ID of the tables that should be listed. | Yes |
    | pageToken | The page token (which is returned by a previous call) for requesting the next page of results. | Yes |
    | projectId | The project ID of the tables that should be listed. | Yes |
    | maxResults | The maximum number of results to return. | Yes |
    - - **Sample configurations** - - ```xml - - {$ctx:datasetId} - {$ctx:pageToken} - {$ctx:projectId} - {$ctx:maxResults} - - ``` - - **Sample request** - - ```json - { - "accessToken": "ya29.BwKYx40Dith1DFQBDjZOHNqhcxmKs9zbkjAWQa1q8mdMFndp2-q8ifG66fwprOigRwKSNw", - "apiUrl": "https://www.googleapis.com", - "clientId": "504627865627-kdni8r2s10sjddfgXzqb4bvnba.apps.googleusercontent.com", - "clientSecret": "ChlbHI_T7zssXXTRYuqj_-TM", - "refreshToken": "1/uWful-diQNAdk-alDUa6ixxxxxxxx-LpJIikEQ2sqA", - "registryPath": "connectors/bq", - "projectId": "publicdata", - "datasetId": "samples", - "tableId": "github_nested", - "maxResults": "1", - "pageToken": "1", - "startIndex": "1", - "fields": "id", - "callback": "callBackFunction", - "apiKey": "154987fd5h4x6gh4", - "prettyPrint": "true", - "quotaUser": "1hx46f5g4h5ghx6h41x54gh6f4hx", - "userIp": "192.77.88.12", - "ifNoneMatch":"hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8", - "ifMatch":"hnk59tKBkX8cdlePZ8VtzgVzuO4/tS1oqpXxnkU21hZeK5k4lqRrRr8" - } - ``` - - **Sample response** - - ```json - { - "kind": "bigquery#tableList", - "etag": "ASMRI9cY0t0ilhpaFI4OMA==", - "tables": [ - { - "kind": "bigquery#table", - "id": "testbig-235116:testData.github_nested_copy", - "tableReference": { - "projectId": "testbig-235116", - "datasetId": "testData", - "tableId": "github_nested_copy" - }, - "type": "TABLE", - "creationTime": "1553104818977", - "expirationTime": "1558288818977" - }, - { - "kind": "bigquery#table", - "id": "testbig-235116:testData.sample_20190322", - "tableReference": { - "projectId": "testbig-235116", - "datasetId": "testData", - "tableId": "sample_20190322" - }, - "type": "TABLE", - "creationTime": "1553239767833", - "expirationTime": "1558423767833" - } - ], - "totalItems": 2 - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/cerediandayforce-overview.md b/en/docs/reference/connectors/ceridiandayforce-connector/cerediandayforce-overview.md deleted file mode 100644 index 08a41bec62..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/cerediandayforce-overview.md +++ /dev/null @@ -1,35 +0,0 @@ -# Ceridian Dayforce Connector Overview - -Dayforce is a comprehensive human capital management system that covers the entire employee lifecycle including HR, payroll, benefits, talent management, workforce management, and services. The entire system resides on cloud that takes the burden of managing and replicating data on-premise. - -The Ceridian Dayforce connector allows you to access the REST API of Ceridian Dayforce HCM. - -To see the Ceridian Dayforce Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "dayforce". - -Ceridian Dayforce Connector Store - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 1.0.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x EI 6.6.0 EI 6.5.0 | - -For older versions, see the details in the connector store. - -## Ceridian Dayforce Connector documentation - -* **[Setting up the Ceridian Dayforce Environment]({{base_path}}/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config/)**: You need to have a Ceridian Dayforce developer account and obtain test user credentials to try this out. 
- -* **[Ceridian Dayforce Connector Example]({{base_path}}/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-example/)**: This example depicts how to use Dayforce connector to send a GET request to retrieve address of employees and send a POST request to create contacts of an employee. - -* **[Ceridian Dayforce Connector Reference]({{base_path}}/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference/)**: This documentation provides a reference guide for the Ceridian Dayforce Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Ceridian Dayforce Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-dayforce) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config.md b/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config.md deleted file mode 100644 index 8d4b4e0a33..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config.md +++ /dev/null @@ -1,19 +0,0 @@ -# Setting up the Ceridian Dayforce Environment - -The Dayforce Connector allows you to access the REST API of [Ceridian Dayforce HCM](https://www.ceridian.com/products/dayforce), which lets you store your human capital information and retrieve them back when needed. - -To use the Dayforce cloud service, you must have a [Dayforce HCM](https://www.dayforcehcm.com) account. To test the REST API of Dayforce we will use a Dayforce developer account, which is free and lets us access a developer Dayforce instance. - -## Signing Up for Dayforce Developer Account - -* **To sign up for Dayforce Developer Account:** - - 1. Navigate to [Ceridian Dayforce Developer Network](https://developers.dayforce.com) and select **Register**. - 2. Follow the online instructions. - -If your company has already purchased a namespace in Dayforce, use that to sign up. Otherwise, you can still use their -sample environment by selecting sample option. - -## Obtaining Test User Credentials - -Navigate to **API Explorer > Employee > GET Employees**. There you can see the basic authentication credentials, username and password for the sample environment. Click on the GET method to expand it. Execute the method to view the request and response corresponding to the GET Employees method. In the request you can see the sample environment URI. diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-example.md b/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-example.md deleted file mode 100644 index 2466559d76..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-example.md +++ /dev/null @@ -1,235 +0,0 @@ -# Ceridian Dayforce Connector Example - -The Ceridian Dayforce connector allows you to access the REST API of Ceridian Dayforce HCM. Dayforce is a -comprehensive human capital management system that covers the entire employee lifecycle including HR, payroll, -benefits, talent management, workforce management, and services. The entire system resides on cloud that takes the -burden of managing and replicating data on-premise. 
- -## What you'll build - -This example depicts how to use Dayforce connector to: - -1. Send GET request to retrieve address of employees from the sample environment defaults -2. Send a POST request to create contacts of an employee. (Note that the POST and PATCH requests will not update the -sample environment database as it is shared among all developers. However, we will get a response with HTTP code 200) - -Both of the two operations are exposed via an API. The API with the context `/dayforceconnector` has three resources - -* `/getEmployeeAddress` - Once invoked, it will retrieve the address information of a specified employee -* `/postEmployeeContact` - This will create the contact information of an employee when invoked. The relevant -parameters must be passed in the body as we will see below. - -## Setting up the environment - -Please follow the steps mentioned at [Setting up Ceridian Dayforce Environment]({{base_path}}/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-config/) document in order to create a Ceridian Dayforce developer account and obtain credentials you need to access the -Dayforce sample APIs. Keep them saved to be used in the next steps. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and import Dayforce connector into it. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -1. Right click on the created ESB Solution Project and select, -> **New** -> **Rest API** to create the REST API. - Adding a Rest API - -2. Specify the API name as `DayforceConnectorTestAPI` and API context as `/dayforceconnector`. You can go to the -source view of the XML configuration file of the API and copy the following configuration. - -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - - - - - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -Now we can export the imported connector and the API into a single CAR application. CAR application is the one we are -going to deploy to server runtime. - -{!includes/reference/connectors/exporting-artifacts.md!} - -Now the exported CApp can be deployed in the integration runtime so that we can run it and test. - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - - - Download ZIP - - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -We can use Curl or Postman to try the API. The testing steps are provided for curl. Steps for Postman should be -straightforward and can be derived from the curl requests. - -### GET the address information of an employee in Dayforce - -* Invoke the API as shown below using the curl command. Curl Application can be downloaded from -[here] (https://curl.haxx.se/download.html). 
-
-```
-curl --location --request POST 'http://192.168.8.100:8290/dayforceconnector/getEmployeeAddress' \
---header 'Content-Type: application/json' \
---data '{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr58.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199"
-}'
-```
-
-**Note**
-* You may have to change the `http://192.168.8.100:8290` part depending on the IP address on which your integration server instance is running.
-* You may have to change the `clientNamespace` in the request body because the Dayforce developer instance gets moved around by Ceridian. The address can be obtained as mentioned in the section Setting up the environment.
-
-**Expected Response**:
-
-You should receive a 200 OK response with the response body as follows,
-
-```json
-{
-    "Data": [
-        {
-            "Address1": "4114 Yonge St.",
-            "City": "North York",
-            "PostalCode": "M2P 2B7",
-            "Country": {
-                "Name": "Canada",
-                "XRefCode": "CAN",
-                "ShortName": "Canada",
-                "LongName": "Canada"
-            },
-            "State": {
-                "Name": "Ontario",
-                "XRefCode": "ON",
-                "ShortName": "Ontario",
-                "LongName": "Ontario"
-            },
-            "EffectiveStart": "2017-01-15T00:00:00",
-            "ContactInformationType": {
-                "ContactInformationTypeGroup": {
-                    "XRefCode": "Address",
-                    "ShortName": "Address",
-                    "LongName": "Address"
-                },
-                "XRefCode": "PrimaryResidence",
-                "ShortName": "Primary Residence",
-                "LongName": "Primary Residence"
-            },
-            "IsPayrollMailing": false
-        }
-    ]
-}
-```
-
-### POST the contact information of an employee in Dayforce
-
-* Invoke the API as shown below using the curl command. curl can be downloaded from [here](https://curl.haxx.se/download.html).
-
-```
-curl --location --request POST 'http://192.168.8.100:8290/dayforceconnector/postEmployeeContact' \
---header 'Content-Type: application/json' \
---data '{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr58.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199",
-    "isValidateOnly": "FALSE",
-    "contextDateRangeFrom": "2017-01-01T13:24:56",
-    "fieldAndValue": {
-        "ContactNumber": "202 265 8987",
-        "Country": {
-            "Name": "United States of America",
-            "XRefCode": "USA",
-            "ShortName": "United States of America",
-            "LongName": "United States of America"
-        },
-        "EffectiveStart": "2000-01-01T00:00:00",
-        "ContactInformationType": {
-            "ContactInformationTypeGroup": {
-                "XRefCode": "Phone",
-                "ShortName": "Phone",
-                "LongName": "Phone"
-            },
-            "XRefCode": "HomePhone",
-            "ShortName": "Home",
-            "LongName": "Home"
-        },
-        "IsForSystemCommunications": false,
-        "IsPreferredContactMethod": false,
-        "IsUnlistedNumber": false,
-        "IsVerified": false,
-        "IsRejected": false,
-        "ShowRejectedWarning": true,
-        "NumberOfVerificationRequests": 0
-    }
-}'
-```
-
-**Expected Response**:
-* You should get a 200 OK response. Bear in mind that this POST will not update the database in the sample environment. However, if you use this in a test or production environment, changes will be made to the database.
-
-In this example, the Ceridian Dayforce connector is used to perform operations with Dayforce HCM. Please read the [Ceridian Dayforce connector reference guide]({{base_path}}/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference/) to learn more about the operations you can perform with the Dayforce connector.
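-
-If you want a repeatable smoke test, the two invocations above can be combined into a small script. This is a sketch under stated assumptions: it presumes the runtime listens on `localhost:8290` (replace this with your own host) and reuses the shared sample credentials, `clientNamespace`, and XRefCode from the requests above. The validate-only flag is set to TRUE here so the shared sample database stays untouched.
-
-```bash
-#!/usr/bin/env bash
-# Smoke-test both resources of DayforceConnectorTestAPI.
-# HOST is an assumption; point it at your own Micro Integrator instance.
-HOST="http://localhost:8290"
-
-# 1. Retrieve the address of employee 42199.
-curl -s --location --request POST "$HOST/dayforceconnector/getEmployeeAddress" \
-  --header 'Content-Type: application/json' \
-  --data '{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr58.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199"
-  }'
-
-# 2. Validate-only contact creation for the same employee
-#    (isValidateOnly=TRUE, so no data is written).
-curl -s --location --request POST "$HOST/dayforceconnector/postEmployeeContact" \
-  --header 'Content-Type: application/json' \
-  --data '{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr58.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199",
-    "isValidateOnly": "TRUE",
-    "contextDateRangeFrom": "2017-01-01T13:24:56",
-    "fieldAndValue": {
-      "ContactNumber": "202 265 8987",
-      "EffectiveStart": "2000-01-01T00:00:00",
-      "ContactInformationType": {
-        "XRefCode": "HomePhone",
-        "ShortName": "Home",
-        "LongName": "Home"
-      }
-    }
-  }'
-```
-
-Both calls should return HTTP 200; the trimmed `fieldAndValue` payload in the second call is illustrative, and you can substitute the full body shown earlier.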
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference.md b/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference.md
deleted file mode 100644
index 55b721a01d..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/ceridiandayforce-connector-reference.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Configuring Ceridian Dayforce REST Operations
-
-[[Prerequisites]](#Prerequisites) [[Initializing the connector]](#initializing-the-connector)
-
-## Prerequisites
-
-> NOTE: For development purposes, we can use the test credentials provided by Dayforce. However, to understand the Dayforce API and the requests and responses handled by Dayforce, it is recommended that you create a developer account. If you do not have a Dayforce account, go to [https://developers.dayforce.com/Special-Pages/Registration.aspx](https://developers.dayforce.com/Special-Pages/Registration.aspx) and create a Dayforce developer account.
-
-To use the Dayforce REST connector, add the `ceridiandayforce.init` element to your configuration before carrying out any other Dayforce REST operation.
-
-## Initializing the connector
-Add the following init configuration:
-
-#### init
-```xml
-<ceridiandayforce.init>
-    <username>{$ctx:username}</username>
-    <password>{$ctx:ceredianPwd}</password>
-    <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-    <apiVersion>{$ctx:apiVersion}</apiVersion>
-</ceridiandayforce.init>
-```
-
-**Properties**
-* username: The username of your Dayforce environment. For testing, we can use the sample environment credential: DFWSTest
-* password: The password of your Dayforce environment. For testing, we can use the sample environment credential: DFWSTest
-* clientNamespace: The namespace of your Dayforce environment. For testing, we can use the sample environment: usconfigr57.dayforcehcm.com/Api/ddn
-* apiVersion: The version of the API you want to call. For testing, we will set it to: V1
-
-Now that you have connected to Dayforce, use the information in the following topics to perform various operations with the connector.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md b/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md
deleted file mode 100644
index b71ea99f9c..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md
+++ /dev/null
@@ -1,290 +0,0 @@
-# Working with Org Unit Details
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve the details of a specific org unit.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Org Unit Details](#retrieving-org-unit-details)| Retrieve details of a specific Org Unit using its XRefCode, including its relationship to a Parent Org Unit and Legal Entity. The list of Org Unit XRefCodes can be retrieved using GET Org Units. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Org Unit Details
-We can use the GET Org Unit Details operation with the required parameters to find the details of a selected org unit.
-
-**GET Org Unit Details**
-```xml
-<ceridiandayforce.getOrgUnitDetails>
-    <xRefCode>{$ctx:xRefCode}</xRefCode>
-    <contextDate>{$ctx:contextDate}</contextDate>
-    <expand>{$ctx:expand}</expand>
-    <includeChildOrgUnits>{$ctx:includeChildOrgUnits}</includeChildOrgUnits>
-</ceridiandayforce.getOrgUnitDetails>
-```
-
-**Properties**
-
-* xRefCode (Mandatory - string): The unique identifier (external reference code) of the org unit. The value provided must be an exact match for an org unit; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional - string): The Context Date value is an “as-of” date used to determine which org unit data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2019-01-01T12:34:56
-* expand (Optional - string): This parameter accepts a comma-separated list of top-level entities that contain the data elements needed for downstream processing. When this parameter is not used, only data elements from the org unit primary record will be included. For more information, please refer to the Introduction to Dayforce Web Services document.
-* includeChildOrgUnits (Optional - boolean): When a TRUE value is used in this parameter, the information of the immediate child org units under the org unit being retrieved is returned as well. The default value is FALSE if the parameter is not specified.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "Store320",
-    "includeChildOrgUnits": "true"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-    "Data": {
-        "OrgLevel": {
-            "XRefCode": "Site",
-            "ShortName": "Site",
-            "LongName": "Site"
-        },
-        "PhysicalLocation": true,
-        "PostalCode": "63103",
-        "CountryCode": "USA",
-        "OpeningDate": "2012-01-01T00:00:00",
-        "GeoCity": {
-            "ShortName": "St. Louis"
-        },
-        "County": "St. Louis",
-        "IsOrgManaged": true,
-        "IsMobileOrg": false,
-        "LedgerCode": "",
-        "StateCode": "MO",
-        "Address": "1401 Clark Ave.",
-        "ChildOrgUnits": {
-            "Items": [
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 32014",
-                    "ShortName": "Store 320 - Customer Service"
-                },
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 32025",
-                    "ShortName": "Store 320 - Meat"
-                },
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 32026",
-                    "ShortName": "Store 320 - Produce"
-                },
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 32027",
-                    "ShortName": "Store 320 - Seafood"
-                },
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 32028",
-                    "ShortName": "Store 320 - Stocking"
-                },
-                {
-                    "OrgLevel": {
-                        "XRefCode": "OnSiteDepartment",
-                        "ShortName": "Department",
-                        "LongName": "Department"
-                    },
-                    "XRefCode": "Store 320 Mgmt",
-                    "ShortName": "Store 320 - Management"
-                }
-            ]
-        },
-        "XRefCode": "Store320",
-        "ShortName": "Store 320"
-    }
-}
-```
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/PATCH-Org-Units.aspx](https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/PATCH-Org-Units.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init operation and the query operation.
-
-1.
Create a sample proxy as shown below: - ```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:includeChildOrgUnits} - - - - - - - ``` - -2. Create a json file named query.json and copy the configurations given below to it: - - ```json - { - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "Store320", - "includeChildOrgUnits": "true" - } - ``` - -3. Replace the credentials with your values. - -4. Execute the following curl command: - - ```bash - curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json - ``` -5. Dayforce returns HTTP Code 200 with the following response body. - - ```json - { - "Data": { - "OrgLevel": { - "XRefCode": "Site", - "ShortName": "Site", - "LongName": "Site" - }, - "PhysicalLocation": true, - "PostalCode": "63103", - "CountryCode": "USA", - "OpeningDate": "2012-01-01T00:00:00", - "GeoCity": { - "ShortName": "St. Louis" - }, - "County": "St. Louis", - "IsOrgManaged": true, - "IsMobileOrg": false, - "LedgerCode": "", - "StateCode": "MO", - "Address": "1401 Clark Ave.", - "ChildOrgUnits": { - "Items": [ - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 32014", - "ShortName": "Store 320 - Customer Service" - }, - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 32025", - "ShortName": "Store 320 - Meat" - }, - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 32026", - "ShortName": "Store 320 - Produce" - }, - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 32027", - "ShortName": "Store 320 - Seafood" - }, - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 32028", - "ShortName": "Store 320 - Stocking" - }, - { - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "Store 320 Mgmt", - "ShortName": "Store 320 - Management" - } - ] - }, - "XRefCode": "Store320", - "ShortName": "Store 320" - } - } - ``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunits.md b/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunits.md deleted file mode 100644 index 64814cdd89..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunits.md +++ /dev/null @@ -1,784 +0,0 @@ -# Working with Org Units - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update organization units - -| Operation | Description | -| ------------- |-------------| -|[GET Org Units](#retrieving-org-units)| Retrieve a list of Org Unit XRefCodes. An XRefcode can then be used to retrieve details of an Org Unit with GET Org Unit Details. | -|[POST Org Units](#creating-org-units)| Create a new Org Unit with a unique XRefCode. It includes creating its relationship to an existing Parent Org Unit and Legal Entity that are specified in OrgUnitParents and OrgUnitLegalEntities, respectively. 
| -|[PATCH Org Units](#updating-org-units)| Update an existing Org Unit using its XRefCode, which should be specified in the request URL. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Org Units -We can use GET Org Units operation with required parameters to retrieve a list of org units - -**GET Org Units** -```xml - -``` - -**Properties** - -There are no properties. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "XRefCode": "Corporate" - }, - { - "XRefCode": "RetailCoUSA" - }, - { - "XRefCode": "RefCode_1" - }, - { - "XRefCode": "RefCode_2" - }, - { - "XRefCode": "Store118" - }, - { - "XRefCode": "RefCode_3" - }, - { - "XRefCode": "RefCode_4" - }, - { - "XRefCode": "RefCode_5" - }, - { - "XRefCode": "RefCode_6" - }, - { - "XRefCode": "RefCode_7" - }, - { - "XRefCode": "RefCode_8" - }, - { - "XRefCode": "Store113" - }, - { - "XRefCode": "RefCode_9" - }, - { - "XRefCode": "RefCode_10" - }, - { - "XRefCode": "RefCode_11" - }, - { - "XRefCode": "RefCode_12" - }, - { - "XRefCode": "RefCode_13" - }, - { - "XRefCode": "RefCode_14" - }, - { - "XRefCode": "Store110" - }, - { - "XRefCode": "4" - }, - { - "XRefCode": "1" - }, - { - "XRefCode": "7" - }, - { - "XRefCode": "2" - }, - { - "XRefCode": "3" - }, - { - "XRefCode": "5" - }, - { - "XRefCode": "6" - }, - { - "XRefCode": "Store105" - }, - { - "XRefCode": "RefCode_15" - }, - { - "XRefCode": "RefCode_16" - }, - { - "XRefCode": "RefCode_17" - }, - { - "XRefCode": "RefCode_18" - }, - { - "XRefCode": "RefCode_19" - }, - { - "XRefCode": "RefCode_20" - }, - { - "XRefCode": "Store120" - }, - { - "XRefCode": "Store 120 - Accessories" - }, - { - "XRefCode": "Store 120 - Mens" - }, - { - "XRefCode": "Store 120 - Footwear" - }, - { - "XRefCode": "Store 120 - Womens" - }, - { - "XRefCode": "Store 120 - Management" - }, - { - "XRefCode": "Store 120 - Receiving" - }, - { - "XRefCode": "Store125" - }, - { - "XRefCode": "RefCode_21" - }, - { - "XRefCode": "RefCode_22" - }, - { - "XRefCode": "RefCode_23" - }, - { - "XRefCode": "RefCode_24" - }, - { - "XRefCode": "RefCode_25" - }, - { - "XRefCode": "RefCode_26" - }, - { - "XRefCode": "Store999" - }, - { - "XRefCode": "RefCode_27" - }, - { - "XRefCode": "RefCode_28" - }, - { - "XRefCode": "RefCode_29" - }, - { - "XRefCode": "RefCode_30" - }, - { - "XRefCode": "RefCode_31" - }, - { - "XRefCode": "RefCode_32" - }, - { - "XRefCode": "RefCode_33" - }, - { - "XRefCode": "RefCode_34" - }, - { - "XRefCode": "Plant1" - }, - { - "XRefCode": "500Assembly 1" - }, - { - "XRefCode": "500Assembly 2" - }, - { - "XRefCode": "500Maintenance" - }, - { - "XRefCode": "500Management" - }, - { - "XRefCode": "500Operations" - }, - { - "XRefCode": "500Packaging" - }, - { - "XRefCode": "RefCode_35" - }, - { - "XRefCode": "Bank 1" - }, - { - "XRefCode": "Bank 1Admin" - }, - { - "XRefCode": "Bank 1Commercial Lines" - }, - { - "XRefCode": "Bank 1Customer Service" - }, - { - "XRefCode": "Bank 1Employee Benefits" - }, - { - "XRefCode": "Bank 1Personal Lines" - }, - { - "XRefCode": "RefCode_36" - }, - { - "XRefCode": "Sunny" - }, - { - "XRefCode": "SunnyAdmin" - }, - { - "XRefCode": "SunnyCommunity Care" - }, - { - "XRefCode": "SunnyHousekeeping" - }, - { - "XRefCode": 
"SunnyNursing" - }, - { - "XRefCode": "SunnyPP&E" - }, - { - "XRefCode": "SunnyReception" - }, - { - "XRefCode": "Store122" - }, - { - "XRefCode": "RefCode_37" - }, - { - "XRefCode": "Store155" - }, - { - "XRefCode": "RefCode_38" - }, - { - "XRefCode": "RefCode_39" - }, - { - "XRefCode": "Hotel 1" - }, - { - "XRefCode": "Hotel 1Concierge" - }, - { - "XRefCode": "Hotel 1Food & Beverage" - }, - { - "XRefCode": "Hotel 1Front Desk" - }, - { - "XRefCode": "Hotel 1Housekeeping" - }, - { - "XRefCode": "Hotel 1Management" - }, - { - "XRefCode": "Hotel 1Operations" - }, - { - "XRefCode": "Hotel 1PP&E" - }, - { - "XRefCode": "RefCode_40" - }, - { - "XRefCode": "Site1CAN" - }, - { - "XRefCode": "Site1CAN8" - }, - { - "XRefCode": "Site1CAN10" - }, - { - "XRefCode": "Site1CAN5" - }, - { - "XRefCode": "Site1CAN11" - }, - { - "XRefCode": "Plant 3" - }, - { - "XRefCode": "Plant 38" - }, - { - "XRefCode": "Plant 39" - }, - { - "XRefCode": "Plant 310" - }, - { - "XRefCode": "Plant 35" - }, - { - "XRefCode": "Plant 311" - }, - { - "XRefCode": "Plant 36" - }, - { - "XRefCode": "Plant4" - }, - { - "XRefCode": "Plant48" - }, - { - "XRefCode": "Plant49" - }, - { - "XRefCode": "Plant410" - }, - { - "XRefCode": "Plant45" - }, - { - "XRefCode": "Plant411" - }, - { - "XRefCode": "Plant46" - }, - { - "XRefCode": "RefCode_41" - }, - { - "XRefCode": "RefCode_42" - }, - { - "XRefCode": "Store320" - }, - { - "XRefCode": "Store 32014" - }, - { - "XRefCode": "Store 32025" - }, - { - "XRefCode": "Store 32026" - }, - { - "XRefCode": "Store 32027" - }, - { - "XRefCode": "Store 32028" - }, - { - "XRefCode": "Store 320 Mgmt" - }, - { - "XRefCode": "RefCode_43" - }, - { - "XRefCode": "Minneapolis" - }, - { - "XRefCode": "RefCode_44" - }, - { - "XRefCode": "RefCode_45" - }, - { - "XRefCode": "RefCode_46" - }, - { - "XRefCode": "Store 1001" - }, - { - "XRefCode": "Store 100122" - }, - { - "XRefCode": "RefCode_47" - }, - { - "XRefCode": "RefCode_48" - }, - { - "XRefCode": "Cloverleaf" - }, - { - "XRefCode": "Cloverleaf12" - }, - { - "XRefCode": "Cloverleaf33" - }, - { - "XRefCode": "Cloverleaf19" - }, - { - "XRefCode": "Cloverleaf30" - }, - { - "XRefCode": "Cloverleaf32" - }, - { - "XRefCode": "Cloverleaf31" - }, - { - "XRefCode": "Cranberry" - }, - { - "XRefCode": "Cranberry12" - }, - { - "XRefCode": "Cranberry33" - }, - { - "XRefCode": "Cranberry19" - }, - { - "XRefCode": "Cranberry30" - }, - { - "XRefCode": "Cranberry32" - }, - { - "XRefCode": "Cranberry31" - }, - { - "XRefCode": "Store2001" - }, - { - "XRefCode": "Store200122" - }, - { - "XRefCode": "RefCode_49" - }, - { - "XRefCode": "RefCode_50" - }, - { - "XRefCode": "RefCode_51" - }, - { - "XRefCode": "Plant 601" - }, - { - "XRefCode": "Plant 6018" - }, - { - "XRefCode": "Plant 6019" - }, - { - "XRefCode": "Plant 60110" - }, - { - "XRefCode": "Plant 6015" - }, - { - "XRefCode": "Plant 60111" - }, - { - "XRefCode": "Plant 6016" - }, - { - "XRefCode": "Plant 501" - }, - { - "XRefCode": "Plant 5018" - }, - { - "XRefCode": "Plant 50110" - }, - { - "XRefCode": "Plant 5015" - }, - { - "XRefCode": "Plant 50111" - }, - { - "XRefCode": "Head Office" - }, - { - "XRefCode": "HeadOfficeHR" - }, - { - "XRefCode": "HeadOfficeFinance" - }, - { - "XRefCode": "HeadOfficeMarketing" - }, - { - "XRefCode": "HeadOfficeSeniorLeadership" - }, - { - "XRefCode": "HeadOfficeSeniorOperations" - }, - { - "XRefCode": "RefCode_52" - }, - { - "XRefCode": "RefCode_53" - }, - { - "XRefCode": "Store 200" - }, - { - "XRefCode": "200Accessories" - }, - { - "XRefCode": "200Footwear" - }, - { - "XRefCode": 
"200Management" - }, - { - "XRefCode": "200Mens" - }, - { - "XRefCode": "200Receiving" - }, - { - "XRefCode": "200Womens" - }, - { - "XRefCode": "Financial Co. Canada" - }, - { - "XRefCode": "Central Bank" - }, - { - "XRefCode": "Central Bank Admin" - }, - { - "XRefCode": "Central Bank Customer Service" - }, - { - "XRefCode": "Central Bank IT" - }, - { - "XRefCode": "Eastern Bank" - }, - { - "XRefCode": "Eastern Bank Admin" - }, - { - "XRefCode": "Eastern Bank Customer Service" - }, - { - "XRefCode": "Eastern Bank IT" - }, - { - "XRefCode": "Western Bank" - }, - { - "XRefCode": "Western Bank Admin" - }, - { - "XRefCode": "Western Bank Customer Service" - }, - { - "XRefCode": "Western Bank IT" - }, - { - "XRefCode": "Clinic" - }, - { - "XRefCode": "ClinicMedical" - }, - { - "XRefCode": "Project" - }, - { - "XRefCode": "ITProjects" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/GET-Org-Units.aspx](https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/GET-Org-Units.aspx) - -#### Creating Org Units -We can use POST Org Units operation with required parameters to create a new org unit - -**GET Org Units** -```xml - - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "isValidateOnly": "true", - "fieldAndValue": - { - "OrgLevel": { - "XRefCode": "Site" - }, - "PhysicalLocation": 1, - "OrgUnitParents": { - "Items": [ - { - "ParentOrgUnit": { - "XRefCode": "Corporate" - }, - "EffectiveStart": "2019-06-26T00:00:00-04:00" - } - ] - }, - "XRefCode": "ShopDDN", - "ShortName": "ShopDDN" - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/POST-Org-Units.aspx](https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/POST-Org-Units.aspx) - -#### Updating Org Units -We can use PATCH Org Units operation with required parameters to update an existing org unit - -**PATCH Org Units** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the org unit. The value provided must be the exact match for an org unit; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
-
-```json
-{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "Store320",
-    "isValidateOnly": "true",
-    "fieldAndValue":
-    {
-        "XRefCode": "Store320",
-        "ShortName": "Store 320",
-        "LongName": "Store 320"
-    }
-}
-```
-
-**Sample response**
-
-Dayforce returns HTTP Code 200.
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/PATCH-Org-Unit.aspx](https://developers.dayforce.com/Build/API-Explorer/Configuration/Organization-Data/Org-Units/PATCH-Org-Unit.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init operation and the query operation.
-
-1. Create a sample proxy as shown below:
-```xml
-
-```
-
-2. Create a JSON file named query.json and copy the configurations given below to it:
-
-```json
-{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199",
-    "isValidateOnly": "true",
-    "fieldAndValue":
-    {
-        "OrgLevel": {
-            "XRefCode": "Site"
-        },
-        "PhysicalLocation": 1,
-        "OrgUnitParents": {
-            "Items": [
-                {
-                    "ParentOrgUnit": {
-                        "XRefCode": "Corporate"
-                    },
-                    "EffectiveStart": "2019-06-26T00:00:00-04:00"
-                }
-            ]
-        },
-        "XRefCode": "ShopDDN",
-        "ShortName": "ShopDDN"
-    }
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/documentdetails.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/documentdetails.md
deleted file mode 100644
index c39801fc73..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/documentdetails.md
+++ /dev/null
@@ -1,143 +0,0 @@
-# Working with Document Details
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve the details of a document.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Document Details](#retrieving-document-details)| This request allows you to retrieve the contents of a particular document. It requires the document GUID, which can be obtained with GET a List of Documents. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Document Details
-We can use the GET Document Details operation with the required parameters to retrieve the details of a document.
-
-**GET Document Details**
-```xml
-<ceridiandayforce.getDocumentDetails>
-    <documentGuid>{$ctx:documentGuid}</documentGuid>
-</ceridiandayforce.getDocumentDetails>
-```
-
-**Properties**
-
-* documentGuid (Mandatory): Uniquely identifies the document you want to retrieve. Partial search is not supported, so provide the full value. Otherwise, a 400 error will be returned.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "documentGuid": "696afd0c-5890-4316-9b7e-7ac990189018"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
- -```json -{ - "Data": { - "DocumentType": "Jpg", - "DocumentGroup": "DocumentManagement", - "SourceReportUniqueId": "3ca57484-9053-45b6-ab65-1562636c4713", - "PublishDateTime": "2015-04-15T14:39:11.35", - "Title": "Aaron Glover Birth Certificate.jpg", - "PageCount": 0, - "CultureId": 0, - "Contents": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBhQSERUUExQWFRUWGBcXGBcYFxcaGhgXGB0cGBcaFxgfHCcfFx0jGhwcHy8gJCcpLCwsHB8xNTAqNSYrLCkBCQoKDgwOGg8PGiwcHyQsLCwsKSwsLCksKSksKSwsLCwsLCksLCwsLCwpLCwsKSwpLCksLCwsLCwsLCwsLCwsLP/AABEIAMkA+wMBIgACEQEDEQH/xAAbAAABBQEBAAAAAAAAAAAAAAAEAAECAwUGB//EAEsQAAIBAgQDAwcIBgoBAwUBAAECEQADBBIhMQVBURMiYQYycYGRofAHFCNCscHR4SQzUmKT0xVDU3JzgpKistLCY3TxNKOztOIW/8QAGAEAAwEBAAAAAAAAAAAAAAAAAAECAwT/xAAiEQACAgICAwEBAQEAAAAAAAAAAQIRAzESIRNBUTIicWH/2gAMAwEAAhEDEQA/APUcVfKBYglnRfadfdRFq80ASPZQPFNrX+Nb+/7pNH2l3rnm2maY0qtlwvH4AqwX28PZUANNKdR+FZ85fR0ifat8D86ftW8Pj11E04FT5JfSaQ4vHw9n50/bnwqKrFMKfkl9CkSGIPh7/wAaftj4fHrpo0oPiXEVs22dzCqJ9pAAE6SSQo13IprJJ6CkHC8fCmOIPhVFi+GVWGoYAj0HWrLiaUvJL6FIn256Uu3Ph76YDT7KD4ljUsW3u3WCW0BZmOwA3P5ak7DcU1kkwpBS4hui++pfOvD31w/HPlHXC2bdy7ZuK18xYtkEE8gbrRFs6glRmIBG+sYuDwnFzjrq/Pk7NjbuEJbQkIxuW1FpLoYWo7Iz3m5HvkmtVz9sKR6kcYY29/5VD+kf3ff+VDYZSLaqWLMAASTJOmpJ6zrVq2DUeR+y1GPsvXGTy99SGI8Pj2VVZtdanIHoqXllZDSHOI8Kl2/hUEPvpytHlkFIkb/hT9t4VACnijzSFSJG/wCBpdv4GoxSK0eaQUh/nHgfd+NL5yOh9341HJT5KPMwpD/OR0Px66cYgePsqvLUVSn5mOkWHFjx9lTD0K9v76MUaCtsc+WxSSWjneJj9T/jJ/xcfaY9dHgwI8aAx571n/F+yzeOnro5uVRPZeLQUmlNc0++kg2qq/igpliAJC6mNWIVR6SxAA3k1gtgELUhXmvlJ8rKYPHNhyge2qp9IsnJcMl1YSMwCldV1BnRth0qeWVs4W7iNxbtNehWkXEykoUaBIaI2kHQ66VTxsR03rqLLIrF8lPKVMfYW/bV1Rtg4AOhI+qSDqOsxBgSJ2Bp9nxpUONOhGF5UeV9rBWwbjL2jsqW7ebV2YwDG4Uakt0HUgV598ub3xYw9+3cYWc/eQEAC4AXtsSBJOjDUwCojU0L8st2xi7FrFYb6RrNx7dy8gBVVBy5bjbz2mUpvIZiDBk7PlMGxvk9dusFkIl9QrFgsMLhBlV1FtiIiBtJiT0xilTA9A4DfDWAwOktB37uY5eeoyxrzBBrSzV45huI4u5wE3cLc7AWrKtcYybj9iotv2bA/Rg9kTmMsTIhR3m6r5JfKK/i8GHxF0XXlvqqpVVJWO6BmJgMSf2hp1znj2wO3mvPPlFxxuY7BYPMAkXMXeE6MtgM1pW/dLoZ9XSvRCDrFeTfKgxwvFcDi3JW09u5hnbkobOCT6Fu5v8AJ4UsS/oGZPyy8Q/SMHY7PIlm6CHLpmecgkWgc6oIgOYDEGNpPpHCo+fOIP6iyNhyu4yZjbauB+WjhKXLWFu2LQOIv3jmyLne4ezEiRJYLlUDkNa7rgiRjbsDQ2rZHoa9jGBEaagj21tL8gXca8tsHhxcY3Q5tAF1sg3WWTAz5ZCSxiXKiTUMJ8oGE+Zpi7l0WrbiQGILZoGZAokuwmCFmvOPIWxabH8WtXELi5duW+zQSzIzYhWyjRViV7zQq6ajSh/KzyKu8P4IVvXVuTibbIuUfR5s0gPuSRJIEAEtEzNLxx0w5M9gXyrw30A7ZQcSQLIOjPIkECNuUnSSBuRXmXyicbe/xaxhLeIZsOwQ37Vu73Tka4bqNl5lEgrrB5Sa2uAeQmHxOBwhu2lz2lstmMknNYssQSGGbvRCvIEREQK53D4FMV5S3wVV7dhESCqkd1bVltIiQWcg6QQDpFEYxT6Ed15U4fH58H80uqqq47ZSshyFO+olDqMsrqQZGWRT8ofyg3OHHDrbsLeN8uoBdlIZMkRCnMCXA5Vyl3EYq75StbsYl8mHCyGGdERxbFy1kBAILuq5j3pgkkrNDeVy3sTx61aRxcOFUXR2gUKrMRcRO4BzaygJBMkTmAp8F1YzovKPjHEcM64tsdhLeGIVOze1cgsd8iCXutIJBzr3eQE16JgsQGtqwdXDKpDrGVwQO8sE6HfQnSuBx3lhgb2IThWIRg0qrZ1+jW6sNbTvRnnSGjKZUahq7+zbVVCgQBoB0ArLIutAEKaYuKVNHxFYASmosaTH4imJkbGigGBpFtaimnx7qmBNMZEUSraChm6fHSigK6MHsmRzePH0uG/xX/8A18R+FGc6Dxx+mw/hcefXYvR93uowCqns0w6MXyu8p7uGUJhrIu32GbvkJatJMZ7zkgCW7qjMCSDG1cLjL2MTDYrF4jE9ri7NsG1aQFbeH7Um2boSBLhM5DMJEE6ggn1X5ijnMVUkQZI5jzT6RJg7iWjc145xPjKpxvGWml7OKy2jOgNxLQtNbzHwe5b3ADMkkATRjrSJZ2PCcXZ4phxhbtpTYuWQ1pp7ysCQQBuroCsMPONu7OgIoP5I8M64ZAxH0fzhNCdfpSVEdAVuEHkG/eJrhvJbH3sFiHwQY5lbtsO/I90mQNfPQzHI5wdyRt+Vd1uGX8FetXb1zAXkVSlxyxSANbZOttwhVwRHeVhtNW16EbWG+UjJxO5gbeGW3bzMc7i4jmLedm7Mrs0afuwYMxWPxDiuNwvF/m2MxBu2cWJAUtbTvAqtu3DE2gXHZGCZBDedDAX5SQ6HCcRWGuWHFm6w2fL9JYfpluIWE7HMAKI+VNreJ4ZhMXaczauhEfSWR1kHTYjs1J6MGHKigOpb5O7Fvh+MtYZe7iLRZAcxJYDtLWYnXRoyj6onmTOR8kWNGK4ZcwrN9W7ajwcb+PduAR+76a6HyH8o3xmGXKnZsbZe2WUlQxMN3Q2tsXJCSyllB0GXM2D5H/Jpi8Bi3+nmzdBH0aFJ0aWJE9iVViFyNJJGwBFTe02BmfJNeW9gL+EdgCwu2SCwH6
5Qba8tczX4EzoRUPkQ8oVtzhrkrcV37uVjK3MkzHm5LiQSdAH1iDXW8P8AkjsYbFjE4cup1ABYt2YYQxQmWLkSsltMxO8Gurt+TVgXXuC2oznO6hUAd9s9yBLnfziRJJ3NKU40I1lNY3lb5M2sfh2sXl7p1Vge8jDzWWdiPYQSDvWyDTSa5U6djPE+GYbinDLgtqljF21BS0LtwW2RCTomdwFViBMFwcqgGBp13BP6TxF83by2MIjJbVghN659GbjQGnswSbhB84iNBzHeka7fHx91SrZ5f+AeOeTXkvxDA8Rvtls3ReJuZ++QWJeGW3mBEdqwOdgNdHJ0Op8snBMXicBaFtFcW2W7cVQc5YKy9xRIgZmJEk9JjX041XesK4KsMwIggiQQetHlt3QHJ+ROOufM4awytbS3CF0zMy21tsJnKuqc2kAwQIrhPkrxDf0rju3t3UvXbssvZswXM1xiHdRCashBMKwBjlPtNu2AIAgDYARHqoT+ibefPEsTIJkxpHdB83rpGpPU01kXYHkPyT4tb2Nx+LdoY3GuNMEraJuMWPQCRtoMkcxU/ktufOsZjse+ivckEzoi5rp56BfoN52jaa9Jt+RWGVL6dkhW/mzgosHMcxzEDMxmNS06CMtc5wf5OGwmDxVq07zetXlAa4zQ9xMgMKigbKJgmBMjar5xYjnfkqtHG47F8Rcau7LbBjRRBgeKg2Vnpmmp8A8rsbjeJYxLOJcYVDdCFbNq4QWlbOQMskSrPBOqqRoSKqwWCxvCOCYlexBuFm+kVm+jt3BDPDAZipH1eoJ0XUj5F/m1jChnu21ci7dZWKhgRIJgjMAtm3mkcrj1T9sDRHH+I8Ntrcxtw4qy1yCyYfKyK0ZmuHTIigsQAhkrGYCu9wPHLN5Q9u9adGIClXUgsYgDxj6u9cPwHyyxfFWuvhuzw+HtsQhZDdvXSkHzcwW3II0g+cQMxBI4y5abinG5R1CYTKBfs21ts7IxKMJLAs10kgmRkVmiAangnvoLPaPKPj6YLDviLs9mmXNlEnvMqaAkT50xNBeSvlpYx6s2HF0opgs9vKCw3CyZJHPTmK83+VbFXMXjU4dh7rtmCXL6kjskKrmQgBZWEl2USCSkDNXpfkhw5LGFtoltrYCgZXjMoHJiCQTMkkaFi5+tWbglHsZts3h7vjSnWmR55U4bXasWMa7tRE0O5oia6MHslnO3mm9h1/ae4fUttx/5Ci2EadKAxBjF4bwGI/4oPvNaN097005fo0w6CLArgflG+TkYy1ddSe1RS1hAYUMWa5eGXbNdJjMdiF5TXe2QYqHEsStq29xzCIrOx6KolttdhWUZNS6FLZ5Hw/yffHW+G41hBGY3rjRHZZX7RnMc7oZgDub55DTb422DxlvE4HtUu6i/Z7F1Y2nJbPLCQoS5mYkjzb0AEwDl3/IfF8SIGIdrGHSBbwqGLdpFGVVOh7W4AIY5QAQQG0gdZ5I/Jxh8C7G0Z2LAkkkj9XMnzV1IWPOg7oprolJLZBR5PeQBGCOGxTtettbVAHgOsQwykCbWVh3VJciBqAMtaXkZ5EjAW2tKZUk9O8N87kAS5nLOgCqoA0k9UBUYNc7yNjKcJw9LU5FjMZPsAHoAAgAbUQVpkJ51KKzbYEStNB8fdU6alYEQtS1qLj0+qpCmAqQpzTA0gIqppAVOmNOwGExS1pSacmgBqiSelSzeFMGoGQxGHzqVJ35gkEeIPKsriHk6r2L1sTmuWblnMWdiA65TBZvXy2ramkDVKTQjy3B+RmL4bg76Ya5PcvFCyIXLsoIUZbmkldCQYMbzph+Q3ELXCOHvexKlLzFyttgwa6/moo0gBQOe3aOa9svWVdSrqGU7hgCDz1B8a53yx8kExuG7DKoUsG5rlIBh1y6ZhMaiCCwO8jaOS+mI4H5NcBktYjieNbvXc153M6W5ziBPMgXMsbCzG5FZXB+N42+nEeJ9vftWVW61q0LjZQ/1DlMqQndBGmYz0IOt8pfktibPC7OHw73biKVW5ZCm4WA1Qh8ufKuVRl83QHTLB6Hyf8lsLfwgsK2KS2LarcsucSi94SVIbKGhpnLImdpEaclVgF/JXxq/icCj4i4brtLZioBjPcQDugLEID1kmTqBXZsfdWT5O+TiYK32Vonsx5qnMcurE6liTJYmtcCuWbTdoaKbyyJ+PGjJoZxpREVvh0xSZzV9f0nD/wB3EH/Yn40cDQjH9Ks/4WJ+2wPvNNxDCu+XJcKMsncwZGmYfW1jfxpy2aYuomvYcxVXEcKt629tgYdGVhsYYZTHjBoIYa9lZe0BYgBTBEHUkyNdRA02jSoW8HiYIOIDGT9QKIg+BMzB3OlY8e7E9lF3D44jKtzDINjd7O4z7bi0WCKSZ+swHQ7Vq8K4eLC5QXdmOZ7jkFncwCzEaDQAAAAAAAAAUK+ExABCXhMHzhPeljO2g1Ajw9ukt2TTk20IvBpGmFLPWIh5p6gLgJqQNFAImkDUc+9MXpgSmpA0ytNPmpAMTSqLOKTPRQDXLsemPjnUs1Vm5rtVimqoBB6Qenp5E0gFUVGtODSFIB6VNSmgB6aacUxoAGxuEt3BFxUccgwBHTYilhMDbtTkRUnfKoE7kTA11J9pohgPCmC/Hoqr6oB8wpBp2qI1ppikMm2oq+hGFFzXRh9kyOZvj9Ks+FrEn/dhx99Go0k+mgr3/wBRZ8UxA9X0Rj2qPdRdsR7TTns0xaIXDezymWBsGiDIA1IBZYMnx7u2tE4a8wHeAmT5uoidNTHL45U9nELqCQD0JE8z16A+w9Kl88XMVLaiJnTzttdAfjqJzf8AgPY/zg79m59dqPV9JVa4ohv1Vz22v5lWfPFBWGENoNtSASY9Qqw8zz+OVToRD50T/V3BtztfzKuFzwI9MfjTipZqkkbP4fZ+NVYhjBygkkEaFdCdvrCrZ1FO7wCenxpSA5m3hMWGntLpHdIU9nyKkie12IBH+bwireHjFrdDXCzIAQUHY6n6skvOkx4xMakApPKnDmAHMnKR3G1DRlI058qIXjFskDMZJyiUYd6VEbfvL7Zra5fALmxLMCBbdSQYM2tDy2c/YawnwfECBN62I3K5ddEEnNZPRto1edlg9HcfKpJ2A6E6AdKzl8oLDQQxIYwIt3TrpA0TnIjryqYt+kARh1uCyA5zXAsFgBBbaQI018PbWF22O2IGnMWyZ80D+rgbMefnAfV12bPGrLEAMZOWJS4o7xyrqyganQD7qPBFHLjtAYfBcXiFVvnKMzEjLktmAI1nujn91alvHAmOzuj0oQPbUDxqyCQbgBzFdZ84RI213HtqWE4xaunKlwM3SCD7x4j2ilL7QBQ+NKefA+w0801ZgKoXLgA2PqVj9gqdPQAI3EFH1bn8G8f/AAqP9Jp+zd/gX/5dGUiKdr4AJcxxyylt31gjKbZ2mYuZZqFniByzctPbMkQFa5oOcoDE+NHU1O18Awr3GLou5Vw7tbJWHFu4Mo0zl1ZQZ5DLPInnGrhb7OgZrbWyd0aCRrEEgx46UTTE0N36AhlFSURSza/d8eqlF
IYidvTV9Cs5kemiZrpw6ZEjmn1xNsf+hiD68+H58vzNG2nn0yfjwoNDGLT/ANve/wDyWKLsDX1/fRLZpifRK7w21c8+2j/3lVvVqPiatucOtl85RcwHnECfRm3qwXGEwhI5GVE9dzIqD4g80I/zL9xrG22PbIPw+2y5GRWXcKVET4DbxqfzMSTLfxLg9gDQPVToYj8vxq130H5U3YmDHh68jd/jX/5lMeHr1ufx7/8AMq1sUf2Gb0FPvYVEY5h/U3PTNj+bR/QtF6WAuxPrZj7yTU8s/Bof58/9hd9tj+dUL2NuFSFsXM0GJNmJ5TF6YnpU8WKyV3hNo720137oEnxga1R/QFqQcsEbQSI1zGI21M1l2cPjs6Eu5UEZgVw8tBM7PpIIGh3WecDX4il50+iJtNO7ZTpB5DNOsHlt6jXaewHfhqkQWuEERBu3CD/uoVfJmwNkiZmGbWdTz1mBPoFJMHigB9KpM6yBtpyyCTM9N48aJ4eLyI3bMHOZspUDRD5oOiyY36+6htr2Mrw/ALCRltIIiCV101EEiRFaBGn21zmKvY/O+QDIWJQxqFnRWGQjbnvrzjW/huIxQuN2qu1uNAFSc2m8KukTzpuL+isNxPk9ZuHM6BmkmeetLC+T9pGzIpViCJDEGCQSJ6EgT6B0ohccf7K7/pH/AGohL5P1WHpH51DcgK/mP79z/W1WKum59ZmrM/gaYNUgQI21NV3MOT/WOvoyfepq8eiosfCnYAvzE/21322/5dI4I/21322/5dO2IfPAtHLpLlgDrOyxJg+I0112JDmAYHIx8SPtqrYA/wAxP9td/wDt/wAukmDIg9tcPgTbj/gKxl49i9vmgLA6lbikaGDCzMaEamZ5cqO4fxd3Um7Zeyc2VRDNManXKBvMRMgTzptSDo1QPj4FV3LIIIJMHTeN/EaiqF4isMYcgEychgRvOmkVL56CNFuc9kbcaHlvPKppjIYXhSW2zKWJgjvOzaE5uZ60aDQWGxVxnZWtFFA0YspzEMQdAdBlykf5ugJMYUnd9iKnMR4kfaP/AIoifj4FDASf8w9xFEgV0YtEzOdA/TE/9vfj+JYn2ae2jLJ6dT9tDYYfpcnlhyNv2rn/APNFWxpRLZriVhlm7pFZ3EcebYHcZ8zQcs6QrMSdDp3Y9ekmASbYE/8AyKhdwSHU5v8AXcHjsGFSkkwB+FcZF45chUgEmTscxTKw3BIEjr6q0rmu3KgTgU/f/i3dP99Tt2Qu0+tmP2sabSu0CTCEQj4+6p28WhJXMMyxI25SN99I2prJqOJ4RauGXto5IAOZVOnSSNqzlV9ikWDFKwBVgwMEQQZB2261Th+JI8wwIHPXbfc7iOdO+ARSWVVDGZIABO51jfnv1NDpw61rNtDm0PcXUdDpqN9KcYxaEaQXrUSQBvVfZg826+e//bT0bUgg9XpJ+00lF+xGTiPK21bzl8yhSwLQpEqcukMeY8IlQYLAVtYe8HQMDIIBB6g6gj0jWqDb19O/iBJAnmBJ9p61FsOGOpYeC3HX/iwqpRT0OgzNFODNCf0YvJrn8e9/3q9LUcz/AKmP2kzWXQixdpqsYgHYg6kezQ+wipKJHOgV4JbVldVCsrFpEiSZmddRrMeA6CBV7ALXEqWKAyQASOgYkDXxKsPUahj8atpQzTBYLMExPMmIUAbk6VVY4SiZsg7Od8umxJmNgdd46dBEn4bI/W3f9Q+zLTXGwKv/APQ4fX6a3EAznWBm0Gu2pB99JPKDDvBW9bMtlHfWSxMAATrJ08eVUv5MWiDOYg6mRbMyQeadQD6h0qWG8nrSHMohgZnKgM6ncKDuT7T1qqgBpTUqQT00itZgZmK4HZY52XUAw2dhlEs+kGBqSfUOlW2MJZ0dYJQBc2cmAgIhjPITM+uirmHzaEmCCCOoOkdfZFRt4BFUqqgAySORkAc+UACNoEVXLrtgZNrg+ExCF1UOGJkkvuSSeemrE+vTlWrh7IQQo0knUzqSSfeajhuGLbBFuUBM6GddvrTyA9lS+at/av7LX8uhysCYf7fz/CrJoWxaYNqzN6cnuhBV9JjGUd6roqmye97BREeiunFomZzuG1xbjph7Z/1XLs+3KPYKOKaTQdhP0q4eXYWRv/6l+KMZpqZfo0xXRShudp9XJA65s2u/IAae3w1XEsCbmSHa2VMyOekRHv57ey1fyqwmqewoyzwi6YjEOukGFBDaudiTl8+JUg91ddBCxvDLpIZLzqQoGgUjRWWSpcTJYN6UG+ka6Pyp81S5MVGNhsNiFYGbzKDOWLHe1YmWN6dQQP8AL6Iqv4PGmcly4pzSJSxlyksSD9Mx+soBHJAOcjpLccudTms3k70SYeL4Vfds1u89olVXvKr6AGZHaZJPMgTpoYqi3w3EqDmxHejT6MFRGeO7IB84HYaoJkCuiaqivOnGbGjEuYS/lAD6qJVwoYl2FwEsGuAFQGSBJ80zSw7YpWGb6RdM020UxJLFctz0AA+kzW02opBD9nxtV8hGAfnDluzuNEn6tslQcwH1zBAIOoM5dombeH28Qr/SHOIEyANYMnuk8wANNmboKPfgNlt0BgggGdwFHmzGyr/pHSpXOFWiioUBCEkAzoecevXwMRECHzQASi/ntZr6awGXQdoVLm7AjfLlWBH1ieVSNnELfk3FFtnUhJOYKoOYAFRmzamJ7umrRRa8FtFVUrooYABnAAYhiIDaiVBE7RpFOnAbI1FsAwBoWBgEkagzoST6Y6CM3JAXnFkf1bn0BPveql4pO1u5zA/V8jB+vyOlNb4NbVSqqAGjNBYSRsZmZ8d4AGwAqy9wxGABBhQYAZlGscgRzGnTlUfyA/z7clHEbk5BHPU5+mtWi+CY59OcejfkfZQ6cEtDZf8Ae55z+11+xegh7HCLaNmUQZzecx7xzCYJI2Zh6PQIP5AKZ9Nj7CaFPER+xd/hXP8ArRYXrS51PQAh4mP2Lv8ABuf9aYcTH7F3+Dc/CjWWoRVJr4AOOJj9i7/Buf8AWrbWMDfVuD0o4+0VcAKeKVr4A2b0+w1FnHj7DUxUY8KkClMSC2UBpHVHA/1FQD7avNRjwpkpgPzHpq6qCe8PTRFdWHQpGBZb9IueFqwfa+I/CjBa0ms/Da4q8OlrDf8ALE1p54Wpns0xt8ejDu2mXEF+zuNB7uQpBDW0VswME6r15eMUjhsttAUutrP1JUrBltBJlZkSSdpmrb2DzXD+uEkHMrkJ3Qp2mJOXp/5ULhwwhjavyoAy5wwlgqkAFjKjU5i3LnMVohMuwuE7NRcRLjOAVysVzDlDEKSSIHP6xJMagf8AotGTMy35QqMpyszgEEGNiNSNwcuadzRFu8VuFltX8pmRGhZineCltIhp2jUwcwp3wxtPKLdYgqDmYlWDZEkRqSq9Y83xmkIngMIgvq/Z3w30ihnGgBMkzuSYIUmTEzus1Ya6bZDCzigBuoAyNOVRpALEDqB5p3Jk1GyUVltpjcwzBWlSFkCSuYjct5wBMqx2BnZxPDlZsxe4A2oWYXQaHKVOogNB5jas2xGOLJzFsuIY
mQHABMakywXRYUIsFiRt54Ll4jCtbeAcU6QhlWDmZIymRmIgAkySfWZKs8GVICtdhQAO+TAA2AP90b7xrImrsPw3KwY3LjECNW7p5arG/P4NJzQUZeKwo7O2WGIuMEeCI7VZhwGB1z90INORneoXi2pJxbESe6oUAHMYWRnEaL3SSY5zrsLwoCO/cJH1i0nlEkjX8z1Mxw/DAjA9pcaP27hYbRqDv1nefWC1NACANbuf1zKMxmZGsEDIFAP1v3hE8xVWHtMmS4pvsvem2w1IEr5uTcsc05lka60Zb4TFwN2l1spLKGuEgSpXYjox+Jl8Pw0I+YO8mCZYQ3dCrm0kwB7ZO5NVyQGXZRllB85EKMsmBJzkq2VYGsDONTyO2a5mbLpcxZbQAlFEHmf1cekxyMb0ZhOEC2iqrt3QAphJ022WD46ayepm1+DErl7W7GsyyktpGpKkkbGNtOhMy5IAbGqzJ2gfEANANsKAQBIJylM/jAOunqWLtnS8pvDMJKqgLebAGUqSp16gb+miL/CWZy/b3UJAEKVCiB0KmeZ1nc+ESbhRKBe1uAoTDgrn1MwSVPomJjnqZm19EHJMfA91Sp6VYjFTGnoXiF8hYXz2OVdtCecHeACY8KaVsCOJx+WAAGaQIzAAE6iTynTYE6zFAPxC/mYACASNLbGPMI1z97zug2O2lEYPhkOWflIVZ0AkEnYbkBoMwSdTWkVrS4x62IzMPxVoGdRz1BIOsle4dfNBbeYA0100bV4MAVgg6gg0PjbIYAHSSADPU7b6yQBHOqrLZGGujHK0kD6TYNqZ1gADxFJpPtAaE0x9FKnrMZEselRTYVI7bUyHSj0Mrdu+o8aLoEL9IPD8KOrsxfkU/RzmFWcTf8LeH+2+fvo8voBQHDD+lYn+5hx7rp++jAamWzTErRZbO9AvxFAWknu791tTOTKmkO2chcoJMkDerLuIiVBCsytkJiM0ch4EjTnXN4rgsw1u/btNIG/bbhLQgkrJAbRcpBNyWkGA4oJ76Oowt7OobLEzoYOx6gkEHcEGCCDVy7iN6wrmGxGiriVMdnK5AGOQFmWc+aGlC0mSJ1UNVK8QxoHmYe4Q9oMVZgMpVS4UZtSWOhkQGBK6QW4i5HWqRQeJ4lF1bcCO7Jzay2bIAsa6IzHaAJ15ZVnjN17Vwjsy7Ei0FdMoDAG3mYvLtlIfRR5wEaSRcDgLtsPdLK9x1XcrlZ1XLnzBxLMIWIUAEARrmyWP6SbuPxJBREMM7A8tLa63Gg+pPAutGha5TBvfS45zW2bKJ7SRcAEyItKV1YoQBPNQSVmrbXEMY6TbSypaFUsWIzAd98sgsuaFABmCTDQKbxgbqcQBu9mBMaFpEBozRvJIBE9My9dL8ZfFtGdphVLHQnRRJgDU6Cua4KMTaPftq4CLqHXMzklrhOsZ2Yzl806k3BoF2uJX5s3AZGZWWNAe8Cu5MDfmalxpgTfGhTDaa78ieg6nw0J1iYqpuI/uXD/kb7Dr6omsu5ib0lRZdybvaKS9oBVJz5STckw2kAbNVaYy+5tELcCBbZaHw3fIM3SALug0UaGO8dok7cEKzorV/NB11E6gg69QdR6CNOgqOI4zatgl3VApAYkwATsCdp2031HWsexjcUHAa1bAKgKpdcxuKJuRDEFSNADEQJOsAmzi77qxAtyRcyFbhYAhoQkhdQV10GhU9YGcsffYB9nitshIM5yVXRhMbnUbTpm2kjqKKa5HWuevYS5b765dFYCWPddjq5kRcc6b5RIA9AmOxmKayqPaDMVQNkbViAO1K6RlLa7g5SRoToeJN9AdX23hUs1YXDuJX20uWWXvKC3cGhRZaM5J+lzchCkHcGthDrUThQF9ZfGMT2ZVmcouolRmMkrlAUKxaTpoNATodCumKFx0hcygkqc0A6tG67ayJjxiog6YE7DgIgBGygbCdJ0GkaCYA0qAxDOvdBXMrQSNVYcmGon09CDUbSrm1fNmbOo17ukRMnQ66abkUUmmg+OdN9AVHDZx3+eUlZmGUhtG02IHLlVeJuQWGkAZp2giNzOnsosmKBDZyNtSD45RrMjqcsT40JiD6RpTVbsTtULsZZTAVWpj49dTDzTaoClPPHp+40bFBqe+I+NDRldeL8imcxw8/pOJ/uYY+uLoj2Ae2jhQOG0xGI6EYcj05XX/AMR76LnQax10qZbZtidIzeLXouWx9GTyV9ySYgHI0BojcHQ6HSs/C23dQUXDMBGfQsokAkydW3Y+II8a3MTgrdw98ToRudvRNV2uE201RYMMBDNHegmRMakAn0VSYOLbszcNeyIznsBBCoYAASQHDE67sAfExrRd5GKothEKGC6keapmIBZZOYH0wedDWuGNoDhbBjSWIBM5TIhDGm42ldCdKLs4JkAKW7YYq4cBiBJKgRB/ZUDbf1y2yEmZmDxDu6AWrHIlR2RYArmIAF2PN80g8uQGhVzsrgt9lYSWGo7odWGVmWe0XZSQcpIO2wMkYfBOq5hZtLcXRALjQAYD9+JGg00+8mnC8JZZPZi2cykBL1w7hu0JJYSSTExME7nddBTFc4e945b9tZIZs1tQBJRBcDMXLAnMygiNEfbumieEcKDA9vh7SEEZQoEbAaHMTpA5DYdNI20vgyQDtA7e9pouaTz72cjTYqNIJOoFXq/8R/8AtUyboKZO3hVSQihQdTAG/I/HQDlQflHeAw7k2xcHdBRtA0uo9xgxrMRRUAGQW6asx9xMesVC/aV1KsJB3HoqFvsvj0c+qxmJw6AQAzaZQCEOvf7o7z+ByIddMtz2FJuhcMjsjIMqmSSQJkaZIUKfEBegq3EcHQMrKjPGsdrcBBGXLll9NgfHLqaoucEN1iWQoWMG4L7MwIWNATB1AGx0PXba0ZtMMxd3s2F23Za4xDHTPJMKIIUETHM9D6aCvIoZi2Db9oPNw5mfs2fvBTlIk96dTb36V3ODMwE2nLBYJW7aUbsCApQgDKT6oHLS25wzI7wl51IHeN1IgclHnLqxJ21GuhFIKCMR2QCMllmBViAouCMuUABACJ2O0gKMswAFhMKjtluWAhZVgZnMhe8RooUBSw0BIkjbKABbvACCyoHKgBVbtMundzaDwmIjUL4sLsLwE5szm4jL5sXQQRqNVywABEdZJ0JIo6XsKYTw3iTMy2xYe2uu8wvdzQO5HONwBsOQrZtb+NC2kjmT6Y+4DlRKXB4/H21nNlcWE5qRNV9qOVRe9pWFE0CYzh07agmcugyneUIGhJ1MyJjap21ZREuN+WboABEmBv7aJN7WImrc9U5PTFQKEedyd9ToBr0Gp5+G09aIs2cviTuTuek1INTC78ffSbbCidUTqasDCh7h8TRFDokx1qSHumq8wpz5o8TNWx0PaP0g+OVHUAjAXFA5z7gaNmujHomezDOCXtO0k5igtnoVDFhp1kn21euHHX49lPSpuCbszUmuhHDAcz7qibfiatb76rp8UHJiW14mp9j+97vzqK1Z+dKkPnL6RFrx9350/Z/vD2fnSWmX49tKkPyS+iNueY9n50uz8R76Q/H
7KaikHkl9G7PxHs/On7LxFP8AhSb76fFD8svpA2j1FTgxGnx6qfp6qYUuKFzkQKfE0iu2nvqQ2+OtOPj2UUh+SRELpGntqV2Ty99PyqJo4oOchL6PePgUgnh76frTmjggWSQwHh7IpyZ5fZ+NKpDalwQeRkCTrpU0fTUUh+H3U1vc0cEw8jHRutOW+IpD491Mu/tpeNC5sjz500D4Bqzn7aenwQ/IyKx8A/hVpuiOvt/Cq+Xx0qS86l40Lm2UWlPagxoA3LqI/CjZqlN6hWkY8SZS5dn/2Q==", - "FileName": "Aaron Glover Birth Certificate.jpg", - "SizeBytes": 10651 - } -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Documents/Get-Document-Details.aspx](https://developers.dayforce.com/Build/API-Explorer/Documents/Get-Document-Details.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:documentGuid} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "documentGuid": "696afd0c-5890-4316-9b7e-7ac990189018" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": { - "DocumentType": "Jpg", - "DocumentGroup": "DocumentManagement", - "SourceReportUniqueId": "3ca57484-9053-45b6-ab65-1562636c4713", - "PublishDateTime": "2015-04-15T14:39:11.35", - "Title": "Aaron Glover Birth Certificate.jpg", - "PageCount": 0, - "CultureId": 0, - "Contents": 
"/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBhQSERUUExQWFRUWGBcXGBcYFxcaGhgXGB0cGBcaFxgfHCcfFx0jGhwcHy8gJCcpLCwsHB8xNTAqNSYrLCkBCQoKDgwOGg8PGiwcHyQsLCwsKSwsLCksKSksKSwsLCwsLCksLCwsLCwpLCwsKSwpLCksLCwsLCwsLCwsLCwsLP/AABEIAMkA+wMBIgACEQEDEQH/xAAbAAABBQEBAAAAAAAAAAAAAAAEAAECAwUGB//EAEsQAAIBAgQDAwcIBgoBAwUBAAECEQADBBIhMQVBURMiYQYycYGRofAHFCNCscHR4SQzUmKT0xVDU3JzgpKistLCY3TxNKOztOIW/8QAGAEAAwEBAAAAAAAAAAAAAAAAAAECAwT/xAAiEQACAgICAwEBAQEAAAAAAAAAAQIRAzESIRNBUTIicWH/2gAMAwEAAhEDEQA/APUcVfKBYglnRfadfdRFq80ASPZQPFNrX+Nb+/7pNH2l3rnm2maY0qtlwvH4AqwX28PZUANNKdR+FZ85fR0ifat8D86ftW8Pj11E04FT5JfSaQ4vHw9n50/bnwqKrFMKfkl9CkSGIPh7/wAaftj4fHrpo0oPiXEVs22dzCqJ9pAAE6SSQo13IprJJ6CkHC8fCmOIPhVFi+GVWGoYAj0HWrLiaUvJL6FIn256Uu3Ph76YDT7KD4ljUsW3u3WCW0BZmOwA3P5ak7DcU1kkwpBS4hui++pfOvD31w/HPlHXC2bdy7ZuK18xYtkEE8gbrRFs6glRmIBG+sYuDwnFzjrq/Pk7NjbuEJbQkIxuW1FpLoYWo7Iz3m5HvkmtVz9sKR6kcYY29/5VD+kf3ff+VDYZSLaqWLMAASTJOmpJ6zrVq2DUeR+y1GPsvXGTy99SGI8Pj2VVZtdanIHoqXllZDSHOI8Kl2/hUEPvpytHlkFIkb/hT9t4VACnijzSFSJG/wCBpdv4GoxSK0eaQUh/nHgfd+NL5yOh9341HJT5KPMwpD/OR0Px66cYgePsqvLUVSn5mOkWHFjx9lTD0K9v76MUaCtsc+WxSSWjneJj9T/jJ/xcfaY9dHgwI8aAx571n/F+yzeOnro5uVRPZeLQUmlNc0++kg2qq/igpliAJC6mNWIVR6SxAA3k1gtgELUhXmvlJ8rKYPHNhyge2qp9IsnJcMl1YSMwCldV1BnRth0qeWVs4W7iNxbtNehWkXEykoUaBIaI2kHQ66VTxsR03rqLLIrF8lPKVMfYW/bV1Rtg4AOhI+qSDqOsxBgSJ2Bp9nxpUONOhGF5UeV9rBWwbjL2jsqW7ebV2YwDG4Uakt0HUgV598ub3xYw9+3cYWc/eQEAC4AXtsSBJOjDUwCojU0L8st2xi7FrFYb6RrNx7dy8gBVVBy5bjbz2mUpvIZiDBk7PlMGxvk9dusFkIl9QrFgsMLhBlV1FtiIiBtJiT0xilTA9A4DfDWAwOktB37uY5eeoyxrzBBrSzV45huI4u5wE3cLc7AWrKtcYybj9iotv2bA/Rg9kTmMsTIhR3m6r5JfKK/i8GHxF0XXlvqqpVVJWO6BmJgMSf2hp1znj2wO3mvPPlFxxuY7BYPMAkXMXeE6MtgM1pW/dLoZ9XSvRCDrFeTfKgxwvFcDi3JW09u5hnbkobOCT6Fu5v8AJ4UsS/oGZPyy8Q/SMHY7PIlm6CHLpmecgkWgc6oIgOYDEGNpPpHCo+fOIP6iyNhyu4yZjbauB+WjhKXLWFu2LQOIv3jmyLne4ezEiRJYLlUDkNa7rgiRjbsDQ2rZHoa9jGBEaagj21tL8gXca8tsHhxcY3Q5tAF1sg3WWTAz5ZCSxiXKiTUMJ8oGE+Zpi7l0WrbiQGILZoGZAokuwmCFmvOPIWxabH8WtXELi5duW+zQSzIzYhWyjRViV7zQq6ajSh/KzyKu8P4IVvXVuTibbIuUfR5s0gPuSRJIEAEtEzNLxx0w5M9gXyrw30A7ZQcSQLIOjPIkECNuUnSSBuRXmXyicbe/xaxhLeIZsOwQ37Vu73Tka4bqNl5lEgrrB5Sa2uAeQmHxOBwhu2lz2lstmMknNYssQSGGbvRCvIEREQK53D4FMV5S3wVV7dhESCqkd1bVltIiQWcg6QQDpFEYxT6Ed15U4fH58H80uqqq47ZSshyFO+olDqMsrqQZGWRT8ofyg3OHHDrbsLeN8uoBdlIZMkRCnMCXA5Vyl3EYq75StbsYl8mHCyGGdERxbFy1kBAILuq5j3pgkkrNDeVy3sTx61aRxcOFUXR2gUKrMRcRO4BzaygJBMkTmAp8F1YzovKPjHEcM64tsdhLeGIVOze1cgsd8iCXutIJBzr3eQE16JgsQGtqwdXDKpDrGVwQO8sE6HfQnSuBx3lhgb2IThWIRg0qrZ1+jW6sNbTvRnnSGjKZUahq7+zbVVCgQBoB0ArLIutAEKaYuKVNHxFYASmosaTH4imJkbGigGBpFtaimnx7qmBNMZEUSraChm6fHSigK6MHsmRzePH0uG/xX/8A18R+FGc6Dxx+mw/hcefXYvR93uowCqns0w6MXyu8p7uGUJhrIu32GbvkJatJMZ7zkgCW7qjMCSDG1cLjL2MTDYrF4jE9ri7NsG1aQFbeH7Um2boSBLhM5DMJEE6ggn1X5ijnMVUkQZI5jzT6RJg7iWjc145xPjKpxvGWml7OKy2jOgNxLQtNbzHwe5b3ADMkkATRjrSJZ2PCcXZ4phxhbtpTYuWQ1pp7ysCQQBuroCsMPONu7OgIoP5I8M64ZAxH0fzhNCdfpSVEdAVuEHkG/eJrhvJbH3sFiHwQY5lbtsO/I90mQNfPQzHI5wdyRt+Vd1uGX8FetXb1zAXkVSlxyxSANbZOttwhVwRHeVhtNW16EbWG+UjJxO5gbeGW3bzMc7i4jmLedm7Mrs0afuwYMxWPxDiuNwvF/m2MxBu2cWJAUtbTvAqtu3DE2gXHZGCZBDedDAX5SQ6HCcRWGuWHFm6w2fL9JYfpluIWE7HMAKI+VNreJ4ZhMXaczauhEfSWR1kHTYjs1J6MGHKigOpb5O7Fvh+MtYZe7iLRZAcxJYDtLWYnXRoyj6onmTOR8kWNGK4ZcwrN9W7ajwcb+PduAR+76a6HyH8o3xmGXKnZsbZe2WUlQxMN3Q2tsXJCSyllB0GXM2D5H/Jpi8Bi3+nmzdBH0aFJ0aWJE9iVViFyNJJGwBFTe02BmfJNeW9gL+EdgCwu2SCwH65Qba8tczX4EzoRUPkQ8oVtzhrkrcV37uVjK3MkzHm5LiQSdAH1iDXW8P8AkjsYbFjE4cup1ABYt2YYQxQmWLkSsltMxO8Gurt+TVgXXuC2oznO6hUAd9s9yBLnfziRJJ3NKU40I1lNY3lb5M2sfh2sXl7p1Vge8jDzWWdiPYQSDvWyDTSa5U6djPE+GYbinDLgtqljF21BS0LtwW2RCTomdwFViBMFwcqgGBp13BP6TxF83by2MIjJbVghN659GbjQGnswSbhB84iNBzHeka7fHx91SrZ5f+AeOeTXkvxDA8Rvtl
s3ReJuZ++QWJeGW3mBEdqwOdgNdHJ0Op8snBMXicBaFtFcW2W7cVQc5YKy9xRIgZmJEk9JjX041XesK4KsMwIggiQQetHlt3QHJ+ROOufM4awytbS3CF0zMy21tsJnKuqc2kAwQIrhPkrxDf0rju3t3UvXbssvZswXM1xiHdRCashBMKwBjlPtNu2AIAgDYARHqoT+ibefPEsTIJkxpHdB83rpGpPU01kXYHkPyT4tb2Nx+LdoY3GuNMEraJuMWPQCRtoMkcxU/ktufOsZjse+ivckEzoi5rp56BfoN52jaa9Jt+RWGVL6dkhW/mzgosHMcxzEDMxmNS06CMtc5wf5OGwmDxVq07zetXlAa4zQ9xMgMKigbKJgmBMjar5xYjnfkqtHG47F8Rcau7LbBjRRBgeKg2Vnpmmp8A8rsbjeJYxLOJcYVDdCFbNq4QWlbOQMskSrPBOqqRoSKqwWCxvCOCYlexBuFm+kVm+jt3BDPDAZipH1eoJ0XUj5F/m1jChnu21ci7dZWKhgRIJgjMAtm3mkcrj1T9sDRHH+I8Ntrcxtw4qy1yCyYfKyK0ZmuHTIigsQAhkrGYCu9wPHLN5Q9u9adGIClXUgsYgDxj6u9cPwHyyxfFWuvhuzw+HtsQhZDdvXSkHzcwW3II0g+cQMxBI4y5abinG5R1CYTKBfs21ts7IxKMJLAs10kgmRkVmiAangnvoLPaPKPj6YLDviLs9mmXNlEnvMqaAkT50xNBeSvlpYx6s2HF0opgs9vKCw3CyZJHPTmK83+VbFXMXjU4dh7rtmCXL6kjskKrmQgBZWEl2USCSkDNXpfkhw5LGFtoltrYCgZXjMoHJiCQTMkkaFi5+tWbglHsZts3h7vjSnWmR55U4bXasWMa7tRE0O5oia6MHslnO3mm9h1/ae4fUttx/5Ci2EadKAxBjF4bwGI/4oPvNaN097005fo0w6CLArgflG+TkYy1ddSe1RS1hAYUMWa5eGXbNdJjMdiF5TXe2QYqHEsStq29xzCIrOx6KolttdhWUZNS6FLZ5Hw/yffHW+G41hBGY3rjRHZZX7RnMc7oZgDub55DTb422DxlvE4HtUu6i/Z7F1Y2nJbPLCQoS5mYkjzb0AEwDl3/IfF8SIGIdrGHSBbwqGLdpFGVVOh7W4AIY5QAQQG0gdZ5I/Jxh8C7G0Z2LAkkkj9XMnzV1IWPOg7oprolJLZBR5PeQBGCOGxTtettbVAHgOsQwykCbWVh3VJciBqAMtaXkZ5EjAW2tKZUk9O8N87kAS5nLOgCqoA0k9UBUYNc7yNjKcJw9LU5FjMZPsAHoAAgAbUQVpkJ51KKzbYEStNB8fdU6alYEQtS1qLj0+qpCmAqQpzTA0gIqppAVOmNOwGExS1pSacmgBqiSelSzeFMGoGQxGHzqVJ35gkEeIPKsriHk6r2L1sTmuWblnMWdiA65TBZvXy2ramkDVKTQjy3B+RmL4bg76Ya5PcvFCyIXLsoIUZbmkldCQYMbzph+Q3ELXCOHvexKlLzFyttgwa6/moo0gBQOe3aOa9svWVdSrqGU7hgCDz1B8a53yx8kExuG7DKoUsG5rlIBh1y6ZhMaiCCwO8jaOS+mI4H5NcBktYjieNbvXc153M6W5ziBPMgXMsbCzG5FZXB+N42+nEeJ9vftWVW61q0LjZQ/1DlMqQndBGmYz0IOt8pfktibPC7OHw73biKVW5ZCm4WA1Qh8ufKuVRl83QHTLB6Hyf8lsLfwgsK2KS2LarcsucSi94SVIbKGhpnLImdpEaclVgF/JXxq/icCj4i4brtLZioBjPcQDugLEID1kmTqBXZsfdWT5O+TiYK32Vonsx5qnMcurE6liTJYmtcCuWbTdoaKbyyJ+PGjJoZxpREVvh0xSZzV9f0nD/wB3EH/Yn40cDQjH9Ks/4WJ+2wPvNNxDCu+XJcKMsncwZGmYfW1jfxpy2aYuomvYcxVXEcKt629tgYdGVhsYYZTHjBoIYa9lZe0BYgBTBEHUkyNdRA02jSoW8HiYIOIDGT9QKIg+BMzB3OlY8e7E9lF3D44jKtzDINjd7O4z7bi0WCKSZ+swHQ7Vq8K4eLC5QXdmOZ7jkFncwCzEaDQAAAAAAAAAUK+ExABCXhMHzhPeljO2g1Ajw9ukt2TTk20IvBpGmFLPWIh5p6gLgJqQNFAImkDUc+9MXpgSmpA0ytNPmpAMTSqLOKTPRQDXLsemPjnUs1Vm5rtVimqoBB6Qenp5E0gFUVGtODSFIB6VNSmgB6aacUxoAGxuEt3BFxUccgwBHTYilhMDbtTkRUnfKoE7kTA11J9pohgPCmC/Hoqr6oB8wpBp2qI1ppikMm2oq+hGFFzXRh9kyOZvj9Ks+FrEn/dhx99Go0k+mgr3/wBRZ8UxA9X0Rj2qPdRdsR7TTns0xaIXDezymWBsGiDIA1IBZYMnx7u2tE4a8wHeAmT5uoidNTHL45U9nELqCQD0JE8z16A+w9Kl88XMVLaiJnTzttdAfjqJzf8AgPY/zg79m59dqPV9JVa4ohv1Vz22v5lWfPFBWGENoNtSASY9Qqw8zz+OVToRD50T/V3BtztfzKuFzwI9MfjTipZqkkbP4fZ+NVYhjBygkkEaFdCdvrCrZ1FO7wCenxpSA5m3hMWGntLpHdIU9nyKkie12IBH+bwireHjFrdDXCzIAQUHY6n6skvOkx4xMakApPKnDmAHMnKR3G1DRlI058qIXjFskDMZJyiUYd6VEbfvL7Zra5fALmxLMCBbdSQYM2tDy2c/YawnwfECBN62I3K5ddEEnNZPRto1edlg9HcfKpJ2A6E6AdKzl8oLDQQxIYwIt3TrpA0TnIjryqYt+kARh1uCyA5zXAsFgBBbaQI018PbWF22O2IGnMWyZ80D+rgbMefnAfV12bPGrLEAMZOWJS4o7xyrqyganQD7qPBFHLjtAYfBcXiFVvnKMzEjLktmAI1nujn91alvHAmOzuj0oQPbUDxqyCQbgBzFdZ84RI213HtqWE4xaunKlwM3SCD7x4j2ilL7QBQ+NKefA+w0801ZgKoXLgA2PqVj9gqdPQAI3EFH1bn8G8f/AAqP9Jp+zd/gX/5dGUiKdr4AJcxxyylt31gjKbZ2mYuZZqFniByzctPbMkQFa5oOcoDE+NHU1O18Awr3GLou5Vw7tbJWHFu4Mo0zl1ZQZ5DLPInnGrhb7OgZrbWyd0aCRrEEgx46UTTE0N36AhlFSURSza/d8eqlFIYidvTV9Cs5kemiZrpw6ZEjmn1xNsf+hiD68+H58vzNG2nn0yfjwoNDGLT/ANve/wDyWKLsDX1/fRLZpifRK7w21c8+2j/3lVvVqPiatucOtl85RcwHnECfRm3qwXGEwhI5GVE9dzIqD4g80I/zL9xrG22PbIPw+2y5GRWXcKVET4DbxqfzMSTLfxLg9gDQPVToYj8vxq130H5U3YmDHh68jd/jX/5lMeHr1ufx7/8AMq1sUf2Gb0FPvYVEY5h/U3PTNj+bR/QtF6WAuxPrZj7yTU8s/Bof58/9hd9tj+dUL2NuF
SFsXM0GJNmJ5TF6YnpU8WKyV3hNo720137oEnxga1R/QFqQcsEbQSI1zGI21M1l2cPjs6Eu5UEZgVw8tBM7PpIIGh3WecDX4il50+iJtNO7ZTpB5DNOsHlt6jXaewHfhqkQWuEERBu3CD/uoVfJmwNkiZmGbWdTz1mBPoFJMHigB9KpM6yBtpyyCTM9N48aJ4eLyI3bMHOZspUDRD5oOiyY36+6htr2Mrw/ALCRltIIiCV101EEiRFaBGn21zmKvY/O+QDIWJQxqFnRWGQjbnvrzjW/huIxQuN2qu1uNAFSc2m8KukTzpuL+isNxPk9ZuHM6BmkmeetLC+T9pGzIpViCJDEGCQSJ6EgT6B0ohccf7K7/pH/AGohL5P1WHpH51DcgK/mP79z/W1WKum59ZmrM/gaYNUgQI21NV3MOT/WOvoyfepq8eiosfCnYAvzE/21322/5dI4I/21322/5dO2IfPAtHLpLlgDrOyxJg+I0112JDmAYHIx8SPtqrYA/wAxP9td/wDt/wAukmDIg9tcPgTbj/gKxl49i9vmgLA6lbikaGDCzMaEamZ5cqO4fxd3Um7Zeyc2VRDNManXKBvMRMgTzptSDo1QPj4FV3LIIIJMHTeN/EaiqF4isMYcgEychgRvOmkVL56CNFuc9kbcaHlvPKppjIYXhSW2zKWJgjvOzaE5uZ60aDQWGxVxnZWtFFA0YspzEMQdAdBlykf5ugJMYUnd9iKnMR4kfaP/AIoifj4FDASf8w9xFEgV0YtEzOdA/TE/9vfj+JYn2ae2jLJ6dT9tDYYfpcnlhyNv2rn/APNFWxpRLZriVhlm7pFZ3EcebYHcZ8zQcs6QrMSdDp3Y9ekmASbYE/8AyKhdwSHU5v8AXcHjsGFSkkwB+FcZF45chUgEmTscxTKw3BIEjr6q0rmu3KgTgU/f/i3dP99Tt2Qu0+tmP2sabSu0CTCEQj4+6p28WhJXMMyxI25SN99I2prJqOJ4RauGXto5IAOZVOnSSNqzlV9ikWDFKwBVgwMEQQZB2261Th+JI8wwIHPXbfc7iOdO+ARSWVVDGZIABO51jfnv1NDpw61rNtDm0PcXUdDpqN9KcYxaEaQXrUSQBvVfZg826+e//bT0bUgg9XpJ+00lF+xGTiPK21bzl8yhSwLQpEqcukMeY8IlQYLAVtYe8HQMDIIBB6g6gj0jWqDb19O/iBJAnmBJ9p61FsOGOpYeC3HX/iwqpRT0OgzNFODNCf0YvJrn8e9/3q9LUcz/AKmP2kzWXQixdpqsYgHYg6kezQ+wipKJHOgV4JbVldVCsrFpEiSZmddRrMeA6CBV7ALXEqWKAyQASOgYkDXxKsPUahj8atpQzTBYLMExPMmIUAbk6VVY4SiZsg7Od8umxJmNgdd46dBEn4bI/W3f9Q+zLTXGwKv/APQ4fX6a3EAznWBm0Gu2pB99JPKDDvBW9bMtlHfWSxMAATrJ08eVUv5MWiDOYg6mRbMyQeadQD6h0qWG8nrSHMohgZnKgM6ncKDuT7T1qqgBpTUqQT00itZgZmK4HZY52XUAw2dhlEs+kGBqSfUOlW2MJZ0dYJQBc2cmAgIhjPITM+uirmHzaEmCCCOoOkdfZFRt4BFUqqgAySORkAc+UACNoEVXLrtgZNrg+ExCF1UOGJkkvuSSeemrE+vTlWrh7IQQo0knUzqSSfeajhuGLbBFuUBM6GddvrTyA9lS+at/av7LX8uhysCYf7fz/CrJoWxaYNqzN6cnuhBV9JjGUd6roqmye97BREeiunFomZzuG1xbjph7Z/1XLs+3KPYKOKaTQdhP0q4eXYWRv/6l+KMZpqZfo0xXRShudp9XJA65s2u/IAae3w1XEsCbmSHa2VMyOekRHv57ey1fyqwmqewoyzwi6YjEOukGFBDaudiTl8+JUg91ddBCxvDLpIZLzqQoGgUjRWWSpcTJYN6UG+ka6Pyp81S5MVGNhsNiFYGbzKDOWLHe1YmWN6dQQP8AL6Iqv4PGmcly4pzSJSxlyksSD9Mx+soBHJAOcjpLccudTms3k70SYeL4Vfds1u89olVXvKr6AGZHaZJPMgTpoYqi3w3EqDmxHejT6MFRGeO7IB84HYaoJkCuiaqivOnGbGjEuYS/lAD6qJVwoYl2FwEsGuAFQGSBJ80zSw7YpWGb6RdM020UxJLFctz0AA+kzW02opBD9nxtV8hGAfnDluzuNEn6tslQcwH1zBAIOoM5dombeH28Qr/SHOIEyANYMnuk8wANNmboKPfgNlt0BgggGdwFHmzGyr/pHSpXOFWiioUBCEkAzoecevXwMRECHzQASi/ntZr6awGXQdoVLm7AjfLlWBH1ieVSNnELfk3FFtnUhJOYKoOYAFRmzamJ7umrRRa8FtFVUrooYABnAAYhiIDaiVBE7RpFOnAbI1FsAwBoWBgEkagzoST6Y6CM3JAXnFkf1bn0BPveql4pO1u5zA/V8jB+vyOlNb4NbVSqqAGjNBYSRsZmZ8d4AGwAqy9wxGABBhQYAZlGscgRzGnTlUfyA/z7clHEbk5BHPU5+mtWi+CY59OcejfkfZQ6cEtDZf8Ae55z+11+xegh7HCLaNmUQZzecx7xzCYJI2Zh6PQIP5AKZ9Nj7CaFPER+xd/hXP8ArRYXrS51PQAh4mP2Lv8ABuf9aYcTH7F3+Dc/CjWWoRVJr4AOOJj9i7/Buf8AWrbWMDfVuD0o4+0VcAKeKVr4A2b0+w1FnHj7DUxUY8KkClMSC2UBpHVHA/1FQD7avNRjwpkpgPzHpq6qCe8PTRFdWHQpGBZb9IueFqwfa+I/CjBa0ms/Da4q8OlrDf8ALE1p54Wpns0xt8ejDu2mXEF+zuNB7uQpBDW0VswME6r15eMUjhsttAUutrP1JUrBltBJlZkSSdpmrb2DzXD+uEkHMrkJ3Qp2mJOXp/5ULhwwhjavyoAy5wwlgqkAFjKjU5i3LnMVohMuwuE7NRcRLjOAVysVzDlDEKSSIHP6xJMagf8AotGTMy35QqMpyszgEEGNiNSNwcuadzRFu8VuFltX8pmRGhZineCltIhp2jUwcwp3wxtPKLdYgqDmYlWDZEkRqSq9Y83xmkIngMIgvq/Z3w30ihnGgBMkzuSYIUmTEzus1Ya6bZDCzigBuoAyNOVRpALEDqB5p3Jk1GyUVltpjcwzBWlSFkCSuYjct5wBMqx2BnZxPDlZsxe4A2oWYXQaHKVOogNB5jas2xGOLJzFsuIYmQHABMakywXRYUIsFiRt54Ll4jCtbeAcU6QhlWDmZIymRmIgAkySfWZKs8GVICtdhQAO+TAA2AP90b7xrImrsPw3KwY3LjECNW7p5arG/P4NJzQUZeKwo7O2WGIuMEeCI7VZhwGB1z90INORneoXi2pJxbESe6oUAHMYWRnEaL3SSY5zrsLwoCO/cJH1i0nlEkjX8z1Mxw/DAjA9pcaP27hYbRqDv1nefWC1NACANbuf1zKMxmZGsEDIFAP1v3hE8xVWHtMmS4pvsvem2w1IEr5uTcsc05lka60Zb4TFwN2l1spL
KGuEgSpXYjox+Jl8Pw0I+YO8mCZYQ3dCrm0kwB7ZO5NVyQGXZRllB85EKMsmBJzkq2VYGsDONTyO2a5mbLpcxZbQAlFEHmf1cekxyMb0ZhOEC2iqrt3QAphJ022WD46ayepm1+DErl7W7GsyyktpGpKkkbGNtOhMy5IAbGqzJ2gfEANANsKAQBIJylM/jAOunqWLtnS8pvDMJKqgLebAGUqSp16gb+miL/CWZy/b3UJAEKVCiB0KmeZ1nc+ESbhRKBe1uAoTDgrn1MwSVPomJjnqZm19EHJMfA91Sp6VYjFTGnoXiF8hYXz2OVdtCecHeACY8KaVsCOJx+WAAGaQIzAAE6iTynTYE6zFAPxC/mYACASNLbGPMI1z97zug2O2lEYPhkOWflIVZ0AkEnYbkBoMwSdTWkVrS4x62IzMPxVoGdRz1BIOsle4dfNBbeYA0100bV4MAVgg6gg0PjbIYAHSSADPU7b6yQBHOqrLZGGujHK0kD6TYNqZ1gADxFJpPtAaE0x9FKnrMZEselRTYVI7bUyHSj0Mrdu+o8aLoEL9IPD8KOrsxfkU/RzmFWcTf8LeH+2+fvo8voBQHDD+lYn+5hx7rp++jAamWzTErRZbO9AvxFAWknu791tTOTKmkO2chcoJMkDerLuIiVBCsytkJiM0ch4EjTnXN4rgsw1u/btNIG/bbhLQgkrJAbRcpBNyWkGA4oJ76Oowt7OobLEzoYOx6gkEHcEGCCDVy7iN6wrmGxGiriVMdnK5AGOQFmWc+aGlC0mSJ1UNVK8QxoHmYe4Q9oMVZgMpVS4UZtSWOhkQGBK6QW4i5HWqRQeJ4lF1bcCO7Jzay2bIAsa6IzHaAJ15ZVnjN17Vwjsy7Ei0FdMoDAG3mYvLtlIfRR5wEaSRcDgLtsPdLK9x1XcrlZ1XLnzBxLMIWIUAEARrmyWP6SbuPxJBREMM7A8tLa63Gg+pPAutGha5TBvfS45zW2bKJ7SRcAEyItKV1YoQBPNQSVmrbXEMY6TbSypaFUsWIzAd98sgsuaFABmCTDQKbxgbqcQBu9mBMaFpEBozRvJIBE9My9dL8ZfFtGdphVLHQnRRJgDU6Cua4KMTaPftq4CLqHXMzklrhOsZ2Yzl806k3BoF2uJX5s3AZGZWWNAe8Cu5MDfmalxpgTfGhTDaa78ieg6nw0J1iYqpuI/uXD/kb7Dr6omsu5ib0lRZdybvaKS9oBVJz5STckw2kAbNVaYy+5tELcCBbZaHw3fIM3SALug0UaGO8dok7cEKzorV/NB11E6gg69QdR6CNOgqOI4zatgl3VApAYkwATsCdp2031HWsexjcUHAa1bAKgKpdcxuKJuRDEFSNADEQJOsAmzi77qxAtyRcyFbhYAhoQkhdQV10GhU9YGcsffYB9nitshIM5yVXRhMbnUbTpm2kjqKKa5HWuevYS5b765dFYCWPddjq5kRcc6b5RIA9AmOxmKayqPaDMVQNkbViAO1K6RlLa7g5SRoToeJN9AdX23hUs1YXDuJX20uWWXvKC3cGhRZaM5J+lzchCkHcGthDrUThQF9ZfGMT2ZVmcouolRmMkrlAUKxaTpoNATodCumKFx0hcygkqc0A6tG67ayJjxiog6YE7DgIgBGygbCdJ0GkaCYA0qAxDOvdBXMrQSNVYcmGon09CDUbSrm1fNmbOo17ukRMnQ66abkUUmmg+OdN9AVHDZx3+eUlZmGUhtG02IHLlVeJuQWGkAZp2giNzOnsosmKBDZyNtSD45RrMjqcsT40JiD6RpTVbsTtULsZZTAVWpj49dTDzTaoClPPHp+40bFBqe+I+NDRldeL8imcxw8/pOJ/uYY+uLoj2Ae2jhQOG0xGI6EYcj05XX/AMR76LnQax10qZbZtidIzeLXouWx9GTyV9ySYgHI0BojcHQ6HSs/C23dQUXDMBGfQsokAkydW3Y+II8a3MTgrdw98ToRudvRNV2uE201RYMMBDNHegmRMakAn0VSYOLbszcNeyIznsBBCoYAASQHDE67sAfExrRd5GKothEKGC6keapmIBZZOYH0wedDWuGNoDhbBjSWIBM5TIhDGm42ldCdKLs4JkAKW7YYq4cBiBJKgRB/ZUDbf1y2yEmZmDxDu6AWrHIlR2RYArmIAF2PN80g8uQGhVzsrgt9lYSWGo7odWGVmWe0XZSQcpIO2wMkYfBOq5hZtLcXRALjQAYD9+JGg00+8mnC8JZZPZi2cykBL1w7hu0JJYSSTExME7nddBTFc4e945b9tZIZs1tQBJRBcDMXLAnMygiNEfbumieEcKDA9vh7SEEZQoEbAaHMTpA5DYdNI20vgyQDtA7e9pouaTz72cjTYqNIJOoFXq/8R/8AtUyboKZO3hVSQihQdTAG/I/HQDlQflHeAw7k2xcHdBRtA0uo9xgxrMRRUAGQW6asx9xMesVC/aV1KsJB3HoqFvsvj0c+qxmJw6AQAzaZQCEOvf7o7z+ByIddMtz2FJuhcMjsjIMqmSSQJkaZIUKfEBegq3EcHQMrKjPGsdrcBBGXLll9NgfHLqaoucEN1iWQoWMG4L7MwIWNATB1AGx0PXba0ZtMMxd3s2F23Za4xDHTPJMKIIUETHM9D6aCvIoZi2Db9oPNw5mfs2fvBTlIk96dTb36V3ODMwE2nLBYJW7aUbsCApQgDKT6oHLS25wzI7wl51IHeN1IgclHnLqxJ21GuhFIKCMR2QCMllmBViAouCMuUABACJ2O0gKMswAFhMKjtluWAhZVgZnMhe8RooUBSw0BIkjbKABbvACCyoHKgBVbtMundzaDwmIjUL4sLsLwE5szm4jL5sXQQRqNVywABEdZJ0JIo6XsKYTw3iTMy2xYe2uu8wvdzQO5HONwBsOQrZtb+NC2kjmT6Y+4DlRKXB4/H21nNlcWE5qRNV9qOVRe9pWFE0CYzh07agmcugyneUIGhJ1MyJjap21ZREuN+WboABEmBv7aJN7WImrc9U5PTFQKEedyd9ToBr0Gp5+G09aIs2cviTuTuek1INTC78ffSbbCidUTqasDCh7h8TRFDokx1qSHumq8wpz5o8TNWx0PaP0g+OVHUAjAXFA5z7gaNmujHomezDOCXtO0k5igtnoVDFhp1kn21euHHX49lPSpuCbszUmuhHDAcz7qibfiatb76rp8UHJiW14mp9j+97vzqK1Z+dKkPnL6RFrx9350/Z/vD2fnSWmX49tKkPyS+iNueY9n50uz8R76Q/H7KaikHkl9G7PxHs/On7LxFP8AhSb76fFD8svpA2j1FTgxGnx6qfp6qYUuKFzkQKfE0iu2nvqQ2+OtOPj2UUh+SRELpGntqV2Ty99PyqJo4oOchL6PePgUgnh76frTmjggWSQwHh7IpyZ5fZ+NKpDalwQeRkCTrpU0fTUUh+H3U1vc0cEw8jHRutOW+IpD491Mu/tpeNC5sjz500D4Bqzn7aenwQ/IyKx8A/hVpuiOvt/Cq+Xx0qS86l40Lm2UWlPagxoA3LqI/CjZqlN6hWkY8SZS5dn/2Q==", - 
"FileName": "Aaron Glover Birth Certificate.jpg", - "SizeBytes": 10651 - } -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/listofdocuments.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/listofdocuments.md deleted file mode 100644 index 2bc6712cb8..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-documents/listofdocuments.md +++ /dev/null @@ -1,171 +0,0 @@ -# Working with Employee Addresses - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve list of documents attached to an employee - -| Operation | Description | -| ------------- |-------------| -|[GET a List of Documents](#retrieving-a-list-of-documents)| This request allows to retrieve the list of documents attached to an employee. The response includes the document GUID used to retrieve contents with Get Document Details. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving a List of Documents -We can use GET a List of Documents operation with required parameters to get the list of documents related to an employee. - -**GET a List of Documents** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* employeeXRefCode (Mandatory): Uniquely identifies the employee whose document you want to retrieve. Partial search is not supported, so provide the full value. Otherwise, a 400 error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "DocumentGUID": "52dd3956-4ba3-4001-8678-a102757d42eb", - "DocumentName": "Aaron Glover Employment Contract.jpg", - "DocumentType": {}, - "FileName": "Aaron Glover Employment Contract.jpg", - "UploadedDate": "2015-04-15T14:39:20.13", - "UploadedBy": { - "DisplayName": "Macon Burke", - "XRefCode": "62779", - "LoginId": "CAdmin" - } - }, - { - "DocumentGUID": "696afd0c-5890-4316-9b7e-7ac990189018", - "DocumentName": "Aaron Glover Birth Certificate.jpg", - "DocumentType": {}, - "FileName": "Aaron Glover Birth Certificate.jpg", - "UploadedDate": "2015-04-15T14:39:10.7", - "UploadedBy": { - "DisplayName": "Macon Burke", - "XRefCode": "62779", - "LoginId": "CAdmin" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Documents/GET-a-List-of-Documents-(1).aspx](https://developers.dayforce.com/Build/API-Explorer/Documents/GET-a-List-of-Documents-(1).aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. 
- 
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200 with the following response body:
-
-```json
-{
-    "Data": [
-        {
-            "DocumentGUID": "52dd3956-4ba3-4001-8678-a102757d42eb",
-            "DocumentName": "Aaron Glover Employment Contract.jpg",
-            "DocumentType": {},
-            "FileName": "Aaron Glover Employment Contract.jpg",
-            "UploadedDate": "2015-04-15T14:39:20.13",
-            "UploadedBy": {
-                "DisplayName": "Macon Burke",
-                "XRefCode": "62779",
-                "LoginId": "CAdmin"
-            }
-        },
-        {
-            "DocumentGUID": "696afd0c-5890-4316-9b7e-7ac990189018",
-            "DocumentName": "Aaron Glover Birth Certificate.jpg",
-            "DocumentType": {},
-            "FileName": "Aaron Glover Birth Certificate.jpg",
-            "UploadedDate": "2015-04-15T14:39:10.7",
-            "UploadedBy": {
-                "DisplayName": "Macon Burke",
-                "XRefCode": "62779",
-                "LoginId": "CAdmin"
-            }
-        }
-    ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeclockdevicegroups.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeclockdevicegroups.md
deleted file mode 100644
index d279c43819..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeclockdevicegroups.md
+++ /dev/null
@@ -1,141 +0,0 @@
-# Working with Employee Clock Device Groups
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operation allows you to retrieve an employee's clock device groups.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Clock Device Groups](#retrieving-employee-clock-device-groups)| Retrieve an employee's clock device groups that control access to the clocks the employee can punch on. |
-
-### Operation details
-
-This section provides more details on the operation.
-
-#### Retrieving Employee Clock Device Groups
-We can use the GET Employee Clock Device Groups operation with the required parameters to search for and find the required employee's clock device groups.
-
-**GET Employee Clock Device Groups**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:contextDate}
-         {$ctx:contextDateRangeFrom}
-         {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
- 
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "63499"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-    "Data": [
-        {
-            "ClockDeviceGroup": {
-                "ShortName": "Clock Group 1",
-                "LongName": "Clock Group 1"
-            }
-        }
-    ]
-}
-```
-
-**Related Dayforce documentation**
-
-[]()
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init operation and the query operation.
-
-1. Create a sample proxy as below:
-```xml
-
-
-
-
-
-
-
-
-
-
-
-         {$ctx:username}
-         {$ctx:password}
-         {$ctx:clientNamespace}
-         {$ctx:apiVersion}
-
-
-         {$ctx:xRefCode}
-
-
-
-
-
-
-
-```
-
-2. Create a JSON file named query.json and copy the configuration given below to it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "63499"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200 with the following response body:
-
-```json
-{
-    "Data": [
-        {
-            "ClockDeviceGroup": {
-                "ShortName": "Clock Group 1",
-                "LongName": "Clock Group 1"
-            }
-        }
-    ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecompensationsummary.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecompensationsummary.md
deleted file mode 100644
index 1932ab695c..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecompensationsummary.md
+++ /dev/null
@@ -1,236 +0,0 @@
-# Working with Employee Compensation Summary
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operation allows you to retrieve an employee's compensation summary.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Compensation Summary](#retrieving-employee-compensation-summary)| Retrieve an employee's condensed status information based on compensation changes. |
-
-### Operation details
-
-This section provides more details on the operation.
-
-#### Retrieving Employee Compensation Summary
-We can use the GET Employee Compensation Summary operation with the required parameters to search for and find the required employee's compensation summary.
-
-**GET Employee Compensation Summary**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:contextDate}
-         {$ctx:contextDateRangeFrom}
-         {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates.
The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EmployeeNumber": "42199", - "EffectiveStart": "2007-07-01T00:00:00", - "PayGrade": { - "ShortName": "Associates", - "LongName": "Associates" - }, - "PayType": { - "XRefCode": "HourlyNon", - "ShortName": "Hourly(Non-Exempt)", - "LongName": "Hourly(Non-Exempt)" - }, - "PayGroup": { - "PayFrequency": { - "PayFrequencyType": "w", - "ShortName": "Weekly", - "LongName": "Weekly" - }, - "XRefCode": "USA", - "ShortName": "USA - Weekly", - "LongName": "USA - Weekly" - }, - "PayClass": { - "XRefCode": "FT", - "ShortName": "FT", - "LongName": "Full Time" - }, - "AlternateRate": 3.00000, - "AverageDailyHours": 8.0000, - "BaseRate": 21.50000, - "BaseSalary": 44720.00000, - "NormalWeeklyHours": 40.0000, - "VacationRate": 10.00000, - "MinimumRate": 8.00000, - "ControlRate": 10.50000, - "MaximumRate": 13.00000, - "RateMidPoint": 10.50000, - "MinimumSalary": 16640.00000, - "ControlSalary": 21840.00000, - "MaximumSalary": 27040.00000, - "SalaryMidPoint": 21840.00000, - "CompRatio": 2.04762, - "ChangePercent": 0.075, - "ChangeValue": 1.50000, - "PreviousBaseSalary": 41600.00000, - "PreviousBaseRate": 20.00000, - "PayPolicy": { - "XRefCode": "MHourly", - "ShortName": "MHourly ", - "LongName": "MHourly " - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Compensation-Summary/GET-Employee-Compensation-Summary.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Compensation-Summary/GET-Employee-Compensation-Summary.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDate} - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. 
- 
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200 with the following response body:
-
-```json
-{
-    "Data": [
-        {
-            "EmployeeNumber": "42199",
-            "EffectiveStart": "2007-07-01T00:00:00",
-            "PayGrade": {
-                "ShortName": "Associates",
-                "LongName": "Associates"
-            },
-            "PayType": {
-                "XRefCode": "HourlyNon",
-                "ShortName": "Hourly(Non-Exempt)",
-                "LongName": "Hourly(Non-Exempt)"
-            },
-            "PayGroup": {
-                "PayFrequency": {
-                    "PayFrequencyType": "w",
-                    "ShortName": "Weekly",
-                    "LongName": "Weekly"
-                },
-                "XRefCode": "USA",
-                "ShortName": "USA - Weekly",
-                "LongName": "USA - Weekly"
-            },
-            "PayClass": {
-                "XRefCode": "FT",
-                "ShortName": "FT",
-                "LongName": "Full Time"
-            },
-            "AlternateRate": 3.00000,
-            "AverageDailyHours": 8.0000,
-            "BaseRate": 21.50000,
-            "BaseSalary": 44720.00000,
-            "NormalWeeklyHours": 40.0000,
-            "VacationRate": 10.00000,
-            "MinimumRate": 8.00000,
-            "ControlRate": 10.50000,
-            "MaximumRate": 13.00000,
-            "RateMidPoint": 10.50000,
-            "MinimumSalary": 16640.00000,
-            "ControlSalary": 21840.00000,
-            "MaximumSalary": 27040.00000,
-            "SalaryMidPoint": 21840.00000,
-            "CompRatio": 2.04762,
-            "ChangePercent": 0.075,
-            "ChangeValue": 1.50000,
-            "PreviousBaseSalary": 41600.00000,
-            "PreviousBaseRate": 20.00000,
-            "PayPolicy": {
-                "XRefCode": "MHourly",
-                "ShortName": "MHourly ",
-                "LongName": "MHourly "
-            }
-        }
-    ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecourses.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecourses.md
deleted file mode 100644
index 3a59832ab4..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeecourses.md
+++ /dev/null
@@ -1,159 +0,0 @@
-# Working with Employee Courses
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve the courses of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Courses](#retrieving-employee-courses)| Retrieve the courses associated with an employee. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee Courses
-We can use the GET Employee Courses operation with the required parameters to search for and find the courses associated with employees.
-
-**GET Employee Courses**
-```xml
-
-         {$ctx:xRefCode}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
- 
-```json
-{
-    "Data": [
-        {
-            "Course": {
-                "CourseType": {
-                    "ShortName": "Management"
-                },
-                "CourseProvider": {
-                    "XRefCode": "Internal",
-                    "ShortName": "Internal"
-                },
-                "ShortName": "Health and Safety"
-            },
-            "EmployeeTrainingProgram": {
-                "TrainingProgram": {
-                    "ShortName": "First Aid Training"
-                },
-                "EnrollmentDate": "2011-12-12T00:00:00"
-            }
-        }
-    ]
-}
-```
-
-**Related Dayforce documentation**
-
-[]()
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init operation and the query operation.
-
-1. Create a sample proxy as below:
-```xml
-
-
-
-
-
-
-
-
-
-
-
-         {$ctx:username}
-         {$ctx:password}
-         {$ctx:clientNamespace}
-         {$ctx:apiVersion}
-
-
-         {$ctx:xRefCode}
-
-
-
-
-
-
-
-```
-
-2. Create a JSON file named query.json and copy the configuration given below to it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200 with the following response body:
-
-```json
-{
-    "Data": [
-        {
-            "Course": {
-                "CourseType": {
-                    "ShortName": "Management"
-                },
-                "CourseProvider": {
-                    "XRefCode": "Internal",
-                    "ShortName": "Internal"
-                },
-                "ShortName": "Health and Safety"
-            },
-            "EmployeeTrainingProgram": {
-                "TrainingProgram": {
-                    "ShortName": "First Aid Training"
-                },
-                "EnrollmentDate": "2011-12-12T00:00:00"
-            }
-        }
-    ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentagreements.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentagreements.md
deleted file mode 100644
index bf3878b6b1..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentagreements.md
+++ /dev/null
@@ -1,354 +0,0 @@
-# Working with Employee Employment Agreements
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve, create, or update the employment agreements of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Employment Agreements](#retrieving-employee-employment-agreements)| Retrieve the employment agreement information of an employee. |
-|[POST Employee Employment Agreements](#creating-employee-employment-agreements)| Create the employment agreement information of an employee. |
-|[PATCH Employee Employment Agreements](#updating-employee-employment-agreements)| Update the employment agreement information of an employee. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee Employment Agreements
-We can use the GET Employee Employment Agreements operation with the required parameters to retrieve the employment agreement information of an employee.
-
-**GET Employee Employment Agreements**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:contextDate}
-         {$ctx:contextDateRangeFrom}
-         {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved.
The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2019-01-01T00:00:00", - "XRefCode": "dcc714ea-0e89-415c-aad1-242185951438", - "EmploymentAgreementType": { - "XRefCode": "BLCO", - "ShortName": "Blue collar", - "LongName": "Blue collar" - }, - "EmploymentAgreementPopulation": { - "XRefCode": "DKFULI", - "ShortName": "Funktionaer-ligende (Salaried \"ligende\")", - "LongName": "Funktionaer-ligende (Salaried \"ligende\")" - }, - "EmploymentAgreementDetails": { - "XRefCode": "DKFS", - "ShortName": "Vacation funktionaer with full salary", - "LongName": "Vacation funktionaer with full salary" - }, - "EmploymentAgreementTaxRegime": { - "XRefCode": "DKBK", - "ShortName": "BIKORT", - "LongName": "BIKORT" - }, - "EmploymentAgreementDuration": { - "XRefCode": "OE", - "ShortName": "Open ended", - "LongName": "Open ended" - }, - "Country": { - "Name": "Denmark", - "XRefCode": "DNK", - "ShortName": "Denmark", - "LongName": "Denmark" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/GET-Employee-Employment-Agreements.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/GET-Employee-Employment-Agreements.aspx) - -#### Creating Employee Employment Agreements -We can use POST Employee Employment Agreements operation with required parameters to create the required employee's employment agreement information. - -**POST Employee Employment Agreements** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- 
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199",
-  "isValidateOnly": "true",
-  "contextDateRangeFrom": "2017-01-01T13:24:56",
-  "fieldAndValue": {
-      "EffectiveStart": "2019-01-01T00:00:00",
-      "XRefCode": "dcc714ea-0e89-415c-aad1-242185951438",
-      "EmploymentAgreementType": {
-          "XRefCode": "BLCO",
-          "ShortName": "Blue collar",
-          "LongName": "Blue collar"
-      },
-      "EmploymentAgreementPopulation": {
-          "XRefCode": "DKFULI",
-          "ShortName": "Funktionaer-ligende (Salaried \"ligende\")",
-          "LongName": "Funktionaer-ligende (Salaried \"ligende\")"
-      },
-      "EmploymentAgreementDetails": {
-          "XRefCode": "DKFS",
-          "ShortName": "Vacation funktionaer with full salary",
-          "LongName": "Vacation funktionaer with full salary"
-      },
-      "EmploymentAgreementTaxRegime": {
-          "XRefCode": "DKBK",
-          "ShortName": "BIKORT",
-          "LongName": "BIKORT"
-      },
-      "EmploymentAgreementDuration": {
-          "XRefCode": "OE",
-          "ShortName": "Open ended",
-          "LongName": "Open ended"
-      },
-      "Country": {
-          "Name": "Denmark",
-          "XRefCode": "DNK",
-          "ShortName": "Denmark",
-          "LongName": "Denmark"
-      }
-  }
-}
-```
-
-**Sample response**
-
-This method returns an HTTP code 200 and no response body.
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/POST-Employee-Employment-Agreements.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/POST-Employee-Employment-Agreements.aspx)
-
-#### Updating Employee Employment Agreements
-We can use the PATCH Employee Employment Agreements operation with the required parameters to update the employment agreement information of existing employees.
-
-**PATCH Employee Employment Agreements**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:isValidateOnly}
-         {$ctx:fieldAndValue}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EffectiveStart": "2019-01-01T00:00:00", - "XRefCode": "dcc714ea-0e89-415c-aad1-242185951438", - "EmploymentAgreementType": { - "XRefCode": "BLCO", - "ShortName": "Blue collar", - "LongName": "Blue collar" - }, - "EmploymentAgreementPopulation": { - "XRefCode": "DKFULI", - "ShortName": "Funktionaer-ligende (Salaried \"ligende\")", - "LongName": "Funktionaer-ligende (Salaried \"ligende\")" - }, - "EmploymentAgreementDetails": { - "XRefCode": "DKFS", - "ShortName": "Vacation funktionaer with full salary", - "LongName": "Vacation funktionaer with full salary" - }, - "EmploymentAgreementTaxRegime": { - "XRefCode": "DKBK", - "ShortName": "BIKORT", - "LongName": "BIKORT" - }, - "EmploymentAgreementDuration": { - "XRefCode": "OE", - "ShortName": "Open ended", - "LongName": "Open ended" - }, - "Country": { - "Name": "Denmark", - "XRefCode": "DNK", - "ShortName": "Denmark", - "LongName": "Denmark" - } - } -} -``` - -**Sample response** - -This operation returns HTTP code 200 with no response body - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/GET-Employee-Employment-Agreements-(1).aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Agreements/GET-Employee-Employment-Agreements-(1).aspx) -(sic) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EffectiveStart": "2019-01-01T00:00:00", - "XRefCode": "dcc714ea-0e89-415c-aad1-242185951438", - "EmploymentAgreementType": { - "XRefCode": "BLCO", - "ShortName": "Blue collar", - "LongName": "Blue collar" - }, - "EmploymentAgreementPopulation": { - "XRefCode": "DKFULI", - "ShortName": "Funktionaer-ligende (Salaried \"ligende\")", - "LongName": "Funktionaer-ligende (Salaried \"ligende\")" - }, - "EmploymentAgreementDetails": { - "XRefCode": "DKFS", - "ShortName": "Vacation funktionaer with full salary", - "LongName": "Vacation funktionaer with full salary" - }, - "EmploymentAgreementTaxRegime": { - "XRefCode": "DKBK", - "ShortName": "BIKORT", - "LongName": "BIKORT" - }, - "EmploymentAgreementDuration": { - "XRefCode": "OE", - "ShortName": "Open ended", - "LongName": "Open ended" - }, - "Country": { - "Name": "Denmark", - "XRefCode": "DNK", - "ShortName": "Denmark", - "LongName": "Denmark" - } - } -} -``` -3.Replace the credentials with your values. 
- 
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentstatuses.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentstatuses.md
deleted file mode 100644
index 5086f89da6..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymentstatuses.md
+++ /dev/null
@@ -1,633 +0,0 @@
-# Working with Employee Employment Statuses
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve, create, or update the employment statuses of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Employment Statuses](#retrieving-employee-employment-statuses)| Retrieve the employment status information of an employee. |
-|[POST Employee Employment Statuses](#creating-employee-employment-statuses)| Create the employment status information of an employee. |
-|[PATCH Employee Employment Statuses](#updating-employee-employment-statuses)| Update the employment status information of an employee. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee Employment Statuses
-We can use the GET Employee Employment Statuses operation with the required parameters to retrieve the employment status of an employee.
-
-**GET Employee Employment Statuses**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:contextDate}
-         {$ctx:contextDateRangeFrom}
-         {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199",
-  "contextDateRangeFrom": "2017-01-01T13:24:56"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "EmployeeNumber": "42199", - "EffectiveStart": "2014-02-04T00:00:00", - "EmploymentStatus": { - "IsBenefitArrearsEnabled": false, - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "EmploymentStatusGroup": { - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "PayType": { - "XRefCode": "HourlyNon", - "ShortName": "Hourly(Non-Exempt)", - "LongName": "Hourly(Non-Exempt)" - }, - "PayGroup": { - "PayFrequency": { - "PayFrequencyType": "w", - "ShortName": "Weekly", - "LongName": "Weekly" - }, - "XRefCode": "USA", - "ShortName": "USA - Weekly", - "LongName": "USA - Weekly" - }, - "PayTypeGroup": { - "XRefCode": "Hourly", - "ShortName": "Hourly", - "LongName": "Hourly" - }, - "PayClass": { - "XRefCode": "FT", - "ShortName": "FT", - "LongName": "Full Time" - }, - "PunchPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "PayPolicy": { - "XRefCode": "MHourly", - "ShortName": "MHourly ", - "LongName": "MHourly " - }, - "PayHolidayGroup": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "EntitlementPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftRotation": { - "XRefCode": "Morning", - "ShortName": "Morning", - "LongName": "Morning" - }, - "ShiftRotationDayOffset": 0, - "ShiftRotationStartDate": "2007-12-31T00:00:00", - "CreateShiftRotationShift": true, - "TimeOffPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftTradePolicy": { - "XRefCode": "default", - "ShortName": "Corporate", - "LongName": "Corporate" - }, - "AttendancePolicy": { - "XRefCode": "DEFAULT", - "ShortName": "Default", - "LongName": "Default" - }, - "SchedulePolicy": { - "XRefCode": "Manufacturing", - "ShortName": "Manufacturing", - "LongName": "Manufacturing" - }, - "OvertimeGroup": { - "XRefCode": "OTG1", - "ShortName": "OT Group 1", - "LongName": "OT Group 1" - }, - "PayrollPolicy": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "AlternateRate": 3.00000, - "AverageDailyHours": 8.0000, - "BaseRate": 21.50000, - "BaseSalary": 44720.00000, - "NormalWeeklyHours": 40.0000, - "VacationRate": 10.00000 - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/GET-Employee-Employment-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/GET-Employee-Employment-Statuses.aspx) - -#### Creating Employee Employment Statuses -We can use POST Employee Employment Statuses operation with required parameters to create the required employee's employment Status. - -**POST Employee Employment Statuses** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- 
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199",
-  "isValidateOnly": "true",
-  "contextDateRangeFrom": "2017-01-01T13:24:56",
-  "fieldAndValue": {
-      "EmployeeNumber": "42199",
-      "EffectiveStart": "2014-02-04T00:00:00",
-      "EmploymentStatus": {
-          "IsBenefitArrearsEnabled": false,
-          "XRefCode": "INACTIVE",
-          "ShortName": "Inactive",
-          "LongName": "Inactive"
-      },
-      "EmploymentStatusGroup": {
-          "XRefCode": "INACTIVE",
-          "ShortName": "Inactive",
-          "LongName": "Inactive"
-      },
-      "PayType": {
-          "XRefCode": "HourlyNon",
-          "ShortName": "Hourly(Non-Exempt)",
-          "LongName": "Hourly(Non-Exempt)"
-      },
-      "PayGroup": {
-          "PayFrequency": {
-              "PayFrequencyType": "w",
-              "ShortName": "Weekly",
-              "LongName": "Weekly"
-          },
-          "XRefCode": "USA",
-          "ShortName": "USA - Weekly",
-          "LongName": "USA - Weekly"
-      },
-      "PayTypeGroup": {
-          "XRefCode": "Hourly",
-          "ShortName": "Hourly",
-          "LongName": "Hourly"
-      },
-      "PayClass": {
-          "XRefCode": "FT",
-          "ShortName": "FT",
-          "LongName": "Full Time"
-      },
-      "PunchPolicy": {
-          "XRefCode": "Default",
-          "ShortName": "Default",
-          "LongName": "Default"
-      },
-      "PayPolicy": {
-          "XRefCode": "MHourly",
-          "ShortName": "MHourly ",
-          "LongName": "MHourly "
-      },
-      "PayHolidayGroup": {
-          "XRefCode": "USA",
-          "ShortName": "USA",
-          "LongName": "USA"
-      },
-      "EntitlementPolicy": {
-          "XRefCode": "Default",
-          "ShortName": "Default",
-          "LongName": "Default"
-      },
-      "ShiftRotation": {
-          "XRefCode": "Morning",
-          "ShortName": "Morning",
-          "LongName": "Morning"
-      },
-      "ShiftRotationDayOffset": 0,
-      "ShiftRotationStartDate": "2007-12-31T00:00:00",
-      "CreateShiftRotationShift": true,
-      "TimeOffPolicy": {
-          "XRefCode": "Default",
-          "ShortName": "Default",
-          "LongName": "Default"
-      },
-      "ShiftTradePolicy": {
-          "XRefCode": "default",
-          "ShortName": "Corporate",
-          "LongName": "Corporate"
-      },
-      "AttendancePolicy": {
-          "XRefCode": "DEFAULT",
-          "ShortName": "Default",
-          "LongName": "Default"
-      },
-      "SchedulePolicy": {
-          "XRefCode": "Manufacturing",
-          "ShortName": "Manufacturing",
-          "LongName": "Manufacturing"
-      },
-      "OvertimeGroup": {
-          "XRefCode": "OTG1",
-          "ShortName": "OT Group 1",
-          "LongName": "OT Group 1"
-      },
-      "PayrollPolicy": {
-          "XRefCode": "USA",
-          "ShortName": "USA",
-          "LongName": "USA"
-      },
-      "AlternateRate": 3.00000,
-      "AverageDailyHours": 8.0000,
-      "BaseRate": 21.50000,
-      "BaseSalary": 44720.00000,
-      "NormalWeeklyHours": 40.0000,
-      "VacationRate": 10.00000
-  }
-}
-```
-
-**Sample response**
-
-This method returns an HTTP code 200 and no response body.
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/POST-Employee-Employment-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/POST-Employee-Employment-Statuses.aspx)
-
-#### Updating Employee Employment Statuses
-We can use the PATCH Employee Employment Statuses operation with the required parameters to update the employment status of existing employees.
-
-**PATCH Employee Employment Statuses**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:isValidateOnly}
-         {$ctx:fieldAndValue}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EmployeeNumber": "42199", - "EffectiveStart": "2014-02-04T00:00:00", - "EmploymentStatus": { - "IsBenefitArrearsEnabled": false, - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "EmploymentStatusGroup": { - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "PayType": { - "XRefCode": "HourlyNon", - "ShortName": "Hourly(Non-Exempt)", - "LongName": "Hourly(Non-Exempt)" - }, - "PayGroup": { - "PayFrequency": { - "PayFrequencyType": "w", - "ShortName": "Weekly", - "LongName": "Weekly" - }, - "XRefCode": "USA", - "ShortName": "USA - Weekly", - "LongName": "USA - Weekly" - }, - "PayTypeGroup": { - "XRefCode": "Hourly", - "ShortName": "Hourly", - "LongName": "Hourly" - }, - "PayClass": { - "XRefCode": "FT", - "ShortName": "FT", - "LongName": "Full Time" - }, - "PunchPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "PayPolicy": { - "XRefCode": "MHourly", - "ShortName": "MHourly ", - "LongName": "MHourly " - }, - "PayHolidayGroup": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "EntitlementPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftRotation": { - "XRefCode": "Morning", - "ShortName": "Morning", - "LongName": "Morning" - }, - "ShiftRotationDayOffset": 0, - "ShiftRotationStartDate": "2007-12-31T00:00:00", - "CreateShiftRotationShift": true, - "TimeOffPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftTradePolicy": { - "XRefCode": "default", - "ShortName": "Corporate", - "LongName": "Corporate" - }, - "AttendancePolicy": { - "XRefCode": "DEFAULT", - "ShortName": "Default", - "LongName": "Default" - }, - "SchedulePolicy": { - "XRefCode": "Manufacturing", - "ShortName": "Manufacturing", - "LongName": "Manufacturing" - }, - "OvertimeGroup": { - "XRefCode": "OTG1", - "ShortName": "OT Group 1", - "LongName": "OT Group 1" - }, - "PayrollPolicy": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "AlternateRate": 3.00000, - "AverageDailyHours": 8.0000, - "BaseRate": 21.50000, - "BaseSalary": 44720.00000, - "NormalWeeklyHours": 40.0000, - "VacationRate": 10.00000 - } -} -``` - -**Sample response** - -This operation returns HTTP code 200 with no response body - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/PATCH-Employee-Employment-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Statuses/PATCH-Employee-Employment-Statuses.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeFrom} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EmployeeNumber": "42199", - "EffectiveStart": "2014-02-04T00:00:00", - "EmploymentStatus": { - "IsBenefitArrearsEnabled": false, - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "EmploymentStatusGroup": { - "XRefCode": "INACTIVE", - "ShortName": "Inactive", - "LongName": "Inactive" - }, - "PayType": { - "XRefCode": "HourlyNon", - "ShortName": "Hourly(Non-Exempt)", - "LongName": "Hourly(Non-Exempt)" - }, - "PayGroup": { - "PayFrequency": { - "PayFrequencyType": "w", - "ShortName": "Weekly", - "LongName": "Weekly" - }, - "XRefCode": "USA", - "ShortName": "USA - Weekly", - "LongName": "USA - Weekly" - }, - "PayTypeGroup": { - "XRefCode": "Hourly", - "ShortName": "Hourly", - "LongName": "Hourly" - }, - "PayClass": { - "XRefCode": "FT", - "ShortName": "FT", - "LongName": "Full Time" - }, - "PunchPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "PayPolicy": { - "XRefCode": "MHourly", - "ShortName": "MHourly ", - "LongName": "MHourly " - }, - "PayHolidayGroup": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "EntitlementPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftRotation": { - "XRefCode": "Morning", - "ShortName": "Morning", - "LongName": "Morning" - }, - "ShiftRotationDayOffset": 0, - "ShiftRotationStartDate": "2007-12-31T00:00:00", - "CreateShiftRotationShift": true, - "TimeOffPolicy": { - "XRefCode": "Default", - "ShortName": "Default", - "LongName": "Default" - }, - "ShiftTradePolicy": { - "XRefCode": "default", - "ShortName": "Corporate", - "LongName": "Corporate" - }, - "AttendancePolicy": { - "XRefCode": "DEFAULT", - "ShortName": "Default", - "LongName": "Default" - }, - "SchedulePolicy": { - "XRefCode": "Manufacturing", - "ShortName": "Manufacturing", - "LongName": "Manufacturing" - }, - "OvertimeGroup": { - "XRefCode": "OTG1", - "ShortName": "OT Group 1", - "LongName": "OT Group 1" - }, - "PayrollPolicy": { - "XRefCode": "USA", - "ShortName": "USA", - "LongName": "USA" - }, - "AlternateRate": 3.00000, - "AverageDailyHours": 8.0000, - "BaseRate": 21.50000, - "BaseSalary": 44720.00000, - "NormalWeeklyHours": 40.0000, - "VacationRate": 10.00000 - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymenttypes.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymenttypes.md deleted file mode 100644 index e62ab3ed36..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeemploymenttypes.md +++ /dev/null @@ -1,149 +0,0 @@ -# Working 
with Employee Employment Types - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve employment type of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Employment Types](#retrieving-employee-employment-types)| Retrieve employee employment types (i.e.: contractor, pensioner, etc.). | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Employment Types -We can use GET Employee Employment Types operation with required parameters to get the employment type of an employee. - -**GET Employee Employment Types** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2019-06-01T00:00:00", - "EmploymentType": { - "XRefCode": "Employee", - "ShortName": "Employee", - "LongName": "Employee" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Types/GET-Employee-Employment-Types.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Employment-Types/GET-Employee-Employment-Types.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
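-
-Before working through the steps, note that you can also pass the request payload inline instead of keeping it in the query.json file created in step 2 below. This is a minimal sketch, not part of the original steps: it assumes the proxy from step 1 is already deployed on a Micro Integrator listening on localhost:8280, and it reuses the same payload fields shown in query.json:
-
-```bash
-curl http://localhost:8280/services/query \
-  -H "Content-Type: application/json" \
-  -d '{"username": "DFWSTest", "password": "DFWSTest", "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", "apiVersion": "V1", "xRefCode": "42199", "contextDateRangeFrom": "2017-01-01T13:24:56"}'
-```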
- 
-1. Create a sample proxy as below:
-```xml
-
-
-
-
-
-
-
-
-
-
-
-         {$ctx:username}
-         {$ctx:password}
-         {$ctx:clientNamespace}
-         {$ctx:apiVersion}
-
-
-         {$ctx:xRefCode}
-         {$ctx:contextDateRangeFrom}
-
-
-
-
-
-
-
-```
-
-2. Create a JSON file named query.json and copy the configuration given below to it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199",
-  "contextDateRangeFrom": "2017-01-01T13:24:56"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP code 200 with the following response body:
-
-```json
-{
-    "Data": [
-        {
-            "EffectiveStart": "2019-06-01T00:00:00",
-            "EmploymentType": {
-                "XRefCode": "Employee",
-                "ShortName": "Employee",
-                "LongName": "Employee"
-            }
-        }
-    ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehighlycompensatedemployees.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehighlycompensatedemployees.md
deleted file mode 100644
index 3ccb4a7340..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehighlycompensatedemployees.md
+++ /dev/null
@@ -1,137 +0,0 @@
-# Working with Employee Highly Compensated Employees
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve the high compensation status of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Highly Compensated Employees](#retrieving-employee-high-compensation)| Retrieve highly compensated employee indicators. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee High Compensation
-We can use the GET Employee Highly Compensated Employees operation with the required parameters to find out whether an employee is highly compensated.
-
-**GET Employee Highly Compensated Employees**
-```xml
-
-         {$ctx:xRefCode}
-         {$ctx:contextDate}
-         {$ctx:contextDateRangeFrom}
-         {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value.
Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2019-07-01T00:00:00", - "IsHCE": true - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Highly-Compensated-Employees/GET-Employee-Highly-Compensated-Employees.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Highly-Compensated-Employees/GET-Employee-Highly-Compensated-Employees.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EffectiveStart": "2019-07-01T00:00:00", - "IsHCE": true - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehrincidents.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehrincidents.md deleted file mode 100644 index 1d0f7771db..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeehrincidents.md +++ /dev/null @@ -1,267 +0,0 @@ -# Working with Employee HR Incidents - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve hr incidents of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee HR Incidents](#retrieving-employee-hr-incidents)| Retrieve HR incidents attached to an employee. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee HR Incidents -We can use GET Employee HR Incidents operation with required parameters to search and find the hr incidents related to an employee. - -**GET Employee HR Incidents** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "OrgUnit": { - "XRefCode": "Plant1", - "ShortName": "Plant 1", - "LongName": "Plant 1" - }, - "HRIncidentState": "CLOSED", - "OpenDate": "2013-09-23T00:00:00", - "HRIncidentType": { - "XRefCode": "SafetyandHealthRecordable", - "ShortName": "Safety and Health – OSHA Recordable", - "LongName": "Safety and Health – OSHA Recordable" - }, - "ClosedDate": "2013-09-23T00:00:00", - "HRIncidentDate": "2013-09-01T00:00:00", - "HRIncidentBeganWork": "1900-01-01T09:00:00", - "HRIncidentEventTime": "1900-01-01T10:00:00", - "SafetyHealthType": { - "XRefCode": "Injury", - "ShortName": "Injury", - "LongName": "Injury" - }, - "HRIncidentInjury": { - "XRefCode": "LimbF", - "ShortName": "Limb - Fracture", - "LongName": "Limb - Fracture" - }, - "HRIncidentBodyPart": { - "XRefCode": "LimbsKnee", - "ShortName": "Limbs - Knee", - "LongName": "Limbs - Knee" - }, - "Died": false, - "HRIncidentArea": "Shop floor", - "TaskBeingPerformed": "moving packages", - "CausedObject": "Ladder", - "CausedAction": "fell off ladder.", - "PrivacyCase": false, - "DoctorName": "Dr. Jones", - "EmergencyRoom": true, - "HospitalOvernight": true, - "Hospital": "St Josephs", - "HospitalStreet": "44 Main Street", - "HospitalCity": "Jersey City", - "HospitalStateCode": "NJ", - "HospitalZip": "10017", - "DateReturnToWork": "2013-09-23T00:00:00", - "DaysLost": 15.00, - "IsDaysLost": true - }, - { - "OrgUnit": { - "XRefCode": "Corporate", - "ShortName": "prd-500b-2018--01-29", - "LongName": "XYZ Co..PRDemoGold - Jan 29th 2018 -53hf23\r12-22- update UK payrol/ppaca/onboad date\r12-18- Ran update payroll BSI script\rUpdate PPACA calanders" - }, - "HRIncidentState": "OPEN", - "OpenDate": "2019-10-01T00:00:00", - "HRIncidentAction": { - "ShortName": "Coaching" - }, - "HRIncidentType": { - "XRefCode": "Attendance and Punct", - "ShortName": "Attendance and Punctuality" - }, - "HRIncidentBeganWork": "1970-01-01T00:00:00", - "HRIncidentEventTime": "1970-01-01T00:00:00", - "Died": false, - "PrivacyCase": false, - "EmergencyRoom": false, - "HospitalOvernight": false - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/HR-Incidents/GET-Employee-HR-Incidents.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/HR-Incidents/GET-Employee-HR-Incidents.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. 
- -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "OrgUnit": { - "XRefCode": "Plant1", - "ShortName": "Plant 1", - "LongName": "Plant 1" - }, - "HRIncidentState": "CLOSED", - "OpenDate": "2013-09-23T00:00:00", - "HRIncidentType": { - "XRefCode": "SafetyandHealthRecordable", - "ShortName": "Safety and Health – OSHA Recordable", - "LongName": "Safety and Health – OSHA Recordable" - }, - "ClosedDate": "2013-09-23T00:00:00", - "HRIncidentDate": "2013-09-01T00:00:00", - "HRIncidentBeganWork": "1900-01-01T09:00:00", - "HRIncidentEventTime": "1900-01-01T10:00:00", - "SafetyHealthType": { - "XRefCode": "Injury", - "ShortName": "Injury", - "LongName": "Injury" - }, - "HRIncidentInjury": { - "XRefCode": "LimbF", - "ShortName": "Limb - Fracture", - "LongName": "Limb - Fracture" - }, - "HRIncidentBodyPart": { - "XRefCode": "LimbsKnee", - "ShortName": "Limbs - Knee", - "LongName": "Limbs - Knee" - }, - "Died": false, - "HRIncidentArea": "Shop floor", - "TaskBeingPerformed": "moving packages", - "CausedObject": "Ladder", - "CausedAction": "fell off ladder.", - "PrivacyCase": false, - "DoctorName": "Dr. Jones", - "EmergencyRoom": true, - "HospitalOvernight": true, - "Hospital": "St Josephs", - "HospitalStreet": "44 Main Street", - "HospitalCity": "Jersey City", - "HospitalStateCode": "NJ", - "HospitalZip": "10017", - "DateReturnToWork": "2013-09-23T00:00:00", - "DaysLost": 15.00, - "IsDaysLost": true - }, - { - "OrgUnit": { - "XRefCode": "Corporate", - "ShortName": "prd-500b-2018--01-29", - "LongName": "XYZ Co..PRDemoGold - Jan 29th 2018 -53hf23\r12-22- update UK payrol/ppaca/onboad date\r12-18- Ran update payroll BSI script\rUpdate PPACA calanders" - }, - "HRIncidentState": "OPEN", - "OpenDate": "2019-10-01T00:00:00", - "HRIncidentAction": { - "ShortName": "Coaching" - }, - "HRIncidentType": { - "XRefCode": "Attendance and Punct", - "ShortName": "Attendance and Punctuality" - }, - "HRIncidentBeganWork": "1970-01-01T00:00:00", - "HRIncidentEventTime": "1970-01-01T00:00:00", - "Died": false, - "PrivacyCase": false, - "EmergencyRoom": false, - "HospitalOvernight": false - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeelabordefaults.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeelabordefaults.md deleted file mode 100644 index 5b4d00cd7d..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeelabordefaults.md +++ /dev/null @@ -1,157 +0,0 @@ -# Working with Employee Labor Defaults - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve the default labour of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Labor Defaults](#retrieving-employee-labour-defaults)| Retrieve employee labor defaults. Labor defaults specify an employee default position, project, docket or other timesheet information. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Labor Defaults -We can use GET Employee Labor Defaults operation with required parameters to search and find the default labour of an employee. 
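For instance, a lookup bounded to a specific date range could be expressed inside a mediation sequence as in the minimal sketch below. The literal values stand in for the `{$ctx:...}` expressions used in the template that follows, and the operation name is assumed to follow the connector's `ceridiandayforce.<operationName>` convention:

```xml
<!-- A minimal sketch: fetch the labor defaults of employee 42199 that were
     effective at any point during 2017. Literal values are shown here for
     illustration; the template below reads them from the message context. -->
<ceridiandayforce.getEmployeeLaborDefaults>
    <xRefCode>42199</xRefCode>
    <contextDateRangeFrom>2017-01-01T00:00:00</contextDateRangeFrom>
    <contextDateRangeTo>2017-12-31T23:59:59</contextDateRangeTo>
</ceridiandayforce.getEmployeeLaborDefaults>
```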
**GET Employee Labor Defaults**
```xml
<ceridiandayforce.getEmployeeLaborDefaults>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeLaborDefaults>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
    "Data": [
        {
            "Position": {
                "XRefCode": "Assembly 2 Process Technician",
                "ShortName": "Assembly 2 Process Technician"
            },
            "EffectiveStart": "2019-10-01T00:00:00",
            "Location": {
                "XRefCode": "500Assembly 2",
                "ShortName": "Plant 1 - Assembly 2",
                "LongName": "Plant 1 - Assembly 2"
            }
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Labor-Defaults/GET-Employee-Labor-Defaults.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Labor-Defaults/GET-Employee-Labor-Defaults.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and query operation.

1.Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.contextDateRangeFrom)" name="contextDateRangeFrom"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeLaborDefaults>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
         </ceridiandayforce.getEmployeeLaborDefaults>
         <respond/>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <description/>
</proxy>
```

2.Create a json file named query.json and copy the configurations given below to it:

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```
3.Replace the credentials with your values.
4.Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```
5.Dayforce returns HTTP Code 200 with the following response body:

```json
{
    "Data": [
        {
            "Position": {
                "XRefCode": "Assembly 2 Process Technician",
                "ShortName": "Assembly 2 Process Technician"
            },
            "EffectiveStart": "2019-10-01T00:00:00",
            "Location": {
                "XRefCode": "500Assembly 2",
                "ShortName": "Plant 1 - Assembly 2",
                "LongName": "Plant 1 - Assembly 2"
            }
        }
    ]
}
```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeonboardingpolicies.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeonboardingpolicies.md
deleted file mode 100644
index 496db3378e..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeonboardingpolicies.md
+++ /dev/null
@@ -1,253 +0,0 @@
# Working with Employee Onboarding Policies

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update onboarding policies of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Onboarding Policies](#retrieving-employee-onboarding-policies)| Retrieve onboarding policies assigned to an employee. |
|[POST Employee Onboarding Policies](#creating-employee-onboarding-policies)| Assign onboarding policies to an employee. |
|[PATCH Employee Onboarding Policies](#updating-employee-onboarding-policies)| Update the onboarding policies assigned to an employee. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Onboarding Policies
We can use the GET Employee Onboarding Policies operation with required parameters to get the onboarding policies of an employee.

**GET Employee Onboarding Policies**
```xml
<ceridiandayforce.getEmployeeOnboardingPolicies>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeOnboardingPolicies>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "OnboardingPolicy": { - "XRefCode": "db3de97e-f173-442d-86db-0e8bcb259ed0", - "ShortName": "Default" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "EffectiveEnd": "2019-10-12T23:59:00", - "IsInternalHire": false - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/GET-Employee-Onboarding-Policies.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/GET-Employee-Onboarding-Policies.aspx) - -#### Creating Employee Onboarding Policies -We can use POST Employee Onboarding Policies operation with required parameters to assign onboarding policies to an employee. - -**POST Employee Onboarding Policies** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "OnboardingPolicy": { - "XRefCode": "db3de97e-f173-442d-86db-0e8bcb259ed0", - "ShortName": "Default" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "EffectiveEnd": "2019-10-12T23:59:00", - "IsInternalHire": false - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/POST-Employee-Onboarding-Policies.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/POST-Employee-Onboarding-Policies.aspx) - -#### Updating Employee Onboarding Policies -We can use PATCH Employee addresses operation with required parameters to search and find the required employees. - -**PATCH Employee Onboarding Policies** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "OnboardingPolicy": { - "XRefCode": "db3de97e-f173-442d-86db-0e8bcb259ed0", - "ShortName": "Default" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "EffectiveEnd": "2019-10-12T23:59:00", - "IsInternalHire": false - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/PATCH-Onboarding-Policies.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Onboarding-Policies/PATCH-Onboarding-Policies.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeFrom} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "OnboardingPolicy": { - "XRefCode": "db3de97e-f173-442d-86db-0e8bcb259ed0", - "ShortName": "Default" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "EffectiveEnd": "2019-10-12T23:59:00", - "IsInternalHire": false - } - ] -} -``` \ No newline at end of file diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeorginfo.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeorginfo.md deleted file mode 100644 index fbb18e808b..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeorginfo.md +++ /dev/null @@ -1,351 +0,0 @@ -# Working with Employee Org Info - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve organizational information of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Org Info](#retrieving-employee-org-info)| Retrieve the organizational hierarchy attached to an employee. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Org Info -We can use GET Employee Org Info operation with required parameters to search and find the organizational info of a required employees. - -**GET Employee Org Info** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. 
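In a mediation sequence, the reference code is typically lifted out of the incoming JSON payload and handed to the operation, roughly as in the sketch below. The `json-eval` pattern mirrors the sample proxies elsewhere in this documentation, and the operation name is assumed from the connector's `ceridiandayforce.<operationName>` convention:

```xml
<!-- Sketch: read the employee reference code from the request payload and
     invoke the org info lookup with it. -->
<property expression="json-eval($.xRefCode)" name="xRefCode"/>
<ceridiandayforce.getEmployeeOrgInfo>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
</ceridiandayforce.getEmployeeOrgInfo>
```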
- -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "Address": { - "Address1": "11920 Amberpark Drive", - "City": "Alpharetta", - "PostalCode": "30009", - "Country": { - "XRefCode": "USA" - }, - "State": { - "XRefCode": "GA" - } - }, - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 1, - "OrgLevel": { - "XRefCode": "Corp", - "ShortName": "Corporate", - "LongName": "Corporate Level" - }, - "XRefCode": "Corporate", - "ShortName": "prd-500b-2018--01-29", - "LongName": "XYZ Co..PRDemoGold - Jan 29th 2018 -53hf23\r12-22- update UK payrol/ppaca/onboad date\r12-18- Ran update payroll BSI script\rUpdate PPACA calanders" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 146, - "OrgLevel": { - "XRefCode": "Region", - "ShortName": "Region", - "LongName": "Region Level" - }, - "XRefCode": "RefCode_33", - "ShortName": "Manufacturing Co. USA", - "LongName": "Manufacturing Co. USA" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 147, - "OrgLevel": { - "XRefCode": "District", - "ShortName": "District", - "LongName": "District" - }, - "XRefCode": "RefCode_34", - "ShortName": "District 01", - "LongName": "District 01" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "Address": { - "Address1": "20 Wilkinson Avenue", - "City": "Jersey City", - "PostalCode": "07305", - "Country": { - "XRefCode": "USA" - }, - "State": { - "XRefCode": "NJ" - } - }, - "ChildSortOrder": 159, - "IsPhysicalLocation": true, - "IsPrimary": true, - "LedgerCode": "500", - "ParentSortOrder": 148, - "OrgLevel": { - "XRefCode": "Site", - "ShortName": "Site", - "LongName": "Site" - }, - "XRefCode": "Plant1", - "ShortName": "Plant 1", - "LongName": "Plant 1" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "LedgerCode": "500", - "ParentSortOrder": 159, - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "Department": { - "XRefCode": "11", - "ShortName": "Packaging", - "LongName": "Packaging" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Org-Unit-Info/GET-Employee-Org-Info.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Org-Unit-Info/GET-Employee-Org-Info.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "Address": { - "Address1": "11920 Amberpark Drive", - "City": "Alpharetta", - "PostalCode": "30009", - "Country": { - "XRefCode": "USA" - }, - "State": { - "XRefCode": "GA" - } - }, - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 1, - "OrgLevel": { - "XRefCode": "Corp", - "ShortName": "Corporate", - "LongName": "Corporate Level" - }, - "XRefCode": "Corporate", - "ShortName": "prd-500b-2018--01-29", - "LongName": "XYZ Co..PRDemoGold - Jan 29th 2018 -53hf23\r12-22- update UK payrol/ppaca/onboad date\r12-18- Ran update payroll BSI script\rUpdate PPACA calanders" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 146, - "OrgLevel": { - "XRefCode": "Region", - "ShortName": "Region", - "LongName": "Region Level" - }, - "XRefCode": "RefCode_33", - "ShortName": "Manufacturing Co. USA", - "LongName": "Manufacturing Co. 
USA" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "ParentSortOrder": 147, - "OrgLevel": { - "XRefCode": "District", - "ShortName": "District", - "LongName": "District" - }, - "XRefCode": "RefCode_34", - "ShortName": "District 01", - "LongName": "District 01" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "Address": { - "Address1": "20 Wilkinson Avenue", - "City": "Jersey City", - "PostalCode": "07305", - "Country": { - "XRefCode": "USA" - }, - "State": { - "XRefCode": "NJ" - } - }, - "ChildSortOrder": 159, - "IsPhysicalLocation": true, - "IsPrimary": true, - "LedgerCode": "500", - "ParentSortOrder": 148, - "OrgLevel": { - "XRefCode": "Site", - "ShortName": "Site", - "LongName": "Site" - }, - "XRefCode": "Plant1", - "ShortName": "Plant 1", - "LongName": "Plant 1" - } - }, - { - "OrgUnitDetail": { - "EffectiveStart": "2000-01-01T00:00:00", - "ChildSortOrder": 159, - "IsPhysicalLocation": false, - "IsPrimary": true, - "LedgerCode": "500", - "ParentSortOrder": 159, - "OrgLevel": { - "XRefCode": "OnSiteDepartment", - "ShortName": "Department", - "LongName": "Department" - }, - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "Department": { - "XRefCode": "11", - "ShortName": "Packaging", - "LongName": "Packaging" - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepayadjustmentcodegroups.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepayadjustmentcodegroups.md deleted file mode 100644 index c8ca32d460..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepayadjustmentcodegroups.md +++ /dev/null @@ -1,137 +0,0 @@ -# Working with Employee Pay Adjustment Code Groups - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update addresses of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Pay Adjustment Code Groups](#retrieving-employee-pay-adjustment-code-groups)| retrieve employee pay adjustment groups that control which pay codes can be used in an employee's timesheet. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Pay Adjustment Code Groups -We can use GET Employee Pay Adjustment Code Groups operation with required parameters to get the pay adjustment group of an employee. - -**GET Employee Pay Adjustment Code Groups** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. 
- -```json -{ - "Data": [ - { - "PayAdjCodeGroup": { - "XRefCode": "Timesheet", - "ShortName": "Timesheet", - "LongName": "Timesheet" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Pay-Adjustment-Groups/GET-Employee-Pay-Adjustment-Code-Groups.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Pay-Adjustment-Groups/GET-Employee-Pay-Adjustment-Code-Groups.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "PayAdjCodeGroup": { - "XRefCode": "Timesheet", - "ShortName": "Timesheet", - "LongName": "Timesheet" - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepaygraderates.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepaygraderates.md deleted file mode 100644 index 7c1483f225..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeepaygraderates.md +++ /dev/null @@ -1,165 +0,0 @@ -# Working with Employee Pay Grade Rates - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve pay rates of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Pay Grade Rates](#retrieving-employee-pay-grade-rates)| Retrieve employee pay grade rates related to their position rate policies. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Pay Grade Rates -We can use GET Employee Pay Grade Rates operation with required parameters to find the pay rates of an employee. - -**GET Employee Pay Grade Rates** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. 
The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "WorkAssignmentEffectiveStart": "2000-01-01T00:00:00", - "PayGrade": { - "ShortName": "Associates", - "LongName": "Associates" - }, - "MinimumRate": 8.00000, - "ControlRate": 10.50000, - "MaximumRate": 13.00000, - "RateMidPoint": 10.50000, - "MinimumSalary": 16640.00000, - "ControlSalary": 21840.00000, - "MaximumSalary": 27040.00000, - "SalaryMidPoint": 21840.00000 - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Pay-Grade-Rates/GET-Employee-Pay-Grade-Rates.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Pay-Grade-Rates/GET-Employee-Pay-Grade-Rates.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDate} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. 
- -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "WorkAssignmentEffectiveStart": "2000-01-01T00:00:00", - "PayGrade": { - "ShortName": "Associates", - "LongName": "Associates" - }, - "MinimumRate": 8.00000, - "ControlRate": 10.50000, - "MaximumRate": 13.00000, - "RateMidPoint": 10.50000, - "MinimumSalary": 16640.00000, - "ControlSalary": 21840.00000, - "MaximumSalary": 27040.00000, - "SalaryMidPoint": 21840.00000 - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeperformanceratings.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeperformanceratings.md deleted file mode 100644 index 77803c0ac6..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeperformanceratings.md +++ /dev/null @@ -1,169 +0,0 @@ -# Working with Employee Performance Ratings - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update performance rating of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Performance Ratings](#retrieving-employee-performance-ratings)| Retrieve details on employee performance reviews. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Performance Ratings -We can use GET Employee Performance Ratings operation with required parameters to search and find the performance review of required employees. - -**GET Employee Performance Ratings** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "NextReviewDate": "2020-04-01T00:00:00", - "PerformanceCycle": { - "ShortName": "Annual cycle" - }, - "PerformanceRatingScale": { - "XRefCode": "PERFORMANCERATINGSCALE1TO5", - "ShortName": "Performance rating scale (Show Rating Name And Value)", - "LongName": "Rating scale used to evaluate the employee performance on a scale of 1 to 5 (Show Rating Name And Value)" - }, - "PerformanceRating": { - "XRefCode": "MEETSEXPECTATIONS", - "ShortName": "Meets Expectations", - "LongName": "Performance consistently met expectations in all essential areas of responsibility, at times possibly exceeding expectations, and the quality of work overall was very good." 
- }, - "RatingScore": 85.000, - "ReviewDate": "2019-04-01T00:00:00", - "Reviewer": { - "XRefCode": "67206" - }, - "ReviewPeriodStartDate": "2018-01-01T00:00:00", - "ReviewPeriodEndDate": "2018-12-31T00:00:00" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Performance-Ratings/GET-Employee-Performance-Ratings.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Performance-Ratings/GET-Employee-Performance-Ratings.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "NextReviewDate": "2020-04-01T00:00:00", - "PerformanceCycle": { - "ShortName": "Annual cycle" - }, - "PerformanceRatingScale": { - "XRefCode": "PERFORMANCERATINGSCALE1TO5", - "ShortName": "Performance rating scale (Show Rating Name And Value)", - "LongName": "Rating scale used to evaluate the employee performance on a scale of 1 to 5 (Show Rating Name And Value)" - }, - "PerformanceRating": { - "XRefCode": "MEETSEXPECTATIONS", - "ShortName": "Meets Expectations", - "LongName": "Performance consistently met expectations in all essential areas of responsibility, at times possibly exceeding expectations, and the quality of work overall was very good." - }, - "RatingScore": 85.000, - "ReviewDate": "2019-04-01T00:00:00", - "Reviewer": { - "XRefCode": "67206" - }, - "ReviewPeriodStartDate": "2018-01-01T00:00:00", - "ReviewPeriodEndDate": "2018-12-31T00:00:00" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeproperties.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeproperties.md deleted file mode 100644 index a81e29864e..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeproperties.md +++ /dev/null @@ -1,293 +0,0 @@ -# Working with Employee Properties - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update properties of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Properties](#retrieving-employee-properties)| Retrieve employee properties that represent custom defined information. | -|[POST Employee Properties](#creating-employee-properties)| Create employee properties that represent custom defined information. | -|[PATCH Employee Properties](#updating-employee-properties)| Update employee properties that represent custom defined information. 
| - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Properties -We can use GET Employee Properties operation with required parameters to search and find the required employees' properties. - -**GET Employee Properties** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 1, - "EmployeeCardinality": 0, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode1", - "ShortName": "Shoe Size", - "LongName": "Shoe Size" - }, - "NumberValue": 11.00000 - }, - { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 0, - "EmployeeCardinality": 1, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode2", - "ShortName": "Dietary Restrictions", - "LongName": "Dietary Restrictions" - }, - "OptionValue": { - "XRefCode": "LACTOSE INTOLERANT", - "ShortName": "Lactose Intolerant", - "LongName": "Lactose Intolerant" - } - }, - { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 0, - "EmployeeCardinality": 1, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode2", - "ShortName": "Dietary Restrictions", - "LongName": "Dietary Restrictions" - }, - "OptionValue": { - "XRefCode": "GLUTEN FREE", - "ShortName": "Gluten Free", - "LongName": "Gluten Free" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/GET-Employee-Properties.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/GET-Employee-Properties.aspx) - -#### Creating Employee Properties -We can use POST Employee Properties operation with required parameters to create properties for an employee. 
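Since the isValidateOnly parameter (described under Properties below) lets a POST request be checked without writing to the database, a dry-run call can be sketched as follows. The operation name is assumed from the connector's naming convention, and `fieldAndValue` carries the JSON property payload shown in the sample request further below:

```xml
<!-- Sketch: validate a new employee property without persisting it.
     Switch isValidateOnly to false to actually apply the change. -->
<ceridiandayforce.postEmployeeProperties>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>true</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.postEmployeeProperties>
```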
- -**POST Employee Properties** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 1, - "EmployeeCardinality": 0, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode1", - "ShortName": "Shoe Size", - "LongName": "Shoe Size" - }, - "NumberValue": 11 - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200. - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/POST-Employee-Properties.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/POST-Employee-Properties.aspx) - -#### Updating Employee Properties -We can use PATCH Employee Properties operation with required parameters to update employee properties. - -**PATCH Employee Properties** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 1, - "EmployeeCardinality": 0, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode1", - "ShortName": "Shoe Size", - "LongName": "Shoe Size" - }, - "NumberValue": 11 - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200. - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/PATCH-Employee-Properties.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Properties/PATCH-Employee-Properties.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "EffectiveStart": "2000-01-01T00:00:00", - "EmployeeProperty": { - "DataType": 1, - "EmployeeCardinality": 0, - "GenerateHREvent": false, - "XRefCode": "EmployeePropertyXrefCode1", - "ShortName": "Shoe Size", - "LongName": "Shoe Size" - }, - "NumberValue": 11 - } -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200. \ No newline at end of file diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeskills.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeskills.md deleted file mode 100644 index 2bd0c9daed..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeskills.md +++ /dev/null @@ -1,149 +0,0 @@ -# Working with Employee Skills - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve the skills of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Skills](#retrieving-employee-skills)| Retrieve skills attached to an employee. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Skills -We can use GET Employee Skills operation with required parameters to get the skills of an employee. - -**GET Employee Skills** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "Skill": { - "XRefCode": "8", - "ShortName": "Tech", - "LongName": "Tech" - }, - "SkillLevel": {}, - "EffectiveStart": "2019-09-01T00:00:00", - "LastAssignedBy": "Macon Burke" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Skills/GET-Employee-Skills.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Skills/GET-Employee-Skills.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following body - -```json -{ - "Data": [ - { - "Skill": { - "XRefCode": "8", - "ShortName": "Tech", - "LongName": "Tech" - }, - "SkillLevel": {}, - "EffectiveStart": "2019-09-01T00:00:00", - "LastAssignedBy": "Macon Burke" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeetrainingprograms.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeetrainingprograms.md deleted file mode 100644 index 7cbf381ea7..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeetrainingprograms.md +++ /dev/null @@ -1,135 +0,0 @@ -# Working with Employee Training Programs - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve training programs of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Training Programs](#retrieving-employee-training-programs)| Retrieve training programs attached to an employee. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Training Programs -We can use GET Employee Training Programs operation with required parameters to get the training programs related to an employee. - -**GET Employee Training Programs** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "TrainingProgram": { - "ShortName": "First Aid Training" - }, - "EnrollmentDate": "2011-12-12T00:00:00" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Training-Programs/GET-Employee-Training-Programs.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Training-Programs/GET-Employee-Training-Programs.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "TrainingProgram": { - "ShortName": "First Aid Training" - }, - "EnrollmentDate": "2011-12-12T00:00:00" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeunionmemberships.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeunionmemberships.md deleted file mode 100644 index 2a0c5b8577..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeunionmemberships.md +++ /dev/null @@ -1,150 +0,0 @@ -# Working with Employee Union Memberships - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve union memberships of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Union Memberships](#)| Retrieve employee union membership information. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Addresses -We can use GET Employee Union Memberships operation with required parameters to find the union memberships of the required employees. - -**GET Employee Union Memberships** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. 
Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeTo": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "UnionMembershipDate": "2000-01-01T00:00:00", - "EffectiveStart": "2000-01-01T00:00:00", - "Union": { - "XRefCode": "Local 306", - "ShortName": "Local 306", - "LongName": "Local 306" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Union-Memberships/GET-Employee-Union-Memberships.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Union-Memberships/GET-Employee-Union-Memberships.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeTo} - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeTo": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. 
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkassignments.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkassignments.md
deleted file mode 100644
index d5abd612cf..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkassignments.md
+++ /dev/null
@@ -1,653 +0,0 @@
# Working with Employee Work Assignments

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update the work assignments of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Work Assignments](#retrieving-employee-work-assignments)| Retrieve employee work assignments. |
|[POST Employee Work Assignments](#creating-employee-work-assignments)| Create employee work assignments. |
|[PATCH Employee Work Assignments](#updating-employee-work-assignments)| Update employee work assignments. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Work Assignments
We can use the GET Employee Work Assignments operation with the required parameters to find the work assignments of an employee.

**GET Employee Work Assignments**
```xml
<ceridiandayforce.getEmployeeWorkAssignments>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeWorkAssignments>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "contextDate": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "Position": { - "Department": { - "XRefCode": "11", - "ShortName": "Packaging", - "LongName": "Packaging" - }, - "Job": { - "EmployeeEEO": { - "XRefCode": "8", - "ShortName": "8 - Laborers and Helpers", - "LongName": "8 - Laborers and Helpers" - }, - "IsUnionJob": false, - "JobRank": 13, - "JobClassification": { - "XRefCode": "ProductionStaff", - "ShortName": "Production Staff", - "LongName": "Production Staff" - }, - "FLSAStatus": { - "XRefCode": "NON-EXEMPT", - "ShortName": "Non-exempt" - }, - "XRefCode": "13", - "ShortName": "Packager" - }, - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "Location": { - "ClockTransferCode": "540", - "LegalEntity": { - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "LegalEntityAddress": { - "Address1": "1 Wilkinson Street", - "City": "Jersey City", - "PostalCode": "10017", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "State": { - "Name": "New Jersey", - "XRefCode": "NJ", - "ShortName": "New Jersey" - } - }, - "LegalIdNumber": "654565981", - "XRefCode": "Manufacturing Co. USA ", - "ShortName": " Manufacturing Co. USA ", - "LongName": "Manufacturing Co. USA " - }, - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": true, - "IsStatutory": false, - "IsVirtual": false, - "BusinessTitle": "Senior Package Handler", - "LastModifiedTimeStamp": "2016-04-26T09:53:27.947", - "Rank": 15 - }, - { - "Position": { - "Department": { - "XRefCode": "9", - "ShortName": "Assembly 2", - "LongName": "Assembly 2" - }, - "Job": { - "EmployeeEEO": { - "XRefCode": "3", - "ShortName": "3 - Technicians", - "LongName": "3 - Technicians" - }, - "IsUnionJob": false, - "JobRank": 12, - "JobClassification": { - "XRefCode": "ManagerialAndProfessionalStaff", - "ShortName": "Managerial, Professional and Engineering Staff", - "LongName": "Managerial, Professional and Engineering Staff" - }, - "FLSAStatus": { - "XRefCode": "NON-EXEMPT", - "ShortName": "Non-exempt" - }, - "XRefCode": "12", - "ShortName": "Process Technician" - }, - "XRefCode": "Assembly 2 Process Technician", - "ShortName": "Assembly 2 Process Technician" - }, - "Location": { - "ClockTransferCode": "510", - "LegalEntity": { - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "LegalEntityAddress": { - "Address1": "1 Wilkinson Street", - "City": "Jersey City", - "PostalCode": "10017", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "State": { - "Name": "New Jersey", - "XRefCode": "NJ", - "ShortName": "New Jersey" - } - }, - "LegalIdNumber": "654565981", - "XRefCode": "Manufacturing Co. USA ", - "ShortName": " Manufacturing Co. USA ", - "LongName": "Manufacturing Co. 
USA " - }, - "XRefCode": "500Assembly 2", - "ShortName": "Plant 1 - Assembly 2", - "LongName": "Plant 1 - Assembly 2" - }, - "EffectiveStart": "2011-12-12T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": false, - "IsStatutory": false, - "IsVirtual": false, - "LastModifiedTimeStamp": "2012-06-18T14:13:31.55", - "JobRate": 22.50000 - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/GET-Employee-Work-Assignments.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/GET-Employee-Work-Assignments.aspx) - -#### Creating Employee Work Assignments -We can use POST Employee Work Assignments operation with required parameters to create work assignments for employees - -**POST Employee Work Assignments** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "100421", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "Position": { - "Department": { - "XRefCode": "9", - "ShortName": "Packaging", - "LongName": "Packaging" - }, - "Job": { - "EmployeeEEO": { - "XRefCode": "8", - "ShortName": "8 - Laborers and Helpers", - "LongName": "8 - Laborers and Helpers" - }, - "IsUnionJob": false, - "JobRank": 13, - "JobClassification": { - "XRefCode": "ProductionStaff", - "ShortName": "Production Staff", - "LongName": "Production Staff" - }, - "FLSAStatus": { - "XRefCode": "NON-EXEMPT", - "ShortName": "Non-exempt" - }, - "XRefCode": "13", - "ShortName": "Packager" - }, - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "Location": { - "ClockTransferCode": "540", - "LegalEntity": { - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "LegalEntityAddress": { - "Address1": "1 Wilkinson Street", - "City": "Jersey City", - "PostalCode": "10017", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "State": { - "Name": "New Jersey", - "XRefCode": "NJ", - "ShortName": "New Jersey" - } - }, - "LegalIdNumber": "654565981", - "XRefCode": "Manufacturing Co. USA ", - "ShortName": " Manufacturing Co. USA ", - "LongName": "Manufacturing Co. 
USA " - }, - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": true, - "IsStatutory": false, - "IsVirtual": false, - "BusinessTitle": "Senior Package Handler", - "LastModifiedTimeStamp": "2016-04-26T09:53:27.947", - "Rank": 15 - } -} -``` - -**Sample response** - -Dayforce returns 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/POST-Employee-Work-Assignments.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/POST-Employee-Work-Assignments.aspx) - -#### Updating Employee Work Assignments -We can use PATCH Employee Work Assignments operation with required parameters to update the work assignments of an employee. - -**PATCH Employee Work Assignments** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "100421", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "Position": { - "Department": { - "XRefCode": "9", - "ShortName": "Packaging", - "LongName": "Packaging" - }, - "Job": { - "EmployeeEEO": { - "XRefCode": "8", - "ShortName": "8 - Laborers and Helpers", - "LongName": "8 - Laborers and Helpers" - }, - "IsUnionJob": false, - "JobRank": 13, - "JobClassification": { - "XRefCode": "ProductionStaff", - "ShortName": "Production Staff", - "LongName": "Production Staff" - }, - "FLSAStatus": { - "XRefCode": "NON-EXEMPT", - "ShortName": "Non-exempt" - }, - "XRefCode": "13", - "ShortName": "Packager" - }, - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "Location": { - "ClockTransferCode": "540", - "LegalEntity": { - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "LegalEntityAddress": { - "Address1": "1 Wilkinson Street", - "City": "Jersey City", - "PostalCode": "10017", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "State": { - "Name": "New Jersey", - "XRefCode": "NJ", - "ShortName": "New Jersey" - } - }, - "LegalIdNumber": "654565981", - "XRefCode": "Manufacturing Co. USA ", - "ShortName": " Manufacturing Co. USA ", - "LongName": "Manufacturing Co. 
USA " - }, - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": true, - "IsStatutory": false, - "IsVirtual": false, - "BusinessTitle": "Senior Package Handler", - "LastModifiedTimeStamp": "2016-04-26T09:53:27.947", - "Rank": 15 - } -} -``` - -**Sample response** - -Dayforce returns 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/PATCH-Employee-Work-Assignments.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Assignments/PATCH-Employee-Work-Assignments.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDate} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "Position": { - "Department": { - "XRefCode": "11", - "ShortName": "Packaging", - "LongName": "Packaging" - }, - "Job": { - "EmployeeEEO": { - "XRefCode": "8", - "ShortName": "8 - Laborers and Helpers", - "LongName": "8 - Laborers and Helpers" - }, - "IsUnionJob": false, - "JobRank": 13, - "JobClassification": { - "XRefCode": "ProductionStaff", - "ShortName": "Production Staff", - "LongName": "Production Staff" - }, - "FLSAStatus": { - "XRefCode": "NON-EXEMPT", - "ShortName": "Non-exempt" - }, - "XRefCode": "13", - "ShortName": "Packager" - }, - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "Location": { - "ClockTransferCode": "540", - "LegalEntity": { - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "LegalEntityAddress": { - "Address1": "1 Wilkinson Street", - "City": "Jersey City", - "PostalCode": "10017", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "State": { - "Name": "New Jersey", - "XRefCode": "NJ", - "ShortName": "New Jersey" - } - }, - "LegalIdNumber": "654565981", - "XRefCode": "Manufacturing Co. USA ", - "ShortName": " Manufacturing Co. USA ", - "LongName": "Manufacturing Co. 
USA "
                    },
                    "XRefCode": "500Packaging",
                    "ShortName": "Plant 1 - Packaging",
                    "LongName": "Plant 1 - Packaging"
                },
                "EffectiveStart": "2000-01-01T00:00:00",
                "IsPAPrimaryWorkSite": false,
                "IsPrimary": true,
                "IsStatutory": false,
                "IsVirtual": false,
                "BusinessTitle": "Senior Package Handler",
                "LastModifiedTimeStamp": "2016-04-26T09:53:27.947",
                "Rank": 15
            },
            {
                "Position": {
                    "Department": {
                        "XRefCode": "9",
                        "ShortName": "Assembly 2",
                        "LongName": "Assembly 2"
                    },
                    "Job": {
                        "EmployeeEEO": {
                            "XRefCode": "3",
                            "ShortName": "3 - Technicians",
                            "LongName": "3 - Technicians"
                        },
                        "IsUnionJob": false,
                        "JobRank": 12,
                        "JobClassification": {
                            "XRefCode": "ManagerialAndProfessionalStaff",
                            "ShortName": "Managerial, Professional and Engineering Staff",
                            "LongName": "Managerial, Professional and Engineering Staff"
                        },
                        "FLSAStatus": {
                            "XRefCode": "NON-EXEMPT",
                            "ShortName": "Non-exempt"
                        },
                        "XRefCode": "12",
                        "ShortName": "Process Technician"
                    },
                    "XRefCode": "Assembly 2 Process Technician",
                    "ShortName": "Assembly 2 Process Technician"
                },
                "Location": {
                    "ClockTransferCode": "510",
                    "LegalEntity": {
                        "Country": {
                            "Name": "United States of America",
                            "XRefCode": "USA",
                            "ShortName": "United States of America",
                            "LongName": "United States of America"
                        },
                        "LegalEntityAddress": {
                            "Address1": "1 Wilkinson Street",
                            "City": "Jersey City",
                            "PostalCode": "10017",
                            "Country": {
                                "Name": "United States of America",
                                "XRefCode": "USA",
                                "ShortName": "United States of America",
                                "LongName": "United States of America"
                            },
                            "State": {
                                "Name": "New Jersey",
                                "XRefCode": "NJ",
                                "ShortName": "New Jersey"
                            }
                        },
                        "LegalIdNumber": "654565981",
                        "XRefCode": "Manufacturing Co. USA ",
                        "ShortName": " Manufacturing Co. USA ",
                        "LongName": "Manufacturing Co. USA "
                    },
                    "XRefCode": "500Assembly 2",
                    "ShortName": "Plant 1 - Assembly 2",
                    "LongName": "Plant 1 - Assembly 2"
                },
                "EffectiveStart": "2011-12-12T00:00:00",
                "IsPAPrimaryWorkSite": false,
                "IsPrimary": false,
                "IsStatutory": false,
                "IsVirtual": false,
                "LastModifiedTimeStamp": "2012-06-18T14:13:31.55",
                "JobRate": 22.50000
            }
    ]
}
```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkcontracts.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkcontracts.md
deleted file mode 100644
index 0da786bd72..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-employment-information/employeeworkcontracts.md
+++ /dev/null
@@ -1,248 +0,0 @@
# Working with Employee Work Contracts

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update the work contracts of employees.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Work Contracts](#retrieving-employee-work-contracts)| Retrieve work contracts, used in the UK to represent the employee's contracted work duration. |
|[POST Employee Work Contracts](#creating-employee-work-contracts)| Create work contracts, used in the UK to represent the employee's contracted work duration. |
|[PATCH Employee Work Contracts](#updating-employee-work-contracts)| Update work contracts, used in the UK to represent the employee's contracted work duration. |

### Operation details

This section provides more details on each of the operations.
#### Retrieving Employee Work Contracts
We can use the GET Employee Work Contracts operation with the required parameters to get the work contracts of an employee.

**GET Employee Work Contracts**
```xml
<ceridiandayforce.getEmployeeWorkContracts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeWorkContracts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
    "Data": [
        {
            "BaseComplementaryHours": 0.00000,
            "CreateShiftOnHolidays": false,
            "StartDate": "2019-09-01T00:00:00",
            "WorkContract": {
                "XRefCode": "FT Monthly 100%",
                "ShortName": "FT Monthly 100%"
            }
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/GET-Employee-Work-Contracts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/GET-Employee-Work-Contracts.aspx)

#### Creating Employee Work Contracts
We can use the POST Employee Work Contracts operation with the required parameters to create work contracts for employees.

**POST Employee Work Contracts**
```xml
<ceridiandayforce.postEmployeeWorkContracts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.postEmployeeWorkContracts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.
```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "BaseComplementaryHours": 0,
        "CreateShiftOnHolidays": false,
        "StartDate": "2019-09-01T00:00:00",
        "WorkContract": {
            "XRefCode": "FT Monthly 100%",
            "ShortName": "FT Monthly 100%"
        }
    }
}
```

**Sample response**

Dayforce returns HTTP code 200.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/POST-Employee-Work-Contracts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/POST-Employee-Work-Contracts.aspx)

#### Updating Employee Work Contracts
We can use the PATCH Employee Work Contracts operation with the required parameters to update the work contracts of employees.

**PATCH Employee Work Contracts**
```xml
<ceridiandayforce.patchEmployeeWorkContracts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.patchEmployeeWorkContracts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "BaseComplementaryHours": 0,
        "CreateShiftOnHolidays": false,
        "StartDate": "2019-09-01T00:00:00",
        "WorkContract": {
            "XRefCode": "FT Monthly 100%",
            "ShortName": "FT Monthly 100%"
        }
    }
}
```

**Sample response**

Dayforce returns HTTP code 200.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/PATCH-Employee-Work-Contracts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Employment-Information/Work-Contracts/PATCH-Employee-Work-Contracts.aspx)
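Because isValidateOnly only checks the payload without persisting it, a common pattern is a dry run followed by a commit. A minimal sketch, reusing the query proxy and query.json from the sample configuration below (the sed rewrite assumes the exact `"isValidateOnly": "true"` spelling used in these samples):

```bash
# Dry run: with "isValidateOnly": "true", Dayforce validates the payload only.
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json

# Commit: flip the flag and resend so the change is actually applied.
sed 's/"isValidateOnly": "true"/"isValidateOnly": "FALSE"/' query.json > commit.json
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @commit.json
```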
### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the postEmployeeWorkContracts operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.isValidateOnly)" name="isValidateOnly"/>
         <property expression="json-eval($.fieldAndValue)" name="fieldAndValue"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.postEmployeeWorkContracts>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
            <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
         </ceridiandayforce.postEmployeeWorkContracts>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "BaseComplementaryHours": 0,
        "CreateShiftOnHolidays": false,
        "StartDate": "2019-09-01T00:00:00",
        "WorkContract": {
            "XRefCode": "FT Monthly 100%",
            "ShortName": "FT Monthly 100%"
        }
    }
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeaddresses.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeaddresses.md
deleted file mode 100644
index a0f19edc3a..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeaddresses.md
+++ /dev/null
@@ -1,329 +0,0 @@
# Working with Employee Addresses

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update the addresses of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Addresses](#retrieving-employee-addresses)| Retrieve addresses of an employee. |
|[POST Employee Addresses](#create-employee-address)| Create addresses of an employee. |
|[PATCH Employee Addresses](#update-employee-address)| Update addresses of an employee. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Addresses
We can use the GET Employee Addresses operation with the required parameters to search for and find a given employee's address.

**GET Employee Addresses**
```xml
<ceridiandayforce.getEmployeeAddresses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeAddresses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199"
}
```

**Sample response**

Given below is a sample response for this operation.
```json
{
    "Data": [
        {
            "Address1": "4114 Yonge St.",
            "City": "North York",
            "PostalCode": "M2P 2B7",
            "Country": {
                "Name": "Canada",
                "XRefCode": "CAN",
                "ShortName": "Canada",
                "LongName": "Canada"
            },
            "State": {
                "Name": "Ontario",
                "XRefCode": "ON",
                "ShortName": "Ontario",
                "LongName": "Ontario"
            },
            "EffectiveStart": "2017-01-15T00:00:00",
            "ContactInformationType": {
                "ContactInformationTypeGroup": {
                    "XRefCode": "Address",
                    "ShortName": "Address",
                    "LongName": "Address"
                },
                "XRefCode": "PrimaryResidence",
                "ShortName": "Primary Residence",
                "LongName": "Primary Residence"
            },
            "IsPayrollMailing": false
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/GET-Employee-Addresses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/GET-Employee-Addresses.aspx)

#### Create Employee Address

We can use the POST Employee Addresses operation with the required parameters to create an address for an employee in Dayforce.

**POST Employee Addresses**
```xml
<ceridiandayforce.postEmployeeAddresses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
</ceridiandayforce.postEmployeeAddresses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee for whom the subordinate data will be updated. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by the POST Employee Addresses operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "fieldAndValue": {
        "Address1": "4114 Yonge St.",
        "City": "North York",
        "PostalCode": "M2P 2B7",
        "Country": {
            "Name": "Canada",
            "XRefCode": "CAN",
            "ShortName": "Canada",
            "LongName": "Canada"
        },
        "State": {
            "Name": "Ontario",
            "XRefCode": "ON",
            "ShortName": "Ontario",
            "LongName": "Ontario"
        },
        "EffectiveStart": "2017-01-15T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Address",
                "ShortName": "Address",
                "LongName": "Address"
            },
            "XRefCode": "PrimaryResidence",
            "ShortName": "Primary Residence",
            "LongName": "Primary Residence"
        },
        "IsPayrollMailing": false
    }
}
```

**Sample response**

There is no response body for this method.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/POST-Employee-Addresses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/POST-Employee-Addresses.aspx)
#### Update Employee Address

We can use the PATCH Employee Addresses operation to update the address of an existing employee.

**PATCH Employee Addresses**
```xml
<ceridiandayforce.patchEmployeeAddresses>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
</ceridiandayforce.patchEmployeeAddresses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee to be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Optional): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "fieldAndValue": {
        "Address1": "4114 Yonge St.",
        "City": "North York",
        "PostalCode": "M2P 2B7",
        "Country": {
            "Name": "Canada",
            "XRefCode": "CAN",
            "ShortName": "Canada",
            "LongName": "Canada"
        },
        "State": {
            "Name": "Ontario",
            "XRefCode": "ON",
            "ShortName": "Ontario",
            "LongName": "Ontario"
        },
        "EffectiveStart": "2017-01-15T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Address",
                "ShortName": "Address",
                "LongName": "Address"
            },
            "XRefCode": "PrimaryResidence",
            "ShortName": "Primary Residence",
            "LongName": "Primary Residence"
        },
        "IsPayrollMailing": false
    }
}
```

**Sample response**

This operation has no response body.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/PATCH-Employee-Addresses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Addresses/PATCH-Employee-Addresses.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the postEmployeeAddresses operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.fieldAndValue)" name="fieldAndValue"/>
         <property expression="json-eval($.isValidateOnly)" name="isValidateOnly"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.postEmployeeAddresses>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
            <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
         </ceridiandayforce.postEmployeeAddresses>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "true",
    "fieldAndValue": {
        "Address1": "4114 Yonge St.",
        "City": "North York",
        "PostalCode": "M2P 2B7",
        "Country": {
            "Name": "Canada",
            "XRefCode": "CAN",
            "ShortName": "Canada",
            "LongName": "Canada"
        },
        "State": {
            "Name": "Ontario",
            "XRefCode": "ON",
            "ShortName": "Ontario",
            "LongName": "Ontario"
        },
        "EffectiveStart": "2017-01-15T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Address",
                "ShortName": "Address",
                "LongName": "Address"
            },
            "XRefCode": "PrimaryResidence",
            "ShortName": "Primary Residence",
            "LongName": "Primary Residence"
        },
        "IsPayrollMailing": false
    }
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200.
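To double-check that the POST actually landed (rather than being validated only), one option is a second proxy wired to the getEmployeeAddresses operation shown earlier in this page. The proxy name queryGet and the file name get.json below are hypothetical, not part of the connector:

```bash
# get.json carries only the credential fields plus xRefCode, exactly as in
# the retrieval sample earlier on this page (hypothetical file name).
curl http://localhost:8280/services/queryGet -H "Content-Type: application/json" -d @get.json
```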
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecantaxes.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecantaxes.md
deleted file mode 100644
index 1e687e0fd0..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecantaxes.md
+++ /dev/null
@@ -1,313 +0,0 @@
# Working with Canadian Employee Taxes

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve the tax details of a Canadian employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee CAN Federal Taxes](#retrieving-canadian-employee-federal-taxes)| Retrieve a Canadian employee's total federal claim amount, resident status, and authorized tax credits. |
|[GET Employee CAN State Taxes](#retrieving-canadian-employee-state-taxes)| Retrieve a Canadian employee's total provincial claim amount, prescribed deductions, and authorized tax credits. |
|[GET Employee CAN Tax Statuses](#retrieving-canadian-employee-tax-statuses)| Retrieve a Canadian employee's provincial tax filing status (e.g., single, married). |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Canadian Employee Federal Taxes
We can use the GET Employee CAN Federal Taxes operation with the required parameters to retrieve the federal taxes of a Canadian employee.

**GET Employee CAN Federal Taxes**
```xml
<ceridiandayforce.getEmployeeCANFederalTaxes>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeCANFederalTaxes>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199"
}
```

**Sample response**

Given below is a sample response for this operation.
```json
{
    "Data": [
        {
            "EffectiveStart": "2019-01-01T00:00:00",
            "TotalClaimAmount": 12269.00000,
            "IsNonResident": false,
            "MultipleEmployer": false,
            "IncomeLessThanClaim": false,
            "AuthorizedTaxCredits": 50.00000,
            "AdditionalAmount": 1000.00000
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-Federal-Taxes/GET-Employee-CAN-Federal-Taxes.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-Federal-Taxes/GET-Employee-CAN-Federal-Taxes.aspx)

#### Retrieving Canadian Employee State Taxes
We can use the GET Employee CAN State Taxes operation with the required parameters to retrieve the state taxes of a Canadian employee.

**GET Employee CAN State Taxes**
```xml
<ceridiandayforce.getEmployeeCANStateTaxes>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeCANStateTaxes>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
    "Data": [
        {
            "EffectiveStart": "2019-01-01T00:00:00",
            "State": {
                "Name": "Ontario",
                "XRefCode": "ON",
                "ShortName": "Ontario",
                "LongName": "Ontario"
            },
            "TotalClaimAmount": 1000.00000,
            "AuthorizedTaxCredits": 50.00000,
            "HasQuebecHealthContributionExemption": false,
            "IncomeLessThanClaim": false
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-State-Taxes/GET-Employee-CAN-State-Taxes.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-State-Taxes/GET-Employee-CAN-State-Taxes.aspx)

#### Retrieving Canadian Employee Tax Statuses
We can use the GET Employee CAN Tax Statuses operation with the required parameters to retrieve the tax filing statuses of Canadian employees.
**GET Employee CAN Tax Statuses**
```xml
<ceridiandayforce.getEmployeeCANTaxStatuses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeCANTaxStatuses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "100421"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
    "Data": [
        {
            "ProvinceCode": "Federal",
            "EffectiveStart": "2019-01-01T00:00:00",
            "TaxPropertyCollection": {
                "Items": [
                    {
                        "PropertyCodeName": "IS_STATUS_INDIAN",
                        "PropertyValue": "False"
                    },
                    {
                        "PropertyCodeName": "REGISTRY_NUMBER",
                        "PropertyValue": "123456789"
                    },
                    {
                        "PropertyCodeName": "CPT30_FORM_FILED",
                        "PropertyValue": "False"
                    },
                    {
                        "PropertyCodeName": "EMPLOYMENT_CODE_CAN"
                    },
                    {
                        "PropertyCodeName": "BEYOND_PROV_CAN",
                        "PropertyValue": "-1"
                    },
                    {
                        "PropertyCodeName": "NUMBER_OF_DAYS_OUTSIDE_CANADA",
                        "PropertyValue": "15"
                    }
                ]
            }
        }
    ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-Tax-Statuses/GET-Employee-CAN-Tax-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-CAN-Tax-Statuses/GET-Employee-CAN-Tax-Statuses.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the getEmployeeCANTaxStatuses operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeCANTaxStatuses>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
         </ceridiandayforce.getEmployeeCANTaxStatuses>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "100421"
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200 with the following response body:

```json
{
    "Data": [
        {
            "ProvinceCode": "Federal",
            "EffectiveStart": "2019-01-01T00:00:00",
            "TaxPropertyCollection": {
                "Items": [
                    {
                        "PropertyCodeName": "IS_STATUS_INDIAN",
                        "PropertyValue": "False"
                    },
                    {
                        "PropertyCodeName": "REGISTRY_NUMBER",
                        "PropertyValue": "123456789"
                    },
                    {
                        "PropertyCodeName": "CPT30_FORM_FILED",
                        "PropertyValue": "False"
                    },
                    {
                        "PropertyCodeName": "EMPLOYMENT_CODE_CAN"
                    },
                    {
                        "PropertyCodeName": "BEYOND_PROV_CAN",
                        "PropertyValue": "-1"
                    },
                    {
                        "PropertyCodeName": "NUMBER_OF_DAYS_OUTSIDE_CANADA",
                        "PropertyValue": "15"
                    }
                ]
            }
        }
    ]
}
```
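The TaxPropertyCollection is a list of name/value pairs, so it is often easier to read flattened. A small sketch, assuming jq is installed (not something the connector requires):

```bash
# Print each tax property as NAME=VALUE; the '// "null"' fallback fills in
# pairs that come back without a PropertyValue (e.g. EMPLOYMENT_CODE_CAN).
curl -s http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json \
  | jq -r '.Data[].TaxPropertyCollection.Items[] | "\(.PropertyCodeName)=\(.PropertyValue // "null")"'
```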
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecontacts.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecontacts.md
deleted file mode 100644
index 4049b5acff..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeecontacts.md
+++ /dev/null
@@ -1,392 +0,0 @@
# Working with Employee Contacts

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update the contacts of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Contacts](#retrieving-employee-contact)| Retrieve an employee's contact information. |
|[POST Employee Contacts](#creating-employee-contact)| Create an employee's contact information. |
|[PATCH Employee Contacts](#updating-employee-contact)| Update an employee's contact information. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Contact
We can use the GET Employee Contacts operation with the required parameters to retrieve the contact information of an employee.

**GET Employee Contacts**
```xml
<ceridiandayforce.getEmployeeContacts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeContacts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by this operation.
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "ContactNumber": "202 265 8987", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "HomePhone", - "ShortName": "Home", - "LongName": "Home" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - }, - { - "ContactNumber": "201 569 8785", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "Mobile", - "ShortName": "Mobile", - "LongName": "Mobile" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - }, - { - "ElectronicAddress": "@AaronGloverOfficial", - "EffectiveStart": "2014-12-17T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "OnlineProfile", - "ShortName": "Online Profile", - "LongName": "Online Profile" - }, - "XRefCode": "Twitter", - "ShortName": "Twitter", - "LongName": "Twitter" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": false, - "NumberOfVerificationRequests": 0 - }, - { - "ElectronicAddress": "alok.jesudasen@ceridian.com", - "EffectiveStart": "2018-09-05T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "ElectronicAddress", - "ShortName": "Email Address", - "LongName": "Email Address" - }, - "XRefCode": "BusinessEmail", - "ShortName": "Business Email", - "LongName": "Business Email" - }, - "IsForSystemCommunications": true, - "IsPreferredContactMethod": true, - "IsUnlistedNumber": false, - "IsVerified": true, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/GET-Employee-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/GET-Employee-Contacts.aspx) - -#### Creating Employee Contact -We can use POST Employee Contacts operation with required parameters to create the required employee's contact information. 
**POST Employee Contacts**
```xml
<ceridiandayforce.postEmployeeContacts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.postEmployeeContacts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "FALSE",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "ContactNumber": "202 265 8987",
        "Country": {
            "Name": "United States of America",
            "XRefCode": "USA",
            "ShortName": "United States of America",
            "LongName": "United States of America"
        },
        "EffectiveStart": "2000-01-01T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Phone",
                "ShortName": "Phone",
                "LongName": "Phone"
            },
            "XRefCode": "HomePhone",
            "ShortName": "Home",
            "LongName": "Home"
        },
        "IsForSystemCommunications": false,
        "IsPreferredContactMethod": false,
        "IsUnlistedNumber": false,
        "IsVerified": false,
        "IsRejected": false,
        "ShowRejectedWarning": true,
        "NumberOfVerificationRequests": 0
    }
}
```

**Sample response**

This method returns HTTP code 200 and no response body.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/POST-Employee-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/POST-Employee-Contacts.aspx)

#### Updating Employee Contact
We can use the PATCH Employee Contacts operation with the required parameters to update the contact information of existing employees.

**PATCH Employee Contacts**
```xml
<ceridiandayforce.patchEmployeeContacts>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.patchEmployeeContacts>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.
```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "FALSE",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "ContactNumber": "202 265 8987",
        "Country": {
            "Name": "United States of America",
            "XRefCode": "USA",
            "ShortName": "United States of America",
            "LongName": "United States of America"
        },
        "EffectiveStart": "2000-01-01T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Phone",
                "ShortName": "Phone",
                "LongName": "Phone"
            },
            "XRefCode": "HomePhone",
            "ShortName": "Home",
            "LongName": "Home"
        },
        "IsForSystemCommunications": false,
        "IsPreferredContactMethod": false,
        "IsUnlistedNumber": false,
        "IsVerified": false,
        "IsRejected": false,
        "ShowRejectedWarning": true,
        "NumberOfVerificationRequests": 0
    }
}
```

**Sample response**

This operation returns HTTP code 200 with no response body.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/PATCH-Employee-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Contacts/PATCH-Employee-Contacts.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the postEmployeeContacts operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.isValidateOnly)" name="isValidateOnly"/>
         <property expression="json-eval($.fieldAndValue)" name="fieldAndValue"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.postEmployeeContacts>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
            <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
         </ceridiandayforce.postEmployeeContacts>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "isValidateOnly": "FALSE",
    "contextDateRangeFrom": "2017-01-01T13:24:56",
    "fieldAndValue": {
        "ContactNumber": "202 265 8987",
        "Country": {
            "Name": "United States of America",
            "XRefCode": "USA",
            "ShortName": "United States of America",
            "LongName": "United States of America"
        },
        "EffectiveStart": "2000-01-01T00:00:00",
        "ContactInformationType": {
            "ContactInformationTypeGroup": {
                "XRefCode": "Phone",
                "ShortName": "Phone",
                "LongName": "Phone"
            },
            "XRefCode": "HomePhone",
            "ShortName": "Home",
            "LongName": "Home"
        },
        "IsForSystemCommunications": false,
        "IsPreferredContactMethod": false,
        "IsUnlistedNumber": false,
        "IsVerified": false,
        "IsRejected": false,
        "ShowRejectedWarning": true,
        "NumberOfVerificationRequests": 0
    }
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeedirectdeposits.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeedirectdeposits.md
deleted file mode 100644
index 5f9eb2ac6c..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeedirectdeposits.md
+++ /dev/null
@@ -1,155 +0,0 @@
# Working with Employee Direct Deposits

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve the direct deposit information of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Direct Deposits](#retrieving-employee-direct-deposit)| Retrieve an employee's direct deposit information. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Direct Deposit
We can use the GET Employee Direct Deposits operation with required parameters to search for and retrieve the required employee's direct deposit information.

**GET Employee Direct Deposits**
```xml
<ceridiandayforce.getEmployeeDirectDeposits>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
</ceridiandayforce.getEmployeeDirectDeposits>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
  "Data": [
    {
      "AccountNumber": "664648651",
      "BankName": "1ST UNITED SERVICES CU",
      "DepositNumber": 1,
      "PayMethod": {
        "XRefCode": "CHECKING",
        "ShortName": "Checking",
        "LongName": "Checking"
      },
      "IsDeleted": false,
      "IsPercentage": false,
      "IsRemainder": true,
      "RequiresPreNote": false,
      "RoutingTransitNumber": "321174000"
    }
  ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Direct-Deposits/GET-Employee-Direct-Deposits.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Direct-Deposits/GET-Employee-Direct-Deposits.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the query operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeDirectDeposits>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
         </ceridiandayforce.getEmployeeDirectDeposits>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named `query.json` and copy the configuration given below to it:

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

3. Replace the credentials with your values.
- -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 and the following response - -```json -{ - "Data": [ - { - "AccountNumber": "664648651", - "BankName": "1ST UNITED SERVICES CU", - "DepositNumber": 1, - "PayMethod": { - "XRefCode": "CHECKING", - "ShortName": "Checking", - "LongName": "Checking" - }, - "IsDeleted": false, - "IsPercentage": false, - "IsRemainder": true, - "RequiresPreNote": false, - "RoutingTransitNumber": "321174000" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeemergencycontacts.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeemergencycontacts.md deleted file mode 100644 index cc12e1b564..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeemergencycontacts.md +++ /dev/null @@ -1,389 +0,0 @@ -# Working with Employee Emergency Contacts - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update emergency contacts of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Emergency Contacts](#retrieving-employee-emergency-contact)| Retrieve an employee's emergency contact information. | -|[POST Employee Emergency Contacts](#creating-employee-emergency-contact)| Create an employee's emergency contact information. | -|[PATCH Employee Emergency Contacts](#updating-employee-emergency-contact)| Update an employee's emergency contact information. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Emergency Contact -We can use GET Employee Emergency Contacts operation with required parameters to retrieve the emergency contact information of an employee. - -**GET Employee Emergency Addresses** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): he unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeTo": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "FirstName": "Alice", - "LastName": "Glover", - "IsPrimary": true, - "Relationship": { - "XRefCode": "WIFE", - "ShortName": "Wife", - "LongName": "Wife" - }, - "Addresses": { - "Items": [] - }, - "Contacts": { - "Items": [ - { - "ContactNumber": "213 658 9654", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "Mobile", - "ShortName": "Mobile", - "LongName": "Mobile" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - } - ] - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/GET-Employee-Emergency-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/GET-Employee-Emergency-Contacts.aspx) - -#### Creating Employee Emergency Contact -We can use POST Employee Emergency Contacts operation with required parameters to create the required employee's emergency contact information. - -**POST Employee Emergency Contacts** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "FirstName": "Alice", - "LastName": "Glover", - "IsPrimary": true, - "Relationship": { - "XRefCode": "WIFE", - "ShortName": "Wife", - "LongName": "Wife" - }, - "Addresses": { - "Items": [] - }, - "Contacts": { - "Items": [ - { - "ContactNumber": "213 658 9654", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "Mobile", - "ShortName": "Mobile", - "LongName": "Mobile" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - } - ] - } - } -} -``` - -**Sample response** - -This method returns a HTTP code 200 and no response body - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/POST-Employee-Emergency-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/POST-Employee-Emergency-Contacts.aspx) - -#### Updating Employee Emergency Contact -We can use PATCH Employee Emergency Contacts operation with required parameters to update the emergency contact information of existing employees. - -**PATCH Employee Emergency Contacts** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "FirstName": "Alice", - "LastName": "Glover", - "IsPrimary": true, - "Relationship": { - "XRefCode": "WIFE", - "ShortName": "Wife", - "LongName": "Wife" - }, - "Addresses": { - "Items": [] - }, - "Contacts": { - "Items": [ - { - "ContactNumber": "213 658 9654", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "Mobile", - "ShortName": "Mobile", - "LongName": "Mobile" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - } - ] - } - } -} -``` - -**Sample response** - -This operation returns HTTP code 200 with no response body - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/PATCH-Employee-Emergency-Contacts.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Emergency-Contacts/PATCH-Employee-Emergency-Contacts.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "FirstName": "Alice", - "LastName": "Glover", - "IsPrimary": true, - "Relationship": { - "XRefCode": "WIFE", - "ShortName": "Wife", - "LongName": "Wife" - }, - "Addresses": { - "Items": [] - }, - "Contacts": { - "Items": [ - { - "ContactNumber": "213 658 9654", - "Country": { - "Name": "United States of America", - "XRefCode": "USA", - "ShortName": "United States of America", - "LongName": "United States of America" - }, - "EffectiveStart": "2000-01-01T00:00:00", - "ContactInformationType": { - "ContactInformationTypeGroup": { - "XRefCode": "Phone", - "ShortName": "Phone", - "LongName": "Phone" - }, - "XRefCode": "Mobile", - "ShortName": "Mobile", - "LongName": "Mobile" - }, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "IsUnlistedNumber": false, - "IsVerified": false, - "IsRejected": false, - "ShowRejectedWarning": true, - "NumberOfVerificationRequests": 0 - } - ] - } - } -} -``` -3.Replace the credentials with your values. 

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeethnicities.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeethnicities.md
deleted file mode 100644
index ebb2ccce44..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeethnicities.md
+++ /dev/null
@@ -1,164 +0,0 @@
# Working with Employee Ethnicities

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve the ethnicity information of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Ethnicities](#retrieving-employee-ethnicities)| Retrieve an employee's ethnicity information. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Ethnicities
We can use the GET Employee Ethnicities operation with required parameters to search for and find the ethnicity of the required employees.

**GET Employee Ethnicities**
```xml
<ceridiandayforce.getEmployeeEthnicities>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeEthnicities>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeTo": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "EffectiveStart": "2013-06-25T00:00:00", - "Ethnicity": { - "XRefCode": "Black or African American (not Hispanic or Latino)", - "ShortName": "Black or African American (not Hispanic or Latino)", - "LongName": "Black or African American (not Hispanic or Latino)" - }, - "ManagerEthnicity": { - "XRefCode": "Black or African American (not Hispanic or Latino)", - "ShortName": "Black or African American (not Hispanic or Latino)", - "LongName": "Black or African American (not Hispanic or Latino)" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Ethnicities/GET-Employee-Ethnicities.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Ethnicities/GET-Employee-Ethnicities.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeTo} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeTo": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EffectiveStart": "2013-06-25T00:00:00", - "Ethnicity": { - "XRefCode": "Black or African American (not Hispanic or Latino)", - "ShortName": "Black or African American (not Hispanic or Latino)", - "LongName": "Black or African American (not Hispanic or Latino)" - }, - "ManagerEthnicity": { - "XRefCode": "Black or African American (not Hispanic or Latino)", - "ShortName": "Black or African American (not Hispanic or Latino)", - "LongName": "Black or African American (not Hispanic or Latino)" - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeehealthandwellness.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeehealthandwellness.md deleted file mode 100644 index deb3cb01e8..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeehealthandwellness.md +++ /dev/null @@ -1,141 +0,0 @@ -# Working with Employee Health and Wellness - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve tobacco use status of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Health and Wellness](#retrieving-employee-health-and-wellness)| Retrieve an employee's tobacco use status | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Health and Wellness -We can use GET Employee Health and Wellness operation with required parameters to search and find the required employee's tobacco use status. 
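Because the context date parameters of this operation are alternatives, a point-in-time ("as-of") lookup can be issued by sending `contextDate` on its own instead of a date range. The following is a minimal, hedged sketch that assumes the same sample proxy (`http://localhost:8280/services/query`) and placeholder credentials used elsewhere on this page:

```bash
# Hedged sketch: as-of lookup of an employee's tobacco use status.
# The credentials and xRefCode are the documentation placeholders;
# replace them with values from your own Dayforce environment.
curl http://localhost:8280/services/query \
  -H "Content-Type: application/json" \
  -d '{
    "username": "DFWSTest",
    "password": "DFWSTest",
    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
    "apiVersion": "V1",
    "xRefCode": "42199",
    "contextDate": "2017-06-01T00:00:00"
  }'
```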

**GET Employee Health and Wellness**
```xml
<ceridiandayforce.getEmployeeHealthAndWellness>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeHealthAndWellness>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
  "Data": [
    {
      "TobaccoUser": "YES",
      "EffectiveStart": "2017-05-25T00:00:00"
    }
  ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Health-and-Wellness/GET-Employee-Health-and-Wellness.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Health-and-Wellness/GET-Employee-Health-and-Wellness.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the init operation and the query operation.

1. Create a sample proxy as below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.contextDateRangeFrom)" name="contextDateRangeFrom"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeHealthAndWellness>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
         </ceridiandayforce.getEmployeeHealthAndWellness>
         <respond/>
      </inSequence>
      <outSequence/>
      <faultSequence/>
   </target>
</proxy>
```

2. Create a JSON file named `query.json` and copy the configuration given below to it:

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP code 200 with the following response body:

```json
{
  "Data": [
    {
      "TobaccoUser": "YES",
      "EffectiveStart": "2017-05-25T00:00:00"
    }
  ]
}
```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeemaritalstatuses.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeemaritalstatuses.md
deleted file mode 100644
index af6ece94b1..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeemaritalstatuses.md
+++ /dev/null
@@ -1,250 +0,0 @@
# Working with Employee Marital Statuses

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve, create, or update the marital status of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Marital Statuses](#retrieving-employee-marital-statuses)| Retrieve an employee's marital status information. |
|[POST Employee Marital Statuses](#creating-employee-marital-statuses)| Create an employee's marital status information. |
|[PATCH Employee Marital Statuses](#updating-employee-marital-statuses)| Update an employee's marital status information. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Marital Statuses
We can use the GET Employee Marital Statuses operation with required parameters to retrieve the marital status information of an employee.

**GET Employee Marital Statuses**
```xml
<ceridiandayforce.getEmployeeMaritalStatuses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeMaritalStatuses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "isValidateOnly": "true",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
  "Data": [
    {
      "MaritalStatus": {
        "XRefCode": "SINGLE",
        "ShortName": "Single",
        "LongName": "Single"
      },
      "EffectiveStart": "2000-01-01T00:00:00"
    }
  ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/GET-Marital-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/GET-Marital-Statuses.aspx)

#### Creating Employee Marital Statuses
We can use the POST Employee Marital Statuses operation with required parameters to create the required employee's marital status.

**POST Employee Marital Statuses**
```xml
<ceridiandayforce.postEmployeeMaritalStatuses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.postEmployeeMaritalStatuses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose marital status will be created. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "isValidateOnly": "true",
  "contextDateRangeFrom": "2017-01-01T13:24:56",
  "fieldAndValue": {
    "MaritalStatus": {
      "XRefCode": "SINGLE",
      "ShortName": "Single",
      "LongName": "Single"
    },
    "EffectiveStart": "2000-01-01T00:00:00"
  }
}
```

**Sample response**

This operation returns HTTP code 200 with no response body.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/POST-Employee-Marital-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/POST-Employee-Marital-Statuses.aspx)

#### Updating Employee Marital Statuses
We can use the PATCH Employee Marital Statuses operation with required parameters to update the marital status information of existing employees.

**PATCH Employee Marital Statuses**
```xml
<ceridiandayforce.patchEmployeeMaritalStatuses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.patchEmployeeMaritalStatuses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose marital status will be updated. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

Following is a sample request that can be handled by this operation.
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "contextDateRangeFrom": "2017-01-01T13:24:56", - "fieldAndValue": { - "MaritalStatus": { - "XRefCode": "SINGLE", - "ShortName": "Single", - "LongName": "Single" - }, - "EffectiveStart": "2000-01-01T00:00:00" - } -} -``` - -**Sample response** - -This operation returns HTTP code 200 with no response body - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/PATCH-Employee-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-Marital-Statuses/PATCH-Employee-Statuses.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeFrom} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "MaritalStatus": { - "XRefCode": "SINGLE", - "ShortName": "Single", - "LongName": "Single" - }, - "EffectiveStart": "2000-01-01T00:00:00" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeustaxes.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeustaxes.md deleted file mode 100644 index 6aff002f7c..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-personal-information/employeeustaxes.md +++ /dev/null @@ -1,310 +0,0 @@ -# Working with US Employee Taxes - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve tax details of a US employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee US Federal Taxes](#retrieving-US-employee-federal-taxes)| Retrieve a US employee's total federal claim amount, resident status and authorized tax credits. | -|[GET Employee US State Taxes](#retrieving-US-employee-state-taxes)| Retrieve a US employee's total state claim amount, prescribed deductions and authorized tax credits. | -|[GET Employee US Tax Statuses](#retrieving-US-employee-tax-statuses)| Retrieve a US employee's provincial tax filing status (e.g. single, married). | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving US Employee Federal Taxes -We can use GET Employee US Federal Taxes operation with required parameters to retrieve federal taxes of a US employee. 
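The response shown later in this section nests the filing status under `Data[].FilingStatus`. If you only need those fields, a small post-processing sketch (assuming the sample proxy at the end of this page is deployed and `jq` is installed) could look like this:

```bash
# Hedged sketch: call the sample proxy and print each filing status.
# query.json is the request file created in the sample configuration
# at the end of this page; jq is assumed to be available locally.
curl -s http://localhost:8280/services/query \
  -H "Content-Type: application/json" \
  -d @query.json | jq -r '.Data[].FilingStatus.LongName'
```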

**GET Employee US Federal Taxes**
```xml
<ceridiandayforce.getEmployeeUSFederalTaxes>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeUSFederalTaxes>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
  "Data": [
    {
      "EffectiveStart": "2019-01-01T00:00:00",
      "FilingStatus": {
        "CountryCode": "USA",
        "FederalFilingStatusCode": "S",
        "CalculationCode": "1",
        "PayrollOutput": "Single",
        "ShortName": "Single",
        "LongName": "Single"
      },
      "Allowances": 50,
      "AdditionalAmount": 100.00000,
      "IsTaxExempt": false,
      "IsLocked": false
    }
  ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-Federal-Taxes/GET-Employee-US-Federal-Taxes.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-Federal-Taxes/GET-Employee-US-Federal-Taxes.aspx)

#### Retrieving US Employee State Taxes
We can use the GET Employee US State Taxes operation with required parameters to retrieve the state taxes of a US employee.

**GET Employee US State Taxes**
```xml
<ceridiandayforce.getEmployeeUSStateTaxes>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeUSStateTaxes>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "contextDateRangeFrom": "2017-01-01T13:24:56"
}
```

**Sample response**

Given below is a sample response for this operation.

```json
{
  "Data": [
    {
      "EffectiveStart": "2019-01-01T00:00:00",
      "State": {
        "XRefCode": "NJ",
        "LongName": "New Jersey"
      },
      "FilingStatus": {
        "CountryCode": "USA",
        "StateFilingStatusCode": "S",
        "CalculationCode": "1",
        "PayrollOutput": "Single",
        "ShortName": "Single",
        "LongName": "Single"
      },
      "AdditionalAmount": 100.00000,
      "IsTaxExempt": false,
      "IsLocked": false
    }
  ]
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-State-Taxes/GET-Employee-US-State-Taxes.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-State-Taxes/GET-Employee-US-State-Taxes.aspx)

#### Retrieving US Employee Tax Statuses
We can use the GET Employee US Tax Statuses operation with required parameters to retrieve the tax filing statuses of US employees.

**GET Employee US Tax Statuses**
```xml
<ceridiandayforce.getEmployeeUSTaxStatuses>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeUSTaxStatuses>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199"
}
```

**Sample response**

Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "StateCode": "Federal", - "EffectiveStart": "2019-01-01T00:00:00", - "TaxPropertyCollection": { - "Items": [ - { - "PropertyCodeName": "STANDARD_OCCUPATIONAL_CODE", - "PropertyValue": "51-1000" - }, - { - "PropertyCodeName": "STATUTORY_EMPLOYEE", - "PropertyValue": "False" - }, - { - "PropertyCodeName": "SS_MED_RELIGIOUS_EXEMPTION", - "PropertyValue": "False" - }, - { - "PropertyCodeName": "RETIREMENT_PLAN_ELIGIBILITY", - "PropertyValue": "2" - } - ] - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-Tax-Statuses/GET-Employee-US-Tax-Statuses.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee-Personal-Information/Employee-US-Tax-Statuses/GET-Employee-US-Tax-Statuses.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns the following response body with code 200 - -```json -{ - "Data": [ - { - "StateCode": "Federal", - "EffectiveStart": "2019-01-01T00:00:00", - "TaxPropertyCollection": { - "Items": [ - { - "PropertyCodeName": "STANDARD_OCCUPATIONAL_CODE", - "PropertyValue": "51-1000" - }, - { - "PropertyCodeName": "STATUTORY_EMPLOYEE", - "PropertyValue": "False" - }, - { - "PropertyCodeName": "SS_MED_RELIGIOUS_EXEMPTION", - "PropertyValue": "False" - }, - { - "PropertyCodeName": "RETIREMENT_PLAN_ELIGIBILITY", - "PropertyValue": "2" - } - ] - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/availability.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/availability.md deleted file mode 100644 index a9a08339c9..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/availability.md +++ /dev/null @@ -1,317 +0,0 @@ -# Working with Employee Availability - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve the availability of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Availability](#retrieving-employee-availability)| Availability represents the periods an employee is available to be scheduled for work. This request allows you to retrieve a single employee's daily availability between two dates. In order to use it, an employee XRefCodes is needed. Employee XRefCodes can be retrieved with GET Employees. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Availability -We can use GET Availability operation with required parameters to search and find availability of required employees. 
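Since both filter dates of this operation are mandatory and inclusive, callers typically compute the window programmatically. The following is a minimal, hedged sketch (GNU `date` is assumed for the `-d` option; the proxy endpoint and credentials are the placeholders used in the sample configuration below):

```bash
# Hedged sketch: query the next 14 days of availability through the
# sample proxy. GNU date is assumed; the credentials and xRefCode are
# documentation placeholders to be replaced with your own values.
START=$(date -u +%Y-%m-%dT00:00:00)
END=$(date -u -d "+14 days" +%Y-%m-%dT00:00:00)
curl http://localhost:8280/services/query \
  -H "Content-Type: application/json" \
  -d "{
    \"username\": \"DFWSTest\",
    \"password\": \"DFWSTest\",
    \"clientNamespace\": \"usconfigr57.dayforcehcm.com/Api/ddn\",
    \"apiVersion\": \"V1\",
    \"xRefCode\": \"42199\",
    \"filterAvailabilityStartDate\": \"$START\",
    \"filterAvailabilityEndDate\": \"$END\"
  }"
```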
- -**GET Availability** -```xml - - {$ctx:xRefCode} - {$ctx:filterAvailabilityStartDate} - {$ctx:filterAvailabilityEndDate} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* filterAvailabilityStartDate (Mandatory): Inclusive period start date to determine which employee availability data to retrieve . Example: 2017-01-01T00:00:00 -* filterAvailabilityEndDate (Mandatory): Inclusive period end date to determine which employee availability data to retrieve . Example: 2017-01-01T00:00:00 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "filterAvailabilityStartDate": "2018-02-04T00:00:00", - "filterAvailabilityEndDate": "2018-02-18T00:00:00" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "DateOfRequest": "2018-02-04T00:00:00", - "UnAvailable": true, - "IsDefault": true - }, - { - "DateOfRequest": "2018-02-05T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-06T00:00:00", - "IsDefault": true, - "StartTime1": "07:00:00", - "EndTime1": "09:00:00", - "StartTime2": "14:00:00", - "EndTime2": "20:00:00" - }, - { - "DateOfRequest": "2018-02-07T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-08T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-09T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-10T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-11T00:00:00", - "UnAvailable": true, - "IsDefault": true - }, - { - "DateOfRequest": "2018-02-12T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-13T00:00:00", - "IsDefault": true, - "StartTime1": "07:00:00", - "EndTime1": "09:00:00", - "StartTime2": "14:00:00", - "EndTime2": "20:00:00" - }, - { - "DateOfRequest": "2018-02-14T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-15T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-16T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-17T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-18T00:00:00", - "UnAvailable": true, - "IsDefault": true - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Time-Management/Employee-Availability.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/Employee-Availability.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:filterAvailabilityStartDate} - {$ctx:filterAvailabilityEndDate} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "filterAvailabilityStartDate": "2018-02-04T00:00:00", - "filterAvailabilityEndDate": "2018-02-18T00:00:00" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following body - -```json -{ - "Data": [ - { - "DateOfRequest": "2018-02-04T00:00:00", - "UnAvailable": true, - "IsDefault": true - }, - { - "DateOfRequest": "2018-02-05T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-06T00:00:00", - "IsDefault": true, - "StartTime1": "07:00:00", - "EndTime1": "09:00:00", - "StartTime2": "14:00:00", - "EndTime2": "20:00:00" - }, - { - "DateOfRequest": "2018-02-07T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-08T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-09T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-10T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-11T00:00:00", - "UnAvailable": true, - "IsDefault": true - }, - { - "DateOfRequest": "2018-02-12T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-13T00:00:00", - "IsDefault": true, - "StartTime1": "07:00:00", - "EndTime1": "09:00:00", - "StartTime2": "14:00:00", - "EndTime2": "20:00:00" - }, - { - "DateOfRequest": "2018-02-14T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-15T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-16T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-17T00:00:00", - "IsDefault": true, - "StartTime1": "00:00:00", - "EndTime1": "1.00:00:00" - }, - { - "DateOfRequest": "2018-02-18T00:00:00", - "UnAvailable": true, - "IsDefault": true - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeepunches.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeepunches.md deleted file mode 100644 index 34e8683237..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeepunches.md +++ /dev/null @@ -1,442 +0,0 @@ -# Working with Employee Punches - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve work shift data of an employee - -| Operation | Description | -| ------------- 
|-------------|
|[GET Employee Punches](#retrieving-employee-punches)| Extract the worked shift data for several employees at a time. Required parameters for the call include FilterTransactionStartTimeUTC and FilterTransactionEndTimeUTC. The system will search for all employee punch records that were modified between these two dates. The two dates must be 7 days apart or less. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Punches
We can use the GET Employee Punches operation with required parameters to get the worked shift data of employees.

**GET Employee Punches**
```xml
<ceridiandayforce.getEmployeePunches>
    <filterTransactionStartTimeUTC>{$ctx:filterTransactionStartTimeUTC}</filterTransactionStartTimeUTC>
    <filterTransactionEndTimeUTC>{$ctx:filterTransactionEndTimeUTC}</filterTransactionEndTimeUTC>
    <employeeXRefCode>{$ctx:employeeXRefCode}</employeeXRefCode>
    <locationXRefCode>{$ctx:locationXRefCode}</locationXRefCode>
    <positionXRefCode>{$ctx:positionXRefCode}</positionXRefCode>
    <departmentXRefCode>{$ctx:departmentXRefCode}</departmentXRefCode>
    <jobXRefCode>{$ctx:jobXRefCode}</jobXRefCode>
    <docketXRefCode>{$ctx:docketXRefCode}</docketXRefCode>
    <projectXRefCode>{$ctx:projectXRefCode}</projectXRefCode>
    <payAdjustmentXRefCode>{$ctx:payAdjustmentXRefCode}</payAdjustmentXRefCode>
    <shiftStatus>{$ctx:shiftStatus}</shiftStatus>
    <filterShiftTimeStart>{$ctx:filterShiftTimeStart}</filterShiftTimeStart>
    <filterShiftTimeEnd>{$ctx:filterShiftTimeEnd}</filterShiftTimeEnd>
    <businessDate>{$ctx:businessDate}</businessDate>
    <pageSize>{$ctx:pageSize}</pageSize>
</ceridiandayforce.getEmployeePunches>
```

**Properties**

* filterTransactionStartTimeUTC (Mandatory): Inclusive transaction period start date in UTC to determine which employee punch data to retrieve. Example: 2017-01-01T00:00:00
* filterTransactionEndTimeUTC (Mandatory): Inclusive transaction period end date in UTC to determine which employee punch data to retrieve. Example: 2017-01-01T00:00:00
* employeeXRefCode (Optional): The unique identifier (external reference code) of the employee to be retrieved. The value provided must be the exact match for an employee.
* locationXRefCode (Optional): A case-sensitive field that identifies a location or organizational unit.
* positionXRefCode (Optional): A case-sensitive field that identifies one or more positions.
* departmentXRefCode (Optional): A case-sensitive field that identifies one or more departments.
* jobXRefCode (Optional): A case-sensitive field that identifies one or more jobs.
* docketXRefCode (Optional): A case-sensitive field that identifies one or more dockets.
* projectXRefCode (Optional): A case-sensitive field that identifies one or more projects.
* payAdjustmentXRefCode (Optional): A case-sensitive field that identifies one or more pay adjustments.
* shiftStatus (Optional): A case-sensitive field containing shift status groups. Examples: [ACTIVE, COMPLETED, PROBLEM, ALL]
* filterShiftTimeStart (Optional): Use with FilterTransactionStartTimeUTC to search for shifts with a start and end time in a given timeframe. Example: used to include or exclude edits made to historical punches.
* filterShiftTimeEnd (Optional): Use with FilterTransactionEndTimeUTC to search for shifts with a start and end time in a given timeframe. Example: used to include or exclude edits made to historical punches.
* businessDate (Optional): The Business Date value is intended as a “Timesheet View” to return punch data related to a client's business day parameter configuration. Example: 2017-01-01T00:00:00
* pageSize (Optional): The number of records returned per page in the paginated response.

**Sample request**

Following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "filterTransactionStartTimeUTC": "2019-03-25T00:00:00",
  "filterTransactionEndTimeUTC": "2019-03-29T00:00:00"
}
```

**Sample response**

Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "PunchXRefCode": "#DF_1480318", - "EmployeeXRefCode": "67206", - "PunchStatus": "c", - "TimeStart": "2019-03-25T09:00:00", - "TimeEnd": "2019-03-25T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32014", - "PositionXRefCode": "Cust. Svc. Dept. Mgr ", - "DepartmentXRefCode": "14", - "JobXRefCode": "4", - "BusinessDate": "2019-03-25T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:29:07.063", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480318", - "Type": "m", - "TimeStart": "2019-03-25T11:15:00", - "TimeEnd": "2019-03-25T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:58.473" - } - ] - }, - { - "PunchXRefCode": "#DF_1480319", - "EmployeeXRefCode": "67206", - "PunchStatus": "c", - "TimeStart": "2019-03-26T09:00:00", - "TimeEnd": "2019-03-26T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32014", - "PositionXRefCode": "Cust. Svc. Dept. Mgr ", - "DepartmentXRefCode": "14", - "JobXRefCode": "4", - "BusinessDate": "2019-03-26T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:29:07.063", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480319", - "Type": "m", - "TimeStart": "2019-03-26T11:15:00", - "TimeEnd": "2019-03-26T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:58.473" - } - ] - }, - { - "PunchXRefCode": "#DF_1480320", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-25T09:00:00", - "TimeEnd": "2019-03-25T17:00:00", - "NetHours": 7.000, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-25T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480320", - "Type": "m", - "TimeStart": "2019-03-25T12:30:00", - "TimeEnd": "2019-03-25T13:30:00", - "NetHours": 1.000, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - }, - { - "PunchXRefCode": "#DF_1480321", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-26T09:00:00", - "TimeEnd": "2019-03-26T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-26T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480321", - "Type": "m", - "TimeStart": "2019-03-26T11:15:00", - "TimeEnd": "2019-03-26T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - }, - { - "PunchXRefCode": "#DF_1480322", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-28T09:00:00", - "TimeEnd": "2019-03-28T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-28T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480322", - "Type": "m", - 
"TimeStart": "2019-03-28T11:15:00", - "TimeEnd": "2019-03-28T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - } - ], - "Paging": { - "Next": "" - } -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Time-Management/Employee-Punches.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/Employee-Punches.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:filterTransactionStartTimeUTC} - {$ctx:filterTransactionEndTimeUTC} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "filterTransactionStartTimeUTC": "2019-03-25T00:00:00", - "filterTransactionEndTimeUTC": "2019-03-29T00:00:00" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "PunchXRefCode": "#DF_1480318", - "EmployeeXRefCode": "67206", - "PunchStatus": "c", - "TimeStart": "2019-03-25T09:00:00", - "TimeEnd": "2019-03-25T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32014", - "PositionXRefCode": "Cust. Svc. Dept. Mgr ", - "DepartmentXRefCode": "14", - "JobXRefCode": "4", - "BusinessDate": "2019-03-25T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:29:07.063", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480318", - "Type": "m", - "TimeStart": "2019-03-25T11:15:00", - "TimeEnd": "2019-03-25T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:58.473" - } - ] - }, - { - "PunchXRefCode": "#DF_1480319", - "EmployeeXRefCode": "67206", - "PunchStatus": "c", - "TimeStart": "2019-03-26T09:00:00", - "TimeEnd": "2019-03-26T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32014", - "PositionXRefCode": "Cust. Svc. Dept. 
Mgr ", - "DepartmentXRefCode": "14", - "JobXRefCode": "4", - "BusinessDate": "2019-03-26T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:29:07.063", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480319", - "Type": "m", - "TimeStart": "2019-03-26T11:15:00", - "TimeEnd": "2019-03-26T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:58.473" - } - ] - }, - { - "PunchXRefCode": "#DF_1480320", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-25T09:00:00", - "TimeEnd": "2019-03-25T17:00:00", - "NetHours": 7.000, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-25T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480320", - "Type": "m", - "TimeStart": "2019-03-25T12:30:00", - "TimeEnd": "2019-03-25T13:30:00", - "NetHours": 1.000, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - }, - { - "PunchXRefCode": "#DF_1480321", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-26T09:00:00", - "TimeEnd": "2019-03-26T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-26T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480321", - "Type": "m", - "TimeStart": "2019-03-26T11:15:00", - "TimeEnd": "2019-03-26T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - }, - { - "PunchXRefCode": "#DF_1480322", - "EmployeeXRefCode": "45522", - "PunchStatus": "c", - "TimeStart": "2019-03-28T09:00:00", - "TimeEnd": "2019-03-28T17:00:00", - "NetHours": 7.500, - "LocationXRefCode": "Store 32028", - "PositionXRefCode": "Day Stocker", - "DepartmentXRefCode": "28", - "JobXRefCode": "32", - "BusinessDate": "2019-03-28T00:00:00", - "IsDeleted": false, - "IsOnCall": false, - "FuturePunch": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397", - "MealBreaks": [ - { - "PunchXRefCode": "#DF_1480322", - "Type": "m", - "TimeStart": "2019-03-28T11:15:00", - "TimeEnd": "2019-03-28T11:45:00", - "NetHours": 0.500, - "IsAutoInjected": false, - "LastModifiedTimestampUtc": "2019-03-28T12:28:59.397" - } - ] - } - ], - "Paging": { - "Next": "" - } -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeerawpunches.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeerawpunches.md deleted file mode 100644 index a8c336745d..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/employeerawpunches.md +++ /dev/null @@ -1,324 +0,0 @@ -# Working with Employee Raw Punches - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create raw punches of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Raw Punches](#retrieving-employee-raw-punches)| 
Retrieve raw punches as they are entered at the clock. |
|[POST Employee Raw Punches](#creating-employee-raw-punches)| Insert a raw punch. This raw punch record will be treated as a punch coming from the clock and be validated against configured punch policies. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Raw Punches
We can use the GET Employee Raw Punches operation with the required parameters to retrieve the raw punches of employees.

**GET Employee Raw Punches**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getEmployeeRawPunches operation name is assumed. -->
<ceridiandayforce.getEmployeeRawPunches>
    <filterTransactionStartTimeUTC>{$ctx:filterTransactionStartTimeUTC}</filterTransactionStartTimeUTC>
    <filterTransactionEndTimeUTC>{$ctx:filterTransactionEndTimeUTC}</filterTransactionEndTimeUTC>
    <employeeXRefCode>{$ctx:employeeXRefCode}</employeeXRefCode>
    <employeeBadge>{$ctx:employeeBadge}</employeeBadge>
    <punchState>{$ctx:punchState}</punchState>
    <punchTypes>{$ctx:punchTypes}</punchTypes>
    <pageSize>{$ctx:pageSize}</pageSize>
</ceridiandayforce.getEmployeeRawPunches>
```

**Properties**

* filterTransactionStartTimeUTC (Mandatory): Inclusive transaction period start date in UTC to determine which employee punch data to retrieve. Example: 2017-01-01T00:00:00
* filterTransactionEndTimeUTC (Mandatory): Inclusive transaction period end date in UTC to determine which employee punch data to retrieve. Example: 2017-01-01T00:00:00
* employeeXRefCode (Optional): The unique identifier (external reference code) of the employee to be retrieved. The value provided must be the exact match for an employee.
* employeeBadge (Optional): The badge number of the employee to be retrieved. The value provided must be the exact match for a badge.
* punchState (Optional): The state of the punch. Examples: [PROCESSED, REJECTED, ALL]
* punchTypes (Optional): Comma-separated values of punch types. Example: [Punch_In, Break_Out, Job_Transfer, ALL, etc.]
* pageSize (Optional): The number of records returned per page in the paginated response.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "filterTransactionStartTimeUTC": "2019-06-03T00:00:00",
  "filterTransactionEndTimeUTC": "2019-06-05T00:00:00"
}
```
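The optional properties above can be combined in a single call to narrow the result set. The sketch below is hedged: the getEmployeeRawPunches operation name is the same assumption made above, and the static values are illustrative only.

```xml
<!-- Hedged sketch (values and operation name assumed): fetch only processed punches
     for one badge, with a smaller page size, instead of reading values from {$ctx:...}. -->
<ceridiandayforce.getEmployeeRawPunches>
    <filterTransactionStartTimeUTC>2019-06-03T00:00:00</filterTransactionStartTimeUTC>
    <filterTransactionEndTimeUTC>2019-06-05T00:00:00</filterTransactionEndTimeUTC>
    <employeeBadge>33333</employeeBadge>
    <punchState>PROCESSED</punchState>
    <pageSize>50</pageSize>
</ceridiandayforce.getEmployeeRawPunches>
```

**Sample response**

Given below is a sample response for this operation.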
- -```json -{ - "Data": [ - { - "RawPunchXRefCode": "#DF_322", - "PunchState": "Rejected", - "EmployeeBadge": "42199", - "RawPunchTime": "2019-06-04T11:28:28-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "RejectedReason": "Employee Badge Validation", - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_323", - "PunchState": "Rejected", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:28:56-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "RejectedReason": "Shift Start Validation", - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_324", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:30:00-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_325", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:30:18-04:00", - "WasOfflinePunch": false, - "PunchType": "Meal_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_326", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T18:28:28-04:00", - "WasOfflinePunch": false, - "PunchType": "Meal_Out", - "PunchDevice": "API", - "IsDuplicate": false, - "IPAddress": "63.235.55.130", - "PunchOrigin": "C" - } - ], - "Paging": { - "Next": "" - } -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-Raw-Punches.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-Raw-Punches.aspx) - -#### Creating Employee Raw Punches -We can use POST Employee Raw Punches operation with required parameters to create raw punches for employees. - -**POST Employee Raw Punches** -```xml - - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:28:28-04:00", - "PunchType": "Punch_In", - "PunchDevice": "API" - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Time-Management/POST-Employee-Raw-Punches.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/POST-Employee-Raw-Punches.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:filterTransactionStartTimeUTC} - {$ctx:filterTransactionEndTimeUTC} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "filterTransactionStartTimeUTC": "2019-06-03T00:00:00", - "filterTransactionEndTimeUTC": "2019-06-05T00:00:00" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "RawPunchXRefCode": "#DF_322", - "PunchState": "Rejected", - "EmployeeBadge": "42199", - "RawPunchTime": "2019-06-04T11:28:28-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "RejectedReason": "Employee Badge Validation", - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_323", - "PunchState": "Rejected", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:28:56-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "RejectedReason": "Shift Start Validation", - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_324", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:30:00-04:00", - "WasOfflinePunch": false, - "PunchType": "Punch_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_325", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T11:30:18-04:00", - "WasOfflinePunch": false, - "PunchType": "Meal_In", - "PunchDevice": "prdemo500bHTML", - "IsDuplicate": false, - "IPAddress": "10.66.25.192", - "PunchOrigin": "C" - }, - { - "RawPunchXRefCode": "#DF_326", - "PunchState": "Processed", - "EmployeeXRefCode": "42199", - "EmployeeBadge": "33333", - "RawPunchTime": "2019-06-04T18:28:28-04:00", - "WasOfflinePunch": false, - "PunchType": "Meal_Out", - "PunchDevice": "API", - "IsDuplicate": false, - "IPAddress": "63.235.55.130", - "PunchOrigin": "C" - } - ], - "Paging": { - "Next": "" - } -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/schedules.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/schedules.md deleted file mode 100644 index 9c0d9364eb..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/schedules.md +++ /dev/null @@ -1,136 +0,0 @@ -# Working with Employee Schedules - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve schedules of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Schedules](#retrieving-employee-schedules)| Retrieve the configured schedules for a single employee for every day within a 
defined period. In order to use this request, an employee XRefCode is needed. Employee XRefCodes can be retrieved with GET Employees. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Schedules
We can use the GET Schedules operation with the required parameters to find the schedules of employees.

**GET Schedules**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getEmployeeSchedules operation name is assumed. -->
<ceridiandayforce.getEmployeeSchedules>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <filterScheduleStartDate>{$ctx:filterScheduleStartDate}</filterScheduleStartDate>
    <filterScheduleEndDate>{$ctx:filterScheduleEndDate}</filterScheduleEndDate>
    <isPosted>{$ctx:isPosted}</isPosted>
    <expand>{$ctx:expand}</expand>
</ceridiandayforce.getEmployeeSchedules>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* filterScheduleStartDate (Mandatory): Inclusive period start aligned to the employee business day start date to determine which employee schedule data to retrieve. Example: 2017-01-01T13:24:56
* filterScheduleEndDate (Mandatory): Exclusive period end aligned to the employee business day start to determine which employee schedule data to retrieve. Example: 2017-01-01T13:24:56
* isPosted (Optional - boolean): A flag to determine whether to display posted schedules. By default, it searches for published schedules.
* expand (Optional - string): This parameter accepts a comma-separated list of top-level entities that contain the data elements needed for downstream processing. When this parameter is not used, only data elements from the primary record will be included. For more information, please refer to the Introduction to Dayforce Web Services document.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "filterScheduleStartDate": "2018-02-04T00:00:00",
  "filterScheduleEndDate": "2018-02-18T00:00:00"
}
```

**Sample response**

Given below is a sample response for this operation.

```json

```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-Schedules.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-Schedules.aspx)

### Sample configuration

The following example illustrates how to connect to Dayforce with the `init` operation and the query operation.

1. Create a sample proxy as shown below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <!-- The original markup of this proxy was lost in this patch; the tags below are
              reconstructed from this page's parameters, and the getEmployeeSchedules
              operation name is assumed. -->
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <property expression="json-eval($.filterScheduleStartDate)" name="filterScheduleStartDate"/>
         <property expression="json-eval($.filterScheduleEndDate)" name="filterScheduleEndDate"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeSchedules>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
            <filterScheduleStartDate>{$ctx:filterScheduleStartDate}</filterScheduleStartDate>
            <filterScheduleEndDate>{$ctx:filterScheduleEndDate}</filterScheduleEndDate>
         </ceridiandayforce.getEmployeeSchedules>
         <respond/>
      </inSequence>
   </target>
   <description/>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "filterScheduleStartDate": "2018-02-04T00:00:00",
  "filterScheduleEndDate": "2018-02-18T00:00:00"
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP Code 200 with the following response body:

```json

```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/timeawayfromwork.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/timeawayfromwork.md
deleted file mode 100644
index 541c36eb7c..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employee-time-management/timeawayfromwork.md
+++ /dev/null
@@ -1,174 +0,0 @@
# Working with Employee Time Away from Work

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve the time away from work of an employee.

| Operation | Description |
| ------------- |-------------|
|[GET Employee Time Away from Work](#retrieving-employee-time-away-from-work)| Retrieve the scheduled time away from work (TAFW) periods of a single employee. In order to use this request, an employee XRefCode is needed. Employee XRefCodes can be retrieved with GET Employees. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Employee Time Away from Work
We can use the GET Employee Time Away from Work operation with the required parameters to get the time spent by employees away from work.

**GET Employee Time Away from Work**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getEmployeeTimeAwayFromWork operation name is assumed. -->
<ceridiandayforce.getEmployeeTimeAwayFromWork>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <filterTAFWStartDate>{$ctx:filterTAFWStartDate}</filterTAFWStartDate>
    <filterTAFWEndDate>{$ctx:filterTAFWEndDate}</filterTAFWEndDate>
    <status>{$ctx:status}</status>
</ceridiandayforce.getEmployeeTimeAwayFromWork>
```

**Properties**

* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
* filterTAFWStartDate (Mandatory - string): Inclusive period start date to determine which employee time away from work data to retrieve. Example: 2017-01-01T13:24:56
* filterTAFWEndDate (Mandatory - string): Exclusive period end date to determine which employee time away from work data to retrieve. Example: 2017-01-01T13:24:56
* status (Mandatory - string): A case-sensitive field containing status values for time away from work. Examples: [APPROVED,PENDING,CANCELED,DENIED,CANCELPENDING]

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "xRefCode": "42199",
  "filterTAFWStartDate": "2018-02-04T00:00:00",
  "filterTAFWEndDate": "2018-02-18T00:00:00",
  "status": "APPROVED"
}
```
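Because the status property is case-sensitive, a common variation is to query a different approval state over the same period. The sketch below is hedged: the getEmployeeTimeAwayFromWork operation name is the same assumption made above, and the static values are illustrative only.

```xml
<!-- Hedged sketch (values and operation name assumed): fetch PENDING rather than
     APPROVED time-away-from-work periods for the same employee and date range. -->
<ceridiandayforce.getEmployeeTimeAwayFromWork>
    <xRefCode>42199</xRefCode>
    <filterTAFWStartDate>2018-02-04T00:00:00</filterTAFWStartDate>
    <filterTAFWEndDate>2018-02-18T00:00:00</filterTAFWEndDate>
    <status>PENDING</status>
</ceridiandayforce.getEmployeeTimeAwayFromWork>
```

**Sample response**

Given below is a sample response for this operation.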
- -```json -{ - "Data": [ - { - "DateOfRequest": "2018-02-07T09:50:00", - "TimeStart": "2018-02-07T00:00:00", - "TimeEnd": "2018-02-09T00:00:00", - "NetHours": 16.000, - "ReasonName": "Sick", - "AllDay": true - }, - { - "DateOfRequest": "2018-02-07T09:52:00", - "TimeStart": "2018-02-14T00:00:00", - "TimeEnd": "2018-02-15T00:00:00", - "NetHours": 8.000, - "ReasonName": "Training", - "AllDay": true - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-TAFW.aspx](https://developers.dayforce.com/Build/API-Explorer/Time-Management/GET-Employee-TAFW.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:filterTAFWStartDate} - {$ctx:filterTAFWEndDate} - {$ctx:status} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "filterTAFWStartDate": "2018-02-04T00:00:00", - "filterTAFWEndDate": "2018-02-18T00:00:00", - "status": "APPROVED" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "DateOfRequest": "2018-02-07T09:50:00", - "TimeStart": "2018-02-07T00:00:00", - "TimeEnd": "2018-02-09T00:00:00", - "NetHours": 16.000, - "ReasonName": "Sick", - "AllDay": true - }, - { - "DateOfRequest": "2018-02-07T09:52:00", - "TimeStart": "2018-02-14T00:00:00", - "TimeEnd": "2018-02-15T00:00:00", - "NetHours": 8.000, - "ReasonName": "Training", - "AllDay": true - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employee.md b/en/docs/reference/connectors/ceridiandayforce-connector/employee.md deleted file mode 100644 index deb33d8391..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/employee.md +++ /dev/null @@ -1,586 +0,0 @@ -# Working with Employees - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to work with Employee where you can retrieve employee human resources data and update it. -If the employee is not already in Dayforce, you can add them. - -| Operation | Description | -| ------------- |-------------| -|[GET Employees](#retrieving-a-list-of-employees)| Search for employees using a number of parameters including organizational unit, hire dates, and employment status. | -|[GET Employee Details](#retrieving-details-of-employee)|Retrieve detailed data for a given employee.| -|[POST Employee](#create-an-employee)|Create an employee in Dayforce. This includes hiring an employee. Dayforce validates the data you submit and creates a new record for the employee.| -|[PATCH Employee](#update-existing-employee)|Update existing employee records. Supports rehire as well.| - -### Operation details - -This section provides more details on each of the operations. 

#### Retrieving a list of Employees

We can use the GET Employees operation with the required parameters to search for and find the required employees.

**GET Employees**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getEmployees operation name is assumed. -->
<ceridiandayforce.getEmployees>
    <employeeNumber>{$ctx:employeeNumber}</employeeNumber>
    <employmentStatusXRefCode>{$ctx:employmentStatusXRefCode}</employmentStatusXRefCode>
    <orgUnitXRefCode>{$ctx:orgUnitXRefCode}</orgUnitXRefCode>
    <filterHireStartDate>{$ctx:filterHireStartDate}</filterHireStartDate>
    <filterHireEndDate>{$ctx:filterHireEndDate}</filterHireEndDate>
    <filterTerminationStartDate>{$ctx:filterTerminationStartDate}</filterTerminationStartDate>
    <filterTerminationEndDate>{$ctx:filterTerminationEndDate}</filterTerminationEndDate>
    <filterUpdatedStartDate>{$ctx:filterUpdatedStartDate}</filterUpdatedStartDate>
    <filterUpdatedEndDate>{$ctx:filterUpdatedEndDate}</filterUpdatedEndDate>
    <contextDate>{$ctx:contextDate}</contextDate>
</ceridiandayforce.getEmployees>
```

**Properties**

* employeeNumber: Employment identification number assigned to an employee. A partial value can be provided for a wider search.
* employmentStatusXRefCode: A case-sensitive field containing employment status values, which can be client-specific. Use a ContextDate value to search for employees with a given status as of a point in time. Otherwise, the search will use the current date and time.
* orgUnitXRefCode: A case-sensitive field that identifies a client's organizational units. Use this to search all levels of the employees’ organization including department, location, region, corporate, etc. Use a ContextDate value to search for employees with a specific value as of a point in time. Otherwise, the search will use the current date and time.
* filterHireStartDate: Use to search for employees whose most recent hire date is greater than or equal to the specified value (e.g. 2017-01-01T13:24:56). Use with filterHireEndDate to search for employees hired or rehired in a given timeframe.
* filterHireEndDate: This date is used to search for employees whose most recent hire date is less than or equal to the specified value. Typically this parameter is used in conjunction with filterHireStartDate to search for employees hired or rehired in a given timeframe. Example: 2017-01-01T13:24:56
* filterTerminationStartDate: This date is used to search for employees with termination date values greater than or equal to the specified value. Typically this parameter is used in conjunction with filterTerminationEndDate to search for employees terminated in a given timeframe. Example: 2017-01-01T13:24:56
* filterTerminationEndDate: This date is used to search for employees with termination date values less than or equal to the specified value. Typically this parameter is used in conjunction with filterTerminationStartDate to search for employees terminated in a given timeframe. Example: 2017-01-01T13:24:56
* filterUpdatedStartDate: The beginning date used when searching for employees with updates (and newly effective records) in a specified timeframe. When a value is provided for this parameter, a filterUpdatedEndDate value must also be provided. Because this search is conducted across all entities in the HR data model regardless of whether the requesting user has access to them, it is possible that the query will return XRefCodes of employees with changes in which the consuming application is not interested. Example: 2017-01-01T13:24:56
* filterUpdatedEndDate: The end date used when searching for employees with updates (and newly effective records) in a specified timeframe. When a value is provided for this parameter, a filterUpdatedStartDate value must also be provided. Example: 2017-01-01T13:24:56
* contextDate: The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56

**Sample request**

The following is a sample request that can be handled by the GET Employees operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "employeeNumber": "42199"
}
```

**Sample response**

Given below is a sample response for the GET Employees operation.

```json
{
  "XRefCode": "42199"
}
```

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Employee/GET-Employees.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee/GET-Employees.aspx)

#### Retrieving details of employee
We can use the GET Employee Details operation with the required parameters to retrieve information on employees.

**GET Employee Details**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getEmployeeDetails operation name is assumed. -->
<ceridiandayforce.getEmployeeDetails>
    <xRefCode>{$ctx:xRefCode}</xRefCode>
    <contextDate>{$ctx:contextDate}</contextDate>
    <expand>{$ctx:expand}</expand>
    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
</ceridiandayforce.getEmployeeDetails>
```

**Properties**

* xRefCode: The unique identifier (external reference code) of the employee to be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate: The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* expand: This parameter accepts a comma-separated list of top-level entities that contain the data elements needed for downstream processing. When this parameter is not used, only data elements from the employee primary record will be included. For more information, please refer to the Introduction to Dayforce Web Services document. -* contextDateRangeFrom: The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo: The Context Date Range To value is end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by the GET Employee Details operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for the GET Employee Details operation. - -```json -{ - "Data": { - "BioExempt": false, - "BirthDate": "1969-06-23T00:00:00", - "ChecksumTimestamp": "2019-10-01T08:21:19.28", - "ClockSupervisor": false, - "Culture": { - "XRefCode": "en-US", - "ShortName": "English (US)", - "LongName": "English (US)" - }, - "EligibleForRehire": "NOTANSWERED", - "FederatedId": "aaron.glover", - "Gender": "M", - "HireDate": "2000-08-23T00:00:00", - "HomePhone": "202 265 8987", - "NewHireApprovalDate": "2000-01-01T00:00:00", - "NewHireApproved": true, - "NewHireApprovedBy": "System", - "OriginalHireDate": "2000-08-23T00:00:00", - "PhotoExempt": false, - "RegisteredDisabled": "NO", - "RequiresExitInterview": false, - "SeniorityDate": "2000-08-23T00:00:00", - "SocialSecurityNumber": "252013727", - "StartDate": "2000-08-23T00:00:00", - "TaxExempt": true, - "FirstTimeAccessEmailSentCount": 0, - "FirstTimeAccessVerificationAttempts": 0, - "SendFirstTimeAccessEmail": false, - "EmployeeBadge": { - "BadgeNumber": "33333", - "EffectiveStart": "2000-01-01T00:00:00" - }, - "LoginId": "mworker", - "HomeOrganization": { - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging", - "LongName": "Plant 1 - Packaging" - }, - "EmployeeNumber": "42199", - "BioSensitivityLevel": { - "XRefCode": "DEFAULT", - "ShortName": "Default", - "LongName": "Default" - }, - "XRefCode": "42199", - "CommonName": "Aaron", - "DisplayName": "Aaron Glover", - "FirstName": "Aaron", - "LastName": "Glover" - } -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee/GET-Employee-Details.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee/GET-Employee-Details.aspx) - -#### Create an employee - -We can use POST Employee operation with required parameters to create a new employee in Dayforce. 
- -**POST Employee** -```xml - - {$ctx:fieldAndValue} - {$ctx:isValidateOnly} - -``` - -**Properties** - -* isValidateOnly: When a TRUE value is used in this parameter, POST (hire and rehire ) and PATCH (employee update) operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by the POST Employee operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "employeeNumber": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "FirstName": "FSample", - "LastName": "LSample", - "XRefCode":"POST0090", - "BioExempt": false, - "BirthDate": "1990-05-12T00:00:00", - "Culture": { - "XRefCode": "en-US" - }, - "Gender": "M", - "HireDate": "2017-01-15T00:00:00", - "PhotoExempt": false, - "RequiresExitInterview": false, - "SocialSecurityNumber": "252012728", - "SendFirstTimeAccessEmail": false, - "FirstTimeAccessEmailSentCount": 0, - "FirstTimeAccessVerificationAttempts": 0, - "Addresses": { - "Items": [ - { - "Address1": "4110 Yonge St.", - "City": "North York", - "PostalCode": "M2P 2B7", - "Country": { - "XRefCode": "CAN" - }, - "State": { - "XRefCode": "ON" - }, - "ContactInformationType": { - "XRefCode": "PrimaryResidence" - }, - "EffectiveStart": "2017-01-15T00:00:00" - } - ] - } , - "Contacts": { - "Items": [ - { - "ContactInformationType": { - "XRefCode": "HomePhone" - }, - "ContactNumber":"4169872987", - "Country": { - "XRefCode": "CAN" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "ShowRejectedWarning": true, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "NumberOfVerificationRequests": 0 - } - ] - }, - "EmploymentStatuses": { - "Items": [ - { - - "EffectiveStart": "2017-01-15T00:00:00", - "EmploymentStatus": { - "XRefCode": "ACTIVE" - }, - "PayType": { - "XRefCode": "HourlyNon" - }, - "PayClass": { - "XRefCode": "FT" - }, - "PayGroup": { - "XRefCode": "CAN" - }, - "CreateShiftRotationShift": true, - "BaseRate": 10.25, - "EmploymentStatusReason": { - "XrefCode":"NEWHIRE" - } - } - ] - }, - "Roles": { - "Items": [ - { - - "Role": { - "XRefCode": "CAssociate" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "isDefault": true - } - ] - }, - "WorkAssignments": { - "Items": [ - { - "Position": { - "Department": { - "XRefCode": "6" - }, - "Job": { - "XRefCode": "7" - } - }, - "Location": { - "XRefCode": "500Operations" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": true, - "IsVirtual": false, - "EmploymentStatusReason": { - "XrefCode":"NEW ASSIGNMENT" - } - - } - ] - } -} -} -``` - -**Sample response** - -Given below is a sample response for the POST Employee operation. - -```json -{ - "ProcessResults": [ - { - "Code": "HR_Employee_ValidSSNRequired", - "Context": "Employee.SocialSecurityNumber", - "Level": "WARN", - "Message": "Valid National ID is required for employee" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee/POST-Employee.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee/POST-Employee.aspx) - -#### Update existing employee - -We can use PATCH employee operation to update an existing employee details. - - -**PATCH Employee** -```xml - - {$ctx:fieldAndValue} - {$ctx:xRefCode} - {$ctx:isValidateOnly} - -``` - -**Properties** - -* xRefCode: The unique identifier (external reference code) of the employee to be retrieved. 
The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly: When a TRUE value is used in this parameter, POST (hire and rehire ) and PATCH (employee update) operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by the PATCH Employee operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "employeeNumber": "42199", - "isValidateOnly": "true", - "xRefCode": "42199", - "fieldAndValue": { - "FirstName": "FSample", - "LastName": "LSample", - "XRefCode":"42199", - "BioExempt": false, - "BirthDate": "1990-05-12T00:00:00", - "Culture": { - "XRefCode": "en-US" - }, - "Gender": "M", - "HireDate": "2017-01-15T00:00:00", - "PhotoExempt": false, - "RequiresExitInterview": false, - "SocialSecurityNumber": "252012728", - "SendFirstTimeAccessEmail": false, - "FirstTimeAccessEmailSentCount": 0, - "FirstTimeAccessVerificationAttempts": 0, - "Addresses": { - "Items": [ - { - "Address1": "4110 Yonge St.", - "City": "North York", - "PostalCode": "M2P 2B7", - "Country": { - "XRefCode": "CAN" - }, - "State": { - "XRefCode": "ON" - }, - "ContactInformationType": { - "XRefCode": "PrimaryResidence" - }, - "EffectiveStart": "2017-01-15T00:00:00" - } - ] - } , - "Contacts": { - "Items": [ - { - "ContactInformationType": { - "XRefCode": "HomePhone" - }, - "ContactNumber":"4169872987", - "Country": { - "XRefCode": "CAN" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "ShowRejectedWarning": true, - "IsForSystemCommunications": false, - "IsPreferredContactMethod": false, - "NumberOfVerificationRequests": 0 - } - ] - }, - "EmploymentStatuses": { - "Items": [ - { - - "EffectiveStart": "2017-01-15T00:00:00", - "EmploymentStatus": { - "XRefCode": "ACTIVE" - }, - "PayType": { - "XRefCode": "HourlyNon" - }, - "PayClass": { - "XRefCode": "FT" - }, - "PayGroup": { - "XRefCode": "CAN" - }, - "CreateShiftRotationShift": true, - "BaseRate": 10.25, - "EmploymentStatusReason": { - "XrefCode":"NEWHIRE" - } - } - ] - }, - "Roles": { - "Items": [ - { - - "Role": { - "XRefCode": "CAssociate" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "isDefault": true - } - ] - }, - "WorkAssignments": { - "Items": [ - { - "Position": { - "Department": { - "XRefCode": "6" - }, - "Job": { - "XRefCode": "7" - } - }, - "Location": { - "XRefCode": "500Operations" - }, - "EffectiveStart": "2017-01-15T00:00:00", - "IsPAPrimaryWorkSite": false, - "IsPrimary": true, - "IsVirtual": false, - "EmploymentStatusReason": { - "XrefCode":"NEW ASSIGNMENT" - } - - } - ] - } -} -} -``` - -**Sample response** - -Given below is a sample response for the PATCH Employee operation. - -```json -{ - "ProcessResults": [ - { - "Code": "HR_Employee_ValidSSNRequired", - "Context": "Employee.SocialSecurityNumber", - "Level": "WARN", - "Message": "Valid National ID is required for employee" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/Employee/PATCH-Employee.aspx](https://developers.dayforce.com/Build/API-Explorer/Employee/PATCH-Employee.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 

1. Create a sample proxy as shown below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <!-- The original markup of this proxy was lost in this patch; the tags below are
              reconstructed around the {$ctx:xRefCode} lookup that survived, and the
              getEmployeeDetails operation name is assumed. -->
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.xRefCode)" name="xRefCode"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.getEmployeeDetails>
            <xRefCode>{$ctx:xRefCode}</xRefCode>
         </ceridiandayforce.getEmployeeDetails>
         <respond/>
      </inSequence>
   </target>
   <description/>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it (an xRefCode value is included so that the proxy above has a value to read):

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "employeeNumber": "42199",
  "xRefCode": "42199"
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns a JSON response similar to the one shown below:

```json
{
    "XRefCode": "42199"
}
```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/employment-eligibility-verification/i9order.md b/en/docs/reference/connectors/ceridiandayforce-connector/employment-eligibility-verification/i9order.md
deleted file mode 100644
index 2595ea528e..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/employment-eligibility-verification/i9order.md
+++ /dev/null
@@ -1,128 +0,0 @@
# Working with I-9 Order

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to update the I-9 employment eligibility of an employee.

| Operation | Description |
| ------------- |-------------|
|[PATCH I-9 Order](#updating-i-9-order)| Update I-9 employment eligibility verification order status. |

### Operation details

This section provides more details on each of the operations.

#### Updating I-9 Order
We can use the PATCH I-9 Order operation with the required parameters to update the status of an employee's I-9 employment eligibility verification order.

**PATCH I-9 Order**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     patchI9Order operation name is assumed. -->
<ceridiandayforce.patchI9Order>
    <i9OrderId>{$ctx:i9OrderId}</i9OrderId>
    <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
    <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
</ceridiandayforce.patchI9Order>
```

**Properties**

* i9OrderId (Mandatory): The unique identifier for the I-9 order on the I-9 partner's system. The value of this parameter needs to match the value for the I9OrderId property in the request body.
* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database.

**Sample request**

The following is a sample request that can be handled by this operation.

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "i9OrderId": "FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF",
  "isValidateOnly": "true",
  "fieldAndValue":
  {
    "I9OrderId": "FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF",
    "OrderStatusXRefCode": "PENDING EMPLOYER"
  }
}
```

**Sample response**

Dayforce returns HTTP Code 200.

**Related Dayforce documentation**

[https://developers.dayforce.com/Build/API-Explorer/Reporting-(1)/GET-Report-Metadata-for-a-list-of-reports.aspx](https://developers.dayforce.com/Build/API-Explorer/Reporting-(1)/GET-Report-Metadata-for-a-list-of-reports.aspx)
(sic)
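Because PATCH I-9 Order changes order status, it is worth validating the payload before applying it. The sketch below is hedged: the patchI9Order operation name is the same assumption made above, and with isValidateOnly set to true Dayforce checks the request without persisting anything.

```xml
<!-- Hedged sketch (operation name assumed): dry-run the status change first;
     isValidateOnly=true validates the payload without updating the database. -->
<ceridiandayforce.patchI9Order>
    <i9OrderId>FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF</i9OrderId>
    <isValidateOnly>true</isValidateOnly>
    <fieldAndValue>{"I9OrderId": "FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF", "OrderStatusXRefCode": "PENDING EMPLOYER"}</fieldAndValue>
</ceridiandayforce.patchI9Order>
```

Once the validate-only call returns HTTP Code 200, the same payload can be re-sent with isValidateOnly set to false to apply the change.

### Sample configuration

The following example illustrates how to connect to Dayforce with the `init` operation and the query operation.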

1. Create a sample proxy as shown below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="query"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <!-- The original markup of this proxy was lost in this patch; the tags below are
              reconstructed from this page's parameters, and the patchI9Order operation
              name is assumed. -->
         <log level="full" separator=","/>
         <property expression="json-eval($.username)" name="username"/>
         <property expression="json-eval($.password)" name="password"/>
         <property expression="json-eval($.clientNamespace)" name="clientNamespace"/>
         <property expression="json-eval($.apiVersion)" name="apiVersion"/>
         <property expression="json-eval($.i9OrderId)" name="i9OrderId"/>
         <property expression="json-eval($.isValidateOnly)" name="isValidateOnly"/>
         <property expression="json-eval($.fieldAndValue)" name="fieldAndValue"/>
         <ceridiandayforce.init>
            <username>{$ctx:username}</username>
            <password>{$ctx:password}</password>
            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
            <apiVersion>{$ctx:apiVersion}</apiVersion>
         </ceridiandayforce.init>
         <ceridiandayforce.patchI9Order>
            <i9OrderId>{$ctx:i9OrderId}</i9OrderId>
            <isValidateOnly>{$ctx:isValidateOnly}</isValidateOnly>
            <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
         </ceridiandayforce.patchI9Order>
         <respond/>
      </inSequence>
   </target>
   <description/>
</proxy>
```

2. Create a JSON file named query.json and copy the configurations given below to it:

```json
{
  "username": "DFWSTest",
  "password": "DFWSTest",
  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
  "apiVersion": "V1",
  "i9OrderId": "FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF",
  "isValidateOnly": "true",
  "fieldAndValue":
  {
    "I9OrderId": "FF6C2C9F-E3D7-42F1-90C6-C163F5C8B9DF",
    "OrderStatusXRefCode": "PENDING EMPLOYER"
  }
}
```

3. Replace the credentials with your values.

4. Execute the following curl command:

```bash
curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
```

5. Dayforce returns HTTP Code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/recruiting/jobpostings.md b/en/docs/reference/connectors/ceridiandayforce-connector/recruiting/jobpostings.md
deleted file mode 100644
index 46c0028620..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/recruiting/jobpostings.md
+++ /dev/null
@@ -1,556 +0,0 @@
# Working with Job Postings

[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)

### Overview

The following operations allow you to retrieve job postings.

| Operation | Description |
| ------------- |-------------|
|[GET Job Postings](#retrieving-job-postings)| Get the job postings available through the candidate portal. |

### Operation details

This section provides more details on each of the operations.

#### Retrieving Job Postings
We can use the GET Job Postings operation with the required parameters to search for job postings; a hedged usage sketch follows the property list below.

**GET Job Postings**
```xml
<!-- Tags reconstructed from the parameters documented on this page; the
     getJobPostings operation name is assumed. -->
<ceridiandayforce.getJobPostings>
    <companyName>{$ctx:companyName}</companyName>
    <parentCompanyName>{$ctx:parentCompanyName}</parentCompanyName>
    <internalJobBoardCode>{$ctx:internalJobBoardCode}</internalJobBoardCode>
    <includeActivePostingOnly>{$ctx:includeActivePostingOnly}</includeActivePostingOnly>
    <lastUpdateTimeFrom>{$ctx:lastUpdateTimeFrom}</lastUpdateTimeFrom>
    <lastUpdateTimeTo>{$ctx:lastUpdateTimeTo}</lastUpdateTimeTo>
    <datePostedFrom>{$ctx:datePostedFrom}</datePostedFrom>
    <datePostedTo>{$ctx:datePostedTo}</datePostedTo>
    <htmlDescription>{$ctx:htmlDescription}</htmlDescription>
</ceridiandayforce.getJobPostings>
```

**Properties**

* companyName (Optional - string): Company name. Example: XYZ Co.
* parentCompanyName (Optional - string): Parent company name. Example: Ceridian
* internalJobBoardCode (Optional - string): XRefCode of the job board. Example: CANDIDATEPORTAL
* includeActivePostingOnly (Optional - boolean): If true, inactive postings are excluded from the result. If false, the 'Last Update Time From' and 'Last Update Time To' parameters are required, and the range specified between them must not be larger than 1 month. Example: True
* lastUpdateTimeFrom (Optional - string): A starting timestamp of the last updated job posting. Example: 2017-01-01T13:24:56
* lastUpdateTimeTo (Optional - string): An ending timestamp of the last updated job posting. Example: 2017-02-01T13:24:56
* datePostedFrom (Optional - string): A starting timestamp of the job posting date. Example: 2017-01-01T13:24:56
* datePostedTo (Optional - string): An ending timestamp of the job posting date. Example: 2017-02-01T13:24:56
* htmlDescription (Optional - boolean): A flag that determines whether job descriptions are returned with HTML formatting or as plain text.
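For a candidate-portal feed, a typical refinement is to restrict the result to active postings updated within a known window. The sketch below is hedged: the getJobPostings operation name is the same assumption made above, and the static values are illustrative only.

```xml
<!-- Hedged sketch (values and operation name assumed): active postings on the
     CANDIDATEPORTAL board updated during January 2017, with plain-text descriptions. -->
<ceridiandayforce.getJobPostings>
    <internalJobBoardCode>CANDIDATEPORTAL</internalJobBoardCode>
    <includeActivePostingOnly>true</includeActivePostingOnly>
    <lastUpdateTimeFrom>2017-01-01T00:00:00</lastUpdateTimeFrom>
    <lastUpdateTimeTo>2017-02-01T00:00:00</lastUpdateTimeTo>
    <htmlDescription>false</htmlDescription>
</ceridiandayforce.getJobPostings>
```

**Sample request**

The following is a sample request that can be handled by this operation.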
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -[ - { - "Title": "Magasin 120 - Assistant gérant", - "Description": "Objectif de poste d’assistant gérant de magasin : soutient ses clients par la formation du personnel, de l’achat et de l’étalage des produits.\n Tâches reliées du poste d’assistant gérant de magasin :\n -Former le personnel du magasin ; orienter les nouvelles embauches vers les produits et des matériaux de formation ; offrir des séances de formation ; l'évaluer les résultats et les besoins d’apprentissage des associés en collaboration avec le gérant du magasin ; élaborer et appliquer les nouvelles formations produit. - Évalue la concurrence en visitant les magasins concurrents ; la collecte d'informations tels que le style, la qualité et les prix de marchandise compétitifs. - Achats d’inventaire via recherche de nouveaux produits ; anticiper l'intérêt des acheteurs ; négocier des réductions de prix de volume ; placer et accélérer les commandes ; confirmer la réception. - Attirer les clients par l’exposition de la marchandise en façade ; respecter les suggestions et les horaires d'étalage ; mise en place ou d’assemblage de propriétés d'étalages préfabriqués ; la préparation des marchandises affichées dans les fenêtres et vitrines et sur le plancher de vente. - Favoriser les ventes en présentant la marchandise et les produits aux clients. – Aider aux clients en fournissant des informations ; répondre aux questions ; obtenir la marchandise demandée ; effectuer les transactions de paiement ; la préparation des marchandises pour la livraison. – Préparer les soldes et la relation client en analysant les rapports et la catégorisation de renseignements sur les ventes ; identifier et étudier les plaintes et les suggestions de services rapportés par les clients. - Maintient l’environnement du magasin propre et sécuritaire par le développement et l'édition des itinéraires d'évacuation ; la détermination et la documentation des emplacements de matériaux et de produits chimiques potentiellement dangereux. - Maintient l'inventaire en cochant la marchandise pour déterminer les niveaux de stock ; anticiper la demande. - Préparer des rapports par la collecte, l'analyse et la synthèse des informations. - Maintient un service de qualité en établissant et en appliquant des normes de l'organisation. - Maintenir une connaissance professionnelle et technique en participant à des ateliers éducatifs ; revue des publications professionnelles ; établir des réseaux personnels ; l'analyse comparative des meilleurs pratiques state-of-the-art ; participation à des associations professionnelles. 
- Contribue à l'effort d'équipe.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/9", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=9", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:21:51.937", - "ReferenceNumber": 9, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 5, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 120 - Assistant Manager", - "Description": "Job Purpose:\n • Serves customers by training staff; purchasing and displaying products.\nJob Duties:\n • Trains store staff by reviewing and revising orientation to products and sales training materials; delivering training sessions; reviewing staff job results and learning needs with retail store manager; developing and implementing new product training.\n • Evaluates competition by visiting competing stores; gathering information such as style, quality, and prices of competitive merchandise.\n • Purchases inventory by researching emerging products; anticipating buyer interest; negotiating volume price breaks; placing and expediting orders; verifying receipt.\n • Attracts customers by originating display ideas; following display suggestions or schedules; constructing or assembling prefabricated display properties; producing merchandise displays in windows and showcases, and on sales floor.\n • Promotes sales by demonstrating merchandise and products to customers.\n • Helps customers by providing information; answering questions; obtaining merchandise requested; completing payment transactions; preparing merchandise for delivery.\n • Prepares sales and customer relations reports by analyzing and categorizing sales information; identifying and investigating customer complaints and service suggestions.\n • Maintains a safe and clean store environment by developing and publishing evacuation routes; determining and documenting locations of potentially dangerous materials and chemicals.\n • Maintains inventory by checking merchandise to determine inventory levels; anticipating customer demand.\n • Prepares reports by collecting, analyzing, and summarizing information.\n • Maintains quality service by establishing and enforcing organization standards.\n • Maintains professional and technical knowledge by attending educational workshops; reviewing professional publications; establishing personal networks; benchmarking state-of-the-art practices; participating in professional societies.\n • Contributes to team effort by accomplishing related results as needed.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - 
"ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/9", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=9", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:21:51.937", - "ReferenceNumber": 9, - "CultureCode": "en-US", - "ParentRequisitionCode": 5, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 110 – Associé aux accessoires", - "Description": "But Vente aux détails : Sert clients, aide à sélectionner des produits.\n\nTâches reliées du poste de Vente aux détails : -\nAccueillir les clients et offrir de l'assistance. - Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits. - Aide à la clientèle à\neffectuer des sélections en bâtissant la confiance, offrant des suggestions\net des opinions. – Documente les ventes par la mise à jour des profils\nclient. - Traite les paiements ; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des rabais pour\nclients privilégiés. - Contribue à l'effort d'équipe.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/17", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=17", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:19:41.263", - "ReferenceNumber": 17, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 3, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 110 - Accessories Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales 
Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/17", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=17", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:19:41.263", - "ReferenceNumber": 17, - "CultureCode": "en-US", - "ParentRequisitionCode": 3, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 110 - Associé - Hommes", - "Description": "But Vente aux détails : Sert clients en les aidant à\nsélectionner des produits .\n\nTâches reliées du poste de Vente aux détails : - Accueillir\nles clients en leur offrant une assistance . - Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits . - Aide la clientèle à effectuer\ndes sélections en bâtissant la confiance des clients ; offrant des suggestions\net des opinions . – Documente les ventes en faisant la mise à jour des profils\nclient. - Traite les paiements; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des soldes pour clients\nprivilégiés. 
- Contribue à l'effort d'équipe en accomplissant des résultats\nconnexes au besoin .\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/19", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=19", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:11:50.573", - "ReferenceNumber": 19, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 2, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 110 - Men's Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/19", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=19", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:11:50.573", - "ReferenceNumber": 19, - "CultureCode": "en-US", - "ParentRequisitionCode": 2, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 120 - Associés des dames", - "Description": "But Vente aux détails : Sert clients en les aidant à\nsélectionner des produits .\n\nTâches reliées du poste de Vente aux détails : - Accueillir\nles clients en leur offrant une assistance . 
- Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits . - Aide la clientèle à effectuer\ndes sélections en bâtissant la confiance des clients ; offrant des suggestions\net des opinions . – Documente les ventes en faisant la mise à jour des profils\nclient. - Traite les paiements; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des soldes pour clients\nprivilégiés. - Contribue à l'effort d'équipe en accomplissant des résultats\nconnexes au besoin .\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/21", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=21", - "AddressLine1": "11720 Amberpark Drive", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "30009", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:17:45.723", - "ReferenceNumber": 21, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 1, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 120 - Womens Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/21", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=21", - "AddressLine1": "11720 Amberpark Drive", 
- "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "30009", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:17:45.723", - "ReferenceNumber": 21, - "CultureCode": "en-US", - "ParentRequisitionCode": 1, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 118 - Store Manager", - "Description": "Store Manager Job Purpose: Serves customers by providing merchandise; supervising staff.\nRetail Store Manager Job Duties: - Completes store operational requirements by scheduling and assigning employees; following up on work results. - Maintains store staff by recruiting, selecting, orienting, and training employees. - Maintains store staff job results by coaching, counseling, and disciplining employees; planning, monitoring, and appraising job results. - Achieves financial objectives by preparing an annual budget; scheduling expenditures; analyzing variances; initiating corrective actions. - Identifies current and future customer requirements by establishing rapport with potential and actual customers and other persons in a position to understand service requirements. - Ensures availability of merchandise and services by approving contracts; maintaining inventories. - Formulates pricing policies by reviewing merchandising activities; determining additional needed sales promotion; authorizing clearance sales; studying trends. - Markets merchandise by studying advertising, sales promotion, and display plans; analyzing operating and financial statements for profitability ratios. - Secures merchandise by implementing security systems and measures. - Protects employees and customers by providing a safe and clean store environment. - Maintains the stability and reputation of the store by complying with legal requirements. - Determines marketing strategy changes by reviewing operating and financial statements and departmental sales records. - Maintains professional and technical knowledge by attending educational workshops; reviewing professional publications; establishing personal networks; participating in professional societies. - Maintains operations by initiating, coordinating, and enforcing program, operational, and personnel policies and procedures. 
- Contributes to team effort by accomplishing related results as needed.",
-      "ClientSiteName": "Client Careers Site",
-      "ClientSiteXRefCode": "CANDIDATEPORTAL",
-      "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/33",
-      "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=33",
-      "City": "Philadelphia",
-      "State": "PA",
-      "Country": "USA",
-      "PostalCode": "",
-      "DatePosted": "2014-06-16T00:00:00",
-      "LastUpdated": "2016-04-15T11:54:07.283",
-      "ReferenceNumber": 33,
-      "CultureCode": "en-US",
-      "ParentRequisitionCode": 6,
-      "JobType": 0,
-      "TravelRequired": 0
-    },
-    {
-      "Title": "Team Member",
-      "Description": "Team Member Job Purpose: Serves customers by helping them select products and delivering a great in-store experience.\n\nTeam Member Job Duties:\n- Welcomes customers by greeting them and offering them assistance.\n- Provides expert advice on the full range of brands and products available.\n- Offers solutions and in-store experiences that build customer confidence.\n- Documents sale by creating or updating customer profile records.\n- Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n- Keeps customers informed by notifying them of sales and future products of potential interest.\n- Boosts store sales through refill and store presentation.\n- Works with the team to achieve store goals and objectives together.\n\nTeam Member Skills and Experience:\n- Recommended 2+ years of industry experience.\n- Some Customer Service experience required.\n- Strong communication skills and ability to explain product features and functionalities.\n- Experience with Point of Sale systems is a bonus.\n\nAs a Team Member, you will have the opportunity to be part of a dynamic and engaging work environment. The fast paced environment and strong company culture fosters both personal and professional growth. With consistent growth, our company provides many opportunities for advancement and encourages cross-functional experience.",
-      "ClientSiteName": "Client Careers Site",
-      "ClientSiteXRefCode": "CANDIDATEPORTAL",
-      "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-GB/ddn/Posting/View/65",
-      "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-GB/ddn/JobApplication?postingId=65",
-      "AddressLine1": "54 Park St",
-      "City": "Sydney",
-      "State": "NSW",
-      "Country": "AU",
-      "PostalCode": "2000",
-      "DatePosted": "2017-08-03T00:00:00",
-      "LastUpdated": "2017-08-03T14:34:14.993",
-      "ReferenceNumber": 65,
-      "CultureCode": "en-GB",
-      "ParentRequisitionCode": 12,
-      "JobType": 0,
-      "TravelRequired": 0
-    }
-]
-```
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Recruiting/Get-Job-Postings.aspx](https://developers.dayforce.com/Build/API-Explorer/Recruiting/Get-Job-Postings.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init and query operations.
-
-1. Create a sample proxy as below:
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Reconstructed: the proxy's original tags were stripped from this diff; connector element names are assumed and should be verified. -->
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="query"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <ceridiandayforce.init>
-            <username>{$ctx:username}</username>
-            <password>{$ctx:password}</password>
-            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-            <apiVersion>{$ctx:apiVersion}</apiVersion>
-         </ceridiandayforce.init>
-         <ceridiandayforce.getJobPostings/>
-         <respond/>
-      </inSequence>
-      <outSequence/>
-      <faultSequence/>
-   </target>
-</proxy>
-```
-
-2. Create a JSON file named query.json and copy the configuration given below into it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200 with the following response body:
-
-```json
-[
-  {
-    "Title": "Magasin 120 - Assistant gérant",
-    "Description": "Objectif de poste d’assistant gérant de magasin : soutient ses clients par la formation du personnel, de l’achat et de l’étalage des produits.\n Tâches reliées du poste d’assistant gérant de magasin :\n -Former le personnel du magasin ; orienter les nouvelles embauches vers les produits et des matériaux de formation ; offrir des séances de formation ; l'évaluer les résultats et les besoins d’apprentissage des associés en collaboration avec le gérant du magasin ; élaborer et appliquer les nouvelles formations produit. - Évalue la concurrence en visitant les magasins concurrents ; la collecte d'informations tels que le style, la qualité et les prix de marchandise compétitifs. - Achats d’inventaire via recherche de nouveaux produits ; anticiper l'intérêt des acheteurs ; négocier des réductions de prix de volume ; placer et accélérer les commandes ; confirmer la réception. - Attirer les clients par l’exposition de la marchandise en façade ; respecter les suggestions et les horaires d'étalage ; mise en place ou d’assemblage de propriétés d'étalages préfabriqués ; la préparation des marchandises affichées dans les fenêtres et vitrines et sur le plancher de vente. - Favoriser les ventes en présentant la marchandise et les produits aux clients. – Aider aux clients en fournissant des informations ; répondre aux questions ; obtenir la marchandise demandée ; effectuer les transactions de paiement ; la préparation des marchandises pour la livraison. – Préparer les soldes et la relation client en analysant les rapports et la catégorisation de renseignements sur les ventes ; identifier et étudier les plaintes et les suggestions de services rapportés par les clients. - Maintient l’environnement du magasin propre et sécuritaire par le développement et l'édition des itinéraires d'évacuation ; la détermination et la documentation des emplacements de matériaux et de produits chimiques potentiellement dangereux. - Maintient l'inventaire en cochant la marchandise pour déterminer les niveaux de stock ; anticiper la demande. - Préparer des rapports par la collecte, l'analyse et la synthèse des informations. - Maintient un service de qualité en établissant et en appliquant des normes de l'organisation. - Maintenir une connaissance professionnelle et technique en participant à des ateliers éducatifs ; revue des publications professionnelles ; établir des réseaux personnels ; l'analyse comparative des meilleurs pratiques state-of-the-art ; participation à des associations professionnelles.
- Contribue à l'effort d'équipe.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/9", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=9", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:21:51.937", - "ReferenceNumber": 9, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 5, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 120 - Assistant Manager", - "Description": "Job Purpose:\n • Serves customers by training staff; purchasing and displaying products.\nJob Duties:\n • Trains store staff by reviewing and revising orientation to products and sales training materials; delivering training sessions; reviewing staff job results and learning needs with retail store manager; developing and implementing new product training.\n • Evaluates competition by visiting competing stores; gathering information such as style, quality, and prices of competitive merchandise.\n • Purchases inventory by researching emerging products; anticipating buyer interest; negotiating volume price breaks; placing and expediting orders; verifying receipt.\n • Attracts customers by originating display ideas; following display suggestions or schedules; constructing or assembling prefabricated display properties; producing merchandise displays in windows and showcases, and on sales floor.\n • Promotes sales by demonstrating merchandise and products to customers.\n • Helps customers by providing information; answering questions; obtaining merchandise requested; completing payment transactions; preparing merchandise for delivery.\n • Prepares sales and customer relations reports by analyzing and categorizing sales information; identifying and investigating customer complaints and service suggestions.\n • Maintains a safe and clean store environment by developing and publishing evacuation routes; determining and documenting locations of potentially dangerous materials and chemicals.\n • Maintains inventory by checking merchandise to determine inventory levels; anticipating customer demand.\n • Prepares reports by collecting, analyzing, and summarizing information.\n • Maintains quality service by establishing and enforcing organization standards.\n • Maintains professional and technical knowledge by attending educational workshops; reviewing professional publications; establishing personal networks; benchmarking state-of-the-art practices; participating in professional societies.\n • Contributes to team effort by accomplishing related results as needed.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - 
"ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/9", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=9", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:21:51.937", - "ReferenceNumber": 9, - "CultureCode": "en-US", - "ParentRequisitionCode": 5, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 110 – Associé aux accessoires", - "Description": "But Vente aux détails : Sert clients, aide à sélectionner des produits.\n\nTâches reliées du poste de Vente aux détails : -\nAccueillir les clients et offrir de l'assistance. - Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits. - Aide à la clientèle à\neffectuer des sélections en bâtissant la confiance, offrant des suggestions\net des opinions. – Documente les ventes par la mise à jour des profils\nclient. - Traite les paiements ; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des rabais pour\nclients privilégiés. - Contribue à l'effort d'équipe.\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/17", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=17", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:19:41.263", - "ReferenceNumber": 17, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 3, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 110 - Accessories Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales 
Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/17", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=17", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:19:41.263", - "ReferenceNumber": 17, - "CultureCode": "en-US", - "ParentRequisitionCode": 3, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 110 - Associé - Hommes", - "Description": "But Vente aux détails : Sert clients en les aidant à\nsélectionner des produits .\n\nTâches reliées du poste de Vente aux détails : - Accueillir\nles clients en leur offrant une assistance . - Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits . - Aide la clientèle à effectuer\ndes sélections en bâtissant la confiance des clients ; offrant des suggestions\net des opinions . – Documente les ventes en faisant la mise à jour des profils\nclient. - Traite les paiements; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des soldes pour clients\nprivilégiés. 
- Contribue à l'effort d'équipe en accomplissant des résultats\nconnexes au besoin .\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/19", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=19", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:11:50.573", - "ReferenceNumber": 19, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 2, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 110 - Men's Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/19", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=19", - "City": "San Francisco", - "State": "CA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:11:50.573", - "ReferenceNumber": 19, - "CultureCode": "en-US", - "ParentRequisitionCode": 2, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Magasin 120 - Associés des dames", - "Description": "But Vente aux détails : Sert clients en les aidant à\nsélectionner des produits .\n\nTâches reliées du poste de Vente aux détails : - Accueillir\nles clients en leur offrant une assistance . 
- Dirige clients vers les racks et\nles compteurs ; leur suggère des articles. - Conseille les clients en\nfournissant des informations sur les produits . - Aide la clientèle à effectuer\ndes sélections en bâtissant la confiance des clients ; offrant des suggestions\net des opinions . – Documente les ventes en faisant la mise à jour des profils\nclient. - Traite les paiements; traitement des chèques , espèces, et carte de\nmagasin ou autre cartes de crédit . – Avise les clients des soldes pour clients\nprivilégiés. - Contribue à l'effort d'équipe en accomplissant des résultats\nconnexes au besoin .\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/Posting/View/21", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/fr-CA/ddn/JobApplication?postingId=21", - "AddressLine1": "11720 Amberpark Drive", - "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "30009", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:17:45.723", - "ReferenceNumber": 21, - "CultureCode": "fr-CA", - "ParentRequisitionCode": 1, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 120 - Womens Associate", - "Description": "Job Purpose:\n • Serves customers by helping them select products.\nJob Duties:\n • Welcomes customers by greeting them; offering them assistance.\n • Directs customers by escorting them to racks and counters; suggesting items.\n • Advises customers by providing information on products.\n • Helps customer make selections by building customer confidence; offering suggestions and opinions.\n • Documents sale by creating or updating customer profile records.\n • Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n • Keeps clientele informed by notifying them of preferred customer sales and future merchandise of potential interest.\n • Contributes to team effort by accomplishing related results as needed.\nRetail Salesperson Skills and Qualifications:\n • Listening\n • Customer Service\n • Meeting Sales Goals\n • Selling to Customer Needs\n • Product Knowledge\n • Verbal Communication\n\n\nCareers at our Company...\n\nWhether you are taking the first step toward beginning your career or are a professional looking for an exciting new opportunity, we offer a wide array of challenging and creative paths.\nSee Other Job Opportunities", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/21", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=21", - "AddressLine1": "11720 Amberpark Drive", 
- "City": "Alpharetta", - "State": "GA", - "Country": "USA", - "PostalCode": "30009", - "DatePosted": "2014-06-13T00:00:00", - "LastUpdated": "2017-01-16T14:17:45.723", - "ReferenceNumber": 21, - "CultureCode": "en-US", - "ParentRequisitionCode": 1, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Store 118 - Store Manager", - "Description": "Store Manager Job Purpose: Serves customers by providing merchandise; supervising staff.\nRetail Store Manager Job Duties: - Completes store operational requirements by scheduling and assigning employees; following up on work results. - Maintains store staff by recruiting, selecting, orienting, and training employees. - Maintains store staff job results by coaching, counseling, and disciplining employees; planning, monitoring, and appraising job results. - Achieves financial objectives by preparing an annual budget; scheduling expenditures; analyzing variances; initiating corrective actions. - Identifies current and future customer requirements by establishing rapport with potential and actual customers and other persons in a position to understand service requirements. - Ensures availability of merchandise and services by approving contracts; maintaining inventories. - Formulates pricing policies by reviewing merchandising activities; determining additional needed sales promotion; authorizing clearance sales; studying trends. - Markets merchandise by studying advertising, sales promotion, and display plans; analyzing operating and financial statements for profitability ratios. - Secures merchandise by implementing security systems and measures. - Protects employees and customers by providing a safe and clean store environment. - Maintains the stability and reputation of the store by complying with legal requirements. - Determines marketing strategy changes by reviewing operating and financial statements and departmental sales records. - Maintains professional and technical knowledge by attending educational workshops; reviewing professional publications; establishing personal networks; participating in professional societies. - Maintains operations by initiating, coordinating, and enforcing program, operational, and personnel policies and procedures. 
- Contributes to team effort by accomplishing related results as needed.", - "ClientSiteName": "Client Careers Site", - "ClientSiteXRefCode": "CANDIDATEPORTAL", - "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s", - "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/Posting/View/33", - "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-US/ddn/JobApplication?postingId=33", - "City": "Philadelphia", - "State": "PA", - "Country": "USA", - "PostalCode": "", - "DatePosted": "2014-06-16T00:00:00", - "LastUpdated": "2016-04-15T11:54:07.283", - "ReferenceNumber": 33, - "CultureCode": "en-US", - "ParentRequisitionCode": 6, - "JobType": 0, - "TravelRequired": 0 - }, - { - "Title": "Team Member", - "Description": "Team Member Job Purpose: Serves customers by helping them select products and delivering a great in-store experience.\n\nTeam Member Job Duties:\n- Welcomes customers by greeting them and offering them assistance.\n- Provides expert advice on the full range of brands and products available.\n- Offers solutions and in-store experiences that build customer confidence.\n- Documents sale by creating or updating customer profile records.\n- Processes payments by totaling purchases; processing checks, cash, and store or other credit cards.\n- Keeps customers informed by notifying them of sales and future products of potential interest.\n- Boosts store sales through refill and store presentation.\n- Works with the team to achieve store goals and objectives together.\n\nTeam Member Skills and Experience:\n- Recommended 2+ years of industry experience.\n- Some Customer Service experience required.\n- Strong communication skills and ability to explain product features and functionalities.\n- Experience with Point of Sale systems is a bonus.\n\nAs a Team Member, you will have the opportunity to be part of a dynamic and engaging work environment. The fast paced environment and strong company culture fosters both personal and professional growth. 
With consistent growth, our company provides many opportunities for advancement and encourages cross-functional experience.",
-      "ClientSiteName": "Client Careers Site",
-      "ClientSiteXRefCode": "CANDIDATEPORTAL",
-      "CompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "ParentCompanyName": "XYZ Co..PRDemo3 - April18th-2017-Hf7\rUpdated life event form for emps\rUpdated Pay group pp#s",
-      "JobDetailsUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-GB/ddn/Posting/View/65",
-      "ApplyUrl": "https://usconfigr57.dayforcehcm.com/CandidatePortal/en-GB/ddn/JobApplication?postingId=65",
-      "AddressLine1": "54 Park St",
-      "City": "Sydney",
-      "State": "NSW",
-      "Country": "AU",
-      "PostalCode": "2000",
-      "DatePosted": "2017-08-03T00:00:00",
-      "LastUpdated": "2017-08-03T14:34:14.993",
-      "ReferenceNumber": 65,
-      "CultureCode": "en-GB",
-      "ParentRequisitionCode": 12,
-      "JobType": 0,
-      "TravelRequired": 0
-    }
-]
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforalistofreports.md b/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforalistofreports.md
deleted file mode 100644
index 679b15b6bd..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforalistofreports.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# Working with Report Metadata for a list of reports
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve Report Metadata for a list of reports.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Report Metadata for a list of reports](#retrieving-report-metadata-for-a-list-of-reports)| Retrieve base information for all reports available via Web Services. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Report Metadata for a list of reports
-We can use the GET Report Metadata for a list of reports operation to retrieve base metadata for every report available via web services.
-
-**GET Report Metadata for a list of reports**
-```xml
-<!-- Reconstructed: the original tag was stripped from this snippet; the element name is assumed. -->
-<ceridiandayforce.getReportMetadata/>
-```
-
-**Properties**
-
-There are no properties.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-  "Data": [
-    {
-      "Name": "API-Payroll Earning and hours Detail",
-      "XRefCode": "Payroll_Earning_Hours_Detail",
-      "MaxRows": 20000
-    },
-    {
-      "Name": "API - candidates",
-      "XRefCode": "API-candidates",
-      "MaxRows": 20000
-    }
-  ]
-}
-```
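-
-Since each entry in this response carries the report's XRefCode (the value that the per-report metadata operation described later takes as input), it can be handy to list just those codes. A minimal sketch that calls the proxy defined in the sample configuration below and filters the result, assuming jq is installed:
-
-```bash
-# List the XRefCode of every report returned by the query proxy
-curl -s http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json | jq -r '.Data[].XRefCode'
-```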
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Report-Metadata-for-a-list-of-reports.aspx](https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Report-Metadata-for-a-list-of-reports.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init and query operations.
-
-1. Create a sample proxy as below:
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Reconstructed: the proxy's original tags were stripped from this diff; connector element names are assumed and should be verified. -->
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="query"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <ceridiandayforce.init>
-            <username>{$ctx:username}</username>
-            <password>{$ctx:password}</password>
-            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-            <apiVersion>{$ctx:apiVersion}</apiVersion>
-         </ceridiandayforce.init>
-         <ceridiandayforce.getReportMetadata/>
-         <respond/>
-      </inSequence>
-      <outSequence/>
-      <faultSequence/>
-   </target>
-</proxy>
-```
-
-2. Create a JSON file named query.json and copy the configuration given below into it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200 with the following response body:
-
-```json
-{
-  "Data": [
-    {
-      "Name": "API-Payroll Earning and hours Detail",
-      "XRefCode": "Payroll_Earning_Hours_Detail",
-      "MaxRows": 20000
-    },
-    {
-      "Name": "API - candidates",
-      "XRefCode": "API-candidates",
-      "MaxRows": 20000
-    }
-  ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforaspecificreport.md b/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforaspecificreport.md
deleted file mode 100644
index 1607a275bc..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reportmetadataforaspecificreport.md
+++ /dev/null
@@ -1,291 +0,0 @@
-# Working with Report Metadata for a specific report
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve the metadata of a report.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Report Metadata for a specific report](#retrieving-report-metadata-for-a-specific-report)| Get detailed information about a specific report including its column metadata, the list of its filter parameters and the different values that can populate these parameters. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Report Metadata for a specific report
-We can use the GET Report Metadata for a specific report operation with the required parameters to retrieve the metadata of a single report.
-
-**GET Report Metadata for a specific report**
-```xml
-<!-- Reconstructed: the original tags were stripped from this snippet; the element name is assumed. -->
-<ceridiandayforce.getReportMetadataForSpecificReport>
-    <xRefCode>{$ctx:xRefCode}</xRefCode>
-</ceridiandayforce.getReportMetadataForSpecificReport>
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the report to be retrieved. The value provided must be an exact match for a report; otherwise, a bad request (400) error will be returned.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "Payroll_Earning_Hours_Detail"
-}
-```
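-
-The xRefCode above must exactly match an XRefCode returned by the list operation described earlier. For example, swapping in the code of the second report from that sample response would retrieve the metadata of the candidates report instead; the payload is otherwise identical:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "API-candidates"
-}
-```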
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-  "Data": [
-    {
-      "Name": "API-Payroll Earning and hours Detail",
-      "XRefCode": "Payroll_Earning_Hours_Detail",
-      "MaxRows": 20000,
-      "ColumnMetadata": [
-        {
-          "CodeName": "Employee.DisplayName",
-          "DisplayName": "Employee",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunResultPermanent.EmployeeNumber",
-          "DisplayName": "Employee Number",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "OrgUnit.ShortName",
-          "DisplayName": "Location",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.PayDate",
-          "DisplayName": "Pay Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "Job.ShortName",
-          "DisplayName": "Job Name",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PREarningCode.ShortName",
-          "DisplayName": "Earning Code Name",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Rate",
-          "DisplayName": "Earning Rate",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Units",
-          "DisplayName": "Earning Hours",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Amount",
-          "DisplayName": "Earning Earning",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekStartDate",
-          "DisplayName": "Week Start Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekEndDate",
-          "DisplayName": "Week End Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekNumber",
-          "DisplayName": "Week Number",
-          "DataType": "Integer"
-        }
-      ],
-      "Parameters": [
-        {
-          "Name": "@EffectiveStart",
-          "DisplayName": "@EffectiveStart",
-          "ReportParameterMetadataId": "32683c04-2d51-4407-ba11-6634f1ae6038",
-          "DataType": "DateTime",
-          "DefaultValue": "2/7/2020 12:00:00 AM",
-          "IsRequired": true
-        },
-        {
-          "Name": "@EffectiveEnd",
-          "DisplayName": "@EffectiveEnd",
-          "ReportParameterMetadataId": "32683c04-2d51-4407-ba11-6634f1ae6039",
-          "DataType": "DateTime",
-          "IsRequired": false
-        }
-      ]
-    }
-  ]
-}
-```
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Report-Metadata-for-one-reports.aspx](https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Report-Metadata-for-one-reports.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init and query operations.
-
-1. Create a sample proxy as below:
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Reconstructed: the proxy's original tags were stripped from this diff; connector element names are assumed and should be verified. -->
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="query"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <ceridiandayforce.init>
-            <username>{$ctx:username}</username>
-            <password>{$ctx:password}</password>
-            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-            <apiVersion>{$ctx:apiVersion}</apiVersion>
-         </ceridiandayforce.init>
-         <ceridiandayforce.getReportMetadataForSpecificReport>
-            <xRefCode>{$ctx:xRefCode}</xRefCode>
-         </ceridiandayforce.getReportMetadataForSpecificReport>
-         <respond/>
-      </inSequence>
-      <outSequence/>
-      <faultSequence/>
-   </target>
-</proxy>
-```
-
-2. Create a JSON file named query.json and copy the configuration given below into it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "Payroll_Earning_Hours_Detail"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200 with the following response body:
-
-```json
-{
-  "Data": [
-    {
-      "Name": "API-Payroll Earning and hours Detail",
-      "XRefCode": "Payroll_Earning_Hours_Detail",
-      "MaxRows": 20000,
-      "ColumnMetadata": [
-        {
-          "CodeName": "Employee.DisplayName",
-          "DisplayName": "Employee",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunResultPermanent.EmployeeNumber",
-          "DisplayName": "Employee Number",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "OrgUnit.ShortName",
-          "DisplayName": "Location",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.PayDate",
-          "DisplayName": "Pay Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "Job.ShortName",
-          "DisplayName": "Job Name",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PREarningCode.ShortName",
-          "DisplayName": "Earning Code Name",
-          "DataType": "String"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Rate",
-          "DisplayName": "Earning Rate",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Units",
-          "DisplayName": "Earning Hours",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.Amount",
-          "DisplayName": "Earning Earning",
-          "DataType": "Decimal"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekStartDate",
-          "DisplayName": "Week Start Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekEndDate",
-          "DisplayName": "Week End Date",
-          "DataType": "Date"
-        },
-        {
-          "CodeName": "PRPayRunEarningPermanent.WeekNumber",
-          "DisplayName": "Week Number",
-          "DataType": "Integer"
-        }
-      ],
-      "Parameters": [
-        {
-          "Name": "@EffectiveStart",
-          "DisplayName": "@EffectiveStart",
-          "ReportParameterMetadataId": "32683c04-2d51-4407-ba11-6634f1ae6038",
-          "DataType": "DateTime",
-          "DefaultValue": "2/7/2020 12:00:00 AM",
-          "IsRequired": true
-        },
-        {
-          "Name": "@EffectiveEnd",
-          "DisplayName": "@EffectiveEnd",
-          "ReportParameterMetadataId": "32683c04-2d51-4407-ba11-6634f1ae6039",
-          "DataType": "DateTime",
-          "IsRequired": false
-        }
-      ]
-    }
-  ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reports.md b/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reports.md
deleted file mode 100644
index e29c027252..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/reporting/reports.md
+++ /dev/null
@@ -1,171 +0,0 @@
-# Working with Reports
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve reports.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Reports](#retrieving-reports)| Run a report and receive its results via web services. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Reports
-We can use the GET Reports operation with the required parameters to run a report and retrieve its results.
-
-**GET Reports**
-```xml
-<!-- Reconstructed: the original tags were stripped from this snippet; the element name is assumed. -->
-<ceridiandayforce.getReports>
-    <xRefCode>{$ctx:xRefCode}</xRefCode>
-    <pageSize>{$ctx:pageSize}</pageSize>
-    <reportParameters>{$ctx:reportParameters}</reportParameters>
-</ceridiandayforce.getReports>
-```
-
-**Properties**
-
-* xRefCode (Mandatory - string): The unique identifier (external reference code) of the report to be retrieved. The value provided must be an exact match for a report; otherwise, a bad request (400) error will be returned.
-* pageSize (Optional - integer): The number of records returned per page in the paginated response.
-* reportParameters (Optional - object): A list of key-value pairs for reports that take user-supplied parameter values as input (see the sketch after this list).
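-
-The parameter names a report accepts are the Name values advertised by the report metadata operations described earlier (for example, @EffectiveStart and @EffectiveEnd for Payroll_Earning_Hours_Detail). A request that supplies them might look like the sketch below; the exact JSON shape expected for reportParameters is an assumption here and should be verified against the Dayforce documentation:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "Payroll_Earning_Hours_Detail",
-  "pageSize": "1",
-  "reportParameters": {
-    "@EffectiveStart": "2020-01-01T00:00:00",
-    "@EffectiveEnd": "2020-02-01T00:00:00"
-  }
-}
-```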
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "Payroll_Earning_Hours_Detail",
-  "pageSize": "1"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-  "Data": {
-    "XRefCode": "Payroll_Earning_Hours_Detail",
-    "Rows": [
-      {
-        "Employee_DisplayName": "Gary Shy",
-        "PRPayRunResultPermanent_EmployeeNumber": "8965594",
-        "OrgUnit_ShortName": "Plant 2 - Assembly 1",
-        "PRPayRunEarningPermanent_PayDate": "2016-01-05T00:00:00.0000000",
-        "Job_ShortName": "Process Technician",
-        "PREarningCode_ShortName": "Regular",
-        "PRPayRunEarningPermanent_Rate": "14.00000",
-        "PRPayRunEarningPermanent_Units": "8.00000",
-        "PRPayRunEarningPermanent_Amount": "****",
-        "PRPayRunEarningPermanent_WeekStartDate": "2015-12-27T00:00:00.0000000",
-        "PRPayRunEarningPermanent_WeekEndDate": "2016-01-02T00:00:00.0000000",
-        "PRPayRunEarningPermanent_WeekNumber": "1"
-      }
-    ]
-  },
-  "Paging": {
-    "Next": "https://usconfigr57.dayforcehcm.com:443/Api/ddn/V1/Reports/Payroll_Earning_Hours_Detail?cursor=XQNm%252Fy8QDwwOTVmwS55YIR9dxnzR39EsaiqKsIKTt6dOMJg%252Fbsgm%252B31dDpM5RlnJ"
-  }
-}
-```
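-
-Note that Paging.Next in the response is a ready-made cursor URL for the next page of rows; a client keeps following it until the response no longer includes a Next link. A rough sketch of fetching the next page directly from Dayforce (basic authentication is assumed here purely for illustration):
-
-```bash
-# Follow the cursor link returned in Paging.Next; repeat until no Next link is returned
-curl -u DFWSTest:DFWSTest "https://usconfigr57.dayforcehcm.com:443/Api/ddn/V1/Reports/Payroll_Earning_Hours_Detail?cursor=XQNm%252Fy8QDwwOTVmwS55YIR9dxnzR39EsaiqKsIKTt6dOMJg%252Fbsgm%252B31dDpM5RlnJ"
-```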
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Reports.aspx](https://developers.dayforce.com/Build/API-Explorer/Reporting/GET-Reports.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init and query operations.
-
-1. Create a sample proxy as below:
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Reconstructed: the proxy's original tags were stripped from this diff; connector element names are assumed and should be verified. -->
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="query"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <ceridiandayforce.init>
-            <username>{$ctx:username}</username>
-            <password>{$ctx:password}</password>
-            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-            <apiVersion>{$ctx:apiVersion}</apiVersion>
-         </ceridiandayforce.init>
-         <ceridiandayforce.getReports>
-            <xRefCode>{$ctx:xRefCode}</xRefCode>
-            <pageSize>{$ctx:pageSize}</pageSize>
-         </ceridiandayforce.getReports>
-         <respond/>
-      </inSequence>
-      <outSequence/>
-      <faultSequence/>
-   </target>
-</proxy>
-```
-
-2. Create a JSON file named query.json and copy the configuration given below into it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "Payroll_Earning_Hours_Detail",
-  "pageSize": "1"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200 with the following response body:
-
-```json
-{
-  "Data": {
-    "XRefCode": "Payroll_Earning_Hours_Detail",
-    "Rows": [
-      {
-        "Employee_DisplayName": "Gary Shy",
-        "PRPayRunResultPermanent_EmployeeNumber": "8965594",
-        "OrgUnit_ShortName": "Plant 2 - Assembly 1",
-        "PRPayRunEarningPermanent_PayDate": "2016-01-05T00:00:00.0000000",
-        "Job_ShortName": "Process Technician",
-        "PREarningCode_ShortName": "Regular",
-        "PRPayRunEarningPermanent_Rate": "14.00000",
-        "PRPayRunEarningPermanent_Units": "8.00000",
-        "PRPayRunEarningPermanent_Amount": "****",
-        "PRPayRunEarningPermanent_WeekStartDate": "2015-12-27T00:00:00.0000000",
-        "PRPayRunEarningPermanent_WeekEndDate": "2016-01-02T00:00:00.0000000",
-        "PRPayRunEarningPermanent_WeekNumber": "1"
-      }
-    ]
-  },
-  "Paging": {
-    "Next": "https://usconfigr57.dayforcehcm.com:443/Api/ddn/V1/Reports/Payroll_Earning_Hours_Detail?cursor=XQNm%252Fy8QDwwOTVmwS55YIR9dxnzR39EsaiqKsIKTt6dOMJg%252Fbsgm%252B31dDpM5RlnJ"
-  }
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/documentmanagementsecuritygroups.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/documentmanagementsecuritygroups.md
deleted file mode 100644
index a831a58e98..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/documentmanagementsecuritygroups.md
+++ /dev/null
@@ -1,137 +0,0 @@
-# Working with Document Management Security Groups
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve Document Management Security Groups.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Document Management Security Groups](#retrieving-document-management-security-groups)| Retrieve Document Management Security Groups assigned to an employee that control access to documents. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Document Management Security Groups
-We can use the GET Document Management Security Groups operation with the required parameters to retrieve the Document Management Security Groups assigned to an employee.
-
-**GET Document Management Security Groups**
-```xml
-<!-- Reconstructed: the original tags were stripped from this snippet; the element name is assumed. -->
-<ceridiandayforce.getDocumentManagementSecurityGroups>
-    <xRefCode>{$ctx:xRefCode}</xRefCode>
-</ceridiandayforce.getDocumentManagementSecurityGroups>
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be an exact match for an employee; otherwise, a bad request (400) error will be returned.
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
-
-```json
-{
-  "Data": [
-    {
-      "DocMgmtSecurityGroup": {
-        "XRefCode": "Legal",
-        "ShortName": "Legal",
-        "LongName": "Legal"
-      }
-    }
-  ]
-}
-```
-
-**Related Dayforce documentation**
-
-[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Document-Management-Security-Groups/GET-Document-Management-Security-Groups.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Document-Management-Security-Groups/GET-Document-Management-Security-Groups.aspx)
-
-### Sample configuration
-
-The following example illustrates how to connect to Dayforce with the init and query operations.
-
-1. Create a sample proxy as below:
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Reconstructed: the proxy's original tags were stripped from this diff; connector element names are assumed and should be verified. -->
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="query"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <ceridiandayforce.init>
-            <username>{$ctx:username}</username>
-            <password>{$ctx:password}</password>
-            <clientNamespace>{$ctx:clientNamespace}</clientNamespace>
-            <apiVersion>{$ctx:apiVersion}</apiVersion>
-         </ceridiandayforce.init>
-         <ceridiandayforce.getDocumentManagementSecurityGroups>
-            <xRefCode>{$ctx:xRefCode}</xRefCode>
-         </ceridiandayforce.getDocumentManagementSecurityGroups>
-         <respond/>
-      </inSequence>
-      <outSequence/>
-      <faultSequence/>
-   </target>
-</proxy>
-```
-
-2. Create a JSON file named query.json and copy the configuration given below into it:
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199"
-}
-```
-3. Replace the credentials with your values.
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200 with the following response body:
-
-```json
-{
-  "Data": [
-    {
-      "DocMgmtSecurityGroup": {
-        "XRefCode": "Legal",
-        "ShortName": "Legal",
-        "LongName": "Legal"
-      }
-    }
-  ]
-}
-```
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeelocations.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeelocations.md
deleted file mode 100644
index 44d165db38..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeelocations.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Working with Employee Locations
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve, create, or update the locations of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee Locations](#retrieving-employee-locations)| Retrieve locations, and their respective authority types, that an employee manages. |
-|[POST Employee Locations](#creating-employee-locations)| Assign locations and authority types for an employee to manage. |
-|[PATCH Employee Locations](#updating-employee-locations)| Update assigned locations and authority types for an employee to manage. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee Locations
-We can use the GET Employee Locations operation with the required parameters to retrieve the locations that an employee manages.
-
-**GET Employee Locations**
-```xml
-<!-- Reconstructed: the original tags were stripped from this snippet; the element name is assumed. -->
-<ceridiandayforce.getEmployeeLocations>
-    <xRefCode>{$ctx:xRefCode}</xRefCode>
-    <contextDate>{$ctx:contextDate}</contextDate>
-    <contextDateRangeFrom>{$ctx:contextDateRangeFrom}</contextDateRangeFrom>
-    <contextDateRangeTo>{$ctx:contextDateRangeTo}</contextDateRangeTo>
-</ceridiandayforce.getEmployeeLocations>
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be an exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 (see the example after this list)
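-
-For instance, to scope the search to a window of dates rather than a single as-of date, the two range parameters can be combined in the request payload (values are illustrative):
-
-```json
-{
-  "username": "DFWSTest",
-  "password": "DFWSTest",
-  "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-  "apiVersion": "V1",
-  "xRefCode": "42199",
-  "contextDateRangeFrom": "2017-01-01T13:24:56",
-  "contextDateRangeTo": "2018-01-01T13:24:56"
-}
-```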
The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "IsPrimary": true, - "Location": { - "XRefCode": "Bank 1Admin", - "ShortName": "Bank 1 - Admin", - "LongName": "Bank 1 - Admin" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "IsDefault": true, - "EmployeeLocationAuthorities": { - "Items": [ - { - "EffectiveStart": "2019-01-01T00:00:00", - "AuthorityType": { - "XRefCode": "MANAGER", - "ShortName": "Manager", - "LongName": "Management" - } - } - ] - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/GET-Employee-Locations.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/GET-Employee-Locations.aspx) - -#### Creating Employee Locations -We can use POST Employee Locations operation with required parameters to create employee locations. - -**POST Employee Locations** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsPrimary": true, - "Location": { - "XRefCode": "Bank 1Admin", - "ShortName": "Bank 1 - Admin", - "LongName": "Bank 1 - Admin" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "IsDefault": true, - "EmployeeLocationAuthorities": { - "Items": [ - { - "EffectiveStart": "2019-01-01T00:00:00", - "AuthorityType": { - "XRefCode": "MANAGER", - "ShortName": "Manager", - "LongName": "Management" - } - } - ] - } - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/POST-Employee-Locations.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/POST-Employee-Locations.aspx) - -#### Updating Employee Locations -We can use PATCH Employee Locations operation with required parameters to update the locations of employees. - -**PATCH Employee Locations** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsPrimary": true, - "Location": { - "XRefCode": "Bank 1Admin", - "ShortName": "Bank 1 - Admin", - "LongName": "Bank 1 - Admin" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "IsDefault": true, - "EmployeeLocationAuthorities": { - "Items": [ - { - "EffectiveStart": "2019-01-01T00:00:00", - "AuthorityType": { - "XRefCode": "MANAGER", - "ShortName": "Manager", - "LongName": "Management" - } - } - ] - } - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/PATCH-Employee-Locations.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Locations/PATCH-Employee-Locations.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsPrimary": true, - "Location": { - "XRefCode": "Bank 1Admin", - "ShortName": "Bank 1 - Admin", - "LongName": "Bank 1 - Admin" - }, - "EffectiveStart": "2019-01-01T00:00:00", - "IsDefault": true, - "EmployeeLocationAuthorities": { - "Items": [ - { - "EffectiveStart": "2019-01-01T00:00:00", - "AuthorityType": { - "XRefCode": "MANAGER", - "ShortName": "Manager", - "LongName": "Management" - } - } - ] - } - } -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeemanagers.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeemanagers.md deleted file mode 100644 index 9a1dc498f7..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeemanagers.md +++ /dev/null @@ -1,147 +0,0 @@ -# Working with Employee Managers - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve managers of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Managers](#retrieving-employee-managers)| Retrieve the managers assigned to employees, either through direct management assignment or management by location. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Managers -We can use GET Employee Managers operation with required parameters to search and find the managers of the required employees. - -**GET Employee Managers** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. 
Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "ManagerXRefCode": "62779", - "ManagerFirstName": "Macon", - "ManagerLastName": "Burke", - "DerivationMethod": 1 - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Managers/GET-Employee-Managers.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Managers/GET-Employee-Managers.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:contextDateRangeFrom} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDateRangeFrom": "2017-01-01T13:24:56" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "ManagerXRefCode": "62779", - "ManagerFirstName": "Macon", - "ManagerLastName": "Burke", - "DerivationMethod": 1 - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeroles.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeroles.md deleted file mode 100644 index 2700a9db84..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeroles.md +++ /dev/null @@ -1,250 +0,0 @@ -# Working with Employee Roles - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update roles of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Roles](#retrieving-employee-roles)| Retrieve user roles assigned to an employee. | -|[POST Employee Roles](#creating-employee-roles)| Assign roles to an employee. | -|[PATCH Employee Roles](#updating-employee-roles)| Update the assigned roles to an employee. | - -### Operation details - -This section provides more details on each of the operations. 
- -#### Retrieving Employee Roles -We can use GET Employee Roles operation with required parameters to search and find the roles of a required employees. - -**GET Employee Roles** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "contextDate": "2017-01-01T13:24:56" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "IsDefault": true, - "Role": { - "XRefCode": "MAssociate", - "ShortName": "MAssociate", - "LongName": "MAssociate" - }, - "EffectiveStart": "2015-12-02T00:00:00", - "IsPrestartRole": false - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/GET-Employee-Roles.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/GET-Employee-Roles.aspx) - -#### Creating Employee Roles -We can use POST Employee Roles operation with required parameters to assign roles to an employee. - -**POST Employee Roles** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsDefault": true, - "Role": { - "XRefCode": "MAssociate", - "ShortName": "MAssociate", - "LongName": "MAssociate" - }, - "EffectiveStart": "2015-12-02T00:00:00", - "IsPrestartRole": false - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/POST-Employee-Roles.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/POST-Employee-Roles.aspx) - -#### Updating Employee Roles -We can use PATCH Employee Roles operation with required parameters to update the roles of an employee - -**PATCH Employee Roles** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsDefault": true, - "Role": { - "XRefCode": "MAssociate", - "ShortName": "MAssociate", - "LongName": "MAssociate" - }, - "EffectiveStart": "2015-12-02T00:00:00", - "IsPrestartRole": false - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/PATCH-Employee-Roles.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Roles/PATCH-Employee-Roles.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "IsDefault": true, - "Role": { - "XRefCode": "MAssociate", - "ShortName": "MAssociate", - "LongName": "MAssociate" - }, - "EffectiveStart": "2015-12-02T00:00:00", - "IsPrestartRole": false - } -} -``` -3.Replace the credentials with your values. 
-
-4. Execute the following curl command:
-
-```bash
-curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json
-```
-5. Dayforce returns HTTP Code 200.
diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeessoaccounts.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeessoaccounts.md
deleted file mode 100644
index c086b5c37a..0000000000
--- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeessoaccounts.md
+++ /dev/null
@@ -1,222 +0,0 @@
-# Working with Employee SSO Accounts
-
-[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration)
-
-### Overview
-
-The following operations allow you to retrieve, create, or update the Single Sign-On (SSO) accounts of an employee.
-
-| Operation | Description |
-| ------------- |-------------|
-|[GET Employee SSO Accounts](#retrieving-employee-sso-accounts)| Retrieve Single Sign-On (SSO) accounts of an employee. |
-|[POST Employee SSO Accounts](#creating-employee-sso-accounts)| Create Single Sign-On (SSO) accounts of an employee. |
-|[PATCH Employee SSO Accounts](#updating-employee-sso-accounts)| Update Single Sign-On (SSO) accounts of an employee. |
-
-### Operation details
-
-This section provides more details on each of the operations.
-
-#### Retrieving Employee SSO Accounts
-We can use the GET Employee SSO Accounts operation with the required parameters to retrieve the SSO accounts of an employee.
-
-**GET Employee SSO Accounts**
-```xml
-
-    {$ctx:xRefCode}
-    {$ctx:contextDate}
-    {$ctx:contextDateRangeFrom}
-    {$ctx:contextDateRangeTo}
-
-```
-
-**Properties**
-
-* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned.
-* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56
-
-**Sample request**
-
-Following is a sample request that can be handled by this operation.
-
-```json
-{
-    "username": "DFWSTest",
-    "password": "DFWSTest",
-    "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn",
-    "apiVersion": "V1",
-    "xRefCode": "42199"
-}
-```
-
-**Sample response**
-
-Given below is a sample response for this operation.
- -```json -{ - "Data": [ - { - "LoginName": "aaron.glover" - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/GET-Employee-SSO-Accounts.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/GET-Employee-SSO-Accounts.aspx) - -#### Creating Employee SSO Accounts -We can use POST Employee SSO Accounts operation with required parameters to create SSO account of an employee - -**POST Employee SSO Accounts** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "LoginName": "aaron.glover" - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/POST-Employee-SSO-Accounts.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/POST-Employee-SSO-Accounts.aspx) - -#### Updating Employee SSO Accounts -We can use PATCH Employee SSO Accounts operation with required parameters to update the SSO account details of an employee - -**PATCH Employee SSO Accounts** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "LoginName": "aaron.glover" - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/PATCH-Employee-SSO-Accounts.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/SSO-Accounts/PATCH-Employee-SSO-Accounts.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "LoginName": "aaron.glover" - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeworkassignmentmanagers.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeworkassignmentmanagers.md deleted file mode 100644 index df3bff3963..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/employeeworkassignmentmanagers.md +++ /dev/null @@ -1,267 +0,0 @@ -# Working with Employee Work Assignment Managers - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve, create or update work assignment managers of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET Employee Work Assignment Managers](#retrieving-employee-work-assignment-managers)| Retrieve managers assigned to an employee through Direct Management. | -|[POST Employee Work Assignment Managers](#creating-employee-work-assignment-managers)| Assign managers to an employee through Direct Management. | -|[PATCH Employee Work Assignment Managers](#updating-employee-work-assignment-managers)| Update the managers assigned to an employee through Direct Management. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving Employee Work Assignment Managers -We can use GET Employee Work Assignment Managers operation with required parameters to find the work assignment manager of employees. - -**GET Employee Work Assignment Managers** -```xml - - {$ctx:xRefCode} - {$ctx:contextDate} - {$ctx:contextDateRangeFrom} - {$ctx:contextDateRangeTo} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* contextDate (Optional): The Context Date value is an “as-of” date used to determine which employee data to search when records have specific start and end dates. The service defaults to the current datetime if the requester does not specify a value. Example: 2017-01-01T13:24:56 -* contextDateRangeFrom (Optional): The Context Date Range From value is the start of the range of dates used to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. 
Example: 2017-01-01T13:24:56 -* contextDateRangeTo (Optional): The Context Date Range To value is the end of the range of dates to determine which employee data to search when records have specific start and end dates. The service defaults to null if the requester does not specify a value. Example: 2017-01-01T13:24:56 - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "EmploymentStatusGroupXRefCode": "INACTIVE", - "ManagerXRefCode": "62779", - "ManagerName": "Macon Burke", - "ActiveEmployeePosition": { - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "ActiveEmployeeLocation": { - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/GET-Employee-Work-Assignment-Managers.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/GET-Employee-Work-Assignment-Managers.aspx) - -#### Creating Employee Work Assignment Managers -We can use POST Employee Work Assignment Managers operation with required parameters to create work assignment managers of an employee. - -**POST Employee Work Assignment Managers** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. 
- -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "EffectiveStart": "2000-01-01T00:00:00", - "EmploymentStatusGroupXRefCode": "INACTIVE", - "ManagerXRefCode": "62779", - "ManagerName": "Macon Burke", - "ActiveEmployeePosition": { - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "ActiveEmployeeLocation": { - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging" - } - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/POST-Employee-Work-Assignment-Managers.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/POST-Employee-Work-Assignment-Managers.aspx) - -#### Updating Employee Work Assignment Managers -We can use PATCH Employee Work Assignment Managers operation with required parameters to update the work assignment managers of an employee - -**PATCH Employee Work Assignment Managers** -```xml - - {$ctx:xRefCode} - {$ctx:isValidateOnly} - {$ctx:fieldAndValue} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. -* isValidateOnly (Mandatory): When a TRUE value is used in this parameter, POST and PATCH operations validate the request without applying updates to the database. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199", - "isValidateOnly": "true", - "fieldAndValue": { - "EffectiveStart": "2000-01-01T00:00:00", - "EmploymentStatusGroupXRefCode": "INACTIVE", - "ManagerXRefCode": "62779", - "ManagerName": "Macon Burke", - "ActiveEmployeePosition": { - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "ActiveEmployeeLocation": { - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging" - } - } -} -``` - -**Sample response** - -Dayforce returns HTTP Code 200 - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/PATCH-Employee-Work-Assignment-Managers.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/Work-Assignment-Managers/PATCH-Employee-Work-Assignment-Managers.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. - -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. 
- -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "EffectiveStart": "2000-01-01T00:00:00", - "EmploymentStatusGroupXRefCode": "INACTIVE", - "ManagerXRefCode": "62779", - "ManagerName": "Macon Burke", - "ActiveEmployeePosition": { - "XRefCode": "Packaging Packager", - "ShortName": "Package Handler" - }, - "ActiveEmployeeLocation": { - "XRefCode": "500Packaging", - "ShortName": "Plant 1 - Packaging" - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/userpayadjustmentcodegroups.md b/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/userpayadjustmentcodegroups.md deleted file mode 100644 index 9776b1138f..0000000000 --- a/en/docs/reference/connectors/ceridiandayforce-connector/user-security-authority-and-management/userpayadjustmentcodegroups.md +++ /dev/null @@ -1,137 +0,0 @@ -# Working with User Pay Adjustment Code Groups - -[[Overview]](#overview) [[Operation details]](#operation-details) [[Sample configuration]](#sample-configuration) - -### Overview - -The following operations allow you to retrieve User Pay Adjustment Code Groups of an employee - -| Operation | Description | -| ------------- |-------------| -|[GET User Pay Adjustment Code Groups](#retrieving-user-pay-adjustment-code-groups)| Retrieve User Pay Adjustment Groups assigned to an employee. These control which pay adjustment codes the employee can assign to timesheets. | - -### Operation details - -This section provides more details on each of the operations. - -#### Retrieving User Pay Adjustment Code Groups -We can use GET User Pay Adjustment Code Groups operation with required parameters to search and find the required employees. - -**GET User Pay Adjustment Code Groups** -```xml - - {$ctx:xRefCode} - -``` - -**Properties** - -* xRefCode (Mandatory): The unique identifier (external reference code) of the employee whose data will be retrieved. The value provided must be the exact match for an employee; otherwise, a bad request (400) error will be returned. - -**Sample request** - -Following is a sample request that can be handled by this operation. - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` - -**Sample response** - -Given below is a sample response for this operation. - -```json -{ - "Data": [ - { - "PayAdjCodeGroup": { - "XRefCode": "Timesheet", - "ShortName": "Timesheet", - "LongName": "Timesheet" - } - } - ] -} -``` - -**Related Dayforce documentation** - -[https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/User-Pay-Adjustment-Groups/GET-User-Pay-Adjustment-Code-Groups.aspx](https://developers.dayforce.com/Build/API-Explorer/User-Security,-Authority-Management/User-Pay-Adjustment-Groups/GET-User-Pay-Adjustment-Code-Groups.aspx) - -### Sample configuration - -Following example illustrates how to connect to Dayforce with the init operation and query operation. 
- -1.Create a sample proxy as below : -```xml - - - - - - - - - - - - {$ctx:username} - {$ctx:password} - {$ctx:clientNamespace} - {$ctx:apiVersion} - - - {$ctx:xRefCode} - - - - - - - -``` - -2.Create a json file named query.json and copy the configurations given below to it: - -```json -{ - "username": "DFWSTest", - "password": "DFWSTest", - "clientNamespace": "usconfigr57.dayforcehcm.com/Api/ddn", - "apiVersion": "V1", - "xRefCode": "42199" -} -``` -3.Replace the credentials with your values. - -4.Execute the following curl command: - -```bash -curl http://localhost:8280/services/query -H "Content-Type: application/json" -d @query.json -``` -5.Dayforce returns HTTP Code 200 with the following response body - -```json -{ - "Data": [ - { - "PayAdjCodeGroup": { - "XRefCode": "Timesheet", - "ShortName": "Timesheet", - "LongName": "Timesheet" - } - } - ] -} -``` diff --git a/en/docs/reference/connectors/connector-usage.md b/en/docs/reference/connectors/connector-usage.md deleted file mode 100644 index 3980cb49ee..0000000000 --- a/en/docs/reference/connectors/connector-usage.md +++ /dev/null @@ -1,155 +0,0 @@ -# Connector Usage Guidelines - -This document provides a set of guidelines on how to use connectors throughout their lifecycle. - -## Using connectors in your integration project - -Connectors can be added and used as part of the integration logic of your integration solution. This helps you configure inbound and outbound connections to third-party applications or to systems that support popular B2B protocols. - -### New connector versions - -From time to time there are new connector versions released. These new versions may have new operations and changes to existing operations. When moving to a new connector version from an older version, it is recommended to reconfigure your connector from scratch. - -### Importing connectors - -All the connectors are hosted in the [Integration Connector Store](https://store.wso2.com/store/assets/esbconnector/list). You can download the connector from the store as a .zip file. - -Connector store - -The source code for connectors can also be found in the specific [WSO2 extensions GitHub repository](https://github.com/wso2-extensions/). - -However, the recommended approach to use connectors for integration logic development is through WSO2 Integration Studio. Developers can browse and import connectors to the workplace using WSO2 Integration Studio itself. As a result, there is no need to go and download the connector from the store separately or obtain it from the source code. - -**To import a connector**: - -1. Open [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). - -2. [Create an Integration Project]({{base_path}}/integrate/develop/create-integration-project). - -3. Right-click the ESB Configs folder and select **New** -> **Add/Remove Connector**. Search for the connector and follow the steps in the wizard to import the connector. - - Import a connector - -### Providing values for operation parameters - -After importing the connector, you can drag and drop operations to the design palette and use them. When providing values for operation parameters, you can provide static values or dynamic values. Dynamic values can be provided in one of the following ways. - -* As an [XPATH expression](https://www.w3schools.com/xml/xpath_syntax.asp). -* As a [JSON expression](https://docs.oracle.com/cd/E60058_01/PDF/8.0.8.x/8.0.8.0.0/PMF_HTML/JsonPath_Expressions.htm). -* As a property. 
-    * Most of the time, this will be a custom property that you set earlier in the mediation flow using [the property mediator]({{base_path}}/reference/mediators/property-mediator). Any property set with the default scope exists throughout the message flow, so you can read it anywhere in the flow after it is set.
-    * You can also provide properties of other scopes (e.g., a header value). However, they may not exist throughout the message flow. See [the property mediator documentation]({{base_path}}/reference/mediators/property-mediator) for details.
-
-### Transform the message as the operation requires
-
-Some connectors use the message content in the $body to execute the operation. In such situations, you may need to transform the current message into the format the connector operation expects before invoking the operation. The following mediators are useful for transforming the message.
-
-* **[PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator)** - Replaces the current message with a message in the format you specify. You can use information from the current message to construct the new message.
-* **[Enrich mediator]({{base_path}}/reference/mediators/enrich-mediator)** - Enriches the current message by modifying it or adding new elements. It is also useful for saving the current message as a property, and for placing a message stored in a property back as the current message.
-* **[Datamapper mediator]({{base_path}}/reference/mediators/data-mapper-mediator)** - Transforms messages between JSON, XML, and CSV formats.
-* **[Script mediator]({{base_path}}/reference/mediators/script-mediator)** - Uses JavaScript, Groovy, or Ruby scripts to transform the message in a custom manner.
-* **[Custom class mediator]({{base_path}}/reference/mediators/class-mediator)** - Uses Java to transform the message in a custom manner (with libraries such as Axiom, Jackson, or Gson).
-* **Mediator Modules (new)** - Import a module and use its operations to transform the message (currently, CSV-related transformations only).
-
-The above mediators can transform the message anywhere in the mediation flow. Hence, the same mediators can also be used to transform the result of one connector operation into the format the next connector operation needs.
-
-### Result of the operation invocation
-
-Unless specified otherwise, the result of a connector operation (the response from the connected application) is available in the message context after the operation is invoked. You can do any further mediation with the result or send it back to the invoker using the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator). A short end-to-end sketch is given at the end of this section.
-
-### Export and run a project with connectors
-
-The recommended way to run any integration logic is using Carbon applications (CApps). A CApp is the deployable artifact for the integration runtime. The recommendation is the same when the integration logic uses an integration connector.
-
-To include a connector in a CApp, create a **ConnectorExporter project** and add the connector to it. Then add the ConnectorExporter project to the list of artifacts when exporting the CApp.
-
-The exported CApp needs to be copied to the deployment folder of the integration server (/repository/deployment/server/carbonapps). The changes will get hot-deployed if the server is already running.
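-
-As a quick end-to-end illustration of the points above, the following is a minimal sketch of a sequence that sets a custom property with the Property mediator, reads it in a connector operation through an XPath expression, and returns the operation's response to the caller. It assumes the Ceridian Dayforce connector used in the earlier examples has been imported; the operation element name (following the usual `connector.operation` convention) and the property name `empXRefCode` are illustrative, not definitive.
-
-```xml
-<!-- Minimal sketch. Assumptions: the Dayforce connector is imported; the operation
-     element name and the property name "empXRefCode" are illustrative. -->
-<sequence name="GetEmployeeManagersSequence" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Set a custom property in the default scope; it is readable anywhere later in the flow. -->
-    <property name="empXRefCode" value="42199" scope="default" type="STRING"/>
-    <!-- The connector operation reads the property value through an XPath expression. -->
-    <ceridiandayforce.getEmployeeManagers>
-        <xRefCode>{$ctx:empXRefCode}</xRefCode>
-    </ceridiandayforce.getEmployeeManagers>
-    <!-- The back-end response is now in the message context; send it back to the invoker. -->
-    <respond/>
-</sequence>
-```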
-
-## Configuring connectors
-
-The configurations required for initializing a connector must be provided in one of the following ways, depending on the connector.
-
-### For recently updated connector versions
-
-For recently updated connector versions, you need to create a connection, add configurations, and associate your connection with operations. This is available from WSO2 Integration Studio 7.1.0 onwards. When creating a connection, you can provide configuration values, and they are saved internally as a local entry.
-
-Connection configuration
-
-### For connector versions that were not updated recently
-
-For connector versions that were not updated recently, you need to use the `init` operation.
-
-Refer to the documentation of the relevant connector to configure its `init` operation. This operation needs to be applied before any other operation of the same connector when you design the mediation logic. The `init` operation is visible only for older connector versions in WSO2 Integration Studio.
-
-Connection configuration with init
-
-Instead of placing the `init` operation before each connector operation, you can create an [inline XML local-entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries/) with the XML configuration of the `init` operation and refer to it at the beginning of each connector operation.
-
-### Externalizing connector initialization parameters
-
-Externalizing connection `init` parameters is important because it enables you to inject environment-specific parameters without modifying the integration logic you deploy. The recommended approach is to use environment-specific CAR applications.
-
-Whether you create a new connection or manually create a local entry with the `init` operation configuration, you end up with the connector initialization configuration stored as local entries. Connector operations refer to them by name. This enables you to group the local entries related to connector configurations into a separate CApp.
-
-Keeping the local entry names unchanged, you can create configurations specific to each environment and export them into different CApps. Upon deployment, the relevant CApp can be deployed along with the other CApps containing the integration logic.
-
-The following are some other ways to externalize connection initialization parameters. This applies to connector `init` operation parameters (for previous connector versions) or to connection parameters when creating new connector connections (newer connector versions).
-
-* Specify an expression to read them as system variables (e.g., `get-property('System','email.hostName')`). Then you can pass the values for the system variables in the `/bin/integrator.sh` script. You can do this per environment.
-
-* Specify an expression to read them as registry variables (e.g., `get-property(get-property('registry','conf:'))`). Then you can provide environment-specific values in the registry at the specified registry path. Make sure you share the registry between the nodes if you set up a server cluster.
-
-## Deployment
-
-There are no special requirements when deploying the integration runtime with artifacts that use connectors. However, the following facts need to be considered.
-
-To seamlessly refresh tokens, use a registry location that is visible to all [cluster members]({{base_path}}/install-and-setup/deployment/deploying_wso2_ei/) (for example, a mounted config registry). Here, the refresh token value should be passed as a connector parameter. For detailed information on how this can be done for the relevant connectors, see their documentation.
-
-## Performance tuning and monitoring
-
-SaaS connectors communicate over the HTTP/HTTPS protocol and run on the WSO2 mediation engine itself. Hence, the [HTTP protocol related tunings]({{base_path}}/install-and-setup/performance_tuning/http_transport_tuning/) apply.
-
-Technology connectors use custom protocols, so they need to be tuned at the connector itself. All connection-related tunings are available in the form you get when you create a new connection for the connector. For older connectors, the configurations are present in the `init` operation.
-
-Please refer to the reference documentation of the connector for details.
-
-## Troubleshooting
-
-### Enable detailed logging
-
-Connector implementations have DEBUG and TRACE level logs. You can enable them to see in detail what is going on with the connector.
-
-* See the [Configuring Log4j2 Properties section of the documentation]({{base_path}}/observe/micro-integrator/classic-observability-logs/configuring-log4j2-properties/) on how to enable DEBUG logs specifically for a Java package and on how to view the logs.
-
-* To get the package name of the connector implementation, see the [How to contribute section of the overview page of the connector documentation]({{base_path}}/reference/connectors/connectors-overview/#contribute-to-the-connector-project).
-
-!!! note
-    Add fault sequences to the enclosing entities of connector operations (e.g., the API resource) to gracefully handle errors. A minimal example is given at the end of this section.
-
-### Enable wire logging
-
-For SaaS connectors that use the HTTP transport of the integration runtime, you can enable wire logs to see the details of the messages that are sent from the runtime to the back-end service and of the responses sent back. This is useful for checking the exact message that is sent out by the connector to the back-end service. See the [documentation on monitoring wire logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-logs/#wire-logs) for instructions on how to enable wire logs.
-
-### Mediation debug
-
-WSO2 Integration Studio provides debugging capabilities. You cannot use mediation debugging to debug the templates packaged inside a connector. However, you can use it to check the following.
-
-* Whether you are passing the correct message into the connector operation.
-* Whether your input parameters for connector operations contain the expected values.
-* What the response message is after using the connector operation.
-
-Please refer to [the Debugging Mediation documentation]({{base_path}}/integrate/develop/debugging-mediation/) for instructions on how to use mediation debugging.
-
-### Debugging connector code
-
-You can get the source code of the connector and remotely debug it with your scenario to find issues. Refer to the ["How to contribute” section of the connector overview page]({{base_path}}/reference/connectors/connectors-overview/#contribute-to-the-connector-project), get the GitHub repository, clone it, check out the relevant version, and debug. It is open source!
-
-Start the server with `./integrator.sh -debug ` (specifying a debug port) and connect to that port from your IDE (e.g., IntelliJ IDEA).
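-
-To make the note above about fault sequences concrete, the following is a minimal, illustrative fault sequence that logs the error details and returns the fault to the client instead of dropping it silently. The sequence name is a placeholder; attach the fault sequence to the entity (e.g., the API resource or proxy service) that encloses your connector operations.
-
-```xml
-<!-- Minimal sketch of a fault sequence; the sequence name is a placeholder. -->
-<sequence name="ConnectorFaultSequence" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Log the error details that the mediation engine sets on a failure. -->
-    <log level="custom">
-        <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
-        <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
-    </log>
-    <!-- Return the fault to the caller so the client is not left waiting. -->
-    <respond/>
-</sequence>
-```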
- -## Report an Issue - -Click on the **Report Issue** button on the connector store page for the connector. You will get diverted to the GitHub repository of the connector. Please report your issues there. - -It is preferable to create another issue at WSO2 Micro Integrator project and link that issue. Specify the title of the issue as `[Connector]`. diff --git a/en/docs/reference/connectors/connectors-overview.md b/en/docs/reference/connectors/connectors-overview.md deleted file mode 100644 index 8bd4e565de..0000000000 --- a/en/docs/reference/connectors/connectors-overview.md +++ /dev/null @@ -1,147 +0,0 @@ -# Connectors Overview - -Integration Connectors are extensions to the integration runtime of WSO2 (compatible with EI 6.x, EI 7.x, and also APIM 4.0.0). They allow you to interact with SaaS applications on the cloud, databases, and popular B2B protocols. - -Connectors are hosted in a [connector store](https://store.wso2.com/store/assets/esbconnector/list) and can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences. - -Each connector provides a set of operations, which you call from your proxy services, sequences, and APIs to interact with the specific third-party service. - -This documentation provides an example of how to use each connector and a reference of each of its operations and their properties. - -## Types of connectors - -There are three types of connectors available. - -**Cloud Connectors**: These connectors enable you to integrate with the APIs of external cloud systems (SaaS Applications) such as Salesforce, ServiceNow, Gmail, Amazon S3, etc. - -**Technology Connectors**: You can send and receive information over standard protocols using connectors like File Connector, ISO8583 Connector, etc. - -**Database Connectors**: You can integrate with databases and perform actions using connector operations. - -## Inbound and outbound connectors - -Most of the connectors available in the connector store are outbound connectors that illustrate connections and operations going out from the integration runtime to third-party applications and systems. However, there are connectors that also enable inbound connectivity from popular third-party applications. - -<img src="{{base_path}}/assets/img/integrate/connectors/inbound-outbound.png" title="Inbound and Outbound Connectors" width="700" alt="Inbound and Outbound Connectors"/> - -Some examples for inbound connectors are as follows. - -* Salesforce -* Amazon SQS -* DB Event Listener - -## Advantages of connectors - -Using connectors provide the following advantages: - -<table> - <tr> - <th>Advantage</th> - <th>Description</th> - </tr> - <tr> - <td><b>Integrate fast</b></td> - <td>Let's say you need to get some data from Salesforce and return some of it back to users. If you want to do it, first you need to find and analyze the available APIs from Salesforce. Then you need to code to interfaces or use a SDK provided to communicate with the third party. Designing such a module from ground up takes a lot of time. 
Connectors make this process simple as you can easily add the connector via WSO2 Integration Studio and drag and drop your connector’s operations onto the canvas and use them to design your integration flow with the help of our detailed documentation.</td> - </tr> - <tr> - <td><b>Easy to use</b></td> - <td>Connector operations are designed to hide complexities in communication and expose what is required for the users for integration. Connectors are fully supported by WSO2 Integration Studio so that you can just drag and drop operations and configure the integration flow. There is less code complexity when using connectors as most of the intricacy has already been dealt with. It is also very easy to authenticate with external systems as you only need to configure credentials and access URLs in most cases.</td> - </tr> - <tr> - <td><b>150+ available connectors</b></td> - <td>There are numerous connectors available in the store. This provides multiple options on SAAS providers and popular APIs for you to build your integration use cases as required.</td> - </tr> - <tr> - <td><b>Less domain knowledge required</b></td> - <td>When integrating with different APIs, the Connector documentation provides detailed information. Less domain knowledge is required when integrating with systems since all the information about operations are available to you and you do not need to know elaborate details of the system you are integrating with. You can spend less time studying the APIs and focus more on business use cases.</td> - </tr> - <tr> - <td><b>Dynamically added to the runtime</b></td> - <td>Connectors and most of the extensions can be directly added to and removed from the runtime. It is not required to restart the servers for deployments.</td> - </tr> - <tr> - <td><b>Easy maintenance</b></td> - <td>Connectors act as a bridge between main integration logic and the third-party application. Even if the application needs to be updated to support new features, the main integration logic of the API version does not need to be changed.</td> - </tr> -</table> - -## How to use connectors - -When configuring the integration logic, you need to use WSO2 Integration Studio. When ready, you can export the project along with dependency connectors to the integration runtime. See [documentation on adding connectors]({{base_path}}/integrate/develop/creating-artifacts/adding-connectors/) for more information. - -See the following video for a quick look at how to use connectors. - -<iframe width="560" height="315" src="https://www.youtube.com/embed/O2rAFdL8lZQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> - -> **Note**: You can also access the connectors directly from the [connector store](https://store.wso2.com/store/assets/esbconnector/list) if required. - -### Connectors in Integration Studio - -You can search and import the required connectors to your integration project in WSO2 Integration Studio. Once imported the supported operations are displayed on the design palette. - -### Initialize Connector - -You need to start using the connector by dragging and dropping the `init` operation to the place where you need to use the connector in the message flow. Every connector has this operation. The parameters needed for init will usually be authentication parameters. - -### Operations - -An operation of a particular connector represents a function that can be executed using that connector. 
The input parameters of an operation can be hard-coded or fed into the connector programmatically using properties. You can use any property (custom or internal) set to the message flow before the connector operation is invoked. In some cases, the connector operation uses the payload in the message context as the payload to be sent to the third-party service. You can manipulate this payload using mediators such as Enrich and PayloadFactory. After the connector operation is invoked in the message flow, the response received from that operation is available in the message context for the next mediator.
-
-### Input Sources
-
-As described above, the input for a connector operation can be:
-
-* Hard-coded values
-* Values of properties set to the message flow before the connector operation is used
-* The payload that is in the message context (message flow) at the point where the connector operation is used
-
-### Inbound endpoints
-
-Inbound endpoints are used to listen or poll for messages from external systems like Kafka and NATS. They are also used to trigger events in your integration runtime and start a message flow upon changes made to external systems (e.g., an RDBMS database).
-
-Inbound endpoints are also configured using WSO2 Integration Studio. When configuring one, you need to define it as an inbound endpoint of type “custom”. The “class” parameter holds the fully qualified name of the Java class implementing the message polling functionality.
-
-You need to download the required libraries from the connector store and import them into the integration runtime. Usually, the inbound endpoint is co-hosted with the associated outbound connector.
-
-### Exporting to runtime
-
-Once the integration logic is done, you can export it along with the dependency connectors as a CApp. A CApp is the deployable unit of your integration logic. However, when you are using inbound endpoints, you need to place the corresponding libraries in the integration runtime.
-
-## Example scenarios and operations
-
-Within this documentation, each connector has an example scenario associated with it that illustrates how a simple use case can be implemented using the connector. Each connector also has a detailed reference guide that lists its operations and the respective properties of each operation, highlighting sample values, required configurations, and even sample requests and responses.
-
-!!! Info
-    For details on connectors not mentioned in this documentation, you can find more details in the [WSO2 ESB Connectors documentation](https://docs.wso2.com/display/ESBCONNECTORS/WSO2+ESB+Connectors+Documentation) or in the [GitHub repository of the connector](https://github.com/wso2-extensions) you are looking for.
-
-## Empowering the developer
-
-We really want to encourage developers who are solving integration problems. If you have a use case that requires you to customize one of our connectors, we encourage you to go ahead and make the changes you need.
-
-### Report an issue
-
-You can report issues for any connector under the [Micro Integrator repository](https://github.com/wso2/micro-integrator/issues/new). Once you have reported the issue, do the following:
-
-* Add the label `Connector`.
-* Add the issue to the `WSO2 Connectors` project.
-
-For an example, see [this issue](https://github.com/wso2/micro-integrator/issues/1358).
-
-### Contribute to the connector project
-
-1. 
Search for the connector you want to customize in the [WSO2 Extensions GitHub repository](https://github.com/wso2-extensions) and clone it. - -2. Use the following command to build the connector. - - ``` - mvn clean install - ``` - -3. Make the changes you need and send in a pull request. - -### Write your own custom connector - -There may be instances where the product you want to integrate with does not have a connector as yet. In this case, you can build your own connector. Please refer to the document [here]({{base_path}}/integrate/develop/customizations/creating-new-connector) for detailed instructions. The following are the types of custom connectors that you can write. - -* Custom connector -* Custom inbound endpoint diff --git a/en/docs/reference/connectors/csv-module/csv-module-config.md b/en/docs/reference/connectors/csv-module/csv-module-config.md deleted file mode 100644 index 50a287bdc1..0000000000 --- a/en/docs/reference/connectors/csv-module/csv-module-config.md +++ /dev/null @@ -1,635 +0,0 @@ -# CSV Module Reference - -CSV Module in WSO2 Enterprise Integrator helps working with CSV payloads. This transforms a given payload into another type of payload according to your requirements. You can change the type of the output payload using these transformation configurations as well. You can send the payload to be transformed in multiple ways (e.g., POST request ). - -The following transformations can be performed with this module. - - -## CSV to CSV transformation - -You can use the CSV to CSV transformation to convert a CSV payload into another CSV payload according to your requirements using the configurations given below. - -### Operation details - -<table> -<thead> - <tr> - <th>Name</th> - <th>Parameter</th> - <th>Value</th> - <th>Description</th> - </tr> -</thead> -<tbody> - <tr> - <td>Header</td> - <td>headerPresent</td> - <td>Absent<br>Present</td> - <td>Specify whether the CSV input has a header row</td> - </tr> - <tr> - <td>Separator</td> - <td>valueSeparator</td> - <td>Default : "," (comma)</td> - <td>Specify the separator to use in the CSV input.<br>To use tab as the separator, use the value tab to this property. To use space, use the value space.</td> - </tr> - <tr> - <td>Skip Headers</td> - <td>skipHeader</td> - <td>true, false</td> - <td>This is available only if the value of the <code>headerPresent</code> property is set to <code>Present</code>. The default value is <code>false</code>.</td> - </tr> - <tr> - <td>Skip Data Rows</td> - <td>dataRowsToSkip</td> - <td></td> - <td>Specify the number of data rows to skip in the CSV. The default is 0.<br>- If headerPresent is Present, then data rows are the rows excluding the first row.<br>- If <code>headerPresent</code> is <code>Absent</code>, then data rows are the rows starting from the first row.<br></td> - </tr> - <tr> - <td>Order by Column</td> - <td>orderByColumn</td> - <td></td> - <td>Order the CSV content by values of the given column. If you want to specify the column by column index, provide the index of the column (Indexes are starting from 1). <br>To specify the column by column name, give the column name within double quotes (e.g., "name"). <br>Specifying the column by column name works only if the value of the <code>headerPresent</code> property is <code>Present</code>.</td> - </tr> - <tr> - <td>Sort Columns</td> - <td>columnOrdering</td> - <td>Ascending, Descending</td> - <td>This option is enabled if the <code>orderByColumn</code> has a value. 
This determines whether the CSV should be ordered ascendingly or descendingly according to the given column.<br>The default value is <code>Ascending</code>.</td> - </tr> - <tr> - <td>Skip Columns</td> - <td>columnsToSkip</td> - <td></td> - <td>Specify columns to skip from the CSV payload. You can specify the columns as comma-separated values. <br>This property supports more complex queries also, you can find full specifications below in <b>CSV Columns Skipper Query</b>. - </tr> - <tr> - <td>Custom Header</td> - <td>customHeader</td> - <td></td> - <td>Set a custom header to the output CSV payload. If this property not specified, the header for the output CSV is determined as follows,<br>- If the value of the <code>headerPresent</code> is <code>Absent</code> , the output CSV would not have a header.<br>- If the value of the <code>headerPresent</code> is <code>Present</code> and <code>skipHeader</code> is set as <code>true</code>, output CSV would not have a header.<br>- If <code>headerPresent</code> is <code>Present</code> and <code>skipHeader</code> is set as <code>false</code>, output CSV would have the header of the input CSV.<br></td> - </tr> - <tr> - <td>Separator</td> - <td>customValueSeparator</td> - <td>Default : "," (comma)</td> - <td>Values separator to use in the output CSV.</td> - </tr> -</tbody> -</table> - -### Sample configuration - -Given below is a sample request. - - ``` - id,name,email,phone_number - 1,De witt Hambidge,dwitt0@newsvine.com,true - 2,Brody Dowthwaite,bdowthwaite1@delicious.com,false - 3,Catlin Drought,cdrought2@etsy.com,608-510-7991 - 4,Kissiah Douglass,kdouglass3@squarespace.com,true - 5,Robinette Udey,rudey4@nytimes.com,true - ``` - -A sample synapse configuration for the csvToCsv operation is shown below. - - ```xml - <CSV.csvToCsv> - <headerPresent>Present</headerPresent> - <skipHeader>true</skipHeader> - <dataRowsToSkip>1</dataRowsToSkip> - <orderByColumn>2</orderByColumn> - <columnOrdering>Ascending</columnOrdering> - <columnsToSkip>"phone_number"</columnsToSkip> - <customHeader>index,name,email</customHeader> - </CSV.csvToCsv> - ``` - -The following is the sample response, for the request given above. - - ``` - index,name,email - 2,Brody Dowthwaite,bdowthwaite1@delicious.com - 3,Catlin Drought,cdrought2@etsy.com - 4,Kissiah Douglass,kdouglass3@squarespace.com - 5,Robinette Udey,rudey4@nytimes.com - ``` - -## CSV to JSON transformation - -You can use the CSV to JSON transformation to convert a CSV payload into a JSON payload according to your requirements using the configurations given below. - -### Operation details - -<table> -<thead> - <tr> - <th>Name</th> - <th>Parameter</th> - <th>Value</th> - <th>Description</th> - </tr> -</thead> -<tbody> - <tr> - <td>Header</td> - <td>headerPresent</td> - <td>Absent<br>Present</td> - <td>Specify whether the CSV input has a header row</td> - </tr> - <tr> - <td>Separator</td> - <td>valueSeparator</td> - <td>Default : "," (comma)</td> - <td>Specify the separator to use in the CSV input.<br>To use tab as the separator, use the value tab to this property. To use space, use the value space.</td> - </tr> - <tr> - <td>Skip Headers</td> - <td>skipHeader</td> - <td>true, false</td> - <td>This is available only if the value of the <code>headerPresent</code> property is set to <code>Present</code>. The default value is <code>false</code>.</td> - </tr> - <tr> - <td>Skip Data Rows</td> - <td>dataRowsToSkip</td> - <td></td> - <td>Specify the number of data rows to skip in the CSV. 
The default is 0.<br>- If <code>headerPresent</code> is <code>Present</code>, the data rows are all rows excluding the first row.<br>- If <code>headerPresent</code> is <code>Absent</code>, the data rows are all rows starting from the first row.<br></td>
-  </tr>
-  <tr>
-    <td>Empty Values</td>
-    <td>csvEmptyValues</td>
-    <td></td>
-    <td>Specify how to treat empty CSV values in the output JSON.</td>
-  </tr>
-  <tr>
-    <td>JSON Keys</td>
-    <td>jsonKeys</td>
-    <td></td>
-    <td>If you need to set custom keys in the JSON output, specify them in this property as a set of comma-separated strings (e.g., name,email). If this property is not specified, the JSON keys of the output are determined as follows:<br>- If the value of <code>headerPresent</code> is <code>Absent</code>, the JSON keys are autogenerated (e.g., key-1, key-2).<br>- If the value of <code>headerPresent</code> is <code>Present</code>, the CSV header values are used as the JSON keys.</td>
-  </tr>
-  <tr>
-    <td>Data Types</td>
-    <td>dataTypes</td>
-    <td></td>
-    <td>This property defines the data types of the JSON fields. The supported data types are String, Boolean, Integer, and Number. <br>The input for this property is a JSON value, which is easy to configure using the Integration Studio property view. <br>For this property, the property view provides a table with the columns "Column Name Or Index", "Is Column Name", and "Data Type". The "Column Name Or Index" column accepts the index or name of the CSV column whose data type you need to change in the output JSON. The "Is Column Name" column gives you a dropdown with the values Yes and No; the default is Yes. <br>Yes means you have entered a column name in the "Column Name Or Index" column, and No means you have given an index. <br>The "Data Type" column defines the output data type.</td>
-  </tr>
-  <tr>
-    <td>Root JSON Key</td>
-    <td>rootJsonKey</td>
-    <td></td>
-    <td>If you need to wrap the JSON output inside a wrapper object, specify the key for the wrapper object.</td>
-  </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-Given below is a sample request.
-
-    ```
-    id,name,email,phone_number
-    1,De witt Hambidge,dwitt0@newsvine.com,true
-    2,Brody Dowthwaite,bdowthwaite1@delicious.com,false
-    3,Catlin Drought,cdrought2@etsy.com,608-510-7991
-    4,Kissiah Douglass,kdouglass3@squarespace.com,true
-    5,Robinette Udey,rudey4@nytimes.com,true
-    ```
-
-A sample synapse configuration for the csvToJson operation is shown below.
-
-    ```xml
-    <CSV.csvToJson>
-        <headerPresent>Present</headerPresent>
-        <skipHeader>true</skipHeader>
-        <columnsToSkip>"phone_number"</columnsToSkip>
-        <dataRowsToSkip>1</dataRowsToSkip>
-        <csvEmptyValues>Null</csvEmptyValues>
-        <jsonKeys>index,name,email</jsonKeys>
-        <dataTypes>[{"Column Name Or Index":"id","Is Column Name":"Yes","Data Type":"Number"},{"Column Name Or Index":"2","Is Column Name":"No","Data Type":"String"}]</dataTypes>
-        <rootJsonKey>results</rootJsonKey>
-    </CSV.csvToJson>
-    ```
-
-The following is the sample response for the request given above. 
- - ```JSON - { - "results": [ - { - "index": 2.0, - "name": "Brody Dowthwaite", - "email": "bdowthwaite1@delicious.com" - }, - { - "index": 3.0, - "name": "Catlin Drought", - "email": "cdrought2@etsy.com" - }, - { - "index": 4.0, - "name": "Kissiah Douglass", - "email": "kdouglass3@squarespace.com" - }, - { - "index": 5.0, - "name": "Robinette Udey", - "email": "rudey4@nytimes.com" - } - ] - } - ``` - -## CSV to XML transformation - -You can use the CSV to XML transformation to convert a CSV payload into a XML payload according to your requirements using the configurations given below. - -### Operation details - -<table> -<thead> - <tr> - <th>Name</th> - <th>Parameter</th> - <th>Value</th> - <th>Description</th> - </tr> -</thead> -<tbody> - <tr> - <td>Header</td> - <td>headerPresent</td> - <td>Absent<br>Present</td> - <td>Specify whether the CSV input has a header row</td> - </tr> - <tr> - <td>Separator</td> - <td>valueSeparator</td> - <td>Default : "," (comma)</td> - <td>Specify the separator to use in the CSV input.<br>To use tab as the separator, use the value tab to this property. To use space, use the value space.</td> - </tr> - <tr> - <td>Skip Headers</td> - <td>skipHeader</td> - <td>true, false</td> - <td>This is available only if the value of the <code>headerPresent</code> property is set to <code>Present</code>. The default value is <code>false</code>.</td> - </tr> - <tr> - <td>Skip Data Rows</td> - <td>dataRowsToSkip</td> - <td></td> - <td>Specify the number of data rows to skip in the CSV. The default is 0.<br>- If headerPresent is Present, then data rows are the rows excluding the first row.<br>- If <code>headerPresent</code> is <code>Absent</code>, then data rows are the rows starting from the first row.<br></td> - </tr> - <tr> - <td>Skip Columns</td> - <td>columnsToSkip</td> - <td></td> - <td>Specify columns to skip from the CSV payload. You can specify the columns as comma-separated values. <br>This property supports more complex queries also, you can find full specifications below in <b>CSV Columns Skipper Query</b>. - </tr> - <tr> - <td><b>Root Element Group</b></td> - <td></td> - <td></td> - <td>You can use the properties under this group to config the root XML element of the output XML payload. You can find the following properties under the root element group.</td> - </tr> - <tr> - <td>Tag</td> - <td>rootElementTag</td> - <td></td> - <td>Name of the XML tag of the root element. The default value is root.</td> - </tr> - <tr> - <td>Namespace</td> - <td>rootElementNamespace</td> - <td></td> - <td>Namespace of the root element.</td> - </tr> - <tr> - <td>Namespace URI</td> - <td>rootElementNamespaceURI</td> - <td></td> - <td>Namespace URI of the root element.</td> - </tr> - <tr> - <td><b>Group Element Group</b></td> - <td></td> - <td></td> - <td>The properties under this group are for configuring the group elements of the output XML payload. You can find the following properties under the group element group.</td> - </tr> - <tr> - <td>Tag</td> - <td>groupElementName</td> - <td></td> - <td>Name of the XML tag of the group element. The default value is group.</td> - </tr> - <tr> - <td>Namespace</td> - <td>groupElementNamespace</td> - <td></td> - <td>Namespace of the group element.</td> - </tr> - <tr> - <td>Namespace URI</td> - <td>groupElementNamespace</td> - <td></td> - <td>Namespace URI of the group element.</td> - </tr> -</tbody> -</table> - -### Sample configuration - -Given below is a sample request. 
- - ``` - id,name,email,phone_number - 1,De witt Hambidge,dwitt0@newsvine.com,true - 2,Brody Dowthwaite,bdowthwaite1@delicious.com,false - 3,Catlin Drought,cdrought2@etsy.com,608-510-7991 - 4,Kissiah Douglass,kdouglass3@squarespace.com,true - 5,Robinette Udey,rudey4@nytimes.com,true - ``` - -A sample synapse configuration for the csvToXml operation is shown below. - - ```xml - <CSV.csvToXml> - <headerPresent>Present</headerPresent> - <skipHeader>true</skipHeader> - <columnsToSkip>"phone_number"</columnsToSkip> - <tagNames>index,name,email</tagNames> - <rootElementTag>results</rootElementTag> - <groupElementTag>result</groupElementTag> - </CSV.csvToXml> - ``` - -The following is the sample response, for the request given above. - - ```xml - <results> - <result> - <index>1</index> - <name>De witt Hambidge</name> - <email>dwitt0@newsvine.com</email> - </result> - <result> - <index>2</index> - <name>Brody Dowthwaite</name> - <email>bdowthwaite1@delicious.com</email> - </result> - <result> - <index>3</index> - <name>Catlin Drought</name> - <email>cdrought2@etsy.com</email> - </result> - <result> - <index>4</index> - <name>Kissiah Douglass</name> - <email>kdouglass3@squarespace.com</email> - </result> - <result> - <index>5</index> - <name>Robinette Udey</name> - <email>rudey4@nytimes.com</email> - </result> - </results> - ``` - -## JSON to CSV transformation - -You can use the JSON to CSV transformation to convert a JSON payload into a CSV payload according to your requirements using the configurations given below. - -### Operation details - -<table> -<thead> - <tr> - <th>Name</th> - <th>Parameter</th> - <th>Value</th> - <th>Description</th> - </tr> -</thead> -<tbody> - <tr> - <td>CSV Header</td> - <td>customHeader</td> - <td></td> - <td>Set a custom header to the output CSV payload. If this property is not specified, the key values of the input would be used as the output CSV headers.</td> - </tr> -</tbody> -</table> - -### Sample configuration - -Given below is a sample request. - - ``` json - [ - { - "id": "1", - "name": "De witt Hambidge", - "email": "dwitt0@newsvine.com", - "phone_number": "true" - }, - { - "id": "2", - "name": "Brody Dowthwaite", - "email": "bdowthwaite1@delicious.com", - "phone_number": "false" - }, - { - "id": "3", - "name": "Catlin Drought", - "email": "cdrought2@etsy.com", - "phone_number": "608-510-7991" - }, - { - "id": "4", - "name": "Kissiah Douglass", - "email": "kdouglass3@squarespace.com", - "phone_number": "true" - }, - { - "id": "5", - "name": "Robinette Udey", - "email": "rudey4@nytimes.com", - "phone_number": "true" - } - ] - ``` -A sample synapse configuration for the jsonToCsv operation is shown below. - - ``` xml - <CSV.jsonToCsv> - <customHeader>index,name,email,number</customHeader> - </CSV.jsonToCsv> - ``` -The following is the sample response, for the request given above. - - ``` - index,name,email,number - 1,De witt Hambidge,dwitt0@newsvine.com,true - 2,Brody Dowthwaite,bdowthwaite1@delicious.com,false - 3,Catlin Drought,cdrought2@etsy.com,608-510-7991 - 4,Kissiah Douglass,kdouglass3@squarespace.com,true - 5,Robinette Udey,rudey4@nytimes.com,true - ``` - -## XML to CSV transformation - -You can use the XML to CSV transformation to convert a XML payload into a CSV payload according to your requirements using the configurations given below. 
-
-### Operation details
-
-<table>
-<thead>
-  <tr>
-    <th>Name</th>
-    <th>Parameter</th>
-    <th>Value</th>
-    <th>Description</th>
-  </tr>
-</thead>
-<tbody>
-  <tr>
-    <td>CSV Header</td>
-    <td>customHeader</td>
-    <td></td>
-    <td>Set a custom header for the output CSV payload. If this property is not specified, the key values of the input are used as the output CSV headers.</td>
-  </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-Given below is a sample request.
-
-    ``` xml
-    <root>
-        <group>
-            <id>1</id>
-            <name>De witt Hambidge</name>
-            <email>dwitt0@newsvine.com</email>
-            <phone_number>true</phone_number>
-        </group>
-        <group>
-            <id>2</id>
-            <name>Brody Dowthwaite</name>
-            <email>bdowthwaite1@delicious.com</email>
-            <phone_number>false</phone_number>
-        </group>
-        <group>
-            <id>3</id>
-            <name>Catlin Drought</name>
-            <email>cdrought2@etsy.com</email>
-            <phone_number>608-510-7991</phone_number>
-        </group>
-        <group>
-            <id>4</id>
-            <name>Kissiah Douglass</name>
-            <email>kdouglass3@squarespace.com</email>
-            <phone_number>true</phone_number>
-        </group>
-        <group>
-            <id>5</id>
-            <name>Robinette Udey</name>
-            <email>rudey4@nytimes.com</email>
-            <phone_number>true</phone_number>
-        </group>
-    </root>
-    ```
-
-A sample synapse configuration for the xmlToCsv operation is shown below.
-
-    ``` xml
-    <CSV.xmlToCsv>
-        <customHeader>index,name,email,number</customHeader>
-    </CSV.xmlToCsv>
-    ```
-
-The following is the sample response for the request given above.
-
-    ```
-    index,name,email,number
-    1,De witt Hambidge,dwitt0@newsvine.com,true
-    2,Brody Dowthwaite,bdowthwaite1@delicious.com,false
-    3,Catlin Drought,cdrought2@etsy.com,608-510-7991
-    4,Kissiah Douglass,kdouglass3@squarespace.com,true
-    5,Robinette Udey,rudey4@nytimes.com,true
-    ```
-
-## CSV Columns Skipper Query
-
-The `columnsToSkip` (Skip Columns) property in the CSV to JSON, CSV to XML, and CSV to CSV operations supports a simple query language for configuring the columns to skip.
-
-### Queries
-
-#### Single Column
-
-The column selection query can be a single column representing one column in the CSV. You can refer to a column by its index or by its header name.
-
-* Column index: Column indexes start from 1. You can give a single column index as the column skipper query. E.g., `3`
-* Column name: You can specify a column using its name. Note that this works only if the value of the `headerPresent` property is `Present`. Give the column name within double quotes in the column skipper query (e.g., `"email"`).
-
-#### Multiple Columns
-You can select multiple columns by combining them with a comma (,).
-
-    ```
-    1,2,3
-    ```
-
-    ```
-    "name","email"
-    ```
-
-    ```
-    3,"email"
-    ```
-
-#### Element Range
-You can specify a range of columns in the query. Use the colon character (:) to define a range.
-
-    ```
-    1:5
-    ```
-
-    ```
-    "name":"email"
-    ```
-
-    ```
-    3:"email"
-    ```
-
-You can use the asterisk symbol (*) to represent the last column in case you don't know the number of columns. For example, if you want to skip all the columns starting from column `3`, use the following query.
-
-    ```
-    3:*
-    ```
-
-#### Group Elements
-You can use opening and closing parentheses to define a group of elements. A few examples are shown below.
-
-    ```
-    (1:5)
-    ```
-
-    ```
-    (3,4,"name")
-    ```
-
-    ```
-    2,3,("name":*)
-    ```
-
-#### Not Syntax
-You can use the exclamation mark (!) to exclude columns from the column skipper. 
For example, if you want to skip all the columns from 5 to 10 but want to include the 7th column, you can use the query given below.
-
-    ```
-    5:10,!7
-    ```
-The following is an additional example of how you can use this.
-
-    ```
-    3:*,!(10:"email")
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config.md b/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config.md
deleted file mode 100644
index ac8cb44fa0..0000000000
--- a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config.md
+++ /dev/null
@@ -1,113 +0,0 @@
-# DB Event Inbound Endpoint Reference
-
-The following configurations allow you to configure the DB Event Inbound Endpoint for your scenario.
-
-<style type="text/css">
-.tg  {border-collapse:collapse;border-spacing:0;}
-.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}
-.tg th{font-family:Arial, sans-serif;font-size:20px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}
-.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top}
-</style>
-<table class="tg">
-  <tr>
-    <th class="tg-0pky">Parameter</th>
-    <th class="tg-0pky">Description</th>
-    <th class="tg-0pky">Required</th>
-    <th class="tg-0pky">Possible Values</th>
-    <th class="tg-0pky">Default Value</th>
-  </tr>
-  <tr>
-    <td class="tg-0pky">sequential</td>
-    <td class="tg-0pky">Whether the messages should be polled and injected sequentially.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">true , false</td>
-    <td class="tg-0pky">TRUE</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">driverName</td>
-    <td class="tg-0pky">The class name of the database driver.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">com.mysql.jdbc.Driver</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">url</td>
-    <td class="tg-0pky">The JDBC URL of the database.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">jdbc:mysql://<HOST>/<DATABASE_NAME></td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">username</td>
-    <td class="tg-0pky">The user name to connect to the database.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">password</td>
-    <td class="tg-0pky">The password to connect to the database.</td>
-    <td class="tg-0pky">Required if you have set a password for the database.</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">tableName</td>
-    <td class="tg-0pky">The name of the table to capture changes to records.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">filteringCriteria</td>
-    <td class="tg-0pky">The criteria used to poll the database for record changes. The possible criteria are as follows:<br>
-    <li><b>byLastUpdatedTimestampColumn:</b> Specify this to poll the database for records that have changed since the last modified timestamp.</li>
-    <li><b>byBooleanColumn:</b> Specify this to poll the database for record changes according to a column that contains a boolean (true or false) value. By default, values are set to true. Each polling cycle takes only the records with the value true and updates the value to false after polling. 
<b>Note:</b> When you create this column in the database table, you must specify the data type as varchar instead of boolean.</li>
-    <li><b>deleteAfterPoll:</b> Specify this if you want to delete records after polling.</li>
-    </td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">filteringColumnName</td>
-    <td class="tg-0pky">The actual name of the column that captures changes.<br/>
-    <li>If filteringCriteria is `byLastUpdatedTimestampColumn`, this needs to be a column of type `Timestamp` that is updated with each record change.</li>
-    <li>If filteringCriteria is `byBooleanColumn`, this needs to be a column of type `Varchar`.</li>
-    </td>
-    <td class="tg-0pky">Required if the value of the filteringCriteria parameter is specified as byLastUpdatedTimestampColumn or byBooleanColumn.</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">primaryKey</td>
-    <td class="tg-0pky">The primary key column name.</td>
-    <td class="tg-0pky">Yes</td>
-    <td class="tg-0pky">ID</td>
-    <td class="tg-0pky">-</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">connectionValidationQuery</td>
-    <td class="tg-0pky">The query to check the availability of the connection.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">SELECT 1</td>
-    <td class="tg-0pky">SELECT 1</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">registryPath</td>
-    <td class="tg-0pky">The registry path of the timestamp. This is used to retrieve records when the value of the filteringCriteria parameter is specified as byLastUpdatedTimestampColumn.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">-</td>
-    <td class="tg-0pky">Name of the Inbound Endpoint</td>
-  </tr>
-</table>
-
-<br/>
-
-## Rollback the events
-
-If processing of an event fails, the specified `fault sequence` is triggered. In that sequence, you can set the following property.
-```xml
-<property name="SET_DB_ROLLBACK_ONLY" value="true"/>
-```
-Once this property is set to `true`, the DB event listener does not make any updates to the database: it does not delete the row associated with the event, it does not update the boolean value being monitored, and it does not consider the event as received by the endpoint. Upon the next DB event poll, the same event is triggered again. You can use this feature to build a retry mechanism for mediation failures.
\ No newline at end of file
diff --git a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-example.md b/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-example.md
deleted file mode 100644
index 726109e606..0000000000
--- a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-example.md
+++ /dev/null
@@ -1,143 +0,0 @@
-# DB Event Inbound Endpoint Example
-
-The following are the main features of the event generator.
-
-1. Trigger an event with the data of a table row when a new record is added or updated. Optionally, delete the row associated with the event after triggering the event.
-2. Trigger an event when a boolean field is flipped in a particular table row.
-
-## What you'll build
-
-In this example, let us see how to configure the `DB-event Inbound Endpoint` so that it can listen to data changes made to a `MySQL` table. Of the features mentioned above, feature 1 is used here. 
Please refer to the [reference guide]({{base_path}}/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config/) if you need to use the other features.
-
-In an enterprise system, a relational database table is used to store customer information. Customer information is added to the database by an external system that is not under the enterprise's control. As soon as a new customer is inserted, the system needs to pick up and process its data. The integration runtime is used here to listen for DB changes and invoke the relevant processes. It can invoke backend APIs or place data onto a message bus after the required data transformations. However, for the simplicity of this example, we will just log the message. You can extend the sample as required using WSO2 mediators.
-
-The following diagram shows the overall solution we are going to build. The external system updates the MySQL DB, and the integration runtime triggers events based on the inserts and updates.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/db-event-diagram.png" title="Overview of DB event inbound EP use case" width="600" alt="Overview of DB event inbound EP use case"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Setting up the environment
-
-First, install the [MySQL database](https://www.mysql.com/downloads/) locally. If you have a remote server, obtain the credentials required to connect. In this example, the database credentials are assumed to be username=`root` and password=`root`.
-
-1. Create a database called `test`. Then create a table called `CDC_CUSTOM` under that database using the following SQL script.
-    ```sql
-    CREATE TABLE `test`.`CDC_CUSTOM` (
-    `ID` INT NOT NULL,
-    `NAME` VARCHAR(45) NULL,
-    `ADDRESS` VARCHAR(45) NULL,
-    `AMOUNT` INT NULL,
-    PRIMARY KEY (`ID`));
-    ```
-
-2. We need an additional column in order to track new records. If you apply this feature to an existing database table, you can alter the table as shown below. It adds a column of type `TIMESTAMP`, which is automatically updated when you insert or update a record.
-    ```sql
-    ALTER TABLE CDC_CUSTOM
-    ADD COLUMN UPDATED_AT
-    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-    ON UPDATE CURRENT_TIMESTAMP;
-    ```
-
-
-## Configure inbound endpoint using WSO2 Integration Studio
-
-1. Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as below.
-<img src="{{base_path}}/assets/img/integrate/connectors/solution-project.jpg" title="Creating a new Integration Project" width="800" alt="Creating a new Integration Project" />
-
-2. Right-click on **Source** -> **main** -> **synapse-config** -> **inbound-endpoints** and add a new **custom inbound endpoint**.</br>
-<img src="{{base_path}}/assets/img/integrate/connectors/db-event-inbound-ep.png" title="Creating DB event inbound endpoint" width="400" alt="Creating DB event inbound endpoint" style="border:1px solid black"/>
-
-3. Click on the **Inbound Endpoint** in the design view and, under the `properties` tab, update the class name to `org.wso2.carbon.inbound.poll.dbeventlistener.DBEventPollingConsumer`.
-
-4. Navigate to the source view and update it with the following configuration. Note that you need to update the url, username, and password as required. 
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <inboundEndpoint class="org.wso2.carbon.inbound.poll.dbeventlistener.DBEventPollingConsumer" name="CustomerDBEventEP" onError="eventProcessFailSeq" sequence="DBEventProcessSeq" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
-        <parameters>
-            <parameter name="interval">1000</parameter>
-            <parameter name="class">org.wso2.carbon.inbound.poll.dbeventlistener.DBEventPollingConsumer</parameter>
-            <parameter name="sequential">true</parameter>
-            <parameter name="coordination">true</parameter>
-            <parameter name="inbound.behavior">polling</parameter>
-            <parameter name="driverName">com.mysql.jdbc.Driver</parameter>
-            <parameter name="url">jdbc:mysql://localhost/test</parameter>
-            <parameter name="username">root</parameter>
-            <parameter name="password">root</parameter>
-            <parameter name="tableName">CDC_CUSTOM</parameter>
-            <parameter name="filteringCriteria">byLastUpdatedTimestampColumn</parameter>
-            <parameter name="filteringColumnName">UPDATED_AT</parameter>
-            <parameter name="primaryKey">ID</parameter>
-            <parameter name="connectionValidationQuery">SELECT 1</parameter>
-            <parameter name="registryPath">dbEventIE/timestamp</parameter>
-        </parameters>
-    </inboundEndpoint>
-    ```
-
-
-## Exporting Integration Logic as a CApp
-
-A **CApp (Carbon Application)** is the deployable artifact on the integration runtime. Let us see how we can export the integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` is needed. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**.
-
-1. Right-click on the Composite Application Project and click **Export Composite Application Project**.</br>
-  <img src="{{base_path}}/assets/img/integrate/connectors/capp-project1.jpg" title="Export as a Carbon Application" width="300" alt="Export as a Carbon Application" />
-
-2. Select an **Export Destination** where you want to save the .car file.
-
-3. In the next **Create a deployable CAR file** screen, select the inbound endpoint and sequence artifacts and click **Finish**. The CApp is created at the location specified in the previous step.
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/db-event-listener.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the database details and make other such changes before deploying and running this project.
-
-## Deploying on WSO2 Enterprise Integrator
-
-1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `DB Event Listener`. Click on `DB Event Listener` and download the .jar file by clicking on `Download Inbound Endpoint`. Copy this .jar file into the <PRODUCT-HOME>/lib folder.
-
-2. Download the [`mysql-connector-java`](https://dev.mysql.com/downloads/connector/j/) version that matches your `MySQL` server version and add it to the <PRODUCT-HOME>/lib folder.
-
-3. Copy the exported carbon application to the `<PRODUCT-HOME>/repository/deployment/server/carbonapps` folder.
-
-4. Start the server.
-
-Now the integration runtime will start listening to data changes in the `CDC_CUSTOM` table. 
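-
-The inbound endpoint above injects each event into the `DBEventProcessSeq` sequence and invokes `eventProcessFailSeq` on errors. Those sequence names come from the configuration above, but their contents are not shown in this guide; the following is a minimal sketch of what they could look like for this example. The success sequence simply logs the event (matching the log entries shown in the testing section), and the failure sequence sets the `SET_DB_ROLLBACK_ONLY` property described in the reference guide so that a failed event is re-triggered on the next poll.
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<sequence name="DBEventProcessSeq" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Log the full event payload injected by the DB event listener -->
-    <log level="full">
-        <property name="message" value="event received"/>
-    </log>
-</sequence>
-```
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<sequence name="eventProcessFailSeq" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Prevent the listener from updating or deleting the row, so the
-         same event is triggered again on the next poll -->
-    <property name="SET_DB_ROLLBACK_ONLY" value="true"/>
-    <log level="custom">
-        <property name="message" value="event processing failed"/>
-    </log>
-</sequence>
-```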
-
-## Testing
-
-### Adding a new record
-
-1. Using the MySQL terminal, execute the following SQL to insert a new customer record into the table.
-    ```sql
-    INSERT INTO `test`.`CDC_CUSTOM` (`ID`, `NAME`, `ADDRESS`, `AMOUNT`) VALUES (001, "john", "22/3, Tottenham Court, London" , 1000);
-    ```
-2. You can see a log entry in the WSO2 server console similar to the following.
-    ```
-    [2020-03-26 17:40:00,871]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:4B1D55C3ABCEE82B961585224600739, Direction: request, message = event received, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><Record><ID>1</ID><NAME>john</NAME><ADDRESS>22/3, Tottenham Court, London</ADDRESS><AMOUNT>1000</AMOUNT><PAID>false</PAID><UPDATED_AT>2020-03-26 16:57:57.0</UPDATED_AT></Record></soapenv:Body></soapenv:Envelope>
-    ```
-
-3. If you add another new record, only that new record will be notified to the integration runtime, and the old records will be ignored.
-
-
-### Update an existing record
-
-1. Using the MySQL terminal, execute the following SQL to update the added record.
-    ```sql
-    UPDATE `test`.`CDC_CUSTOM` SET AMOUNT = 2000 WHERE ID = 001;
-    ```
-2. You can see a log entry in the WSO2 server console similar to the following.
-    ```
-    [2020-03-27 18:13:06,906] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:1958A94F892D158A661585312986834, Direction: request, message = event received, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><Record><ID>1</ID><NAME>john</NAME><ADDRESS>22/3, Tottenham Court, London</ADDRESS><AMOUNT>2000</AMOUNT><PAID>false</PAID><UPDATED_AT>2020-03-27 18:13:06.0</UPDATED_AT></Record></soapenv:Body></soapenv:Envelope>
-    ```
-
-> **Note**: You can do any type of advanced integration using the rich mediator catalog, not just logging.
-
-## What's Next
-
-* To customize this example for your own scenario, see the [DB Event Inbound Endpoint Configuration]({{base_path}}/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config/) documentation for all configuration options of the endpoint.
\ No newline at end of file
diff --git a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-overview.md b/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-overview.md
deleted file mode 100644
index 4ffb1e0219..0000000000
--- a/en/docs/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-overview.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# DB Event Inbound Endpoint Overview
-
-Data is the most valuable asset in any business. Almost every corporate system has an on-premise or cloud-based data storage facility. When the individual systems in a particular business are integrated, they are sometimes coupled via database systems. For example, one system can write data, while another system reads and processes it. In such instances, the systems may want to know about any changes made to the data by external parties or systems.
-
-Hence, the ability to generate events based on data changes is a useful feature for an enterprise integration platform. The **DB Event Inbound Endpoint** is the DB event listener for the integration runtime of WSO2. You can configure it with any popular database system, such as `MySQL` or `Oracle`. 
To see the DB Event Inbound Endpoint, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Event". **DB Event Listener** is the name of the connector that provides this functionality.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/db-event-store.png" title="DB Event Listener Store" width="200" alt="DB Event Listener Store"/>
-
-## Compatibility
-
-| Connector Version | Supported product versions |
-| ------------- |-------------|
-| 1.0.4 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0, EI 6.4.0, EI 6.1.1 |
-
-For older versions, see the details in the connector store.
-
-## DB Event Inbound Endpoint
-
-* **[DB Event Inbound Endpoint Example]({{base_path}}/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-example)**: In this example, you will learn how to configure the `DB-event Inbound Endpoint` so that it can listen to data changes made to a `MySQL` table.
-
-* **[DB Event Inbound Endpoint Reference]({{base_path}}/reference/connectors/db-event-inbound-endpoint/db-event-inbound-endpoint-config)**: This documentation provides a reference guide for the DB Event Inbound Endpoint.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in the following repository.
-
-* [DB Event Inbound Endpoint GitHub repository](https://github.com/wso2-extensions/esb-inbound-dbevent)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/develop-connectors.md b/en/docs/reference/connectors/develop-connectors.md
deleted file mode 100644
index 4b397c3092..0000000000
--- a/en/docs/reference/connectors/develop-connectors.md
+++ /dev/null
@@ -1,1254 +0,0 @@
-# Connector Developer Guidelines
-
-Integration Connectors are extensions to the integration runtime of WSO2 (compatible with EI 6.x, EI 7.x, and APIM 4.0.0). They enable developers to interact with SaaS applications on the cloud, databases, and popular B2B protocols.
-
-Connectors are hosted in a connector store and can be added to integration flows in WSO2 Integration Studio, which is the tooling component for developing integrations. Once added, the operations of the connector can be dragged onto your canvas and added to your sequences and proxy services.
-
-Each connector provides a set of operations, which you can call from your proxy services, sequences, and APIs to interact with the specific third-party service.
-
-This document is an in-depth guide for developers to follow when developing a new connector from scratch. It covers the initial steps to follow, best practices, and the means of implementing the UI schema for Integration Studio support.
-
-## Connector Architecture
-
-A connector is a set of operations that can be used in an integration flow to access a specific service or functionality. These operations are invoked from proxy services, sequences, and APIs to interact with the third-party service.
-
-* A connector operation is made using [sequence templates]({{base_path}}/reference/synapse-properties/template-properties/).
-* The integration logic inside a connector operation is constructed using mediators.
-* If the integration logic inside a connector operation needs custom functionality that mediators do not provide, a Java implementation can be attached to the associated sequence template. 
This is done using the Class Mediator approach.
-* If the third-party service provider offers a Java SDK to interact with the service, a connector operation can use it by extending the Java implementation.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/dev-connectors.png" title="Developing Connectors" width="800" alt="Developing Connectors"/>
-
-### Connector Types
-
-There are two types of connectors.
-
-* Application/SaaS connectors - Connect to cloud applications. These are implemented purely using WSO2 mediators and constructs. E.g., Amazon S3, Salesforce.
-* Technology connectors - Implement different B2B protocols. The logic for these is implemented mainly in Java. E.g., JMS, NATS, Email.
-
-### Connector Structure
-
-The typical folder structure of a connector is as follows.
-
-```
-├── pom.xml
-├── repository
-├── src
-│   ├── main
-│   │   ├── assembly
-│   │   │   ├── assemble-connector.xml
-│   │   │   └── filter.properties
-│   │   ├── java
-│   │   │   └── org
-│   │   │       └── wso2
-│   │   │           └── carbon
-│   │   │               └── connector
-│   │   │                   └── sampleConnector.java
-│   │   └── resources
-│   │       ├── config
-│   │       │   ├── component.xml
-│   │       │   └── init.xml
-│   │       ├── connector.xml
-│   │       └── sample
-│   │           ├── component.xml
-│   │           └── operation1.xml
-│   └── test
-```
-
-* **pom.xml** - Defines the build information for Maven.
-* **repository** - When running integration tests, the integration runtime distribution should be placed here.
-* **src/main/assembly** - Instructions on packaging the connector.
-* **src/main/java/org/wso2/carbon/connector** - Java code that is used to implement the connector logic.
-* **src/main/resources** - Contains sequence templates for each connector operation.
-* **src/main/resources/config** - Contains the connector initialization logic.
-* **src/main/resources/connector.xml** - Contains the connector information.
-* **src/test** - Contains the test cases.
-
-### About the connector.xml file
-
-All the operations exposed by the connector should be registered in this file. The syntax is as follows.
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<connector>
-    <component name="sample" package="org.wso2.carbon.connector">
-        <dependency component="config" />
-        <dependency component="sample" />
-        <description>WSO2 sample connector library</description>
-    </component>
-    <icon>icon/icon-small.gif</icon>
-</connector>
-```
-
-<table>
-  <tr>
-    <th>Attribute</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>name</td>
-    <td>The ‘name’ attribute of the ‘component’ element in the connector.xml file defines the name of the connector. When operations are invoked, this is the name prefixed to the operation.</td>
-  </tr>
-  <tr>
-    <td>dependency</td>
-    <td>Defines the subdirectories that contain the operations.</td>
-  </tr>
-  <tr>
-    <td>icon</td>
-    <td>Path to the icon file of the connector.</td>
-  </tr>
-</table>
-
-For example, the sample above contains two subdirectories named ‘config’ and ‘sample’ inside /resources.
-```
- └── resources
-     ├── config
-     │   ├── component.xml
-     │   └── init.xml
-     ├── connector.xml
-     └── sample
-         ├── component.xml
-         └── operation1.xml
-```
-
-### Subdirectory containing operations
-
-The resources folder is used to group the operations of the connector in a more organized manner.
-
-It may contain subdirectories that contain operations. Each of those subdirectories should contain a component.xml file as below, defining each template that represents an operation. 
Ultimately, all component.xml files in sub-directories should be referred to by the main component.xml file of the connector. - -Below is the component.xml in ‘sample’ subdirectory. - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<component name="sample" type="synapse/template"> - <subComponents> - <component name="operation1"> - <file>operation1.xml</file> - <description>sample wso2 connector method</description> - </component> - </subComponents> -</component> -``` - -<table> - <tr> - <th>Attribute</th> - <th>Description</th> - </tr> - <tr> - <td>name</td> - <td>The name of the subdirectory. This is the name to be used as the ‘component’ attribute of the ‘dependency’ element in the connector.xml file.</td> - </tr> - <tr> - <td>subComponents</td> - <td>Defines the template files.</td> - </tr> - <tr> - <td>component (under subComponent)</td> - <td>Defines an operation. The ‘name’ attribute defines the name of the operation. The following is an example of what you can find in the component.xml file. - <code> - <subComponents> - <component name="operation1"> - <file>operation1.xml</file> - <description>sample wso2 connector method</description> - </component> - </subComponents> - </code> - </td> - </tr> - <tr> - <td>file</td> - <td>Name of the file containing the operation.</td> - </tr> - <tr> - <td>description</td> - <td>Description of the operation</td> - </tr> -</table> - -For example: -``` - └── resources - ├── config - │ ├── component.xml - │ └── init.xml - ├── connector.xml - └── sample - ├── component.xml - └── operation1.xml -``` - -The following is a sample available in the component.xml file. - -```xml -<component name="sample" type="synapse/template"> -``` - -### Operation - -An operation of an integration connector is implemented using a [synapse template](https://docs.wso2.com/display/EI611/Sequence+Template) as mentioned before. -A typical template configuration for an operation would look like below. - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<template xmlns="http://ws.apache.org/ns/synapse" name="operation1"> - <parameter name="hostName" /> - <sequence> - <log level="full"> - <property name="*******host name********" expression="$func:hostName" /> - </log> - </sequence> -</template> -``` - -<table> - <tr> - <th>Attribute</th> - <th>Description</th> - </tr> - <tr> - <td>name</td> - <td>The name of the operation. This should correspond to the name defined in the subcomponent in the component.xml. </td> - </tr> - <tr> - <td>parameter</td> - <td>The parameters required for the operation are defined as parameters.</td> - </tr> - <tr> - <td>sequence</td> - <td>The mediation logic is implemented here.</td> - </tr> -</table> - -The following is a sample of the code in component.xml. - -```xml - <subComponents> - <component name="operation1"> - <file>operation1.xml</file> - <description>sample wso2 connector method</description> - </component> - </subComponents> -``` - -The following is a sample code extracted from operation1.xml - -```xml -<template xmlns="http://ws.apache.org/ns/synapse" name="operation1"> -``` - -### Invoking an operation - -When invoking an operation from the main integration flow, the connector name defined in the `connector.xml` would be appended to the respective operation. Invoking the operation would look similar to the following. - -```xml -<sample.operation1> - <hostName>localhost</hostName> -</sample.operation1> -``` - -## Writing Your First Connector - -### Prerequisites - -* Download and install Apache Maven. 
- -### Step 1: Create Maven project template - -We will use the [Maven archetype](https://github.com/wso2-extensions/archetypes/tree/master/esb-connector-archetype) to generate the Maven project template and sample connector code. - -1. Open a terminal and navigate to the directory where you want the connector code to be created and run the following command. - - ```bash - mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate -DarchetypeGroupId=org.wso2.carbon.extension.archetype -DarchetypeArtifactId=org.wso2.carbon.extension.esb.connector-archetype -DarchetypeVersion=2.0.4 -DgroupId=org.wso2.carbon.esb.connector -DartifactId=org.wso2.carbon.esb.connector.googlebooks -Dversion=1.0.0 -DarchetypeRepository=http://maven.wso2.org/nexus/content/repositories/wso2-public/ - ``` - -2. Enter the name of the connector and press enter. - -3. Next, press ‘Y’ and enter to confirm configuration properties. - - You will observe the following in the logs, if the connector was successfully created. - - ```bash - [INFO] ---------------------------------------------------------------------------- - [INFO] Using following parameters for creating project from Archetype: org.wso2.carbon.extension.esb.connector-archetype:2.0.4 - [INFO] ---------------------------------------------------------------------------- - [INFO] Parameter: groupId, Value: org.wso2.carbon.esb.connector - [INFO] Parameter: artifactId, Value: org.wso2.carbon.esb.connector.googlebooks - [INFO] Parameter: version, Value: 1.0.0 - [INFO] Parameter: package, Value: org.wso2.carbon.esb.connector - [INFO] Parameter: packageInPathFormat, Value: org/wso2/carbon/esb/connector - [INFO] Parameter: package, Value: org.wso2.carbon.esb.connector - [INFO] Parameter: groupId, Value: org.wso2.carbon.esb.connector - [INFO] Parameter: artifactId, Value: org.wso2.carbon.esb.connector.googlebooks - [INFO] Parameter: connector_name, Value: Synergy - [INFO] Parameter: version, Value: 1.0.0 - [WARNING] CP Don't override file /home/user/org.wso2.carbon.esb.connector.googlebooks/src/test/resources/testng.xml - [WARNING] CP Don't override file /home/user/org.wso2.carbon.esb.connector.googlebooks/src/test/resources/artifacts/ESB/connector/config/Synergy.properties - [INFO] project created from Archetype in dir: /home/user/org.wso2.carbon.esb.connector.googlebooks - [INFO] ------------------------------------------------------------------------ - [INFO] BUILD SUCCESS - [INFO] ------------------------------------------------------------------------ - [INFO] Total time: 01:15 h - [INFO] Finished at: 2020-08-10T11:59:34+05:30 - [INFO] ------------------------------------------------------------------------ - - ``` - - You may observe that the following directory structure is created. - -### Step 2: Adding the new connector resources - -Now, let's configure files in the `org.wso2.carbon.esb.connector.sample/src/main/resources` directory: - -1. Create a directory named googlebooks_volume in the `/src/main/resources` directory. - -2. Create a file named `listVolume.xml` with the following content in the googlebooks_volume directory: - - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <template xmlns="http://ws.apache.org/ns/synapse" name="listVolume"> - <parameter name="searchQuery" description="Full-text search query string." 
/>
-        <sequence>
-            <property name="uri.var.searchQuery" expression="$func:searchQuery" />
-            <call>
-                <endpoint>
-                    <http method="get" uri-template="https://www.googleapis.com/books/v1/volumes?q={uri.var.searchQuery}" />
-                </endpoint>
-            </call>
-        </sequence>
-    </template>
-    ```
-
-3. Create a file named `component.xml` in the googlebooks_volume directory and add the following content.
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <component name="googlebooks_volume" type="synapse/template">
-        <subComponents>
-            <component name="listVolume">
-                <file>listVolume.xml</file>
-                <description>Lists volumes that satisfy the given query.</description>
-            </component>
-        </subComponents>
-    </component>
-    ```
-
-4. Edit the `connector.xml` file in the `src/main/resources` directory and replace the contents with the following dependency:
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <connector>
-        <component name="sample" package="org.wso2.carbon.connector">
-            <dependency component="googlebooks_volume" />
-            <description>wso2 sample connector library</description>
-        </component>
-    </connector>
-    ```
-
-5. Create a folder named icon in the /src/main/resources directory and add two icons. You can download icons from the following location: http://svn.wso2.org/repos/wso2/scratch/connectors/icons/.
-
-### Step 3: Building the connector
-
-Open a terminal, navigate to the `org.wso2.carbon.esb.connector.sample` directory, and execute the following Maven command:
-
-```bash
-mvn clean install
-```
-
-This builds the connector and generates a ZIP file named `sample-connector-1.0.0.zip` in the target directory.
-
-### Step 4: Testing the connector
-
-1. Open WSO2 Integration Studio and [create an integration project]({{base_path}}/integrate/develop/create-integration-project) by clicking **New Integration Project**.
-
-2. In the window that appears, make sure you select **Connector Exporter Project** as a module of the project.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/connector-project.png" title="Connector Exporter Project" width="600" alt="Connector Exporter Project"/>
-
-3. In the newly created project, navigate to `SampleConnector/SampleConnectorConfigs/src/main/synapse-config/api` in WSO2 Integration Studio. Right-click and select **New** -> **Rest API**.
-
-4. Select **Create A New API Artifact** and provide the details below.
-    * Name - sampleAPI
-    * Context - /sample
-
-5. Right-click the `SampleConnectorConfigs` project and select **Add or Remove Connector**. In the window that appears, select **Add from File System** and select the file path to the `<sample_connector_folder>/target/sample-connector-1.0.0.zip` file. You may observe the sample-connector added to the palette as shown below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/connector-explorer.png" title="Connector Explorer" width="300" alt="Connector Explorer"/>
-
-6. Switch to the source view and update the configuration as below.
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/sample" name="sampleAPI" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST" uri-template="/listVolume">
-            <inSequence>
-                <sample.listVolume>
-                    <searchQuery>{json-eval($.searchQuery)}</searchQuery>
-                </sample.listVolume>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-    </api>
-    ```
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/studio-sequence.png" title="Integration Studio Sequence" width="400" alt="Integration Studio Sequence"/>
-
-7. 
Right-click the `SampleConnectorConnectorExporter` project and go to **New** -> **Add or Remove Connectors** -> **Select 'workspace'**. Select the connector from the window shown below, click **OK**, and then click **Finish**.

    <img src="{{base_path}}/assets/img/integrate/connectors/workspace-connector.png" title="Connector Workspace" width="400" alt="Connector Workspace"/>

8. To run the project, right-click on the project and select **Run As** -> **Run on Micro Integrator**.

9. Select the artifacts to be exported and click **Finish**.

    <img src="{{base_path}}/assets/img/integrate/connectors/select-artifacts.png" title="Select Artifacts" width="500" alt="Select Artifacts"/>

10. Send a POST call to http://localhost:8290/sample/listVolume with the following request payload.

    ```json
    {
        "searchQuery": "rabbit"
    }
    ```

11. A JSON response containing book information will be returned.

    <img src="{{base_path}}/assets/img/integrate/connectors/json-response.png" title="JSON response" width="800" alt="JSON Response"/>


## Extending Connector Capabilities with Java

In cases where you need to provide custom capabilities that cannot be fulfilled using mediators, you can implement the logic in Java within the connector itself and invoke it using the Class mediator. This capability is useful when creating technology connectors.

These Java classes should reside in the `/src/main/java/org.wso2.carbon.connector/` directory.

### Sample

This sample is an extension to the 'Writing your first connector' section. Let us improve the connector with a Java implementation.

In the same project, you can see the `sampleConnector` class created in the `/src/main/java/org.wso2.carbon.connector/` directory.

<img src="{{base_path}}/assets/img/integrate/connectors/sampleconnector-class.png" title="sampleConnector class" width="300" alt="sampleConnector class"/>

The class would look similar to the following.

```java
import org.apache.synapse.MessageContext;
import org.wso2.carbon.connector.core.AbstractConnector;
import org.wso2.carbon.connector.core.ConnectException;

public class sampleConnector extends AbstractConnector {

    @Override
    public void connect(MessageContext messageContext) throws ConnectException {
        // Read the parameter passed in from the template sequence.
        Object templateParam = getParameter(messageContext, "generated_param");
        try {
            log.info("sample sample connector received message :" + templateParam);
            // Add your connector code here.
        } catch (Exception e) {
            throw new ConnectException(e);
        }
    }
}
```

This class is invoked by `/src/main/resources/sample/sample_template.xml` using the following code segment.

```xml
<class name="org.wso2.carbon.connector.sampleConnector" />
```

Now, let's add the component containing the `sample_template.xml` to the connector by adding the following line to `connector.xml`.

```xml
<dependency component="sample" />
```

After adding this line, the `connector.xml` should be similar to the following:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<connector>
    <component name="sample" package="org.wso2.carbon.connector">
        <dependency component="googlebooks_volume" />
        <dependency component="sample" />
        <description>wso2 sample connector library</description>
    </component>
</connector>
```

In this sample, when the `connect` method is invoked, it logs the message "sample sample connector received message : <template_param_passed>".

### Invoking the sample

1. Add a new REST API resource with the following configuration.
    * URI Style: URI_TEMPLATE
    * URI Template: /sampleTemplate
    * Methods: POST

    <img src="{{base_path}}/assets/img/integrate/connectors/rest-api-resource.png" title="REST API resource" width="700" alt="REST API resource"/>

2. Drag and drop the sample_template operation as indicated below, and configure the generated_param expression as `json-eval($.generatedParam)`.

    <img src="{{base_path}}/assets/img/integrate/connectors/sample-template-operation.png" title="Sample template operation" width="500" alt="Sample template operation"/>

    The API resource would now look similar to the following:

    ```xml
    <resource methods="POST" uri-template="/sampleTemplate">
        <inSequence>
            <sample.sample_template>
                <generated_param>{json-eval($.generatedParam)}</generated_param>
            </sample.sample_template>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
    ```

3. Run the project in the Micro Integrator as done previously and invoke http://localhost:8290/sample/sampleTemplate with the following payload.

    ```json
    {
        "generatedParam": "Hello World"
    }
    ```

    <img src="{{base_path}}/assets/img/integrate/connectors/sample-template-payload.png" title="Sample template payload" width="300" alt="Sample template payload"/>

**AbstractConnector class** - Any Java class invoked from a template sequence must extend the `AbstractConnector` class and override the `connect()` method. The logic to be invoked must be inside the `connect()` method.

**Invoking the Java class** - The Java class must be invoked from the template sequence using the following syntax.
```xml
<class name="org.wso2.carbon.connector.sampleConnector" />
```

> **Note**: The class should not contain class-level variables, as they introduce concurrency issues during message mediation.

## Connection Handling

In connectors, we often need to establish connections with third-party applications or maintain connection configurations. This is done using the `init` operation, which is typically invoked before any other operation is performed.

This is a hidden operation, and it is not mandatory for all connectors to implement it.

In the latest connector versions, connections are abstracted into local entries by configuring the `init` operation in a local entry. The local entry is then linked to the connector operations, which allows users to maintain multiple connection entries and to configure which connection is used for each operation.

For example, the following is a connection created for the email operations.

**Local Entry containing the `init` operation**
```xml
<?xml version="1.0" encoding="UTF-8"?>
<localEntry key="imapsconnection" xmlns="http://ws.apache.org/ns/synapse">
    <email.init>
        <host>imap.gmail.com</host>
        <port>993</port>
        <name>imapsconnection</name>
        <username></username>
        <password></password>
        <connectionType>IMAPS</connectionType>
        <maxActiveConnections>4</maxActiveConnections>
    </email.init>
</localEntry>
```

**Operation invoked in the mediation flow**
```xml
<email.list configKey="imapsconnection">
    <subjectRegex>{json-eval($.subjectRegex)}</subjectRegex>
</email.list>
```

Here, the `init` operation is configured using the `configKey` attribute. When the `configKey` attribute is configured, the `init` operation defined in the local entry with the relevant key name is invoked before the operation itself.

### SaaS Connectors

SaaS connectors, in which the logic is implemented using pure integration constructs, often use OAuth 2.0 for authentication.
Connector core provides the capability to handle access tokens and refresh expired tokens.

#### Authentication Mechanism using Refresh Token

In previous connector versions, expiry-based access token refreshing was preferred for connections. In the latest versions, access token refreshing is retry-based: when an endpoint is called with the current access token and a 4XX HTTP response code is returned, the token is refreshed using the refresh token and the call is reattempted. If the second call also fails, the failure message is passed back to the client.

The following template can be used to implement this.
```xml
<template name="callWithRetry" xmlns="http://ws.apache.org/ns/synapse">
    <sequence>
        <filter source="boolean($ctx:uri.var.refreshToken)" regex="true">
            <then>
                <filter source="$ctx:httpMethod" regex="(post|patch)">
                    <enrich>
                        <source clone="true" type="body"/>
                        <target property="ORIGINAL_MSG_PAYLOAD" type="property"/>
                    </enrich>
                </filter>
                <property name="uri.var.accessToken.reg"
                          expression="get-property('registry', $ctx:uri.var.accessTokenRegistryPath)"/>
                <header name="Authorization"
                        expression="fn:concat('Bearer ', $ctx:uri.var.accessToken.reg )"
                        scope="transport"/>
                <salesforcerest.callOptions/>
                <property name="httpCode" expression="$axis2:HTTP_SC" scope="default" type="STRING"/>
                <filter source="$ctx:httpCode" regex="4[0-9][0-9]">
                    <then>
                        <class name="org.wso2.carbon.connector.core.RefreshAccessToken"/>
                        <header name="Authorization"
                                expression="fn:concat('Bearer ', $ctx:uri.var.accessToken )"
                                scope="transport"/>
                        <filter source="$ctx:httpMethod" regex="(post|patch)">
                            <enrich>
                                <source clone="true" property="ORIGINAL_MSG_PAYLOAD" type="property"/>
                                <target type="body"/>
                            </enrich>
                        </filter>
                        <salesforcerest.callOptions/>
                    </then>
                </filter>
            </then>
            <else>
                <header name="Authorization"
                        expression="fn:concat('Bearer ', $ctx:uri.var.accessToken )"
                        scope="transport"/>
                <salesforcerest.callOptions/>
            </else>
        </filter>
    </sequence>
</template>
```

For example, see [the sample code](https://github.com/wso2-extensions/esb-connector-salesforcerest/blob/master/src/main/resources/salesforcerest-config/callWithRetry.xml).

Here, `salesforcerest.callOptions` contains the call mediators defined for the HTTP methods GET, POST, DELETE, etc. (For example, [see the code](https://github.com/wso2-extensions/esb-connector-salesforcerest/blob/master/src/main/resources/salesforcerest-config/callOptions.xml).)

There are two class mediators made available in carbon-mediation for refreshing the access token. See the [related pull request](https://github.com/wso2/carbon-mediation/pull/1423) for more information.

1. **RefreshAccessToken.java** - In the above template, you can see this class being invoked using the following line.
    ```xml
    <class name="org.wso2.carbon.connector.core.RefreshAccessToken"/>
    ```

2. **RefreshAccessTokenWithExpiry.java** - This class can be used if you need a periodic refresh of access tokens. It can be invoked as follows:
    ```xml
    <class name="org.wso2.carbon.connector.core.RefreshAccessTokenWithExpiry"/>
    ```

### Technology Connectors

In technology connectors, where the logic is implemented in Java, we often need to maintain the connections that are made, such as email connections and Kafka connections.
The connection can be created and configured when the `init` operation is invoked, and maintained for use across operations.

To handle this, the connector core provides a connection handler. It also provides a generic connection pool that can maintain a connection pool for each connector.

When implementing a connection for a connector, the connection must implement the `Connection` class. For more information, see the [code](https://github.com/wso2/carbon-mediation/blob/master/components/mediation-connector/org.wso2.carbon.connector.core/src/main/java/org/wso2/carbon/connector/core/connection/Connection.java).

For example, see [the Java code](https://github.com/wso2-extensions/esb-connector-email/blob/master/src/main/java/org/wso2/carbon/connector/connection/EmailConnection.java).

#### Connection Handler

The Connection Handler contains a map that maintains connections/connection pools. It provides the following methods.

<table>
    <tr>
        <th>Method</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>createConnection(String connector, String connectionName, Connection connection)</td>
        <td>Puts the connection into the connection map. No pooling.</td>
    </tr>
    <tr>
        <td>createConnection(String connector, String connectionName, ConnectionFactory factory, Configuration configuration)</td>
        <td>Creates a connection pool. To create a connection pool, the <a href="https://github.com/wso2/carbon-mediation/blob/master/components/mediation-connector/org.wso2.carbon.connector.core/src/main/java/org/wso2/carbon/connector/core/pool/ConnectionFactory.java">ConnectionFactory</a> class must be implemented, as done in the <a href="https://github.com/wso2-extensions/esb-connector-email/blob/master/src/main/java/org/wso2/carbon/connector/connection/EmailConnectionFactory.java">EmailConnectionFactory</a>. This specifies how the connections are created.</br>
        </br>
        Configurations of the connection pool must be set in the <a href="https://github.com/wso2/carbon-mediation/blob/master/components/mediation-connector/org.wso2.carbon.connector.core/src/main/java/org/wso2/carbon/connector/core/pool/Configuration.java">Configuration</a> object to be passed.
        </td>
    </tr>
    <tr>
        <td>getConnection(String connector, String connectionName)</td>
        <td>Retrieves the connection for the relevant connector.</td>
    </tr>
    <tr>
        <td>returnConnection(String connector, String connectionName, Connection connection)</td>
        <td>Returns the connection back to the pool.</td>
    </tr>
    <tr>
        <td>shutdownConnections()</td>
        <td>Shuts down all connections in the connection handler.</td>
    </tr>
    <tr>
        <td>shutdownConnections(String connector)</td>
        <td>Shuts down all connections of the relevant connector.</td>
    </tr>
    <tr>
        <td>checkIfConnectionExists(String connector, String connectionName)</td>
        <td>Checks whether a connection exists under the given connection name for the relevant connector.</td>
    </tr>
</table>

## Utilities

The connector core acts as an SDK for connector development. It is added as a dependency to the connector project automatically via the connector archetype. It is advisable to use the utilities in the connector core whenever possible when you need to extend connector operation functionality.

Below are some of the utilities provided by the [connector core](https://github.com/wso2/carbon-mediation/tree/master/components/mediation-connector/org.wso2.carbon.connector.core).

### Handling expired Access Tokens

**RefreshAccessToken** - This is a class mediator that you can use to refresh your access token.
When invoked, this class mediator calls the token refresh URL with a GET request, reads the `access_token` from the response, sets it to a property, and saves it to the governance registry for reuse. See the [code](https://github.com/wso2/carbon-mediation/blob/master/components/mediation-connector/org.wso2.carbon.connector.core/src/main/java/org/wso2/carbon/connector/core/RefreshAccessToken.java).

**RefreshAccessTokenWithExpiry** - This is a class mediator used for refreshing access tokens, similar to the above. However, it does not invoke the refresh endpoint right away. Whenever this class mediator is called, it checks whether a pre-agreed time limit has passed; if it has, it calls the refresh endpoint to get a new access token. See the [code](https://github.com/wso2/carbon-mediation/blob/master/components/mediation-connector/org.wso2.carbon.connector.core/src/main/java/org/wso2/carbon/connector/core/RefreshAccessTokenWithExpiry.java).

You can also extend these two classes to change the behavior if the refresh endpoint of the particular SaaS application behaves differently. Add the child class to the connector project under `java/<appropriate_package>` and refer to these local class mediators instead.

### Connection Handling

See the above section on Connection Handling.

### Read template parameters

Template parameters can be read using the `lookupTemplateParamater(MessageContext ctxt, String paramName)` method in `ConnectorUtils`, as indicated below.

```java
ConnectorUtils.lookupTemplateParamater(messageContext, "param")
```

### Read connection pool parameters

Connection pool parameters can be parsed from the template parameters and set to the Configuration object using the `getPoolConfiguration(MessageContext messageContext)` method in `ConnectorUtils`, as indicated below.

```java
ConnectorUtils.getPoolConfiguration(messageContext)
```

### Handling payloads

The following methods in the `PayloadUtils` class can be used for payload building and transformations.

<table>
    <tr>
        <th>Methods</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>setContent(MessageContext messageContext, InputStream inputStream, String contentType)</td>
        <td>Builds content according to the given content type and sets it in the message body</td>
    </tr>
    <tr>
        <td>handleSpecialProperties(String contentType, MessageContext axis2MessageCtx)</td>
        <td>Changes the content type and handles other headers</td>
    </tr>
    <tr>
        <td>preparePayload(MessageContext messageContext, String xmlString)</td>
        <td>Converts the XML string to an XML element and sets it in the message context</td>
    </tr>
    <tr>
        <td>setPayloadInEnvelope(MessageContext axis2MsgCtx, OMElement payload)</td>
        <td>Sets the OMElement in the message context</td>
    </tr>
</table>

## Best practices

**Use functionalities available in Connector Core**
Every connector depends on the [WSO2 Connector Core](https://github.com/wso2/carbon-mediation/tree/master/components/mediation-connector/org.wso2.carbon.connector.core), which acts as the interface between the mediation engine and the connector implementation. It is the SDK provided to develop connectors, and it already offers connection pooling, OAuth-based authentication, and JSON and XML utilities.
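
For illustration, the following is a minimal sketch of an operation class that leans on these Connector Core utilities rather than hand-rolled plumbing. The class, the operation, and the `name` parameter are hypothetical, and the import paths are assumed to match the Connector Core module linked above; `lookupTemplateParamater` is spelled as it appears in `ConnectorUtils`.

```java
import org.apache.synapse.MessageContext;
import org.wso2.carbon.connector.core.AbstractConnector;
import org.wso2.carbon.connector.core.ConnectException;
import org.wso2.carbon.connector.core.util.ConnectorUtils;

public class GreetOperation extends AbstractConnector {

    @Override
    public void connect(MessageContext messageContext) throws ConnectException {
        try {
            // Read the template parameter through Connector Core instead of
            // resolving it from the message context manually.
            String name = (String) ConnectorUtils.lookupTemplateParamater(messageContext, "name");
            // Use the connector name as the log prefix so its logs are easy to find.
            log.info("greet connector received name: " + name);
            // The actual operation logic goes here.
        } catch (Exception e) {
            throw new ConnectException(e);
        }
    }
}
```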
**Never use class-level variables when you extend the `AbstractConnector` class**
The `connect` method of this class must be stateless, as multiple threads access it at the same time (e.g., [Email Send](https://github.com/wso2-extensions/esb-connector-email/blob/master/src/main/java/org/wso2/carbon/connector/operations/EmailSend.java)). For the same reason, avoid using class-level variables to assign and keep values, as that makes the method stateful.

**Add DEBUG and TRACE logs when required**
This is extremely useful in production. It is always advisable to add the required DEBUG and TRACE logs when extended Java logic is written. Developers can also add debug and trace logs to sequence templates using the Log mediator. In both cases, make sure to use the connector name as a prefix; otherwise, it is hard to identify the logs related to the connector when the runtime has multiple connectors deployed.

```xml
<log category="DEBUG" level="custom">
    <property name="message" value="This is a debug log"/>
</log>
```

**Add meaningful comments to the code**
This helps other developers read through and understand the implementation. In sequence templates, developers can use XML-based comments.

```xml
    <!-- Calling test EP to obtain key required for further mediation-->
    <call>
        <endpoint key="testEP"/>
    </call>
```

**Group operations for readability**
If the connector has many operations, instead of adding templates for all the operations at the same level, developers can group them into folders for easy navigation and readability (e.g., the [Dayforce connector](https://github.com/wso2-extensions/esb-connector-dayforce/tree/master/src/main/resources)).

**Define private templates and reuse. Do not duplicate logic across templates**
Developers may define a template with the `<hidden>true</hidden>` property in the `component.xml` related to the template ([example component.xml](https://github.com/wso2-extensions/esb-connector-email/blob/master/src/main/resources/config/component.xml)). Such a template is not presented as a connector operation when rendered in WSO2 Integration Studio. It is a private template that you can refer to when constructing logic in other templates. This provides a way to keep reusable logic inside the connector for easy maintenance. See the [example](https://github.com/niruhan/esb-connector-salesforcerest/tree/master/src/main/resources/salesforcerest-config) for more information.

**Use the Property Group mediator if there are a lot of properties to define**
Some operations need to define a number of properties together. When you use WSO2 Integration Studio to develop the logic, this makes the sequence template render in a lengthy manner in the UI, which makes it harder to navigate. To prevent this, and to make the XML definition more readable, you can group properties together using the [Property Group mediator]({{base_path}}/reference/mediators/property-group-mediator/).

**Use the `$ctx:` syntax instead of `get-property()` when reading properties**
When you use the [Property mediator]({{base_path}}/reference/mediators/property-mediator/) to read properties, always use the `$ctx:` syntax, as it delivers better performance. Make sure to use properties in the correct scope.

**Avoid old mediators**
Do not use mediators such as `<send/>` and `<loopback/>` in sequence templates. They exist only for backward compatibility. Always stick to mediators such as `<call/>` and `<respond/>`.
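
As a quick illustration of this recommendation, here is a minimal sketch of an API resource that invokes a backend with `<call/>` and returns the response with `<respond/>`, with no `<send/>` or `<loopback/>` involved; `testEP` and the URI template are placeholders.

```xml
<resource methods="GET" uri-template="/volumes">
    <inSequence>
        <!-- Call the backend; the response continues down this same sequence -->
        <call>
            <endpoint key="testEP"/>
        </call>
        <!-- Return the backend response directly to the client -->
        <respond/>
    </inSequence>
</resource>
```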
**Timeout configurations for connections**
Connection timeout is an environment-dependent configuration. Developers may define a default value; however, it should be available for users to configure. If it is a technology connector, the timeout is a configuration of the connection. If it is a SaaS connector, the developer needs to template it so that it can be passed to the `<call>` mediator. For more information, see [this example](https://github.com/wso2-extensions/esb-connector-salesforcerest/blob/df72e90af3781f995186ccb79ecfcb8ba71fe866/src/main/resources/salesforcerest-config/callOptions.xml#L32).

**Handle errors meaningfully. Use ERROR CODES**
Sometimes errors should be handled within the connector, sometimes the calling template should handle them, and sometimes the error message should be forwarded as-is to whoever invoked the connector operation. It is good to analyze the use cases and then design which errors need to be handled at which point. In any case, it is good practice to define and use error codes.

Please read the [WSO2 error code guide]({{base_path}}/reference/error_handling/).

**Write test cases**
Add test cases that cover each connector operation so that regressions are caught when the connector is rebuilt.

## Input and Output schema

Input and output schemas can be defined for connector operations so that a [Data Mapper mediator]({{base_path}}/reference/mediators/data-mapper-mediator/) can be used to easily transform the payloads required for each operation.

These schemas are placed inside `/resources`, under the `input_schema` and `output_schema` folders.

### Input schema

Maps the input format required for the operation. For example:

Operation

```xml
<template xmlns="http://ws.apache.org/ns/synapse" name="sample">
    <parameter name="param" description="Sample parameter."/>
    <sequence>
        <property name="param" expression="$func:param"/>
    </sequence>
</template>
```

Input Schema

```json
{
   "$schema":"http:\/\/wso2.org\/json-schema\/wso2-data-mapper-v5.0.0\/schema#",
   "id":"http:\/\/wso2jsonschema.org",
   "title":"root",
   "type":"object",
   "properties":{
      "source":{
         "id":"http:\/\/wso2jsonschema.org\/param",
         "type":"string"
      }
   }
}
```

### Output schema

Maps the output format of the operation.

Output Schema
```json
{
   "$schema":"http:\/\/wso2.org\/json-schema\/wso2-data-mapper-v5.0.0\/schema#",
   "id":"http:\/\/wso2jsonschema.org",
   "title":"result",
   "type":"object",
   "properties":{
      "success":{
         "id":"http:\/\/wso2jsonschema.org\/success",
         "type":"boolean"
      }
   }
}
```

## The UI schema

To support the properties window of WSO2 Integration Studio (version 7.1.0 onwards) shown below, a UI schema should be defined for each operation. If this schema is present in the connector, the properties panel is generated automatically from its information when the connector is imported into Integration Studio.

<img src="{{base_path}}/assets/img/integrate/connectors/ui-schema.png" title="UI schema" width="500" alt="UI schema"/>

When adding the UI model to the connector, the JSON files containing the schema should be included in a directory called `uischema` under the resources directory.

<img src="{{base_path}}/assets/img/integrate/connectors/ui-schema-directory.png" title="UI schema directory" width="300" alt="UI schema directory"/>

Let us go through the constructs available in the UI schema.

### Connection

In previous versions, the connector was initialized using the `init` operation.
In the latest connector versions, the `init` operation, which initializes the connector, is created as a local entry and then referenced from Integration Studio itself.

This operation is referred to as a 'connection' in UI schema terminology. Here, we define the fields that are required to initialize the connection of a connector.

The connection schema should be created in a separate file. As a convention, the name of the file should be the name of the connection.

The schema of a connection is as follows.

```json
{
  "connectorName": "email",
  "connectionName": "IMAP",
  "title": "IMAP Connection",
  "help": "<h1>Email Connector</h1> <b>The email connector supports IMAP, POP3 and SMTP protocols for handling emails</b>",
  "elements": []
}
```

<table>
    <tr>
        <th>Property Name</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>connectorName</td>
        <td>Name of the connector</td>
    </tr>
    <tr>
        <td>connectionName</td>
        <td>Unique name for the connection</td>
    </tr>
    <tr>
        <td>title</td>
        <td>Title of the connection to be shown</td>
    </tr>
    <tr>
        <td>help</td>
        <td>Help tip to be shown</td>
    </tr>
    <tr>
        <td>elements</td>
        <td>Field elements of the connection</td>
    </tr>
</table>

### Operation

A connector operation is portrayed in the new Integration Studio connector view as shown below.

<img src="{{base_path}}/assets/img/integrate/connectors/connection-operation.png" title="Connection operation" width="700" alt="Connection operation"/>

The operation schema for each operation should be created in a separate file. As a convention, the name of the file should be the name of the operation.

The schema of an operation is as follows.

```json
{
  "connectorName": "email",
  "operationName": "send",
  "title": "Send Email",
  "help": "<h1>Send Email</h1> <b>The send operation sends an email.</b><br><br><ul><li><a href=\"https://ei.docs.wso2.com/en/latest/micro-integrator/reference/connectors/file-connector/file-connector-config/\"> More Help </a></li></ul>",
  "elements": []
}
```

<table>
    <tr>
        <th>Property Name</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>connectorName</td>
        <td>Name of the connector</td>
    </tr>
    <tr>
        <td>operationName</td>
        <td>Unique name for the operation</td>
    </tr>
    <tr>
        <td>title</td>
        <td>Title of the operation to be shown</td>
    </tr>
    <tr>
        <td>help</td>
        <td>Help tip to be shown</td>
    </tr>
    <tr>
        <td>elements</td>
        <td>Field elements of the operation</td>
    </tr>
</table>

### Elements

The following is an element definition.
- -```json -{ - "type": "attribute", - "value": { - "name": "from", - "displayName": "From", - "inputType": "stringOrExpression", - "defaultValue": "", - "required": "false", - "helpTip": "The 'From' address of the message sender" - } -} -``` - -#### Types of Elements - -**Attribute** -```json -{ - "type": "attribute", - "value": { - "name": "from", - "displayName": "From", - "inputType": "stringOrExpression", - "defaultValue": "", - "required": "false", - "helpTip": "The 'From' address of the message sender" - } -} -``` - -<table> - <tr> - <th>Property Name</th> - <th>Description</th> - </tr> - <tr> - <td>type</td> - <td>Type of the element</td> - </tr> - <tr> - <td>value</td> - <td>Value of the element</td> - </tr> - <tr> - <td>name</td> - <td>Name of the element</td> - </tr> - <tr> - <td>displayName</td> - <td>Display name to be shown</td> - </tr> - <tr> - <td>inputType</td> - <td>Field type for the attribute (stringOrExpression, booleanOrExpression, textOrExpression).</td> - </tr> - <tr> - <td>required</td> - <td>Whether the field is a mandatory field or not</td> - </tr> - <tr> - <td>helpTip</td> - <td>Help tip to be shown</td> - </tr> -</table> - -**Attribute Group** - -Grouping multiple attributes together - -```json -{ - "type": "attributeGroup", - "value": { - "groupName": "Basic", - "elements": [] - } -} -``` - -<table> - <tr> - <th>Property Name</th> - <th>Description</th> - </tr> - <tr> - <td>type</td> - <td>Type of the element</td> - </tr> - <tr> - <td>value</td> - <td>Value of the element</td> - </tr> - <tr> - <td>groupName</td> - <td>Name of the group</td> - </tr> - <tr> - <td>elements</td> - <td>Elements in the group</td> - </tr> -</table> - -When defining an attribute for the connection to be used in an operation, the following format should be used. - -```json -{ - "type": "attribute", - "value": { - "name": "configRef", - "displayName": "Connection", - "inputType": "connection", - "allowedConnectionTypes": [ - "SMTP", - "SMTPS" - ], - "defaultType": "connection.smtp", - "defaultValue": "", - "required": "true", - "helpTip": "Connection to be used" - } -} -``` - -Additional parameters to be added. - -<table> - <tr> - <th>Property Name</th> - <th>Description</th> - </tr> - <tr> - <td>allowedConnectionTypes</td> - <td>Names of the connection types to be used for the operation. 
The name should correspond to the “connectionName” attribute in the connection schema.</td> - </tr> - <tr> - <td>defaultType</td> - <td>Default connection type to be used.</td> - </tr> -</table> - -### Samples - -**Connection** - imap.json -```json -{ - "connectorName": "email", - "connectionName": "IMAP", - "title": "IMAP Connection", - "help": "<h1>Email Connector</h1> <b>The email connector supports IMAP, POP3 and SMTP protocols for handling emails</b>", - "elements": [ - { - "type": "attribute", - "value": { - "name": "connectionName", - "displayName": "Connection Name", - "inputType": "string", - "defaultValue": "EMAIL_CONNECTION_1", - "required": "true", - "helpTip": "The name for the email connection", - "validation": "nameWithoutSpecialCharactors" - } - }, - { - "type": "attributeGroup", - "value": { - "groupName": "General", - "elements": [ - { - "type": "attributeGroup", - "value": { - "groupName": "Basic", - "elements": [ - { - "type": "attribute", - "value": { - "name": "host", - "displayName": "Host", - "inputType": "stringOrExpression", - "defaultValue": "", - "required": "true", - "helpTip": "Host name of the mail server" - } - } - ] - } - } - ] - } - }, - { - "type": "attributeGroup", - "value": { - "groupName": "Advanced", - "elements": [ - { - "type": "attribute", - "value": { - "name": "readTimeout", - "displayName": "Read Timeout", - "inputType": "stringOrExpression", - "defaultValue": "", - "required": "false", - "helpTip":"The socket read timeout value" - } - } - ] - } - } - ] -} -``` - -**Operation** - send.json -```json -{ - "connectorName": "email", - "operationName": "send", - "title": "Send Email", - "help": "<h1>Send Email</h1> <b>The send operation sends an email.</b><br><br><ul><li><a href=\"https://apim.docs.wso2.com/en/latest/reference/connectors/file-connector/file-connector-config/\"> More Help </a></li></ul>", - "elements": [ - { - "type": "attributeGroup", - "value": { - "groupName": "General", - "elements": [ - { - "type": "attribute", - "value": { - "name": "configRef", - "displayName": "Connection", - "inputType": "connection", - "allowedConnectionTypes": [ - "SMTP", - "SMTPS" - ], - "defaultType": "connection.smtp", - "defaultValue": "", - "required": "true", - "helpTip": "Connection to be used" - } - }, - { - "type": "attributeGroup", - "value": { - "groupName": "Basic", - "elements": [ - { - "type": "attribute", - "value": { - "name": "from", - "displayName": "From", - "inputType": "stringOrExpression", - "defaultValue": "", - "required": "false", - "helpTip": "The 'From' address of the message sender" - } - } - ] - } - }, - { - "type": "attributeGroup", - "value": { - "groupName": "Advanced", - "elements": [ - { - "type": "attribute", - "value": { - "name": "contentType", - "displayName": "Content Type", - "inputType": "stringOrExpression", - "defaultValue": "text/html", - "required": "false", - "helpTip": "Content Type of the body" - } - } - ] - } - } - ] - } - } - ] -} -``` - -## Icons - -Icons for the connector must be added to the icon folder under the root folder of the connector. - -<img src="{{base_path}}/assets/img/integrate/connectors/icon-folder.png" title="Icon folder" width="300" alt="Icon folder"/> - -The icon names are icon-large(72x80) and icon-small(25x25) and they should be in .png format. 
diff --git a/en/docs/reference/connectors/documentum/documentum-example.md b/en/docs/reference/connectors/documentum/documentum-example.md deleted file mode 100644 index 51913c1944..0000000000 --- a/en/docs/reference/connectors/documentum/documentum-example.md +++ /dev/null @@ -1,228 +0,0 @@
# Documentum Connector Example

The Documentum Connector can be used to perform operations on the OpenText Documentum enterprise content management system.

## What you'll build

This example explains how to use the Documentum Connector to create a folder and retrieve cabinet details from Documentum. The user sends a payload with the repository name, the parent folder ID, and the name of the folder to be created. The connector then communicates with Documentum to create a folder under the parent folder in the specified cabinet.

The example consists of an API named Documentum API with the following resources.

**Create Folder**
/createfolder: The user sends a request payload that includes the repository, the parent folder ID, and the folder name. This request is sent to the integration runtime by invoking the Documentum API, which creates the new folder in Documentum under the given parent folder.

**Get Cabinets**
/getcabinets: The user sends a request payload containing the repository name to list the cabinets present under it in Documentum. This request is sent to the integration runtime where the Documentum Connector API resides. Once the API is invoked, it returns the list of cabinets.

**Create Document**
/createdocument: The user sends a request payload that includes the folder ID and the document name. This request is sent to the integration runtime where the Documentum Connector API resides. Once the API is invoked, it creates the new document in Documentum under the given folder ID.

The following diagram shows the overall solution.

<img src="{{base_path}}/assets/img/integrate/connectors/documentum-example.png" title="Documentum connector example" width="400" alt="Documentum connector example"/>

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Configure the connector in WSO2 Integration Studio

Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources.

### Import the connector

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

Now the connector is added to the palette.

### Configure a Proxy Service

1. Right-click on the project and go to **New** -> **Proxy Service**.

2. Select **Create A New Proxy Service** from the options that appear and click **Next**.

3. Provide a name for the proxy service and click **Finish**.

4. Add the following configuration to the source view of the project.
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy name="documentum_createfolder" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
        <target>
            <inSequence>
                <property expression="json-eval($.repo)" name="repo" scope="default" type="STRING"/>
                <property expression="json-eval($.objectcodeID)" name="objectcode" scope="default" type="STRING"/>
                <property expression="json-eval($.foldername)" name="foldername" scope="default" type="STRING"/>
                <documentum.createfolder>
                    <repo>{$ctx:repo}</repo>
                    <objectcode>{$ctx:objectcode}</objectcode>
                    <foldername>{$ctx:foldername}</foldername>
                </documentum.createfolder>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </target>
    </proxy>
    ```

You can see the newly added connector in the design palette.

<img src="{{base_path}}/assets/img/integrate/connectors/documentum-proxy.png" title="Documentum proxy" width="800" alt="Documentum proxy"/>

### Configure the connection and create folder operation

1. Apply the following configurations to initialize the connector.

    <img src="{{base_path}}/assets/img/integrate/connectors/documentum-connection.png" title="Documentum connection" width="800" alt="Documentum connection"/>

2. Apply the following configurations to set up the `create folder` operation.

    <img src="{{base_path}}/assets/img/integrate/connectors/documentum-create-folder.png" title="Documentum create folder" width="800" alt="Documentum create folder"/>

Now we can export the imported connector and the API into a single CAR application. The CAR application is what we deploy to the server runtime.

{!includes/reference/connectors/exporting-artifacts.md!}

Now the exported CApp can be deployed in the integration runtime so that we can run and test it.

## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/googlepubsub-connector.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

!!! tip
    You may need to update the simulator details and make other such changes before deploying and running this project.

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

**Create folder operation**

1. Open Postman and use a POST operation with the following sample request payload, then click **Send**. Note that the key names must match the `json-eval` expressions in the proxy service above.

    ```json
    {
        "repo":"doctest",
        "objectcodeID":"0b0277b68004e7dd",
        "foldername":"demo"
    }
    ```

2. You will see the following sample response payload.
```json
{
    "name":"folder",
    "type":"dm_folder",
    "definition":"http://183.28.254.179:8078/documentum-rest-web-16.7.0000.0076/repositories/doctest/types/dm_folder",
    "properties": {
        "object_name":"testchildfolder",
        "r_object_type":"dm_folder",
        "title":"",
        "subject":"",
        "authors": null,
        "keywords": null,
        "a_application_type": "",
        "a_status": "",
        "r_creation_date": "2020-07-17T04:30:56.000+00:00",
        "r_modify_date": "2020-07-17T04:30:56.000+00:00",
        "r_modifier": "appowner",
        "r_access_date": null,
        "a_is_hidden": false,
        "i_is_deleted": false,
        "a_retention_date": null,
        "a_archive": false,
        "a_compound_architecture": "",
        "a_link_resolved": false,
        "i_reference_cnt": 1,
        "i_has_folder": true,
        "i_folder_id": [
            "0b0277b68004e7dd"
        ],
        "r_composite_id": null,
        "r_composite_label": null,
        "r_component_label": null,
        "r_order_no": null,
        "r_link_cnt": 0,
        "r_link_high_cnt": 0,
        "r_assembled_from_id": "0000000000000000",
        "r_frzn_assembly_cnt": 0,
        "r_has_frzn_assembly": false,
        "resolution_label": "",
        "r_is_virtual_doc": 0,
        "i_contents_id": "0000000000000000",
        "a_content_type": "",
        "r_page_cnt": 0,
        "r_content_size": 0,
        "a_full_text": true,
        "a_storage_type": "",
        "i_cabinet_id": "0c0277b68002952c",
        "owner_name": "appowner",
        "owner_permit": 7,
        "group_name": "docu",
        "group_permit": 5,
        "world_permit": 4,
        "i_antecedent_id": "0000000000000000",
        "i_chronicle_id": "0b0277b6800584f7",
        "i_latest_flag": true,
        "r_lock_owner": "",
        "r_lock_date": null,
        "r_lock_machine": "",
        "log_entry": "",
        "r_version_label": [
            "1.0",
            "CURRENT"
        ],
        "i_branch_cnt": 0,
        "i_direct_dsc": false,
        "r_immutable_flag": false,
        "r_frozen_flag": false,
        "r_has_events": false,
        "acl_domain": "appowner",
        "acl_name": "dm_450277b680000101",
        "a_special_app": "",
        "i_is_reference": false,
        "r_creator_name": "appowner",
        "r_is_public": true,
        "r_policy_id": "0000000000000000",
        "r_resume_state": 0,
        "r_current_state": 0,
        "r_alias_set_id": "0000000000000000",
        "a_effective_date": null,
        "a_expiration_date": null,
        "a_publish_formats": null,
        "a_effective_label": null,
        "a_effective_flag": null,
        "a_category": "",
        "language_code": "",
        "a_is_template": false,
        "a_controlling_app": "",
        "r_full_content_size": 0.0,
        "a_extended_properties": null,
        "a_is_signed": false,
        "a_last_review_date": null,
        "i_retain_until": null,
        "r_aspect_name": null,
        "i_retainer_id": null,
        "i_partition": 0,
        "i_is_replica": false,
        "i_vstamp": 0,
        "r_folder_path": [
            "/WSO2 Connector/Sample/testchildfolder"
        ],
        "i_ancestor_id": [
            "0b0277b6800584f7",
            "0b0277b68004e7dd",
            "0c0277b68002952c"
        ],
        "r_object_id":
"0b0277b6800584f7" - } - } -``` diff --git a/en/docs/reference/connectors/documentum/documentum-overview.md b/en/docs/reference/connectors/documentum/documentum-overview.md deleted file mode 100644 index 9c9a01eeff..0000000000 --- a/en/docs/reference/connectors/documentum/documentum-overview.md +++ /dev/null @@ -1,53 +0,0 @@ -# Documentum Overview - -Content, in a broad sense, is information stored as computer data files. It can include word processing, spreadsheet, graphics, video and audio files. Most content is stored locally on personal computers, organized arbitrarily, and only available to a single user. This means that valuable data is subject to loss, and projects are subject to delay when people cannot get the information they need. - -The best way to protect these important assets is to move them to a centralized content management system. Documentum is one of the software that does all with the help of API calls. - -The Documentum Connector allows you to do the following operations. - -* Create folder -* Find folder -* Delete folder -* Create document -* Find document -* Delete document -* Create cabinets -* Find cabinet -* Delete cabinet -* Get cabinet -* Create acl -* Apply acl -* Delete acl -* Get all acls -* Get all version -* Get current version -* Delete all version action to be perform - -To see the Documentum connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Documentum". **Documentum Connector** is the name of the connector that has this functionality. - -<img src="{{base_path}}/assets/img/integrate/connectors/documentum-store.png" title="Documentum Connector Store" width="200" alt="Documentum Connector Store"/> - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 1.0.0 | APIM 4.0.0, EI 7.1.0, 7.0.x | - -For older versions, see the details in the connector store. - -## Documentum Connector documentation - -* **[Documentum Connector Example]({{base_path}}/reference/connectors/documentum/documentum-example/)**: This example explains how to use Documentum Connector to create folder, find folder, delete folder, create document, find document, delete document, create cabinets, find cabinet, delete cabinet, get cabinet, create acl, apply acl, delete acl, get all acls, get all version, get current version and delete all versions. - -* **[Documentum Connector Reference]({{base_path}}/reference/connectors/documentum/documentum-reference/)**: This documentation provides a reference guide for the Documentum Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Documentum Connector repository](https://github.com/wso2-extensions/esb-connector-documentum) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/documentum/documentum-reference.md b/en/docs/reference/connectors/documentum/documentum-reference.md deleted file mode 100644 index e46bb8ba40..0000000000 --- a/en/docs/reference/connectors/documentum/documentum-reference.md +++ /dev/null @@ -1,648 +0,0 @@ -# Documentum Connector Reference - -The following operations allow you to work with the Documentum Connector. Find an operation name to see parameter details and samples on how to use it. - -??? 
note "Create Folder" - The Create Folder operation enables you to create a new folder in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>objectcodeID</td> - <td>The Documentum Object Code ID for the parent folder. For example, "0c0277b68002952c"</td> - <td>Yes.</td> - </tr> - <tr> - <td>foldername</td> - <td>The Folder name to be given which is user Specific. For example, “Sample123”</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "objectcodeID":"0c0277b68002952c", - "foldername":" Sample123" - } - ``` - -??? note "Find Folder" - The Find Folder operation is used to find the folder. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>folderObjectID</td> - <td>The Folder object ID of the Documentum to find. For example, “0b0277b680048998”</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "folderObjectID":"0b0277b680048998" - } - ``` - -??? note "Delete Folder" - The Delete Folder operation is used to delete the folder. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>folderObjectID</td> - <td>The Folder object ID of the documentum to be deleted. For example, “0b0277b680048998”</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "folderObjectID":"0b0277b680048998" - } - ``` - -??? note "Create Document" - The Create Document operation enables you to create a new document in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>FolderID</td> - <td>The Documentum object ID of the folder where the document has to be created. For example, "0b0277b68004e7de"</td> - <td>Yes.</td> - </tr> - <tr> - <td>object_name</td> - <td>The name of the document which is user specific. For example, “TestDoc1”</td> - <td>Yes.</td> - </tr> - <tr> - <td>a_content_type</td> - <td>The content type of the document(pdf,gif,csv,png,etc.). For example, "pdf"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "folderID":"0b0277b68004e7de", - "object_name":"TestingJul24", - "a_content_type":"pdf" - } - ``` - -??? note "Find Document" - The Find Document operation enables you to find an existing document in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>FolderID</td> - <td>The documentum object ID of the document to find . For example, "0b0277b68004e7de"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "folderID":"0b0277b68004e7de", - } - ``` - -??? 
note "Delete Document" - The Delete Document operation enables you to delete an existing document in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>FolderID</td> - <td>The documentum object ID of the document to be deleted. For example, "0b0277b68004e7de"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "folderID":"0b0277b68004e7de", - } - ``` - -??? note "Add Content to Document" - The Add Content to Document operation enables you to add some content into a document in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>objectID</td> - <td>The documentum Object ID of the document where the content to be added. For example, "090277b68002952c"</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The format of the present document(pdf,gif,png,csv,etc..). For example, "pdf".</td> - <td>Yes.</td> - </tr> - <tr> - <td>overwrite</td> - <td>Whether to Overwrite the content in the document. For example, "true" or "false".</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - Click the form data in Postman and enter the key value as indicated below and attach the file to be uploaded. - - <table> - <tr> - <th>Key</th> - <th>Value</th> - </tr> - <tr> - <td>request File</td> - <td>file123.pdf</td> - </tr> - </table> - -??? note "Create Content Full Document" - The Create Content Full Document operation is used to create a document with content in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>objectID</td> - <td>The documentum Folder Object ID where the document to be created. For example, "090277b68002952c"</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The format of the document(png,gif,pdf,csv,etc..). For example, "pdf".</td> - <td>Yes.</td> - </tr> - <tr> - <td>count</td> - <td>The no of the documents added to the content. For example, "1" or "2".</td> - <td>Yes.</td> - </tr> - <tr> - <td>primary</td> - <td>To make all data as primary content of the document. For example, "true" or "false".</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - Click the form data in Postman and enter the key value as indicated below and attach the file to be uploaded. - - <table> - <tr> - <th>Key</th> - <th>Value</th> - </tr> - <tr> - <td>objects Text</td> - <td>{"properties”: {"r_object_type":"dm_document","object_name": "TestDoc"}}</td> - </tr> - <tr> - <td>content1 File</td> - <td>document.pdf</td> - </tr> - </table> - -??? note "Get Document Content" - The Get Document Content operation is used to get content from the document. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>documentObjectID</td> - <td>The documentum object ID of the document to get content. 
For example, "090277b6800600a3"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "documentObjectID":"090277b6800600a3" - } - ``` - -??? note "Create Cabinet" - The Create Cabinet operation is used to create the cabinet. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>object_name</td> - <td>The name of the cabinet to be created which is user specific. For example, "TestCabinet6"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "object_name":"TestCabinet6" - } - ``` - -??? note "Find Cabinet" - The Find Cabinet operation is used to find the cabinet. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>cabinateObjectID</td> - <td>The documentum object ID of the cabinet to find. For example, "0c0277b68004dbc1"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "cabinateObjectID":"0c0277b68004dbc1" - } - ``` - -??? note "Delete Cabinet" - The Delete Cabinet operation is used to delete the cabinet. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>cabinateObjectID</td> - <td>The documentum object ID of the cabinet to be deleted. For example, "0c0277b68004dbc1"</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "cabinateObjectID":"0c0277b68004dbc1" - } - ``` - -??? note "Get Cabinets" - The Get Cabinets operation is used to retrieve all the cabinets. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum which list all child cabinets. For example, doctest.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest" - } - ``` - -??? note "Create ACL" - The Create ACL operation enables you to create an ACL in Documentum. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>repos</td> - <td>The name of the root repository being created in Documentum. For example, doctest.</td> - <td>Yes.</td> - </tr> - <tr> - <td>object_name</td> - <td>The name of the ACL to be created which is user specific. For example, "Testacl8"</td> - <td>Yes.</td> - </tr> - <tr> - <td>description</td> - <td>The description of the ACL for details. For example, "TestaclonJul8".</td> - <td>Yes.</td> - </tr> - <tr> - <td>owner_name</td> - <td>The name of the Application admin/other privilege to be applied on the new ACL. For example, "appowner".</td> - <td>Yes.</td> - </tr> - </table> - - **Sample request** - - ```json - { - "repos":"doctest", - "object_name":"Testacl8", - "description": "TestaclJul8", - "owner_name": "appowner" - } - ``` - -??? note "Apply ACL" - The Apply ACL operation is used to apply ACL in Documentum. 
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>objectID</td>
            <td>The Documentum object ID of the document where the ACL has to be applied. For example, "0b0277b680043127"</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>acl_name</td>
            <td>The name of the ACL to be applied. For example, "Testacl8".</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample request**

    ```json
    {
        "repos":"doctest",
        "objectID":"0b0277b680043127",
        "acl_name":"Testacl8"
    }
    ```

??? note "Delete ACL"
    The Delete ACL operation is used to delete an ACL in Documentum.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>aclObjectID</td>
            <td>The Documentum object ID of the ACL to be removed from the object (folder/document/cabinet). For example, "450277b680001d42"</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample request**

    ```json
    {
        "repos":"doctest",
        "aclObjectID":"450277b680001d42"
    }
    ```

??? note "Get All ACL"
    The Get All ACL operation is used to retrieve all ACLs in Documentum.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum, which lists all ACLs created under it. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample request**

    ```json
    {
        "repos":"doctest"
    }
    ```

??? note "Get Current Version"
    The Get Current Version operation is used to get the current version of a document in Documentum.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>chronical_id</td>
            <td>The Documentum chronicle object ID of the document. For example, "090277b680053e99"</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample request**

    ```json
    {
        "repos":"doctest",
        "chronical_id":"090277b680053e99"
    }
    ```

??? note "Delete Version"
    The Delete Version operation is used to delete a version in Documentum.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>chronical_id</td>
            <td>The Documentum chronicle object ID of the document version to be deleted. For example, "090277b680053e99"</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample request**

    ```json
    {
        "repos":"doctest",
        "chronical_id":"090277b680053e99"
    }
    ```

??? note "Find Sys Object"
    The Find Sys Object operation is used to find a system object in Documentum.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>repos</td>
            <td>The name of the root repository being created in Documentum. For example, doctest.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>objectID</td>
            <td>The Documentum object ID of any kind of object to find.
diff --git a/en/docs/reference/connectors/email-connector/email-connector-config.md b/en/docs/reference/connectors/email-connector/email-connector-config.md
deleted file mode 100644
index d6f9a17609..0000000000
--- a/en/docs/reference/connectors/email-connector/email-connector-config.md
+++ /dev/null
@@ -1,684 +0,0 @@
-# Email Connector Reference
-
-The following operations allow you to work with the Email Connector. Click an operation name to see parameter details and samples on how to use it.
-
-??? note "init"
-    The init operation configures the connection parameters used to establish a connection to the mail server.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>host</td>
-            <td>Host name of the mail server.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>port</td>
-            <td>The port number of the mail server.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>name</td>
-            <td>A unique name to identify the connection by.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>username</td>
-            <td>Username used to connect with the mail server.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>password</td>
-            <td>Password to connect with the mail server.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>connectionType</td>
-            <td>Email connection type (protocol) that should be used to establish the connection with the server. (IMAP/IMAPS/POP3/POP3S/SMTP/SMTPS).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>readTimeout</td>
-            <td>The socket read timeout value. E.g., 100000.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>connectionTimeout</td>
-            <td>The socket connection timeout value. E.g., 100000.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>writeTimeout</td>
-            <td>The socket write timeout value. E.g., 100000.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>requireTLS</td>
-            <td>Whether the connection should be established using TLS. The default value is false; therefore, for secured protocols, SSL will be used by default.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>checkServerIdentity</td>
-            <td>Whether the server identity should be checked.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>trustedHosts</td>
-            <td>Comma-separated string of trusted host names.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslProtocols</td>
-            <td>Comma-separated string of SSL protocols.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>cipherSuites</td>
-            <td>Comma-separated string of cipher suites.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxActiveConnections</td>
-            <td>Maximum number of active connections in the pool. When negative, there is no limit to the number of objects that can be managed by the pool at one time. Default is 8.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxIdleConnections</td>
-            <td>Maximum number of idle connections in the pool. When negative, there is no limit to the number of objects that may be idle at one time. Default is 8.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxWaitTime</td>
-            <td>Specifies the number of milliseconds to wait for a pooled component to become available when the pool is exhausted and the exhaustedAction is set to WHEN_EXHAUSTED_WAIT. If maxWaitTime is negative, the pool waits indefinitely. Default is -1.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>evictionCheckInterval</td>
-            <td>The number of milliseconds between runs of the object evictor. 
When non-positive, no eviction thread will be launched. The default setting for this parameter is -1</td> - <td>Optional</td> - </tr> - <tr> - <td>minEvictionTime</td> - <td>The minimum amount of time an object may sit idle in the pool before it is eligible for eviction. When non-positive, no object will be dropped from the pool due to idle time alone. This setting has no effect unless timeBetweenEvictionRunsMillis > 0. The default setting for this parameter is 30 minutes.</td> - <td>Optional</td> - </tr> - <tr> - <td>exhaustedAction</td> - <td>The behavior of the pool when the pool is exhausted. (WHEN_EXHAUSTED_FAIL/WHEN_EXHAUSTED_BLOCK/WHEN_EXHAUSTED_GROW) Default is WHEN_EXHAUSTED_FAIL.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.connection> - <host>127.0.0.1</host> - <port>465</port> - <connectionType>SMTPS</connectionType> - <name>smtpconnection</name> - <username>user1</username> - <password>user1</password> - </email.connection> - ``` - - -??? note "list" - The list operation retrieves emails matching the specified filters. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>deleteAfterRetrieve</td> - <td>Whether the email should be deleted after retrieving.</td> - <td>Optional</td> - </tr> - <tr> - <td>receivedSince</td> - <td>The date after which to retrieve received emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>receivedUntil</td> - <td>The date until which to retrieve received emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>sentSince</td> - <td>The date after which to retrieve sent emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>sentUntil</td> - <td>The date until which to retrieve sent emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>subjectRegex</td> - <td>Subject Regex to match with the wanted emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>fromRegex</td> - <td>From email address to match with the wanted emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>seen</td> - <td>Whether to retrieve 'seen' or 'not seen' emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>answered</td> - <td>Whether to retrieve 'answered' or 'unanswered' emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>deleted</td> - <td>Whether to retrieve 'deleted' or 'not deleted' emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>recent</td> - <td>Whether to retrieve 'recent' or 'past' emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>offset</td> - <td>The index from which to retrieve emails.</td> - <td>Optional</td> - </tr> - <tr> - <td>limit</td> - <td>The number of emails to be retrieved.</td> - <td>Optional</td> - </tr> - <tr> - <td>folder</td> - <td>Name of the Mailbox folder to retrieve emails from. Default is `INBOX`.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.list configKey="imapconnection"> - <subjectRegex>{json-eval($.subjectRegex)}</subjectRegex> - <seen>{json-eval($.seen)}</seen> - <answered>{json-eval($.answered)}</answered> - <deleted>{json-eval($.deleted)}</deleted> - <recent>{json-eval($.recent)}</recent> - <offset>{json-eval($.offset)}</offset> - <limit>{json-eval($.limit)}</limit> - <folder>{json-eval($.folder)}</folder> - </email.list> - ``` - - **Sample request** - - Following is a sample REST/JSON request that can be handled by the list operation. 
- ```json - { - "subjectRegex":"This is the subject", - "offset":"0", - "limit":"2", - "folder":"INBOX" - } - ``` - - **Sample response** - - The response received would contain the meta data of the email as below. - - ```json - { - "emails": { - "email": [ - { - "index": 0, - "emailID": "<1623446944.0.152334336343@localhost>", - "to": "<your-email>@gmail.com", - "from": "<your-email>@gmail.com", - "replyTo": "<your-email>@gmail.com", - "subject": "Sample email", - "attachments": { - "index": "0", - "name": "contacts.csv", - "contentType": "TEXT/CSV" - } - } - ] - } - } - ``` - - > **Note:** The index of the email can be used to retrieve the email content and attachment content using below operations. - - ??? note "getEmailBody" - - > **Note:** 'List' operation MUST be invoked prior to invoking this operation as it will retrieve the email body of the emails retrieved by the 'list' operation. - - The getEmailBody operation retrieves the email content. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>emailIndex</td> - <td>Index of the email as per above response of which to retrieve the email body and content.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.getEmailBody> - <emailIndex>0</emailIndex> - </email.getEmailBody> - ``` - - Following properties will be set in the message context containing email data. - - * EMAIL_ID: Email ID of the email. - * TO: Recipients of the email. - * FROM: Sender of the email. - * SUBJECT: Subject of the email. - * CC: CC Recipients of the email. - * BCC: BCC Recipients of the email. - * REPLY_TO: Reply to Recipients of the email. - * HTML_CONTENT: HTML content of the email. - * TEXT_CONTENT: Text content of the email. - - - ??? note "getEmailAttachment" - - > **Note:** 'List' operation MUST be invoked prior to invoking this operation as it will retrieve the attachment of the emails retrieved by the 'list' operation. - - The getEmailAttachment operation retrieves the email content. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>emailIndex</td> - <td>Index of the email as per above response of which to retrieve the email body and content.</td> - <td>Yes</td> - </tr> - <tr> - <td>attachmentIndex</td> - <td>Index of the attachment as per above response of which to retrieve the attachment content.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.getEmailAttachment> - <emailIndex>0</emailIndex> - <attachmentIndex>0</attachmentIndex> - </email.getEmailAttachment> - ``` - - Following properties will be set in the message context containing attachment data. - - * ATTACHMENT_TYPE: Content Type of the attachment. - * ATTACHMENT_NAME: Name of the attachment. - - This operation will set the content of the attachment in the message context according to its content type. - - **Sample response** - - ```csv - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><axis2ns3:text xmlns:axis2ns3="http://ws.apache.org/commons/ns/payload">id,firstname,surname,phone,email - 1,John1,Doe,096548763,john1.doe@texasComp.com - 2,Jane2,Doe,091558780,jane2.doe@texasComp.com - </axis2ns3:text></soapenv:Body></soapenv:Envelope> - - ``` - - -??? note "send" - The send operation sends an email to specified recipients with the specified content. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>from</td> - <td>The 'From' address of the message sender.</td> - <td>Yes</td> - </tr> - <tr> - <td>to</td> - <td>The recipient addresses of 'To' (primary) type.</td> - <td>Yes</td> - </tr> - <tr> - <td>personalName</td> - <td>The personal name of the message sender. This is available from Email Connector version 1.1.2 onwards.</td> - <td>Optional</td> - </tr> - <tr> - <td>cc</td> - <td>The recipient addresses of 'CC' (carbon copy) type.</td> - <td>Optional</td> - </tr> - <tr> - <td>bcc</td> - <td>The recipient addresses of 'BCC' (blind carbon copy) type.</td> - <td>Optional</td> - </tr> - <tr> - <td>replyTo</td> - <td>The email addresses to which to reply to this email.</td> - <td>Optional</td> - </tr> - <tr> - <td>subject</td> - <td>The subject of the email.</td> - <td>Optional</td> - </tr> - <tr> - <td>content</td> - <td>Body of the message in any format.</td> - <td>Optional</td> - </tr> - <tr> - <td>contentType</td> - <td>Content Type of the body text.</td> - <td>Optional</td> - </tr> - <tr> - <td>encoding</td> - <td>The character encoding of the body.</td> - <td>Optional</td> - </tr> - <tr> - <td>attachments</td> - <td>The attachments that are sent along with the email body.</td> - <td>Optional</td> - </tr> - <tr> - <td>contentTransferEncoding</td> - <td>Encoding used to indicate the type of transformation that is used to represent the body in an acceptable manner for transport.</td> - <td>Optional</td> - </tr> - </table> - - > NOTE: If there are any custom headers to be added to the email they can be set as Axis2 properties in the context with the prefix "EMAIL-HEADER:" as the property name similar to below. - ``` - <property name="EMAIL-HEADER:myProperty" value="testValue"/> - ``` - - **Sample configuration** - - ```xml - <email.send configKey="smtpconnection"> - <from>{json-eval($.from)}</from> - <to>{json-eval($.to)}</to> - <subject>{json-eval($.subject)}</subject> - <content>{json-eval($.content)}</content> - <attachments>{json-eval($.attachments)}</attachments> - </email.send> - ``` - - **Sample request** - - ```json - { - "from": "user1@gmail.com", - "to": "user2@gmail.com", - "subject": "This is the subject", - "content": "This is the body", - "attachments": "/Users/user1/Desktop/contacts.csv" - } - ``` - - > NOTE: The latest Email connector (v1.0.2 onwards) supports the attachments from JSON request payload. The connector is tested with .txt, .pdf and images (.png and .jpg). - ```json - { - "attachments": [{"name": "sampleimagefile.png", "content": "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg=="}] - } - ``` - -??? note "delete" - The delete operation deletes an email. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>emailID</td> - <td>Email ID of the email to delete.</td> - <td>Yes</td> - </tr> - <tr> - <td>folder</td> - <td>Name of the mailbox folder from which to delete the emails. Default is `INBOX`.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.delete configKey="imapconnection"> - <folder>{json-eval($.folder)}</folder> - <emailID>{json-eval($.emailID)}</emailID> - </email.delete> - ``` - - **Sample request** - - ```json - { - "folder":"Inbox", - "emailID": "<296045440.2.15945432523040@localhost>" - } - ``` - - -??? note "markAsDeleted" - The markAsDeleted operation marks an email as deleted. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>emailID</td> - <td>Email ID of the email to mark as deleted.</td> - <td>Yes</td> - </tr> - <tr> - <td>folder</td> - <td>Name of the mailbox folder where the email is. Default is `INBOX`.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.markAsRead configKey="imapconnection"> - <folder>{json-eval($.folder)}</folder> - <emailID>{json-eval($.emailID)}</emailID> - </email.markAsRead> - ``` - - **Sample request** - - ```json - { - "folder":"Inbox", - "emailID": "<296045440.2.15945432523040@localhost>" - } - ``` - - -??? note "markAsRead" - The markAsRead marks an email as read. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>emailID</td> - <td>Email ID of the email to mark as read.</td> - <td>Yes</td> - </tr> - <tr> - <td>folder</td> - <td>Name of the mailbox folder where the email is. Default is `INBOX`.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.markAsRead configKey="imapconnection"> - <folder>{json-eval($.folder)}</folder> - <emailID>{json-eval($.emailID)}</emailID> - </email.markAsRead> - ``` - - **Sample request** - - ```json - { - "folder":"Inbox", - "emailID": "<296045440.2.15945432523040@localhost>" - } - ``` - - -??? note "expungeFolder" - The expungeFolder operation permanently deletes the emails marked for deletion. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>folder</td> - <td>Name of the mailbox folder where the email is. Default is `INBOX`.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <email.expungeFolder configKey="imapconnection"> - <folder>{json-eval($.folder)}</folder> - </email.expungeFolder> - ``` - - **Sample request** - - ```json - { - "folder":"Inbox" - } - ``` - -??? note "inlineImages" - The `inlineImages` parameter, is a JSONArray that enables the insertion of inline image details. This parameter can be used to specify the image properties, such as the image URL, size, and alignment, among others. By including inline images in the JSONArray, developers can create more visually appealing and engaging content within their application. Note that this feature is available from Email connector version 1.1.1 onwards. - - There are 2 methods you can follow to add images, as listed below. - - **1. Providing file path** - ``` - { - "from": "abc@wso2.com", - "to": "xyz@wso2.com", - "subject": "Sample email subject", - "content": "<H1>Image1</H1><img src=\"cid:image1\" alt=\"this is image of image1\"><br/><H1>Image2</H1><img src=\"cid:image2\" alt=\"this is image of image2\">", - "inlineImages": [ - { - "contentID": "image1", - "filePath": "/Users/user/Documents/images/image1.jpeg" - }, - { - "contentID": "image2", - "filePath": "/Users/user/Documents/images/image2.jpeg" - } - ], - "contentType": "text/html" - } - ``` - - **2. Base64Content** - ``` - { - "from": "abc@wso2com", - "to": "xyz@wso2.com", - "subject": "Sample email subject", - "content": "<H1>Image1</H1><img src=\"cid:image1\" alt=\"this is image of a image1\"><br/><H1>Image2</H1><img src=\"cid:image2\" alt=\"this is a image2\">", - "inlineImages": [ - { - "contentID": "image1", - "fileName": "image1.jpeg", - "base64Content": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAY......" 
- }, - { - "contentID": "image2", - "fileName": "image2.jpeg", - "base64Content": "/9j/4AAQSkZJRgABAQEBLAEsAAD/4QBbRXhp...." - } - ], - "contentType": "text/html" - } - ``` - - -### Sample configuration in a scenario - -The following is a sample proxy service that illustrates how to connect to the Email connector and use the send operation to send an email. You can use this sample as a template for using other operations in this category. - -**Sample Proxy** -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="SendEmail" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <email.send configKey="smtpsconnection"> - <from>{json-eval($.from)}</from> - <to>{json-eval($.to)}</to> - <subject>{json-eval($.subject)}</subject> - <content>{json-eval($.content)}</content> - </email.send> - <respond/> - </inSequence> - </target> - <description/> -</proxy> -``` - -**Note**: For more information on how this works in an actual scenario, see [Email Connector Example]({{base_path}}/reference/connectors/email-connector/email-connector-example/). diff --git a/en/docs/reference/connectors/email-connector/email-connector-example.md b/en/docs/reference/connectors/email-connector/email-connector-example.md deleted file mode 100644 index 9aa6a77af5..0000000000 --- a/en/docs/reference/connectors/email-connector/email-connector-example.md +++ /dev/null @@ -1,272 +0,0 @@ -# Email Connector Example - -Email Connector can be used to perform operations using protocols SMTP, IMAP and POP3. - -## What you'll build - -This example explains how to use Email Connector to send an email and retrieve the same email from Gmail. The user sends a payload with the recipients and content of the email. Then, by invoking another API resource, the content of the sent email will be retrieved. - -The example consists of an API named as EmailConnector API with two resources `send` and `retrieve`. - -* `/send `: The user sends the request payload which includes the recipients, content and attachments of the email. This request is sent to the integration runtime by invoking the EmailConnector API. It will send the email to the relevant recipients. - - <p><img src="{{base_path}}/assets/img/integrate/connectors/email-conn-14.png" title="Send function" width="800" alt="Send function" /></p> - -* `/retrieve `: The user sends the request payload, containing the filter to search the received email. This request is sent to the integration runtime where the EmailConnector API resides. Once the API is invoked, it returns the filtered emails. - - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-15.png" title="Retrieve function" width="800" alt="Retrieve function"/> - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -## Creating the Integration Logic - -1. Right click on the created Integration Project and select, -> **New** -> **Rest API** to create the REST API. - <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/> - -2. Provide the API name as Email Connector and the API context as `/emailconnector`. - -3. First we will create the `/send` resource. 
This API resource will retrieve information from the incoming HTTP post request such as recipients and content and construct the email and send to the mentioned recipients.<br/> - Right click on the API Resource and go to **Properties** view. We use a URL template called `/send` as we have two API resources inside single API. The method will be `Post`. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-1.png" title="Adding the API resource." width="800" alt="Adding the API resource."/> - -4. In this operation we are going to receive following inputs from the user. - - from - Sender of the email. - - to - Recipient of the email. - - subject - Subject of the email. - - content - Content to be included in the email. - - contentType - Content Type of the email - -5. Drag and drop the 'send' operation of the Email Connector to the Design View as shown below. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-2.png" title="Adding the send operation." width="800" alt="Adding the send operation."/> - -6. Create a connection from the properties window by clicking on the '+' icon as shown below. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-3.png" title="Adding the connection." width="800" alt="Adding the connection."/> - - In the pop up window, following parameters must be provided. <br/> - - - Connection Name - Unique name to identify the connection by. - - Connection Type - Type of the connection which specifies the protocol to be used. - - Host - Host name of the mail server. - - Port - The port number of the mail server. - - Username - Username used to connect with the mail server. - - Password - Password to connect with the mail server. - - Following values can be provided to connect to Gmail. <br/> - - - Connection Name - smtpsconnection - - Connection Type - SMTP Secured Connection - - Host - smtp.gmail.com - - Port - 465 - - Username - <your_email_address> - - Password - <your_email_password> - > **NOTE**: If you have enabled 2-factor authentication, an app password should be obtained as instructed [here](https://support.google.com/accounts/answer/185833?hl=en). - - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-4.png" title="Connection parameters." width="400" alt="Connection parameters."/> - -7. After the connection is successfully created, select the created connection as 'Connection' from the drop down in the properties window. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-5.png" title="Selecting the connection." width="800" alt="Selecting the connection."/> - -8. Next, provide the expressions as below to the following properties in the properties window to obtain respective values from the JSON request payload. - - to - json-eval($.to) - - from - json-eval($.from) - - subject - json-eval($.subject) - - content - json-eval($.content) - - contentType - json-eval($.contentType) - -9. Drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to respond the response from sending the email as shown below. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-6.png" title="Adding the respond mediator." width="800" alt="Adding the respond mediator."/> - -10. Create the next API resource, which is `/retrieve` by dragging and dropping another API resource to the design view. 
This API resource will retrieve filters from the incoming HTTP post request from which to filter the email messages such as the subject, retrieve the emails, retrieve email body and respond back. - This will be used to retrieve the email we just sent. This will also be a `POST` request. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-7.png" title="Adding new resource." width="800" alt="Adding new resource."/> - -11. Drag and drop the 'list' operation of the Email Connector to the Design View as shown below. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-8.png" title="Adding list operation." width="800" alt="Adding list operation."/> - -12. Next, we will create a IMAP connection to list emails similar to step 6. Following are the values to be provided when creating the connection. - - - Connection Name - imapsconnection - - Connection Type - IMAP Secured Connection - - Host - imap.gmail.com - - Port - 993 - - Username - <your_email_address> - - Password - <your_email_password> - -13. In this operation we are going to receive following inputs from the user. - - subjectRegex - Subject Regex to filter the email from. <br/> - - Therefore, provide the expressions as below to the following properties in the properties window to obtain respective values from the JSON request payload.<br/> - - - Subject Regex: json-eval($.subjectRegex) - -14. We will next iterate the response received and obtain the email content of each email using the `getEmailBody` operation. In order to do this, drag and drop the [Foreach Mediator]({{base_path}}/reference/mediators/foreach-mediator/) as shown below and enter `//emails/email` as the Foreach Expression in the properties window. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-9.png" title="Adding foreach mediator." width="800" alt="Adding foreach mediator."/> - -15. Inside the [Foreach Mediator]({{base_path}}/reference/mediators/foreach-mediator/), drag and drop the `getEmailBody` operation as shown below and provide the `//email/index/text()` expression as the Email Index. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-10.png" title="Adding getEmailBody operation." width="800" alt="Adding getEmailBody operation."/> - - > **NOTE**: Further, you can use `getAttachment` operation to retrieve attachment content if there are any. Refer [Reference Documentation](email-connector-config/) to learn more. - -16. Next, we will use a [Payload Factory Mediator]({{base_path}}/reference/mediators/payloadfactory-mediator/), to add the email content to the same response we received from `list` operation and configure the Payload mediator as shown below. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-11.png" title="Adding payload factory mediator." width="800" alt="Adding payload facotry mediator."/> - - Enter following as the payload: - ```xml - <email> - <emailID>$1</emailID> - <to>$2</to> - <from>$3</from> - <subject>$4</subject> - <textContent>$5</textContent> - </email> - ``` - - Here, you may observe that we are obtaining `TEXT_CONTENT` property which is being set when getEmailBody is invoked to retrieve the email content. You can find the list of similar properties set in this operation [here]({{base_path}}/reference/connectors/email-connector/email-connector-config/). - -17. Drag and drop a [Property Mediator]({{base_path}}/reference/mediators/property-mediator/) and set the Property name as 'messageType' and the value as application/json. 
This is added so that the response will be in json. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-12.png" title="Adding property mediator." width="800" alt="Adding property mediator."/> - -18. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) after the 'foreach' mediator to respond the response of retrieved emails. - <img src="{{base_path}}/assets/img/integrate/connectors/email-conn-13.png" title="Adding property mediator." width="800" alt="Adding property mediator."/> - -19. You can find the complete API XML configuration below. You can go to the source view and copy paste the following config. -```xml -<?xml version="1.0" encoding="UTF-8"?> -<api context="/emailconnector" name="EmailConnector" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" uri-template="/send"> - <inSequence> - <email.send configKey="smtpsconnection"> - <from>{json-eval($.from)}</from> - <to>{json-eval($.to)}</to> - <subject>{json-eval($.subject)}</subject> - <content>{json-eval($.content)}</content> - <contentType>{json-eval($.contentType)}</contentType> - </email.send> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/retrieve"> - <inSequence> - <email.list configKey="imapsconnection"> - <subjectRegex>{json-eval($.subjectRegex)}</subjectRegex> - </email.list> - <foreach expression="//emails/email"> - <sequence> - <email.getEmailBody> - <emailIndex>{//email/index/text()}</emailIndex> - </email.getEmailBody> - <payloadFactory media-type="xml"> - <format> - <email xmlns=""> - <emailID>$1</emailID> - <to>$2</to> - <from>$3</from> - <subject>$4</subject> - <textContent>$5</textContent> - </email> - </format> - <args> - <arg evaluator="xml" expression="//email/emailID"/> - <arg evaluator="xml" expression="//email/to"/> - <arg evaluator="xml" expression="//email/from"/> - <arg evaluator="xml" expression="//email/subject"/> - <arg evaluator="xml" expression="$ctx:TEXT_CONTENT"/> - </args> - </payloadFactory> - </sequence> - </foreach> - <property name="messageType" scope="axis2" type="STRING" value="application/json"/> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> -</api> -``` - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/emailconnector.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -### Email Send Operation - -1. Create a file called data.json with the following payload. We will send the email to ourself so that we can retrieve it later. - ``` - { - "from": "<your-email>@gmail.com", - "to": "<your-email>@gmail.com", - "subject": "Sample email", - "content": "This is the body of the sample email", - "contentType":"text/plain" - } - ``` -2. Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - ``` - curl -H "Content-Type: application/json" --request POST --data @body.json http://localhost:8290/emailconnector/send - ``` -**Expected Response**: -You should get a 'Success' response as below and you will receive the email. 
- ``` - { - "result": { - "success": true - } - } - ``` - -### Email List Operation - -1. Create a file called data.json with the following payload. - ``` - { - "subjectRegex":"Sample email" - } - ``` -2. Invoke the API as shown below using the curl command. - ``` - curl -H "Content-Type: application/json" --request POST --data @body.json http://localhost:8290/emailconnector/retrieve - ``` -**Expected Response**: -You should get a response like below. - ``` - { - "emails": { - "email": [ - { - "index": 0, - "emailID": "<1623446944.0.152334336343@localhost>", - "to": "<your-email>@gmail.com", - "from": "<your-email>@gmail.com", - "subject": "Sample email", - "textContent": "This is the body of the sample email" - } - ] - } - } - ``` - -## What's Next - -* To customize this example for your own scenario, see [Email Connector Configuration]({{base_path}}/reference/connectors/email-connector/email-connector-config/) documentation for all operation details of the connector. diff --git a/en/docs/reference/connectors/email-connector/email-connector-overview.md b/en/docs/reference/connectors/email-connector/email-connector-overview.md deleted file mode 100644 index 65406cfcd3..0000000000 --- a/en/docs/reference/connectors/email-connector/email-connector-overview.md +++ /dev/null @@ -1,31 +0,0 @@ -# Email Connector Overview - -The Email Connector allows you to list, send emails and perform other actions such as mark email as read, mark email as deleted, delete email and expunge folder on different mailboxes using protocols IMAP, POP3 and SMTP. - -To see the available Email connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Email". - -<img src="{{base_path}}/assets/img/integrate/connectors/email-connector-store.png" title="Email Connector Store" width="200" alt="Email Connector Store"/> - -## Compatibility - -| Connector version | Supported product versions | -| ------------- |------------- | -| [1.0.0](https://github.com/wso2-extensions/esb-connector-email/tree/org.wso2.carbon.connector.emailconnector-1.0.0) | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0, EI 6.4.0 | - -For older versions, see the details in the connector store. - -## Email Connector documentation - -* **[Email Connector Example]({{base_path}}/reference/connectors/email-connector/email-connector-example/)**: This example explains how to use Email Connector to send an email and retrieve the same. - -* **[Email Connector Reference]({{base_path}}/reference/connectors/email-connector/email-connector-config/)**: This documentation provides a reference guide for the Email Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, please create a pull request in the following repository. - -* [Email Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-email) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. 
diff --git a/en/docs/reference/connectors/fhir-connector/fhir-connector-config.md b/en/docs/reference/connectors/fhir-connector/fhir-connector-config.md
deleted file mode 100644
index 736768d558..0000000000
--- a/en/docs/reference/connectors/fhir-connector/fhir-connector-config.md
+++ /dev/null
@@ -1,886 +0,0 @@
-# FHIR Connector Reference
-
-The FHIR Connector allows you to work with resources in [FHIR](http://www.hl7.org/fhir/index.html), which are the modular components of FHIR. The connector uses the [FHIR RESTful API](http://www.hl7.org/fhir/http.html) to interact with FHIR.
-
-## Initializing the connector
-
-Before you start performing various operations with the connector, make sure to import the FHIR certificate to your ESB client keystore.
-
-To use the FHIR connector, add the <fhir.init> element in your configuration before carrying out any other FHIR operations.
-
-For more information on the authentication/security of the FHIR REST API, see http://www.hl7.org/implement/standards/fhir/security.html.
-
-??? note "fhir.init"
-    The fhir.init operation initializes the connector to interact with the FHIR REST API.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The service root URL.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.init>
-        <base>{$ctx:base}</base>
-    </fhir.init>
-    ```
-
----
-
-### Conformance
-
-??? note "getConformance"
-    The conformance interaction retrieves the server's conformance statement, which defines how it supports resources. To do this, use fhir.getConformance and specify the following properties. For more information, see the [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#conformance).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td>
-            <td>These are the optional parameters and are common search parameters for all resources.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.getConformance>
-        <base>{$ctx:base}</base>
-        <format>{$ctx:format}</format>
-        <id>{$ctx:id}</id>
-        <content>{$ctx:content}</content>
-        <lastUpdated>{$ctx:lastUpdated}</lastUpdated>
-        <profile>{$ctx:profile}</profile>
-        <query>{$ctx:query}</query>
-        <security>{$ctx:security}</security>
-        <tag>{$ctx:tag}</tag>
-        <text>{$ctx:text}</text>
-        <filter>{$ctx:filter}</filter>
-    </fhir.getConformance>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "format": "json",
-        "id":"%s(id)",
-        "content":"%s(content)",
-        "lastUpdated":"%s(lastUpdated)",
-        "profile":"%s(profile)",
-        "query":"%s(query)",
-        "security":"%s(security)",
-        "tag":"%s(tag)",
-        "text":"%s(text)",
-        "filter":"%s(filter)"
-    }
-    ```
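-
-    **Sample usage**
-
-    The following is a minimal sketch of a proxy service that fetches the conformance statement. The proxy name is illustrative, and the wiring follows the same pattern as the other sample proxies in this documentation; the client is expected to supply `base` and `format` in the message context:
-
-    ```xml
-    <proxy xmlns="http://ws.apache.org/ns/synapse"
-           name="FHIRGetConformance"
-           transports="https,http"
-           startOnLoad="true">
-        <target>
-            <inSequence>
-                <!-- base and format are read from the message context -->
-                <fhir.getConformance>
-                    <base>{$ctx:base}</base>
-                    <format>{$ctx:format}</format>
-                </fhir.getConformance>
-                <respond/>
-            </inSequence>
-        </target>
-        <description/>
-    </proxy>
-    ```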
-### Resources
-
-??? note "create"
-    To create a new resource in a server-assigned location, use fhir.create and specify the following properties. For more information, see the related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#create).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>type</td>
-            <td>The name of a resource type (e.g., "Patient").</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.create>
-        <base>{$ctx:base}</base>
-        <type>{$ctx:type}</type>
-        <format>{$ctx:format}</format>
-    </fhir.create>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "type": "Patient",
-        "format": "json"
-    }
-    ```
-
-??? note "update"
-    To create a new current version for an existing resource, or to create an initial version if no resource exists for the given id, use fhir.update and specify the following properties. For more information, see the related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#update).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>type</td>
-            <td>The name of a resource type (e.g., "Patient").</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>idToUpdate</td>
-            <td>The id element of the resource to update. If no id element is provided, or the value is wrong, the server SHALL respond with an HTTP 400 error code and SHOULD provide an operation outcome identifying the issue.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.update>
-        <base>{$ctx:base}</base>
-        <type>{$ctx:type}</type>
-        <idToUpdate>{$ctx:idToUpdate}</idToUpdate>
-        <format>{$ctx:format}</format>
-    </fhir.update>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "type": "Patient",
-        "idToUpdate":"1032702",
-        "format": "json"
-    }
-    ```
-
-??? note "conditionalUpdate"
-    The conditional update interaction allows a client to update an existing resource based on some identification criteria, rather than by logical id. To do this, use fhir.conditionalUpdate and specify the following properties. For more information, see the related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#update). 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td> - <td>These are the optional parameters and are common search parameters for all resources.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.conditionalUpdate> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - <id>{$ctx:id}</id> - <content>{$ctx:content}</content> - <lastUpdated>{$ctx:lastUpdated}</lastUpdated> - <profile>{$ctx:profile}</profile> - <query>{$ctx:query}</query> - <security>{$ctx:security}</security> - <tag>{$ctx:tag}</tag> - <text>{$ctx:text}</text> - <filter>{$ctx:filter}</filter> - </fhir.conditionalUpdate> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "format": "json", - "id":"%s(id)", - "content":"%s(content)", - "lastUpdated":"%s(lastUpdated)", - "profile":"%s(profile)", - "query":"%s(query)", - "security":"%s(security)", - "tag":"%s(tag)", - "text":"%s(text)", - "filter":"%s(filter)" - } - ``` - -??? note "delete" - To removes an existing resource, use fhir.delete and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#delete). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>idToUpdate</td> - <td>The element of a particular resource. If no id element is provided, or the value is wrong, the server SHALL respond with a HTTP 400 error code, and SHOULD provide an operation outcome identifying the issue.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.delete> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <idToDelete>{$ctx:idToDelete}</idToDelete> - </fhir.delete> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "idToDelete":"1032782", - "format": "json" - } - ``` - -??? note "conditionalDelete" - The conditional delete interaction allows a client to delete an existing resource based on some selection criteria, rather than by a specific logical id, For this use fhir.conditionalDelete and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#delete). 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td> - <td>These are the optional parameters and are common search parameters for all resources.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.conditionalDelete> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - <id>{$ctx:id}</id> - <content>{$ctx:content}</content> - <lastUpdated>{$ctx:lastUpdated}</lastUpdated> - <profile>{$ctx:profile}</profile> - <query>{$ctx:query}</query> - <security>{$ctx:security}</security> - <tag>{$ctx:tag}</tag> - <text>{$ctx:text}</text> - <filter>{$ctx:filter}</filter> - </fhir.conditionalDelete> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "format": "json", - "id":"%s(id)", - "content":"%s(content)", - "lastUpdated":"%s(lastUpdated)", - "profile":"%s(profile)", - "query":"%s(query)", - "security":"%s(security)", - "tag":"%s(tag)", - "text":"%s(text)", - "filter":"%s(filter)" - } - ``` - -??? note "readResource" - To accesses the current contents of a resource, use fhir.readResource and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#read). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.readResource> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - </fhir.readResource> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "format": "json" - } - ``` - -??? note "readSpecificResourceById" - To accesses the current contents of a resource, use fhir.readSpecificResourceById and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#read). 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>id</td> - <td>The possible values for the logical Id.</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>summary</td> - <td>The search parameter _summary can be used when reading a resource. It can have the values true, false, text & data.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.readSpecificResourceById> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <id>{$ctx:id}</id> - <format>{$ctx:format}</format> - <summary>{$ctx:summary}</summary> - </fhir.readSpecificResourceById> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "id":"1032702", - "format": "json", - "summary": "true" - } - ``` - -??? note "vReadResource" - To preforms a version specific read of the resource, use fhir.vReadResource and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#vread). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>logicalId</td> - <td>The possible values for the logical Id.</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>versionId</td> - <td>The Version Id ("vid") is an opaque identifier that conforms to the same format requirements as a Logical Id.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.vReadResource> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <logicalId>{$ctx:logicalId}</logicalId> - <versionId>{$ctx:versionId}</versionId> - <format>{$ctx:format}</format> - </fhir.vReadResource> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "logicalId":"1032702", - "versionId":"107748", - "format": "json" - } - ``` - -### History - -??? note "history" - To retrieves the history of a particular resource supported by the system , use fhir.history and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#history). 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>type</td>
-            <td>The name of a resource type (e.g., "Patient").</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>idForHistory</td>
-            <td>The id of the resource whose history needs to be retrieved.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td>
-            <td>These are the optional parameters and are common search parameters for all resources.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.history>
-        <base>{$ctx:base}</base>
-        <type>{$ctx:type}</type>
-        <idForHistory>{$ctx:idForHistory}</idForHistory>
-        <format>{$ctx:format}</format>
-        <id>{$ctx:id}</id>
-        <content>{$ctx:content}</content>
-        <lastUpdated>{$ctx:lastUpdated}</lastUpdated>
-        <profile>{$ctx:profile}</profile>
-        <query>{$ctx:query}</query>
-        <security>{$ctx:security}</security>
-        <tag>{$ctx:tag}</tag>
-        <text>{$ctx:text}</text>
-        <filter>{$ctx:filter}</filter>
-    </fhir.history>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "type": "Patient",
-        "idForHistory":"1032702",
-        "format": "json",
-        "id":"%s(id)",
-        "content":"%s(content)",
-        "lastUpdated":"%s(lastUpdated)",
-        "profile":"%s(profile)",
-        "query":"%s(query)",
-        "security":"%s(security)",
-        "tag":"%s(tag)",
-        "text":"%s(text)",
-        "filter":"%s(filter)"
-    }
-    ```
-
-??? note "historyAll"
-    To retrieve the history of all resources supported by the system, use fhir.historyAll and specify the following properties. For more information, see the related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#history).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>base</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td>
-            <td>These are the optional parameters and are common search parameters for all resources.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <fhir.historyAll>
-        <base>{$ctx:base}</base>
-        <format>{$ctx:format}</format>
-        <id>{$ctx:id}</id>
-        <content>{$ctx:content}</content>
-        <lastUpdated>{$ctx:lastUpdated}</lastUpdated>
-        <profile>{$ctx:profile}</profile>
-        <query>{$ctx:query}</query>
-        <security>{$ctx:security}</security>
-        <tag>{$ctx:tag}</tag>
-        <text>{$ctx:text}</text>
-        <filter>{$ctx:filter}</filter>
-    </fhir.historyAll>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "format": "json",
-        "id":"%s(id)",
-        "content":"%s(content)",
-        "lastUpdated":"%s(lastUpdated)",
-        "profile":"%s(profile)",
-        "query":"%s(query)",
-        "security":"%s(security)",
-        "tag":"%s(tag)",
-        "text":"%s(text)",
-        "filter":"%s(filter)"
-    }
-    ```
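-
-    For example, to narrow the system-wide history down to recent changes only, you can replace the placeholders above with a concrete value for the optional `lastUpdated` search parameter. The value below is illustrative and uses the standard FHIR date prefix `gt` ("greater than"):
-
-    ```json
-    {
-        "base": "https://open-api.fhir.me",
-        "format": "json",
-        "lastUpdated": "gt2015-01-01"
-    }
-    ```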
note "historyType" - To retrieves the history of all resources of a given type supported by the system , use fhir.historyType and specify the following properties. For more information, see related [FHIR API documentation](http://www.hl7.org/implement/standards/fhir/http.html#history). - <table> - <tr> - <th>Parameter Name</td> - <th>Description</td> - <th>Required</td> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td> - <td>These are the optional parameters and are common search parameters for all resources.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.historyType> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - <id>{$ctx:id}</id> - <content>{$ctx:content}</content> - <lastUpdated>{$ctx:lastUpdated}</lastUpdated> - <profile>{$ctx:profile}</profile> - <query>{$ctx:query}</query> - <security>{$ctx:security}</security> - <tag>{$ctx:tag}</tag> - <text>{$ctx:text}</text> - <filter>{$ctx:filter}</filter> - </fhir.historyType> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type":"Patient" - "format": "json", - "id":"%s(id)", - "content":"%s(content)", - "lastUpdated":"%s(lastUpdated)", - "profile":"%s(profile)", - "query":"%s(query)", - "security":"%s(security)", - "tag":"%s(tag)", - "text":"%s(text)", - "filter":"%s(filter)" - } - ``` - -### Search - -??? note "search" - To search from a particular resource supported by the system , use fhir.search and specify the following properties. For more information, see related [FHIR API documentation on search operation](http://www.hl7.org/implement/standards/fhir/http.html#search) and [filter operation](http://www.hl7.org/implement/standards/fhir/search.html#filter). 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>base</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>type</td> - <td>The name of a resource type (e.g., "Patient").</td> - <td>Yes.</td> - </tr> - <tr> - <td>format</td> - <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td> - <td>Yes.</td> - </tr> - <tr> - <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td> - <td>These are the optional parameters and are common search parameters for all resources.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <fhir.search> - <base>{$ctx:base}</base> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - <id>{$ctx:id}</id> - <content>{$ctx:content}</content> - <lastUpdated>{$ctx:lastUpdated}</lastUpdated> - <profile>{$ctx:profile}</profile> - <query>{$ctx:query}</query> - <security>{$ctx:security}</security> - <tag>{$ctx:tag}</tag> - <text>{$ctx:text}</text> - <filter>{$ctx:filter}</filter> - </fhir.search> - ``` - - **Sample request** - - ```json - { - "base": "https://open-api.fhir.me", - "type": "Patient", - "format": "json", - "id":"%s(id)", - "content":"%s(content)", - "lastUpdated":"%s(lastUpdated)", - "profile":"%s(profile)", - "query":"%s(query)", - "security":"%s(security)", - "tag":"%s(tag)", - "text":"%s(text)", - "filter":"%s(filter)" - } - ``` - -??? note "searchPost" - To search from a particular resource supported by the system , use fhir.searchPost and specify the following properties. For more information, see related [FHIR API documentation on the search operation](http://www.hl7.org/implement/standards/fhir/http.html#search). 
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>base</td>
            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#root">service root URL</a>.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>type</td>
            <td>The name of a resource type (e.g., "Patient").</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>format</td>
            <td>The <a href="http://www.hl7.org/implement/standards/fhir/http.html#mime-type">Mime Type</a>.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>id, content, lastUpdated, profile, query, security, tag, text, filter</td>
            <td>These are optional, common search parameters that apply to all resources.</td>
            <td>Optional.</td>
        </tr>
    </table>

    **Sample configurations**

    ```xml
    <fhir.searchPost>
        <base>{$ctx:base}</base>
        <type>{$ctx:type}</type>
        <format>{$ctx:format}</format>
        <id>{$ctx:id}</id>
        <content>{$ctx:content}</content>
        <lastUpdated>{$ctx:lastUpdated}</lastUpdated>
        <profile>{$ctx:profile}</profile>
        <query>{$ctx:query}</query>
        <security>{$ctx:security}</security>
        <tag>{$ctx:tag}</tag>
        <text>{$ctx:text}</text>
        <filter>{$ctx:filter}</filter>
    </fhir.searchPost>
    ```

    **Sample request**

    ```json
    {
        "base": "https://open-api.fhir.me",
        "type": "Patient",
        "format": "json",
        "id":"%s(id)",
        "content":"%s(content)",
        "lastUpdated":"%s(lastUpdated)",
        "profile":"%s(profile)",
        "query":"%s(query)",
        "security":"%s(security)",
        "tag":"%s(tag)",
        "text":"%s(text)",
        "filter":"%s(filter)"
    }
    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/fhir-connector/fhir-connector-example.md b/en/docs/reference/connectors/fhir-connector/fhir-connector-example.md
deleted file mode 100644
index 46ac878bf6..0000000000
--- a/en/docs/reference/connectors/fhir-connector/fhir-connector-example.md
+++ /dev/null
@@ -1,425 +0,0 @@
# FHIR Connector Example

In this example, the connector uses the FHIR REST API to interact with a FHIR server.

## What you'll build

Given below is a sample API that illustrates how you can connect to a FHIR server and invoke its operations. It exposes FHIR functionality as a RESTful service, which users can invoke over HTTP/HTTPS with the required information.

1. `/create`: creates a new patient on the FHIR server.
2. `/read`: retrieves patient information from the FHIR server.
3. `/readSpecificResourceById`: retrieves a specific patient from the FHIR server by ID.
4. `/update`: updates patient information on the FHIR server.
5. `/delete`: removes previously added patient information from the FHIR server.

For more information about these operations, see the [FHIR Connector Reference]({{base_path}}/reference/connectors/fhir-connector/fhir-connector-config/).

> **Note**: If no ID element is provided, or the value is wrong, the server responds with an HTTP 400 error code and provides an operation outcome identifying the issue.

Before you start configuring the FHIR connector, you also need to download the relevant WSO2 integration runtime. We refer to its location as `<PRODUCT_HOME>`.

The specific message builder/formatter configuration shown below needs to be enabled in the product before starting the integration service.

If you are using **EI 7** or **APIM 4.0.0**, you need to enable this property by adding the following to the **<PRODUCT_HOME>/conf/deployment.toml** file.
You can further refer to the [Working with Message Builders and Formatters]({{base_path}}/install-and-setup/message_builders_formatters/message-builders-and-formatters/) and [Product Configurations]({{base_path}}/reference/config-catalog/#http-transport) documentation.

```toml
[[custom_message_builders]]
content_type = "application/fhir+json"
class = "org.wso2.micro.integrator.core.json.JsonStreamBuilder"

[[custom_message_formatters]]
content_type = "application/fhir+json"
class = "org.wso2.micro.integrator.core.json.JsonStreamFormatter"
```

If you are using **EI 6**, you can enable this property with the following Axis2 configurations in the **<PRODUCT_HOME>\repository\conf\axis2\axis2.xml** file.

**messageFormatters**

```xml
<messageFormatter contentType="application/fhir+json"
class="org.wso2.carbon.integrator.core.json.JsonStreamFormatter"/>
```

**messageBuilders**

```xml
<messageBuilder contentType="application/fhir+json"
class="org.wso2.carbon.integrator.core.json.JsonStreamBuilder"/>
```

The following diagram illustrates all the required functionality of the FHIR API service that you are going to build.

Here, FHIR clients can invoke the API over HTTP/HTTPS with the required information. The FHIR connector converts each request to the Health Level Seven International (HL7) FHIR standard and sends it to the matching resources available in the FHIR server.

This server is regularly loaded with a standard set of test data. It can store data related to administrative concepts such as patients, providers, organizations, and devices, as well as a variety of clinical concepts including problems, medications, diagnostics, care plans, and financial issues, among others.

<img src="{{base_path}}/assets/img/integrate/connectors/fhirconnector.png" title="FHIR Connector" width="800" alt="FHIR Connector"/>

## Configure the connector in WSO2 Integration Studio

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

1. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API.

2. Specify the API name as `SampleApi` and the API context as `/resources`. You can go to the source view of the XML configuration file of the API and copy the following configuration.
- - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <api context="/resources" name="SampleApi" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" url-mapping="/create"> - <inSequence> - <property expression="json-eval($.base)" name="base" scope="default" type="STRING"/> - <property expression="json-eval($.resourceType)" name="type" scope="default" type="STRING"/> - <property expression="json-eval($.format)" name="format" scope="default" type="STRING"/> - <log level="custom"> - <property expression="get-property('transport','Content-Type')" name="base"/> - </log> - <fhir.init> - <base>http://hapi.fhir.org/baseR4</base> - </fhir.init> - <switch source="get-property('transport','Content-Type')"> - <case regex="application/json"> - <property name="format" scope="default" type="STRING" value="json"/> - <fhir.create> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - </fhir.create> - </case> - <case regex="application/xml"> - <property name="format" scope="default" type="STRING" value="xml"/> - <fhir.create> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - </fhir.create> - </case> - <default/> - </switch> - <log level="full" separator=","/> - <send/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" url-mapping="/read"> - <inSequence> - <property expression="json-eval($.base)" name="base" scope="default" type="STRING"/> - <property expression="json-eval($.resourceType)" name="type" scope="default" type="STRING"/> - <property expression="json-eval($.format)" name="format" scope="default" type="STRING"/> - <fhir.init> - <base>http://hapi.fhir.org/baseR4</base> - </fhir.init> - <switch source="get-property('transport','Content-Type')"> - <case regex="application/json"> - <property name="format" scope="default" type="STRING" value="json"/> - <fhir.readResource> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - </fhir.readResource> - </case> - <case regex="application/xml"> - <property name="format" scope="default" type="STRING" value="xml"/> - <fhir.readResource> - <type>{$ctx:type}</type> - <format>{$ctx:format}</format> - </fhir.readResource> - </case> - <default/> - </switch> - <log level="full" separator=","/> - <send/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" url-mapping="/readSpecificResourceById"> - <inSequence> - <property expression="json-eval($.base)" name="base" scope="default" type="STRING"/> - <property expression="json-eval($.resourceType)" name="type" scope="default" type="STRING"/> - <property expression="json-eval($.format)" name="format" scope="default" type="STRING"/> - <property expression="json-eval($.id)" name="id" scope="default" type="STRING"/> - <property expression="json-eval($.summary)" name="summary" scope="default" type="STRING"/> - <fhir.init> - <base>http://hapi.fhir.org/baseR4</base> - </fhir.init> - <switch source="get-property('transport','Content-Type')"> - <case regex="application/json"> - <property name="format" scope="default" type="STRING" value="json"/> - <fhir.readSpecificResourceById> - <type>{$ctx:type}</type> - <id>{$ctx:id}</id> - <format>{$ctx:format}</format> - <summary>{$ctx:summary}</summary> - </fhir.readSpecificResourceById> - </case> - <case regex="application/xml"> - <property name="format" scope="default" type="STRING" value="xml"/> - <fhir.readSpecificResourceById> - <type>{$ctx:type}</type> - <id>{$ctx:id}</id> - <format>{$ctx:format}</format> - <summary>{$ctx:summary}</summary> - 
                        </fhir.readSpecificResourceById>
                    </case>
                    <default/>
                </switch>
                <log level="full" separator=","/>
                <send/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/update">
            <inSequence>
                <property expression="json-eval($.base)" name="base" scope="default" type="STRING"/>
                <property expression="json-eval($.resourceType)" name="type" scope="default" type="STRING"/>
                <property expression="json-eval($.format)" name="format" scope="default" type="STRING"/>
                <property expression="json-eval($.idToUpdate)" name="idToUpdate" scope="default" type="STRING"/>
                <fhir.init>
                    <base>http://hapi.fhir.org/baseR4</base>
                </fhir.init>
                <switch source="get-property('transport','Content-Type')">
                    <case regex="application/json">
                        <property name="format" scope="default" type="STRING" value="json"/>
                        <fhir.update>
                            <type>{$ctx:type}</type>
                            <idToUpdate>{$ctx:idToUpdate}</idToUpdate>
                            <format>{$ctx:format}</format>
                        </fhir.update>
                    </case>
                    <case regex="application/xml">
                        <property name="format" scope="default" type="STRING" value="xml"/>
                        <fhir.update>
                            <type>{$ctx:type}</type>
                            <idToUpdate>{$ctx:idToUpdate}</idToUpdate>
                            <format>{$ctx:format}</format>
                        </fhir.update>
                    </case>
                    <default/>
                </switch>
                <log level="full" separator=","/>
                <send/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/delete">
            <inSequence>
                <property expression="json-eval($.base)" name="base" scope="default" type="STRING"/>
                <property expression="json-eval($.resourceType)" name="type" scope="default" type="STRING"/>
                <property expression="json-eval($.format)" name="format" scope="default" type="STRING"/>
                <property expression="json-eval($.idToDelete)" name="idToDelete" scope="default" type="STRING"/>
                <fhir.init>
                    <base>http://hapi.fhir.org/baseR4</base>
                </fhir.init>
                <switch source="get-property('transport','Content-Type')">
                    <case regex="application/json">
                        <property name="format" scope="default" type="STRING" value="json"/>
                        <fhir.delete>
                            <type>{$ctx:type}</type>
                            <idToDelete>{$ctx:idToDelete}</idToDelete>
                        </fhir.delete>
                    </case>
                    <case regex="application/xml">
                        <property name="format" scope="default" type="STRING" value="xml"/>
                        <fhir.delete>
                            <type>{$ctx:type}</type>
                            <idToDelete>{$ctx:idToDelete}</idToDelete>
                        </fhir.delete>
                    </case>
                    <default/>
                </switch>
                <log level="full" separator=","/>
                <send/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
    </api>
    ```

To learn about the supported operations and their parameters, see the [FHIR Connector Reference]({{base_path}}/reference/connectors/fhir-connector/fhir-connector-config/).

3. Now we can export the imported connector and the API into a single CAR application. The CAR application is what we deploy to the server runtime.

{!includes/reference/connectors/exporting-artifacts.md!}

## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/fhir-connector.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

Invoke the API as shown below using the curl command. You can download curl [here](https://curl.haxx.se/download.html).
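All of the samples below follow the same invocation pattern: an HTTP POST with a JSON payload to the deployed API context. Assuming the default Micro Integrator HTTP listener port (8290, as used in the samples), the general template looks like this, where `<operation>` is a placeholder for one of the resources listed above:

```
curl -v -X POST -d '<JSON payload>' "http://localhost:8290/resources/<operation>" -H "Content-Type: application/json"
```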
### Add patient information

**Sample Request**

```
curl -v -X POST -d '{
    "resourceType": "Patient",
    "name": [{"family": "Jhone","given": ["Winney","Rodrigo"]}]
}' "http://localhost:8290/resources/create" -H "Content-Type: application/json"
```

**Expected Response**:

```xml
<jsonObject>
    <resourceType>Patient</resourceType>
    <id>698021</id>
    <meta>
        <versionId>1</versionId>
        <lastUpdated>2020-03-24T07:57:14.506+00:00</lastUpdated>
    </meta>
    <text>
        <status>generated</status>
        <div><div xmlns="http://www.w3.org/1999/xhtml"><div class="hapiHeaderText">Winney Rodrigo <b>JHONE </b></div><table class="hapiPropertyTable"><tbody/></table></div></div>
    </text>
    <name>
        <family>Jhone</family>
        <given>Winney</given>
        <given>Rodrigo</given>
    </name>
</jsonObject>
```

### Read patient information

**Sample Request**

```
curl -v -X POST -d '{
    "resourceType": "Patient"
}' "http://localhost:8290/resources/read" -H "Content-Type: application/json"
```

**Expected Response**:

This retrieves all the existing resources of the given type from the FHIR server.

### Read specific patient information

**Sample Request**

```
curl -v -X POST -d '{
    "resourceType":"Patient",
    "id":"698021"
}' "http://localhost:8290/resources/readSpecificResourceById" -H "Content-Type: application/json"
```

**Expected Response**:

```xml
<jsonObject>
    <resourceType>Patient</resourceType>
    <id>698021</id>
    <meta>
        <versionId>1</versionId>
        <lastUpdated>2020-03-24T07:57:14.506+00:00</lastUpdated>
    </meta>
    <text>
        <status>generated</status>
        <div><div xmlns="http://www.w3.org/1999/xhtml"><div class="hapiHeaderText">Winney Rodrigo <b>JHONE </b></div><table class="hapiPropertyTable"><tbody/></table></div></div>
    </text>
    <name>
        <family>Jhone</family>
        <given>Winney</given>
        <given>Rodrigo</given>
    </name>
</jsonObject>
```

### Update patient information

**Sample Request**

```
curl -v -X POST -d '{
    "resourceType":"Patient",
    "idToUpdate":"597079",
    "name":[
        {
            "family":"Marry",
            "given":[
                "Samsong",
                "Perera"
            ]
        }
    ]
}' "http://localhost:8290/resources/update" -H "Content-Type: application/json"
```

**Expected Response**:

```xml
<jsonObject>
    <resourceType>Patient</resourceType>
    <id>597079</id>
    <meta>
        <versionId>1</versionId>
        <lastUpdated>2020-03-24T07:57:14.506+00:00</lastUpdated>
    </meta>
    <text>
        <status>generated</status>
        <div><div xmlns="http://www.w3.org/1999/xhtml"><div class="hapiHeaderText">Winney Rodrigo <b>JHONE </b></div><table class="hapiPropertyTable"><tbody/></table></div></div>
    </text>
    <name>
        <family>Marry</family>
        <given>Samsong</given>
        <given>Perera</given>
    </name>
</jsonObject>
```

### Delete patient information

**Sample Request**

```
curl -v -X POST -d '{
    "resourceType":"Patient",
    "idToDelete":"597079"
}' "http://localhost:8290/resources/delete" -H "Content-Type: application/json"
```

**Expected Response**:

```xml
<jsonObject>
    <resourceType>OperationOutcome</resourceType>
    <text>
        <status>generated</status>
        <div><div xmlns="http://www.w3.org/1999/xhtml"><h1>Operation Outcome</h1><table border="0"><tr><td style="font-weight: bold;">INFORMATION</td><td>[]</td><td><pre>Successfully deleted 1 resource(s) in 46ms</pre></td></tr></table></div></div>
    </text>
    <issue>
        <severity>information</severity>
        <code>informational</code>
        <diagnostics>Successfully deleted 1 resource(s) in 46ms</diagnostics>
    </issue>
</jsonObject>
```

This demonstrates how the WSO2 FHIR connector works.
diff --git a/en/docs/reference/connectors/fhir-connector/fhir-connector-overview.md b/en/docs/reference/connectors/fhir-connector/fhir-connector-overview.md
deleted file mode 100644
index f7fa0c5af6..0000000000
--- a/en/docs/reference/connectors/fhir-connector/fhir-connector-overview.md
+++ /dev/null
@@ -1,33 +0,0 @@
# FHIR Connector Overview

Fast Healthcare Interoperability Resources (FHIR) is an interoperability standard for the electronic exchange of healthcare information. The FHIR connector can be used to invoke FHIR operations within the mediation logic.

This connector uses the [HAPI FHIR APIs](https://hapifhir.io) to connect with a test server: an open-source, Java-based implementation of the FHIR specification licensed under the Apache Software License 2.0.

To see the FHIR Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "fhir".

<img src="{{base_path}}/assets/img/integrate/connectors/fhir-store.png" title="FHIR Connector Store" width="200" alt="FHIR Connector Store"/>

## Compatibility

| Connector Version | Supported product versions |
| ------------- |-------------|
| 1.0.2 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

For older versions, see the details in the connector store.

## FHIR Connector documentation

* **[FHIR Connector Example]({{base_path}}/reference/connectors/fhir-connector/fhir-connector-example/)**: In this example, you will learn how to connect to a FHIR server and invoke operations. This is illustrated using a sample API.

* **[FHIR Connector Reference]({{base_path}}/reference/connectors/fhir-connector/fhir-connector-config/)**: This documentation provides a reference guide for the FHIR Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, create a pull request in the following repository.

* [FHIR Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-fhir)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-config.md b/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-config.md
deleted file mode 100644
index f70241f9e1..0000000000
--- a/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-config.md
+++ /dev/null
@@ -1,1595 +0,0 @@
# File Connector Reference

The following operations allow you to work with the File Connector version 3. Click an operation name to see parameter details and samples of how to use it.

??? note "append"
    The append operation appends content to an existing file in a specified location.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>destination</td>
            <td>The location of the file to which the content needs to be appended.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>inputContent</td>
            <td>The content to be appended.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>position</td>
            <td>The position at which to append the content. If you provide a valid position, the content is appended there; otherwise, it is appended at the end of the file.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>encoding</td>
            <td>The encoding that is supported.
Possible values are US-ASCII, UTF-8, and UTF-16.</td> - <td>Optional</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>Location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>Passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.append> - <destination>{$ctx:destination}</destination> - <inputContent>{$ctx:inputContent}</inputContent> - <position>{$ctx:position}</position> - <encoding>{$ctx:encoding}</encoding> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.append> - ``` - - **Sample request** - - Following is a sample REST/JSON request that can be handled by the append operation. - ```json - { - "destination":"/home/vive/Desktop/file/append.txt", - "inputContent":"Add Append Text." - } - ``` - - -??? note "archive" - The archive operation archives files or folders. This operation supports the ZIP archive type. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location of the archived file with the file name. (e.g., file:///home/user/test/test.zip)</td> - <td>Yes</td> - </tr> - <tr> - <td>inputContent</td> - <td>The input content that needs to be archived.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file where input content needs to be archived.</td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. 
E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeSubDirectories</td> - <td>Set to true if you want to include the sub directories.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>Location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>Passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - > NOTE: To make an archive operation, you can provide either the source or inputContent. If inputContent is provided as the parameter, we need to specify fileName. Otherwise, it will use the default fileName (output.txt). - - **Sample configuration** - - ```xml - <fileconnector.archives> - <source>{$ctx:source}</source> - <destination>{$ctx:destination}</destination> - <inputContent>{$ctx:inputContent}</inputContent> - <fileName>{$ctx:fileName}</fileName> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <includeSubDirectories>{$ctx:includeSubDirectories}</includeSubDirectories> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.archives> - ``` - - **Sample request** - - Following is a sample REST/JSON request that can be handled by the archive operation. - ```json - { - "source":"/home/vive/Desktop/file", - "destination":"/home/user/test/file.zip", - "includeSubDirectories":"true" - } - ``` - - -??? note "copy" - The copy operation copies files from one location to another. This operation can be used when you want to copy any kind of files and large files as well. You can also copy particular files with specified file patterns. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location of the archived file with the file name. (e.g., file:///home/user/test/test.zip)</td> - <td>Yes</td> - </tr> - <tr> - <td>filePattern</td> - <td>The pattern of the files to be copied. (e.g., [a-zA-Z][a-zA-Z]*.(txt|xml|jar))</td> - <td>Optional</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. 
E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeParentDirectory</td> - <td>Set to true if you want to include the parent directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentities</td> - <td>Location of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentityPassphrase</td> - <td>Passphrase of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentities</td> - <td>Location of the target's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentityPassphrase</td> - <td>Passphrase of the target's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeSubDirectories</td> - <td>Set to true if you want to include the sub directories.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.copy> - <source>{$ctx:source}</source> - <destination>{$ctx:destination}</destination> - <filePattern>{$ctx:filePattern}</filePattern> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <includeParentDirectory>{$ctx:includeParentDirectory}</includeParentDirectory> - <sourceSftpIdentities>{$ctx:sftpIdentities}</sourceSftpIdentities> - <sourceSftpIdentityPassphrase>{$ctx:sourceSftpIdentityPassphrase}</sourceSftpIdentityPassphrase> - <targetSftpIdentities>{$ctx:targetSftpIdentities}</targetSftpIdentities> - <targetSftpIdentityPassphrase>{$ctx:targetSftpIdentityPassphrase}</targetSftpIdentityPassphrase> - <includeSubDirectories>{$ctx:includeSubDirectories}</includeSubDirectories> - </fileconnector.copy> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file", - "destination":"/home/user/test/fileCopy", - "filePattern":".*\.xml", - "includeParentDirectory":"false", - "includeSubDirectories":"false" - } - ``` - - -??? note "create" - The create operation creates a file or folder in a specified location. When creating a file, you can either create the file with content or without content. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>filePath</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. 
UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>inputContent</td> - <td>The content of the file.</td> - <td>Optional</td> - </tr> - <tr> - <td>encoding</td> - <td>The encoding that is supported. Possible values are US-ASCII, UTF-8, and UTF-16.</td> - <td>Optional</td> - </tr> - <tr> - <td>isBinaryContent</td> - <td>Set to true if input content should be handled as binary data. Input content is expected to be base64 encoded binary content.</td> - <td>Optional</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>Location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>Passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.create> - <filePath>{$ctx:filePath}</filePath> - <inputContent>{$ctx:inputContent}</inputContent> - <encoding>{$ctx:encoding}</encoding> - <isBinaryContent>{$ctx:isBinaryContent}</isBinaryContent> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.create> - ``` - - **Sample request** - - ```json - { - "filePath":"sftp://UserName:Password@Host/home/connectors/create.txt", - "inputContent":"InputContent Text", - "encoding":"UTF8" - } - ``` - - -??? note "delete" - The delete operation deletes a file or folder from the file system. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. 
UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>filePattern</td> - <td>The pattern of the files to be deleted.(e.g., [a-zA-Z][a-zA-Z]*.(txt|xml|jar)).</td> - <td>Optional</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeSubDirectories</td> - <td>Set to true if you want to include the sub directories.</td> - <td>Optional</td> - </tr> - <tr> - <td>deleteContainerFolders</td> - <td>Set to true if you want to delete the container folders.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>Location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>Passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.delete> - <source>{$ctx:source}</source> - <filePattern>{$ctx:filePattern}</filePattern> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <includeSubDirectories>{$ctx:includeSubDirectories}</includeSubDirectories> - <deleteContainerFolders>{$ctx:deleteContainerFolders}</deleteContainerFolders> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.delete> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file", - "filePattern":".*\.txt", - "includeSubDirectories":"true" - } - ``` - - -??? note "isFileExist" - The isFileExist operation checks the existence of a file in a specified location. This operation returns true if the file exists and returns false if the file does not exist in the specified location. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. 
UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>The location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>The passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.isFileExist> - <source>{$ctx:source}</source> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.isFileExist> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/test.txt" - } - ``` - - -??? note "listFileZip" - The listFileZip operation lists all the file paths inside a compressed file. This operation supports the ZIP archive type. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. 
E.g., no.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.listFileZip> - <source>{$ctx:source}</source> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - </fileconnector.listFileZip> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/test.zip" - } - ``` - - -??? note "move" - The move operation moves a file or folder from one location to another. - - **Info**: The move operation can only move a file/folder within the same server. For example, you can move a file/folder from one local location to another local location, or from one remote location to another remote location on the same server. You cannot use the move operation to move a file/folder between different servers. If you want to move a file/folder from a local location to a remote location or vice versa, use the copy operation followed by delete operation instead of using the move operation. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location where the file has to be moved to.</td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. 
E.g., no.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>includeParentDirectory</td>
            <td>Set to true if you want to include the parent directory.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>includeSubDirectories</td>
            <td>Set to true if you want to include the sub directories.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>setAvoidPermission</td>
            <td>Set to true if you want to skip the file permission check.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>sourceSftpIdentities</td>
            <td>Location of the source's private key.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>sourceSftpIdentityPassphrase</td>
            <td>Passphrase of the source's private key.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>targetSftpIdentities</td>
            <td>Location of the target's private key.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>targetSftpIdentityPassphrase</td>
            <td>Passphrase of the target's private key.</td>
            <td>Optional</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <fileconnector.move>
        <source>{$ctx:source}</source>
        <destination>{$ctx:destination}</destination>
        <setTimeout>{$ctx:setTimeout}</setTimeout>
        <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode>
        <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout>
        <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot>
        <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking>
        <filePattern>{$ctx:filePattern}</filePattern>
        <includeParentDirectory>{$ctx:includeParentDirectory}</includeParentDirectory>
        <includeSubDirectories>{$ctx:includeSubDirectories}</includeSubDirectories>
        <setAvoidPermission>{$ctx:setAvoidPermission}</setAvoidPermission>
        <sourceSftpIdentities>{$ctx:sftpIdentities}</sourceSftpIdentities>
        <sourceSftpIdentityPassphrase>{$ctx:sourceSftpIdentityPassphrase}</sourceSftpIdentityPassphrase>
        <targetSftpIdentities>{$ctx:targetSftpIdentities}</targetSftpIdentities>
        <targetSftpIdentityPassphrase>{$ctx:targetSftpIdentityPassphrase}</targetSftpIdentityPassphrase>
    </fileconnector.move>
    ```

    **Sample request**

    ```json
    {
        "source":"/home/vive/Desktop/file",
        "destination":"/home/vive/Desktop/move",
        "filePattern":".*\\.txt",
        "includeParentDirectory":"true",
        "includeSubDirectories":"true"
    }
    ```


??? note "read"
    The read operation reads content from an existing file in a specified location.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>source</td>
            <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server.
                <ul>
                    <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li>
                    <li>For files on an FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li>
                </ul>
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>filePattern</td>
            <td>The pattern of the file to be read.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>contentType</td>
            <td>Content type of the files processed by the connector.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>streaming</td>
            <td>The streaming mode. This can be either true or false.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>setTimeout</td>
            <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds.
E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeParentDirectory</td> - <td>Set to true if you want to include the parent directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentities</td> - <td>Location of the private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sftpIdentityPassphrase</td> - <td>Passphrase of the private key.</td> - <td>Optional</td> - </tr> - </table> - - **Info**: To enable streaming for large files, you have to add the following message builder and formatter in the <ESB_HOME>/repository/conf/axis2/axis2.xml file: - * Add <messageFormatter contentType="application/file" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/> under message formatters. - * Add <messageBuilder contentType="application/file" class="org.apache.axis2.format.BinaryBuilder"/> under message builders. - - **Sample configuration** - - ```xml - <fileconnector.read> - <source>{$ctx:source}</source> - <filePattern>{$ctx:filePattern}</filePattern> - <contentType>{$ctx:contentType}</contentType> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <sftpIdentities>{$ctx:sftpIdentities}</sftpIdentities> - <sftpIdentityPassphrase>{$ctx:sftpIdentityPassphrase}</sftpIdentityPassphrase> - </fileconnector.read> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file", - "contentType":"application/xml", - "filePattern":".*\.xml", - "streaming":"false" - } - ``` - - -??? note "search" - The search operation finds a file or folder based on a given file pattern or directory pattern in a specified location. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>filePattern</td> - <td>The pattern of the file to be read.</td> - <td>Yes</td> - </tr> - <tr> - <td>recursiveSearch</td> - <td>Whether you are searching recursively (the possible values are True or False).</td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. 
E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>includeParentDirectory</td> - <td>Set to true if you want to include the parent directory.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.search> - <source>{$ctx:source}</source> - <filePattern>{$ctx:filePattern}</filePattern> - <recursiveSearch>{$ctx:recursiveSearch}</recursiveSearch> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - </fileconnector.search> - ``` - - **Sample request** - ```json - { - "source":"/home/vive/Desktop/file", - "filePattern":".*\.xml", - "recursiveSearch":"true" - } - ``` - - -??? note "unzip" - The unzip operation decompresses zip file. This operation supports ZIP archive type. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file. This can be a file on the local physical file system or a file on an FTP server. - <ul> - <li>For local files, the URI format is [file://]absolute-path, where absolute-path is a valid absolute file name for the local platform. UNC names are supported under Windows (e.g., file:///home/user/test or file:///C:/Windows).</li> - <li>For files on a FTP server, the URI format is ftp://[ username[: password]@] hostname[: port][ relative-path] (e.g., ftp://myusername:mypassword@somehost/pub/downloads/test.txt).</li> - </ul> - </td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location of the decompressed file.</td> - <td>Yes</td> - </tr> - <tr> - <td>setTimeout</td> - <td>The timeout value on the JSC (Java Secure Channel) session in milliseconds. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setPassiveMode</td> - <td>Set to true if you want to enable passive mode.</td> - <td>Optional</td> - </tr> - <tr> - <td>setSoTimeout</td> - <td>The socket timeout value for the FTP client. E.g., 100000.</td> - <td>Optional</td> - </tr> - <tr> - <td>setUserDirIsRoot</td> - <td>Set to true if you want to use root as the user directory.</td> - <td>Optional</td> - </tr> - <tr> - <td>setStrictHostKeyChecking</td> - <td>Sets the requirement to use host key checking. 
E.g., no.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentities</td> - <td>Location of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentityPassphrase</td> - <td>Passphrase of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentities</td> - <td>Location of the target's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentityPassphrase</td> - <td>Passphrase of the target's private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.unzip> - <source>{$ctx:source}</source> - <destination>{$ctx:destination}</destination> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - <sourceSftpIdentities>{$ctx:sftpIdentities}</sourceSftpIdentities> - <sourceSftpIdentityPassphrase>{$ctx:sourceSftpIdentityPassphrase}</sourceSftpIdentityPassphrase> - <targetSftpIdentities>{$ctx:targetSftpIdentities}</targetSftpIdentities> - <targetSftpIdentityPassphrase>{$ctx:targetSftpIdentityPassphrase}</targetSftpIdentityPassphrase> - </fileconnector.unzip> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/test.zip", - "destination":"/home/vive/Desktop/file/test" - } - ``` - - -??? note "ftpOverProxy" - The ftpOverProxy operation connects to a FTP server through a proxy. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>proxyHost</td> - <td>The host name of the proxy.</td> - <td>Yes</td> - </tr> - <tr> - <td>proxyPort</td> - <td>The port number of the proxy.</td> - <td>Yes</td> - </tr> - <tr> - <td>proxyUsername</td> - <td>The user name of the proxy.</td> - <td>Yes</td> - </tr> - <tr> - <td>proxyPassword</td> - <td>The password of the proxy.</td> - <td>Yes</td> - </tr> - <tr> - <td>ftpUsername</td> - <td>The username of the FTP server.</td> - <td>Yes</td> - </tr> - <tr> - <td>ftpPassword</td> - <td>The password of the FTP server.</td> - <td>Yes</td> - </tr> - <tr> - <td>ftpServer</td> - <td>The FTP server name.</td> - <td>Yes</td> - </tr> - <tr> - <td>ftpPort</td> - <td>The port number of the FTP server.</td> - <td>Yes</td> - </tr> - <tr> - <td>targetPath</td> - <td>The target path. 
For example, if the file path is ftp://myusername:mypassword@somehost/pub/downloads/testProxy.txt, the targetPath will be pub/downloads/.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>targetFile</td>
            <td>The name of the file (e.g., if the path is "ftp://myusername:mypassword@somehost/pub/downloads/testProxy.txt", then targetFile will be "testProxy.txt").</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>keepAliveTimeout</td>
            <td>The time to wait between sending control connection keep-alive messages when processing a file upload or download.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>controlKeepAliveReplyTimeout</td>
            <td>The time to wait for control keep-alive message replies.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>binaryTransfer</td>
            <td>Set the file type to be transferred.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>localActive</td>
            <td>Set the current data connection mode to either ACTIVE_LOCAL_DATA_CONNECTION_MODE or PASSIVE_LOCAL_DATA_CONNECTION_MODE.</td>
            <td>Optional</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <fileconnector.ftpOverProxy>
        <proxyHost>{$ctx:proxyHost}</proxyHost>
        <proxyPort>{$ctx:proxyPort}</proxyPort>
        <proxyUsername>{$ctx:proxyUsername}</proxyUsername>
        <proxyPassword>{$ctx:proxyPassword}</proxyPassword>
        <ftpUsername>{$ctx:ftpUsername}</ftpUsername>
        <ftpPassword>{$ctx:ftpPassword}</ftpPassword>
        <ftpServer>{$ctx:ftpServer}</ftpServer>
        <ftpPort>{$ctx:ftpPort}</ftpPort>
        <targetPath>{$ctx:targetPath}</targetPath>
        <targetFile>{$ctx:targetFile}</targetFile>
        <keepAliveTimeout>{$ctx:keepAliveTimeout}</keepAliveTimeout>
        <controlKeepAliveReplyTimeout>{$ctx:controlKeepAliveReplyTimeout}</controlKeepAliveReplyTimeout>
        <binaryTransfer>{$ctx:binaryTransfer}</binaryTransfer>
        <localActive>{$ctx:localActive}</localActive>
    </fileconnector.ftpOverProxy>
    ```

    **Sample request**

    ```json
    {
        "proxyHost":"SampleProxy",
        "proxyPort":"3128",
        "proxyUsername":"wso2",
        "proxyPassword":"Password",
        "ftpUsername":"primary",
        "ftpPassword":"Password",
        "ftpServer":"192.168.56.6",
        "ftpPort":"21",
        "targetFile":"/home/primary/res"
    }
    ```


??? note "send"
    The send operation sends a file to a specified location.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>address</td>
            <td>The address to which the file has to be sent.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>append</td>
            <td>Set this to true if you want to append the response to the response file.</td>
            <td>Optional</td>
        </tr>
    </table>

    > **Note**: To send a VFS file, you have to specify the following properties in your configuration:
    ```
    <property name="OUT_ONLY" value="true"/>
    <property name="ClientApiNonBlocking" value="true" scope="axis2" action="remove"/>
    ```

    **Sample configuration**

    ```xml
    <fileconnector.send>
        <address>{$ctx:address}</address>
        <append>{$ctx:append}</append>
    </fileconnector.send>
    ```

    **Sample request**

    ```json
    {
        "address":"/home/vive/Desktop/file/outTest",
        "append":"true"
    }
    ```
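    The following is a minimal sketch of how these pieces are typically combined in an in sequence; the address value is illustrative and taken from the sample request above.

    ```xml
    <inSequence>
        <!-- Mark the call as out-only, as required when sending VFS files -->
        <property name="OUT_ONLY" value="true"/>
        <property name="ClientApiNonBlocking" value="true" scope="axis2" action="remove"/>
        <!-- Write the current message to the given location -->
        <fileconnector.send>
            <address>/home/vive/Desktop/file/outTest</address>
            <append>true</append>
        </fileconnector.send>
    </inSequence>
    ```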
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.getSize> - <source>{$ctx:source}</source> - </fileconnector.getSize> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/sample.txt" - } - ``` - - -??? note "getLastModifiedTime" - The getLastModifiedTime operation returns last modified time of a file/folder. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.getLastModifiedTime> - <source>{$ctx:source}</source> - </fileconnector.getLastModifiedTime> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/sample.txt" - } - ``` - - -??? note "splitFile" - The splitFile operation splits a file into multiple chunks. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location to write the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>chunkSize</td> - <td>The chunk size in bytes to split the file. This is to split the file based on chunk size. You should provide either chunkSize or numberOfLines to split the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>numberOfLines</td> - <td>The number of lines per file. This is to split the file based on the number of lines. You should provide either chunkSize or numberOfLines to split the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>xpathExpression</td> - <td>Defines a pattern in order to select a set of nodes in XML document.</td> - <td>Yes</td> - </tr> - <tr> - <td>sourceSftpIdentities</td> - <td>Location of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentityPassphrase</td> - <td>Passphrase of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentities</td> - <td>Location of the target's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentityPassphrase</td> - <td>Passphrase of the target's private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.splitFile> - <source>{$ctx:source}</source> - <destination>{$ctx:destination}</destination> - <chunkSize>{$ctx:chunkSize}</chunkSize> - <numberOfLines>{$ctx:numberOfLines}</numberOfLines> - <xpathExpression>{$ctx:xpathExpression}</xpathExpression> - <sourceSftpIdentities>{$ctx:sftpIdentities}</sourceSftpIdentities> - <sourceSftpIdentityPassphrase>{$ctx:sourceSftpIdentityPassphrase}</sourceSftpIdentityPassphrase> - <targetSftpIdentities>{$ctx:targetSftpIdentities}</targetSftpIdentities> - <targetSftpIdentityPassphrase>{$ctx:targetSftpIdentityPassphrase}</targetSftpIdentityPassphrase> - </fileconnector.splitFile> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/sample.txt", - "destination":"/home/vive/Desktop/file/outTest/", - "chunkSize":"4096", - "xpathExpression":"//products/product" - } - ``` - - -??? note "mergeFiles" - The mergeFiles operation merges multiple chunks into a single file. 
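-
-    The parameter table and the standard samples follow below. Because mergeFiles is the natural counterpart of splitFile, a hedged round-trip sketch is also given here; the paths are placeholders, and the elements mirror the documented samples of the two operations.
-
-    ```xml
-    <!-- Illustrative round trip (placeholder paths): split a large file
-         into 4 KB chunks, then merge the chunks back into one file. -->
-    <fileconnector.splitFile>
-        <source>/home/user/in/large.txt</source>
-        <destination>/home/user/chunks/</destination>
-        <chunkSize>4096</chunkSize>
-    </fileconnector.splitFile>
-    <fileconnector.mergeFiles>
-        <source>/home/user/chunks/</source>
-        <destination>/home/user/out/merged.txt</destination>
-        <filePattern>*.txt*</filePattern>
-    </fileconnector.mergeFiles>
-    ```
-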
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>destination</td> - <td>The location to write the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>filePattern</td> - <td>The pattern of the file to be read.</td> - <td>Yes</td> - </tr> - <tr> - <td>sourceSftpIdentities</td> - <td>Location of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>sourceSftpIdentityPassphrase</td> - <td>Passphrase of the source's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentities</td> - <td>Location of the target's private key.</td> - <td>Optional</td> - </tr> - <tr> - <td>targetSftpIdentityPassphrase</td> - <td>Passphrase of the target's private key.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.mergeFiles> - <source>{$ctx:source}</source> - <destination>{$ctx:destination}</destination> - <filePattern>{$ctx:filePattern}</filePattern> - <sourceSftpIdentities>{$ctx:sftpIdentities}</sourceSftpIdentities> - <sourceSftpIdentityPassphrase>{$ctx:sourceSftpIdentityPassphrase}</sourceSftpIdentityPassphrase> - <targetSftpIdentities>{$ctx:targetSftpIdentities}</targetSftpIdentities> - <targetSftpIdentityPassphrase>{$ctx:targetSftpIdentityPassphrase}</targetSftpIdentityPassphrase> - </fileconnector.mergeFiles> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/", - "destination":"/home/vive/Desktop/file/outTest/sample.txt", - "filePattern":"*.txt*" - } - ``` - - -??? note "readSpecifiedLines" - The readSpecifiedLines operation reads specific lines between given line numbers from a file. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>contentType</td> - <td>Content type of the files processed by the connector.</td> - <td>Yes</td> - </tr> - <tr> - <td>start</td> - <td>Read from this line number.</td> - <td>Yes</td> - </tr> - <tr> - <td>end</td> - <td>Read up to this line number.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.readSpecifiedLines> - <source>{$ctx:source}</source> - <contentType>{$ctx:contentType}</contentType> - <start>{$ctx:start}</start> - <end>{$ctx:end}</end> - </fileconnector.readSpecifiedLines> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/sampleText.txt", - "start":"5", - "end":"25" - } - ``` - - -??? note "readALine" - The readALine operation reads a specific line from a file. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>source</td> - <td>The location of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>lineNumber</td> - <td>Line number to read.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <fileconnector.readALine> - <source>{$ctx:source}</source> - <lineNumber>{$ctx:lineNumber}</lineNumber> - </fileconnector.readALine> - ``` - - **Sample request** - - ```json - { - "source":"/home/vive/Desktop/file/outTest/sampleText.txt", - "lineNumber":"5" - } - ``` - - - -### Sample configuration in a scenario - -The following is a sample proxy service that illustrates how to connect to the File connector and use the create operation to create a file. 
You can use this sample as a template for using other operations in this category. - -**Sample Proxy** -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="FileConnector_create" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="source" expression="json-eval($.source)"/> - <property name="inputContent" expression="json-eval($.inputContent)"/> - <property name="encoding" expression="json-eval($.encoding)"/> - <property name="setTimeout" expression="json-eval($.setTimeout)"/> - <property name="setPassiveMode" expression="json-eval($.setPassiveMode)"/> - <property name="setSoTimeout" expression="json-eval($.setSoTimeout)"/> - <property name="setStrictHostKeyChecking" - expression="json-eval($.setStrictHostKeyChecking)"/> - <property name="setUserDirIsRoot" expression="json-eval($.setUserDirIsRoot)"/> - <fileconnector.create> - <source>{$ctx:source}</source> - <inputContent>{$ctx:inputContent}</inputContent> - <encoding>{$ctx:encoding}</encoding> - <setTimeout>{$ctx:setTimeout}</setTimeout> - <setPassiveMode>{$ctx:setPassiveMode}</setPassiveMode> - <setSoTimeout>{$ctx:setSoTimeout}</setSoTimeout> - <setUserDirIsRoot>{$ctx:setUserDirIsRoot}</setUserDirIsRoot> - <setStrictHostKeyChecking>{$ctx:setStrictHostKeyChecking}</setStrictHostKeyChecking> - </fileconnector.create> - <respond/> - </inSequence> - </target> - <description/> -</proxy> -``` - -**Note**: For more information on how this works in an actual scenario, see [File Connector Example]({{base_path}}/reference/connectors/file-connector/3.x/file-connector-3.x-example). diff --git a/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-example.md b/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-example.md deleted file mode 100644 index 2df788c90f..0000000000 --- a/en/docs/reference/connectors/file-connector/3.x/file-connector-3.x-example.md +++ /dev/null @@ -1,184 +0,0 @@ -# File Connector Example - -File Connector can be used to perform operations in the local file system as well as in a remote server such as FTP and SFTP. - -## What you'll build - -This example explains how to use File Connector to create a file in the local file system and read the particular file. The user sends a payload specifying which content is to be written to the file. Based on that content, a file is created in the specified location. Then the content of the file can be read as an HTTP response by invoking the other API resource upon the existence of the file. - -It will have two HTTP API resources, which are `create` and `read`. - -* `/create `: The user sends the request payload which includes the location where the file needs to be saved and the content needs to be added to the file. This request is sent to the integration runtime by invoking the FileConnector API. It saves the file in the specified location with the relevant content. - - <p><img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/fileconnector-03.png" title="Adding a Rest API" width="800" alt="Adding a Rest API" /></p> - -* `/read `: The user sends the request payload, which includes the location of the file that needs to be read. This request is sent to the integration runtime where the FileConnector API resides. Once the API is invoked, it first checks if the file exists in the specified location. If it exists, the content is read and response is sent to the user. If the file does not exist, it sends an error response to the user. 
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/fileconnector-02.png" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-## Creating the Integration Logic
-
-1. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/adding-an-api.png" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-2. Provide the API name as File Connector and the API context as `/fileconnector`.
-
-3. First, create the `/create` resource. Right-click the API Resource and go to the **Properties** view. We use a URL template called `/create` because there are two API resources inside a single API. The method will be `Post`.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon-3.png" title="Adding the API resource." width="800" alt="Adding the API resource."/>
-
-4. In this operation, we receive two inputs from the user: `filePath` and `inputContent`.
-    - filePath - the location where the file will be created.
-    - inputContent - the content that needs to be written to the file.
-
-5. The above two parameters are saved to properties. Drag and drop the Property Mediator onto the canvas in the design view and configure it as shown below. For further reference, you can read about the [Property mediator]({{base_path}}/reference/mediators/property-mediator/).
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon-1.png" title="Adding a property" width="800" alt="Adding a property"/>
-
-6. Add another Property Mediator to get the InputContent value copied. Do the same as in the above step.
-    - property name: InputContent
-    - Value Type: EXPRESSION
-    - Value Expression: json-eval($.inputContent)
-
-7. Drag and drop the create operation of the File Connector to the Design View as shown below, and set the parameter values. Here we use the property values added in steps 5 and 6 as `$ctx:filePath` and `$ctx:inputContent`.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/file-con2.png" title="Adding createFile operation" width="800" alt="Adding createFile operation"/>
-
-8. Add a Respond Mediator so that the user can see the response. The first API resource is now complete, and it is displayed as shown below.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon9.png" title="First API Resource" width="800" alt="First API Resource"/>
-
-9. Create the next API resource, which is `/read`. It reads the file content from a user-specified location.
-
-10. As described in step 3, drag and drop another API resource to the design view. Use the URL template `/read`. The method will be POST.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/apiResource.png" title="Adding an API resource" width="800" alt="Adding an API resource"/>
-
-11. In this operation, the user sends the file location as the request payload. It is written to a property, as in step 5.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon4.png" title="Adding property mediator" width="800" alt="Adding property mediator"/>
-
-12. Then we are going to check if the file actually exists in the specified location. We can use the `isFileExist` operation of the File Connector.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon5.png" title="Adding isFileExist operation" width="800" alt="Adding isFileExist operation"/>
-
-13. Then we copy the `isFileExist` response to a property. Add another Property Mediator and add the values as shown below.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon6.png" title="Adding property mediator" width="800" alt="Adding property mediator"/>
-
-14. Based on its response, we decide whether to read the file or return an error to the user. In this case, we use a Switch Mediator.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon7.png" title="Adding switch mediator" width="800" alt="Adding switch mediator"/>
-
-15. Drag and drop the read operation to the **case** of the Switch Mediator. The default case logs an error and drops the message.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.x/filecon8.png" title="switch mediator" width="800" alt="switch mediator"/>
-
-16. You can find the complete API XML configuration below. You can go to the source view and copy and paste the following config.
-
-    ```
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/fileconnector" name="FileConnector" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST" uri-template="/create">
-            <inSequence>
-                <!-- The property name must match the {$ctx:filePath} reference used below -->
-                <property expression="json-eval($.filePath)" name="filePath" scope="default" type="STRING"/>
-                <property expression="json-eval($.inputContent)" name="inputContent" scope="default" type="STRING"/>
-                <fileconnector.create>
-                    <filePath>{$ctx:filePath}</filePath>
-                    <inputContent>{$ctx:inputContent}</inputContent>
-                </fileconnector.create>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-        <resource methods="POST" uri-template="/read">
-            <inSequence>
-                <property expression="json-eval($.source)" name="source" scope="default" type="STRING"/>
-                <fileconnector.isFileExist>
-                    <source>{$ctx:source}</source>
-                </fileconnector.isFileExist>
-                <property expression="json-eval($.fileExist)" name="response" scope="default" type="STRING"/>
-                <log level="custom">
-                    <property expression="get-property('response')" name="responselog"/>
-                </log>
-                <switch source="get-property('response')">
-                    <case regex="true">
-                        <fileconnector.read>
-                            <source>{$ctx:source}</source>
-                        </fileconnector.read>
-                        <respond/>
-                    </case>
-                    <default>
-                        <log>
-                            <property name="notext" value="&quot;File does not exist&quot;"/>
-                        </log>
-                        <drop/>
-                    </default>
-                </switch>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-    </api>
-    ```
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/filecon-3.x/fileconnector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-### File Create Operation
-
-1. Create a file called data.json with the following payload.
-    ```
-    {
-        "filePath":"<file_path>/create.txt",
-        "inputContent": "This is a test file"
-    }
-    ```
-    > **Note**: When configuring the `source` parameter on the Windows operating system, set the property as shown in this example: `<source>C:\\Users\Kasun\Desktop\Salesforcebulk-connector\create.txt</source>`.
-
-2. Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-    ```
-    curl -H "Content-Type: application/json" --request POST --data @data.json http://10.100.5.136:8290/fileconnector/create
-    ```
-**Expected Response**:
-You should get a 'Success' response, and the file should be created in the location specified in the above payload.
-
-### File Read Operation
-
-1. Create a file called data.json with the following payload.
-    ```
-    {
-        "source":"<file_path>/create.txt"
-    }
-    ```
-2. Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-    ```
-    curl -H "Content-Type: application/json" --request POST --data @data.json http://10.100.5.136:8290/fileconnector/read
-    ```
-
-**Expected Response**:
-You should get the following text returned.
-
-`
-This is a test file
-`
-
-## What's Next
-
-* To customize this example for your own scenario, see the [File Connector Configuration]({{base_path}}/reference/connectors/file-connector/3.x/file-connector-3.x-config) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/file-connector/file-connector-config.md b/en/docs/reference/connectors/file-connector/file-connector-config.md
deleted file mode 100644
index 2ba34208de..0000000000
--- a/en/docs/reference/connectors/file-connector/file-connector-config.md
+++ /dev/null
@@ -1,2936 +0,0 @@
-# File Connector Reference
-
-The following configurations allow you to work with the File Connector version 4.
-
-## Connection configurations
-
-The File connector can be used to deal with two types of file systems:
-
-- <b>Local File System</b>: A file system of the server where the WSO2 integration runtime is deployed.
-- <b>Remote File System</b>: A file system outside the server where the WSO2 integration runtime is deployed. There are a few industry-standard protocols established to expose a file system over TCP. The following protocols are supported by the File connector.
-
-    - FTP
-    - FTPS
-    - SFTP
-
-There are different connection configurations that can be used for the above protocols. They contain a common set of configurations and some additional configurations specific to the protocol.
-
-<img src="{{base_path}}/img/connectors/filecon-reference-22.png" title="types of file connections" width="800" alt="types of file connections"/>
-
-
-!!! Note
-    The File connector internally uses the [Apache VFS Library](https://commons.apache.org/proper/commons-vfs/). According to the selected connection type, the following VFS connection URLs will be generated.
- - === "Local File" - ```bash - [file://] absolute-path - file:///home/someuser/somedir - file:///C:/Documents and Settings - ``` - - === "FTP" - ```bash - ftp://[ username[: password]@] hostname[: port][ relative-path] - ftp://myusername:mypassword@somehost/pub/downloads/somefile.tgz - ``` - - === "FTPS" - ```bash - ftps://[ username[: password]@] hostname[: port][ absolute-path] - ftps://myusername:mypassword@somehost/pub/downloads/somefile.tgz - ``` - - === "SFTP" - ```bash - sftp://[ username[: password]@] hostname[: port][ relative-path] - sftp://myusername:mypassword@somehost/pub/downloads/somefile.tgz - ``` - -!!! Tip - There are instances where errors occur when using .csv files and the output is encoded. To overcome this, add the following configuration to the `<PRODUCT_HOME>/repository/conf/deployment.toml` file. - - ```toml - - [[custom_message_formatters]] - content_type = "text/csv" - class = "org.apache.axis2.format.PlainTextFormatter" - - [[custom_message_builders]] - content_type = "text/csv" - class = "org.apache.axis2.format.PlainTextBuilder" - - ``` - - Also, you need to modify your proxy service as indicated below. - - ```xml - - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="proxyDeployingSeq" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <property name="slotNumber" value="1"/> - <property expression="get-property("SYSTEM_DATE", "mm")" name="currentMin" scope="default" type="STRING"/> - <file.read configKey="slotFileConnection"> - <path>/csv/slot.csv</path> - <readMode>Specific Line</readMode> - <startLineNum>0</startLineNum> - <endLineNum>0</endLineNum> - <lineNum>{get-property('currentMin')}</lineNum> - <contentType>text/csv</contentType> - <includeResultTo>Message Property</includeResultTo> - <resultPropertyName>slotNumber</resultPropertyName> - <enableStreaming>false</enableStreaming> - <enableLock>false</enableLock> - </file.read> - <log level="custom"> - <property name="slott" expression="get-property('slotNumber')"/> - </log> - </sequence> - - ``` - -### Common configs to all connection types - -<table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Connection Name - </td> - <td> - name - </td> - <td> - String - </td> - <td> - A unique name to identify the connection. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Connection Type - </td> - <td> - connectionType - </td> - <td> - String - </td> - <td> - The protocol used for communicating with the file system.</br> - <b>Possible values</b>: - <ul> - <li> - <b>Local</b>: Provides access to the files on the local physical file system. - </li> - <li> - <b>FTP</b>: Provides access to the files on an FTP server. - </li> - <li> - <b>FTPS</b>: Provides access to the files on an FTP server over SSL. - </li> - <li> - <b>SFTP</b>: Provides access to the files on an SFTP server (that is, an SSH or SCP server). - </li> - </ul> - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Working Directory - </td> - <td> - workingDir - </td> - <td> - String - </td> - <td> - This is the working directory. The file paths in operations, which are associated with the connection, should be provided relative to this folder. </br> - <b>Note</b>: As per <a href="https://commons.apache.org/proper/commons-vfs/filesystems.html#Local_Files">VFS documentation</a>, for windows, the working directory of local connections should be as follows: <code>/C:/Documents</code>. 
- <td> - Defaults to file system root. - </td> - <td> - No - </td> - </tr> - <tr> - <td> - File Locking Behaviour - </td> - <td> - fileLockScheme - </td> - <td> - String - </td> - <td> - Specify whether to acquire node-specific lock (Local) or cluster-wide lock (Cluster) when locks are acquired in read and write operations.</br> - <ul> - <li> - <b>Local</b></br> - When a lock is acquired, it is acquired within the context of file operations performed by that server node only. Local lock acquired by some file operation on a particular server node is not visible to the other server nodes that may access the same file system. - </li> - <li> - <b>Cluster</b></br> - When multiple server nodes access the same file system performing read and write operations, you may use this behaviour. Here, when a file lock is acquired, it is visible to all file connector operations across the nodes. This is acquired by creating a <code>.lock</code> file in the same file system (for the file that is being accessed). The behaviour depends on the OS and the file system. Therefore, this feature may not work as intended in high-concurrent scenarios. - </li> - </ul> - <b>Note</b>:</br> - File locking is available for read and write operations. When enabled, a file specific lock is acquired before the operation and released after the operation. Parallel read/write operations are restricted when locking is enabled by a file operation. - <td> - Local - </td> - <td> - Yes - </td> - </tr> -</table> - -### Common remote connection configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Host - </td> - <td> - host - </td> - <td> - String - </td> - <td> - Host name of the file server. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Port - </td> - <td> - port - </td> - <td> - Number - </td> - <td> - The port number of the file server - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Username - </td> - <td> - username - </td> - <td> - String - </td> - <td> - User name used to connect with the file server. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Password - </td> - <td> - password - </td> - <td> - String - </td> - <td> - Password to connect with the file server. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - User Directory Is Root - </td> - <td> - userDirIsRoot - </td> - <td> - Boolean - </td> - <td> - If set to false (default), VFS will choose the file system's root as the VFS's root. If you want to have the user's home as the VFS root, then set this to 'true'. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> -</table> - -### FTP/FTPS-specific configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Is Passive - </td> - <td> - isPassive - </td> - <td> - Boolean - </td> - <td> - If passive mode is enabled, set this to 'true'.</br></br> - <b>Note</b> the following about 'Active/Passive' mode: - <ol> - <li> - <b>Active Mode</b>: The client starts listening on a random port for incoming data connections from the server (the client sends the FTP command PORT to inform the server on which port it is listening). Nowadays, the client is typically behind a firewall (e.g. built-in Windows firewall) or NAT router (e.g. ADSL modem), unable to accept incoming TCP connections. 
The passive mode was introduced and is heavily used for this reason. - </li> - <li> - <b>Passive Mode</b>: In the passive mode, the client uses the control connection to send a PASV command to the server and then receives a server IP address and server port number from the server, which the client then uses to open a data connection to the server IP address and server port number received. - </li> - </ol> - </td> - <td> - true - </td> - <td> - No - </td> - </tr> - <tr> - <td> - FTP Connection Timeout - </td> - <td> - ftpConnectionTimeout - </td> - <td> - Number - </td> - <td> - Specify the timeout in milliseconds for the initial control connection. - </td> - <td> - 100000 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - FTP Socket Timeout - </td> - <td> - ftpSocketTimeout - </td> - <td> - Number - </td> - <td> - Specify the socket timeout in milliseconds for the FTP client. - </td> - <td> - 150000 - </td> - <td> - No - </td> - </tr> -</table> - -### FTPS-specific configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - KeyStore Path - </td> - <td> - keyStorePath - </td> - <td> - String - </td> - <td> - The keystore path. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - KeyStore Password - </td> - <td> - keyStorePassword - </td> - <td> - String - </td> - <td> - The password to the keystore. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - TrustStore Path - </td> - <td> - trustStorePath - </td> - <td> - String - </td> - <td> - The truststore path. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - TrustStore Password - </td> - <td> - trustStorePassword - </td> - <td> - String - </td> - <td> - The password to the truststore. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Implicit Mode Enabled - </td> - <td> - implicitModeEnabled - </td> - <td> - Boolean - </td> - <td> - Set this to 'true' if <a href="https://en.wikipedia.org/wiki/FTPS#Implicit">implicit mode </a>is enabled. - <ul> - <li> - <b>Implicit</b>: The TLS ClientHello message should be initiated by client. - </li> - <li> - <b>Explicit</b>: The client must "explicitly request" security from an FTPS server. - </li> - </ul> - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Channel Protection Level - </td> - <td> - channelProtectionLevel - </td> - <td> - String - </td> - <td> - The FTP Data Channel protection level. Possible values: C,S,E,P.</br> - <b>Example</b>: Sends a “PROT P” command when implicit SSL is enabled. - </td> - </tr> -</table> - -### SFTP connection configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - SFTP Connection Timeout - </td> - <td> - sftpConnectionTimeout - </td> - <td> - Number - </td> - <td> - The <b>Jsch</b> connection timeout in milli seconds. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - SFTP Session Timeout - </td> - <td> - sftpSessionTimeout - </td> - <td> - Number - </td> - <td> - The <b>Jsch</b> session timeout in milli seconds. - </td> - <td> - 100000 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Strict Host Key Check - </td> - <td> - strictHostKeyChecking - </td> - <td> - Boolean - </td> - <td> - Specifies whether the Host key should be checked. 
If set to 'true', the connector (JSch) will always verify the public key (fingerprint) of the SSH/SFTP server. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Private Key File - </td> - <td> - privateKeyFilePath - </td> - <td> - String - </td> - <td> - Path to the private key file.</br></br> - <b>Note</b>: You can only use a key generated in a classic manner (<i>ssh-keygen -m PEM</i>). - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Private Key Passphrase - </td> - <td> - privateKeyPassword - </td> - <td> - String - </td> - <td> - The passphrase of the private key. The security of a key (even if encrypted) is retained because it is not available to anyone else. You can specify the passphrase when generating keys. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - File System Permission Check - </td> - <td> - setAvoidPermission - </td> - <td> - Boolean - </td> - <td> - Set to true if you want to skip the file permission check.</br> - Available in file-connector <b>v4.0.9</b> and above. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> -</table> - -## Operations - -The following operations allow you to work with the File Connector version 4. Click an operation name to see parameter details and samples on how to use it. - -??? note "createDirectory" - Creates a new folder in a provided directory path. - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Directory Path - </td> - <td> - directoryPath - </td> - <td> - String - </td> - <td> - The new directory path. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - </table> - - **Response** - - ```xml - <createDirectoryResult> - <success>true</success> - </createDirectoryResult> - ``` - -??? note "checkExist" - Check if a given file or folder exists. - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - File/Folder Path - </td> - <td> - path - </td> - <td> - String - </td> - <td> - The new directory path that should be scanned. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - </table> - - **Response** - - ```xml - <checkExistResult> - <success>true</success> - <fileExists>true</fileExists> - </checkExistResult> - ``` - -??? note "compress" - Archives a file or a directory. - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Folder/File To Compress - </td> - <td> - sourceDirectoryPath - </td> - <td> - String - </td> - <td> - The path to the folder that should be compressed. 
- </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Targer File Path - </td> - <td> - targetFilePath - </td> - <td> - String - </td> - <td> - The path to the compressed file that will be created. If the file already exists, it is overwritten. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Include Sub Directories - </td> - <td> - includeSubDirectories - </td> - <td> - Boolean - </td> - <td> - Specifies whether the sub folders in the original folder should be included in the compressed file. - </td> - <td> - true - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - <compressResult> - <success>true</success> - <NumberOfFilesAdded>16</NumberOfFilesAdded> - </compressResult> - ``` - - **Error** - - ```xml - <compressResult> - <success>false</success> - <code>700102</code> - <detail>File or directory to compress does not exist</detail> - </compressResult> - ``` - -??? note "copy" - Copies the file or folder specified by a source path to a target path. The source can be a file or a folder. If it is a folder, the copying is recursive. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Source Path - </td> - <td> - sourcePath - </td> - <td> - String - </td> - <td> - The path to the file that should be copied. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Targer Path - </td> - <td> - targetPath - </td> - <td> - String - </td> - <td> - The location (folder) to which the file should be copied. </br> - If the target folder does not exist at the time of copy, a new folder is created. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Source File Pattern - </td> - <td> - sourceFilePattern - </td> - <td> - String - </td> - <td> - The file name pattern of the source file. Example: <i>[a-zA-Z][a-zA-Z]*.(txt|xml|jar)</i> - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Copy Including Source Parent - </td> - <td> - includeParent - </td> - <td> - Boolean - </td> - <td> - Specify whether the parent folder should be copied from the file source along with the content. By default, only the content inside the folder will get copied. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Overwrite Existing Files - </td> - <td> - overwrite - </td> - <td> - Boolean - </td> - <td> - Specifies whether or not to overwrite the file if the same file already exists in the target destination. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Rename To - </td> - <td> - renameTo - </td> - <td> - String - </td> - <td> - The new name of the copied file. - </td> - <td> - Original file name. - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - <copyFilesResult> - <success>true</success> - </copyFilesResult> - ``` - - **Error** - - ```xml - <copyFilesResult> - <success>false</success> - <code>700103</code> - <detail>Destination file already exists and overwrite not allowed</detail> - </copyFilesResult> - ``` - -??? note "move" - Moves the file or folder specified by the source path to the target directory. The source can be a file or a folder. If it is a folder, the moving is recursive. 
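-
-    A minimal invocation might look like the following hedged sketch; the connection name and paths are placeholders, the `file.move` form assumes the same `file.<operation>` convention as the `file.read` example shown earlier, and the child elements are named after the parameters listed below.
-
-    ```xml
-    <!-- Hedged sketch: move a file into a 'processed' folder, creating
-         parent directories if needed. Names and paths are illustrative. -->
-    <file.move configKey="localFileConnection">
-        <sourcePath>/in/report.csv</sourcePath>
-        <targetPath>/processed</targetPath>
-        <createParentDirectories>true</createParentDirectories>
-        <overwrite>true</overwrite>
-    </file.move>
-    ```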
- - The move operation can only move a file/folder within the same server. For example, you can move a file/folder from one local location to another local location, or from one remote location to another remote location on the same server. You cannot use the move operation to move a file/folder between different servers. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Source Path - </td> - <td> - sourcePath - </td> - <td> - String - </td> - <td> - The path to the file that should be copied. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Targer Path - </td> - <td> - targetPath - </td> - <td> - String - </td> - <td> - The location to which the file should be copied. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Create Parent Directories - </td> - <td> - createParentDirectories - </td> - <td> - Boolean - </td> - <td> - Specifies whether the parent directory should be created if it doesn't already exist in the target folder. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Include Parent - </td> - <td> - includeParent - </td> - <td> - Boolean - </td> - <td> - Specify whether the parent folder should be copied from the file source along with the content. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Overwrite Existing Files - </td> - <td> - overwrite - </td> - <td> - Boolean - </td> - <td> - Specifies whether or not to overwrite the file if the same file already exists in the target destination. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Rename To - </td> - <td> - renameTo - </td> - <td> - String - </td> - <td> - The new name of the moved files. - </td> - <td> - Original file name. - </td> - <td> - No - </td> - </tr> - <tr> - <td> - File Pattern - </td> - <td> - filePattern - </td> - <td> - String - </td> - <td> - The pattern (regex) of the files to be moved. </br> - <b>Example</b>: <code>[a-zA-Z][a-zA-Z]*.(txt|xml|jar)</code>.</br> - Available in file-connector <b>v4.0.5</b> and above - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - <moveFilesResult> - <success>true</success> - </moveFilesResult> - ``` - - **Error** - - ```xml - <moveFilesResult> - <success>false</success> - <code>700103</code> - <detail>Destination file already exists and overwrite not allowed</detail> - </moveFilesResult> - ``` - -??? note "read" - Reads the content and metadata of a file at a given path. Metadata of the file is added as properties while content is set to the message body (or optionally to a message context property). - - Known message properties representing file properties: - - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - FILE_LAST_MODIFIED_TIME - </td> - <td> - DateTime - </td> - <td> - The time at which the file was last modified. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_SIZE - </td> - <td> - Number - </td> - <td> - The file size (in bytes). 
- </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_IS_DIR - </td> - <td> - Boolean - </td> - <td> - Specifies whether a folder directory is represented as the file. - </td> - <td> - false - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_PATH - </td> - <td> - String - </td> - <td> - The file path. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_URL - </td> - <td> - String - </td> - <td> - The VFS URL of the file. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_NAME - </td> - <td> - String - </td> - <td> - The file name or folder name. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - FILE_NAME_WITHOUT_EXTENSION - </td> - <td> - String - </td> - <td> - The file name without the extension. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - </table> - - Important: - - - When reading a folder, the first file that matches the pattern will be read first. Note that sub directories are not scanned. If you need to move or delete the file before reading the folder again, use the `FILE_NAME` context variable. - - The MIME type (content-type) of the message is determined by the file extension (i.e an XML file will be read as a message with the `application/xml` MIME type). However, users can force the MIME type by the `ContentType` parameter. Similarly, the `Encoding` parameter can be used to force the encoding. - - You can set `EnableLock` to `true` to enable file system lock until the reading is completed and the stream is closed. - - When large files are read, use `streaming=true`. Note that you need to first make necessary changes in the `deployment.toml`. The `ContentType` parameter also needs to be `application/binary`. Note that file reading modes are not applicable when streaming is set to `true`. The complete file is always streamed. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - File Path - </td> - <td> - path - </td> - <td> - String - </td> - <td> - The path to the file that should be read. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - File Pattern - </td> - <td> - filePattern - </td> - <td> - String Regex - </td> - <td> - The file name pattern that should be matched when reading the file. - </td> - <td> - All text files (<code>.*\.txt</code>) - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Add Result To - </td> - <td> - includeResultTo - </td> - <td> - String - </td> - <td> - Specify where to add the result of the file that is read.</br> - <ul> - <li> - <b>Message Body</b> - </li> - <li> - <b>Message Property</b>: The payload that was in the message body before applying the <b>file read</b> operation will remain intact. - </li> - </ul> - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Property Name - </td> - <td> - resultPropertyName - </td> - <td> - String - </td> - <td> - If <code>Add Result To==Message Property</code>, you need to specify this value. Result of the <b>file read</b> operation will be added as a <code>default</code> scope property by the specified name. This can now be accessed later in the flow. 
- </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Read Mode - </td> - <td> - readMode - </td> - <td> - String - </td> - <td> - Available file reading modes: Read complete file, between lines, from line, upto line, single line, metadata only. - </td> - <td> - Reads complete file. - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Start Line Num - </td> - <td> - startLineNum - </td> - <td> - Number - </td> - <td> - Starts reading the file from the specified line. - </td> - <td> - 1 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - End Line Num - </td> - <td> - endLineNum - </td> - <td> - Number - </td> - <td> - Reads the file upto the specified line. - </td> - <td> - Last line of file. - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Specific Line number - </td> - <td> - lineNum - </td> - <td> - Number - </td> - <td> - Specific line to read. - </td> - <td> - When the reading mode is <code>SINGLE_LINE</code>. - </td> - <td> - No - </td> - </tr> - <tr> - <td> - MIMEType - </td> - <td> - contentType - </td> - <td> - String - </td> - <td> - Content type of the message set to the payload by this operation - </td> - <td> - Determined by the file extension. - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Encoding - </td> - <td> - encoding - </td> - <td> - String - </td> - <td> - Encoding of the message set to the payload by this operation. - </td> - <td> - UTF-8 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Enable Streaming - </td> - <td> - enableStreaming - </td> - <td> - Boolean - </td> - <td> - Specifies whether or not streaming is used to read the file without any interpretation of the content. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Enable Locking - </td> - <td> - enableLock - </td> - <td> - Boolean - </td> - <td> - Specifies whether or not to lock the file. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - This is line one. - This is line two. - This is line three. - This is line four. - This is line five. - This is line six. - This is line seven. - This is line eight. - This lis line nine. - This is line ten. - ``` - - **Full Log** - - ```bash - [2020-10-06 06:01:44,083] INFO {LogMediator} - {api:TestAPI} To: /filetest, MessageID: urn:uuid:7ab557c0-f9cb-4cf6-9c7b-f06a4640522a, Direction: request, message = After Read, FILE_LAST_MODIFIED_TIME = 10/06/2020 05:46:39, FILE_SIZE = 30, FILE_IS_DIR = false, FILE_NAME = test1.txt, FILE_PATH = /wso2/test, FILE_URL = file:///Users/hasitha/temp/file-connector-test/wso2/test/test1.txt, FILE_NAME_WITHOUT_EXTENSION = test1, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><text xmlns="http://ws.apache.org/commons/ns/payload">This is test1.txt file content</text></soapenv:Body></soapenv:Envelope> - ``` - - **Error** - - ```xml - <readResult> - <success>false</success> - <code>700102</code> - <detail>File or folder not found: file:///Users/hasitha/temp/file-connector-test/wso2/test/abcd.txt</detail> - </readResult> - ``` - -??? note "rename" - Rename a file in a specified path. The new name cannot contain path separators. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. 
- </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Path - </td> - <td> - path - </td> - <td> - String - </td> - <td> - The path to the file that should be renamed. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Rename To - </td> - <td> - renameTo - </td> - <td> - String - </td> - <td> - The file's new name. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Overwrite Existing Files - </td> - <td> - overwrite - </td> - <td> - Boolean - </td> - <td> - Specifies whether or not to overwrite the file in the target directory (if the same file exists). - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - <renameFileResult> - <success>true</success> - </renameFileResult> - ``` - - **Error** - - ```xml - <renameFileResult> - <success>false</success> - <code>700103</code> - <detail>Destination file already exists and overwrite not allowed</detail> - </renameFileResult> - ``` - -??? note "delete" - Deletes the files matching in a given directory. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - File/Directory Path - </td> - <td> - path - </td> - <td> - String - </td> - <td> - The path to the file/folder that should be deleted. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Pattern to Match Files - </td> - <td> - matchingPattern - </td> - <td> - String - </td> - <td> - The pattern that should be matched when listing files. This does not operate recursively on sub folders. - </td> - <td> - All files. - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - For a single file: - - ```xml - <deleteFileResult> - <success>true</success> - </deleteFileResult> - ``` - - For a folder: - - ```xml - <deleteFileResult> - <success>true</success> - <numOfDeletedFiles>5</numOfDeletedFiles> - </deleteFileResult> - ``` - -??? note "unzip" - Unzip a specified file to a given location. If a folder with the same name exists, it is overwritten. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Zip File Path - </td> - <td> - sourceFilePath - </td> - <td> - String - </td> - <td> - The path to the ZIP file that should be unzipped. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Target Directory - </td> - <td> - targetDirectory - </td> - <td> - String - </td> - <td> - The location (folder) to which the ZIP file should be unzipped. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - </table> - - > NOTE: The latest File connector (v4.0.7 onwards) supports decompressing the .gz files. 
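-
-    A minimal invocation might look like the following hedged sketch; the connection name and paths are placeholders, and the `file.unzip` form assumes the same `file.<operation>` convention as the `file.read` example shown earlier.
-
-    ```xml
-    <!-- Hedged sketch: extract an archive into a target folder.
-         The connection name and paths are illustrative. -->
-    <file.unzip configKey="localFileConnection">
-        <sourceFilePath>/wso2/test/archive.zip</sourceFilePath>
-        <targetDirectory>/wso2/test/extracted</targetDirectory>
-    </file.unzip>
-    ```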
- - **Response** - - ```xml - <unzipFileResult> - <success>true</success> - <zipFileContent> - <test1.txt>extracted</test1.txt> - <test2.txt>extracted</test2.txt> - <hasitha--a1.txt>extracted</hasitha--a1.txt> - <hasitha--a2.txt>extracted</hasitha--a2.txt> - <hasitha--b--b2.txt>extracted</hasitha--b--b2.txt> - <hasitha--b--b1.txt>extracted</hasitha--b--b1.txt> - <hasitha--b--c--test1.txt>extracted</hasitha--b--c--test1.txt> - <hasitha--b--c--c1.txt>extracted</hasitha--b--c--c1.txt> - </zipFileContent> - </unzipFileResult> - ``` - - **On Error** - - ```xml - <unzipFileResult> - <success>false</success> - <code>700102</code> - <detail>File not found: file:///Users/hasitha/temp/file-connector-test/wso2/archievenew.zip</detail> - </unzipFileResult> - ``` - - JSON equivalent: - - ```json - { - "unzipFileResult": { - "success": false, - "code": 700102, - "detail": "File not found: file:///Users/hasitha/temp/file-connector-test/wso2/archievenew.zip" - } - } - ``` - -??? note "splitFile" - Splits a file into multiple smaller files. - - - If the folder does not exist, it will be created. - - If the folder has files, they will be overwritten. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Path to the file to split - </td> - <td> - sourceFilePath - </td> - <td> - String - </td> - <td> - The path to the file that should be split. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Target Directory - </td> - <td> - targetDirectory - </td> - <td> - String - </td> - <td> - The path to the target folder where the new files should be created. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Split Mode - </td> - <td> - splitMode - </td> - <td> - String - </td> - <td> - The split mode to use. The available options are as follows:</br> - <ul> - <li>ChunkSize</li> - <li>Linecount</li> - <li>XPATH Expression</li> - </ul> - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Chunk Size - </td> - <td> - chunkSize - </td> - <td> - Number - </td> - <td> - If the <b>Split Mode</b> is 'Chunk Size', specify the chunk size (in bytes) into which the file should be split. - </td> - <td> - - - </td> - <td> - - - </td> - </tr> - <tr> - <td> - Line Count - </td> - <td> - lineCount - </td> - <td> - Number - </td> - <td> - If the <b>Split Mode</b> is 'Line Count', specify the number of lines by which the original file should be split. - </td> - <td> - - - </td> - <td> - - - </td> - </tr> - <tr> - <td> - XPATH Expression - </td> - <td> - xpathExpression - </td> - <td> - Number - </td> - <td> - If the <b>Split Mode</b> is 'XPATH Expression', specify the expression by which the file should be split. Only applies when splitting XML files. - </td> - <td> - Chunk Size - </td> - <td> - Yes - </td> - </tr> - </table> - - **Response** - - ```xml - <splitFileResult> - <success>true</success> - <numberOfSplits>6</numberOfSplits> - </splitFileResult> - ``` - - **On Error** - - ```xml - <splitFileResult> - <success>false</success> - <code>700107</code> - <detail>Parameter 'xpathExpression' is not provided</detail> - </splitFileResult> - ``` - -??? note "listFiles" - Lists all the files (that match the specified pattern) in the directory path. 
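-
-    Before the parameter details, a minimal invocation is sketched below; the connection name and path are placeholders, the `file.listFiles` form assumes the same `file.<operation>` convention as the `file.read` example shown earlier, and the child elements are named after the parameters in the table that follows.
-
-    ```xml
-    <!-- Hedged sketch: list .txt files recursively, largest first.
-         The connection name and path are illustrative. -->
-    <file.listFiles configKey="localFileConnection">
-        <directoryPath>/wso2/test</directoryPath>
-        <matchingPattern>.*\.txt</matchingPattern>
-        <recursive>true</recursive>
-        <sortingAttribute>Size</sortingAttribute>
-        <sortingOrder>Descending</sortingOrder>
-    </file.listFiles>
-    ```
-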
- - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Directory Path - </td> - <td> - directoryPath - </td> - <td> - String - </td> - <td> - The path to the directory from which files should be listed. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Matching Pattern - </td> - <td> - matchingPattern - </td> - <td> - String - </td> - <td> - The file pattern that should be used to select files for listing. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - List Files in Sub Directories - </td> - <td> - recursive - </td> - <td> - Boolean - </td> - <td> - List files from sub directories recursively. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - File Sort Attribute - </td> - <td> - sortingAttribute - </td> - <td> - String - </td> - <td> - Files will get sorted and listed according to one of the follow: Name, Size, LastModifiedTime. - </td> - <td> - Name - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Sort Order - </td> - <td> - sortingOrder - </td> - <td> - String - </td> - <td> - The sorting order applicable to the <b>File Sort</b> attribute.</br> - <b>Possible Values</b>: Ascending, Descending. - </td> - <td> - Ascending - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Response Format - </td> - <td> - responseFormat - </td> - <td> - String - </td> - <td> - Format to list the files in response. - <b>Possible Values</b>: Hierarchical, Flat. - </td> - <td> - Hierarchical - </td> - <td> - No - </td> - </tr> - </table> - - **Response** - - ```xml - <listFilesResult> - <success>true</success> - <directory name="test"> - <file>.DS_Store</file> - <directory name="aa"/> - <file>abc.txt</file> - <directory name="hasitha"> - <file>a1.txt</file> - <file>a2.txt</file> - </directory> - <file>input.xml</file> - <file>output.csv</file> - </directory> - </listFilesResult> - ``` - -??? note "exploreZipFile" - Explore the contents of a ZIP file in a specific location. - - <table> - <tr> - <th>Parameter Name</th> - <th>Element</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - File Connection - </td> - <td> - name - </td> - <td> - String - </td> - <td> - The name of the file connection configuration to use. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Zip File Path - </td> - <td> - zipFilePath - </td> - <td> - String - </td> - <td> - The path to the ZIP file that should be explored. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - </table> - - **Response** - - ```xml - <exploreZipFileResult> - <success>true</success> - <zipFileContent> - <file>test1.txt</file> - <file>test2.txt</file> - <file>hasitha/a1.txt</file> - <file>hasitha/a2.txt</file> - <file>hasitha/b/b2.txt</file> - <file>hasitha/b/b1.txt</file> - <file>hasitha/b/c/test1.txt</file> - <file>hasitha/b/c/c1.txt</file> - </zipFileContent> - </exploreZipFileResult> - ``` - - **On Error** - - ```xml - <exploreZipFileResult> - <success>false</success> - <code>700102</code> - <detail>Zip file not found at path file:///Users/hasitha/temp/file-connector-test/wso2/test/archieve.zip</detail> - </exploreZipFileResult> - ``` - -??? 
-??? note "mergeFiles"
-    Merges the contents of multiple files in a folder into a single file.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Element</th>
-            <th>Type</th>
-            <th>Description</th>
-            <th>Default Value</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>File Connection</td>
-            <td>name</td>
-            <td>String</td>
-            <td>The name of the file connection configuration to use.</td>
-            <td>-</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Source Directory Path</td>
-            <td>sourceDirectoryPath</td>
-            <td>String</td>
-            <td>The path to the source folder containing the files that should be merged.</td>
-            <td>-</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Target File Path</td>
-            <td>targetFilePath</td>
-            <td>String</td>
-            <td>The path to the target file that will hold the merged content.</td>
-            <td>-</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>File Pattern</td>
-            <td>filePattern</td>
-            <td>String</td>
-            <td>The pattern that should be used for selecting the source files that should be merged.</br>
-                <b>Example</b>: <code>[a-zA-Z][a-zA-Z]*.(txt|xml|jar)</code>.</td>
-            <td>-</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Write Mode</td>
-            <td>writeMode</td>
-            <td>String</td>
-            <td>If the file already exists, this parameter determines whether the existing file should be overwritten or appended to during the merge.</br>
-                Possible values are Overwrite or Append.</td>
-            <td>Overwrite</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Response**
-
-    ```xml
-    <mergeFilesResult>
-        <success>true</success>
-        <detail>
-            <numberOfMergedFiles>5</numberOfMergedFiles>
-            <totalWrittenBytes>992</totalWrittenBytes>
-        </detail>
-    </mergeFilesResult>
-    ```
-
-    **On Error**
-
-    ```xml
-    <mergeFilesResult>
-        <success>false</success>
-        <code>700102</code>
-        <detail>Directory not found: file:///Users/hasitha/temp/file-connector-test/wso2/toMergesnsdfb</detail>
-    </mergeFilesResult>
-    ```
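-
-    **Sample configuration**
-
-    A minimal sketch, under the same assumptions as the earlier sketches (connection referenced through `configKey`; the paths are illustrative, and the pattern is the example from the table above):
-
-    ```xml
-    <!-- Merge matching files in a folder into a single output file (illustrative paths) -->
-    <file.mergeFiles configKey="MyFileConnection">
-        <sourceDirectoryPath>/in/chunks</sourceDirectoryPath>
-        <targetFilePath>/out/merged.txt</targetFilePath>
-        <filePattern>[a-zA-Z][a-zA-Z]*.(txt|xml|jar)</filePattern>
-        <writeMode>Overwrite</writeMode>
-    </file.mergeFiles>
-    ```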
-??? note "write"
-    Writes content to a specified file.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Element</th>
-            <th>Type</th>
-            <th>Description</th>
-            <th>Default Value</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>File Connection</td>
-            <td>name</td>
-            <td>String</td>
-            <td>The name of the file connection configuration to use.</td>
-            <td>-</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>File Path</td>
-            <td>filePath</td>
-            <td>String</td>
-            <td>The path to the file that should be written (include the file name and extension).</td>
-            <td>-</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Content/Expression</td>
-            <td>contentOrExpression</td>
-            <td>String</td>
-            <td>Static content or an expression that evaluates to the content.</td>
-            <td>The content will be fetched from the body ("$Body") of the incoming message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>MIME Type</td>
-            <td>mimeType</td>
-            <td>String</td>
-            <td>The MIME type that will be applied in order to format the outgoing message.</br></br>
-                Possible values: "Automatic", "text/plain", "application/xml", "application/binary", "application/json", "text/xml".</br></br>
-                If you don't want to change the MIME type of the message that has been mediated before this operation, use the default "Automatic" value. If the value is set to "application/binary", a binary file will get created with base-64 decoded content.</td>
-            <td>Automatic</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Write Mode</td>
-            <td>writeMode</td>
-            <td>String</td>
-            <td>If the file already exists, this parameter determines whether the existing file should be overwritten or appended to. You can also specify that a new file should be created.</br>
-                Possible values: Overwrite, Append, Create New.</td>
-            <td>Overwrite</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Append New Line</td>
-            <td>appendNewLine</td>
-            <td>Boolean</td>
-            <td>Specifies whether a new line should be added to the end of the file after the content is written.</td>
-            <td>false</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>Encoding</td>
-            <td>encoding</td>
-            <td>String</td>
-            <td>Applied only when some static content or evaluated content is written.</br>
-                <b>Possible Values</b>: US-ASCII, UTF-8, or UTF-16.</td>
-            <td>UTF-8</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Compress</td>
-            <td>compress</td>
-            <td>Boolean</td>
-            <td>Specifies whether the content should be compressed after the content is written. Only available when the <b>Write Mode</b> is 'Create New' or 'Overwrite'.</td>
-            <td>false</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Enable Streaming</td>
-            <td>enableStreaming</td>
-            <td>Boolean</td>
-            <td>Write the file using the stream set to the message context.</td>
-            <td>false</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Enable Locking</td>
-            <td>enableLock</td>
-            <td>Boolean</td>
-            <td>Specifies whether or not to lock the file during the file write.</br></br>
-                <b>Note</b>: If the connector is processing a file named 'xyz.xml', a file called 'xyz.xml.lock' is created to represent the lock (with the CREATE_NEW mode). Once the file connector operation is completed, the file is deleted. When you create the lock, you can set an expiry time as well. If the connector operation fails to create the file because it already exists, that means that another process is working on it. The connector operation will then fail and the application will have to retry. Information such as the server name and PID is written to the lock file, which may be important for debugging.</td>
-            <td>false</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Add Result To</td>
-            <td>includeResultTo</td>
-            <td>String</td>
-            <td>Specify where to add the result after writing the file.</br>
-                <ul>
-                    <li><b>Message Body</b>: The result will replace the current payload in the message body.</li>
-                    <li><b>Message Property</b>: The result will be added to a message property, so the payload that was in the message body before applying the <b>file write</b> operation will remain intact.</li>
-                </ul>
-            </td>
-            <td>Message Body</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>Property Name</td>
-            <td>resultPropertyName</td>
-            <td>String</td>
-            <td>If the <b>Add Result To</b> attribute is set to "Message Property", specify a property name. The result of the file write operation will be added as a default scope property by the specified name. This property can be accessed later in the message flow.</td>
-            <td>-</td>
-            <td>Yes (if <b>Add Result To</b> is "Message Property")</td>
-        </tr>
-        <tr>
-            <td>Update Last Modified Timestamp</td>
-            <td>updateLastModified</td>
-            <td>Boolean</td>
-            <td>Specify whether to update the last modified timestamp of the file. This is available from version 4.0.4.</td>
-            <td>true</td>
-            <td>No</td>
-        </tr>
-    </table>
-
-    **Response**
-
-    ```xml
-    <writeResult>
-        <success>true</success>
-        <writtenBytes>16</writtenBytes>
-    </writeResult>
-    ```
-
-    **Error**
-
-    ```xml
-    <writeResult>
-        <success>false</success>
-        <code>700108</code>
-        <detail>Target file already exists. Path = file:///Users/hasitha/temp/file-connector-test/copy/kandy/hasitha.txt</detail>
-    </writeResult>
-    ```
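-
-    **Sample configuration**
-
-    A minimal sketch, under the same assumptions as the earlier sketches (connection referenced through `configKey`; the path and content are illustrative):
-
-    ```xml
-    <!-- Write static text to a file, replacing it if it already exists (illustrative values) -->
-    <file.write configKey="MyFileConnection">
-        <filePath>/out/greeting.txt</filePath>
-        <contentOrExpression>Hello from the File connector</contentOrExpression>
-        <mimeType>text/plain</mimeType>
-        <writeMode>Overwrite</writeMode>
-        <appendNewLine>true</appendNewLine>
-    </file.write>
-    ```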
diff --git a/en/docs/reference/connectors/file-connector/file-connector-example.md b/en/docs/reference/connectors/file-connector/file-connector-example.md
deleted file mode 100644
index 2de1144e62..0000000000
--- a/en/docs/reference/connectors/file-connector/file-connector-example.md
+++ /dev/null
@@ -1,253 +0,0 @@
-# File Connector Example
-
-The File Connector can be used to perform operations in the local file system as well as on a remote server, such as an FTP or SFTP server.
-
-## What you'll build
-
-This example describes how to use the File Connector to write messages to local files and then read the files. Similarly, the same example can easily be configured to communicate with a remote file system (e.g., an FTP server). The example also uses some other WSO2 mediators to manipulate messages.
-
-<!--
-Following diagram shows the overall solution.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/fileconnector-01.png" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
--->
-
-An API is exposed to accept XML messages (employee information).
-When a message is received, it is converted to a CSV message and then stored to a property.
-A check is done to see if the CSV file (with the same information) exists in the file system. If it does not exist, the connector creates a CSV file with CSV headers included. Then, the connector appends the new CSV entries in the current message to the CSV file.
-The connector then reads the same CSV file, converts the information back to XML, and responds to the client.
-
-<!--
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
--->
-
-## Setting up the environment
-
-Create a folder in your local file system with read and write access. This will be your working directory. In this example, it is `/Users/hasitha/temp/file-connector-test/dataCollection`.
-
-!!! Note
-    If you set up an FTP server, SFTP server, or a Samba server, do the required configurations and select a working directory. Save the host, port, and security-related information for future use.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-## Creating the Integration Logic
-
-1. Create a new integration project. Be sure to enable a Connector Exporter.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon1.png" title="create project" width="500" alt="create project"/>
-
-2. Create an API named `TestAPI` with the `/fileTest` context. This API will accept employee information.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon2.png" title="create API" width="500" alt="create API"/>
-
-3. In the default resource of the API, enable POST requests.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon3.png" title="select post method" width="500" alt="select post method"/>
-
-4. 
Add the Log mediator to the design canvas and configure a custom log that indicates when the API receives a message. -5. Add the DataMapper mediator and configure it to transform the incoming XML message to a CSV message. -6. Double-click the Datamapper mediator and add a new transform configuration called 'xmlToCsv'. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon4.png" title="new datamapper config" width="800" alt="new datamapper config"/> - -7. Save the following content as an XML file. This will be the data input file. - - ```xml - <test> - <information> - <people> - <person> - <name>Hasitha</name> - <age>34</age> - <company>wso2</company> - </person> - <person> - <name>Johan</name> - <age>32</age> - <company>IBM</company> - </person> - </people> - </information> - </test> - ``` - -8. Load the input file into the Datamapper config view. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon5.png" title="load input" width="800" alt="load input"/> - -9. Save the following content as a CSV file. This will be the data output file. - - ``` - Name,Age,Company - Hasitha,34,wso2 - Johan,32,IBM - ``` - -10. Load the output CSV file into the datamapper config view. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon6.png" title="load output csv" width="800" alt="load output csv"/> - -11. Configure the mapping as shown below. Each element in the input should be mapped to the respective element in the output. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon7.png" title="data mapping" width="800" alt="data mapping"/> - -12. Specify the input as XML and output as CSV in the datamapper as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon8.png" title="datamapper input output config" width="800" alt="datamapper input output config"/> - -13. Add the Enrich mediator and configure it to save the output generated by the datamapper to a property named `CONTENT`. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon9.png" title="enrich - save payload" width="800" alt="enrich - save payload"/> - -14. Now, let's use the File connector to check if the file containing employee information already exists in the file system. - - 1. Add the <b>checkExist</b> operation of the File connector to the canvas. - 2. Create a new file connection pointing to the working directory we already set up. Keep this as the File connection for the operation. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon10.png" title="working directory" width="800" alt="working directory"/> - - 3. Configure the file path as `/dataCollection/employees/employees.csv`. This file will store the employee information. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon11.png" title="checkExist operation" width="800" alt="checkExist operation"/> - -15. Add the Filter mediator to branch out the logic based on the result from the File connector’s `checkExist` operation. - - !!! Note - If the file does not exist, the File connector’s <b>write</b> operation (which we configure later) will create the file. - - 1. Click the Filter mediator and define the filter condition as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/filecon12.png" title="filter mediator" width="800" alt="filter mediator"/> - - 2. Inside the “else” block, add the File Connector's <b>write</b> operation and configure it to write the static content of CSV file headers: “Name,Age,Company”. - - !!! 
Note
-            Be sure to append a new line at the end of the file. The <b>Write Mode</b> needs to be `Create New`.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/filecon13.png" title="create new" width="800" alt="create new"/>
-
-16. After the Filter mediator, use the Enrich mediator again to put back the saved payload into the message payload.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon14.png" title="enrich - put back payload" width="800" alt="enrich - put back payload"/>
-
-17. Add the File connector’s <b>write</b> operation again and configure it to append the CSV message to the existing file. The <b>Write Mode</b> needs to be 'Append'.
-
-    !!! Note
-        Since we need the newest message at the top, always append at line number 2.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon15.png" title="append to file" width="800" alt="append to file"/>
-
-18. Add the File connector’s <b>read</b> operation and configure it to read the same CSV file.
-
-    !!! Note
-        The file reading will start from line number 2. The content is read as a text message.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon16.png" title="read csv file" width="800" alt="read csv file"/>
-
-19. Add the Datamapper mediator again and configure it to convert the CSV message (after reading) back to XML.
-
-    1. Double-click the data mapper and add a new configuration called 'csvToXml'.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/filecon17.png" title="output datamapper config" width="800" alt="output datamapper config"/>
-
-    2. This time, the mapping should be from CSV to XML.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/filecon18.png" title="output datamapper dialog" width="800" alt="output datamapper dialog"/>
-
-20. Finally, use the <b>Respond</b> mediator to send the transformed message to the API caller.
-21. Now, let's configure a fault sequence to generate an error message when an error occurs in the message flow.
-
-    1. Create a fault sequence with a <b>Log</b> mediator and <b>Respond</b> mediator.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/filecon19.png" title="fault sequence" width="800" alt="fault sequence"/>
-
-    2. Configure the Log mediator to generate a custom error.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/filecon20.png" title="error log" width="800" alt="error log"/>
-
-    3. Add the fault sequence to the API resource as its fault sequence. A simplified sketch of the file-handling part of this logic is shown below.
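-
-The snippet below is a simplified, hand-written sketch of the conditional file-writing logic configured above, shown in Synapse XML for reference only. It assumes a file connection named `MyFileConnection` and that the <b>checkExist</b> operation places a boolean result in the message body; the result element name used in the filter is illustrative, and this is not the complete configuration generated by Integration Studio.
-
-```xml
-<!-- Check whether the employee CSV file already exists -->
-<file.checkExist configKey="MyFileConnection">
-    <path>/dataCollection/employees/employees.csv</path>
-</file.checkExist>
-<!-- The result element name below is illustrative -->
-<filter xpath="//fileExists = 'true'">
-    <then/>
-    <else>
-        <!-- Create the file with the CSV headers first -->
-        <file.write configKey="MyFileConnection">
-            <filePath>/dataCollection/employees/employees.csv</filePath>
-            <contentOrExpression>Name,Age,Company</contentOrExpression>
-            <writeMode>Create New</writeMode>
-            <appendNewLine>true</appendNewLine>
-        </file.write>
-    </else>
-</filter>
-```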
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-<!--
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/fileconnector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
--->
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-1. Create a file called `data.xml` with the following payload.
-
-    !!! Note
-        When you create this file on the Windows operating system, use the Windows path format for any file paths, for example, `C:\\Users\Name\Desktop\file-connector-test`.
-
-    ```xml
-    <test>
-        <information>
-            <people>
-                <person>
-                    <name>Hasitha</name>
-                    <age>34</age>
-                    <company>wso2</company>
-                </person>
-                <person>
-                    <name>Johan</name>
-                    <age>32</age>
-                    <company>IBM</company>
-                </person>
-                <person>
-                    <name>Bob</name>
-                    <age>30</age>
-                    <company>Oracle</company>
-                </person>
-                <person>
-                    <name>Alice</name>
-                    <age>28</age>
-                    <company>Microsoft</company>
-                </person>
-                <person>
-                    <name>Anne</name>
-                    <age>30</age>
-                    <company>Google</company>
-                </person>
-            </people>
-        </information>
-    </test>
-    ```
-
-2. Invoke the API as shown below using the curl command.
-
-    !!! Info
-        The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-
-    ```bash
-    curl -H "Content-Type: application/xml" --request POST --data @data.xml http://localhost:8290/fileTest
-    ```
-
-3. Check the file system to verify that the CSV file has been created.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon21.png" title="file creation result" width="800" alt="file creation result"/>
-
-4. If you invoke the API again with a different set of employees, the new employees will get appended to the same file. The response you receive will include all the employees that were added from both messages.
-
-In this example, the File connector was used to create a file, write to a file, and read a file. By combining these capabilities with the other powerful message manipulation features of WSO2, it is possible to define a working scenario in minutes. The File connector has many more functionalities. Refer to the [File Connector reference guide]({{base_path}}/reference/connectors/file-connector/file-connector-config/) for more information.
-
-## What's Next
-
-* To customize this example for your own scenario, see the [File Connector Configuration]({{base_path}}/reference/connectors/file-connector/file-connector-config/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/file-connector/file-connector-overview.md b/en/docs/reference/connectors/file-connector/file-connector-overview.md
deleted file mode 100644
index d1577bda19..0000000000
--- a/en/docs/reference/connectors/file-connector/file-connector-overview.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# File Connector Overview
-
-The File Connector allows you to connect to different file systems and perform various operations. The File Connector uses the [Apache Commons VFS](https://commons.apache.org/proper/commons-vfs/) I/O functionalities to execute operations.
-
-The File Connector introduces independent operations related to the file system and allows you to easily manipulate files based on your requirements. The file streaming functionality, built on Apache Commons I/O, lets you copy large files and reduces the file transfer time between two file systems, resulting in a significant performance improvement in file operations.
-
-To see the available File connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "File".
- -<img src="{{base_path}}/assets/img/integrate/connectors/file-connector-store.png" title="File Connector Store" width="200" alt="File Connector Store"/> - -## Compatibility - -<table> - <tr> - <th> - Connector version - </th> - <th> - Supported product versions - </th> - </tr> - <tr> - <td> - 4.x (latest) - </td> - <td> - APIM 4.0.0, EI 6.4.0, EI 6.5.0, EI 6.6.0, EI 7.0.x, EI 7.1.0 - </td> - </tr> - <tr> - <td> - 3.x - </td> - <td> - EI 6.4.0, EI 6.5.0, EI 6.6.0, EI 7.0.x, EI 7.1.0 - </td> - </tr> -</table> - -For older versions, see the details in the connector store. - -## File Connector documentation (latest - 4.x version) - -* **[File Connector Example]({{base_path}}/reference/connectors/file-connector/file-connector-example/)**: This example explains how to use File Connector to create a file in the local file system and read the particular file. - -* **[File Connector Reference]({{base_path}}/reference/connectors/file-connector/file-connector-config/)**: This documentation provides a reference guide for the File Connector. - -For older versions, see the details in the relevant links. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, please create a pull request in the following repository. - -* [File Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-file) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/gmail-connector/configuring-gmail-api.md b/en/docs/reference/connectors/gmail-connector/configuring-gmail-api.md deleted file mode 100644 index f8878003d1..0000000000 --- a/en/docs/reference/connectors/gmail-connector/configuring-gmail-api.md +++ /dev/null @@ -1,50 +0,0 @@ -## Creating the Client ID and Client Secret - -1. Navigate to [API Credentials Page](https://console.developers.google.com/projectselector/apis/credentials) and sign in with your Google account. - -2. Click on **Select a Project** and click **NEW PROJECT**, to create a project. - <img src="{{base_path}}/assets/img/integrate/connectors/create-project.png" title="Creating a new Project" width="800" alt="Creating a new Project" /> - -3. Enter `GmailConnector` as the name of the project and click **Create**. - -4. Click **Configure consent screen** in the next screen. - <img src="{{base_path}}/assets/img/integrate/connectors/consent-screen.jpg" title="Consent Screen" width="800" alt="Consent Screen" /> - -5. Provide the Application Name as `GmailConnector` in the Consent Screen. - <img src="{{base_path}}/assets/img/integrate/connectors/consent-screen2.jpg" title="Consent Screen" width="800" alt="Consent Screen" /> - -6. Click Create credentials and click OAuth client ID. - <img src="{{base_path}}/assets/img/integrate/connectors/create-credentials.png" title="Create Credentials" width="800" alt="Create Credentials" /> - -7. Enter the following details in the Create OAuth client ID screen and click Create. - - | Type | Name | - | ------------------ | -------------------------------------------------| - | Application type | Web Application | - | Name | GmailConnector | - | Authorized redirect URIs | https://developers.google.com/oauthplayground | - - -8. A Client ID and a Client Secret are provided. Keep them saved. - <img src="{{base_path}}/assets/img/integrate/connectors/credentials.png" title="Credentials" width="800" alt="Credentials" /> - -9. 
Click **Library** on the side menu, search for **Gmail API**, and click on it.
-
-10. Click **Enable** to enable the Gmail API.
-
-
-## Obtaining Access Token and Refresh Token
-1. Navigate to [OAuth 2.0 Playground](https://developers.google.com/oauthplayground/) and click the **OAuth 2.0 Configuration** button in the top right corner.
-
-2. Select **Use your own OAuth credentials**, provide the Client ID and Client Secret values obtained above, and click **Close**.
-    <img src="{{base_path}}/assets/img/integrate/connectors/oath-configuration.png" title="Obtaining Oauth-configuration" width="800" alt="Obtaining Oauth-configuration" />
-
-3. Under Step 1, select `Gmail API v1` from the list of APIs, and select all the scopes except the [gmail.metadata](https://www.googleapis.com/auth/gmail.metadata) scope.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/select-scopes.png" title="Selecting Scopes" width="800" alt="Selecting Scopes" />
-
-4. Click the **Authorize APIs** button, select your Gmail account when prompted, and allow the scopes.
-    <img src="{{base_path}}/assets/img/integrate/connectors/grant-permission.png" title="Grant Permission" width="800" alt="Grant Permission" />
-
-5. Under Step 2, click **Exchange authorization code for tokens** to generate and display the Access Token and Refresh Token. Now we are done with configuring the Gmail API.
-    <img src="{{base_path}}/assets/img/integrate/connectors/refreshtoken.png" title="Getting Tokens" width="800" alt="Getting Tokens" />
diff --git a/en/docs/reference/connectors/gmail-connector/gmail-connector-config.md b/en/docs/reference/connectors/gmail-connector/gmail-connector-config.md
deleted file mode 100644
index 520f7913cc..0000000000
--- a/en/docs/reference/connectors/gmail-connector/gmail-connector-config.md
+++ /dev/null
@@ -1,1456 +0,0 @@
-# Gmail Connector Reference
-
-The following operations allow you to work with the Gmail Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-To use the Gmail connector, add the `<gmail.init>` element in your configuration before carrying out any other Gmail operations.
-
-??? note "gmail.init"
-    The Gmail API uses OAuth2 authentication with Tokens. For more information on authentication, go to [Authorizing Your App with Gmail](https://developers.google.com/gmail/api/auth/about-auth).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>accessToken</td>
-            <td>Value of the Access Token to access the Gmail REST API.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>refreshToken</td>
-            <td>Value of the Refresh Token, which generates a new Access Token when the previous one expires.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>apiUrl</td>
-            <td>The API URL of Gmail (https://www.googleapis.com/gmail).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>userId</td>
-            <td>User mail ID.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>clientSecret</td>
-            <td>Value of the Client Secret you obtained when you registered your application with the Gmail API.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>clientId</td>
-            <td>Value of the Client ID you obtained when you registered your application with the Gmail API.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>registryPath</td>
-            <td>Registry Path of the connector where the Access Token will be stored (if not provided, the connector stores the Access Token in the connectors/Gmail/accessToken Registry Path).</td>
-            <td>No</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.init>
-        <userId>{$ctx:userId}</userId>
-        <refreshToken>{$ctx:refreshToken}</refreshToken>
-        <clientSecret>{$ctx:clientSecret}</clientSecret>
-        <clientId>{$ctx:clientId}</clientId>
-        <registryPath>{$ctx:registryPath}</registryPath>
-        <accessToken>{$ctx:accessToken}</accessToken>
-        <apiUrl>{$ctx:apiUrl}</apiUrl>
-    </gmail.init>
-    ```
-
----
-
-### Drafts
-
-??? note "listDrafts"
-    The listDrafts operation lists all drafts in Gmail. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/drafts/list) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>maxResults</td>
-            <td>Maximum number of messages to return.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>pageToken</td>
-            <td>Page token to retrieve a specific page of results in the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.listDrafts>
-        <maxResults>{$ctx:maxResults}</maxResults>
-        <pageToken>{$ctx:pageToken}</pageToken>
-    </gmail.listDrafts>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "maxResults":"10",
-        "pageToken":"09876536614133772469"
-    }
-    ```
-
-??? note "readDraft"
-    The readDraft operation retrieves a particular draft email. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/drafts/get) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the draft email to be retrieved.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The <a href="https://developers.google.com/gmail/api/v1/reference/users/drafts/get#parameters">format</a> to return the draft in.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.readDraft>
-        <id>{$ctx:id}</id>
-        <format>{$ctx:format}</format>
-    </gmail.readDraft>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id": "1492984134337920839",
-        "format":"raw"
-    }
-    ```
-
-??? note "deleteDraft"
-    The deleteDraft operation deletes an existing draft. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/drafts/delete) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the draft email to be deleted.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.deleteDraft>
-        <id>{$ctx:id}</id>
-    </gmail.deleteDraft>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"1491513685150755887"
-    }
-    ```
-
-??? note "createDraft"
-    The createDraft operation creates a new draft. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/drafts/create) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>to</td>
-            <td>The email address of the recipient of this email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>subject</td>
-            <td>Subject of the email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>from</td>
-            <td>The email address of the sender of the email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>cc</td>
-            <td>The email addresses of recipients who will receive a copy of this email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>bcc</td>
-            <td>The email addresses of recipients who will privately receive a copy of this email (their email addresses will be hidden from each other).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>threadId</td>
-            <td>ID of the thread.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>ID of the email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>messageBody</td>
-            <td>Content of the email.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>contentType</td>
-            <td>If the message body is in HTML format or you need to send rich text, you must give the parameter value "text/html; charset=UTF-8". Otherwise, it takes the default value, text/plain.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.createDraft>
-        <to>{$ctx:to}</to>
-        <subject>{$ctx:subject}</subject>
-        <from>{$ctx:from}</from>
-        <cc>{$ctx:cc}</cc>
-        <bcc>{$ctx:bcc}</bcc>
-        <id>{$ctx:id}</id>
-        <threadId>{$ctx:threadId}</threadId>
-        <messageBody>{$ctx:messageBody}</messageBody>
-        <contentType>{$ctx:contentType}</contentType>
-    </gmail.createDraft>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "to":"tharis63@hotmail.com",
-        "from":"tharis63@gmail.com",
-        "subject":"test",
-        "messageBody":"Hi hariprasath",
-        "cc":"tharis63@outlook.com",
-        "bcc":"tharis63@yahoo.com",
-        "id":"154b8c77e551c509",
-        "threadId":"154b8c77e551c509",
-        "contentType":"text/html; charset=UTF-8"
-    }
-    ```
-
-### Labels
-
-??? note "listLabels"
-    The listLabels operation lists all existing labels. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/list) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.listLabels/>
-    ```
-
-
-??? note "readLabel"
-    The readLabel operation gets a label's details. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/get) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the label whose details you want to retrieve.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.readLabel>
-        <id>{$ctx:id}</id>
-    </gmail.readLabel>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"Label_1"
-    }
-    ```
-
-??? note "deleteLabel"
-    The deleteLabel operation deletes a label. 
See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/delete) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>id</td> - <td>The ID of the label to be deleted.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.deleteLabel> - <id>{$ctx:id}</id> - </gmail.deleteLabel> - ``` - - **Sample request** - - ```json - { - "id":"57648478394803" - } - ``` - -??? note "createLabels" - The createLabels operation creates a new label. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/create) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>name</td> - <td>The display name of the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messageListVisibility</td> - <td>The visibility of messages with this label in the message list in the Gmail web interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>labelListVisibility</td> - <td>The visibility of the label in the label list in the Gmail web interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>type</td> - <td>The owner type for the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messagesTotal</td> - <td>The total number of messages with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messagesUnread</td> - <td>The number of unread messages with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>threadsTotal</td> - <td>The total number of threads with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>threadsUnread</td> - <td>The number of unread threads with the label.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.createLabels> - <name>{$ctx:name}</name> - <messageListVisibility>{$ctx:messageListVisibility}</messageListVisibility> - <labelListVisibility>{$ctx:labelListVisibility}</labelListVisibility> - <type>{$ctx:type}</type> - <messagesTotal>{$ctx:messagesTotal}</messagesTotal> - <messagesUnread>{$ctx:messagesUnread}</messagesUnread> - <threadsTotal>{$ctx:threadsTotal}</threadsTotal> - <threadsUnread>{$ctx:threadsUnread}</threadsUnread> - </gmail.createLabels> - ``` - - **Sample request** - - ```json - { - "name": "TestESB2", - "threadsUnread": 100, - "messageListVisibility": "show", - "threadsTotal": 100, - "type": "user", - "messagesTotal": 100, - "messagesUnread": 100, - "labelListVisibility": "labelShow" - } - ``` - -??? note "updateLabels" - The updateLabels operation updates an existing label. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/update) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>name</td> - <td>The display name of the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messageListVisibility</td> - <td>The visibility of messages with this label in the message list in the Gmail web interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>labelListVisibility</td> - <td>The visibility of the label in the label list in the Gmail web interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>type</td> - <td>The owner type for the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messagesTotal</td> - <td>The total number of messages with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>messagesUnread</td> - <td>The number of unread messages with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>threadsTotal</td> - <td>The total number of threads with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>threadsUnread</td> - <td>The number of unread threads with the label.</td> - <td>Yes</td> - </tr> - <tr> - <td>id</td> - <td>The ID of the label to update.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.updateLabels> - <name>{$ctx:name}</name> - <messageListVisibility>{$ctx:messageListVisibility}</messageListVisibility> - <labelListVisibility>{$ctx:labelListVisibility}</labelListVisibility> - <type>{$ctx:type}</type> - <messagesTotal>{$ctx:messagesTotal}</messagesTotal> - <messagesUnread>{$ctx:messagesUnread}</messagesUnread> - <threadsTotal>{$ctx:threadsTotal}</threadsTotal> - <threadsUnread>{$ctx:threadsUnread}</threadsUnread> - <id>{$ctx:id}</id> - </gmail.updateLabels> - ``` - - **Sample request** - - ```json - { - "id":"426572682792", - "name": "TestESB2", - "threadsUnread": 100, - "messageListVisibility": "show", - "threadsTotal": 100, - "type": "user", - "messagesTotal": 100, - "messagesUnread": 100, - "labelListVisibility": "labelShow" - } - ``` - -### Messages - -??? note "listAllMails" - The listAllMails operation lists all messages. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/labels/get) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>includeSpamTrash</td> - <td>Includes messages from SPAM and TRASH in the results (default: false).</td> - <td>Yes</td> - </tr> - <tr> - <td>labelIds</td> - <td>Only returns messages with labels that match all of the specified label IDs.</td> - <td>Yes</td> - </tr> - <tr> - <td>maxResults</td> - <td>Maximum number of messages to return.</td> - <td>Yes</td> - </tr> - <tr> - <td>pageToken</td> - <td>Page token to retrieve a specific page of results in the list.</td> - <td>Yes</td> - </tr> - <tr> - <td>q</td> - <td>Only returns messages matching the specified query. Supports the same query format as the Gmail search box.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.listAllMails> - <includeSpamTrash>{$ctx:includeSpamTrash}</includeSpamTrash> - <labelIds>{$ctx:labelIds}</labelIds> - <maxResults>{$ctx:maxResults}</maxResults> - <pageToken>{$ctx:pageToken}</pageToken> - <q>{$ctx:q}</q> - </gmail.listAllMails> - ``` - - **Sample request** - - ```json - { - "maxResults":"10", - "includeSpamTrash":"true", - "pageToken":"00965906535058580458", - "labelIds":"UNREAD", - "q":"Jira" - } - ``` - -??? note "readMail" - The readMail operation retrieves a message by its ID. 
See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/get) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the message to retrieve.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>format</td>
-            <td>The format to return.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>metadataHeaders</td>
-            <td>When the format is METADATA, only include the headers specified in this property.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.readMail>
-        <id>{$ctx:id}</id>
-        <format>{$ctx:format}</format>
-        <metadataHeaders>{$ctx:metadataHeaders}</metadataHeaders>
-    </gmail.readMail>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"14bbb686ba287e1d",
-        "format":"minimal"
-    }
-    ```
-
-??? note "sendMail"
-    The sendMail operation sends a plain message. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/send) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>to</td>
-            <td>The email address of the recipient of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>subject</td>
-            <td>Subject of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>from</td>
-            <td>The email address of the sender of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>cc</td>
-            <td>The email addresses of recipients who will receive a copy of this message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>bcc</td>
-            <td>The email addresses of recipients who will privately receive a copy of this message (their email addresses will be hidden).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>messageBody</td>
-            <td>The content of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>contentType</td>
-            <td>If the message body is in HTML format or you need to send rich text, you must give the parameter value "text/html; charset=UTF-8". Otherwise, it takes the default value, text/plain.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.sendMail>
-        <to>{$ctx:to}</to>
-        <subject>{$ctx:subject}</subject>
-        <from>{$ctx:from}</from>
-        <cc>{$ctx:cc}</cc>
-        <bcc>{$ctx:bcc}</bcc>
-        <messageBody>{$ctx:messageBody}</messageBody>
-        <contentType>{$ctx:contentType}</contentType>
-    </gmail.sendMail>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "to":"ashalya86@gmail.com",
-        "subject":"Hello",
-        "cc":"vanii@gamil.com",
-        "bcc":"elil@gmail.com",
-        "messageBody":"Hello! Thank you for contacting us.",
-        "contentType":"text/html; charset=UTF-8"
-    }
-    ```
-
-??? note "modifyExistingMessages"
-    The modifyExistingMessages operation modifies an existing message. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/modify) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the message to modify.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>addLabelIds</td>
-            <td>A list of IDs of labels to add to this message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>removeLabelIds</td>
-            <td>A list of IDs of labels to remove from this message.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.modifyExistingMessages>
-        <id>{$ctx:id}</id>
-        <addLabelIds>{$ctx:addLabelIds}</addLabelIds>
-        <removeLabelIds>{$ctx:removeLabelIds}</removeLabelIds>
-    </gmail.modifyExistingMessages>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"14ba5cd56fcb61ee",
-        "addLabelIds": [
-            "Label_33",
-            "Label_24"],
-        "removeLabelIds": [
-            "Label_28",
-            "Label_31"]
-    }
-    ```
-
-??? note "trashMessages"
-    The trashMessages operation sends a message to the trash. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/trash) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the message to send to trash.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.trashMessages>
-        <id>{$ctx:id}</id>
-    </gmail.trashMessages>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"4647683792802"
-    }
-    ```
-
-??? note "unTrashMessages"
-    The unTrashMessages operation removes a message from the trash. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/untrash) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the message to untrash.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.unTrashMessages>
-        <id>{$ctx:id}</id>
-    </gmail.unTrashMessages>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"4647683792802"
-    }
-    ```
-
-??? note "deleteMessages"
-    The deleteMessages operation permanently deletes a message. The message cannot be recovered after it is deleted. You can use trashMessages instead if you do not want to permanently delete the message. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/delete) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>The ID of the message to delete.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.deleteMessages>
-        <id>{$ctx:id}</id>
-    </gmail.deleteMessages>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "id":"4647683792802"
-    }
-    ```
-
-??? note "sendMailWithAttachment"
-    The sendMailWithAttachment operation sends a message with attachments. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/messages/send) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>to</td>
-            <td>The email addresses of the recipients of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>subject</td>
-            <td>Subject of the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>cc</td>
-            <td>The email addresses of recipients who will receive a copy of this message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>bcc</td>
-            <td>The email addresses of recipients who will privately receive a copy of this message (their email addresses will be hidden).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>fileName</td>
-            <td>A comma-separated list of file names of the attachments you want to include with the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>filePath</td>
-            <td>A comma-separated list of file paths of the attachments you want to include with the message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>messageBody</td>
-            <td>Content of the message.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.sendMailWithAttachment>
-        <subject>{$ctx:subject}</subject>
-        <to>{$ctx:to}</to>
-        <cc>{$ctx:cc}</cc>
-        <bcc>{$ctx:bcc}</bcc>
-        <messageBody>{$ctx:messageBody}</messageBody>
-        <fileName>{$ctx:fileName}</fileName>
-        <filePath>{$ctx:filePath}</filePath>
-    </gmail.sendMailWithAttachment>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "subject":"WSO2 Gmail Connector",
-        "to":"hmrajas1990@gmail.com",
-        "cc":"hmrajas1990@gmail.com",
-        "bcc":"rajjaz@wso2.com",
-        "messageBody":"Welcome to WSO2 ESB Gmail Connector!!!!!",
-        "fileName":"smile.png",
-        "filePath":"/home/rajjaz/Documents/ESB/esb-connector-gmail/src/test/resources/artifacts/ESB/config/smile.png"
-    }
-    ```
-
-### Threads
-
-??? note "listAllThreads"
-    The listAllThreads operation lists all the existing email threads. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/threads/list) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>includeSpamTrash</td>
-            <td>Include messages from SPAM and TRASH in the results (default: false).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>labelIds</td>
-            <td>Only returns threads with labels that match all of the specified label IDs.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>maxResults</td>
-            <td>Maximum number of messages to return.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>pageToken</td>
-            <td>Page token to retrieve a specific page of results in the list.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>q</td>
-            <td>Only returns messages matching the specified query. Supports the same query format as the Gmail search box.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.listAllThreads>
-        <includeSpamTrash>{$ctx:includeSpamTrash}</includeSpamTrash>
-        <labelIds>{$ctx:labelIds}</labelIds>
-        <maxResults>{$ctx:maxResults}</maxResults>
-        <pageToken>{$ctx:pageToken}</pageToken>
-        <q>{$ctx:q}</q>
-    </gmail.listAllThreads>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "maxResults":"10",
-        "includeSpamTrash":"true",
-        "pageToken":"00965906535058580458",
-        "labelIds":"UNREAD",
-        "q":"Jira"
-    }
-    ```
-
-??? note "readThread"
-    The readThread operation retrieves an existing thread. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/threads/get) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>id</td> - <td>The ID of the thread to retrieve.</td> - <td>Yes</td> - </tr> - <tr> - <td>format</td> - <td>The format in which to return the messages in the thread.</td> - <td>Yes</td> - </tr> - <tr> - <td>metadataHeaders</td> - <td>When the format is METADATA, only include the headers specified with this property.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.readThread> - <id>{$ctx:id}</id> - <format>{$ctx:format}</format> - <metadataHeaders>{$ctx:metadataHeaders}</metadataHeaders> - </gmail.readThread> - ``` - - **Sample request** - - ```json - { - "id":"14bbb686ba287e1d", - "format":"minimal" - } - ``` - -??? note "trashThreads" - The trashThreads operation sends a thread to the trash. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/threads/trash) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>id</td> - <td>The ID of the thread to trash.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.trashThreads> - <id>{$ctx:id}</id> - </gmail.trashThreads> - ``` - - **Sample request** - - ```json - { - "id":"14bbb686ba287e1d" - } - ``` - -??? note "unTrashThreads" - The unTrashThreads operation removes a thread from the trash. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/threads/untrash) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>id</td> - <td>The ID of the thread to untrash.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.unTrashThreads> - <id>{$ctx:id}</id> - </gmail.unTrashThreads> - ``` - - **Sample request** - - ```json - { - "id":"14bbb686ba287e1d" - } - ``` - -??? note "modifyExistingThreads" - The modifyExistingThreads operation modifies an existing thread. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/threads/modify) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>id</td> - <td>The ID of the thread to modify.</td> - <td>Yes</td> - </tr> - <tr> - <td>addLabelIds</td> - <td>A list of IDs of labels to add to this thread.</td> - <td>Yes</td> - </tr> - <tr> - <td>removeLabelIds</td> - <td>A list of IDs of labels to remove from this thread.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <gmail.modifyExistingThreads> - <id>{$ctx:id}</id> - <addLabelIds>{$ctx:addLabelIds}</addLabelIds> - <removeLabelIds>{$ctx:removeLabelIds}</removeLabelIds> - </gmail.modifyExistingThreads> - ``` - - **Sample request** - - ```json - { - "id":"14b31c7af7b778f4", - "addLabelIds": [ - "Label_33", - "Label_24"], - "removeLabelIds": [ - "Label_28", - "Label_31"] - } - ``` - -### User History - -??? note "listTheHistory" - The listTheHistory operation lists the history of changes to the user's mailbox. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/history/list) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>startHistoryId</td>
-            <td>Returns history records after the specified startHistoryId.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>labelId</td>
-            <td>Only returns history records containing labels that match the specified label ID.</td>
-            <td>No</td>
-        </tr>
-        <tr>
-            <td>maxResults</td>
-            <td>The maximum number of history records to return.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>pageToken</td>
-            <td>Page token to retrieve a specific page of results in the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.listTheHistory>
-        <startHistoryId>{$ctx:startHistoryId}</startHistoryId>
-        <labelId>{$ctx:labelId}</labelId>
-        <maxResults>{$ctx:maxResults}</maxResults>
-        <pageToken>{$ctx:pageToken}</pageToken>
-    </gmail.listTheHistory>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "startHistoryId":"7399652",
-        "labelId":"Label_31",
-        "maxResults":"10"
-    }
-    ```
-
-### User Profiles
-
-??? note "getUserProfile"
-    The getUserProfile operation lists all details about the user's profile. See the [related API documentation](https://developers.google.com/gmail/api/v1/reference/users/getProfile) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <gmail.getUserProfile/>
-    ```
-
-### Sample configuration in a scenario
-
-The following is a sample proxy service that illustrates how to connect to Gmail with the init operation and use the listDrafts operation. The sample request for this proxy can be found in listDrafts sample request. You can use this sample as a template for using other operations in this category.
-
-```xml
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="gmail_listDrafts"
-       transports="https,http"
-       statistics="disable"
-       trace="disable"
-       startOnLoad="true">
-   <target>
-      <inSequence>
-         <property name="maxResults" expression="json-eval($.maxResults)"/>
-         <property name="pageToken" expression="json-eval($.pageToken)"/>
-         <property name="userId" expression="json-eval($.userId)"/>
-         <property name="refreshToken" expression="json-eval($.refreshToken)"/>
-         <property name="clientId" expression="json-eval($.clientId)"/>
-         <property name="clientSecret" expression="json-eval($.clientSecret)"/>
-         <property name="accessToken" expression="json-eval($.accessToken)"/>
-         <property name="registryPath" expression="json-eval($.registryPath)"/>
-         <property name="apiUrl" expression="json-eval($.apiUrl)"/>
-         <gmail.init>
-            <userId>{$ctx:userId}</userId>
-            <refreshToken>{$ctx:refreshToken}</refreshToken>
-            <clientSecret>{$ctx:clientSecret}</clientSecret>
-            <clientId>{$ctx:clientId}</clientId>
-            <registryPath>{$ctx:registryPath}</registryPath>
-            <accessToken>{$ctx:accessToken}</accessToken>
-            <apiUrl>{$ctx:apiUrl}</apiUrl>
-         </gmail.init>
-         <gmail.listDrafts>
-            <maxResults>{$ctx:maxResults}</maxResults>
-            <pageToken>{$ctx:pageToken}</pageToken>
-         </gmail.listDrafts>
-         <respond/>
-      </inSequence>
-      <outSequence>
-         <log/>
-         <send/>
-      </outSequence>
-   </target>
-   <parameter name="serviceType">proxy</parameter>
-   <description/>
-</proxy>
-```
-
-The following is a sample proxy service that illustrates how to connect to Gmail with the init operation and use the readLabel operation. The sample request for this proxy can be found in readLabel sample request. You can use this sample as a template for using other operations in this category.
- -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="gmail_listLabels" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="id" expression="json-eval($.id)"/> - <property name="format" expression="json-eval($.format)"/> - <property name="metadataHeaders" expression="json-eval($.metadataHeaders)"/> - <property name="userId" expression="json-eval($.userId)"/> - <property name="refreshToken" expression="json-eval($.refreshToken)"/> - <property name="clientId" expression="json-eval($.clientId)"/> - <property name="clientSecret" expression="json-eval($.clientSecret)"/> - <property name="accessToken" expression="json-eval($.accessToken)"/> - <property name="registryPath" expression="json-eval($.registryPath)"/> - <property name="apiUrl" expression="json-eval($.apiUrl)"/> - <gmail.init> - <userId>{$ctx:userId}</userId> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <clientId>{$ctx:clientId}</clientId> - <registryPath>{$ctx:registryPath}</registryPath> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - </gmail.init> - <gmail.readLabel> - <id>{$ctx:id}</id> - </gmail.readLabel> - <respond/> - </inSequence> - <outSequence> - <log/> - <send/> - </outSequence> - </target> - <parameter name="serviceType">proxy</parameter> - <description/> -</proxy> -``` - -The following is a sample proxy service that illustrates how to connect to Gmail with the init operation and use the listAllMails operation. The sample request for this proxy can be found in listAllMails sample request. You can use this sample as a template for using other operations in this category. - -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="gmail_listAllMails" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="includeSpamTrash" expression="json-eval($.includeSpamTrash)"/> - <property name="labelIds" expression="json-eval($.labelIds)"/> - <property name="maxResults" expression="json-eval($.maxResults)"/> - <property name="pageToken" expression="json-eval($.pageToken)"/> - <property name="q" expression="json-eval($.q)"/> - <property name="userId" expression="json-eval($.userId)"/> - <property name="refreshToken" expression="json-eval($.refreshToken)"/> - <property name="clientId" expression="json-eval($.clientId)"/> - <property name="clientSecret" expression="json-eval($.clientSecret)"/> - <property name="accessToken" expression="json-eval($.accessToken)"/> - <property name="registryPath" expression="json-eval($.registryPath)"/> - <property name="apiUrl" expression="json-eval($.apiUrl)"/> - <gmail.init> - <userId>{$ctx:userId}</userId> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <clientId>{$ctx:clientId}</clientId> - <registryPath>{$ctx:registryPath}</registryPath> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - </gmail.init> - <gmail.listAllMails> - <includeSpamTrash>{$ctx:includeSpamTrash}</includeSpamTrash> - <labelIds>{$ctx:labelIds}</labelIds> - <maxResults>{$ctx:maxResults}</maxResults> - <pageToken>{$ctx:pageToken}</pageToken> - <q>{$ctx:q}</q> - </gmail.listAllMails> - <respond/> - </inSequence> - <outSequence> - <log/> - <send/> - </outSequence> - </target> - <parameter name="serviceType">proxy</parameter> - <description/> -</proxy> -``` - -The following is a sample 
proxy service that illustrates how to connect to Gmail with the init operation and use the listAllThreads operation. The sample request for this proxy can be found in listAllThreads sample request. You can use this sample as a template for using other operations in this category. - -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="gmail_listAllThreads" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="includeSpamTrash" expression="json-eval($.includeSpamTrash)"/> - <property name="labelIds" expression="json-eval($.labelIds)"/> - <property name="maxResults" expression="json-eval($.maxResults)"/> - <property name="pageToken" expression="json-eval($.pageToken)"/> - <property name="q" expression="json-eval($.q)"/> - <property name="userId" expression="json-eval($.userId)"/> - <property name="refreshToken" expression="json-eval($.refreshToken)"/> - <property name="clientId" expression="json-eval($.clientId)"/> - <property name="clientSecret" expression="json-eval($.clientSecret)"/> - <property name="accessToken" expression="json-eval($.accessToken)"/> - <property name="registryPath" expression="json-eval($.registryPath)"/> - <property name="apiUrl" expression="json-eval($.apiUrl)"/> - <gmail.init> - <userId>{$ctx:userId}</userId> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <clientId>{$ctx:clientId}</clientId> - <registryPath>{$ctx:registryPath}</registryPath> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - </gmail.init> - <gmail.listAllThreads> - <includeSpamTrash>{$ctx:includeSpamTrash}</includeSpamTrash> - <labelIds>{$ctx:labelIds}</labelIds> - <maxResults>{$ctx:maxResults}</maxResults> - <pageToken>{$ctx:pageToken}</pageToken> - <q>{$ctx:q}</q> - </gmail.listAllThreads> - <respond/> - </inSequence> - <outSequence> - <log/> - <send/> - </outSequence> - </target> - <parameter name="serviceType">proxy</parameter> - <description/> -</proxy> -``` - -The following is a sample proxy service that illustrates how to connect to Gmail with the init operation and use the listTheHistory operation. The sample request for this proxy can be found in listTheHistory sample request. 
- -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="gmail_listTheHistory" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="startHistoryId" expression="json-eval($.startHistoryId)"/> - <property name="labelId" expression="json-eval($.labelId)"/> - <property name="maxResults" expression="json-eval($.maxResults)"/> - <property name="pageToken" expression="json-eval($.pageToken)"/> - <property name="userId" expression="json-eval($.userId)"/> - <property name="refreshToken" expression="json-eval($.refreshToken)"/> - <property name="clientId" expression="json-eval($.clientId)"/> - <property name="clientSecret" expression="json-eval($.clientSecret)"/> - <property name="accessToken" expression="json-eval($.accessToken)"/> - <property name="registryPath" expression="json-eval($.registryPath)"/> - <property name="apiUrl" expression="json-eval($.apiUrl)"/> - <gmail.init> - <userId>{$ctx:userId}</userId> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <clientId>{$ctx:clientId}</clientId> - <registryPath>{$ctx:registryPath}</registryPath> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - </gmail.init> - <gmail.listTheHistory> - <startHistoryId>{$ctx:startHistoryId}</startHistoryId> - <labelId>{$ctx:labelId}</labelId> - <maxResults>{$ctx:maxResults}</maxResults> - <pageToken>{$ctx:pageToken}</pageToken> - </gmail.listTheHistory> - <respond/> - </inSequence> - <outSequence> - <log/> - <send/> - </outSequence> - </target> - <parameter name="serviceType">proxy</parameter> - <description/> -</proxy> -``` - -The following is a sample proxy service that illustrates how to connect to Gmail with the init operation and use the getUserProfile operation. The sample request for this proxy can be found in listTheProfile sample request. 
- -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="gmail_getUserProfile" - transports="https,http" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="userId" expression="json-eval($.userId)"/> - <property name="refreshToken" expression="json-eval($.refreshToken)"/> - <property name="clientId" expression="json-eval($.clientId)"/> - <property name="clientSecret" expression="json-eval($.clientSecret)"/> - <property name="accessToken" expression="json-eval($.accessToken)"/> - <property name="registryPath" expression="json-eval($.registryPath)"/> - <property name="apiUrl" expression="json-eval($.apiUrl)"/> - <gmail.init> - <userId>{$ctx:userId}</userId> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <clientId>{$ctx:clientId}</clientId> - <registryPath>{$ctx:registryPath}</registryPath> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - </gmail.init> - <gmail.getUserProfile/> - <respond/> - </inSequence> - <outSequence> - <log/> - <send/> - </outSequence> - </target> - <parameter name="serviceType">proxy</parameter> - <description/> -</proxy> -``` \ No newline at end of file diff --git a/en/docs/reference/connectors/gmail-connector/gmail-connector-example.md b/en/docs/reference/connectors/gmail-connector/gmail-connector-example.md deleted file mode 100644 index 9cc6d01871..0000000000 --- a/en/docs/reference/connectors/gmail-connector/gmail-connector-example.md +++ /dev/null @@ -1,121 +0,0 @@ -# Gmail Connector Example - -The Gmail Connector allows you to access the [Gmail REST API](https://developers.google.com/gmail/api/v1/reference) from an integration sequence. - -## What you'll build - -<img src="{{base_path}}/assets/img/integrate/connectors/gmailconnector.png" title="Using Gmail Connector" width="800" alt="Using Gmail Connector"/> - -This example demonstrates a scenario where a customer feedback Gmail account of a company can be easily managed using the WSO2 Gmail Connector. This application contains a service that can be invoked through an HTTP GET request. Once the service is invoked, it returns the contents of unread emails in the Inbox under the label of Customers, while sending an automated response to the customer, thanking them for their feedback. The number of emails that can be handled in a single invocation is specified in the application. - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Configure the connector in WSO2 Integration Studio - -1. Follow these steps to set up the Integration Project and the Connector Exporter Project. -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -2. Right click on the created Integration Project and select, -> **New** -> **Rest API** to create the REST API. - <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/> - -3. Follow these steps to [configure the Gmail API]({{base_path}}/reference/connectors/gmail-connector/configuring-gmail-api/) and obtain the Client Id, Client Secret, Access Token and Refresh Token. - -4. Provide the API name as **SendMails**. You can go to the source view of the XML configuration file of the API and copy the following configuration. 
-```xml -<?xml version="1.0" encoding="UTF-8"?> -<api context="/sendmails" name="SendMails" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="GET"> - <inSequence> - <gmail.init> - <userId></userId> - <accessToken></accessToken> - <apiUrl>https://www.googleapis.com/gmail</apiUrl> - <clientId></clientId> - <clientSecret></clientSecret> - <refreshToken></refreshToken> - </gmail.init> - <gmail.listAllMails> - <includeSpamTrash>false</includeSpamTrash> - <maxResults>20</maxResults> - <q>is:unread label:customers</q> - </gmail.listAllMails> - <iterate expression="json-eval($.messages)"> - <target> - <sequence> - <sequence key="reply"/> - </sequence> - </target> - </iterate> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> -</api> -``` - -5. Right click on the created Integration Project and select **New** -> **Sequence** to create the defined sequence called **reply**. - -6. Provide the Sequence name as **reply**. You can go to the source view of the XML configuration file of the API and copy the following configuration. -```xml -<?xml version="1.0" encoding="UTF-8"?> -<sequence name="reply" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <property expression="json-eval($.id)" name="msgId" scope="default" type="STRING"/> - <gmail.getAccessTokenFromRefreshToken> - <clientId></clientId> - <clientSecret></clientSecret> - <refreshToken></refreshToken> - </gmail.getAccessTokenFromRefreshToken> - <gmail.readMail> - <id>{$ctx:msgId}</id> - </gmail.readMail> - <property expression="json-eval($.payload.headers[6].value)" name="response" scope="default" type="STRING"/> - <log level="custom"> - <property expression="get-property('response')" name="response1"/> - </log> - <gmail.getAccessTokenFromRefreshToken> - <clientId></clientId> - <clientSecret></clientSecret> - <refreshToken></refreshToken> - </gmail.getAccessTokenFromRefreshToken> - <gmail.sendMail> - <to>{$ctx:response}</to> - <subject>Best of Europe - 6 Countries in 9 Days</subject> - <from>isurumuy@gmail.com</from> - <messageBody>Thank you for your valuable feedback.</messageBody> - </gmail.sendMail> -</sequence> -``` -7. In the Rest API and in the Sequence, provide your obtained **Client ID**, **Client Secret**, **Access Token**, and **Refresh Token** accordingly. The **userID** should be your Gmail address. - -8. Follow these steps to export the artifacts. -{!includes/reference/connectors/exporting-artifacts.md!} - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/gmailconnector.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the value of the access token and make other such changes before deploying and running this project. - -## Deployment -Follow these steps to deploy the exported CApp in the integration runtime.<br> - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -``` - curl -H "Content-Type: application/json" --request GET http://localhost:8290/sendmails -``` - -The senders should receive an email with a subject of "Best of Europe — 6 Countries in 9 Days", and a body of "Thank you for your valuable feedback." 
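
-
-For reference, the intermediate payload that the `gmail.listAllMails` operation returns into the message flow follows the shape of the Gmail API `users.messages.list` response, which is why the API above iterates over `$.messages` before calling the `reply` sequence for each entry. A trimmed illustration (the IDs below are placeholders) looks similar to this:
-
-```
-{
-    "messages": [
-        { "id": "17c98d7e2a1b0001", "threadId": "17c98d7e2a1b0001" },
-        { "id": "17c98d7e2a1b0002", "threadId": "17c98d7e2a1b0002" }
-    ],
-    "resultSizeEstimate": 2
-}
-```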
- -## What's Next - -* To customize this example for your own scenario, see [Gmail Connector Configuration]({{base_path}}/reference/connectors/gmail-connector/gmail-connector-config/) documentation. \ No newline at end of file diff --git a/en/docs/reference/connectors/gmail-connector/gmail-connector-overview.md b/en/docs/reference/connectors/gmail-connector/gmail-connector-overview.md deleted file mode 100644 index c011b4c874..0000000000 --- a/en/docs/reference/connectors/gmail-connector/gmail-connector-overview.md +++ /dev/null @@ -1,33 +0,0 @@ -# Gmail Connector Overview - -Gmail is a free, Web-based e-mail service provided by Google. It allows you to send, read, and delete emails through the Gmail REST API. Furthermore, it provides the ability to read, trash, untrash, and delete threads, create, update, and delete drafts, get the Gmail profile, and access the mailbox history as well, while handling OAuth 2.0 authentication. - -To see the Gmail Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "gmail". - -<img src="{{base_path}}/assets/img/integrate/connectors/gmail-store.png" title="Gmail Connector Store" width="200" alt="Gmail Connector Store"/> - -## Compatibility - -| Connector Version | Supported product versions | -| ------------- |-------------| -| 3.0.8 | APIM 4.0.0, EI 7.1.0, EI 7.0.x EI 6.6.0 EI 6.5.0 | - -For older versions, see the details in the connector store. - -## Gmail Connector documentation - -* **[Creating the Client ID and Client Secret]({{base_path}}/reference/connectors/gmail-connector/configuring-gmail-api/)**: You need to first create Gmail credentials for the connector to use in order to interact with Gmail. - -* **[Gmail Connector Example]({{base_path}}/reference/connectors/gmail-connector/gmail-connector-example/)**: This example demonstrates a scenario where a customer feedback Gmail account of a company can be easily managed using the WSO2 Gmail Connector. - -* **[Gmail Connector Reference]({{base_path}}/reference/connectors/gmail-connector/gmail-connector-config/)**: This documentation provides a reference guide for the Gmail Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, create a pull request in the following repository. - -* [Gmail Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-gmail) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. 
diff --git a/en/docs/reference/connectors/google-firebase-connector/google-firebase-configuration.md b/en/docs/reference/connectors/google-firebase-connector/google-firebase-configuration.md deleted file mode 100644 index d9d37d4815..0000000000 --- a/en/docs/reference/connectors/google-firebase-connector/google-firebase-configuration.md +++ /dev/null @@ -1,326 +0,0 @@ -# Google Firebase Connector Reference - -## Initializing Google Firebase Connector - -``` -<googlefirebase.init> -``` - -``` -<googlefirebase.init> - <accountType>{$ctx:accountType}</accountType> - <projectId>{$ctx:projectId}</projectId> - <privateKeyId>{$ctx:privateKeyId}</privateKeyId> - <privateKey>{$ctx:privateKey}</privateKey> - <clientEmail>{$ctx:clientEmail}</clientEmail> - <clientId>{$ctx:clientId}</clientId> - <authUri>{$ctx:authUri}</authUri> - <tokenUri>{$ctx:tokenUri}</tokenUri> - <authProviderCertUrl>{$ctx:authProviderCertUrl}</authProviderCertUrl> - <clientCertUrl>{$ctx:clientCertUrl}</clientCertUrl> -</googlefirebase.init> -``` - -**NOTE:** -The parameters under `<init>` section of the configuration above are referring to the credentials we obtained from Google Firebase. The parameters are mapped to the keys contained in the Json file that you have downloaded from Firebase. - -``` -accountType --> type -projectId --> project_id -privateKeyId --> private_key_id -privateKey --> private_key -clientEmail --> client_email -clientId --> client_id -authUri --> auth_uri -tokenUri --> token_uri -authProviderCertUrl --> auth_provider_x509_cert_url -clientCertUrl --> client_x509_cert_url -``` -<br/> - -## Sending Cloud Messaging messages to Firebase - -``` -<googlefirebase.sendMessage> -``` - -It allows you to send Firebase Cloud Messaging messages to end-user devices. Specifically, you can send messages to individual devices, named topics, or condition statements that match one or more topics. The Admin FCM API enables constructing message payloads tailored to different target platforms (Android, iOS and Web). If a message payload contains configuration options for multiple platforms, the FCM service customizes the message for each platform when delivering. Firebase Cloud Messaging (FCM) offers different types of FCM messages. With FCM, you can send two types of messages to clients: - -* Notification messages, sometimes thought of as "display messages." -* Data messages, which are handled by the client app. 
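
-
-As a quick illustration of the difference, the two types differ only in which top-level field carries the content. A minimal sketch of each, in the JSON form used by the FCM HTTP v1 API (the token value below is a placeholder):
-
-```
-{ "message": { "token": "DEVICE_REGISTRATION_TOKEN",
-               "notification": { "title": "Hello", "body": "Displayed by the OS notification tray" } } }
-
-{ "message": { "token": "DEVICE_REGISTRATION_TOKEN",
-               "data": { "key1": "value1", "key2": "value2" } } }
-```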
- -For more info, see [https://firebase.google.com/docs/cloud-messaging/concept-options](https://firebase.google.com/docs/cloud-messaging/concept-options) - -``` -<googlefirebase.sendMessage> - <messagingType>{$ctx:messagingType}</messagingType> - <dryRunMode>{$ctx:dryRunMode}</dryRunMode> - <registrationToken>{$ctx:registrationToken}</registrationToken> - <topicName>{$ctx:topicName}</topicName> - <condition>{$ctx:condition}</condition> - <dataFieldsOfMessage>{$ctx:dataFieldsOfMessage}</dataFieldsOfMessage> - <notificationTitle>{$ctx:notificationTitle}</notificationTitle> - <notificationBody>{$ctx:notificationBody}</notificationBody> - <androidPriority>{$ctx:androidPriority}</androidPriority> - <timeToLiveDuration>{$ctx:timeToLiveDuration}</timeToLiveDuration> - <restrictedPackageName>{$ctx:restrictedPackageName}</restrictedPackageName> - <collapseKey>{$ctx:collapseKey}</collapseKey> - <dataFieldsOfAndroidConfig>{$ctx:dataFieldsOfAndroidConfig}</dataFieldsOfAndroidConfig> - <androidNotificationTitle>{$ctx:androidNotificationTitle}</androidNotificationTitle> - <androidNotificationBody>{$ctx:androidNotificationBody}</androidNotificationBody> - <androidClickAction>{$ctx:androidClickAction}</androidClickAction> - <androidIcon>{$ctx:androidIcon}</androidIcon> - <androidColor>{$ctx:androidColor}</androidColor> - <androidTag>{$ctx:androidTag}</androidTag> - <androidSound>{$ctx:androidSound}</androidSound> - <androidTitleLocalizationKey>{$ctx:androidTitleLocalizationKey}</androidTitleLocalizationKey> - <androidBodyLocalizationKey>{$ctx:androidBodyLocalizationKey}</androidBodyLocalizationKey> - <androidTitleLocalizationArgs>{$ctx:androidTitleLocalizationArgs}</androidTitleLocalizationArgs> - <androidBodyLocalizationArgs>{$ctx:androidBodyLocalizationArgs}</androidBodyLocalizationArgs> - <apnsHeaders>{$ctx:apnsHeaders}</apnsHeaders> - <apnsCustomData>{$ctx:apnsCustomData}</apnsCustomData> - <apnsBadge>{$ctx:apnsBadge}</apnsBadge> - <apnsSound>{$ctx:apnsSound}</apnsSound> - <apnsContentAvailable>{$ctx:apnsContentAvailable}</apnsContentAvailable> - <apnsCategory>{$ctx:apnsCategory}</apnsCategory> - <apnsThreadId>{$ctx:apnsThreadId}</apnsThreadId> - <apnsAlertTitle>{$ctx:apnsAlertTitle}</apnsAlertTitle> - <apnsAlertBody>{$ctx:apnsAlertBody}</apnsAlertBody> - <webPushHeaders>{$ctx:webPushHeaders}</webPushHeaders> - <webPushData>{$ctx:webPushData}</webPushData> - <webPushNotificationTitle>{$ctx:webPushNotificationTitle}</webPushNotificationTitle> - <webPushNotificationBody>{$ctx:webPushNotificationBody}</webPushNotificationBody> - <webPushNotificationIcon>{$ctx:webPushNotificationIcon}</webPushNotificationIcon> - <webPushNotificationBadge>{$ctx:webPushNotificationBadge}</webPushNotificationBadge> - <webPushNotificationImage>{$ctx:webPushNotificationImage}</webPushNotificationImage> - <webPushNotificationLanguage>{$ctx:webPushNotificationLanguage}</webPushNotificationLanguage> - <webPushNotificationTag>{$ctx:webPushNotificationTag}</webPushNotificationTag> - <webPushNotificationDirection>{$ctx:webPushNotificationDirection}</webPushNotificationDirection> - <webPushNotificationRenotify>{$ctx:webPushNotificationRenotify}</webPushNotificationRenotify> - <webPushNotificationInteraction>{$ctx:webPushNotificationInteraction}</webPushNotificationInteraction> - <webPushNotificationSilent>{$ctx:webPushNotificationSilent}</webPushNotificationSilent> - <webPushNotificationTimestamp>{$ctx:webPushNotificationTimestamp}</webPushNotificationTimestamp> - 
<webPushNotificationVibrate>{$ctx:webPushNotificationVibrate}</webPushNotificationVibrate>
-</googlefirebase.sendMessage>
-```
-
-**Properties**
-
-* messagingType - Defines the messaging type. It must be exactly one of the values "token", "topic", or "condition".
-* registrationToken - Required if "messagingType" is set to token. The FCM API allows you to send messages to individual devices by specifying a registration token for the target device. Registration tokens are strings generated by the client FCM SDKs for each end-user client app instance.
-* topicName - Required if "messagingType" is set to topic. The name of the topic. Based on the publish/subscribe model, FCM topic messaging allows you to send a message to multiple devices that have opted in to a particular topic. You compose topic messages as needed, and FCM handles routing and delivering the message reliably to the right devices.
-* condition - Required if "messagingType" is set to condition, that is, when you want to send a message to a combination of topics. A condition is a boolean expression that specifies the target topics.
-* dryRunMode - Sends FCM messages in dry run mode. All the usual validations are performed on messages sent in this mode, but they are not actually delivered to the target devices.
-* dataFieldsOfMessage - Defines key-value pairs to be added to the message as data fields. Eg: "key1:value1,key2:value2"
-* notificationTitle - Sets the notification information (notification title) to be included in the message.
-* notificationBody - Sets the notification information (notification body) to be included in the message.
-* androidPriority - The message priority. Must be either "normal" or "high".
-* timeToLiveDuration - The time-to-live duration of the message. This is how long the message will be kept in FCM storage if the target devices are offline. The maximum allowed is 4 weeks, which is also the default. Set to 0 to send the message immediately (fire and forget).
-* restrictedPackageName - Package name of the application where the registration tokens must match in order to receive the message.
-* collapseKey - An identifier of a group of messages that can be collapsed, so that only the last message gets sent when delivery can be resumed. A maximum of 4 different collapse keys is allowed at any given time.
-* dataFieldsOfAndroidConfig - A map of key-value pairs where each key and value are strings. If specified, overrides the data field set on the "dataFieldsOfMessage" property (top-level message). Eg: "key1:value1, key2:value2"
-* androidNotificationTitle - Sets a notification title that is specific to Android notifications.
-* androidNotificationBody - Sets a notification body that is specific to Android notifications.
-* androidClickAction - Sets the action associated with a user click on the notification. If specified, an activity with a matching Intent Filter is launched when a user clicks on the notification.
-* androidIcon - The notification icon. If not specified, FCM displays the launcher icon specified in your app manifest.
-* androidColor - The notification's icon color, expressed in #rrggbb format.
-* androidTag - Identifier used to replace existing notifications in the notification drawer. If not specified, each request creates a new notification. If specified and a notification with the same tag is already being shown, the new notification replaces the existing one in the notification drawer.
-* androidSound - The sound to play when the device receives the notification. Supports default or the filename of a sound resource bundled in the app.
-* androidTitleLocalizationKey - The key of the title string in the app's string resources, used to localize the title text to the user's current locale.
-* androidBodyLocalizationKey - The key of the body string in the app's string resources, used to localize the body text to the user's current locale.
-* androidTitleLocalizationArgs - A list of string values to be used in place of the format specifiers in "androidTitleLocalizationKey" when localizing the title text to the user's current locale.
-* androidBodyLocalizationArgs - A list of string values to be used in place of the format specifiers in "androidBodyLocalizationKey" when localizing the body text to the user's current locale.
-* apnsHeaders - HTTP request headers defined in the Apple Push Notification Service. Refer to the APNS request headers for supported headers. Eg: "header1:value1, header2:value2"
-* apnsCustomData - A map of key-value pairs where each key and value are strings. If specified, overrides the data field set on the "dataFieldsOfMessage" (top-level message). Eg: "key1:value1, key2:value2"
-* apnsBadge - Include this key when you want the system to modify the badge of your app icon. If this key is not included in the dictionary, the badge is not changed. To remove the badge, set the value of this key to 0.
-* apnsSound - Include this key when you want the system to play a sound. The value of this key is the name of a sound file.
-* apnsContentAvailable - Include this key with a value of 1 to configure a background update notification. When this key is present, the system wakes up your app in the background and delivers the notification to its app delegate.
-* apnsCategory - Provide this key with a string value that represents the notification's type.
-* apnsThreadId - Provide this key with a string value that represents the app-specific identifier for grouping notifications.
-* apnsAlertTitle - Include this key when you want the system to display a standard alert. A short string describing the purpose of the notification.
-* apnsAlertBody - Include this key when you want the system to display a standard alert. The text of the alert message.
-* webPushHeaders - Adds the given key-value pair as a Webpush HTTP header. Refer to the WebPush specification for supported headers. Eg: "header1:value1,header2:value2"
-* webPushData - A map of key-value pairs where each key and value are strings. If specified, overrides the data field set on the "dataFieldsOfMessage" (top-level message). Eg: "key1:value1, key2:value2"
-* webPushNotificationTitle - The title of the notification. This title overrides the corresponding field on the "notificationTitle" (top-level message notification).
-* webPushNotificationBody - The body of the notification. This body overrides the corresponding field on the "notificationBody" (top-level message notification).
-* webPushNotificationIcon - The URL of an icon to be displayed as part of the web notification.
-* webPushNotificationBadge - The URL of the image used to represent the notification when there is not enough space to display the notification itself.
-* webPushNotificationImage - The URL of an image to be displayed as part of the notification.
-* webPushNotificationLanguage - Sets the language of the notification. -* webPushNotificationTag - Sets an identifying tag on the notification. The idea of notification tags is that more than one notification can share the same tag, linking them together. One notification can then be programmatically replaced with another to avoid the users' screen being filled up with a huge number of similar notifications. -* webPushNotificationDirection - The text direction of the notification. -* webPushNotificationRenotify - Specifies whether the user should be notified after a new notification replaces an old one. -* webPushNotificationInteraction - A Boolean indicating that a notification should remain active until the user clicks or dismisses it, rather than closing automatically. -* webPushNotificationSilent - Specifies whether the notification should be silent. i.e., no sounds or vibrations should be issued, regardless of the device settings. -* webPushNotificationTimestamp - Specifies the time at which a notification is created or applicable. -* webPushNotificationVibrate - Specifies a vibration pattern for devices with vibration hardware to emit. - - -**Sample message** - -``` -{ - "accountType": "service_account", - "projectId": "wso2-42209", - "privateKeyId": "8d4ed1af9c0dsfdsfgdgdgsf55a7197b0ac9afe", - "privateKey": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBdasfsggdgfhfjjkljlC3wkCC\nGKRvlPH1wqXPMhhsWEMH0k9OgiQ+XMdfsfnjgntenjtQ2VtTmCNBWyg/r+ff59KuqhQBs\nHy3ixWbSYW5XYd0ww6fT/UOH1dLYKSKLidEO6v2Rd2Xh+bxSwqi3IuyjuDy8WBJ6\n5NlV1HdKC5jZWrGZjVgooAHFs5WxhnTiYVWL4egOjzPzBiujWIJGfZCL91oPlqyf\nS4hn+JYh3yhOoCy2MpeDsreAcy9LuPdiR3u7Kqb49/e6pGf0Z2KjL4375OUEEK6S\nCixuZQPFHlnqXi6OcmsLqa+A0kmjkLlVHPe9iH6n5Ku7Ikd+P+7ycRS2W8SF6+fg\nIt7oO9LpAgMBAAECggEACQy65kOIq+W2gFYcSfLHjhwqj/FKiaexd94l91slYPpV\n44v2ghxpOqmDTFRNA1rdgTM0NlFbFnCx8wc3TWOuZTN+0RoMHouPZo2GKX7Epepg\nSiVNW9NSaVQZHbTuAHV8ST41U7M4AyG+t6i1JEx2recStG+QCfqi2/xV2V0kN6ZL\neUdGIznl6CmfOdz2mU9JDOVLMpGBEfjEv8B9QukO0odTAJTTlP/XFHVbjHn14B+O\nn2YSIqzXD0aBAJxsecxkDwRsXP3wlg0viAu/wa8b7m6vKMJPYTf85/TThiZNloer\nl1PHuvLfCSlgV0XltyMoUzcNBw8mQgP7ggq4bR1etQKBgQDdGixkmQNIs8X78qAC\nonH6dFG8lUbDbOQCmM1UgijyZHiUNCC0pO6tKCHv9XVkhv845xdv+uIBtxixnETT\njiAQchiy+KaBEUrFY2XPkJt4XKTY8hOhg60IMk4fD9QIl8GUMqp/6ut4YcFe3ldL\nPXWQBsKXLWH7G4GV7Cb9OwEsRQKBgQDDG/Amuxc5vodv78DGDop1BIixW4FXqowD\nYs7LCbzSCTeZ3NF3gqXkF8GJRmBj6GOkrOBfleeffyjeVAuyX/K6PbXUFzrn0bHi\nYG1zDLMXUZEIguK3aJwl3sDnqNYr5GK92Yt6nMmZwXY/0E60b54PJpg1oKy8hfug\nZBC/N8mgVQKBgCn4BeUyhkUOms4wR984Jpp76ef6Deyahs1XY+JespcQKzM2kd64\nT/XeYFLELPxgA6Ixe2luHehlcPKFzyq5F60He1i9ih2FwsOlEnZL5Lb8Hu5vRPqr\nm/SqV9ndj0nyRHR1CZguZ3P6WlI/siI+EEq+fcFkg+y+U+K5aM04nghhAoGBAKSR\n+TPCFWoYgobxVMn6U+E2LNJkm6nFagolGsZ59TG4opR+hJRot+K4Av/2Q7GhwAKT\n60HU4KVRDbjSbXdMpSFgkfFOktocrw2CRm+Ho7wkidADDpajfyoWROJiMByfrIX0\nbEjE3Ot7GnHjE6/wggLHjBWX7HusC72TCekwdjptAoGAZO2GhJq72eaKs4WoaQys\nkzUTlYxzeP3/hOJbJiD/p2VJkNfwSV5AkOYcPFAyEV5kydA7DwzAKLGaTk9a04Wx\nWXm7wCSVm8QMCYirFU7HTpjnge96fSpO12w6tQ0PB2EIVZ4hFVRVW3sy1i67saqi\nzbNQ+/qgESIWS+7KXf2otwQ=\n-----END PRIVATE KEY-----\n", - "clientEmail": "firebase-adminsdk-yr45b@wso2-42209.iam.gserviceaccount.com", - "clientId": "1080514883273363474", - "authUri": "https://accounts.google.com/o/oauth2/auth", - "tokenUri": "https://oauth2.googleapis.com/token", - "authProviderCertUrl": "https://www.googleapis.com/oauth2/v1/certs", - "clientCertUrl": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-yr86b%40wso2-42209.iam.gserviceaccount.com", - "messagingType":"token", - "dryRunMode":false, - 
"registrationToken":"dTOJJxAyOBA:APA91bG776j90xEViBzejzy-Ije_FzAAudqY7T67Z52jEsPiANkTA4R5-C7xkx-DkOAzrw9s_YXbTjrbLttiO5rOUsYxcBTYZ684C74aCg6oQWzqrfC9fvSMmDXHR8v_8dPuH1eYIvv1", - "topicName":"test2", - "condition":"'test2' in topics", - "dataFieldsOfMessage":"key1:value1,key2:value2", - "notificationTitle":"test title", - "notificationBody":"test body", - "androidPriority":"normal", - "timeToLiveDuration":"123", - "restrictedPackageName":"com.google.firebase.quickstart.fcm", - "collapseKey":"test-key", - "dataFieldsOfAndroidConfig":"key3:value3,key4:value4", - "androidNotificationTitle":"Android Notification title", - "androidNotificationBody":"Android Notification body", - "androidClickAction":"android.intent.action.SHOW_APP_INFO", - "androidIcon":"@mipmap/ic_launcher", - "androidColor":"#112233", - "androidTag":"test-tag", - "androidSound":"@raw/bryan_sample", - "androidTitleLocalizationKey":"notification_title_string", - "androidBodyLocalizationKey":"notification_message_string", - "androidTitleLocalizationArgs":"t-arg2,t-arg3", - "androidBodyLocalizationArgs":"b-arg2,b-arg3", - "apnsHeaders":"header1:value1,header2:value2", - "apnsCustomData":"key5:value5,key6:value6", - "apnsBadge":"42", - "apnsSound":"apnsSound", - "apnsContentAvailable":true, - "apnsCategory":"category", - "apnsThreadId":"Thread-Id", - "apnsAlertTitle":"alert-title", - "apnsAlertBody":"alert-body", - "webPushHeaders":"header3:value3,header4:value4", - "webPushData":"key7:value7,key8:value8", - "webPushNotificationTitle":"web-notification-title", - "webPushNotificationBody":"web-notification-body", - "webPushNotificationIcon":"https://img.icons8.com/color/2x/baby-app.png", - "webPushNotificationBadge":"https://img.icons8.com/color/2x/ipad.png", - "webPushNotificationImage":"https://img.icons8.com/color/2x/ios-photos.png", - "webPushNotificationLanguage":"TA", - "webPushNotificationTag":"web-Tag", - "webPushNotificationDirection":"AUTO", - "webPushNotificationRenotify":true, - "webPushNotificationInteraction":false, - "webPushNotificationSilent":false, - "webPushNotificationTimestamp":"100", - "webPushNotificationVibrate":"200,100,200" - } -``` -<br/> - -## Subscribe a device to a Firebase topic - -``` -<googlefirebase.subscribeToTopic> -``` - -``` -<googlefirebase.subscribeToTopic> - <topicName>{$ctx:topicName}</topicName> - <tokenList>{$ctx:tokenList}</tokenList> -</googlefirebase.subscribeToTopic> -``` - -**Properties** - -* topicName - The topic name that need to be subscribe by devices. -* tokenList - List of registration tokens as comma separated values which are generated by the client FCM SDKs for each end-user client app instance. 
(Eg:- "YOUR_REGISTRATION_TOKEN_1,YOUR_REGISTRATION_TOKEN_2, ----,YOUR_REGISTRATION_TOKEN_n") - -**Sample request** - -``` -{ - "accountType": "service_account", - "projectId": "Test-42xx9", - "privateKeyId": "8d4ed1afsfsg56jiyu5ggfhgc02b3055a7197b0ac9afe", - "privateKey": "-----BEGIN PRIVATE KEY-----\nMI3sfsgfgdhgdhgogxD9TEC3wkCC\nGKRvlPH1wqXPMhhsWEMHdhhghghgNBWyg/r+ff59KuqhQBs\nHy3ixWbSYW5XYd0ww6fT/UOH1dLYKSKLidEO6v2Rd2Xh+bxSwqi3IuyjuDy8WBJ6\asafrfrsfrsgtyuy\nS4hn+hfhgfhf/e6pGf0Z2KjL4375OUEEK6S\nCixuZQPFHlnqXi6OcmsLqa+A0kmjkLlVHPe9iH6n5Ku7Ikd+P+7ycRS2W8SF6+fg\nIt7oO9LpAgMBAAECggEACQy65kOIq+W2gFYcSfLHjhwqj/FKiaexd94l91slYPpV\n44v2ghxpOqmDTFRNA1rdgTM0NlFbFnCx8wc3TWOuZTN+0RoMHouPZo2GKX7Epepg\nSiVNW9NSaVQZHbTuAHV8ST41U7M4AyG+t6i1JEx2recStG+QCfqi2/xV2V0kN6ZL\neUdGIznl6CmfOdz2mU9JDOVLMpGBEfjEv8B9QukO0odTAJTTlP/XFHVbjHn14B+O\nn2YSIqzXD0aBAJxsecxkDwRsXP3wlg0viAu/wa8b7m6vKMJPYTf85/TThiZNloer\nl1PHuvLfCSlgV0XltyMoUzcNBw8mQgP7ggq4bR1etQKBgQDdGixkmQNIs8X78qAC\nonH6dFG8lUbDbOQCmM1UgijyZHiUNCC0pO6tKCHv9XVkhv845xdv+uIBtxixnETT\njiAQchiy+KaBEUrFY2XPkJt4XKTY8hOhg60IMk4fD9QIl8GUMqp/6ut4YcFe3ldL\nPXWQBsKXLWH7G4GV7Cb9OwEsRQKBgQDDG/Amuxc5vodv78DGDop1BIixW4FXqowD\nYs7LCbzSCTeZ3NF3gqXkF8GJRmBj6GOkrOBfleeffyjeVAuyX/K6PbXUFzrn0bHi\nYG1zDLMXUZEIguK3aJwl3sDnqNYr5GK92Yt6nMmZwXY/0E60b54PJpg1oKy8hfug\nZBC/N8mgVQKBgCn4BeUyhkUOms4wR984Jpp76ef6Deyahs1XY+JespcQKzM2kd64\nT/XeYFLELPxgA6Ixe2luHehlcPKFzyq5F60He1i9ih2FwsOlEnZL5Lb8Hu5vRPqr\nm/SqV9ndj0nyRHR1CZguZ3P6WlI/siI+EEq+fcFkg+y+U+K5aM04nghhAoGBAKSR\n+TPCFWoYgobxVMn6U+E2LNJkm6nFagolGsZ59TG4opR+hJRot+K4Av/2Q7GhwAKT\n60HU4KVRDbjSbXdMpSFgkfFOktocrw2CRm+Ho7wkidADDpajfyoWROJiMByfrIX0\nbEjE3Ot7GnHjE6/wggLHjBWX7HusC72TCekwdjptAoGAZO2GhJq72eaKs4WoaQys\nkzUTlYxzeP3/hOJbJiD/p2VJkNfwSV5AkOYcPFAyEV5kydA7DwzAKLGaTk9a04Wx\nWXm7wCSVm8QMCYirFU7HTpjnge96fSpO12w6tQ0PB2EIVZ4hFVRVW3sy1i67saqi\nzbNQ+/qgESIWS+7KXf2otwQ=\n-----END PRIVATE KEY-----\n", - "clientEmail": "firebase-adminsdk-yr86b@Test-42xx9.iam.gserviceaccount.com", - "clientId": "10805155677889363474", - "authUri": "https://accounts.google.com/o/oauth2/auth", - "tokenUri": "https://oauth2.googleapis.com/token", - "authProviderCertUrl": "https://www.googleapis.com/oauth2/v1/certs", - "clientCertUrl": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-yr86b%40Test-42xx9.iam.gserviceaccount.com", - "topicName":"test2", - "tokenList":"dlDIxxxxxxxxxxxxBHM:APA9cdfgdghhfjhjjkgkjkjklolooOI6ri8bwHafgNXDjX2n2kKwo4fK8hmuoamVddJfAwWr4xymkLZea_wVfLlENk,dlAIxxxxxxxxxxxxBHM:APAxvvfsojnwgfgwnwkww8bwHafgNXDjX2n2kKwo4fK8hmuRTBJJbksea_wVfLlENk" -} -``` - -**Sample Response** - -``` -{ - "Result": { - "SuccessCount":1, - "FailureCount":0, - "Errors":[] - } -} -``` -<br/> - -## Unsubscribe a device from a Firebase topic - -``` -<googlefirebase.unsubscribeFromTopic> -``` - -This operation allows you to unsubscribe devices from a topic by passing list of registration tokens. Registration tokens are strings generated by the client FCM SDKs for each end-user client app instance. - -``` -<googlefirebase.unsubscribeFromTopic> - <topicName>{$ctx:topicName}</topicName> - <tokenList>{$ctx:tokenList}</tokenList> -</googlefirebase.unsubscribeFromTopic> -``` - -**Properties** - -* topicName - The topic name that need to be subscribe by devices. -* tokenList - List of registration tokens as comma separated values which are generated by the client FCM SDKs for each end-user client app instance. 
(Eg:- "YOUR_REGISTRATION_TOKEN_1,YOUR_REGISTRATION_TOKEN_2, ----,YOUR_REGISTRATION_TOKEN_n") - -**Sample request** - -``` -{ - "accountType": "service_account", - "projectId": "Test-42xx9", - "privateKeyId": "8d4ed1afsfsg56jiyu5ggfhgc02b3055a7197b0ac9afe", - "privateKey": "-----BEGIN PRIVATE KEY-----\nMI3sfsgfgdhgdhgogxD9TEC3wkCC\nGKRvlPH1wqXPMhhsWEMHdhhghghgNBWyg/r+ff59KuqhQBs\nHy3ixWbSYW5XYd0ww6fT/UOH1dLYKSKLidEO6v2Rd2Xh+bxSwqi3IuyjuDy8WBJ6\asafrfrsfrsgtyuy\nS4hn+hfhgfhf/e6pGf0Z2KjL4375OUEEK6S\nCixuZQPFHlnqXi6OcmsLqa+A0kmjkLlVHPe9iH6n5Ku7Ikd+P+7ycRS2W8SF6+fg\nIt7oO9LpAgMBAAECggEACQy65kOIq+W2gFYcSfLHjhwqj/FKiaexd94l91slYPpV\n44v2ghxpOqmDTFRNA1rdgTM0NlFbFnCx8wc3TWOuZTN+0RoMHouPZo2GKX7Epepg\nSiVNW9NSaVQZHbTuAHV8ST41U7M4AyG+t6i1JEx2recStG+QCfqi2/xV2V0kN6ZL\neUdGIznl6CmfOdz2mU9JDOVLMpGBEfjEv8B9QukO0odTAJTTlP/XFHVbjHn14B+O\nn2YSIqzXD0aBAJxsecxkDwRsXP3wlg0viAu/wa8b7m6vKMJPYTf85/TThiZNloer\nl1PHuvLfCSlgV0XltyMoUzcNBw8mQgP7ggq4bR1etQKBgQDdGixkmQNIs8X78qAC\nonH6dFG8lUbDbOQCmM1UgijyZHiUNCC0pO6tKCHv9XVkhv845xdv+uIBtxixnETT\njiAQchiy+KaBEUrFY2XPkJt4XKTY8hOhg60IMk4fD9QIl8GUMqp/6ut4YcFe3ldL\nPXWQBsKXLWH7G4GV7Cb9OwEsRQKBgQDDG/Amuxc5vodv78DGDop1BIixW4FXqowD\nYs7LCbzSCTeZ3NF3gqXkF8GJRmBj6GOkrOBfleeffyjeVAuyX/K6PbXUFzrn0bHi\nYG1zDLMXUZEIguK3aJwl3sDnqNYr5GK92Yt6nMmZwXY/0E60b54PJpg1oKy8hfug\nZBC/N8mgVQKBgCn4BeUyhkUOms4wR984Jpp76ef6Deyahs1XY+JespcQKzM2kd64\nT/XeYFLELPxgA6Ixe2luHehlcPKFzyq5F60He1i9ih2FwsOlEnZL5Lb8Hu5vRPqr\nm/SqV9ndj0nyRHR1CZguZ3P6WlI/siI+EEq+fcFkg+y+U+K5aM04nghhAoGBAKSR\n+TPCFWoYgobxVMn6U+E2LNJkm6nFagolGsZ59TG4opR+hJRot+K4Av/2Q7GhwAKT\n60HU4KVRDbjSbXdMpSFgkfFOktocrw2CRm+Ho7wkidADDpajfyoWROJiMByfrIX0\nbEjE3Ot7GnHjE6/wggLHjBWX7HusC72TCekwdjptAoGAZO2GhJq72eaKs4WoaQys\nkzUTlYxzeP3/hOJbJiD/p2VJkNfwSV5AkOYcPFAyEV5kydA7DwzAKLGaTk9a04Wx\nWXm7wCSVm8QMCYirFU7HTpjnge96fSpO12w6tQ0PB2EIVZ4hFVRVW3sy1i67saqi\nzbNQ+/qgESIWS+7KXf2otwQ=\n-----END PRIVATE KEY-----\n", - "clientEmail": "firebase-adminsdk-yr86b@Test-42xx9.iam.gserviceaccount.com", - "clientId": "10805155677889363474", - "authUri": "https://accounts.google.com/o/oauth2/auth", - "tokenUri": "https://oauth2.googleapis.com/token", - "authProviderCertUrl": "https://www.googleapis.com/oauth2/v1/certs", - "clientCertUrl": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-yr86b%40Test-42xx9.iam.gserviceaccount.com", - "topicName":"test2", - "tokenList":"dlDIxxxxxxxxxxxxBHM:APA9cdfgdghhfjhjjkgkjkjklolooOI6ri8bwHafgNXDjX2n2kKwo4fK8hmuoamVddJfAwWr4xymkLZea_wVfLlENk,dlAIxxxxxxxxxxxxBHM:APAxvvfsojnwgfgwnwkww8bwHafgNXDjX2n2kKwo4fK8hmuRTBJJbksea_wVfLlENk" -} -``` - -**Sample Response** - -``` -{ - "Result": { - "SuccessCount":1, - "FailureCount":0, - "Errors":[] - } -} -``` - diff --git a/en/docs/reference/connectors/google-firebase-connector/google-firebase-connector-example.md b/en/docs/reference/connectors/google-firebase-connector/google-firebase-connector-example.md deleted file mode 100644 index e8e7a756b4..0000000000 --- a/en/docs/reference/connectors/google-firebase-connector/google-firebase-connector-example.md +++ /dev/null @@ -1,276 +0,0 @@ -# Google Firebase Connector Example - -**Google Firebase Connector** is useful for integrating Google Firebase with other enterprise applications, on-premise or cloud. You can generate notifications and send them to Firebase so that they will be triggered to all the registered devices on that topic. - -Please refer to the links below for more use cases. 
-* [Android Topic Messaging](https://firebase.google.com/docs/cloud-messaging/android/topic-messaging)
-* [Android Firebase Push Notifications with Topic Message Subscription](https://inducesmile.com/android/android-firebase-push-notification-with-topic-message-subscription/)
-* [Android Firebase Push Notifications video](https://www.youtube.com/watch?v=aG2JC8c9EK0)
-
-
-## What you'll build
-
-In this example, let us see how we can use the Google Firebase Connector to generate a push notification based on an HTTP API invocation. The integration logic extracts information from the HTTP message sent to the API and uses it to generate the push notification.
-
-> **Note**: The connector can also be used to register a device to a particular topic on Firebase. We will not cover it here. We will also not cover how the notifications can be received using Android or iOS apps. Please refer to online resources to get to know about them.
-
-The overall integration scenario looks like the following.
-<br/><br/>
-<img src="{{base_path}}/assets/img/integrate/connectors/google-firebase-scenario.png" title="Google Firebase Connector scenario" width="800" alt="Google Firebase Connector scenario"/>
-
-## Setting up the environment
-
-You need to create an application at Google Firebase and obtain the required credentials. Please follow [Setting up Google Firebase]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-setup/) for instructions on how to do that.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and import the Google Firebase connector into it.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-1. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API.
-2. Specify the API name as `FirebaseNotify` and API context as `/firebasenotify`. You can go to the source view of the XML configuration file of the API and copy the following configuration.
-
- ```xml - <?xml version="1.0" encoding="UTF-8"?> - <api context="/firebasenotify" name="FirebaseNotify" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" uri-template="/send"> - <inSequence> - <log description="log message"> - <property name="message" value="Google Firebase send notification"/> - </log> - <sequence key="MessageCreateSeq"/> - <googlefirebase.init> - <accountType>service_account</accountType> - <projectId>teststatusapp</projectId> - <privateKeyId>4109637cc1c5c274p811db07d85769f902eb7341</privateKeyId> - <privateKey>-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCyw7v4edhZ4xcm\nzXiTuJHC8Km7AHMqX2UH7yuEf5t2TmPrC3B2l/BA5PY6wwSqA4aoF04rBm0eWW3E\ny8sSUhP7Jkm4yEq16N0IfCsyP04u6pOxSdV2hHlVg8xWwQVJwxMZ19bQjJQIONV2\n7epm6GLnBbZX+sx+HSoOtxFTV5BAsu1blfiyrx/Epvoyw7cicPj0uiPx6R1a+qK+\n2h7wFapLZMsdilvJpcj57S+huwI+G7qSH79kRehyWTfqDKrPHzrdNJ5j880vahM9\n+JhZS8lXHOZt/UA1p2IvsISKP03iFJxrBggWtyW1ld+0qc8QTKQmyQFk6GIhdchp\n5HZ+omqJAgMBAAECggEABrUP9qlECCUHFjmqeE4aFPtV6PpB4pv973u72/le5kQC\ncnwamgPb0TmDW24kpjDWmB8OESXzyT+FKMzrMi/8x9p2dL26H47B9VVVrb7WPS3S\nwQ8Hw04XkIZ05SfnSIQD3FSUKX1uzqCq8g7dnG3EITjTsBkiRqnrjS4KqGrTiCPb\npFREwyePsAx4v2KnI9TL77K9mtZ93Hys+bDj5477FwZ76zqiy56H+v0MzxpSdV0x\nJrTe5VVpgZFi+q1qPMdCUmtx+MTMzCT1ZsOXPCLJqXkpCuVsdLZjNwyfUT3URCdV\nZ3Aw6CQ8YIBmb1eApCdqoxuIZiIsfgaCbrAqPlAnYQKBgQD1D9DhH17NEbfcWxmz\n9lhfIgh/2BgliS4D9w060D3QNR9680gACZC5FjDxWDB490/zUZ+Vj7jiNO9AtXFP\nhLVxIklzh2uJc7YdlEOXSQ8STrgQaBv4MUg2S31XvUtVu5ziuf9f9eBJvMt+cEQk\niNBnS0yivXWcJoRetDa411szYQKBgQC6vmCYd1k0wq5qppvUSAsBDnJ3oAQS1W9X\nphZbLs7Kvkcfno5xOOfsuqF4JJy1vN/yqJ7Y+zVXlf3ZBkfYC5r9piNtkriSsDzB\nFhk1tIpHq2meeq02N7SK2+avn6DrePEr259tT6A2izMp6Y4wIxMgaU/+F8mayNpz\n3Ib69oUwKQKBgQCJ4PksAGNtS7+/qj3+4+Z6uAJCM8n6LIGIV5LI+Wsd3xW0Lnbf\nFoKnsFWfJHg5RyRjiRQZqQBjvVazeKKlE8ymN51N8+5MKp9XaxjQYJmrOkETcg/y\nh3/SlIyUNfvR47n0UqPdUNB9jEyN+gpM5/EhfNtEYQZv8bfeNNTpELnOJQKBgH34\niI6xC8MUhLW6+ClmA85NoZfioHzX74jvp+sQkzyeyLmiqrHj0keVyfCSuge6hlNZ\nvfXe16firV+l5fbuNTpfxUxYChwhuIoDzzOepP0du1zFomyNfUDvvBjClLnjVsTg\nHRaO/SNuGTBvtZPxRSi7AdQE1eGNFhfMLl3CyCupAoGATNzKTIbNVDVTlEECBr0s\nu9VsEJvgunEVhIqf8HPJdqhc3gz7zsvDi5CIrkbKlVJ0gFcOk0zF3VeXBLNgW99S\nbVGnHve/hCnta4DJI6/AOvHR8FXgjEg7Oq8KvbpjSret2BxM9bqypcw4rFSlbjNr\nDxoPamy2NZkhs+pm+IFQNPs=\n-----END PRIVATE KEY-----\n</privateKey> - <clientEmail>firebase-adminsdk-slyr1@teststatusapp.iam.gserviceaccount.com</clientEmail> - <clientId>110823266879433001255</clientId> - <authUri>https://accounts.google.com/o/oauth2/auth</authUri> - <tokenUri>https://oauth2.googleapis.com/token</tokenUri> - <authProviderCertUrl>https://www.googleapis.com/oauth2/v1/certs</authProviderCertUrl> - <clientCertUrl>https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-slyr1%40teststatusapp.iam.gserviceaccount.com</clientCertUrl> - </googlefirebase.init> - <googlefirebase.sendMessage> - <messagingType>topic</messagingType> - <dryRunMode>false</dryRunMode> - <topicName>status</topicName> - <condition>'status' in topics</condition> - <dataFieldsOfMessage>{$ctx:dataFieldsOfMessage}</dataFieldsOfMessage> - <notificationTitle>{$ctx:notificationTitle}</notificationTitle> - <notificationBody>{$ctx:notificationBody}</notificationBody> - <androidPriority>{$ctx:androidPriority}</androidPriority> - <timeToLiveDuration>{$ctx:timeToLiveDuration}</timeToLiveDuration> - <restrictedPackageName>{$ctx:restrictedPackageName}</restrictedPackageName> - <collapseKey>{$ctx:collapseKey}</collapseKey> - <dataFieldsOfAndroidConfig>{$ctx:dataFieldsOfAndroidConfig}</dataFieldsOfAndroidConfig> - 
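<!-- The {$ctx:...} values below are read from message context properties (see the MessageCreateSeq sequence in the next step) -->
-                    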
<androidNotificationTitle>{$ctx:androidNotificationTitle}</androidNotificationTitle> - <androidNotificationBody>{$ctx:androidNotificationBody}</androidNotificationBody> - <androidClickAction>{$ctx:androidClickAction}</androidClickAction> - <androidIcon>{$ctx:androidIcon}</androidIcon> - <androidColor>{$ctx:androidColor}</androidColor> - <androidTag>{$ctx:androidTag}</androidTag> - <androidSound>{$ctx:androidSound}</androidSound> - <androidTitleLocalizationKey>{$ctx:androidTitleLocalizationKey}</androidTitleLocalizationKey> - <androidBodyLocalizationKey>{$ctx:androidBodyLocalizationKey}</androidBodyLocalizationKey> - <androidTitleLocalizationArgs>{$ctx:androidTitleLocalizationArgs}</androidTitleLocalizationArgs> - <androidBodyLocalizationArgs>{$ctx:androidBodyLocalizationArgs}</androidBodyLocalizationArgs> - <apnsHeaders>{$ctx:apnsHeaders}</apnsHeaders> - <apnsCustomData>{$ctx:apnsCustomData}</apnsCustomData> - <apnsBadge>{$ctx:apnsBadge}</apnsBadge> - <apnsSound>{$ctx:apnsSound}</apnsSound> - <apnsContentAvailable>{$ctx:apnsContentAvailable}</apnsContentAvailable> - <apnsCategory>{$ctx:apnsCategory}</apnsCategory> - <apnsThreadId>{$ctx:apnsThreadId}</apnsThreadId> - <apnsAlertTitle>{$ctx:apnsAlertTitle}</apnsAlertTitle> - <apnsAlertBody>{$ctx:apnsAlertBody}</apnsAlertBody> - <webPushHeaders>{$ctx:webPushHeaders}</webPushHeaders> - <webPushData>{$ctx:webPushData}</webPushData> - <webPushNotificationTitle>{$ctx:webPushNotificationTitle}</webPushNotificationTitle> - <webPushNotificationBody>{$ctx:webPushNotificationBody}</webPushNotificationBody> - <webPushNotificationIcon>{$ctx:webPushNotificationIcon}</webPushNotificationIcon> - <webPushNotificationBadge>{$ctx:webPushNotificationBadge}</webPushNotificationBadge> - <webPushNotificationImage>{$ctx:webPushNotificationImage}</webPushNotificationImage> - <webPushNotificationLanguage>{$ctx:webPushNotificationLanguage}</webPushNotificationLanguage> - <webPushNotificationTag>{$ctx:webPushNotificationTag}</webPushNotificationTag> - <webPushNotificationDirection>{$ctx:webPushNotificationDirection}</webPushNotificationDirection> - <webPushNotificationRenotify>{$ctx:webPushNotificationRenotify}</webPushNotificationRenotify> - <webPushNotificationInteraction>{$ctx:webPushNotificationInteraction}</webPushNotificationInteraction> - <webPushNotificationSilent>{$ctx:webPushNotificationSilent}</webPushNotificationSilent> - <webPushNotificationTimestamp>{$ctx:webPushNotificationTimestamp}</webPushNotificationTimestamp> - <webPushNotificationVibrate>{$ctx:webPushNotificationVibrate}</webPushNotificationVibrate> - </googlefirebase.sendMessage> - <log description="log message"> - <property name="message" value="Google Firebase sucessuflly sent notification"/> - </log> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` -3. Right click on the created Integration Project and select, -> **New** -> **Sequence** to create a sequence. Here we will define the logic how the push notification should be constructed. You can extract the information from the incoming HTTP message and set to the properties so that they will be picked up by the connector to construct push notification message. All the fields are not mandatory - some are specific to Android devices and some are specific to Web apps. Note how this sequence is called by the API. 
- ```xml - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="MessageCreateSeq" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <propertyGroup> - <property expression="json-eval($.dataFieldsOfMessage)" name="dataFieldsOfMessage" scope="default" type="STRING"/> - <property expression="json-eval($.notificationTitle)" name="notificationTitle" scope="default" type="STRING"/> - <property expression="json-eval($.notificationBody)" name="notificationBody" scope="default" type="STRING"/> - <property expression="json-eval($.androidPriority)" name="androidPriority" scope="default" type="STRING"/> - <property expression="json-eval($.timeToLiveDuration)" name="timeToLiveDuration" scope="default" type="STRING"/> - <property expression="json-eval($.restrictedPackageName)" name="restrictedPackageName" scope="default" type="STRING"/> - <property expression="json-eval($.collapseKey)" name="collapseKey" scope="default" type="STRING"/> - <property expression="json-eval($.dataFieldsOfAndroidConfig)" name="dataFieldsOfAndroidConfig" scope="default" type="STRING"/> - <property expression="json-eval($.androidNotificationTitle)" name="androidNotificationTitle" scope="default" type="STRING"/> - <property expression="json-eval($.androidNotificationBody)" name="androidNotificationBody" scope="default" type="STRING"/> - <property expression="json-eval($.androidClickAction)" name="androidClickAction" scope="default" type="STRING"/> - <property expression="json-eval($.androidIcon)" name="androidIcon" scope="default" type="STRING"/> - <property expression="json-eval($.androidColor)" name="androidColor" scope="default" type="STRING"/> - <property expression="json-eval($.androidTag)" name="androidTag" scope="default" type="STRING"/> - <property expression="json-eval($.androidSound)" name="androidSound" scope="default" type="STRING"/> - <property expression="json-eval($.androidTitleLocalizationKey)" name="androidTitleLocalizationKey" scope="default" type="STRING"/> - <property expression="json-eval($.androidBodyLocalizationKey)" name="androidBodyLocalizationKey" scope="default" type="STRING"/> - <property expression="json-eval($.androidTitleLocalizationArgs)" name="androidTitleLocalizationArgs" scope="default" type="STRING"/> - <property expression="json-eval($.androidBodyLocalizationArgs)" name="androidBodyLocalizationArgs" scope="default" type="STRING"/> - <property expression="json-eval($.apnsHeaders)" name="apnsHeaders" scope="default" type="STRING"/> - <property expression="json-eval($.apnsCustomData)" name="apnsCustomData" scope="default" type="STRING"/> - <property expression="json-eval($.apnsBadge)" name="apnsBadge" scope="default" type="STRING"/> - <property expression="json-eval($.apnsSound)" name="apnsSound" scope="default" type="STRING"/> - <property expression="json-eval($.apnsContentAvailable)" name="apnsContentAvailable" scope="default" type="STRING"/> - <property expression="json-eval($.apnsCategory)" name="apnsCategory" scope="default" type="STRING"/> - <property expression="json-eval($.apnsThreadId)" name="apnsThreadId" scope="default" type="STRING"/> - <property expression="json-eval($.apnsAlertTitle)" name="apnsAlertTitle" scope="default" type="STRING"/> - <property expression="json-eval($.apnsAlertBody)" name="apnsAlertBody" scope="default" type="STRING"/> - <property expression="json-eval($.webPushHeaders)" name="webPushHeaders" scope="default" type="STRING"/> - <property expression="json-eval($.webPushData)" name="webPushData" scope="default" type="STRING"/> - <property 
expression="json-eval($.webPushNotificationTitle)" name="webPushNotificationTitle" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationBody)" name="webPushNotificationBody" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationIcon)" name="webPushNotificationIcon" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationBadge)" name="webPushNotificationBadge" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationImage)" name="webPushNotificationImage" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationLanguage)" name="webPushNotificationLanguage" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationTag)" name="webPushNotificationTag" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationDirection)" name="webPushNotificationDirection" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationRenotify)" name="webPushNotificationRenotify" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationInteraction)" name="webPushNotificationInteraction" scope="default" type="STRING"/> - <property expression="json-eval($.webPushNotificationSilent)" name="webPushNotificationSilent" scope="default" type="STRING"/> - </propertyGroup> - </sequence> - ``` - -> **Note**: The parameters under `<init>` section of the configuration above are referring to the credentials we obtained from Google Firebase in above steps. The parameters are mapped to the keys of the JSON file that you have downloaded as below. - -``` -accountType --> type -projectId --> project_id -privateKeyId --> private_key_id -privateKey --> private_key -clientEmail --> client_email -clientId --> client_id -authUri --> auth_uri -tokenUri --> token_uri -authProviderCertUrl --> auth_provider_x509_cert_url -clientCertUrl --> client_x509_cert_url -``` - -Now we can export the imported connector, sequence, and the API into a single CAR application. CAR application is the one we are going to deploy to server runtime. - - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/google-firebase-test-project.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the value of the credentials and make other such changes before deploying and running this project. - -## Deployment - -Now the exported CApp can be deployed in the integration runtime so that we can run it and test. - -**Note**: Download the following .jar files. -1. [firebase-admin-6.5.0.jar](https://mvnrepository.com/artifact/com.google.firebase/firebase-admin/6.5.0) -2. [google-auth-library-credentials-0.11.0.jar](https://mvnrepository.com/artifact/com.google.auth/google-auth-library-credentials/0.11.0) -3. [google-auth-library-oauth2-http-0.11.0.jar](https://mvnrepository.com/artifact/com.google.auth/google-auth-library-oauth2-http/0.11.0) -4. [api-common-1.7.0.jar](https://mvnrepository.com/artifact/com.google.api/api-common/1.7.0) -and place those into `<Product_HOME>/lib` folder. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -We can use Curl or Postman to try the API. The testing steps are provided for curl. 
Steps for Postman should be straightforward and can be derived from the curl requests. - -1. Create a file called data.xml with the following content. - ``` - { - "dataFieldsOfMessage":"key1:value1,key2:value2", - "notificationTitle":"test title", - "notificationBody":"test body", - "androidPriority":"normal", - "timeToLiveDuration":"123", - "restrictedPackageName":"com.google.firebase.quickstart.fcm", - "collapseKey":"test-key", - "dataFieldsOfAndroidConfig":"key3:value3,key4:value4", - "androidNotificationTitle":"Android Notification title", - "androidNotificationBody":"Android Notification body", - "androidClickAction":"android.intent.action.SHOW_APP_INFO", - "androidIcon":"@mipmap/ic_launcher", - "androidColor":"#112233", - "androidTag":"test-tag", - "androidSound":"@raw/bryan_sample", - "androidTitleLocalizationKey":"notification_title_string", - "androidBodyLocalizationKey":"notification_message_string", - "androidTitleLocalizationArgs":"t-arg2,t-arg3", - "androidBodyLocalizationArgs":"b-arg2,b-arg3", - "apnsHeaders":"header1:value1,header2:value2", - "apnsCustomData":"key5:value5,key6:value6", - "apnsBadge":"42", - "apnsSound":"apnsSound", - "apnsContentAvailable":true, - "apnsCategory":"category", - "apnsThreadId":"Thread-Id", - "apnsAlertTitle":"alert-title", - "apnsAlertBody":"alert-body", - "webPushHeaders":"header3:value3,header4:value4", - "webPushData":"key7:value7,key8:value8", - "webPushNotificationTitle":"web-notification-title", - "webPushNotificationBody":"web-notification-body", - "webPushNotificationIcon":"https://img.icons8.com/color/2x/baby-app.png", - "webPushNotificationBadge":"https://img.icons8.com/color/2x/ipad.png", - "webPushNotificationImage":"https://img.icons8.com/color/2x/ios-photos.png", - "webPushNotificationLanguage":"TA", - "webPushNotificationTag":"web-Tag", - "webPushNotificationDirection":"AUTO", - "webPushNotificationRenotify":true, - "webPushNotificationInteraction":false, - "webPushNotificationSilent":false, - "webPushNotificationTimestamp":"100", - "webPushNotificationVibrate":"200,100,200" - } - ``` -2. Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - ``` - curl -H "Content-Type: application/json" --request POST --data @data.xml http://127.0.0.1:8280/firebasenotify/send - ``` -**Expected Response**: - ``` - { - "Result": { - "MessageID": "projects/teststatusapp/messages/1079202156867212695" - } - } - ``` -If you have registered some devices to your application, the notification will appear on that device. - -## What's Next - -* Please read the [Google Firebase Connector reference guide]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-configuration/) to learn more about the operations you can perform with the connector. \ No newline at end of file diff --git a/en/docs/reference/connectors/google-firebase-connector/google-firebase-overview.md b/en/docs/reference/connectors/google-firebase-connector/google-firebase-overview.md deleted file mode 100644 index b2553b75a3..0000000000 --- a/en/docs/reference/connectors/google-firebase-connector/google-firebase-overview.md +++ /dev/null @@ -1,35 +0,0 @@ -# Google Firebase Connector Overview - -Google Firebase is a rich modern platform to create quick mobile app back-ends, with a ton of built-in and ready-to-integrate features. The most used feature of Firebase is as a back-end. However, along with this back-end, one of the popular features is **push notifications**. 
To see the Google Firebase Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "firebase".

<img src="{{base_path}}/assets/img/integrate/connectors/google-firebase-store.png" title="Google Firebase Connector Store" width="200" alt="Google Firebase Connector Store"/>

## Compatibility

| Connector Version | Supported product versions |
| ------------- |-------------|
| 1.0.2 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

For older versions, see the details in the connector store.

## Google Firebase Connector documentation

* **[Setting up Google Firebase Environment]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-setup/)**: You first need to create a project and generate private keys for the connector to use in order to interact with Google Firebase.

* **[Google Firebase Connector Example]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-connector-example/)**: This example demonstrates how to use the Google Firebase Connector to generate a push notification based on an HTTP API invocation.

* **[Google Firebase Connector Reference]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-configuration/)**: This documentation provides a reference guide for the Google Firebase Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, create a pull request in the following repository.

* [Google Firebase Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-googlefirebase)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/google-firebase-connector/google-firebase-setup.md b/en/docs/reference/connectors/google-firebase-connector/google-firebase-setup.md
deleted file mode 100644
index e084f556c8..0000000000
--- a/en/docs/reference/connectors/google-firebase-connector/google-firebase-setup.md
+++ /dev/null
@@ -1,11 +0,0 @@
# Setting up Google Firebase Environment

1. Open the [Firebase Console](https://console.firebase.google.com/) and log in.
2. Add a Firebase project. The **Add project** dialog also gives you the option to add Firebase to an existing Google Cloud Platform project.
    <img src="{{base_path}}/assets/img/integrate/connectors/add-firebase-project.jpg" title="Add Firebase project" width="400" alt="Add Firebase project"/>
3. Navigate to the [Service Accounts](https://console.firebase.google.com/project/teststatusapp/settings/serviceaccounts/adminsdk) tab on your project's settings page.
4. Click the **Generate New Private Key** button at the bottom of the **Firebase Admin SDK** section of the **Service Accounts** tab.
    <img src="{{base_path}}/assets/img/integrate/connectors/get-firebase-credentials.png" title="Get Firebase credentials" width="600" alt="Get Firebase credentials"/>

    After you click the button, a JSON file containing your service account's credentials is downloaded. You will need the information in this file to initialize the Google Firebase Connector in the [integration scenario]({{base_path}}/reference/connectors/google-firebase-connector/google-firebase-connector-example/) you are going to build next.

diff --git a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration.md b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration.md
deleted file mode 100644
index e2f8f1a0ac..0000000000
--- a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration.md
+++ /dev/null
@@ -1,65 +0,0 @@
# Setting up the Google PubSub Environment

The Google Pub/Sub connector allows you to access the [Google Cloud Pub/Sub API Version v1](https://cloud.google.com/pubsub/docs/reference/rest/) using an integration sequence.

To work with the Google Pub/Sub connector, you need to have a Google Cloud Platform account. If you do not have one, go to [console.cloud.google.com](https://console.cloud.google.com/freetrial) and create a Google Cloud Platform trial account.

Google Pub/Sub uses the OAuth 2.0 protocol for authentication and authorization. All requests to the Google Cloud Pub/Sub API must be authorized by an authenticated user. For information on how to obtain authentication and authorization user credentials, see the following section.

### Obtaining user credentials

Follow the steps below to generate user credentials.

**Obtaining a client ID and client secret**

1. Go to [https://console.developers.google.com/projectselector/apis/credentials](https://console.developers.google.com/apis/credentials), and sign in to your **Google account**.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-credentials-page.png" title="Google pubsub-credentials-page" width="600" alt="Google pubsub-credentials-page"/>

2. If you do not already have a project, create a new project. Then click **Create credentials** and select **OAuth client ID**.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-create-credentials.png" title="Select OAuth client ID" width="600" alt="Select OAuth client ID"/>

3. Next, select **Web Application**, and create a client.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-select-web-application.png" title="Select web application" width="600" alt="Select web application"/>

4. Add [https://developers.google.com/oauthplayground](https://developers.google.com/oauthplayground/) as the redirect URL under **Authorized redirect URIs**, and then click **Create**. This displays the **client ID** and **client secret**.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-authorization-redirect-uri.png" title="Authorization-redirect-URI" width="600" alt="Authorization-redirect-URI"/>

5. Make a note of the **client ID** and **client secret** that are displayed, and then click **OK**.
6. Click **Library** on the left navigation pane.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-select-library.png" title="Select library" width="600" alt="Select library"/>

7. Search for **Google Cloud Pub/Sub API** under the **Big data** or **Networking** category.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-select-api.png" title="Pubsub API" width="600" alt="Pubsub API"/>

8. Click **Enable**. This enables the Google Cloud Pub/Sub API.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-enable-api.png" title="Pubsub enable API" width="600" alt="Pubsub enable API"/>

**Obtaining an access token and refresh token**

1. Navigate to the [OAuth playground](https://developers.google.com/oauthplayground), click the gear icon in the top right corner of the screen, and select **Use your own OAuth credentials**.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-apply-playground-credentials.png" title="playground-credentials" width="600" alt="playground-credentials"/>

2. Specify the **client ID** and **client secret** that you obtained above, and click **Close**.

3. Under Step 1 on the screen, select **Google Cloud Pub/Sub API** from the list of APIs, and select all the **scopes** that are listed under Google Cloud Pub/Sub API.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-authorize-api.png" title="Authorize API" width="600" alt="Authorize API"/>

4. Click **Authorize APIs**. This requests permission to access your profile details.

5. Click **ALLOW**.

6. In Step 2 on the screen, click **Exchange authorization code for tokens** to generate and view the access token and refresh token.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-exchange-authorization-code.png" title="Exchange-authorization-code" width="600" alt="Exchange-authorization-code"/>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md
deleted file mode 100644
index 401abec634..0000000000
--- a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md
+++ /dev/null
@@ -1,362 +0,0 @@
# Google Pub Sub Connector Example

The Google Pub/Sub connector allows you to access the [Google Cloud Pub/Sub API Version v1](https://cloud.google.com/pubsub/docs/reference/rest/) from an integration sequence.

## What you'll build

Given below is a sample scenario that demonstrates how to work with the WSO2 Google Pub Sub Connector to:

1. Create a topic to store company update notifications.
2. Insert company update notifications into the created topic.
3. Retrieve company updates from the created topic.

To work with the Google Pub/Sub connector, you need to have a Google Cloud Platform account. Please refer to the [Setting up the Google Pub Sub Environment]({{base_path}}/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration/) documentation to set up an account.

In this scenario, the user needs to create a **Topic** in the **Google Cloud Platform account** under **Big Data**. This topic is used to store notifications related to company updates.
Once the user invokes the `createTopic` resource, the subscribe operation is also triggered simultaneously. The user can then insert company update notifications into the created topic. Finally, the user can retrieve the company updates from the subscribed topic by invoking the API.

All three operations are exposed via an API. The API with the context `/resources` has three resources.

* `/createTopic` : Used to create a topic to store company notifications and subscribe to the topic.
* `/insertCompanyNotifications` : Used to insert company update notifications into the subscribed topic.
* `/getcompanynotifictions` : Used to retrieve information about the company updates.

> **Note**: In this example we use XPath 2.0, which needs to be enabled in the product as shown below before starting the integration service. If you are using **EI 7** or **APIM 4.0.0**, enable this property by adding the following to the PRODUCT-HOME/conf/deployment.toml file. You can further refer to the [Product Configurations]({{base_path}}/reference/config-catalog/#http-transport). If you are using **EI 6**, you can enable this property by uncommenting the **synapse.xpath.dom.failover.enabled=true** property in the PRODUCT-HOME/conf/synapse.properties file.
    ```
    [mediation]
    synapse.enable_xpath_dom_failover=true
    ```

The following diagram shows the overall solution. The user creates a topic, stores some company update notifications in it, and then retrieves them. The user invokes each operation through the same API.

<img src="{{base_path}}/assets/img/integrate/connectors/google-pubsub-connector1.png" title="pub-sub connector example" width="700" alt="pub-sub connector example"/>

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Configure the connector in WSO2 Integration Studio

Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources.

### Import the connector

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

### Add integration logic

First, create an API, which is where we configure the integration logic. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `pubsubApi` and the API context as `/resources`.

<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>

#### Configuring the API

**Configure a resource for the createTopic operation**

1. Initialize the connector.

    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Googlepubsub Connector** section. Then drag and drop the `init` operation into the Design pane.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-drag-and-drop-init.png" title="Drag and drop init operation" width="500" alt="Drag and drop init operation"/>

    2. Add the property values into the `init` operation as shown below. Replace the `accessToken`, `apiUrl`, and `apiVersion` with your values.

        - **accessToken** : The access token that grants access to the Google Pub/Sub API on behalf of a user.
        - **apiUrl** : The application URL of Google Pub/Sub.
        - **apiVersion** : The version of the Google Pub/Sub API.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-init-operation.png" title="Add values to the init operation" width="800" alt="Add values to the init operation"/>

2. Set up the **createTopic** operation.

    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Googlepubsub Connector** section. Then drag and drop the `createTopic` operation into the Design pane.

    2. The createTopic operation creates a new topic with the name that you specify.

        - **projectId** : The unique ID of the project within which you want to create a topic.
        - **topicName** : The name that you want to give the topic that you are creating.

        While invoking the API, the topicName value is populated as an input value for the operation.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-drag-and-drop.createtopic.png" title="Drag and drop createTopic operation" width="500" alt="Drag and drop createTopic operation"/>

    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediator into the Design pane as shown below.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="800" alt="Add property mediators"/>

        The parameters used to configure each Property mediator are given in the steps below.

        > **Note**: The properties should be added to the palette before creating the operation.

    4. Add the property mediator to capture the `topicName` value. The topicName contains the name that you want to give the topic that you are creating.

        - **name** : topicName
        - **expression** : json-eval($.topicName)

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-property-mediator-property1-value1.png" title="Add property mediators topicName" width="600" alt="Add property mediators topicName"/>

3. Set up the **createTopicSubscription** operation.

    1. Initialize the connector. You can use the same configuration to initialize the connector. Please follow the steps given in section 1 for setting up the `init` operation for the createTopic operation.

    2. Set up the `createTopicSubscription` operation. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Googlepubsub Connector** section. Then drag and drop the `createTopicSubscription` operation into the **Design pane**.

        - **projectId** : The unique ID of the project within which the topic is created.
        - **subscriptionName** : The name of the subscription.
        - **topicName** : The name of the topic for which you want to create a subscription.
        - **ackDeadlineSeconds** : The maximum time a subscriber can take to acknowledge a message that is received.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-createtopicsubscription-operation.png" title="Add values to the createTopicSubscription operation" width="800" alt="Add values to the createTopicSubscription operation"/>

    3. Add the property mediator to capture the `subscriptionName` value. This contains the name of the subscription.
        - **name** : subscriptionName
        - **expression** : json-eval($.subscriptionName)

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-property-mediator-property2-value2.png" title="Add values to capture subscriptionName" width="600" alt="Add values to capture subscriptionName"/>

    4. Add the property mediator to store the name of the created topic from the response of the createTopic operation.

        - **name** : nameforsubscription
        - **expression** : json-eval($.name)

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-property-mediator-nameforsubscription.png" title="Add values to capture nameforsubscription" width="600" alt="Add values to capture nameforsubscription"/>

    5. Add the property mediator to extract the topic name from the stored value by splitting it on the `/` separator.

        - **name** : test
        - **expression** : fn:tokenize($ctx:nameforsubscription,'/')[last()]

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-property-mediator-splitting.png" title="Add values to capture splitting value" width="600" alt="Add values to capture splitting value"/>

4. Forward the backend response to the API caller.

    When you invoke the created resource, the request message goes through the `/createTopic` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-Mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.

    1. Drag and drop the **respond mediator** to the **Design view**.

        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-drag-and-drop-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>

    2. Once you have set up the sequences and API, you can see the `/createTopic` resource as shown below.

        <img src="{{base_path}}/assets/img/integrate/connectors/createtopic-design-view.png" title="API Design view" width="600" alt="API Design view"/>

**Configure a resource for the publishMessage operation**

1. Initialize the connector.

    1. Initialize the connector. You can use the same configuration to initialize the connector. Please follow the steps given in section 1 for setting up the `init` operation for the createTopic operation.

    2. Set up the `publishMessage` operation. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Googlepubsub Connector** section. Then drag and drop the `publishMessage` operation into the **Design pane**.

        - **projectId** : The unique ID of the project within which the topic is created.
        - **topicName** : The name of the topic to which messages should be published.
        - **data** : The message payload.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-api-publishmessage-operation.png" title="Add values to the publishMessage operation" width="800" alt="Add values to the publishMessage operation"/>

    3. Add the property mediator to capture the `topicName` value.

        - **name** : topicName
        - **expression** : json-eval($.topicName)

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-topicname1.png" title="Add values to capture topicName" width="800" alt="Add values to capture topicName"/>

    4. Add the property mediator to capture the `data` value.
        - **name** : data
        - **expression** : json-eval($.data)

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-data.png" title="Add values to capture data" width="800" alt="Add values to capture data"/>

**Configure a resource for the pullMessage operation**

1. Initialize the connector.

    1. Initialize the connector. You can use the same configuration to initialize the connector. Please follow the steps given in section 1 for setting up the `init` operation for the createTopic operation.

    2. Set up the `pullMessage` operation. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Googlepubsub Connector** section. Then drag and drop the `pullMessage` operation into the **Design pane**.

        - **projectId** : The unique ID of the project within which the topic is created.
        - **subscriptionName** : The name of the subscription from which messages should be retrieved.
        - **maxMessages** : The maximum number of messages to retrieve.
        - **returnImmediately** : Set this to true if you want the server to respond immediately.

        <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-pullmessages.png" title="Add values to the pull messages operation" width="800" alt="Add values to the pull messages operation"/>

    3. Add the property mediator to capture the `subscriptionName` value. Follow the steps given in the createTopicSubscription operation.

Now you can switch into the Source view and check the XML configuration files of the created API and sequences.

!!! note "pubsubApi.xml"
    ```
    <?xml version="1.0" encoding="UTF-8"?>
    <api context="/resources" name="pubsubApi" xmlns="http://ws.apache.org/ns/synapse">
        <resource methods="POST" url-mapping="/createTopic">
            <inSequence>
                <property expression="json-eval($.topicName)" name="topicName" scope="default" type="STRING"/>
                <property expression="json-eval($.subscriptionName)" name="subscriptionName" scope="default" type="STRING"/>
                <googlepubsub.init>
                    <accessToken>ya29.a0AfH6SMA0MU0Frk_7gNnA79QUWQGnalPXvmkoA4MYS8p8Mt9OSC5SUqqcqIjcrP-_ollVB9gpeg3SufbCpASMCWyHcVCN6ZMCbqz4IdQqRVi8Kt22tI6gR5zvgtWn1qFWnYnGQ6Ehqi_mS9k0PL_R-kQcl-AkqveA8ZY</accessToken>
                    <apiUrl>https://pubsub.googleapis.com</apiUrl>
                    <apiVersion>v1</apiVersion>
                </googlepubsub.init>
                <googlepubsub.createTopic>
                    <projectId>ei-connector-improvement</projectId>
                    <topicName>{$ctx:topicName}</topicName>
                </googlepubsub.createTopic>
                <property expression="json-eval($.name)" name="nameforsubscription" scope="default" type="STRING"/>
                <property expression="fn:tokenize($ctx:nameforsubscription,'/')[last()]" name="test" scope="default" type="STRING" xmlns:fn="http://www.w3.org/2005/xpath-functions"/>
                <googlepubsub.init>
                    <accessToken>ya29.a0AfH6SMA0MU0Frk_7gNnA79QUWQGnalPXvmkoA4MYS8p8Mt9OSC5SUqqcqIjcrP-_ollVB9gpeg3SufbCpASMCWyHcVCN6ZMCbqz4IdQqRVi8Kt22tI6gR5zvgtWn1qFWnYnGQ6Ehqi_mS9k0PL_R-kQcl-AkqveA8ZY</accessToken>
                    <apiUrl>https://pubsub.googleapis.com</apiUrl>
                    <apiVersion>v1</apiVersion>
                </googlepubsub.init>
                <googlepubsub.createTopicSubscription>
                    <projectId>ei-connector-improvement</projectId>
                    <subscriptionName>{$ctx:subscriptionName}</subscriptionName>
                    <topicName>{$ctx:test}</topicName>
                    <ackDeadlineSeconds>30</ackDeadlineSeconds>
                </googlepubsub.createTopicSubscription>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/insertcompanynotifications">
            <inSequence>
expression="json-eval($.topicName)" name="topicName" scope="default" type="STRING"/> - <property expression="json-eval($.data)" name="data" scope="default" type="STRING"/> - <googlepubsub.init> - <accessToken>ya29.a0AfH6SMA0MU0Frk_7gNnA79QUWQGnalPXvmkoA4MYS8p8Mt9OSC5SUqqcqIjcrP-_ollVB9gpeg3SufbCpASMCWyHcVCN6ZMCbqz4IdQqRVi8Kt22tI6gR5zvgtWn1qFWnYnGQ6Ehqi_mS9k0PL_R-kQcl-AkqveA8ZY</accessToken> - <apiUrl>https://pubsub.googleapis.com</apiUrl> - <apiVersion>v1</apiVersion> - </googlepubsub.init> - <googlepubsub.publishMessage> - <projectId>ei-connector-improvement</projectId> - <topicName>{$ctx:topicName}</topicName> - <data>{$ctx:data}</data> - </googlepubsub.publishMessage> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" url-mapping="/getcompanynotifictions"> - <inSequence> - <property expression="json-eval($.subscriptionName)" name="subscriptionName" scope="default" type="STRING"/> - <googlepubsub.init> - <accessToken>ya29.a0AfH6SMDDFZCdoo37Tb48MrJU-ZnNoyrYqNY8r5cgWX0kD7n3GBhZr_TbicfvywjKwGYaZEBV50_yGINVOhZr_4jFMu2O03c87NiDCBpKW5zdsnl3x9iWdsosjDoE7uAGEKKLikPgnKfcgilGB2d-MBzu_c2e53kXG6A</accessToken> - <apiUrl>https://pubsub.googleapis.com</apiUrl> - <apiVersion>v1</apiVersion> - </googlepubsub.init> - <googlepubsub.pullMessage> - <projectId>ei-connector-improvement</projectId> - <subscriptionName>{$ctx:subscriptionName}</subscriptionName> - <maxMessages>2</maxMessages> - <returnImmediately>false</returnImmediately> - </googlepubsub.pullMessage> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/googlepubsub-connector.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the simulator details and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -1. Create a Topic for store company update notifications. - -**Sample request** - - ``` - curl -v POST -d '{"topicName":"CompanyUpdates","subscriptionName": "SubscriptionForCompanyUpdates"}' "http://localhost:8290/resources/createTopic" -H "Content-Type:application/json" - ``` -**Expected response** - - ```json - { - "name": "projects/ei-connector-improvement/subscriptions/SubscriptionForCompanyUpdates", - "topic": "projects/ei-connector-improvement/topics/CompanyUpdates", - "pushConfig": {}, - "ackDeadlineSeconds": 30, - "messageRetentionDuration": "604800s", - "expirationPolicy": { - "ttl": "2678400s" - } - } - ``` -**You will see the results from G-Cloud console** - - - Created Topic. - - <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-gcloudtopic.png" title="pubsub-gcloudTopic" width="800" alt="pubsub-gcloudTopic"/> - - - Created subscription for the Topic that you specify in the G-Cloud. - - <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-gcloudsubscription.png" title="pubsub-gcloudSubscription" width="800" alt="pubsub-gcloudSubscription"/> - -2. Insert company update notifications to the created topic. 
**Sample request**

    ```
    curl -v POST -d '{"topicName":"CompanyUpdates", "data":"This is first notification"}' "http://localhost:8290/resources/insertcompanynotifications" -H "Content-Type:application/json"
    ```
**Expected response**

    ```json
    {
        "messageIds": [
            "1268617220412368"
        ]
    }
    ```
3. Retrieve company updates from the created topic.

**Sample request**

    ```
    curl -v POST -d '{"subscriptionName":"SubscriptionForCompanyUpdates"}' "http://localhost:8290/resources/getcompanynotifictions" -H "Content-Type:application/json"
    ```
**Expected response**

    ```json
    {
        "receivedMessages": [
            {
                "ackId": "ISE-MD5FU0RQBhYsXUZIUTcZCGhRDk9eIz81IChFEgIIFAV8fXFYW3VfVBoHUQ0Zcnxmd2NTQQhXRFB_VVsRDXptXFcnUA8fentgcmhYEwUDR1B4V3Pr67-C9PCXYxclSpuLu6xvM8byp5xMZho9XxJLLD5-NjNFQV5AEkw9BkRJUytDCypYEU4E",
                "message": {
                    "data": "VGhpcyBpcyBmaXJzdCBub3RpZmljYXRpb24=",
                    "messageId": "1268617220412368",
                    "publishTime": "2020-06-09T15:36:35.632Z"
                }
            }
        ]
    }
    ```
**You will see the following result in the G-Cloud console:**

    - View the published company update notification.

    <img src="{{base_path}}/assets/img/integrate/connectors/pubsub-viewmessages.png" title="pubsub-viewmessages" width="800" alt="pubsub-viewmessages"/>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-overview.md b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-overview.md
deleted file mode 100644
index 547f5ec40e..0000000000
--- a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-overview.md
+++ /dev/null
@@ -1,42 +0,0 @@
# Google PubSub Connector Overview

The Google Pub/Sub connector allows you to access the [Google Cloud Pub/Sub API Version v1](https://cloud.google.com/pubsub/docs/reference/rest/) from an integration sequence. Google Cloud Pub/Sub is a fully-managed real-time messaging service that allows you to send and receive messages between independent applications.

The Google Pub/Sub Connector allows developers to build asynchronous messaging flows within their mediation logic. It facilitates the following use cases.

1. One-to-many messaging: the WSO2 integration runtime can place a message in a topic, and many other parties can consume it.
2. Distributing event notifications: the WSO2 integration runtime can send events to Google Pub/Sub, and interested event listeners get triggered.
3. Streaming sensor data to the Google cloud using the WSO2 integration runtime.
4. Improving reliability in message processing: messages received by the WSO2 integration runtime can be sent to Google Pub/Sub and later received and processed by a WSO2 integration runtime in a different region.

A minimal publish sequence illustrating these use cases is sketched below, after the compatibility table.

Inspired by: [Google Pub/Sub docs](https://cloud.google.com/pubsub/docs/overview)

To see the Google Pub/Sub Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "pubsub".

<img src="{{base_path}}/assets/img/integrate/connectors/pubsub-store.png" title="Google PubSub Connector Store" width="200" alt="Google PubSub Connector Store"/>

## Compatibility

| Connector Version | Supported product versions |
| ------------- |-------------|
| 1.0.2 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

For older versions, see the details in the connector store.
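As a taste of what such a flow looks like, the following is a minimal sketch of a sequence that forwards an event notification to a Pub/Sub topic using the `init` and `publishMessage` operations described in the reference guide below. The access token, project ID, topic name, and the `eventPayload` property are placeholders to replace with your own values.

```xml
<sequence name="PublishEventToPubSub" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Authenticate against the Google Pub/Sub REST API -->
    <googlepubsub.init>
        <apiUrl>https://pubsub.googleapis.com</apiUrl>
        <apiVersion>v1</apiVersion>
        <accessToken>REPLACE_WITH_ACCESS_TOKEN</accessToken>
    </googlepubsub.init>
    <!-- Publish the event payload captured earlier in the mediation flow -->
    <googlepubsub.publishMessage>
        <projectId>my-gcp-project</projectId>
        <topicName>EventNotifications</topicName>
        <data>{$ctx:eventPayload}</data>
    </googlepubsub.publishMessage>
</sequence>
```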
## Google Pub/Sub Connector documentation

* **[Setting up the Google Pub/Sub Environment]({{base_path}}/reference/connectors/google-pubsub-connector/googlepubsub-connector-configuration/)**: You need to first generate user credentials and access tokens in order to interact with Google PubSub.

* **[Google Pub/Sub Connector Example]({{base_path}}/reference/connectors/google-pubsub-connector/googlepubsub-connector-example/)**: This example demonstrates how to work with the Google Pub/Sub Connector.

* **[Google Pub/Sub Connector Reference]({{base_path}}/reference/connectors/google-pubsub-connector/googlepubsub-connector-reference/)**: This documentation provides a reference guide for the Google Pub/Sub Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, create a pull request in the following repository.

* [Google Pub/Sub Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-googlepubsub)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-reference.md b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-reference.md
deleted file mode 100644
index b9eb5170c4..0000000000
--- a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-reference.md
+++ /dev/null
@@ -1,305 +0,0 @@
# Google Pub/Sub Connector Reference

The following operations allow you to work with the Google Pub/Sub Connector. Click an operation name to see parameter details and samples on how to use it.

---

To use the Google Pub/Sub connector, add the <googlepubsub.init> element in your configuration before any other Google Pub/Sub operation. This configuration authenticates with Google Pub/Sub via user credentials.

Google Pub/Sub uses the OAuth 2.0 protocol for authentication and authorization. All requests to the Google Cloud Pub/Sub API must be authorized by an authenticated user.

??? note "googlepubsub.init"
    This operation allows you to initialize the connection to Google Pub/Sub.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>apiUrl</td>
            <td>The application URL of Google Pub/Sub.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>apiVersion</td>
            <td>The version of the Google Pub/Sub API.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>accessToken</td>
            <td>The access token that grants access to the Google Pub/Sub API on behalf of a user.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>clientId</td>
            <td>The client ID provided by the Google developer console.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>clientSecret</td>
            <td>The client secret provided by the Google developer console.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>refreshToken</td>
            <td>The refresh token provided by the Google developer console, which can be used to obtain new access tokens.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>blocking</td>
            <td>Set this to true if you want the connector to perform blocking invocations to Google Pub/Sub.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>tokenEndpoint</td>
            <td>The token endpoint of the Google API.
The default will be set to https://www.googleapis.com/oauth2/v4/token if not provided.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <googlepubsub.init> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <accessToken>{$ctx:accessToken}</accessToken> - <clientId>{$ctx:clientId}</clientId> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <blocking>{$ctx:blocking}</blocking> - <tokenEndpoint>{$ctx:tokenEndpoint}</tokenEndpoint> - </googlepubsub.init> - ``` - ---- - -### Project Topics - -??? note "createTopic" - The createTopic operation creates a new topic with a name that you specify. See the [related API documentation](https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.topics/create) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>topicName</td> - <td>The name of the topic that you are creating.</td> - <td>Yes</td> - </tr> - <tr> - <td>projectId</td> - <td>The unique ID of the project within which you want to create the topic.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <googlepubsub.createTopic> - <topicName>{$ctx:topicName}</topicName> - <projectId>{$ctx:projectId}</projectId> - </googlepubsub.createTopic> - ``` - - **Sample request** - - ```json - { - "apiUrl":"https://pubsub.googleapis.com", - "apiVersion":"v1", - "accessToken": "ya29.GlwG2NhgX_NQhxjtF_0G9bzf0FEj_shNWgF_GXmjeYQF0XQXrBjjcrJukforOeyTAHoFfSQW0x-OrrZ2lj47Z6k6DAYZuUv3ZhJMl-ll4mvouAbc", - "topicName":"topicA", - "projectId":"rising-parser-123456" - } - ``` - -??? note "publishMessage" - The publishMessage operation publishes messages to a specified topic. See the [related API documentation](https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.topics/publish) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>topicName</td> - <td>The unique name of the topic to which messages should be published.</td> - <td>Yes</td> - </tr> - <tr> - <td>projectId</td> - <td>The unique ID of the project within which the topic is created.</td> - <td>Yes</td> - </tr> - <tr> - <td>data</td> - <td>The message payload.</td> - <td>Yes</td> - </tr> - <tr> - <td>attributes</td> - <td>Additional attributes of the message.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <googlepubsub.publishMessage> - <topicName>{$ctx:topicName}</topicName> - <projectId>{$ctx:projectId}</projectId> - <data>{$ctx:data}</data> - <attributes>{$ctx:attributes}</attributes> - </googlepubsub.publishMessage> - ``` - - **Sample request** - - ```json - { - "apiUrl":"https://pubsub.googleapis.com", - "apiVersion":"v1", - "accessToken": "ya29.GlwG2NhgX_NQhxjtF_0G9bzf0FEj_shNWgF_GXmjeYQF0XQXrBjjcrJukforOeyTAHoFfSQW0x-OrrZ2lj47Z6k6DAYZuUv3ZhJMl-ll4mvouAbc", - "topicName":"topicA", - "projectId":"rising-parser-123456" - } - ``` - -### Project Subscriptions - -??? note "createTopicSubscription" - The createTopicSubscription operation creates a subscription to a topic that you specify. See the [related API documentation](https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.subscriptions/create) for more information. 
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>topicName</td>
            <td>The name of the topic for which you want to create the subscription.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>projectId</td>
            <td>The unique ID of the project within which the topic is created.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>subscriptionName</td>
            <td>The name of the subscription.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>ackDeadlineSeconds</td>
            <td>The maximum time a subscriber can take to acknowledge a message that is received.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>pushEndpoint</td>
            <td>The URL that specifies the endpoint to which messages should be pushed.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>attributes</td>
            <td>Additional endpoint configuration attributes.</td>
            <td>Optional</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <googlepubsub.createTopicSubscription>
        <topicName>{$ctx:topicName}</topicName>
        <projectId>{$ctx:projectId}</projectId>
        <subscriptionName>{$ctx:subscriptionName}</subscriptionName>
        <ackDeadlineSeconds>{$ctx:ackDeadlineSeconds}</ackDeadlineSeconds>
        <pushEndpoint>{$ctx:pushEndpoint}</pushEndpoint>
        <attributes>{$ctx:attributes}</attributes>
    </googlepubsub.createTopicSubscription>
    ```

    **Sample request**

    ```json
    {
        "apiUrl":"https://pubsub.googleapis.com",
        "apiVersion":"v1",
        "accessToken": "ya29.GlwAJG2NhgX_NQhxjtF_0G9bzf0FEj_shNWgF_GXmYFpwIxjeYQF0XQXukforOeyTAHoFfSQW0x-OrrZ2lj47Z6k6DAYZuUv3ZhJMl-ll4mvouAbc",
        "projectId":"rising-parser-123456",
        "topicName":"topicA",
        "subscriptionName":"mysubA",
        "ackDeadlineSeconds":"30",
        "pushEndpoint": "https://example.com/push",
        "attributes": {"key": "value1","key2":"values2"}
    }
    ```

??? note "pullMessage"
    The pullMessage operation retrieves messages that are published to a topic. See the [related API documentation](https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.subscriptions/pull) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>topicName</td> - <td>The name of the topic to which the subscription belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>projectId</td> - <td>The unique ID of the project within which the topic is created.</td> - <td>Yes</td> - </tr> - <tr> - <td>subscriptionName</td> - <td>The name of the subscription from which messages should be retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>maxMessages</td> - <td>The maximum number of messages to retrieve.</td> - <td>Optional</td> - </tr> - <tr> - <td>returnImmediately</td> - <td>Set this to true if you want the server to respond immediately.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <googlepubsub.pullMessage> - <topicName>{$ctx:topicName}</topicName> - <projectId>{$ctx:projectId}</projectId> - <subscriptionName>{$ctx:subscriptionName}</subscriptionName> - <maxMessages>{$ctx:maxMessages}</maxMessages> - <returnImmediately>{$ctx:returnImmediately}</returnImmediately> - </googlepubsub.pullMessage> - ``` - - **Sample request** - - ```json - { - "apiUrl":"https://pubsub.googleapis.com", - "apiVersion":"v1", - "accessToken": "ya29.GlwABbJG2NhgX_NQhxjtF_0G9bzf0FEj_shNWgF_GXmYFpwIxjeYQF0XjcrJukforOeyTAHoFfSQW0x-OrrZ2lj47Z6k6DAYZuUv3ZhJMl-ll4mvouAbc", - "topicName":"topicA", - "projectId":"rising-parser-123456", - "subscriptionName":"mysubA", - "maxMessages":"2", - "returnImmediately":"false" - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet.md b/en/docs/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet.md deleted file mode 100644 index cc2eaaf2fb..0000000000 --- a/en/docs/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet.md +++ /dev/null @@ -1,56 +0,0 @@ -# Get Credentials for Google Spreadsheet - -To obtain the Access Token, Client Id, Client Secret and Refresh Token, we need to follow the below steps. - -1. Open the [Google API Console Credentials](https://console.developers.google.com/apis/credentials) page. You will be prompted to log in to a Google Account. Log in to your relevant Google Account. - -2. Click on **Select a Project** and click **NEW PROJECT**, to create a project. - <img src="{{base_path}}/assets/img/integrate/connectors/create-project.png" title="Creating a new Project" width="800" alt="Creating a new Project" /> - -3. Enter `SpreadsheetConnector` as the name of the project and click **Create**. - -4. Click **Configure consent screen** in the next screen. - - <img src="{{base_path}}/assets/img/integrate/connectors/consent-screen.png" title="Consent Screen" width="800" alt="Consent Screen" /> - -5. Provide the Application Name as `SpreadsheetConnector` in the Consent Screen. - - <img src="{{base_path}}/assets/img/integrate/connectors/consent-screen2.png" title="Consent Screen" width="800" alt="Consent Screen" /> - -6. Click Create credentials and click OAuth client ID. - - <img src="{{base_path}}/assets/img/integrate/connectors/create-credentials.png" title="Create Credentials" width="800" alt="Create Credentials" /> - -7. Enter the following details in the Create OAuth client ID screen and click Create. 

    | Field | Value |
    | ------------------ | -------------------------------------------------|
    | Application type | Web Application |
    | Name | SpreadsheetConnector |
    | Authorized redirect URIs | https://developers.google.com/oauthplayground |

8. A Client ID and a Client Secret are displayed. Save them for later use.
    <img src="{{base_path}}/assets/img/integrate/connectors/credentials.png" title="Credentials" width="800" alt="Credentials" />

9. Click **Library** on the side menu, search for **Google Sheets API**, and click on it.

10. Click **Enable** to enable the Google Sheets API.
    <img src="{{base_path}}/assets/img/integrate/connectors/sheetsapi.png" title="Enable Google Sheets API" width="800" alt="Enable Google Sheets API" />


## Obtaining Access Token and Refresh Token
1. Navigate to the [OAuth 2.0 Playground](https://developers.google.com/oauthplayground/) and click the OAuth 2.0 Configuration button in the top right corner of your screen.

2. Select **Use your own OAuth credentials**, and provide the Client ID and Client Secret values you obtained. Click **Close**.
    <img src="{{base_path}}/assets/img/integrate/connectors/oath-configuration.png" title="Obtaining Oauth-configuration" width="800" alt="Obtaining Oauth-configuration" />

3. Under Step 1, select `Google Sheets API v4` from the list of APIs and select all the scopes.
    <img src="{{base_path}}/assets/img/integrate/connectors/sheetsapi2.png" title="Selecting Scopes" width="800" alt="Selecting Scopes" />

4. Click the **Authorize APIs** button, select your Gmail account when asked, and allow the scopes.
    <img src="{{base_path}}/assets/img/integrate/connectors/sheetsapi4.png" title="Grant Permission" width="800" alt="Grant Permission" />

5. Under Step 2, click **Exchange authorization code for tokens** to generate and display the Access Token and Refresh Token. Now we are done with configuring the Google Sheets API.
    <img src="{{base_path}}/assets/img/integrate/connectors/refreshtoken.png" title="Getting Tokens" width="800" alt="Getting Tokens" />

diff --git a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config.md b/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config.md
deleted file mode 100644
index dfd59e68ab..0000000000
--- a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config.md
+++ /dev/null
@@ -1,2453 +0,0 @@
# Google Spreadsheet Connector Reference

The following operations allow you to work with the Google Spreadsheet Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the Google Spreadsheet connector, add the <googlespreadsheet.init> element in your proxy configuration before using any other Google Spreadsheet operation. The <googlespreadsheet.init> element authenticates the user via OAuth2 and allows access to the Google account that contains the spreadsheets. For more information on authorizing requests in Google Spreadsheets, see [https://developers.google.com/sheets/api/guides/authorizing](https://developers.google.com/sheets/api/guides/authorizing).

> **Note**: When trying it out for the first time, you need to use a valid access token to use the connector operations. If the provided access token has expired, the token refreshing flow is handled inside the connector. See the [documentation to set up Google Spreadsheets and get credentials such as clientId, clientSecret, accessToken, and refreshToken]({{base_path}}/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet/).
??? note "googlespreadsheet.init"
    The googlespreadsheet.init operation initializes the connector to interact with Google Spreadsheet.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>accessToken</td>
            <td>Access token which is obtained through the OAuth2 playground.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>apiUrl</td>
            <td>The application URL of Google Sheet version v4.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>clientId</td>
            <td>Value of your client ID, which can be obtained via the Google developer console.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>clientSecret</td>
            <td>Value of your client secret, which can be obtained via the Google developer console.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>refreshToken</td>
            <td>Refresh token which is obtained through the OAuth2 playground. It is used to refresh the access token.</td>
            <td>Yes.</td>
        </tr>
    </table>

    **Sample configurations**

    ```xml
    <googlespreadsheet.init>
        <accessToken>{$ctx:accessToken}</accessToken>
        <clientId>{$ctx:clientId}</clientId>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <refreshToken>{$ctx:refreshToken}</refreshToken>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
    </googlespreadsheet.init>
    ```

    To get the OAuth access token, directly call the init method (this method itself calls the getAccessTokenFromRefreshToken method) or add the <googlespreadsheet.getAccessTokenFromRefreshToken> element before the <googlespreadsheet.init> element in your configuration.

    **Sample for getAccessTokenFromRefreshToken**

    ```xml
    <googlespreadsheet.getAccessTokenFromRefreshToken>
        <clientId>{$ctx:clientId}</clientId>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <refreshToken>{$ctx:refreshToken}</refreshToken>
    </googlespreadsheet.getAccessTokenFromRefreshToken>
    ```

---

### Spreadsheet operation

??? note "googlespreadsheet.createSpreadsheet"
    The createSpreadsheet operation allows you to create a new spreadsheet by specifying the spreadsheet ID and sheet properties, and by adding named ranges.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>spreadsheetId</td>
            <td>Unique value of the spreadsheet.</td>
            <td>Optional.</td>
        </tr>
        <tr>
            <td>properties</td>
            <td>Properties of the spreadsheet.</td>
            <td>Optional.</td>
        </tr>
        <tr>
            <td>sheets</td>
            <td>List of sheets and their properties that you want to add into the spreadsheet. You can add multiple sheets.</td>
            <td>Optional.</td>
        </tr>
        <tr>
            <td>namedRanges</td>
            <td>Create names that refer to a single cell or a group of cells on the sheet. The following sample request will create a named range with the name "Name" for the range A1:A6.</td>
            <td>Optional.</td>
        </tr>
        <tr>
            <td>fields</td>
            <td>Specifying which fields to include in a partial response.
For the following request, only the "spreadsheetId" will be included in the response.</td>
            <td>Optional.</td>
        </tr>
    </table>

    **Sample configurations**

    ```xml
    <googlespreadsheet.createSpreadsheet>
        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
        <properties>{$ctx:properties}</properties>
        <sheets>{$ctx:sheets}</sheets>
        <namedRanges>{$ctx:namedRanges}</namedRanges>
        <fields>{$ctx:fields}</fields>
    </googlespreadsheet.createSpreadsheet>
    ```

    **Sample request**

    The sample request given below calls the createSpreadsheet operation. With the following request, we specify spreadsheet details such as the spreadsheet name ("Company") and sheet details such as the sheet name ("Employees") as an array. The spreadsheet will be created in Google Sheets with the name "Company", and the sheet will be created with the name "Employees". Here we specify the "fields" property to get a partial response: as per the following request, only the "spreadsheetId" will be included in the response.

    ```json
    {
        "clientId":"xxxxxxxxxxxxxxxxxxxxxxxn6f2m.apps.googleusercontent.com",
        "clientSecret":"xxxxxxxxxxxxxxxxxxxxxxx",
        "refreshToken":"1/xxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
        "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
        "properties":{
            "title": "Company"
        },
        "sheets":[
            {
                "properties":
                {
                    "title": "Employees"
                }
            }
        ],
        "fields": "spreadsheetId"
    }
    ```

    **Sample response**

    ```json
    {
        "spreadsheetId": "1bWbo72MAhKgeNDCPcE4Wj3uGgN7K9lW1ckDScZV8b30"
    }
    ```

---

### Sheet operations

??? note "googlespreadsheet.addSheetBatchRequest"
    The addSheetBatchRequest operation allows you to add new sheets to an existing spreadsheet. You can specify the sheet properties for the new sheet. An error is thrown if you provide a title that is used for an existing sheet. For more information, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#AddSheetRequest).
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>spreadsheetId</td>
            <td>Unique value of the spreadsheet.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>requests</td>
            <td>It contains data that is an update to apply to a spreadsheet. To add multiple sheets within the spreadsheet, repeat the "addSheet" property within the requests attribute as shown below.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>fields</td>
            <td>Specifying which fields to include in a partial response. For the following request, only the "spreadsheetId" will be included in the response.</td>
            <td>Optional.</td>
        </tr>
    </table>

    **Sample configurations**

    ```xml
    <googlespreadsheet.addSheetBatchRequest>
        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
        <requests>{$ctx:requests}</requests>
        <fields>{$ctx:fields}</fields>
    </googlespreadsheet.addSheetBatchRequest>
    ```

    **Sample request**

    The sample request given below calls the addSheetBatchRequest operation. The request specifies multiple sheet properties, such as the sheet names ("Expenses1", "Expenses2"), the sheet type ("GRID"), and the dimensions ((50,10), (70,10)) of the sheets, as an array. The fields property is specified to get a partial response: the spreadsheetId and replies values will be included in the response. replies contains properties such as the sheet name, type, row and column counts, and sheetId.
    ```json
    {
        "clientId":"617729022812-xxxxxxxxxx.apps.googleusercontent.com",
        "clientSecret":"xxxxxxxxxxxxxxxxx",
        "refreshToken":"1/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
        "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
        "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A",
        "requests": [
            {
                "addSheet": {
                    "properties": {
                        "title": "Expenses1",
                        "sheetType": "GRID",
                        "gridProperties": {
                            "rowCount": 50,
                            "columnCount": 10
                        }
                    }
                }
            },
            {
                "addSheet": {
                    "properties": {
                        "title": "Expenses2",
                        "sheetType": "GRID",
                        "gridProperties": {
                            "rowCount": 70,
                            "columnCount": 10
                        }
                    }
                }
            }
        ],
        "fields": "spreadsheetId,replies"
    }
    ```
    **Sample response**

    ```json
    {
        "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A",
        "replies": [
            {
                "addSheet": {
                    "properties": {
                        "sheetId": 372552230,
                        "title": "Expenses1",
                        "index": 1,
                        "sheetType": "GRID",
                        "gridProperties": {
                            "rowCount": 50,
                            "columnCount": 10
                        }
                    }
                }
            },
            {
                "addSheet": {
                    "properties": {
                        "sheetId": 568417391,
                        "title": "Expenses2",
                        "index": 2,
                        "sheetType": "GRID",
                        "gridProperties": {
                            "rowCount": 70,
                            "columnCount": 10
                        }
                    }
                }
            }
        ]
    }
    ```

??? note "googlespreadsheet.deleteSheetBatchRequest"
    The deleteSheetBatchRequest operation allows you to remove sheets from a given spreadsheet using the "sheetId". You can get the "sheetId" using the getSheetMetaData operation. For more information, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#deletesheetrequest).
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>spreadsheetId</td>
            <td>Unique value of the spreadsheet.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>requests</td>
            <td>It contains data that is an update to apply to a spreadsheet. To delete multiple sheets within the spreadsheet, repeat the "deleteSheet" property within the requests attribute as shown below.</td>
            <td>Yes.</td>
        </tr>
        <tr>
            <td>fields</td>
            <td>Specifying which fields to include in a partial response. For the following request, only the "spreadsheetId" will be included in the response.</td>
            <td>Optional.</td>
        </tr>
    </table>

    **Sample configurations**

    ```xml
    <googlespreadsheet.deleteSheetBatchRequest>
        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
        <requests>{$ctx:requests}</requests>
        <fields>{$ctx:fields}</fields>
    </googlespreadsheet.deleteSheetBatchRequest>
    ```

    **Sample request**

    ```json
    {
        "clientId":"617729022812-xxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com",
        "clientSecret":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "refreshToken":"1/xxxxxxxxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
        "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
        "spreadsheetId": "12KoqoxmxxxxxxxxxxxxxxxxxxxxxKMEIFGCD9EBdrXFGA",
        "requests": [
            {
                "deleteSheet":
                {
                    "sheetId": 813171540
                }
            }
        ],
        "fields": "spreadsheetId"
    }
    ```
    **Sample response**

    ```json
    {
        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
    }
    ```

??? note "googlespreadsheet.getSheetMetaData"
    The getSheetMetaData operation retrieves the sheet metadata within a given spreadsheet. This method can be used to acquire sheet properties and other metadata.
If you only want to read the sheet properties, set the includeGridData query parameter to false to prevent the inclusion of the spreadsheet cell data. The Spreadsheet response contains an array of Sheet objects. The sheet titles and size information specifically can be found under the SheetProperties element of these objects. For more information, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/get). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>includeGridData</td> - <td>True if grid data should be returned. This parameter is ignored if a field mask was set in the request.</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response.</td> - <td>Optional.</td> - </tr> - <tr> - <td>ranges</td> - <td>The ranges to retrieve from the spreadsheet.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.getSheetMetaData> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <includeGridData>{$ctx:includeGridData}</includeGridData> - <ranges>{$ctx:ranges}</ranges> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.getSheetMetaData> - ``` - - **Sample request** - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A", - "includeGridData":"false", - "ranges": "Employees!A1:B2" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A", - "properties": { - "title": "Company", - "locale": "en_US", - "autoRecalc": "ON_CHANGE", - "timeZone": "Etc/GMT", - "defaultFormat": { - "backgroundColor": { - "red": 1, - "green": 1, - "blue": 1 - }, - "padding": { - "top": 2, - "right": 3, - "bottom": 2, - "left": 3 - }, - "verticalAlignment": "BOTTOM", - "wrapStrategy": "OVERFLOW_CELL", - "textFormat": { - "foregroundColor": {}, - "fontFamily": "arial,sans,sans-serif", - "fontSize": 10, - "bold": false, - "italic": false, - "strikethrough": false, - "underline": false - } - } - }, - "sheets": [ - { - "properties": { - "sheetId": 789, - "title": "Employees", - "index": 0, - "sheetType": "GRID", - "gridProperties": { - "rowCount": 1000, - "columnCount": 26 - } - } - } - ], - "spreadsheetUrl": "https://docs.google.com/spreadsheets/d/1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A/edit" - } - ``` - -??? note "googlespreadsheet.updateSheetPropertiesBatchRequest" - The updateSheetPropertiesBatchRequest operation allows you to update all sheet properties. This method allows you to update the size, title, and other sheet properties, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#UpdateSheetPropertiesRequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td> It contains data that is a kind of update to apply to a spreadsheet. 
To Update multiple sheets properties within the spread sheet, need to repeat `updateSheetProperties` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response. This is define outside the requests body.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.updateSheetPropertiesBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.updateSheetPropertiesBatchRequest> - ``` - - **Sample request** - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A", - "requests": [ - { - "updateSheetProperties": { - "properties": { - "sheetId": 789, - "gridProperties": { - "columnCount": 25, - "rowCount": 10 - }, - "title": "Sheet1" - }, - "fields": "title,gridProperties(rowCount,columnCount)" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "1oGxpE3C_2elS4kcCZaB3JqVMiXCYLamC1CXZOgBzy9A" - } - ``` - -??? note "googlespreadsheet.copyTo" - The copyTo operation allows you to copy a single sheet from a spreadsheet to another spreadsheet. Returns the properties of the newly created sheet, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.sheets/copyTo). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>sheetId</td> - <td>The ID of the sheet to copy.</td> - <td>Yes.</td> - </tr> - <tr> - <td>destinationSpreadsheetId</td> - <td>The ID of the spreadsheet to copy the sheet to.</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.copyTo> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <sheetId>{$ctx:sheetId}</sheetId> - <destinationSpreadsheetId>{$ctx:destinationSpreadsheetId}</destinationSpreadsheetId> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.copyTo> - ``` - - **Sample request** - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxx-fCyxRTyf-xxxxxxxxxxx", - "accessToken":"ya29.xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "1oGxpE3C_2elS4kcCxxxxxxxxxxxxLamC1CXZOgBzy9A", - "sheetId":"789", - "destinationSpreadsheetId":"12KoqoxmykLLYbxxxxxxxxxxxxxxxxxxxxEIFGCD9EBdrXFGA" - } - ``` - **Sample response** - - ```json - { - "sheetId": 813171540, - "title": "Copy of Sheet1", - "index": 1, - "sheetType": "GRID", - "gridProperties": { - "rowCount": 10, - "columnCount": 25 - } - } - ``` - ---- - -### Sheet Data operations - -??? 
note "googlespreadsheet.addRowsColumnsData" - The addRowsColumnsData method allows you to add a new rows or columns of data to a sheet, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.values/append). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>range</td> - <td>The [A1 notation](https://developers.google.com/sheets/api/guides/concepts#a1_notation) of the values to retrieve.</td> - <td>Yes.</td> - </tr> - <tr> - <td>insertDataOption</td> - <td>How the input data should be inserted. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append#insertdataoption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>valueInputOption</td> - <td> How the input data should be interpreted. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/ValueInputOption).</td> - <td>Yes.</td> - </tr> - <tr> - <td>majorDimension</td> - <td>The major dimension that results should use. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values#Dimension).</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response. For the following request only the `updates` will be included in the response.</td> - <td>Optional.</td> - </tr> - <tr> - <td>values</td> - <td>The data that was to be written. For more detail [click here](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#listvalue).</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.deleteDimensionBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.deleteDimensionBatchRequest> - ``` - - **Sample request** - - The following request appends data in row major fashion. The range is used to search for existing data and find a "table" within that range. Values will be appended to the next row of the table, starting with the first column of the table. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CxxxxxxxxxxxxxxxxxxxxxxxxxxdrXFGA", - "range":"Sheet1!A1:B2", - "insertDataOption":"INSERT_ROWS", - "majorDimension":"ROWS", - "valueInputOption":"RAW", - "values":[ - [ - "20", - "21" - ], - [ - "22", - "23" - ] - ] - } - ``` - **Sample response** - - The response include the updates details. - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CxxxxxxxxxxxxxxxxxxxxxxxxxxdrXFGA", - "updates": { - "spreadsheetId": "12KoqoxmykLLYbtsm6CxxxxxxxxxxxxxxxxxxxxxxxxxxdrXFGA", - "updatedRange": "Sheet1!A1:B2", - "updatedRows": 2, - "updatedColumns": 2, - "updatedCells": 4 - } - } - ``` - -??? 
note "googlespreadsheet.deleteDimensionBatchRequest" - The deleteDimensionBatchRequest method allows you to delete rows or columns by specifying the dimension, see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#DeleteDimensionRequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple delete operation within the spreadsheet, need to repeat `deleteDimension` property within the requests property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.deleteDimensionBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.deleteDimensionBatchRequest> - ``` - - **Sample request** - - The following request deletes the first three rows in the sheet since we specify dimension as ROWS. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/xxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "deleteDimension": { - "range": { - "sheetId": 121832844, - "dimension": "ROWS", - "startIndex": 0, - "endIndex": 3 - } - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.getCellData" - The getCellData method allows you to retrieve any set of cell data from a sheet. It return cell contents not only as input values (as would be entered by a user at a keyboard) but also it grants full access to values, formulas, formatting, hyperlinks, data validation, and other properties. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.values/get). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>range</td> - <td>The [A1 notation](https://developers.google.com/sheets/api/guides/concepts#a1_notation) of the values to retrieve.</td> - <td>Yes.</td> - </tr> - <tr> - <td>dateTimeRenderOption</td> - <td>How dates, times, and durations should be represented in the output. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/DateTimeRenderOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>majorDimension</td> - <td>The major dimension that results should use. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values#Dimension).</td> - <td>Optional.</td> - </tr> - <tr> - <td>valueRenderOption</td> - <td> How values should be represented in the output. 
For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/ValueRenderOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response. For the following request only the `values` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.getCellData> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <range>{$ctx:range}</range> - <dateTimeRenderOption>{$ctx:dateTimeRenderOption}</dateTimeRenderOption> - <majorDimension>{$ctx:majorDimension}</majorDimension> - <valueRenderOption>{$ctx:valueRenderOption}</valueRenderOption> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.getCellData> - ``` - - **Sample request** - - The following returns the cells data in the range A1:E14 of sheet Sheet1 in row-major order. - - ```json - { - "clientId":"617729022812-xxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxx-x-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "range":"Sheet1!A1:E14", - "dateTimeRenderOption":"SERIAL_NUMBER", - "majorDimension":"ROWS", - "valueRenderOption":"UNFORMATTED_VALUE" - } - ``` - **Sample response** - - In the response cell values in the rage A1:E14 will be return. - - ```json - { - "range": "Sheet1!A1:E14", - "majorDimension": "ROWS", - "values": [ - [ - "20", - "21" - ], - [ - "22", - "23" - ] - ] - } - ``` - -??? note "googlespreadsheet.getMultipleCellData" - The getMultipleCellData method allow you to retrieve any set of cell data from a sheet (including multiple ranges). It return cell contents not only as input values (as would be entered by a user at a keyboard) but also it grants full access to values, formulas, formatting, hyperlinks, data validation, and other properties. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.values/batchGet). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>ranges</td> - <td>The [ranges](https://developers.google.com/sheets/api/guides/concepts#a1_notation) of the values to retrieve from the spreadsheet.</td> - <td>Optional.</td> - </tr> - <tr> - <td>dateTimeRenderOption</td> - <td>How dates, times, and durations should be represented in the output. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/DateTimeRenderOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>majorDimension</td> - <td>The major dimension that results should use. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values#Dimension).</td> - <td>Optional.</td> - </tr> - <tr> - <td>valueRenderOption</td> - <td> How values should be represented in the output. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/ValueRenderOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response. 
For the following request only the `valueRanges` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.getMultipleCellData> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <ranges>{$ctx:ranges}</ranges> - <dateTimeRenderOption>{$ctx:dateTimeRenderOption}</dateTimeRenderOption> - <majorDimension>{$ctx:majorDimension}</majorDimension> - <valueRenderOption>{$ctx:valueRenderOption}</valueRenderOption> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.getMultipleCellData> - ``` - - **Sample request** - - This will allow you to get cell data by specifying multiple cell range using ranges parameter. Here we can specify multiple cell ranges as a comma sperated. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "ranges":"Sheet1!A1:B2,Sheet1!B1:C2,Sheet1!D4:G5", - "dateTimeRenderOption":"SERIAL_NUMBER", - "majorDimension":"ROWS", - "valueRenderOption":"UNFORMATTED_VALUE" - } - ``` - **Sample response** - - In the response we will get all cell data that is in the specified cell ranges. - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "valueRanges": [ - { - "range": "Sheet1!A1:B2", - "majorDimension": "ROWS", - "values": [ - [ - "20", - "21" - ], - [ - "22", - "23" - ] - ] - }, - { - "range": "Sheet1!B1:C2", - "majorDimension": "ROWS", - "values": [ - [ - "21", - 34 - ], - [ - "23", - 47 - ] - ] - }, - { - "range": "Sheet1!D4:G5", - "majorDimension": "ROWS" - } - ] - } - ``` - -??? note "googlespreadsheet.editCell" - The editCell method allow you to edit the content of the cell with new values. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.values/update). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>data</td> - <td>The new values to apply to the spreadsheet.</td> - <td>Optional.</td> - </tr> - <tr> - <td>valueInputOption</td> - <td>How the input data should be interpreted. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/ValueInputOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.editCell> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <range>{$ctx:range}</range> - <valueInputOption>{$ctx:valueInputOption}</valueInputOption> - <fields>{$ctx:fields}</fields> - <majorDimension>{$ctx:majorDimension}</majorDimension> - <values>{$ctx:values}</values> - </googlespreadsheet.editCell> - ``` - - **Sample request** - - In the request we can specify which sheet. 
- - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxxxx", - "refreshToken":"1/Si2q4aOZsaMlYW7bBIoO-fCyxRTyf-xxxxxxxxxxxxx", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "range":"Sheet1!A1:E3", - "majorDimension":"ROWS", - "valueInputOption":"RAW", - "values":[ - [ - "1111", - "2222" - ], - [ - "3333", - "4444" - ] - ] - } - ``` - **Sample response** - - In the response we will get updated details such as cell dimension, cell count, sheet range. - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "updatedRange": "Sheet1!A1:B2", - "updatedRows": 2, - "updatedColumns": 2, - "updatedCells": 4 - } - ``` - -??? note "googlespreadsheet.editMultipleCell" - The editMultipleCell method allow you to edit the content of multiple cell with new values. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets.values/batchUpdate). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>data</td> - <td>The new values to apply to the spreadsheet.</td> - <td>Optional.</td> - </tr> - <tr> - <td>valueInputOption</td> - <td>How the input data should be interpreted. For more detail [click here](https://developers.google.com/sheets/api/reference/rest/v4/ValueInputOption).</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields</td> - <td>Specifying which fields to include in a partial response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.editMultipleCell> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <data>{$ctx:data}</data> - <valueInputOption>{$ctx:valueInputOption}</valueInputOption> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.editMultipleCell> - ``` - - **Sample request** - - Edit the content of multiple cell ranges with new values. We can specify multiple cell ranges and values as JSON array in data. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxx", - "refreshToken":"1/Si2q4aOZsaMlYW7bBIoO-xxxxxxxxxxxxx-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "valueInputOption":"RAW", - "data": [ - { - "values": [["7","8"],["9","10"]], - "range": "Sheet1!A6" - } - ] - } - ``` - **Sample response** - - In the response we will get updated cell, range details as as array in responses property. - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "totalUpdatedRows": 2, - "totalUpdatedColumns": 2, - "totalUpdatedCells": 4, - "totalUpdatedSheets": 1, - "responses": [ - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "updatedRange": "Sheet1!A6:B7", - "updatedRows": 2, - "updatedColumns": 2, - "updatedCells": 4 - } - ] - } - ``` - -??? note "googlespreadsheet.updateCellsBatchRequest" - The updateCellsBatchRequest method allows you to removes all values from a sheet while leaving any formatting unaltered. 
Specifying userEnteredValue in fields(within the requests property) without providing a corresponding value is interpreted as an instruction to clear values in the range. This can be used with other fields as well. For example, changing the fields(within the requests property) value to userEnteredFormat and making the request clears the sheet of all formatting, but leaves the cell values untouched..see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#updatecellsrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple updateCells operation within the spread sheet, need to repeat `updateCells` property within the `requests` property.</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.updateCellsBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.updateCellsBatchRequest> - ``` - - **Sample request** - - ```json - { - "clientId":"617729022812-cxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/xxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "updateCells": { - "start": { - "columnIndex": 3, - "rowIndex": 2, - "sheetId": 121832844 - }, - "rows": [ - { - "values": [ - {"userEnteredValue": {"numberValue": 444}}, - {"userEnteredValue": {"numberValue": 777}} - ] - } - ], - "fields": "userEnteredValue" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.appendDimensionBatchRequest" - The appendDimensionBatchRequest method allows you to appends empty rows and columns to the end of the sheet. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#appenddimensionrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple updateCells operation within the spread sheet, need to repeat `updateCells` property within the `requests` property.</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response. 
For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.appendDimensionBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.appendDimensionBatchRequest>
-    ```
-
-    **Sample request**
-
-    This sample request appends two empty rows to the end of the sheet.
-
-    ```json
-    {
-        "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com",
-        "clientSecret":"xxxxxxxxxxx",
-        "refreshToken":"1/xxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
-        "accessToken":"ya29.xxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
-        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA",
-        "requests": [
-            {
-                "appendDimension": {
-                    "dimension": "ROWS",
-                    "sheetId": 121832844,
-                    "length": 2
-                }
-            }
-        ],
-        "fields": "spreadsheetId"
-    }
-    ```
-    **Sample response**
-
-    ```json
-    {
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
-    }
-    ```
-
-??? note "googlespreadsheet.updateBordersBatchRequest"
-    The updateBordersBatchRequest method allows you to edit cell borders. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#updatebordersrequest).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>spreadsheetId</td>
-            <td>Unique value of the spreadsheet</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>requests</td>
-            <td>It contains the update to apply to a spreadsheet. To perform multiple updateBorders operations within the spreadsheet, repeat the `updateBorders` property within the `requests` property.</td>
-            <td>Optional.</td>
-        </tr>
-        <tr>
-            <td>fields (Outside the requests property)</td>
-            <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.updateBordersBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.updateBordersBatchRequest>
-    ```
-
-    **Sample request**
-
-    In the following request, we specify the range of the sheet whose borders need to be updated, along with the formatting details of the border.
-
-    ```json
-    {
-        "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com",
-        "clientSecret":"xxxxxxxxxxxxxxxxxxxxx",
-        "refreshToken":"1/xxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
-        "accessToken":"ya29.xxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
-        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA",
-        "requests": [
-            {
-                "updateBorders":
-                {
-                    "range": {
-                        "sheetId": 121832844,
-                        "startRowIndex": 0,
-                        "endRowIndex": 10,
-                        "startColumnIndex": 0,
-                        "endColumnIndex": 6
-                    },
-                    "top": {
-                        "style": "DASHED",
-                        "width": 1,
-                        "color": {"blue": 1}
-                    },
-                    "bottom":
-                    {
-                        "style": "DASHED",
-                        "width": 1,
-                        "color": {"blue": 1}
-                    },
-                    "innerHorizontal": {
-                        "style": "DASHED",
-                        "width": 1,
-                        "color": {"blue": 1}
-                    }
-                }
-            }
-        ],
-        "fields": "spreadsheetId"
-    }
-    ```
-    **Sample response**
-
-    ```json
-    {
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
-    }
-    ```
-
-??? 
note "googlespreadsheet.updateBordersBatchRequest" - The updateBordersBatchRequest method allow you to edit cell borders. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#updatebordersrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple updateCells operation within the spread sheet, need to repeat `updateCells` property within the `requests` property.</td> - <td>Optional.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.updateBordersBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.updateBordersBatchRequest> - ``` - - **Sample request** - - In following request we can specify for which range of the sheet the border need to be updated and the formatting details of the border. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "updateBorders": - { - "range": { - "sheetId": 121832844, - "startRowIndex": 0, - "endRowIndex": 10, - "startColumnIndex": 0, - "endColumnIndex": 6 - }, - "top": { - "style": "DASHED", - "width": 1, - "color": {"blue": 1} - }, - "bottom": - { - "style": "DASHED", - "width": 1, - "color": {"blue": 1} - }, - "innerHorizontal": { - "style": "DASHED", - "width": 1, - "color": {"blue": 1} - } - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.repeatCellsBatchRequest" - The repeatCellsBatchRequest method allow you to updates all cells in the range to the values in the given Cell object. Only the fields listed in the fields(within the requests property)will be updated. Others are unchanged. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#repeatcellrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple repeatCell operation within the spread sheet, need to repeat `repeatCell` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response. 
For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.repeatCellsBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.repeatCellsBatchRequest> - ``` - - **Sample request** - - Here the formating specified in "cell" object will be repeted for row index from 13 to 15. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "repeatCell": { - "range": { - "sheetId": 121832844, - "startRowIndex": 13, - "endRowIndex": 15 - }, - "cell": { - "userEnteredFormat": { - "backgroundColor": { - "red": 0.0, - "green": 0.0, - "blue": 0.0 - } - } - }, - "fields": "userEnteredFormat(backgroundColor)" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.mergeCellsBatchRequest" - The mergeCellsBatchRequest method allow you to merges all cells in the range. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#mergecellsrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple mergeCells operation within the spread sheet, need to repeat `mergeCells` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.mergeCellsBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.mergeCellsBatchRequest> - ``` - - **Sample request** - - ```json - { - "clientId":"617729022812-xxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "mergeCells": { - "range": { - "sheetId": 121832844, - "startRowIndex": 0, - "endRowIndex": 2, - "startColumnIndex": 0, - "endColumnIndex": 2 - }, - "mergeType": "MERGE_ALL" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.setDataValidationBatchRequest" - The setDataValidationBatchRequest Sets a data validation rule to every cell in the range. 
To clear validation in a range, call this operation with no rule specified. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#setdatavalidationrequest).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>spreadsheetId</td>
-            <td>Unique value of the spreadsheet</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>requests</td>
-            <td>It contains the update to apply to a spreadsheet. To perform multiple setDataValidation operations within the spreadsheet, repeat the `setDataValidation` property within the `requests` property.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>fields (Outside the requests property)</td>
-            <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.setDataValidationBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.setDataValidationBatchRequest>
-    ```
-
-    **Sample request**
-
-    The following request applies a validation rule to the range A1:B2, requiring each cell value to be a number greater than 5.
-
-    ```json
-    {
-        "clientId":"617729022812-xxxxxxxxxxxxxxxx.apps.googleusercontent.com",
-        "clientSecret":"xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
-        "refreshToken":"1/xx-fCyxRTyf-LpK6fDWF9DgcM",
-        "accessToken":"ya29.xxxxxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr",
-        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA",
-        "requests": [
-            {
-                "setDataValidation": {
-                    "range": {
-                        "sheetId": 121832844,
-                        "startRowIndex": 0,
-                        "endRowIndex": 2,
-                        "startColumnIndex": 0,
-                        "endColumnIndex": 2
-                    },
-                    "rule": {
-                        "condition": {
-                            "type": "NUMBER_GREATER",
-                            "values": [
-                                {
-                                    "userEnteredValue": "5"
-                                }
-                            ]
-                        },
-                        "strict": true
-                    }
-                }
-            }
-        ],
-        "fields": "spreadsheetId"
-    }
-    ```
-    **Sample response**
-
-    ```json
-    {
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
-    }
-    ```
-
-??? note "googlespreadsheet.copyPasteBatchRequest"
-    The copyPasteBatchRequest method allows you to copy cell formatting in one range and paste it into another range on the same sheet. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#copypasterequest).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>spreadsheetId</td>
-            <td>Unique value of the spreadsheet</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>requests</td>
-            <td>It contains the update to apply to a spreadsheet. To perform multiple copyPaste operations within the spreadsheet, repeat the `copyPaste` property within the `requests` property.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>fields (Outside the requests property)</td>
-            <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.copyPasteBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.copyPasteBatchRequest>
-    ```
-
-    **Sample request**
-
-    The following request copies the formatting in range A1:D10 and pastes it to the F1:I10 range on the same sheet. The original values in A1:I10 remain unchanged. 
- - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/xxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.xxxxxxxxxxx-pOuVvnbnHhkVn5u8t6Qr", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA", - "requests": [ - { - "copyPaste": { - "source": { - "sheetId": 121832844, - "startRowIndex": 0, - "endRowIndex": 10, - "startColumnIndex": 0, - "endColumnIndex": 4 - }, - "destination": { - "sheetId": 121832844, - "startRowIndex": 0, - "endRowIndex": 10, - "startColumnIndex": 5, - "endColumnIndex": 9 - }, - "pasteType": "PASTE_FORMAT", - "pasteOrientation": "NORMAL" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.cutPasteBatchRequest" - The cutPasteBatchRequest method allows you to cuts the one range and pastes its data, formats, formulas, and merges to the another range on the same sheet. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#cutpasterequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple cutPaste operation within the spread sheet, need to repeat `cutPaste` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.cutPasteBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.cutPasteBatchRequest> - ``` - - **Sample request** - - The following request cuts the range A1:D10 and pastes its data, formats, formulas, and merges to the F1:I10 range on the same sheet. The original source range cell contents are removed. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-xxxxxxxxxxxxx", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "cutPaste": { - "source": { - "sheetId": 1020069232, - "startRowIndex": 0, - "endRowIndex": 10, - "startColumnIndex": 0, - "endColumnIndex": 4 - }, - "destination": { - "sheetId": 401088778, - "rowIndex": 0, - "columnIndex": 5 - }, - "pasteType": "PASTE_NORMAL" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? 
note "googlespreadsheet.updateConditionalFormatRuleBatchRequest" - The updateConditionalFormatRuleBatchRequest method allows you to updates a conditional format rule at the given index, or moves a conditional format rule to another index,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#updateconditionalformatrulerequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple updateConditionalFormatRule operation within the spread sheet, need to repeat `updateConditionalFormatRule` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.updateConditionalFormatRuleBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.updateConditionalFormatRuleBatchRequest> - ``` - - **Sample request** - - The following request replaces the conditional formatting rule at index 0 with a new rule that formats cells containing the exact text specified ("Total Cost") in the A1:D5 range. - - ```json - { - "clientId":"617729022812-xxxxxxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-xxxxxxxxxxxxxxxxxxxxxx-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "updateConditionalFormatRule": { - "sheetId": 1020069232, - "index": 0, - "rule": { - "ranges": [ - { - "sheetId": 1020069232, - "startRowIndex": 0, - "endRowIndex": 5, - "startColumnIndex": 0, - "endColumnIndex": 4 - } - ], - "booleanRule": { - "condition": { - "type": "TEXT_EQ", - "values": [ - { - "userEnteredValue": "Total Cost" - } - ] - }, - "format": { - "textFormat": { - "bold": true - } - } - } - } - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.addConditionalFormatRuleBatchRequest" - The addConditionalFormatRuleBatchRequest method allows you to adds a new conditional format rule at the given index,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#addconditionalformatrulerequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. 
To perform multiple addConditionalFormatRule operation within the spread sheet, need to repeat `addConditionalFormatRule` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.addConditionalFormatRuleBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.addConditionalFormatRuleBatchRequest> - ``` - - **Sample request** - - The following request establishes new gradient conditional formatting rules for row 10 and 11 of a sheet. The first rule states that cells in that row have their backgrounds colored according to their value. The lowest value in the row will be colored dark red, while the highest value will be colored bright green. The color of other values will be determined by interpolation. - - ```json - { - "clientId":"617729022812-vjo2edd0i4bcb38ifu4qg17ke5nn6f2m.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/Si2q4aOZsaMlYW7bBIoO-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-CA9sR2IXoOaVg9fpRwf8fEhF8lqfOJL1FpRihUlNxEa8kw-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "addConditionalFormatRule": { - "rule": { - "ranges": [ - { - "sheetId": 1020069232, - "startRowIndex": 10, - "endRowIndex": 11 - } - ], - "gradientRule": { - "minpoint": { - "color": { - "green": 0.2, - "red": 0.8 - }, - "type": "MIN" - }, - "maxpoint": { - "color": { - "green": 0.9 - }, - "type": "MAX" - } - } - }, - "index": 0 - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.deleteConditionalFormatRuleBatchRequest" - The deleteConditionalFormatRuleBatchRequest method allows you to deletes a conditional format rule at the given index,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#DeleteConditionalFormatRuleRequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. 
To perform multiple deleteConditionalFormatRule operation within the spread sheet, need to repeat `deleteConditionalFormatRule` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.deleteConditionalFormatRuleBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.deleteConditionalFormatRuleBatchRequest> - ``` - - **Sample request** - - The following request deletes the conditional formatting rule having index 0 in the sheet specified by sheetId. - - ```json - { - "clientId":"617729022812-vjo2edd0i4bcb38ifu4qg17ke5nn6f2m.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/Si2q4aOZsaMlYW7bBIoO-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-CA9sR2IXoOaVg9fpRwf8fEhF8lqfOJL1FpRihUlNxEa8kw-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "deleteConditionalFormatRule": { - "sheetId": 1020069232, - "index": 0 - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.updateDimensionPropertiesBatchRequest" - The updateDimensionPropertiesBatchRequest method allows you to updates properties of dimensions within the specified range,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#updatedimensionpropertiesrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple updateDimensionProperties operation within the spreadsheet, need to repeat `updateDimensionProperties` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.updateDimensionPropertiesBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.updateDimensionPropertiesBatchRequest> - ``` - - **Sample request** - - The following request updates the width of column A to 160 pixels. 
- - ```json - { - "clientId":"617729022812-vjo2edd0i4bcb38ifu4qg17ke5nn6f2m.apps.googleusercontent.com", - "clientSecret":"ry_AXMsEe5Sn9iVoOY7ATnb8", - "refreshToken":"1/Si2q4aOZsaMlYW7bBIoO-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-CA9sR2IXoOaVg9fpRwf8fEhF8lqfOJL1FpRihUlNxEa8kw-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "updateDimensionProperties": { - "range": { - "sheetId": 1020069232, - "dimension": "COLUMNS", - "startIndex": 0, - "endIndex": 1 - }, - "properties": { - "pixelSize": 160 - }, - "fields": "pixelSize" - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.autoResizeDimensionsBatchRequest" - The autoResizeDimensionsBatchRequest method allows you to automatically resize one or more dimensions based on the contents of the cells in that dimension,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#autoresizedimensionsrequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td>It contains data that is a kind of update to apply to a spreadsheet. To perform multiple autoResizeDimensions operation within the spread sheet, need to repeat `autoResizeDimensions` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.autoResizeDimensionsBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.autoResizeDimensionsBatchRequest> - ``` - - **Sample request** - - The following request turns on automatic resizing of columns A:C, based on the size of the column content. Automatic resizing of rows is not supported. - - ```json - { - "clientId":"617729022812-xxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-xxxxxxxxxxx-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests": [ - { - "autoResizeDimensions": { - "dimensions": { - "sheetId": 1020069232, - "dimension": "COLUMNS", - "startIndex": 0, - "endIndex": 3 - } - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.insertDimensionBatchRequest" - The insertDimensionBatchRequest method allows you to inserts rows or columns in a sheet at a particular index.,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#insertdimensionrequest). 
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>spreadsheetId</td>
-            <td>Unique value of the spreadsheet</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>requests</td>
-            <td>It contains the update to apply to a spreadsheet. To perform multiple insertDimension operations within the spreadsheet, repeat the `insertDimension` property within the `requests` property.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>fields (Outside the requests property)</td>
-            <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.insertDimensionBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.insertDimensionBatchRequest>
-    ```
-
-    **Sample request**
-
-    The following request inserts two blank columns at column C. The `inheritFromBefore` field, if true, tells the API to give the new columns or rows the same properties as the preceding row or column; otherwise the new columns or rows acquire the properties of those that follow them. `inheritFromBefore` cannot be true if inserting a row at row 1 or a column at column A.
-
-    ```json
-    {
-        "clientId":"617729022812-xxxxxxxxxxxxxxx.apps.googleusercontent.com",
-        "clientSecret":"xxxxxxxxxxxxx",
-        "refreshToken":"1/Si2q4aOZsaMlYW7bBIxxxxxxxxxxxxxxoO-fCyxRTyf-LpK6fDWF9DgcM",
-        "accessToken":"ya29.Ci-xxxxxxxxxxxxxxxxxx-kQ9Wri4bsf4TEulw",
-        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
-        "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s",
-        "requests":[
-            {
-                "insertDimension":
-                {
-                    "range":
-                    {
-                        "sheetId": 1020069232,
-                        "dimension": "COLUMNS",
-                        "startIndex": 2,
-                        "endIndex": 4
-                    },
-                    "inheritFromBefore": true
-                }
-            }
-        ],
-        "fields": "spreadsheetId"
-    }
-    ```
-    **Sample response**
-
-    ```json
-    {
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
-    }
-    ```
-
-??? note "googlespreadsheet.moveDimensionBatchRequest"
-    The moveDimensionBatchRequest method allows you to move one or more rows or columns. See [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#movedimensionrequest).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>spreadsheetId</td>
-            <td>Unique value of the spreadsheet</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>requests</td>
-            <td>It contains the update to apply to a spreadsheet. To perform multiple moveDimension operations within the spreadsheet, repeat the `moveDimension` property within the `requests` property.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>fields (Outside the requests property)</td>
-            <td>Specifying which fields to include in a partial response. For the following request only the `spreadsheetId` will be included in the response.</td>
-            <td>Optional.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <googlespreadsheet.moveDimensionBatchRequest>
-        <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-        <requests>{$ctx:requests}</requests>
-        <fields>{$ctx:fields}</fields>
-    </googlespreadsheet.moveDimensionBatchRequest>
-    ```
-
-    **Sample request**
-
-    The following request moves column A to the column D position. 
- - ```json - { - "clientId":"617729022812-xxxxxxxxxxxx.apps.googleusercontent.com", - "clientSecret":"xxxxxxxxxxxxxx", - "refreshToken":"1/xxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM", - "accessToken":"ya29.Ci-xxxxxxxxxxxx-kQ9Wri4bsf4TEulw", - "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets", - "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s", - "requests":[ - { - "moveDimension": - { - "source": - { - "sheetId": 1020069232, - "dimension": "COLUMNS", - "startIndex": 0, - "endIndex": 1 - }, - "destinationIndex": 3 - } - } - ], - "fields": "spreadsheetId" - } - ``` - **Sample response** - - ```json - { - "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA" - } - ``` - -??? note "googlespreadsheet.sortRangeBatchRequest" - The sortRangeBatchRequest method allows you to sorts data in rows based on a sort order per column,see [the Google Spreadsheet documentation](https://developers.google.com/sheets/reference/rest/v4/spreadsheets/request#sortrangerequest). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>spreadsheetId</td> - <td>Unique value of the spreadsheet</td> - <td>Yes.</td> - </tr> - <tr> - <td>requests</td> - <td> It contains data that is a kind of update to apply to a spreadsheet. To perform multiple sortRange operation within the spread sheet, need to repeat `sortRange` property within the `requests` property.</td> - <td>Yes.</td> - </tr> - <tr> - <td>fields (Outside the requests property)</td> - <td>Specifying which fields to include in a partial response.For the following request only the `spreadsheetId` will be included in the response.</td> - <td>Optional.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <googlespreadsheet.sortRangeBatchRequest> - <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId> - <requests>{$ctx:requests}</requests> - <fields>{$ctx:fields}</fields> - </googlespreadsheet.sortRangeBatchRequest> - ``` - - **Sample request** - - The following request sorts the range A1:F10, first by column B in ascending order, then by column D in descending order, then by column E in descending order. 
-
-    ```json
-    {
-        "clientId":"617729022812-xxxxxxxxxxxxxx.apps.googleusercontent.com",
-        "clientSecret":"xxxxxxxxxxxx",
-        "refreshToken":"1/xxxxxxxxxxxxx-fCyxRTyf-LpK6fDWF9DgcM",
-        "accessToken":"ya29.Ci-xxxxxxxxxxx-kQ9Wri4bsf4TEulw",
-        "apiUrl":"https://sheets.googleapis.com/v4/spreadsheets",
-        "spreadsheetId": "14PJALKcIXLr75rJWXlHhVjOt7z0Nby7AvcKXJGhMN2s",
-        "requests": [
-            {
-                "sortRange": {
-                    "range": {
-                        "sheetId": 1020069232,
-                        "startRowIndex": 0,
-                        "endRowIndex": 10,
-                        "startColumnIndex": 0,
-                        "endColumnIndex": 6
-                    },
-                    "sortSpecs": [
-                        {
-                            "dimensionIndex": 1,
-                            "sortOrder": "ASCENDING"
-                        },
-                        {
-                            "dimensionIndex": 3,
-                            "sortOrder": "DESCENDING"
-                        },
-                        {
-                            "dimensionIndex": 4,
-                            "sortOrder": "DESCENDING"
-                        }
-                    ]
-                }
-            }
-        ],
-        "fields": "spreadsheetId"
-    }
-    ```
-    **Sample response**
-
-    ```json
-    {
-        "spreadsheetId": "12KoqoxmykLLYbtsm6CEOggk5bTKMEIFGCD9EBdrXFGA"
-    }
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-example.md b/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-example.md
deleted file mode 100644
index e325fa5921..0000000000
--- a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-example.md
+++ /dev/null
@@ -1,393 +0,0 @@
-# Google Spreadsheet Connector Example
-
-The Google Sheets API lets users read and modify any aspect of a spreadsheet. The WSO2 Google Spreadsheet Connector allows you to access the Google Spreadsheet [API Version v4](https://developers.google.com/sheets/api/guides/concepts) from an integration sequence. It enables users to read and write any aspect of the spreadsheet via the spreadsheets collection, and supports both spreadsheet operations and spreadsheet data operations.
-
-## What you'll build
-
-This example explains how to use the Google Spreadsheet Connector to create a Google spreadsheet, write data to it, and read it. Further, it explains how the data in the spreadsheet can be edited.
-
-It will have three HTTP API resources, which are `insert`, `read`, and `edit`.
-
-* `/insert`: The user sends the request payload, which includes the name of the spreadsheet, the sheet names, and which data should be inserted into which sheet and cell range. This request is sent to the integration runtime by invoking the Spreadsheet API. It creates a spreadsheet with the specified data in the specified cell range.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheet-insert.png" title="Calling insert operation" width="800" alt="Calling insert operation"/>
-
-* `/read`: The user sends the request payload, which includes the spreadsheet ID, which should be obtained from calling the `insert` API resource, and the cell range to be read.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheet-read.png" title="Calling read operation" width="800" alt="Calling read operation"/>
-
-* `/edit`: The user sends the request payload, which includes the spreadsheet ID, which should be obtained from calling the `insert` API resource, and the data to be edited, which includes the values and the range.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheet-edit.png" title="Calling edit operation" width="800" alt="Calling edit operation"/>
-
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
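-
-As a quick orientation before the step-by-step build, the skeleton of the API you will end up with is sketched below. This is only a preview assembled from the complete configuration shown later in this guide; the resource bodies are elided here.
-
-```xml
-<!-- Preview only: the full version of this API appears later in this guide. -->
-<api context="/spreadsheet" name="SpreadsheetAPI" xmlns="http://ws.apache.org/ns/synapse">
-    <resource methods="POST" uri-template="/insert">
-        <!-- Creates the spreadsheet and inserts the initial data -->
-    </resource>
-    <resource methods="POST" uri-template="/read">
-        <!-- Reads a cell range from an existing spreadsheet -->
-    </resource>
-    <resource methods="POST" uri-template="/edit">
-        <!-- Edits a cell range in an existing spreadsheet -->
-    </resource>
-</api>
-```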
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-## Creating the Integration Logic
-
-1. Follow these steps to [Configure Google Sheets API]({{base_path}}/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet/) and obtain the Client Id, Client Secret, Access Token, and Refresh Token.
-
-2. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API.
-    <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-3. Provide the API name as `SpreadsheetAPI` and the API context as `/spreadsheet`.
-
-4. First, we will create the `/insert` resource. Right click on the API Resource and go to the **Properties** view. We use a URL template called `/insert` as we have multiple API resources inside a single API. The method will be `POST`.
-    <img src="{{base_path}}/assets/img/integrate/connectors/filecon-3.png" title="Adding the API resource." width="800" alt="Adding the API resource."/>
-
-5. In this operation, we are going to receive the following inputs from the user: `properties`, `sheets`, `range`, and `values`.
-    - properties - The spreadsheet properties, such as the title of the spreadsheet.
-    - sheets - The set of sheets to be created.
-    - range - The sheet name and the cell range into which the data needs to be inserted.
-    - values - The data to be inserted.
-
-6. The above four parameters are saved to a property group. Drag and drop the Property Group mediator onto the canvas in the design view and configure it as shown below. For further reference, you can read about the [Property Group mediator]({{base_path}}/reference/mediators/property-group-mediator). You can add a set of properties as shown below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon1.png" title="Adding a property into a property group" width="800" alt="Adding a property"/>
-
-7. Once all the properties are added to the Property Group mediator, it looks as below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon2.png" title="Property Group Mediator" width="800" alt="Property Group Mediator"/>
-
-8. The `createSpreadsheet` operation is going to be added as a separate sequence. Right click on the created Integration Project and select **New** -> **Sequence** to create the `createSpreadsheet` sequence.
-
-9. Drag and drop the **init** operation of the Google Spreadsheet Connector as shown below. Fill in the following values that you obtained in step 1.
-    - accessToken
-    - apiUrl: https://sheets.googleapis.com/v4/spreadsheets
-    - clientId
-    - clientSecret
-    - refreshToken
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon3.png" title="init operation" width="800" alt="init operation"/>
-
-10. Next, drag and drop the **createSpreadsheet** operation onto the canvas. The parameter values are defined in steps 6 and 7 in the property group.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon4.png" title="Parameters" width="800" alt="Parameters"/>
-
-11. The complete XML configuration of the `createSpreadsheet.xml` file looks as below.
-```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <sequence name="createSpreadsheet" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
-        <googlespreadsheet.init>
-            <accessToken></accessToken>
-            <apiUrl>https://sheets.googleapis.com/v4/spreadsheets</apiUrl>
-            <clientId></clientId>
-            <clientSecret></clientSecret>
-            <refreshToken></refreshToken>
-        </googlespreadsheet.init>
-        <googlespreadsheet.createSpreadsheet>
-            <properties>{$ctx:properties}</properties>
-            <sheets>{$ctx:sheets}</sheets>
-        </googlespreadsheet.createSpreadsheet>
-    </sequence>
-```
-
-12. Next, we need to create the `addData.xml` sequence in the same way. As explained in step 8, create a sequence by right clicking the Integration Project that has already been created.
-
-13. Below is the complete XML configuration for the `addData.xml` file.
-```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <sequence name="addData" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
-        <property expression="json-eval($.spreadsheetId)" name="spreadsheetId" scope="default" type="STRING"/>
-        <googlespreadsheet.init>
-            <accessToken></accessToken>
-            <apiUrl>https://sheets.googleapis.com/v4/spreadsheets</apiUrl>
-            <clientId></clientId>
-            <clientSecret></clientSecret>
-            <refreshToken></refreshToken>
-        </googlespreadsheet.init>
-        <googlespreadsheet.addRowsColumnsData>
-            <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-            <range>{$ctx:range}</range>
-            <insertDataOption>INSERT_ROWS</insertDataOption>
-            <valueInputOption>RAW</valueInputOption>
-            <majorDimension>ROWS</majorDimension>
-            <values>{$ctx:values}</values>
-        </googlespreadsheet.addRowsColumnsData>
-    </sequence>
-```
-
-14. Now go back to the `SpreadsheetAPI.xml` file, and from **Defined Sequences** drag and drop the **createSpreadsheet** sequence, the **addData** sequence, and finally the Respond mediator onto the canvas. Now we are done with creating the first API resource, and it is displayed as shown below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon5.png" title="insert operation xml config" width="800" alt="insert operation xml config"/>
-
-15. Create the next API resource, which is `/read`. From this, we are going to read the specified spreadsheet data. Use the URL template `/read`. The method will be POST.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/apiresource.jpg" title="Adding an API resource" width="800" alt="Adding an API resource"/>
-
-16. Let's create the `readData.xml` sequence. The complete XML configuration looks as below.
-```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <sequence name="readData" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
-        <property expression="json-eval($.spreadsheetId)" name="spreadsheetId" scope="default" type="STRING"/>
-        <property expression="json-eval($.range)" name="range" scope="default" type="STRING"/>
-        <googlespreadsheet.init>
-            <accessToken></accessToken>
-            <apiUrl>https://sheets.googleapis.com/v4/spreadsheets</apiUrl>
-            <clientId></clientId>
-            <clientSecret></clientSecret>
-            <refreshToken></refreshToken>
-        </googlespreadsheet.init>
-        <googlespreadsheet.getCellData>
-            <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-            <range>{$ctx:range}</range>
-            <dateTimeRenderOption>SERIAL_NUMBER</dateTimeRenderOption>
-            <majorDimension>ROWS</majorDimension>
-            <valueRenderOption>UNFORMATTED_VALUE</valueRenderOption>
-        </googlespreadsheet.getCellData>
-    </sequence>
-```
-
-17. In this operation, the user sends the spreadsheetId and range as the request payload. They will be written to properties as we did in step 10.
-
-18. Go back to the SpreadsheetAPI. Drag and drop the `readData` sequence from the **Defined Sequences** onto the canvas, followed by a Respond mediator.
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon6.png" title="Adding the read resource" width="800" alt="Adding read resource"/>
-
-19. Next, go to the SpreadsheetAPI. To create the next API resource, drag and drop another API resource to the design view. Use the URL template `/edit`. The method will be POST.
-
-20. Create the sequence `editSpreadsheet.xml`, which looks as below.
-```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <sequence name="editSpreadsheet" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
-        <property expression="json-eval($.spreadsheetId)" name="spreadsheetId" scope="default" type="STRING"/>
-        <property expression="json-eval($.data)" name="data" scope="default" type="STRING"/>
-        <googlespreadsheet.init>
-            <accessToken></accessToken>
-            <apiUrl>https://sheets.googleapis.com/v4/spreadsheets</apiUrl>
-            <clientId></clientId>
-            <clientSecret></clientSecret>
-            <refreshToken></refreshToken>
-        </googlespreadsheet.init>
-        <googlespreadsheet.editMultipleCell>
-            <spreadsheetId>{$ctx:spreadsheetId}</spreadsheetId>
-            <valueInputOption>RAW</valueInputOption>
-            <data>{$ctx:data}</data>
-        </googlespreadsheet.editMultipleCell>
-    </sequence>
-```
-21. Go back to the SpreadsheetAPI. Drag and drop the `editSpreadsheet` sequence from the **Defined Sequences** onto the canvas, followed by a Respond mediator.
-    <img src="{{base_path}}/assets/img/integrate/connectors/sheetcon7.png" title="Adding the edit resource" width="800" alt="Adding edit resource"/>
-
-22. Below is the complete XML configuration of the SpreadsheetAPI.
-```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/spreadsheet" name="SpreadsheetAPI" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST" uri-template="/insert">
-            <inSequence>
-                <propertyGroup description="It contains the set of properties related to spreadsheet creation and addData operations.">
-                    <property expression="json-eval($.properties)" name="properties" scope="default" type="STRING"/>
-                    <property expression="json-eval($.sheets)" name="sheets" scope="default" type="STRING"/>
-                    <property expression="json-eval($.range)" name="range" scope="default" type="STRING"/>
-                    <property expression="json-eval($.values)" name="values" scope="default" type="STRING"/>
-                </propertyGroup>
-                <sequence description="This sequence will create a spreadsheet and output the spreadsheet url." key="createSpreadsheet"/>
-                <sequence description="This sequence will insert the data to the created spreadsheet." key="addData"/>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-        <resource methods="POST" uri-template="/read">
-            <inSequence>
-                <sequence description="This sequence will read data of the spreadsheet." key="readData"/>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-        <resource methods="POST" uri-template="/edit">
-            <inSequence>
-                <sequence key="editSpreadsheet"/>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-    </api>
-```
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/google-spreadsheet-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the value of the access token and make other such changes before deploying and running this project.
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-### Spreadsheet Insert Operation
-
-Invoke the SpreadsheetAPI with the following URL. An application such as [Postman](https://www.postman.com/) can be used to invoke the API.
-
-```
- Resource method: POST
- URL: http://localhost:8290/spreadsheet/insert
-```
-
- ```
- {
-    "properties":{
-        "title": "Company"
-    },
-    "sheets":[
-        {
-            "properties":
-            {
-                "title": "Employees"
-            }
-        },
-        {
-            "properties":
-            {
-                "title": "Hector"
-            }
-        }
-    ],
-    "range":"Employees!A1:C3",
-    "values":[
-        [
-            "First Name",
-            "Last Name",
-            "Gender"
-        ],
-        [
-            "John",
-            "Doe",
-            "Male"
-        ],
-        [
-            "Leon",
-            "Wins",
-            "Female"
-        ]
-    ]
- }
- ```
-**Expected Response**:
-You should get a success response like the one below, and a spreadsheet with the ID given in the response should be created, with the data inserted.
-
-```
- {
-    "spreadsheetId": "1ddnO00fcjuLvEMCUORVjYQ4C0VLeAPNGmcvSvELHbPU",
-    "updates": {
-        "spreadsheetId": "1ddnO00fcjuLvEMCUORVjYQ4C0VLeAPNGmcvSvELHbPU",
-        "updatedRange": "Employees!A1:C3",
-        "updatedRows": 3,
-        "updatedColumns": 3,
-        "updatedCells": 9
-    }
- }
-```
-
-### Spreadsheet Read Operation
-
-Invoke the SpreadsheetAPI with the following URL. An application such as [Postman](https://www.postman.com/) can be used to invoke the API. Obtain the spreadsheet ID from the response of the insert operation above.
-```
- Resource method: POST
- URL: http://localhost:8290/spreadsheet/read
-```
-
- ```
- {
-    "spreadsheetId":"1Ht0FWeKtKqBb1pEEzLcRMM8s5mktJdhivX3iaFXo-qQ",
-    "range":"Employees!A1:C3"
- }
- ```
-
-**Expected Response**:
-You should get the following response returned.
-```
- {
-    "range": "Employees!A1:C3",
-    "majorDimension": "ROWS",
-    "values": [
-        [
-            "First Name",
-            "Last Name",
-            "Gender"
-        ],
-        [
-            "John",
-            "Doe",
-            "Male"
-        ],
-        [
-            "Leon",
-            "Wins",
-            "Female"
-        ]
-    ]
- }
-```
-
-### Spreadsheet Edit Operation
-
-Invoke the SpreadsheetAPI with the following URL. An application such as [Postman](https://www.postman.com/) can be used to invoke the API. Obtain the spreadsheet ID from the response of the insert operation above.
-```
- Resource method: POST
- URL: http://localhost:8290/spreadsheet/edit
-```
-
- ```
- {
-    "spreadsheetId":"1Ht0FWeKtKqBb1pEEzLcRMM8s5mktJdhivX3iaFXo-qQ",
-    "data": [
-        {
-            "values": [["Isuru","Uyanage","Female"],["Supun","Silva","Male"]],
-            "range": "Employees!A6"
-        }
-    ]
- }
- ```
-
-**Expected Response**:
-You should get the following response returned.
-
-```
- {
-    "spreadsheetId": "1Ht0FWeKtKqBb1pEEzLcRMM8s5mktJdhivX3iaFXo-qQ",
-    "totalUpdatedRows": 2,
-    "totalUpdatedColumns": 3,
-    "totalUpdatedCells": 6,
-    "totalUpdatedSheets": 1,
-    "responses": [
-        {
-            "spreadsheetId": "1Ht0FWeKtKqBb1pEEzLcRMM8s5mktJdhivX3iaFXo-qQ",
-            "updatedRange": "Employees!A6:C7",
-            "updatedRows": 2,
-            "updatedColumns": 3,
-            "updatedCells": 6
-        }
-    ]
- }
-```
-The spreadsheet should be edited within the cell range specified above.
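-
-In all of the sequences above, the credential values inside `googlespreadsheet.init` are left empty. A minimal sketch of one way to avoid re-typing them in every sequence is shown below; the property names (`gsheetAccessToken`, and so on) are illustrative placeholders rather than part of the original project, and the `{$ctx:...}` expressions follow the same convention used elsewhere in this connector's documentation.
-
-```xml
-<!-- Illustrative sketch: set the credentials once as properties, then
-     reference them from the init block. Replace the placeholder values
-     with the credentials obtained in step 1. -->
-<property name="gsheetAccessToken" value="REPLACE_WITH_ACCESS_TOKEN" scope="default" type="STRING"/>
-<property name="gsheetClientId" value="REPLACE_WITH_CLIENT_ID" scope="default" type="STRING"/>
-<property name="gsheetClientSecret" value="REPLACE_WITH_CLIENT_SECRET" scope="default" type="STRING"/>
-<property name="gsheetRefreshToken" value="REPLACE_WITH_REFRESH_TOKEN" scope="default" type="STRING"/>
-<googlespreadsheet.init>
-    <accessToken>{$ctx:gsheetAccessToken}</accessToken>
-    <apiUrl>https://sheets.googleapis.com/v4/spreadsheets</apiUrl>
-    <clientId>{$ctx:gsheetClientId}</clientId>
-    <clientSecret>{$ctx:gsheetClientSecret}</clientSecret>
-    <refreshToken>{$ctx:gsheetRefreshToken}</refreshToken>
-</googlespreadsheet.init>
-```
-
-With this approach, an expired access token only needs to be updated in one place.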
-
-## What's Next
-
-* To customize this example for your own scenario, see the [Google Spreadsheet Connector Configuration]({{base_path}}/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-overview.md b/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-overview.md
deleted file mode 100644
index ad736a5cc2..0000000000
--- a/en/docs/reference/connectors/google-spreadsheet-connector/google-spreadsheet-overview.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Google Spreadsheet Connector Overview
-
-The Google Sheets API lets users read and modify any aspect of a spreadsheet. The WSO2 Google Spreadsheet Connector allows you to access the [Google Spreadsheet API Version v4](https://developers.google.com/sheets/api/guides/concepts) through an integration sequence. It enables users to read and write any aspect of the spreadsheet via the spreadsheets collection.
-
-To see the Google Spreadsheet Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "google".
-
-<img src="{{base_path}}/assets/img/integrate/connectors/google-spreadsheet-store.png" title="Google Spreadsheet Connector Store" width="200" alt="Google Spreadsheet Connector Store"/>
-
-## Compatibility
-
-| Connector Version | Supported product versions |
-| ------------- |-------------|
-| 3.0.1 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
-
-For older versions, see the details in the connector store.
-
-## Google Spreadsheet Connector documentation
-
-* **[Get Credentials for Google Spreadsheet]({{base_path}}/reference/connectors/google-spreadsheet-connector/get-credentials-for-google-spreadsheet/)**: You need to obtain the Access Token, Client Id, Client Secret, and Refresh Token in order to integrate with Google Spreadsheet.
-
-* **[Google Spreadsheet Connector Example]({{base_path}}/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-example/)**: This example explains how to use the Google Spreadsheet Connector to create a Google spreadsheet, write data to it, and read it.
-
-* **[Google Spreadsheet Connector Reference]({{base_path}}/reference/connectors/google-spreadsheet-connector/google-spreadsheet-connector-config/)**: This documentation provides a reference guide for the Google Spreadsheet Connector.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in the following repository.
-
-* [Google Spreadsheet Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-googlespreadsheet)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-configuration.md b/en/docs/reference/connectors/iso8583-connector/iso8583-connector-configuration.md
deleted file mode 100644
index 94f1fa69b0..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-configuration.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Setting up ISO8583 Connector
-
-ISO8583 is an international standard for financial transaction messaging: it is the International Organization for Standardization standard for systems that exchange electronic transactions initiated by cardholders using payment cards.
-
-Typically, whenever we use a credit card, debit card, or ATM card, the data travels from one system to another. A card-based transaction typically needs to travel between a number of systems. The WSO2 ISO8583 connector allows you to work with this common transaction messaging standard.
-
-## Setting up the environment
-
-Before you start configuring the ISO8583 connector, you need WSO2 MI; we refer to its installation location as <PRODUCT_HOME>.
-
-To configure the ISO8583 connector, copy the following client libraries from the given locations to the `<PRODUCT_HOME>/repository/components/lib` directory.
-
-* [jpos-1.9.4.jar](http://mvnrepository.com/artifact/org.jpos/jpos/1.9.4)
-* [jdom-1.1.3.jar](http://mvnrepository.com/artifact/org.jdom/jdom/1.1.3)
-* [commons-cli-1.3.1.jar](http://mvnrepository.com/artifact/commons-cli/commons-cli/1.3.1)
-
-## Configure the test server
-
-For testing purposes, you need to have a test server (basically a Java socket connection that listens on port 5010) to handle ISO8583 requests that come from the connector. You also need to generate responses by changing the relevant response fields, and then send the responses back to the connector. You can test the connector with the sample Java server program that is provided in the following [git location](https://github.com/wso2-docs/CONNECTORS/tree/master/ISO8583/ISO8583TestServer). To test the ISO8583 Inbound operation scenario, you can use the sample Java client program that is provided in the following [git location](https://github.com/wso2-docs/CONNECTORS/tree/master/ISO8583/ISO8583TestClient/1.0.0).
-
-You can include required header information within the header tag. It supports 2-byte or 4-byte headers. To include header information, you need to convert the 2-byte or 4-byte header into a string using base64 encoding, and then specify the string value within the header tag. For more information on the ISO8583 standard, see the [ISO8583 Documentation](https://en.wikipedia.org/wiki/ISO_8583).
-
-If you use the [sample Java server program](https://github.com/wso2-docs/CONNECTORS/tree/master/ISO8583/ISO8583TestServer) to send an ISO8583 request with a header value from the connector, you need to update the iso87ascii.xml file with the relevant headerLength information.
-
-The ISO8583 connector uses the jpos library, which is a third-party library that provides a high-performance bridge between card messages generated at point-of-sale terminals, ATMs, and internal systems across the entire financial messaging network. The jposdef.xml file has the field definitions of standard ISO8583 messages. According to the field definitions, each ISO8583 message in XML format coming from the REST client is packed and sent to the test server. Therefore, you need to create a file called jposdef.xml (with the contents given [here](https://github.com/wso2-extensions/esb-connector-iso8583/blob/master/src/main/resources/jposdef.xml)) in the <PRODUCT_HOME> directory.
-
-Now you have connected to the test server. For more information, see [ISO8583 Connector Example]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-example/).
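-
-With the libraries and the test server in place, the smallest mediation that exercises the connector is an `init` call pointing at the test server, followed by `sendMessage`. The sketch below is only an illustration (the sequence name is a hypothetical example); the host and port match the test server described above, and the complete API-based walkthrough is in the example page linked above.
-
-```xml
-<sequence name="sendISO8583Smoke" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Connect to the local test server described on this page -->
-    <iso8583.init>
-        <serverHost>localhost</serverHost>
-        <serverPort>5010</serverPort>
-    </iso8583.init>
-    <!-- Send the ISO8583 message found in the current message payload -->
-    <iso8583.sendMessage/>
-    <respond/>
-</sequence>
-```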
\ No newline at end of file
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-example.md b/en/docs/reference/connectors/iso8583-connector/iso8583-connector-example.md
deleted file mode 100644
index 22b9f8c739..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-example.md
+++ /dev/null
@@ -1,94 +0,0 @@
-# ISO8583 Connector Example
-
-Given below is a sample scenario that demonstrates how the WSO2 ISO8583 Connector sends an ISO8583 message to financial networks using the integration runtime of WSO2.
-
-## What you'll build
-
-This example demonstrates how to expose core banking system functionality that works with the ISO8583 protocol as an API. Here, the integration runtime acts as an ISO8583 terminal for the banking network. In this scenario, a test mock server is used to mock the banking network.
-
-Given below is a sample API that illustrates how you can configure ISO8583 with the `init` operation and then use the `iso8583.sendMessage` operation to send an ISO8583 message for financial transactions.
-
-For further information about the `init` and `iso8583.sendMessage` operations, see the [ISO8583 Connector Reference]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-reference/).
-
-<img src="{{base_path}}/assets/img/integrate/connectors/iso8583-connector.png" title="ISO8583 Connector" width="800" alt="ISO8583 Connector"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-1. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API.
-
-2. Specify the API name as `SendisoTestAPI` and the API context as `/sendiso`. Go to the source view of the XML configuration file of the API and copy the following configuration.
-
-    ```
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/sendiso" name="SendisoTestAPI" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST">
-            <inSequence>
-                <log>
-                    <property name="status" value="Sending_an_ISO8583_Messsage"/>
-                </log>
-                <iso8583.init>
-                    <serverHost>localhost</serverHost>
-                    <serverPort>5010</serverPort>
-                </iso8583.init>
-                <iso8583.sendMessage/>
-                <respond/>
-            </inSequence>
-            <outSequence>
-                <log/>
-                <send/>
-            </outSequence>
-            <faultSequence/>
-        </resource>
-    </api>
-    ```
-Now we can export the imported connector and the API into a single CAR application. The CAR application is what we deploy to the server runtime.
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/iso8583-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-Invoke the API as shown below using the curl command. curl can be downloaded from [here](https://curl.haxx.se/download.html).
-
-    ```
-    curl -v -X POST -d '<ISOMessage>
-        <header>AAAAaw==</header>
-        <data>
-            <field id="104">000001161204171926FABCDE123ABD06414243</field>
-            <field id="109">000termid1210Community106A5DFGR1112341234234</field>
-            <field id="125">1048468112122012340000100000001107221800</field>
-            <field id="127">01581200F230040102B000000000000004000000</field>
-        </data>
-    </ISOMessage>' "http://localhost:8290/sendiso" -H "Content-Type:application/xml"
-    ```
-**Expected Response**:
-
-    ```
-    <ISOMessage>
-        <header>MDIxMA==</header>
-        <data>
-            <field id="0">8000</field>
-            <field id="23">000</field>
-        </data>
-    </ISOMessage>
-    ```
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-overview.md b/en/docs/reference/connectors/iso8583-connector/iso8583-connector-overview.md
deleted file mode 100644
index 065cfc620b..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-overview.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# ISO8583 Connector Overview
-
-The ISO8583 message format is used for financial transactions such as ATM, POS, credit card, mobile banking, internet banking, kiosk, and e-commerce transactions.
-
-A financial transaction involves communication between two systems through a socket connection. After the connection is established, each system can send messages in ISO8583 format; typically, one system sends a request and the other system sends a response.
-
-For example, a purchase made in a store may travel from the merchant terminal through intermediary systems, such as banking networks, to the issuing bank where the cardholder's account is held.
-Cardholder-originated transactions include purchase, withdrawal, deposit, refund, reversal, balance inquiry, payments, and inter-account transfers. ISO8583 also defines system-to-system messages for secure key exchanges, reconciliation of totals, and other administrative purposes. The response authorizing or declining the transaction needs to be returned to the terminal by the same route.
-
-To see the ISO8583 Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "ISO8583".
-
-<img src="{{base_path}}/assets/img/integrate/connectors/iso8583-store.png" title="ISO8583 Connector Store" width="200" alt="ISO8583 Connector Store"/>
-
-## Compatibility
-
-| Connector Version | Supported product versions |
-| ------------- |-------------|
-| 1.0.3 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
-
-For older versions, see the details in the connector store.
-
-## ISO8583 Connector documentation
-
-The ISO8583 Connector allows you to send ISO8583 standard messages from an integration sequence. ISO8583 is an international messaging standard for financial transaction card originated messages, and is commonly used in transactions between devices such as point-of-sale (POS) terminals and automated teller machines (ATMs). Although there are various versions of the ISO8583 standard, this connector is developed based on the 1987 version.
-
-* **[Setting up ISO8583 Connector]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-configuration/)**: This includes instructions on setting up the environment and the test server in order to try this out.
-
-* **[ISO8583 Connector Example]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-example/)**: This example demonstrates how to expose core banking system functionality working with the ISO8583 protocol as an API.
-
-* **[ISO8583 Connector Reference]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-reference/)**: This documentation provides a reference guide for the ISO8583 Connector.
-
-## ISO8583 Inbound Endpoint documentation
-
-The ISO8583 inbound endpoint acts as a message consumer. It is bundled with the ISO8583 connector and can be obtained from the connector store. The ISO8583 inbound endpoint supported via the integration runtime of WSO2 is a listening inbound endpoint that can consume ISO8583 standard messages. The ISO8583 connector allows outbound messages from the integration runtime to third-party applications, while the inbound endpoint only allows incoming messages. The inbound endpoint converts the messages to XML format and injects them into a sequence.
-
-* **[ISO8583 Inbound Endpoint Example]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-inbound-endpoint-example/)**: This example demonstrates how the ISO8583 inbound endpoint works as an ISO8583 message consumer.
-
-* **[ISO8583 Inbound Endpoint Reference]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-inbound-endpoint-config/)**: This documentation provides a reference guide for the ISO8583 Inbound Endpoint.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in one of the following repositories.
-
-* [ISO8583 Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-iso8583)
-* [ISO8583 Inbound Endpoint GitHub repository](https://github.com/wso2-extensions/esb-inbound-iso8583)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-reference.md b/en/docs/reference/connectors/iso8583-connector/iso8583-connector-reference.md
deleted file mode 100644
index 06722d75ff..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-connector-reference.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# ISO8583 Connector Reference
-
-The following operations allow you to work with the ISO8583 Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-## Initialize the connector
-
-To use the ISO8583 connector, add the <iso8583.init> element to your configuration before connecting with the test server.
-
-??? note "init"
-    The init operation is used to initialize the connection to the ISO8583 server.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>serverHost</td>
-            <td>The host name of the ISO8583 server. In this example, the host is localhost.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>serverPort</td>
-            <td>The port of the ISO8583 server. In this example, the port is 5010; the test server starts listening on that port.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <iso8583.init>
-        <serverHost>{$ctx:serverHost}</serverHost>
-        <serverPort>{$ctx:serverPort}</serverPort>
-    </iso8583.init>
-    ```
-
-To send the messages, use the <iso8583.sendMessage/> operation, and use a REST client to send the XML format messages. In the REST client, set the Content-Type header to application/xml.
-
-POST the body in XML format; the message should have the following structure.
-
-```xml
-<ISOMessage>
-    <data>
-        <field id="0">0200</field>
-        <field id="3">568893</field>
-        <field id="4">000000020000</field>
-        <field id="7">0110563280</field>
-        <field id="11">456893</field>
-        <field id="44">DFGHT</field>
-        <field id="105">ABCDEFGHIJ 9871236548</field>
-    </data>
-</ISOMessage>
-```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-config.md b/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-config.md
deleted file mode 100644
index dd5f81e6da..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-config.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# ISO8583 Inbound Endpoint Reference
-
-The following operations allow you to work with the ISO8583 Inbound Endpoint. Click an operation name to see parameter details and samples on how to use it.
-
-The ISO8583 inbound endpoint allows ISO8583 standard messages to be received through the WSO2 integration runtime. ISO8583 is a message standard that is used in financial transactions. There are various versions of the ISO8583 standard; the inbound endpoint is developed based on the 1987 version. For more information about the ISO8583 standard, see the [ISO8583 Documentation](https://en.wikipedia.org/wiki/ISO_8583).
-
-The WSO2 ISO8583 inbound endpoint acts as a message consumer. Since it is a listening inbound endpoint, it listens on port 5000. When a client connects on port 5000, the WSO2 ISO8583 inbound endpoint starts to consume the ISO8583 standard messages and injects them, in XML format, into a sequence.
-
-In order to use the ISO8583 inbound endpoint, you need to do the following:
-
-- Download the inbound `org.wso2.carbon.inbound.iso8583-1.0.0.jar` file from [https://store.wso2.com/store/assets/esbconnector/ISO8583](https://store.wso2.com/store/assets/esbconnector/ISO8583).
-- Download `jpos-1.9.4.jar` from [http://mvnrepository.com/artifact/org.jpos/jpos/1.9.4](http://mvnrepository.com/artifact/org.jpos/jpos/1.9.4).
-- Download `jdom-1.1.3.jar` from [http://mvnrepository.com/artifact/org.jdom/jdom/1.1.3](http://mvnrepository.com/artifact/org.jdom/jdom/1.1.3).
-- Download `commons-cli-1.3.1.jar` from [http://mvnrepository.com/artifact/commons-cli/commons-cli/1.3.1](http://mvnrepository.com/artifact/commons-cli/commons-cli/1.3.1).
-
-Copy the .jar files to the <PRODUCT_HOME>/lib directory.
-
-> **Note**: `jpos` is a third-party library, and `jposdef.xml` has the field definitions of the standard ISO8583 messages. According to the field definitions, each and every ISO8583 message that comes from the client will be unpacked and the fields of the ISO8583 standard messages will be identified.
-
-To handle concurrent messages in the ISO8583 inbound endpoint, you need to create a thread pool, which can contain a varying number of threads. The number of threads in the pool is determined by these variables:
-
-- `corePoolSize`: The number of threads to keep in the pool, even if they are idle.
-- `maximumPoolSize`: The maximum number of threads to allow in the pool.
-
-Another parameter in the `threadPool` configuration is `keepAliveTime`: the maximum time that excess idle threads will wait for new tasks before terminating.
-
-<table>
-    <tr>
-        <th>Parameter Name</th>
-        <th>Description</th>
-        <th>Required</th>
-    </tr>
-    <tr>
-        <td>port</td>
-        <td>The port on which the inbound endpoint listens for socket connections; the server starts listening on this port once the socket connection is established. Possible values are 0-65535, and the default is 5000.</td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>coreThreads</td>
-        <td>The number of threads to keep in the pool.</td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>maxThreads</td>
-        <td>The maximum number of threads to allow in the pool.</td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>keepAliveTime</td>
-        <td>If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime.</td>
-        <td>Yes</td>
-    </tr>
-</table>
-
-**Sample configuration**
-
-```xml
-<inboundEndpoint
-    class="org.wso2.carbon.inbound.iso8583.listening.ISO8583MessageConsumer"
-    name="custom_listener" onError="fault" sequence="request" suspend="false">
-    <parameters>
-        <parameter name="sequential">true</parameter>
-        <parameter name="inbound.behavior">listening</parameter>
-        <parameter name="port">5000</parameter>
-    </parameters>
-</inboundEndpoint>
-```
-
-> **Note**: To send ISO8583 standard messages to an inbound endpoint, you can use Java client applications. The client needs to produce the ISO8583 standard messages and get the acknowledgement from the inbound endpoint.
-
-A sample test client program is provided in https://github.com/wso2-docs/CONNECTORS/tree/master/ISO8583/ISO8583TestClient. You can use this sample client to test the inbound endpoint.
\ No newline at end of file
diff --git a/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-example.md b/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-example.md
deleted file mode 100644
index 712adc4a0d..0000000000
--- a/en/docs/reference/connectors/iso8583-connector/iso8583-inbound-endpoint-example.md
+++ /dev/null
@@ -1,107 +0,0 @@
-# ISO8583 Inbound Endpoint Example
-
-In the real world, financial transactions take place among thousands of banking systems and networks. In such situations, one system needs to act as a message publisher, and another system needs to be capable of receiving messages. Once a message is received, further processing actions are performed based on the logic that is implemented in the internal system.
-
-The ISO8583 inbound endpoint of WSO2 acts as a message consumer. The ISO8583 inbound endpoint is a listening inbound endpoint that can consume ISO8583 standard messages. It then converts the messages to XML format and injects them into a sequence in the integration runtime.
-
-## What you'll build
-
-This scenario demonstrates how the ISO8583 inbound endpoint works as an ISO8583 message consumer. In this scenario, a sample Java test client program is used to generate ISO8583 messages, simulating the functionality of the banking network.
-
-The ISO8583 inbound endpoint listens on port 5000 and acts as an ISO8583 standard message consumer. When a sample Java client connects on port 5000, the ISO8583 inbound endpoint consumes the ISO8583 standard messages, converts the messages to XML format, and then injects them into a sequence in the integration runtime.
-
-See [ISO8583 connector configuration]({{base_path}}/reference/connectors/ISO8583-connector/ISO8583-connector-configuration/) for more information. However, for the simplicity of this example, we will just log the message. You can extend the sample as required using WSO2 [mediators]({{base_path}}/reference/mediators/about-mediators).
-
-The following diagram illustrates all the required functionality of the ISO8583 inbound operations that you are going to build.
-
-For example, while transferring bank and financial sector information among banking networks using the ISO8583 message format, the messages can be received using inbound endpoints. The ISO8583 inbound endpoint of WSO2 acts as an ISO8583 message receiver. You can inject the received messages into the mediation flow to get the required output.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/iso8583-inbound-operations.png" title="ISO8583 inbound operations" width="800" alt="ISO8583 inbound operations"/>
-
-## Configure inbound endpoint using WSO2 Integration Studio
-
-1. Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as below.
-<img src="{{base_path}}/assets/img/integrate/connectors/integration-project.png" title="Creating a new Integration Project" width="800" alt="Creating a new Integration Project" />
-
-2. Right click on **Source** -> **main** -> **synapse-config** -> **inbound-endpoints** and add a new **custom inbound endpoint**.<br/>
-<img src="{{base_path}}/assets/img/integrate/connectors/db-event-inbound-ep.png" title="Creating inbound endpoint" width="400" alt="Creating inbound endpoint" style="border:1px solid black"/>
-
-3. Click on **Inbound Endpoint** in the design view, and under the **Properties** tab, update the class name to `org.wso2.carbon.inbound.iso8583.listening.ISO8583MessageConsumer`.
-
-4. Navigate to the source view and update it with the following configuration as required.
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?><inboundEndpoint xmlns="http://ws.apache.org/ns/synapse" name="custom_listener" sequence="requestISO" onError="fault" class="org.wso2.carbon.inbound.iso8583.listening.ISO8583MessageConsumer" suspend="false">
-        <parameters>
-            <parameter name="inbound.behavior">listening</parameter>
-            <parameter name="sequential">true</parameter>
-            <parameter name="coordination">true</parameter>
-            <parameter name="port">5000</parameter>
-            <parameter name="isProxy">false</parameter>
-        </parameters>
-    </inboundEndpoint>
-    ```
-    Create the following sequence to process the message.
-
-    In this example, for simplicity, we will just log the message; in a real-world use case, this can be any type of message mediation.
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?><sequence xmlns="http://ws.apache.org/ns/synapse" name="requestISO" onError="fault">
-        <log level="full">
-            <property name="Log_Message for ISO8583 Inbound Endpoint" value="Message received from sample1-source"/>
-        </log>
-    </sequence>
-    ```
-## Exporting Integration Logic as a CApp
-
-**CApp (Carbon Application)** is the deployable artifact on the integration runtime. Let us see how we can export the integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` needs to be created. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can specifically create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**.
-
-1. Right click on the Composite Application Project and click on **Export Composite Application Project**.<br/>
-    <img src="{{base_path}}/assets/img/integrate/connectors/capp-project1.jpg" title="Export as a Carbon Application" width="300" alt="Export as a Carbon Application" />
-
-2. Select an **Export Destination** where you want to save the .car file.
-
-3. In the next **Create a deployable CAR file** screen, select the inbound endpoint and sequence artifacts, and click **Finish**. The CApp will be created at the location provided in the previous step.
-
-## Deployment
-
-1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `ISO8583`. Click on `ISO8583 Inbound Endpoint` and download the .jar file by clicking on `Download Inbound Endpoint`. Copy this .jar file into the <PRODUCT-HOME>/lib folder.
-
-2. Download [jpos-1.9.4.jar](http://mvnrepository.com/artifact/org.jpos/jpos/1.9.4), [jdom-1.1.3.jar](http://mvnrepository.com/artifact/org.jdom/jdom/1.1.3), and [commons-cli-1.3.1.jar](http://mvnrepository.com/artifact/commons-cli/commons-cli/1.3.1) and add them to the <PRODUCT-HOME>/lib folder.
-
-3. Copy the exported carbon application to the <PRODUCT-HOME>/repository/deployment/server/carbonapps folder.
-
-4. Start the integration server.
-
-## Testing
-
-1. Run the test client program. Use an ISO8583 standard message as input:
-
-    ```
-    0200B220000100100000000000000002000020134500000050000001115221801234890610000914XYRTUI5269TYUI021ABCDEFGHIJ 1234567890
-    ```
-
-    **Expected response**
-
-    ```
-    [2020-03-26 15:47:26,003]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:FB34DB1FB26FB57D561585217845823, Direction: request, Log_Message for ISO8583 Inbound Endpoint = Message received from sample1-source, Envelope:
-    <?xml version="1.0" encoding="UTF-8"?>
-    <soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope">
-        <soapenv:Body>
-            <ISOMessage>
-                <header>AHc=</header>
-                <data>
-                    <field id="0">0200</field>
-                    <field id="3">201345</field>
-                    <field id="4">000000500000</field>
-                    <field id="7">0111522180</field>
-                    <field id="11">123489</field>
-                    <field id="32">100009</field>
-                    <field id="44">XYRTUI5269TYUI</field>
-                    <field id="111">ABCDEFGHIJ 1234567890</field>
-                </data>
-            </ISOMessage>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-
diff --git a/en/docs/reference/connectors/jira-connector/jira-connector-config.md b/en/docs/reference/connectors/jira-connector/jira-connector-config.md
deleted file mode 100644
index 0a94e2857d..0000000000
--- a/en/docs/reference/connectors/jira-connector/jira-connector-config.md
+++ /dev/null
@@ -1,4317 +0,0 @@
-# Jira Connector Reference
-
-The following operations allow you to work with the Jira Connector. Click an operation name to see parameter details and samples on how to use it.
-
-??? note "init"
-    The init operation configures the connection parameters used to establish a connection to the Jira server.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>username</td>
-            <td>The username of the user.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>password</td>
-            <td>The password of the user.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>uri</td>
-            <td>The instance URI of the Jira account.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>blocking</td>
-            <td>This property helps the connector perform blocking invocations to Jira.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.init>
-        <username>{$ctx:username}</username>
-        <password>{$ctx:password}</password>
-        <uri>{$ctx:uri}</uri>
-        <blocking>{$ctx:blocking}</blocking>
-    </jira.init>
-    ```
-
-    **Sample request**
-
-    The following sample REST request can be handled by the init operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "blocking":"false"
-    }
-    ```
-
-
-??? note "getDashboards"
-    This operation returns a JSON representation of the list of dashboards, including their names, IDs, and more.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>maxResults</td>
-            <td>The maximum number of dashboards to return, up to 1000 (default is 50).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>startAt</td>
-            <td>The index of the first dashboard to return (0-based). Must be 0 or a multiple of maxResults.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>filter</td>
-            <td>An optional filter that is applied to the list of dashboards.</td>
-            <td>No</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.getDashboards>
-        <maxResults>{$ctx:maxResults}</maxResults>
-        <filter>{$ctx:filter}</filter>
-        <startAt>{$ctx:startAt}</startAt>
-    </jira.getDashboards>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `getDashboards` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "maxResults":"50",
-        "filter":"favourite"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `getDashboards` operation.
-
-    ```json
-    {
-        "startAt": 0,
-        "maxResults": 50,
-        "total": 1,
-        "dashboards": [
-            {
-                "id": "10100",
-                "name": "test",
-                "self": "http://localhost:8080/rest/api/2/dashboard/10100",
-                "view": "http://localhost:8080/secure/Dashboard.jspa?selectPageId=10100"
-            }
-        ]
-    }
-    ```
-
-??? note "getDashboardById"
-
-    This operation returns a JSON representation of the dashboard details, including its name, ID, and more.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>Identifies the dashboard that you want to get.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.getDashboardById>
-        <id>{$ctx:id}</id>
-    </jira.getDashboardById>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `getDashboardById` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "id":"10100"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `getDashboardById` operation.
-
-    ```json
-    {
-        "id": "10100",
-        "name": "test",
-        "self": "http://localhost:8080/rest/api/2/dashboard/10100",
-        "view": "http://localhost:8080/secure/Dashboard.jspa?selectPageId=10100"
-    }
-    ```
-
-??? note "getFilterById"
-
-    To get information about a specific filter, use `getFilterById` and specify the filter ID. This operation returns a JSON representation of the filter information, including the name, ID, search URL, and more.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>filterId</td>
-            <td>Identifies the filter that you want to get.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>expand</td>
-            <td>The parameters to expand.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.getFilterById>
-        <filterId>{$ctx:filterId}</filterId>
-        <expand>{$ctx:expand}</expand>
-    </jira.getFilterById>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `getFilterById` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "filterId":"10100"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `getFilterById` operation.
-
-    ```json
-    {
-        "self": "http://localhost:8080/rest/api/2/filter/10100",
-        "id": "10100",
-        "name": "All Open Bugs",
-        "description": "Lists all open bugs",
-        "owner": {
-            "self": "http://localhost:8080/rest/api/2/user?username=admin",
-            "key": "admin",
-            "name": "admin",
-            "avatarUrls": {
-                "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48",
-                "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24",
-                "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16",
-                "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32"
-            },
-            "displayName": "admin@gmail.com",
-            "active": true
-        },
-        "jql": "type = Bug AND resolution is EMPTY",
-        "viewUrl": "http://localhost:8080/issues/?filter=10100",
-        "searchUrl": "http://localhost:8080/rest/api/2/search?jql=type+%3D+Bug+AND+resolution+is+EMPTY",
-        "favourite": true,
-        "sharePermissions": [],
-        "editable": true,
-        "sharedUsers": {
-            "size": 0,
-            "items": [],
-            "max-results": 1000,
-            "start-index": 0,
-            "end-index": 0
-        },
-        "subscriptions": {
-            "size": 0,
-            "items": [],
-            "max-results": 1000,
-            "start-index": 0,
-            "end-index": 0
-        }
-    }
-    ```
-
-??? note "getFavouriteFilters"
-    To get the favorite filters of the current user, use `getFavouriteFilters`. This operation returns a JSON representation of the filters, including their names, IDs, search URLs, and more.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>expand</td>
-            <td>The parameters to expand.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.getFavouriteFilters>
-        <expand>{$ctx:expand}</expand>
-    </jira.getFavouriteFilters>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `getFavouriteFilters` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `getFavouriteFilters` operation.
- - ```json - [ - { - "self": "http://localhost:8080/rest/api/2/filter/10100", - "id": "10100", - "name": "All Open Bugs", - "description": "Lists all open bugs", - "owner": { - "self": "http://localhost:8080/rest/api/2/user?username=admin", - "key": "admin", - "name": "admin", - "avatarUrls": { - "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48", - "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24", - "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16", - "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32" - }, - "displayName": "admin@gmail.com", - "active": true - }, - "jql": "type = Bug AND resolution is EMPTY", - "viewUrl": "http://localhost:8080/issues/?filter=10100", - "searchUrl": "http://localhost:8080/rest/api/2/search?jql=type+%3D+Bug+AND+resolution+is+EMPTY", - "favourite": true, - "sharePermissions": [], - "editable": true, - "sharedUsers": { - "size": 0, - "items": [], - "max-results": 1000, - "start-index": 0, - "end-index": 0 - }, - "subscriptions": { - "size": 0, - "items": [], - "max-results": 1000, - "start-index": 0, - "end-index": 0 - } - } - ] - ``` - -??? note "createFilter" - To create a new filter, use `createFilter` and attach the JSON representation of the filter as the payload of the request. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>filterName</td> - <td>The name of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>description</td> - <td>The description of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>jqlType</td> - <td>The jql type of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>favourite</td> - <td>Specify whether the filter is a favourite.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.createFilter> - <filterName>{$ctx:filterName}</filterName> - <description>{$ctx:description}</description> - <jqlType>{$ctx:jqlType}</jqlType> - <favourite>{$ctx:favourite}</favourite> - </jira.createFilter> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `createFilter` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "filterName":"All Open Bugs", - "description":"Lists all open bugs", - "jqlType":"Bug and resolution is empty", - "favourite":"true" - } - ``` - - **Sample response** - - Given below is a sample response for the `createFilter` operation. 
- - ```json - { - "self": "http://localhost:8080/rest/api/2/filter/10100", - "id": "10100", - "name": "All Open Bugs", - "description": "Lists all open bugs", - "owner": { - "self": "http://localhost:8080/rest/api/2/user?username=admin", - "key": "admin", - "name": "admin", - "avatarUrls": { - "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48", - "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24", - "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16", - "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32" - }, - "displayName": "admin@gmail.com", - "active": true - }, - "jql": "type = Bug AND resolution is EMPTY", - "viewUrl": "http://localhost:8080/issues/?filter=10100", - "searchUrl": "http://localhost:8080/rest/api/2/search?jql=type+%3D+Bug+AND+resolution+is+EMPTY", - "favourite": true, - "sharePermissions": [], - "editable": true, - "sharedUsers": { - "size": 0, - "items": [], - "max-results": 1000, - "start-index": 0, - "end-index": 0 - }, - "subscriptions": { - "size": 0, - "items": [], - "max-results": 1000, - "start-index": 0, - "end-index": 0 - } - } - ``` - -??? note "updateFilterById" - To update an existing filter, use `updateFilterById` with the filter ID. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>filterId</td> - <td>The id of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>filterName</td> - <td>The name of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>description</td> - <td>The description of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>jqlType</td> - <td>The jql type of the filter.</td> - <td>Yes</td> - </tr> - <tr> - <td>favourite</td> - <td>Specify whether the filter is a favourite.</td> - <td>Yes</td> - </tr> - <tr> - <td>expand</td> - <td>The parameters to expand.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.updateFilterById> - <filterId>{$ctx:filterId}</filterId> - <filterName>{$ctx:filterName}</filterName> - <description>{$ctx:description}</description> - <jqlType>{$ctx:jqlType}</jqlType> - <favourite>{$ctx:favourite}</favourite> - </jira.updateFilterById> - ``` - - **Sample request** - The following is a sample REST/JSON request that can be handled by the `updateFilterById` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "filterName":"All Bugs", - "description":"Lists all bugs", - "jqlType":"Bug and resolution is empty", - "favourite":"true", - "filterId":"10101" - } - ``` - - **Sample response** - - Given below is a sample response for the `updateFilterById` operation. 
    ```json
    {
        "self": "http://localhost:8080/rest/api/2/filter/10101",
        "id": "10101",
        "name": "All Bugs",
        "description": "Lists all bugs",
        "owner": {
            "self": "http://localhost:8080/rest/api/2/user?username=admin",
            "key": "admin",
            "name": "admin",
            "avatarUrls": {
                "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48",
                "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24",
                "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16",
                "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32"
            },
            "displayName": "admin@gmail.com",
            "active": true
        },
        "jql": "type = Bug AND resolution is EMPTY",
        "viewUrl": "http://localhost:8080/issues/?filter=10101",
        "searchUrl": "http://localhost:8080/rest/api/2/search?jql=type+%3D+Bug+AND+resolution+is+EMPTY",
        "favourite": true,
        "sharePermissions": [],
        "editable": true,
        "sharedUsers": {
            "size": 0,
            "items": [],
            "max-results": 1000,
            "start-index": 0,
            "end-index": 0
        },
        "subscriptions": {
            "size": 0,
            "items": [],
            "max-results": 1000,
            "start-index": 0,
            "end-index": 0
        }
    }
    ```

??? note "deleteFilter"
    To delete a filter, use `deleteFilter` and specify the filter ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>filterId</td>
            <td>Identifies the filter that you want to delete.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.deleteFilter>
        <filterId>{$ctx:filterId}</filterId>
    </jira.deleteFilter>
    ```

    **Sample request**

    The following is a sample REST/JSON request that can be handled by the `deleteFilter` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"https://testcon.atlassian.net",
        "filterId":"10101"
    }
    ```

    **Sample response**

    A successful response returns a 204 No Content status code with no response body.

??? note "getGroup"
    This operation returns a JSON representation of the specified group, including its name and user list.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>groupName</td>
            <td>The name of the group that you want to get.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>expand</td>
            <td>The parameters to expand.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getGroup>
        <groupName>{$ctx:groupName}</groupName>
        <expand>{$ctx:expand}</expand>
    </jira.getGroup>
    ```

    **Sample request**

    The following is a sample REST/JSON request that can be handled by the `getGroup` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "groupName":"jira-administrators",
        "expand":"users"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getGroup` operation.

    ```json
    {
        "name": "jira-administrators",
        "self": "http://localhost:8080/rest/api/2/group?groupname=jira-administrators",
        "users": {
            "size": 1,
            "items": [],
            "max-results": 50,
            "start-index": 0,
            "end-index": 0
        },
        "expand": "users"
    }
    ```

??? note "listGroupPicker"
    This operation retrieves groups with substrings matching a given query.
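    The `{$ctx:...}` expressions in the sample configuration below read values from the message context, so the operation only works once those properties are populated. The following is a minimal sketch of how `listGroupPicker` could be wired into a proxy service that fills them from the incoming JSON payload; the proxy name is arbitrary, and the `jira.init` parameters are assumed to be the same `username`, `password`, and `uri` values shown in the sample requests on this page.

    ```xml
    <proxy xmlns="http://ws.apache.org/ns/synapse" name="jiraListGroupPickerProxy"
           transports="http,https" startOnLoad="true">
        <target>
            <inSequence>
                <!-- Lift the inputs out of the incoming JSON payload -->
                <property name="username" expression="json-eval($.username)"/>
                <property name="password" expression="json-eval($.password)"/>
                <property name="uri" expression="json-eval($.uri)"/>
                <property name="query" expression="json-eval($.query)"/>
                <property name="exclude" expression="json-eval($.exclude)"/>
                <property name="maxResults" expression="json-eval($.maxResults)"/>
                <!-- Initialize the connection (assumed jira.init parameters) -->
                <jira.init>
                    <username>{$ctx:username}</username>
                    <password>{$ctx:password}</password>
                    <uri>{$ctx:uri}</uri>
                </jira.init>
                <jira.listGroupPicker>
                    <query>{$ctx:query}</query>
                    <exclude>{$ctx:exclude}</exclude>
                    <maxResults>{$ctx:maxResults}</maxResults>
                </jira.listGroupPicker>
                <!-- Return the connector response to the caller -->
                <respond/>
            </inSequence>
        </target>
    </proxy>
    ```

    The operation takes the following parameters: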
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>query</td> - <td>The query to match groups against.</td> - <td>Yes</td> - </tr> - <tr> - <td>exclude</td> - <td>Exclude from the result.</td> - <td>Yes</td> - </tr> - <tr> - <td>maxResults</td> - <td>The max results to return.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.listGroupPicker> - <query>{$ctx:query}</query> - <exclude>{$ctx:exclude}</exclude> - <maxResults>{$ctx:maxResults}</maxResults> - </jira.listGroupPicker> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `listGroupPicker` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "query": "administrators", - "exclude": "system-administrators", - "maxResults": "2" - } - ``` - - **Sample response** - - Given below is a sample response for the `listGroupPicker` operation. - - ```json - { - "header": "Showing 1 of 1 matching groups", - "total": 1, - "groups": [ - { - "name": "jira-administrators", - "html": "<b>jira-administrators</b>", - "labels": [ - { - "text": "Admin", - "title": "Users added to this group will be given administrative access", - "type": "ADMIN" - }, - { - "text": "Jira Software", - "title": "Users added to this group will be given access to <strong>Jira Software</strong>", - "type": "SINGLE" - } - ] - } - ] - } - ``` - -??? note "listGroupUserPicker" - This operation retrieves a list of users and groups matching a query. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>query</td> - <td>A string used to search. This can be username, name, or email address.</td> - <td>Yes</td> - </tr> - <tr> - <td>maxResults</td> - <td>The maximum number of users to return.</td> - <td>Yes</td> - </tr> - <tr> - <td>isShowAvatar</td> - <td>The boolean value to show avatar.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.listGroupUserPicker> - <query>{$ctx:query}</query> - <maxResults>{$ctx:maxResults}</maxResults> - <isShowAvatar>{$ctx:isShowAvatar}</isShowAvatar> - </jira.listGroupUserPicker> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `listGroupUserPicker` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "query": "admin", - "maxResults": "1", - "isShowAvatar": "true" - } - ``` - - **Sample response** - - Given below is a sample response for the `listGroupUserPicker` operation. - - ```json - { - "users": { - "users": [], - "total": 0, - "header": "Showing 0 of 0 matching users" - }, - "groups": { - "header": "Showing 1 of 1 matching groups", - "total": 1, - "groups": [ - { - "name": "jira-administrators", - "html": "jira-<b>admin</b>istrators", - "labels": [ - { - "text": "Admin", - "title": "Users added to this group will be given administrative access.", - "type": "ADMIN" - }, - { - "text": "Jira Software", - "title": "Users added to this group will be given access to <strong>Jira Software</strong>.", - "type": "SINGLE" - } - ] - } - ] - } - } - ``` - -??? note "getIssue" - To get an existing issue, use `getIssue` and specify the issue ID. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue to retrieve. 
This can be an issue ID, or an issue key.</td> - <td>Yes</td> - </tr> - <tr> - <td>fields</td> - <td>The list of fields to return for the issue.</td> - <td>Yes</td> - </tr> - <tr> - <td>expand</td> - <td>The parameters to expand.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getIssue> - <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey> - <fields>{$ctx:fields}</fields> - <expand>{$ctx:expand}</expand> - </jira.getIssue> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `getIssue` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueIdOrKey":"EX-1" - } - ``` - - **Sample response** - - Given below is a sample response for the `getIssue` operation. - - ```json - { - "id": "10002", - "self": "http://localhost:8080/jira/rest/api/2/issue/10002", - "key": "EX-1", - "fields": { - "sub-tasks": [], - "timetracking": { - "originalEstimate": "10m", - "remainingEstimate": "3m", - "timeSpent": "6m", - "originalEstimateSeconds": 600, - "remainingEstimateSeconds": 200, - "timeSpentSeconds": 400 - }, - "project": { - "self": "http://localhost:8080/jira/rest/api/2/project/EX", - "id": "10000", - "key": "EX", - "name": "Example", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/projectavatar?size=small&pid=10000", - "16x16": "http://localhost:8080/jira/secure/projectavatar?size=xsmall&pid=10000", - "32x32": "http://localhost:8080/jira/secure/projectavatar?size=medium&pid=10000", - "48x48": "http://localhost:8080/jira/secure/projectavatar?size=large&pid=10000" - } - }, - "updated": 1, - "description": "example bug report", - "issuelinks": [ - { - "id": "10001", - "type": { - "id": "10000", - "name": "Dependent", - "inward": "depends on", - "outward": "is depended by" - }, - "outwardIssue": { - "id": "10004L", - "key": "PRJ-2", - "self": "http://localhost:8080/jira/rest/api/2/issue/PRJ-2", - "fields": { - "status": { - "iconUrl": "http://localhost:8080/jira//images/icons/statuses/open.png", - "name": "Open" - } - } - } - } - ], - "attachment": [], - "watcher": { - "self": "http://localhost:8080/jira/rest/api/2/issue/EX-1/watchers", - "isWatching": false, - "watchCount": 1, - "watchers": [] - }, - "comment": [], - "worklog": [] - } - } - ``` - -??? note "createIssue" - To create a new issue (or task), use `createIssue` and set the following properties. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectKey</td> - <td>The key (unique identifier) of the project in which you are creating the issue.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueFields</td> - <td>Fields of the issue.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.createIssue> - <projectKey>{$ctx:projectKey}</projectKey> - <issueFields>{$ctx:issueFields}</issueFields> - </jira.createIssue> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `createIssue` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueFields":{ - "fields": { - "project":{ - "key": "TEST1" - }, - "summary": "Hello", - "description": "test issue", - "issuetype": { - "id": "10000" - } - } - } - } - ``` - - **Sample response** - - Given below is a sample response for the `createIssue` operation. 
    ```json
    {
        "id": "10000",
        "key": "TEST1",
        "self": "http://localhost:8080/jira/rest/api/2/issue/10000"
    }
    ```

??? note "updateIssue"
    To update an issue, use `updateIssue` and specify the issue ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>The key (unique identifier) of the issue.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>issueFields</td>
            <td>Fields of the issue.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.updateIssue>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
        <issueFields>{$ctx:issueFields}</issueFields>
    </jira.updateIssue>
    ```

    **Sample request**

    The following is a sample REST/JSON request that can be handled by the `updateIssue` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "issueIdOrKey":"TEST-6",
        "issueFields":{
            "update":{
                "summary":[
                    {
                        "set":"Bug in business logic"
                    }
                ],
                "labels":[
                    {
                        "add":"triaged"
                    },
                    {
                        "remove":"blocker"
                    }
                ]
            }
        }
    }
    ```

    **Sample response**

    A 200 status code will be returned if the issue was updated successfully.

??? note "updateIssueAssignee"
    To assign an issue to another user, use `updateIssueAssignee` and specify the issue ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>Identifies the issue to update. This can be an issue ID or an issue key.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>name</td>
            <td>The username of the user.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.updateIssueAssignee>
        <name>{$ctx:name}</name>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
    </jira.updateIssueAssignee>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `updateIssueAssignee` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "name":"admin",
        "issueIdOrKey":"TEST-2"
    }
    ```

    **Sample response**

    A 204 status code is returned if the issue is successfully assigned.

??? note "getTransitions"
    To get a list of the possible transitions the current user can perform for an issue, use `getTransitions` and specify the issue ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>Identifies the issue. This can be an issue ID or an issue key.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>expand</td>
            <td>The parameters to expand.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getTransitions>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
        <expand>{$ctx:expand}</expand>
    </jira.getTransitions>
    ```

    **Sample request**

    The following is a sample REST/JSON request that can be handled by the `getTransitions` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "issueIdOrKey":"TEST-2"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getTransitions` operation.
    ```json
    {
        "transitions": [
            {
                "id": "2",
                "name": "Close Issue",
                "to": {
                    "self": "http://localhost:8080/jira/rest/api/2.0/status/10000",
                    "description": "The issue is currently being worked on.",
                    "iconUrl": "http://localhost:8080/jira/images/icons/progress.gif",
                    "name": "In Progress",
                    "id": "10000",
                    "statusCategory": {
                        "self": "http://localhost:8080/jira/rest/api/2.0/statuscategory/1",
                        "id": 1,
                        "key": "in-flight",
                        "colorName": "yellow"
                    }
                },
                "fields": {...}
            },
            {
                "id": "711",
                "name": "QA Review",
                "to": {
                    "self": "http://localhost:8080/jira/rest/api/2.0/status/5",
                    "description": "The issue is closed.",
                    "iconUrl": "http://localhost:8080/jira/images/icons/closed.gif",
                    "name": "Closed",
                    "id": "5",
                    "statusCategory": {
                        "self": "http://localhost:8080/jira/rest/api/2.0/statuscategory/9",
                        "id": 9,
                        "key": "completed",
                        "colorName": "green"
                    }
                },
                "fields": {
                    ...
                }
            }
        ]
    }
    ```

??? note "doTransition"
    To perform a transition on an issue, use `doTransition`. Specify the issue ID and include the transition ID along with any other updates you want to make. Use the following properties:
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>Identifies the issue to update. This can be an issue ID, or an issue key.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>issueFields</td>
            <td>Fields of the issue.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.doTransition>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
        <issueFields>{$ctx:issueFields}</issueFields>
    </jira.doTransition>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `doTransition` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "issueIdOrKey":"TEST-2",
        "issueFields":{
            "update": {
                "comment": [
                    {
                        "add": {
                            "body": "Bug has been fixed."
                        }
                    }
                ]
            },
            "transition": {
                "id": "11"
            }
        }
    }
    ```

    **Sample response**

    A 204 status code is returned if the transition was successful.

??? note "getComments"
    To get the comments for an issue, use `getComments` with the issue ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>Identifies the issue that has the comments. This can be an issue ID, or an issue key.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>expand</td>
            <td>The parameters to expand.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getComments>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
        <expand>{$ctx:expand}</expand>
    </jira.getComments>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `getComments` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "issueIdOrKey":"TEST-2"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getComments` operation.

    ```json
    {
        "startAt": 0,
        "maxResults": 1,
        "total": 1,
        "comments": [
            {
                "self": "http://localhost:8080/jira/rest/api/2/issue/10010/comment/10000",
                "id": "10000",
                "author": {
                    "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                    "name": "fred",
                    "displayName": "Fred F. 
User", - "active": false - }, - "body": "Testing.", - "updateAuthor": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "displayName": "Fred F. User", - "active": false - }, - "created": "2013-08-23T16:57:35.982+0200", - "updated": "2013-08-23T16:57:35.983+0200", - "visibility": { - "type": "role", - "value": "Administrators" - } - } - ] - } - ``` - -??? note "postComment" - To post a comment to an issue, use `postComment` with the following properties. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue to which you are adding this comment. This can be an issue ID or an issue key.</td> - <td>Yes</td> - </tr> - <tr> - <td>Comment</td> - <td>The text to post as the comment.</td> - <td>Yes</td> - </tr> - <tr> - <td>visibleRole</td> - <td>User role that can view the comment.</td> - <td>Yes</td> - </tr> - <tr> - <td>expand</td> - <td>The parameters to expand.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.postComment> - <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey> - <comment>{$ctx:comment}</comment> - <visibleRole>{$ctx:visibleRole}</visibleRole> - <expand>{$ctx:expand}</expand> - </jira.postComment> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `postComment` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueIdOrKey":"TEST-1", - "comment":"Waiting to hear back from the legal department.", - "visibleRole":"Administrators" - } - ``` - - **Sample response** - - Given below is a sample response for the `postComment` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/issue/10010/comment/10000", - "id": "10000", - "author": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "displayName": "Fred F. User", - "active": false - }, - "body": "Testing issue", - "updateAuthor": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "displayName": "Fred F. User", - "active": false - }, - "created": "2013-08-23T16:57:35.982+0200", - "updated": "2013-08-23T16:57:35.983+0200", - "visibility": { - "type": "role", - "value": "Administrators" - } - } - ``` - -??? note "updateComment" - To update an existing comment, use the `updateComment` operation. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue with the comments you want to update. This can be an issue ID, or an issue key. If the issue cannot be found via an exact match, Jira will also look for the issue in a case-insensitive way, or by looking to see if the issue was moved.</td> - <td>Yes</td> - </tr> - <tr> - <td>commentId</td> - <td>Identifies the comment you are updating.</td> - <td>Yes</td> - </tr> - <tr> - <td>comment</td> - <td>A string containing the comment to be posted.</td> - <td>Yes</td> - </tr> - <tr> - <td>visibleRole</td> - <td>A String containing the visible role.</td> - <td>Yes</td> - </tr> - <tr> - <td>expand</td> - <td>The parameters to expand. 
The 'renderedBody' optional parameter provides the body rendered in HTML.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.updateComment> - <commentId>{$ctx:commentId}</commentId> - <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey> - <comment>{$ctx:comment}</comment> - <visibleRole>{$ctx:visibleRole}</visibleRole> - </jira.updateComment> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `updateComment` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueIdOrKey":"TEST-1", - "commentId":"10000", - "comment":"is this a bug?", - "visibleRole":"Administrators" - } - ``` - - **Sample response** - - Given below is a sample response for the updateComment operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/issue/10010/comment/10000", - "id": "10000", - "author": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "displayName": "Fred F. User", - "active": false - }, - "body": "Testing.", - "updateAuthor": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "displayName": "Fred F. User", - "active": false - }, - "created": "2013-08-23T16:57:35.982+0200", - "updated": "2013-08-23T16:57:35.983+0200", - "visibility": { - "type": "role", - "value": "Administrators" - } - } - ``` - -??? note "deleteComment" - To delete an existing comment, use `deleteComment`. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue with the comments that you want to delete. This can be an issue ID or an issue key.</td> - <td>Yes</td> - </tr> - <tr> - <td>commentId</td> - <td>Identifies the comment you are deleting.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.deleteComment> - <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey> - <commentId>{$ctx:commentId}</commentId> - </jira.deleteComment> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `deleteComment` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueIdOrKey":"TEST-2", - "commentId":"10000" - } - ``` - - **Sample response** - - Returned 204 if delete is successful. - -??? note "addAttachmentToIssueId" - To add one or more attachments to an issue, use `addAttachmentToIssueId` with the issue ID. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue to which you are adding attachments. This can be an issue ID or an issue key.</td> - <td>Yes</td> - </tr> - </table> - - !!! Info - Multipart/form-data cannot be processed inside the server. Therefore, the Micro Integrator/ESB should be in a content-unaware status. To achieve this, configure a pass-through proxy, build the message from the client end, and then send it to the proxy. - - **Sample configuration** - - ```xml - <jira.addAttachmentToIssueId> - <issueIdOrKey>{$url:issueIdOrKey}</issueIdOrKey> - </jira.addAttachmentToIssueId> - ``` - - **Sample response** - - Given below is a sample response to the `addAttachmentToIssueId` operation. 
- - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2.0/attachments/10000", - "filename": "picture.jpg", - "author": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "created": "2013-08-23T16:57:35.977+0200", - "size": 23123, - "mimeType": "image/jpeg", - "content": "http://localhost:8080/jira/attachments/10000", - "thumbnail": "http://localhost:8080/jira/secure/thumbnail/10000" - } - ] - ``` - -??? note "getIssuePriorities" - To get the priorities available for issues, use `getIssuePriorities`. This operation returns detailed information about each priority, including its name, ID, and more. - - **Sample configuration** - - ```xml - <jira.getIssuePriorities/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getIssuePriorities` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080" - } - ``` - - **Sample response** - - Given below is a sample response for the getIssuePriorities operation. - - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2/priority/3", - "statusColor": "#009900", - "description": "Major loss of function.", - "iconUrl": "http://localhost:8080/jira/images/icons/priorities/major.png", - "name": "Major" - } - ] - ``` - -??? note "getIssuePriorityById" - To get information on a specific priority, use `getIssuePriorityById` and specify the priority ID. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issuePriorityId</td> - <td>Identifies the priority for retrieving information.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getIssuePriorityById> - <issuePriorityId>{$ctx:issuePriorityId}</issuePriorityId> - </jira.getIssuePriorityById> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getIssuePriorityById` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issuePriorityId":"3" - } - ``` - - **Sample response** - - Given below is a sample response for the `getIssuePriorityById` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/priority/3", - "statusColor": "#009900", - "description": "Major loss of function.", - "iconUrl": "http://localhost:8080/jira/images/icons/priorities/major.png", - "name": "Major" - } - ``` - -??? note "getIssueTypes" - To get the types of issues available in this Jira instance, use `getIssueTypes`. This operation returns detailed information about each issue type, including its name, ID, and more. - - **Sample configuration** - - ```xml - <jira.getIssueTypes/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getIssueTypes` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080" - } - ``` - - **Sample response** - - Given below is a sample response for the `getIssueTypes` operation. 
- - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2.0/issueType/3", - "id": "3", - "description": "A task that needs to be done.", - "iconUrl": "http://localhost:8080/jira/images/icons/issuetypes/task.png", - "name": "Task", - "subtask": false - } - ] - ``` - -??? note "getIssueTypeById" - To get information on a specific issue type, use `getIssueTypeById` with the type ID. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueTypeId</td> - <td>Identifies the issue type to filter the issues that you want to get.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getIssueTypeById> - <issueTypeId>{$ctx:issueTypeId}</issueTypeId> - </jira.getIssueTypeById> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getIssueTypeById` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueTypeId":"3" - } - ``` - - **Sample response** - - Given below is a sample response for the `getIssueTypeById` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2.0/issueType/3", - "id": "3", - "description": "A task that needs to be done.", - "iconUrl": "http://localhost:8080/jira/images/icons/issuetypes/task.png", - "name": "Task", - "subtask": false - } - ``` - -??? note "getVotesForIssue" - To get the votes for a specific issue, use `getVotesForIssue` with the issue ID. This operation returns a JSON representation of the vote information including the number of votes and more. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>issueIdOrKey</td> - <td>Identifies the issue with the votes that you want to get. This can be an issue ID or an issue key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getVotesForIssue> - <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey> - </jira.getVotesForIssue> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getVotesForIssue` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "issueIdOrKey":"TEST-1" - } - ``` - - **Sample response** - - Given below is a sample response for the `getVotesForIssue` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/issue/TEST-1/votes", - "votes": 24, - "hasVoted": true, - "voters": [ - { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - } - ] - } - ``` - -??? note "createBulkIssue" - This operation creates many issues in one bulk operation. 
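    Because `issueUpdates` is a JSON array, one practical approach is to build it with a PayloadFactory mediator before invoking the operation. The following is a minimal sketch under that assumption; the project and issue type IDs are placeholders, and `firstSummary` and `secondSummary` are hypothetical properties set earlier in the sequence.

    ```xml
    <!-- Build the issueUpdates array from two summary values already in the context -->
    <payloadFactory media-type="json">
        <format>
            {
                "issueUpdates": [
                    {"fields": {"project": {"id": "10000"}, "summary": "$1", "issuetype": {"id": "10000"}}},
                    {"fields": {"project": {"id": "10000"}, "summary": "$2", "issuetype": {"id": "10000"}}}
                ]
            }
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:firstSummary"/>
            <arg evaluator="xml" expression="$ctx:secondSummary"/>
        </args>
    </payloadFactory>
    <!-- Hand the array to the connector operation -->
    <property name="issueUpdates" expression="json-eval($.issueUpdates)"/>
    <jira.createBulkIssue>
        <issueUpdates>{$ctx:issueUpdates}</issueUpdates>
    </jira.createBulkIssue>
    ```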
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueUpdates</td>
            <td>The array of objects containing the issue details.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.createBulkIssue>
        <issueUpdates>{$ctx:issueUpdates}</issueUpdates>
    </jira.createBulkIssue>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `createBulkIssue` operation.

    ```json
    {
        "uri": "http://localhost:8080",
        "username": "admin",
        "password": "1qaz2wsx@",
        "issueUpdates": [
            {
                "update": {},
                "fields": {
                    "project": {
                        "id": "10000"
                    },
                    "summary": "something's very wrong",
                    "issuetype": {
                        "id": "10000"
                    }
                }
            }
        ]
    }
    ```

    **Sample response**

    Given below is a sample response for the `createBulkIssue` operation.

    ```json
    {
        "issues": [
            {
                "id": "10000",
                "key": "TST-24",
                "self": "http://localhost:8080/jira/rest/api/2/issue/10000"
            },
            {..}
        ],
        "errors": []
    }
    ```

??? note "assignIssueToUser"
    This operation assigns an issue to a user.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>A string containing an issue key.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>name</td>
            <td>The name of the assignee.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.assignIssueToUser>
        <name>{$ctx:name}</name>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
    </jira.assignIssueToUser>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `assignIssueToUser` operation.

    ```json
    {
        "uri": "https://connector.atlassian.net",
        "username": "admin",
        "password": "1qaz2wsx@",
        "name": "vrajenthiran",
        "issueIdOrKey": "WSO2CON-4"
    }
    ```

    **Sample response**

    A 204 status code is returned if the issue is successfully assigned.

??? note "getCommentById"
    This operation retrieves a specific comment by its ID.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>commentId</td>
            <td>The unique identifier of the comment.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>expand</td>
            <td>The parameters to expand. The optional 'renderedBody' flag provides the body rendered in HTML.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>A string containing the issue ID or key to which the comment belongs.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getCommentById>
        <commentId>{$ctx:commentId}</commentId>
        <expand>{$ctx:expand}</expand>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
    </jira.getCommentById>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `getCommentById` operation.

    ```json
    {
        "uri": "http://localhost:8080",
        "username": "admin",
        "password": "1qaz2wsx@",
        "commentId" : "10000",
        "issueIdOrKey":"TESTPM1-3"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getCommentById` operation.

    ```json
    {
        "startAt": 0,
        "maxResults": 1,
        "total": 1,
        "comments": [
            {
                "self": "http://localhost:8080/jira/rest/api/2/issue/10010/comment/10000",
                "id": "10000",
                "author": {
                    "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                    "name": "fred",
                    "displayName": "Fred F. 
User",
                    "active": false
                },
                "body": "Testing.",
                "updateAuthor": {
                    "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                    "name": "fred",
                    "displayName": "Fred F. User",
                    "active": false
                },
                "created": "2013-08-23T16:57:35.982+0200",
                "updated": "2013-08-23T16:57:35.983+0200",
                "visibility": {
                    "type": "role",
                    "value": "Administrators"
                }
            }
        ]
    }
    ```

??? note "sendNotification"

    This operation sends a notification (email) to the list of recipients defined in the request.

    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>subject</td>
            <td>The subject of the notification.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>issueIdOrKey</td>
            <td>A string containing the ID or key of the issue for which the notification is sent.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>textBody</td>
            <td>The text body of the notification.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>htmlBody</td>
            <td>The HTML body of the notification.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toReporter</td>
            <td>The boolean flag to indicate whether to notify the reporter.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toAssignee</td>
            <td>The boolean flag to indicate whether to notify the assignee.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toWatchers</td>
            <td>The boolean flag to indicate whether to notify the watchers.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toVoters</td>
            <td>The boolean flag to indicate whether to notify the voters.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toUsers</td>
            <td>The array of users to be notified.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>toGroups</td>
            <td>The array of notification groups to be notified.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>restrictGroups</td>
            <td>The array of notification groups to be restricted.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>restrictPermissions</td>
            <td>The array of restricted permissions for the notification.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.sendNotification>
        <subject>{$ctx:subject}</subject>
        <issueIdOrKey>{$ctx:issueIdOrKey}</issueIdOrKey>
        <textBody>{$ctx:textBody}</textBody>
        <htmlBody>{$ctx:htmlBody}</htmlBody>
        <toReporter>{$ctx:toReporter}</toReporter>
        <toAssignee>{$ctx:toAssignee}</toAssignee>
        <toWatchers>{$ctx:toWatchers}</toWatchers>
        <toVoters>{$ctx:toVoters}</toVoters>
        <toUsers>{$ctx:toUsers}</toUsers>
        <toGroups>{$ctx:toGroups}</toGroups>
        <restrictGroups>{$ctx:restrictGroups}</restrictGroups>
        <restrictPermissions>{$ctx:restrictPermissions}</restrictPermissions>
    </jira.sendNotification>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `sendNotification` operation.

    ```json
    {
        "uri": "http://localhost:8080",
        "username": "admin",
        "password": "1qaz2wsx@",
        "issueIdOrKey" : "TESTPM1-3",
        "subject" : "notification subject",
        "textBody":"The text body",
        "htmlBody":"Lorem ipsum <strong>dolor</strong> sit amet, consectetur adipiscing elit. 
Pellentesque eget",
        "toReporter":"false",
        "toAssignee":"false",
        "toWatchers":"true",
        "toVoters":"true",
        "toUsers":[
            {
                "name": "vrajenthiran",
                "active": false
            }
        ],
        "toGroups":[
            {
                "name": "notification-group",
                "self": "http://localhost:8080/jira/rest/api/2/group?groupname=notification-group"
            }
        ],
        "restrictPermissions":[
            {
                "id": "10",
                "key": "BROWSE"
            }
        ],
        "restrictGroups": [
            {
                "name": "notification-group",
                "self": "http://localhost:8080/jira/rest/api/2/group?groupname=notification-group"
            }
        ]
    }
    ```

    **Sample response**

    A 204 status code is returned if the notification was successfully added to the mail queue.

??? note "addVotesForIssue"

    This operation casts your vote in favour of an issue.

    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueId</td>
            <td>The ID of the issue for which you are casting the vote.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.addVotesForIssue>
        <issueId>{$ctx:issueId}</issueId>
    </jira.addVotesForIssue>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `addVotesForIssue` operation.

    ```json
    {
        "uri": "https://testappmahesh.atlassian.net",
        "username": "testapp.mahesh2",
        "password": "1qaz2wsx@",
        "issueId":"TP-1"
    }
    ```

    **Sample response**

    A 204 status code is returned if the vote was cast successfully.

??? note "getWatchersForIssue"

    This operation retrieves the list of watchers for the issue with the given key.

    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>issueId</td>
            <td>The string containing an issue key.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getWatchersForIssue>
        <issueId>{$ctx:issueId}</issueId>
    </jira.getWatchersForIssue>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `getWatchersForIssue` operation.

    ```json
    {
        "uri":"http://localhost:8080",
        "username":"admin",
        "password":"1qaz2wsx@",
        "issueId":"EX-1"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getWatchersForIssue` operation.

    ```json
    {
        "self": "http://localhost:8080/jira/rest/api/2/issue/EX-1/watchers",
        "isWatching": false,
        "watchCount": 1,
        "watchers": [
            {
                "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            }
        ]
    }
    ```

??? note "removeUserFromWatcherList"

    This operation removes a user from an issue's watcher list.
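    A successful removal is signaled only by the HTTP status code, so checking `HTTP_SC` after the call is a reasonable way to branch on the outcome. The following is a minimal sketch, assuming the connection has already been initialized with `jira.init`.

    ```xml
    <jira.removeUserFromWatcherList>
        <name>{$ctx:name}</name>
        <issueId>{$ctx:issueId}</issueId>
    </jira.removeUserFromWatcherList>
    <!-- 204 No Content indicates the watcher was removed -->
    <filter xpath="get-property('axis2', 'HTTP_SC') = '204'">
        <then>
            <log level="custom">
                <property name="result" value="Watcher removed successfully"/>
            </log>
        </then>
        <else>
            <log level="custom">
                <property name="result" value="Failed to remove watcher"/>
            </log>
        </else>
    </filter>
    ```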
- - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>name</td> - <td>String containing the name of the user to remove.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueId</td> - <td>String containing an issue key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.removeUserFromWatcherList> - <name>{$ctx:name}</name> - <issueId>{$ctx:issueId}</issueId> - </jira.removeUserFromWatcherList> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `removeUserFromWatcherList` operation. - - ```json - { - "uri":"https://connector.atlassian.net", - "username":"admin", - "password":"1qaz2wsx@", - "issueId":"TESTPM1-3", - "name" : "rasika" - } - ``` - - **Sample response** - - Returned 204 if the watcher was removed successfully. - -??? note "getProject" - - To get information about a specific project, use `getProject` with the project key. This operation returns a JSON representation of the entire project, including name, ID, components, and more. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The Identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - <tr> - <td>expand</td> - <td>The parameters to expand.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - <expand>{$ctx:expand}</expand> - </jira.getProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"EX" - } - ``` - - **Sample response** - - Given below is a sample response for the `getProject` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/project/EX", - "id": "10000", - "key": "EX", - "description": "This project was created as an example for REST.", - "lead": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "components": [ - { - "self": "http://localhost:8080/jira/rest/api/2/component/10000", - "id": "10000", - "name": "Component 1", - "description": "This is a JIRA component", - "lead": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. 
User", - "active": false - }, - "assigneeType": "PROJECT_LEAD", - "assignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "realAssigneeType": "PROJECT_LEAD", - "realAssignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "isAssigneeTypeValid": false - } - ], - .. - } - ``` - -??? note "getAvatarsForProject" - - To get the avatars available for a specific project, use `getAvatarsForProject` with the project key. This operation returns a JSON representation of the avatars, including their name, ID, and whether the avatar is currently selected for the project. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getAvatarsForProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - </jira.getAvatarsForProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getAvatarsForProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `getAvatarsForProject` operation. - - ```json - { - "system": [ - { - "id": "1000", - "owner": "fred", - "isSystemAvatar": true, - "isSelected": true, - "selected": true - } - ], - "custom": [ - { - "id": "1010", - "owner": "andrew", - "isSystemAvatar": false, - "isSelected": false, - "selected": false - } - ] - } - ``` - -??? note "deleteAvatarForProject" - - To delete an avatar from a project, use `deleteAvatarForProject` with the project key and avatar ID. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - <tr> - <td>avatarId</td> - <td>Identifies the avatar to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.deleteAvatarForProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - <avatarId>{$ctx:avatarId}</avatarId> - </jira.deleteAvatarForProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `deleteAvatarForProject` operation. 
    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "projectIdOrKey":"TEST",
        "avatarId":"10412"
    }
    ```

    **Sample response**

    A 204 status code will be returned if the avatar is successfully deleted.

??? note "getComponentsOfProject"

    To get the components of a specific project, use `getComponentsOfProject` with the project key. This operation returns a JSON representation of the components, including their name, ID, and avatars.

    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>projectIdOrKey</td>
            <td>The identifier of the project that you want to get.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getComponentsOfProject>
        <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey>
    </jira.getComponentsOfProject>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `getComponentsOfProject` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "projectIdOrKey":"TEST"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getComponentsOfProject` operation.

    ```json
    [
        {
            "self": "http://localhost:8080/jira/rest/api/2/component/10000",
            "id": "10000",
            "name": "Component 1",
            "description": "This is a JIRA component",
            "lead": {
                "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            "assigneeType": "PROJECT_LEAD",
            "assignee": {
                "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            "realAssigneeType": "PROJECT_LEAD",
            "realAssignee": {
                "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            "isAssigneeTypeValid": false
        },
        ...
    ]
    ```

??? note "getStatusesOfProject"

    To get the statuses of a specific project, use `getStatusesOfProject` with the project key. This operation returns a JSON representation of each issue type in the project along with the status values.
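    Since the response is an array of issue types (as in the sample response given below), individual values can be picked out with JSON path expressions after the call. The following is a minimal sketch, assuming an initialized connection.

    ```xml
    <jira.getStatusesOfProject>
        <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey>
    </jira.getStatusesOfProject>
    <!-- Log the first status of the first issue type from the response -->
    <log level="custom">
        <property name="firstStatus" expression="json-eval($[0].statuses[0].name)"/>
    </log>
    ```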
- - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getStatusesOfProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - </jira.getStatusesOfProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getStatusesOfProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `getStatusesOfProject` operation. - - ```json - [ - { - "self": "http://localhost:8090/jira/rest/api/2.0/issueType/3", - "id": "3", - "name": "Task", - "subtask": false, - "statuses": [ - { - "self": "http://localhost:8090/jira/rest/api/2.0/status/10000", - "description": "The issue is currently being worked on.", - "iconUrl": "http://localhost:8090/jira/images/icons/progress.gif", - "name": "In Progress", - "id": "10000" - }, - { - "self": "http://localhost:8090/jira/rest/api/2.0/status/5", - "description": "The issue is closed.", - "iconUrl": "http://localhost:8090/jira/images/icons/closed.gif", - "name": "Closed", - "id": "5" - } - ] - } - ] - ``` - -??? note "getVersionsOfProject" - - To get the versions of a specific project, use `getVersionsOfProject` with the project key. This operation returns a JSON representation of the list of versions in the project. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getVersionsOfProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - </jira.getVersionsOfProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getVersionsOfProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `getVersionsOfProject` operation. - - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2/version/10000", - "id": "10000", - "description": "An excellent version", - "name": "New Version 1", - "archived": false, - "released": true, - "releaseDate": "2010-07-06", - "overdue": true, - "userReleaseDate": "6/Jul/2010", - "projectId": 10000 - }, - { - "self": "http://localhost:8080/jira/rest/api/2/version/10010", - "id": "10010", - "description": "Minor Bugfix version", - "name": "Next Version", - "archived": false, - "released": false, - "overdue": false, - "projectId": 10000 - } - ] - ``` - -??? note "getRolesOfProject" - - To get the roles of a specific project, use `getRolesOfProject` with the project key. This operation returns a JSON representation of the list of roles in the project, including each role's name and a link to more details. 
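    The response maps each role name to a URL with the role's details (see the sample response below), so a particular role's URL can be read straight off the payload. The following is a minimal sketch, assuming an initialized connection.

    ```xml
    <jira.getRolesOfProject>
        <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey>
    </jira.getRolesOfProject>
    <!-- Pick the detail URL of the Developers role out of the response -->
    <log level="custom">
        <property name="developersRoleUrl" expression="json-eval($.Developers)"/>
    </log>
    ```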
- - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The Identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getRolesOfProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - </jira.getRolesOfProject> - ``` - - **Sample request** - - Following is a sample request that can be handled by the `getRolesOfProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `getRolesOfProject` operation. - - ```json - { - "Users": "http://localhost:8080/jira/rest/api/2/project/MKY/role/10001", - "Administrators": "http://localhost:8080/jira/rest/api/2/project/MKY/role/10002", - "Developers": "http://localhost:8080/jira/rest/api/2/project/MKY/role/10000" - } - ``` - -??? note "getRolesByIdOfProject" - - To get information about a specific role, use `getRolesByIdOfProject` with the project key and role ID. This operation returns a JSON representation of the role, including its name, ID, actors, and more. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectIdOrKey</td> - <td>The identifier of the project that you want to get.</td> - <td>Yes</td> - </tr> - <tr> - <td>roleId</td> - <td>Identifies the role for which you want to get information.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getRolesByIdOfProject> - <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey> - <roleId>{$ctx:roleId}</roleId> - </jira.getRolesByIdOfProject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the `getRolesByIdOfProject` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectIdOrKey":"TEST", - "roleId":"10360" - } - ``` - - **Sample response** - - Given below is a sample response for the `getRolesByIdOfProject` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/project/MKY/role/10360", - "name": "Developers", - "id": 10360, - "description": "A project role that represents developers in a project", - "actors": [ - { - "id": 10240, - "displayName": "jira-developers", - "type": "atlassian-group-role-actor", - "name": "jira-developers" - } - ] - } - ``` - -??? note "getUserAssignableProjects" - - To get a list of users (that match the search string) that can be assigned to all projects, use `getUserAssignableProjects`. 
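    The `startAt` and `maxResults` parameters page through the matching users, so a caller expecting many matches can fix a page size and advance the offset between calls. The following is a minimal sketch of fetching the first page, assuming an initialized connection; the page size of 50 is an arbitrary choice.

    ```xml
    <!-- Fetch the first page of assignable users -->
    <property name="startAt" value="0"/>
    <property name="maxResults" value="50"/>
    <jira.getUserAssignableProjects>
        <projectKeys>{$ctx:projectKeys}</projectKeys>
        <usernameForSearch>{$ctx:usernameForSearch}</usernameForSearch>
        <maxResults>{$ctx:maxResults}</maxResults>
        <startAt>{$ctx:startAt}</startAt>
    </jira.getUserAssignableProjects>
    ```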
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>projectKeys</td>
            <td>The comma-separated list of projects for which you are searching for assignees.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>usernameForSearch</td>
            <td>The username to search for.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>maxResults</td>
            <td>The maximum number of users to return.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>startAt</td>
            <td>The index of the first user to return.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.getUserAssignableProjects>
        <projectKeys>{$ctx:projectKeys}</projectKeys>
        <usernameForSearch>{$ctx:usernameForSearch}</usernameForSearch>
        <maxResults>{$ctx:maxResults}</maxResults>
        <startAt>{$ctx:startAt}</startAt>
    </jira.getUserAssignableProjects>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `getUserAssignableProjects` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "projectKeys":"TEST",
        "usernameForSearch":"fred"
    }
    ```

    **Sample response**

    Given below is a sample response for the `getUserAssignableProjects` operation.

    ```json
    [
        {
            "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
            "name": "fred",
            "avatarUrls": {
                "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
                "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
                "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
                "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
            },
            "displayName": "Fred F. User",
            "active": false
        }
        ..
    ]
    ```

??? note "setActorsToRoleOfProject"

    To assign one or more users to a specific role in a project, use `setActorsToRoleOfProject` with the project key and role ID. You need to specify the users in the payload. You can specify individual users or groups.

    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>projectIdOrKey</td>
            <td>The identifier of the project to which users should be assigned.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>roleId</td>
            <td>Identifies the user role to which users should be assigned.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>roles</td>
            <td>The users who you want to assign.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <jira.setActorsToRoleOfProject>
        <projectIdOrKey>{$ctx:projectIdOrKey}</projectIdOrKey>
        <roleId>{$ctx:roleId}</roleId>
        <roles>{$ctx:roles}</roles>
    </jira.setActorsToRoleOfProject>
    ```

    **Sample request**

    The following is a sample request that can be handled by the `setActorsToRoleOfProject` operation.

    ```json
    {
        "username":"admin",
        "password":"jira@jaffna",
        "uri":"http://localhost:8080",
        "projectIdOrKey":"TEST",
        "projectKey":"JAF",
        "roleId":"10360",
        "roles":{"user" :["James"]}
    }
    ```

    **Sample response**

    Given below is a sample response for the `setActorsToRoleOfProject` operation.

    ```json
    {
        "self": "http://localhost:8080/jira/rest/api/2/project/MKY/role/10360",
        "name": "Developers",
        "id": 10360,
        "description": "A project role that represents developers in a project",
        "actors": [
            ...
        ]
    }
    ```

??? note "searchJira"

    To search for issues, use `searchJira` with a JQL query.
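    JQL frequently contains quotes and other characters that are significant in XML, so a query that is hard-coded in the configuration (rather than taken from the payload, as in the sample request below) must be XML-escaped. The following is a minimal sketch, assuming an initialized connection.

    ```xml
    <!-- Hard-coded JQL; the double quotes are XML-escaped -->
    <property name="query" value="text ~ &quot;issue2&quot; ORDER BY created DESC"/>
    <jira.searchJira>
        <query>{$ctx:query}</query>
        <maxResults>{$ctx:maxResults}</maxResults>
        <startAt>{$ctx:startAt}</startAt>
        <fields>{$ctx:fields}</fields>
    </jira.searchJira>
    ```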
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>query</td>
-            <td>The JQL expression to use for finding issues. The query must include an ORDER BY clause. For more information, see the Jira documentation.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>maxResults</td>
-            <td>The maximum number of issues to return, up to 1000 (default is 50).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>startAt</td>
-            <td>The 0-based index of the first issue to return (default is 0).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>fields</td>
-            <td>A comma-separated list of fields to be included in the response.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>validateQuery</td>
-            <td>Specifies whether to validate the JQL query.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>expand</td>
-            <td>A comma-separated list of parameters to expand.</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.searchJira>
-        <query>{$ctx:query}</query>
-        <maxResults>{$ctx:maxResults}</maxResults>
-        <startAt>{$ctx:startAt}</startAt>
-        <fields>{$ctx:fields}</fields>
-        <validateQuery>{$ctx:validateQuery}</validateQuery>
-        <expand>{$ctx:expand}</expand>
-    </jira.searchJira>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `searchJira` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "query":"text~\"issue2\""
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `searchJira` operation.
-
-    ```json
-    {
-        "expand": "names,schema",
-        "startAt": 0,
-        "maxResults": 50,
-        "total": 1,
-        "issues": [
-            {
-                "expand": "",
-                "id": "10001",
-                "self": "http://localhost:8080/jira/rest/api/2/issue/10001",
-                "key": "HSP-1"
-            }
-        ]
-    }
-    ```
-
-??? note "getUser"
-
-    To get information about a specified user, use `getUser` and specify the username.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>usernameFilter</td>
-            <td>Identifies the user whose information you want to get.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>key</td>
-            <td>The user key, which can be specified instead of the username.</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.getUser>
-        <usernameFilter>{$ctx:usernameFilter}</usernameFilter>
-        <key>{$ctx:key}</key>
-    </jira.getUser>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `getUser` operation.
-
-    ```json
-    {
-        "username":"admin",
-        "password":"jira@jaffna",
-        "uri":"http://localhost:8080",
-        "usernameFilter":"fred"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `getUser` operation.
-
-    ```json
-    {
-        "self": "http://localhost:8080/jira/rest/api/2/user?username=fred",
-        "name": "fred",
-        "emailAddress": "fred@example.com",
-        "avatarUrls": {
-            "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred",
-            "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred",
-            "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred",
-            "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred"
-        },
-        "displayName": "Fred F. User",
-        "active": true,
-        "timeZone": "Australia/Sydney",
-        "groups": {
-            "size": 3,
-            "items": []
-        }
-    }
-    ```
-
-??? note "getUserPermissions"
-
-    To get information on the current user's permissions, use `getUserPermissions`. 
You can optionally provide a specific context for which you want to get permissions (projectKey, projectId, issueKey, or issueId). - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>projectKey/projectId</td> - <td>Identifies the project for which you want to determine the current user's permissions.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueKey/issueId</td> - <td>Identifies the issue for which you want to determine the current user's permissions.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getUserPermissions> - <projectKey>{$ctx:projectKey}</projectKey> - <projectId>{$ctx:projectId}</projectId> - <issueKey>{$ctx:issueKey}</issueKey> - <issueId>{$ctx:issueId}</issueId> - </jira.getUserPermissions> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `getUserPermissions` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `getUserPermissions` operation. - - ```json - { - "permissions": { - "EDIT_ISSUE": { - "id": "12", - "key": "EDIT_ISSUE", - "name": "Edit Issues", - "description": "Ability to edit issues.", - "havePermission": true - } - } - } - ``` - -??? note "searchUser" - - To search for users whose username, name, or email address match a search string, use `searchUser` with a search string. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>usernameForSearch</td> - <td>The search string used to search the username, name, or email address.</td> - <td>Yes</td> - </tr> - <tr> - <td>startAt</td> - <td>The 0-based index of the first user to return (default is 0).</td> - <td>Optional</td> - </tr> - <tr> - <td>maxResults</td> - <td>The maximum number of users to return, up to 1000 (default is 50).</td> - <td>Optional</td> - </tr> - <tr> - <td>includeActive</td> - <td>Whether to return active users (default is true).</td> - <td>Optional</td> - </tr> - <tr> - <td>includeInactive</td> - <td>Whether to return inactive users (default is false).</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.searchUser> - <usernameForSearch>{$ctx:usernameForSearch}</usernameForSearch> - <startAt>{$ctx:startAt}</startAt> - <maxResults>{$ctx:maxResults}</maxResults> - <includeActive>{$ctx:includeActive}</includeActive> - <includeInactive>{$ctx:includeInactive}</includeInactive> - </jira.searchUser> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `searchUser` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "usernameForSearch":"fred" - } - ``` - - **Sample response** - - Given below is a sample response for the `searchUser` operation. - - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - } - ] - ``` - -??? 
note "searchIssueViewableUsers" - - To search for users whose username, name, or email address match a search string and have permission to view the specified issue or project, use `searchIssueViewableUsers`. You need to specify the search string and issue key or project key. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>username</td> - <td>The search string used to search the username, name, or email address.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueKey</td> - <td>Identifies the issue that users must have permission to view. This issue will be included in the results.</td> - <td>Yes</td> - </tr> - <tr> - <td>projectKey</td> - <td>If you want to search for users who can browse a project instead of a specific issue, specify projectKey instead of issueKey.</td> - <td>Yes</td> - </tr> - <tr> - <td>startAt</td> - <td>The 0-based index of the first user to return (default is 0).</td> - <td>Optional</td> - </tr> - <tr> - <td>maxResults</td> - <td>The maximum number of users to return, up to 1000 (default is 50).</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.searchIssueViewableUsers> - <usernameForSearch>{$ctx:usernameForSearch}</usernameForSearch> - <issueKey>{$ctx:issueKey}</issueKey> - <projectKey>{$ctx:projectKey}</projectKey> - <startAt>{$ctx:startAt}</startAt> - <maxResults>{$ctx:maxResults}</maxResults> - </jira.searchIssueViewableUsers> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the searchIssueViewableUsers operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "usernameForSearch":"fred", - "projectKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `searchIssueViewableUsers` operation. - - ```json - [ - { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - } - ] - ``` - -??? note "searchAssignableUser" - - To search for users whose username, name, or email address match a search string and can be assigned to a specific issue, use `searchAssignableUser`. You specify the search string and either the project key (if you are getting users for a new issue you are creating) or the issue key (if you are getting users for an existing issue you are editing). - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>usernameForSearch</td> - <td>The search string used to search the username, name, or email address.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueKey</td> - <td>Identifies the issue that users must have permission to view. 
This issue will be included in the results.</td> - <td>Yes</td> - </tr> - <tr> - <td>project</td> - <td>Identifies the project in which you are creating a new issue and want to get a list of users who can be assigned to it.</td> - <td>Yes</td> - </tr> - <tr> - <td>issueKey</td> - <td>Identifies the issue you are editing so that you can get a list of users who can be assigned to it.</td> - <td>Yes</td> - </tr> - <tr> - <td>startAt</td> - <td>The 0-based index of the first user to return (default is 0).</td> - <td>Optional</td> - </tr> - <tr> - <td>maxResults</td> - <td>The maximum number of users to return, up to 1000 (default is 50).</td> - <td>Optional</td> - </tr> - <tr> - <td>actionDescriptorId</td> - <td>The id of the workflow action.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.searchAssignableUser> - <usernameForSearch>{$ctx:usernameForSearch}</usernameForSearch> - <project>{$ctx:project}</project> - <issueKey>{$ctx:issueKey}</issueKey> - <startAt>{$ctx:startAt}</startAt> - <maxResults>{$ctx:maxResults}</maxResults> - <actionDescriptorId>{$ctx:actionDescriptorId}</actionDescriptorId> - </jira.searchAssignableUser> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `searchAssignableUser` operation. - - ```json - { - "username":"admin", - "password":"jira@jaffna", - "uri":"http://localhost:8080", - "projectKey":"TEST" - } - ``` - - **Sample response** - - Given below is a sample response for the `searchAssignableUser` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "emailAddress": "fred@example.com", - "avatarUrls": { - "24x24": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "16x16": "http://localhost:8080/jira/secure/useravatar?size=xsmall&ownerId=fred", - "32x32": "http://localhost:8080/jira/secure/useravatar?size=medium&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": true, - "timeZone": "Australia/Sydney", - "groups": { - "size": 3, - "items": [] - } - } - ``` - -??? note "getAttachmentById" - - This operation retrieves the metadata for an attachment, including the URL of the actual attached file. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>attachmentId</td> - <td>The ID to view the meta data of the attachment.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getAttachmentById> - <attachmentId>{$ctx:attachmentId}</attachmentId> - </jira.getAttachmentById> - ``` - - **Sample request** - - Following is a sample REST/JSON request that can be handled by the `getAttachmentById` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "attachmentId": "10000" - } - ``` - - **Sample response** - - Given below is a sample response for the `getAttachmentById` operation. 
- - ```json - { - "self": "http://localhost:8080/rest/api/2/attachment/10000", - "filename": "31714367_1982813478396639_3541297709187072000_n.jpg", - "author": { - "self": "http://localhost:8080/rest/api/2/user?username=admin", - "key": "admin", - "name": "admin", - "avatarUrls": { - "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48", - "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24", - "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16", - "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32" - }, - "displayName": "admin@gmail.com", - "active": true - }, - "created": "2018-12-09T22:59:08.690+0530", - "size": 45364, - "mimeType": "image/jpeg", - "properties": {}, - "content": "http://localhost:8080/secure/attachment/10000/31714367_1982813478396639_3541297709187072000_n.jpg", - "thumbnail": "http://localhost:8080/secure/thumbnail/10000/_thumb_10000.png" - } - ``` - -??? note "getAttachmentContent" - - This operation retrieves the content of an attachment. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>attachmentUrl</td> - <td>The URI of the attached file.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileType</td> - <td>Type of the attachment.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getAttachmentContent> - <attachmentUrl>{$ctx:attachmentUrl}</attachmentUrl> - <fileType>{$ctx:fileType}</fileType> - </jira.getAttachmentContent> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the getAttachmentContent operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "attachmentUrl": "http://localhost:8080/secure/attachment/10000/31714367_1982813478396639_3541297709187072000_n.jpg", - "fileType":"image/jpg" - } - ``` - - **Sample response** - - You will get 200 response code with the attached image as a response. - - -??? note "createComponent" - - The `createComponent` operation creates a component. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>name</td> - <td>The name of the component.</td> - <td>Yes</td> - </tr> - <tr> - <td>project</td> - <td>The key of the project to which the component should be belong.</td> - <td>Yes</td> - </tr> - <tr> - <td>description</td> - <td>The description for the component.</td> - <td>Yes</td> - </tr> - <tr> - <td>leadUserName</td> - <td>The key of the lead user name.</td> - <td>Yes</td> - </tr> - <tr> - <td>assigneeType</td> - <td>The type of the assignee.</td> - <td>Yes</td> - </tr> - <tr> - <td>isAssigneeTypeValid</td> - <td>A boolean, which specifies whether or not the assignee type is valid.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.createComponent> - <name>{$ctx:name}</name> - <project>{$ctx:project}</project> - <description>{$ctx:description}</description> - <leadUserName>{$ctx:leadUserName}</leadUserName> - <assigneeType>{$ctx:assigneeType}</assigneeType> - <isAssigneeTypeValid>{$ctx:isAssigneeTypeValid}</isAssigneeTypeValid> - </jira.createComponent> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `createComponent` operation. 
- - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "name": "testing component1", - "project": "TESTPM1", - "description": "test description", - "leadUserName": "admin", - "assigneeType": "PROJECT_LEAD", - "isAssigneeTypeValid": "false" - } - ``` - - **Sample response** - - Given below is a sample response for the `createComponent` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/component/10000", - "id": "10000", - "name": "Component 1", - "description": "This is a JIRA component", - "lead": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "assigneeType": "PROJECT_LEAD", - "assignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "realAssigneeType": "PROJECT_LEAD", - "realAssignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "isAssigneeTypeValid": false - } - ``` - -??? note "getComponent" - - The `getComponent` operation retrieves a project component. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>componentId</td> - <td>The unique identifier for a particular component.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getComponent> - <componentId>{$ctx:componentId}</componentId> - </jira.getComponent> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `getComponent` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "componentId": "10000" - } - ``` - - **Sample response** - - Given below is a sample response for the `getComponent` operation. - - ```json - { - "self": "http://localhost:8080/jira/rest/api/2/component/10000", - "id": "10000", - "name": "Component 1", - "description": "This is a JIRA component", - "lead": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "assigneeType": "PROJECT_LEAD", - "assignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. 
User", - "active": false - }, - "realAssigneeType": "PROJECT_LEAD", - "realAssignee": { - "self": "http://localhost:8080/jira/rest/api/2/user?username=fred", - "name": "fred", - "avatarUrls": { - "16x16": "http://localhost:8080/jira/secure/useravatar?size=small&ownerId=fred", - "48x48": "http://localhost:8080/jira/secure/useravatar?size=large&ownerId=fred" - }, - "displayName": "Fred F. User", - "active": false - }, - "isAssigneeTypeValid": false - } - ``` - -??? note "updateComponent" - - The `updateComponent` operation modifies a component. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>componentId</td> - <td>The unique identifier for a particular component.</td> - <td>Yes</td> - </tr> - <tr> - <td>name</td> - <td>The name of the component.</td> - <td>Yes</td> - </tr> - <tr> - <td>description</td> - <td>The description for the component.</td> - <td>Yes</td> - </tr> - <tr> - <td>leadUserName</td> - <td>The key of the lead username.</td> - <td>Yes</td> - </tr> - <tr> - <td>assigneeType</td> - <td>The type of the assignee.</td> - <td>Yes</td> - </tr> - <tr> - <td>isAssigneeTypeValid</td> - <td>A boolean, which specifies whether or not the assignee type is valid.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.updateComponent> - <componentId>{$ctx:componentId}</componentId> - <name>{$ctx:name}</name> - <description>{$ctx:description}</description> - <leadUserName>{$ctx:leadUserName}</leadUserName> - <assigneeType>{$ctx:assigneeType}</assigneeType> - <isAssigneeTypeValid>{$ctx:isAssigneeTypeValid}</isAssigneeTypeValid> - </jira.updateComponent> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `updateComponent` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "componentId": "10000", - "name": "testing component1", - "description": "test description", - "leadUserName": "admin", - "assigneeType": "PROJECT_LEAD", - "isAssigneeTypeValid": "false" - } - ``` - - **Sample response** - - Given below is a sample response for the `updateComponent` operation. 
-
-    ```json
-    {
-        "self": "http://localhost:8080/rest/api/2/component/10000",
-        "id": "10000",
-        "name": "testing component1",
-        "description": "test description",
-        "lead": {
-            "self": "http://localhost:8080/rest/api/2/user?username=admin",
-            "key": "admin",
-            "name": "admin",
-            "avatarUrls": {
-                "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48",
-                "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24",
-                "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16",
-                "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32"
-            },
-            "displayName": "admin@gmail.com",
-            "active": true
-        },
-        "assigneeType": "PROJECT_LEAD",
-        "assignee": {
-            "self": "http://localhost:8080/rest/api/2/user?username=admin",
-            "key": "admin",
-            "name": "admin",
-            "avatarUrls": {
-                "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48",
-                "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24",
-                "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16",
-                "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32"
-            },
-            "displayName": "admin@gmail.com",
-            "active": true
-        },
-        "realAssigneeType": "PROJECT_LEAD",
-        "realAssignee": {
-            "self": "http://localhost:8080/rest/api/2/user?username=admin",
-            "key": "admin",
-            "name": "admin",
-            "avatarUrls": {
-                "48x48": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=48",
-                "24x24": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=24",
-                "16x16": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=16",
-                "32x32": "https://www.gravatar.com/avatar/9f2ee74106e4d9afc58bb796a0895908?d=mm&s=32"
-            },
-            "displayName": "admin@gmail.com",
-            "active": true
-        },
-        "isAssigneeTypeValid": true,
-        "project": "KANA",
-        "projectId": 10000
-    }
-    ```
-
-??? note "countComponentRelatedIssues"
-
-    The `countComponentRelatedIssues` operation retrieves the number of issues related to the specified component.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>componentId</td>
-            <td>The unique identifier for a particular component.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <jira.countComponentRelatedIssues>
-        <componentId>{$ctx:componentId}</componentId>
-    </jira.countComponentRelatedIssues>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the `countComponentRelatedIssues` operation.
-
-    ```json
-    {
-        "uri": "http://localhost:8080",
-        "username": "admin",
-        "password": "1qaz2wsx@",
-        "componentId": "10000"
-    }
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the `countComponentRelatedIssues` operation.
-
-    ```json
-    {
-        "self": "http://localhost:8080/jira/rest/api/2/component/10000",
-        "issueCount": 23
-    }
-    ```
-
-??? note "createIssueLink"
-
-    The `createIssueLink` operation creates an issue link between two issues. 
- - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>typeName</td> - <td>Name of the issue type.</td> - <td>Yes</td> - </tr> - <tr> - <td>inwardIssueKey</td> - <td>Key of the inward issue.</td> - <td>Yes</td> - </tr> - <tr> - <td>outwardIssueKey</td> - <td>Key of the outward issue.</td> - <td>Yes</td> - </tr> - <tr> - <td>commentBody</td> - <td>Body of the comment.</td> - <td>Yes</td> - </tr> - <tr> - <td>commentVisibility</td> - <td>Visibility of the comment.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.createIssueLink> - <typeName>{$ctx:typeName}</typeName> - <inwardIssueKey>{$ctx:inwardIssueKey}</inwardIssueKey> - <outwardIssueKey>{$ctx:outwardIssueKey}</outwardIssueKey> - <commentBody>{$ctx:commentBody}</commentBody> - <commentVisibility>{$ctx:commentVisibility}</commentVisibility> - </jira.createIssueLink> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `createIssueLink` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "typeName": "Duplicate", - "inwardIssueKey": "TESTPM1-1", - "outwardIssueKey": "TESTPM1-2", - "commentBody": "Linked related issue!", - "commentVisibility": { - "type": "group", - "value": "jira-users" - } - } - ``` - - **Sample response** - - As a successful response, you will get 201 status code without any response body. - -??? note "getIssueLinkById" - - The `getIssueLinkById` operation retrieves an issue link with the specified ID. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>linkId</td> - <td>The issue link ID.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <jira.getIssueLinkById> - <linkId>{$ctx:linkId}</linkId> - </jira.getIssueLinkById> - ``` - - **Sample request** - - The following is a sample REST/JSON request that can be handled by the `getIssueLinkById` operation. - - ```json - { - "uri": "http://localhost:8080", - "username": "admin", - "password": "1qaz2wsx@", - "linkId": "10000" - } - ``` - - **Sample response** - - Given below is a sample response for the `getIssueLinkById` operation. 
- - ```json - { - "id": "10000", - "self": "http://localhost:8080/rest/api/2/issueLink/10000", - "type": { - "id": "10002", - "name": "Duplicate", - "inward": "is duplicated by", - "outward": "duplicates", - "self": "http://localhost:8080/rest/api/2/issueLinkType/10002" - }, - "inwardIssue": { - "id": "10002", - "key": "KANA-3", - "self": "http://localhost:8080/rest/api/2/issue/10002", - "fields": { - "summary": "New task", - "status": { - "self": "http://localhost:8080/rest/api/2/status/10000", - "description": "", - "iconUrl": "http://localhost:8080/", - "name": "To Do", - "id": "10000", - "statusCategory": { - "self": "http://localhost:8080/rest/api/2/statuscategory/2", - "id": 2, - "key": "new", - "colorName": "blue-gray", - "name": "To Do" - } - }, - "priority": { - "self": "http://localhost:8080/rest/api/2/priority/3", - "iconUrl": "http://localhost:8080/images/icons/priorities/medium.svg", - "name": "Medium", - "id": "3" - }, - "issuetype": { - "self": "http://localhost:8080/rest/api/2/issuetype/10003", - "id": "10003", - "description": "A task that needs to be done.", - "iconUrl": "http://localhost:8080/secure/viewavatar?size=xsmall&avatarId=10318&avatarType=issuetype", - "name": "Task", - "subtask": false, - "avatarId": 10318 - } - } - }, - "outwardIssue": { - "id": "10001", - "key": "KANA-2", - "self": "http://localhost:8080/rest/api/2/issue/10001", - "fields": { - "summary": "Framework IMplementation", - "status": { - "self": "http://localhost:8080/rest/api/2/status/10000", - "description": "", - "iconUrl": "http://localhost:8080/", - "name": "To Do", - "id": "10000", - "statusCategory": { - "self": "http://localhost:8080/rest/api/2/statuscategory/2", - "id": 2, - "key": "new", - "colorName": "blue-gray", - "name": "To Do" - } - }, - "priority": { - "self": "http://localhost:8080/rest/api/2/priority/3", - "iconUrl": "http://localhost:8080/images/icons/priorities/medium.svg", - "name": "Medium", - "id": "3" - }, - "issuetype": { - "self": "http://localhost:8080/rest/api/2/issuetype/10003", - "id": "10003", - "description": "A task that needs to be done.", - "iconUrl": "http://localhost:8080/secure/viewavatar?size=xsmall&avatarId=10318&avatarType=issuetype", - "name": "Task", - "subtask": false, - "avatarId": 10318 - } - } - } - } - ``` - -### Sample configuration in a scenario - -The following is a sample proxy service that illustrates how to connect to the Jira connector and use the getDashboardById operation to get dashboard details. You can use this sample as a template for using other operations in this category. - -**Sample Proxy** -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="getDashboardById" - transports="https http" - startOnLoad="true" - trace="disable"> - <description/> - <target> - <inSequence> - <property name="username" expression="json-eval($.username)"/> - <property name="password" expression="json-eval($.password)"/> - <property name="uri" expression="json-eval($.uri)"/> - <property name="id" expression="json-eval($.id)"/> - <jira.init> - <username>{$ctx:username}</username> - <password>{$ctx:password}</password> - <uri>{$ctx:uri}</uri> - </jira.init> - <jira.getDashboardById> - <id>{$ctx:id}</id> - </jira.getDashboardById> - <log level="full"/> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </target> -</proxy> -``` - -**Note**: For more information on how this works in an actual scenario, see [Jira Connector Example]({{base_path}}/reference/connectors/jira-connector/jira-connector-example). 
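-
-As a quick smoke test, you could invoke the proxy service above with curl. The following is a hypothetical invocation that assumes the proxy is deployed on a locally running Micro Integrator with the default HTTP port 8290, and that a dashboard with ID 10100 exists in your Jira instance; adjust the credentials, URI, and ID to match your environment.
-
-```bash
-curl -H "Content-Type: application/json" \
-     --request POST \
-     --data '{"username":"admin","password":"admin","uri":"http://localhost:8080","id":"10100"}' \
-     http://localhost:8290/services/getDashboardById
-```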
diff --git a/en/docs/reference/connectors/jira-connector/jira-connector-example.md b/en/docs/reference/connectors/jira-connector/jira-connector-example.md
deleted file mode 100644
index 1d8bcecb1e..0000000000
--- a/en/docs/reference/connectors/jira-connector/jira-connector-example.md
+++ /dev/null
@@ -1,343 +0,0 @@
-# Jira Connector Example
-
-The Jira REST API enables you to interact with Jira programmatically. The WSO2 JIRA Connector allows you to access the REST resources available in Jira Cloud [API Version v2](https://developer.atlassian.com/cloud/jira/platform/rest/v2/intro/) from an integration sequence.
-
-## What you'll build
-
-This example explains how to use the JIRA Connector to create an issue and read it.
-
-You will use two HTTP API resources, which are `createIssue` and `getIssue`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/jira.png" title="Calling insert operation" width="800" alt="Calling insert operation"/>
-
-* `/createIssue`: The user sends the request payload with the issue details (the project info, summary, description, and the issue type). This request is sent to the integration runtime by invoking the Jira API. It creates the issue in the corresponding Jira account.
-
-* `/getIssue`: The user sends the request payload, which includes the issue ID or key (that should be obtained from the `createIssue` API resource) and other parameters (**fields** and **expand**).
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-## Creating the Integration Logic
-
-1. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API.
-    <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-2. Provide the API name as `jiraAPI` and the API context as `/jira`. You can go to the source view of the XML configuration file of the API and copy the following configuration.
-
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<api context="/jira" name="jiraAPI" xmlns="http://ws.apache.org/ns/synapse">
-    <resource methods="POST" uri-template="/createIssue">
-        <inSequence>
-            <jira.init>
-                <username>****</username>
-                <password>****</password>
-                <uri>https://<site-url>/jira</uri>
-            </jira.init>
-            <jira.createIssue>
-                <issueFields>{$ctx:issueFields}</issueFields>
-            </jira.createIssue>
-            <respond/>
-        </inSequence>
-        <outSequence/>
-        <faultSequence/>
-    </resource>
-    <resource methods="POST" uri-template="/getIssue">
-        <inSequence>
-            <jira.init>
-                <username>****</username>
-                <password>****</password>
-                <uri>https://<site-url>/jira</uri>
-            </jira.init>
-            <jira.getIssue>
-                <issueIdOrKey>{$ctx:id}</issueIdOrKey>
-            </jira.getIssue>
-            <respond/>
-        </inSequence>
-        <outSequence/>
-        <faultSequence/>
-    </resource>
-</api>
-```
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/jira-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime. 
- -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - - -### Create Issue Operation - -1. Create a file named `createIssue.json` with the following payload: - ```json - { - "issueFields":{ - "fields": { - "project":{ - "key": "<project-key>" - }, - "summary": "For Testing", - "description": "Test issue", - "issuetype": { - "id": "6" - } - } - } - } - ``` - -2. Invoke the API using the following curl command. - - !!! Info - The Curl application can be downloaded from [here](https://curl.haxx.se/download.html). - - ```bash - curl -H "Content-Type: application/json" --request POST --data @createIssue.json http://localhost:8290/jira/createIssue - ``` - - **Expected Response** : You should get a response as given below and the data will be added to the database. - ```json - { - "id": "340135", - "key": "<project-key>-3400", - "self": "https://<site-url>/jira/rest/api/2/issue/340135" - } - ``` - -### Read Issue Operation - -1. Create a file named `getIssue.json` with the following payload: - - ```json - { - "id": "<project-key>-3400" - } - ``` - -2. Invoke the API using the curl command shown below. - - !!! Info - Curl application can be downloaded from [here](https://curl.haxx.se/download.html). - - ```bash - curl -H "Content-Type: application/json" --request POST --data @getIssue.json http://localhost:8290/jira/getIssue - ``` - - **Expected Response** : You should get a response similar to the one given below. - - ```json - { - "expand": "renderedFields,names,schema,operations,editmeta,changelog,versionedRepresentations", - "id": "340135", - "self": "https://<site-url>/jira/rest/api/2/issue/340135", - "key": "<project-key>-3400", - "fields": { - "issuetype": { - "self": "https://<site-url>/jira/rest/api/2/issuetype/6", - "id": "6", - "description": "A request for more information from ***", - "iconUrl": "https://<site-url>/jira/images/icons/issuetypes/undefined.png", - "name": "Query", - "subtask": false - }, - "timespent": null, - "project": { - "self": "https://<site-url>/jira/rest/api/2/project/11395", - "id": "11395", - "key": "<project-key>", - "name": "Project Name", - "avatarUrls": { - "48x48": "https://<site-url>/jira/secure/projectavatar?pid=11395&avatarId=10000", - "24x24": "https://<site-url>/jira/secure/projectavatar?size=small&pid=11395&avatarId=10000", - "16x16": "https://<site-url>/jira/secure/projectavatar?size=xsmall&pid=11395&avatarId=10000", - "32x32": "https://<site-url>/jira/secure/projectavatar?size=medium&pid=11395&avatarId=10000" - }, - "projectCategory": { - "self": "https://<site-url>/jira/rest/api/2/projectCategory/10021", - "id": "10021", - "description": "Project Category Description", - "name": "Internal" - } - }, - "aggregatetimespent": null, - "resolution": null, - "customfield_10467": null, - "resolutiondate": null, - "workratio": -1, - "lastViewed": "2021-02-18T20:48:28.596-0800", - "watches": { - "self": "https://<site-url>/jira/rest/api/2/issue/<project-key>-3400/watchers", - "watchCount": 1, - "isWatching": true - }, - "created": "2021-02-18T20:46:03.000-0800", - "customfield_10260": "2021-02-18 20:46:03.0", - "customfield_10460": null, - "customfield_10660": "{summaryBean=com.atlassian.jira.plugin.devstatus.rest.SummaryBean@6f87a945[summary={pullrequest=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@5f248a01[overall=PullRequestOverallBean{stateCount=0, state='OPEN', details=PullRequestOverallDetails{openCount=0, mergedCount=0, declinedCount=0}},byInstanceType={}], 
build=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@71ad9d41[overall=com.atlassian.jira.plugin.devstatus.summary.beans.BuildOverallBean@aa275f7[failedBuildCount=0,successfulBuildCount=0,unknownBuildCount=0,count=0,lastUpdated=<null>,lastUpdatedTimestamp=<null>],byInstanceType={}], review=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@2498a36c[overall=com.atlassian.jira.plugin.devstatus.summary.beans.ReviewsOverallBean@1130f741[stateCount=0,state=<null>,dueDate=<null>,overDue=false,count=0,lastUpdated=<null>,lastUpdatedTimestamp=<null>],byInstanceType={}], deployment-environment=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@cea37b3[overall=com.atlassian.jira.plugin.devstatus.summary.beans.DeploymentOverallBean@157ef614[topEnvironments=[],showProjects=false,successfulCount=0,count=0,lastUpdated=<null>,lastUpdatedTimestamp=<null>],byInstanceType={}], repository=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@741ca414[overall=com.atlassian.jira.plugin.devstatus.summary.beans.CommitOverallBean@6280df29[count=0,lastUpdated=<null>,lastUpdatedTimestamp=<null>],byInstanceType={}], branch=com.atlassian.jira.plugin.devstatus.rest.SummaryItemBean@3f8a2b65[overall=com.atlassian.jira.plugin.devstatus.summary.beans.BranchOverallBean@5d26b4d6[count=0,lastUpdated=<null>,lastUpdatedTimestamp=<null>],byInstanceType={}]},errors=[],configErrors=[]], devSummaryJson={\"cachedValue\":{\"errors\":[],\"configErrors\":[],\"summary\":{\"pullrequest\":{\"overall\":{\"count\":0,\"lastUpdated\":null,\"stateCount\":0,\"state\":\"OPEN\",\"details\":{\"openCount\":0,\"mergedCount\":0,\"declinedCount\":0,\"total\":0},\"open\":true},\"byInstanceType\":{}},\"build\":{\"overall\":{\"count\":0,\"lastUpdated\":null,\"failedBuildCount\":0,\"successfulBuildCount\":0,\"unknownBuildCount\":0},\"byInstanceType\":{}},\"review\":{\"overall\":{\"count\":0,\"lastUpdated\":null,\"stateCount\":0,\"state\":null,\"dueDate\":null,\"overDue\":false,\"completed\":false},\"byInstanceType\":{}},\"deployment-environment\":{\"overall\":{\"count\":0,\"lastUpdated\":null,\"topEnvironments\":[],\"showProjects\":false,\"successfulCount\":0},\"byInstanceType\":{}},\"repository\":{\"overall\":{\"count\":0,\"lastUpdated\":null},\"byInstanceType\":{}},\"branch\":{\"overall\":{\"count\":0,\"lastUpdated\":null},\"byInstanceType\":{}}}},\"isStale\":false}}", - "customfield_10980": null, - "customfield_10464": null, - "customfield_10860": "<div>\r\n\t<div class=\"aui-message aui-message-generic generic draft-message\">\r\n\t\t<div class=\"message-content\">\r\n\t\t\t<div class=\"message-container\">\r\n\t\t\t<p>Closing an issue indicates that there is no more work to be done on it, if you have any questions regarding this announcement, you can raise a query ticket and team will attend</p>\r\n\t\t\t</div>\r\n\t\t\t<ul class=\"actions-list\"></ul>\r\n\t\t</div>\r\n\t</div>\r\n</div>", - "customfield_10981": null, - "customfield_10465": "0|i0suzb:", - "customfield_10982": null, - "labels": [], - "customfield_10466": null, - "customfield_10973": null, - "customfield_10974": null, - "customfield_10975": null, - "customfield_10976": null, - "customfield_10977": null, - "timeestimate": null, - "aggregatetimeoriginalestimate": null, - "customfield_10978": null, - "customfield_10979": null, - "issuelinks": [], - "assignee": { - "self": "https://<site-url>/jira/rest/api/2/user?username=portal-admin%40***.com", - "name": "portal-admin@***.com", - "key": "portal-admin@***.com", - "emailAddress": "portal-admin@***.com", - 
"avatarUrls": { - "48x48": "https://<site-url>/jira/secure/useravatar?avatarId=10432", - "24x24": "https://<site-url>/jira/secure/useravatar?size=small&avatarId=10432", - "16x16": "https://<site-url>/jira/secure/useravatar?size=xsmall&avatarId=10432", - "32x32": "https://<site-url>/jira/secure/useravatar?size=medium&avatarId=10432" - }, - "displayName": "Portal Admin", - "active": true, - "timeZone": "PST" - }, - "updated": "2021-02-18T20:46:03.000-0800", - "status": { - "self": "https://<site-url>/jira/rest/api/2/status/1", - "description": "The issue is open and ready for the assignee to start work on it.", - "iconUrl": "https://<site-url>/jira/images/icons/statuses/open.png", - "name": "Open", - "id": "1", - "statusCategory": { - "self": "https://<site-url>/jira/rest/api/2/statuscategory/2", - "id": 2, - "key": "new", - "colorName": "blue-gray", - "name": "To Do" - } - }, - "components": [], - "customfield_10051": [ - "portal-admin@***.com(portal-admin@***.com)", - "****(****)" - ], - "timeoriginalestimate": null, - "customfield_10052": null, - "description": "Test issue", - "customfield_10053": "****(****)", - "customfield_10054": "true", - "customfield_10011": null, - "customfield_10055": "8126", - "customfield_10012": null, - "customfield_10970": null, - "customfield_10971": null, - "timetracking": {}, - "customfield_10972": null, - "customfield_10962": "2021-02-18", - "customfield_10963": null, - "customfield_10964": null, - "customfield_10965": null, - "attachment": [], - "customfield_10966": null, - "aggregatetimeestimate": null, - "customfield_10967": null, - "customfield_10968": null, - "customfield_10969": null, - "summary": "For Testing", - "creator": { - "self": "https://<site-url>/jira/rest/api/2/user?username=****", - "name": "****", - "key": "****", - "emailAddress": "****@***.com", - "avatarUrls": { - "48x48": "https://<site-url>/jira/secure/useravatar?avatarId=10432", - "24x24": "https://<site-url>/jira/secure/useravatar?size=small&avatarId=10432", - "16x16": "https://<site-url>/jira/secure/useravatar?size=xsmall&avatarId=10432", - "32x32": "https://<site-url>/jira/secure/useravatar?size=medium&avatarId=10432" - }, - "displayName": "****", - "active": true, - "timeZone": "PST" - }, - "subtasks": [], - "customfield_10360": null, - "customfield_10361": null, - "reporter": { - "self": "https://<site-url>/jira/rest/api/2/user?username=****", - "name": "****", - "key": "****", - "emailAddress": "****@***.com", - "avatarUrls": { - "48x48": "https://<site-url>/jira/secure/useravatar?avatarId=10432", - "24x24": "https://<site-url>/jira/secure/useravatar?size=small&avatarId=10432", - "16x16": "https://<site-url>/jira/secure/useravatar?size=xsmall&avatarId=10432", - "32x32": "https://<site-url>/jira/secure/useravatar?size=medium&avatarId=10432" - }, - "displayName": "****", - "active": true, - "timeZone": "PST" - }, - "customfield_10363": null, - "aggregateprogress": { - "progress": 0, - "total": 0 - }, - "customfield_10364": null, - "customfield_10365": null, - "customfield_10366": null, - "customfield_10960": null, - "environment": null, - "progress": { - "progress": 0, - "total": 0 - }, - "comment": { - "comments": [], - "maxResults": 0, - "total": 0, - "startAt": 0 - }, - "votes": { - "self": "https://<site-url>/jira/rest/api/2/issue/<project-key>-3400/votes", - "votes": 0, - "hasVoted": false - }, - "worklog": { - "startAt": 0, - "maxResults": 20, - "total": 0, - "worklogs": [] - } - } - } - ``` - -## What's Next - -* You can deploy and run your project on Docker or 
Kubernetes. See the instructions in [Deploying your Integrations on Containers]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments/).
-* To customize this example for your own scenario, see the [Jira Connector Configuration]({{base_path}}/reference/connectors/jira-connector/jira-connector-config) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/jira-connector/jira-connector-overview.md b/en/docs/reference/connectors/jira-connector/jira-connector-overview.md
deleted file mode 100644
index e0406398a3..0000000000
--- a/en/docs/reference/connectors/jira-connector/jira-connector-overview.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Jira Connector Overview
-
-The JIRA Connector allows you to connect to JIRA, an online issue-tracking application. The connector uses the JIRA REST API to connect to JIRA, view and update issues, work with filters, and more.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/jira-store.png" title="Jira Connector Store" width="200" alt="Jira Connector Store"/>
-
-## Compatibility
-
-| Connector version | Supported product versions |
-| ------------- |------------- |
-| 1.0.5 | APIM 4.0.0, EI 7.1.0, EI 6.5.0 |
-
-For older versions, see the details in the connector store.
-
-## Jira Connector documentation
-
-* **[Jira Connector Example]({{base_path}}/reference/connectors/jira-connector/jira-connector-example)**: This example explains how to use the Jira Connector to create new issues and to get existing issues from Jira.
-
-* **[Jira Connector Reference]({{base_path}}/reference/connectors/jira-connector/jira-connector-config)**: This documentation provides a reference guide for the Jira connector.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in the following repository.
-
-* [Jira Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-jira)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/kafka-connector/3.0.x/kafka-connector-config.md b/en/docs/reference/connectors/kafka-connector/3.0.x/kafka-connector-config.md
deleted file mode 100644
index 6cd7982166..0000000000
--- a/en/docs/reference/connectors/kafka-connector/3.0.x/kafka-connector-config.md
+++ /dev/null
@@ -1,385 +0,0 @@
-# Kafka Connector Reference
-
-The following operations allow you to work with the Kafka Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-To use the Kafka connector, add the `<kafkaTransport.init>` element in your configuration before carrying out any other Kafka operations. This can be done with or without security, depending on your requirements.
-
-??? note "kafkaTransport.init"
-    You can configure the kafkaTransport.init operation to set up your Kafka producer with or without security. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>name</td> - <td>Unique name to identify the connection.</td> - <td>Yes</td> - </tr> - <tr> - <td>bootstrapServers</td> - <td>The Kafka brokers listed as host1:port1 and host2:port2.</td> - <td>Yes</td> - </tr> - <tr> - <td>keySerializerClass</td> - <td>The serializer class for the key that implements the serializer interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>valueSerializerClass</td> - <td>The serializer class for the value that implements the serializer interface.</td> - <td>Yes</td> - </tr> - <tr> - <td>acks</td> - <td>The number of acknowledgments that the producer requires for the leader to receive before considering a request to be complete.</td> - <td>Optional</td> - </tr> - <tr> - <td>bufferMemory</td> - <td>The total bytes of memory the producer can use to buffer records waiting to be sent to the server.</td> - <td>Optional</td> - </tr> - <tr> - <td>compressionType</td> - <td>The compression type for the data generated by the producer.</td> - <td>Optional</td> - </tr> - <tr> - <td>retries</td> - <td>Set a value greater than zero if you want the client to resent any records automatically when a request fails.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslKeyPassword</td> - <td>The password of the private key in the keystore file. Setting this for the client is optional.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslKeystoreLocation</td> - <td>The location of the key store file. Setting this for the client is optional. Set this when you want to have two-way authentication for the client.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslKeystorePassword</td> - <td>The store password for the keystore file. Setting this for the client is optional. Set it only if ssl.keystore.location is configured.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslTruststoreLocation</td> - <td>The location of the trust store file.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslTruststorePassword</td> - <td>The password for the trust store file.</td> - <td>Optional</td> - </tr> - <tr> - <td>batchSize</td> - <td>Specify how many records the producer should batch together when multiple records are sent to the same partition.</td> - <td>Optional</td> - </tr> - <tr> - <td>clientId</td> - <td>The client identifier that you pass to the server when making requests.</td> - <td>Optional</td> - </tr> - <tr> - <td>connectionsMaxIdleTime</td> - <td>The duration in milliseconds after which idle connections should be closed.</td> - <td>Optional</td> - </tr> - <tr> - <td>lingerTime</td> - <td>The time, in milliseconds, to wait before sending a record. Set this property when you want the client to reduce the number of requests sent when the load is moderate. This adds a small delay rather than immediately sending out a record. 
Therefore, the producer waits up to allow other records to be sent so that the requests can be batched together.</td> - <td>Optional</td> - </tr> - <tr> - <td>maxBlockTime</td> - <td>The maximum time in milliseconds that the KafkaProducer.send() and the KafkaProducer.partitionsFor() methods can be blocked.</td> - <td>Optional</td> - </tr> - <tr> - <td>maxRequestSize</td> - <td>The maximum size of a request in bytes.</td> - <td>Optional</td> - </tr> - <tr> - <td>partitionerClass</td> - <td>The partitioner class that implements the partitioner interface.</td> - <td>Optional</td> - </tr> - <tr> - <td>receiveBufferBytes</td> - <td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.</td> - <td>Optional</td> - </tr> - <tr> - <td>requestTimeout</td> - <td>The maximum amount of time, in milliseconds, that a client waits for the server to respond.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslJaasConfig</td> - <td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslKerberosServiceName</td> - <td>The Kerberos principal name that Kafka runs as.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslMechanism</td> - <td>The mechanism used for SASL.</td> - <td>Optional</td> - </tr> - <tr> - <td>securityProtocol</td> - <td>The protocol used to communicate with brokers.</td> - <td>Optional</td> - </tr> - <tr> - <td>sendBufferBytes</td> - <td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslEnabledProtocols</td> - <td>The list of protocols enabled for SSL connections.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslKeystoreType</td> - <td>The format of the keystore file. Setting this for the client is optional.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslProtocol</td> - <td>The SSL protocol used to generate the SSLContext.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslProvider</td> - <td>The name of the security provider used for SSL connections. The default value is the default security provider of the JVM.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslTruststoreType</td> - <td>The format of the trust store file.</td> - <td>Optional</td> - </tr> - <tr> - <td>timeout</td> - <td>The maximum amount of time, in milliseconds, that the server waits for the acknowledgments from followers to meet the acknowledgment requirements that the producer has specified with acks configuration.</td> - <td>Optional</td> - </tr> - <tr> - <td>blockOnBufferFull</td> - <td>Set to true to stop accepting new records when the memory buffer is full. 
When blocking is not desirable, set this property to false, which causes the producer to throw an exception if a recrord is sent to the memory buffer when it is full.</td> - <td>Optional</td> - </tr> - <tr> - <td>maxInFlightRequestsPerConnection</td> - <td>The maximum number of unacknowledged requests that the client can send via a single connection before blocking.</td> - <td>Optional</td> - </tr> - <tr> - <td>metadataFetchTimeout</td> - <td>The maximum amount of time, in milliseconds, to block and wait for the metadata fetch to succeed before throwing an exception to the client.</td> - <td>Optional</td> - </tr> - <tr> - <td>metadataMaxAge</td> - <td>The period of time, in milliseconds, after which you should refresh metadata even if there was no partition leadership changes to proactively discover any new brokers or partitions.</td> - <td>Optional</td> - </tr> - <tr> - <td>metricReporters</td> - <td>A list of classes to use as metrics reporters.</td> - <td>Optional</td> - </tr> - <tr> - <td>metricsNumSamples</td> - <td>The number of samples maintained to compute metrics.</td> - <td>Optional</td> - </tr> - <tr> - <td>metricsSampleWindow</td> - <td>The window of time, in milliseconds, that a metrics sample is computed over.</td> - <td>Optional</td> - </tr> - <tr> - <td>reconnectBackoff</td> - <td>The amount of time to wait before attempting to reconnect to a given host.</td> - <td>Optional</td> - </tr> - <tr> - <td>retryBackoff</td> - <td>The amount of time, in milliseconds, to wait before attempting to retry a failed request to a given topic partition.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslKerberosKinitCmd</td> - <td>The kerberos kinit command path.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslKerberosMinTimeBeforeRelogin</td> - <td>Login thread's sleep time, in milliseconds, between refresh attempts.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslKerberosTicketRenewJitter</td> - <td>Percentage of random jitter added to the renewal time.</td> - <td>Optional</td> - </tr> - <tr> - <td>saslKerberosTicketRenewWindowFactor</td> - <td>The login thread sleeps until the specified window factor of time from the last refresh to the ticket's expiry is reached, after which it will try to renew the ticket.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslCipherSuites</td> - <td>A list of cipher suites.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslEndpointIdentificationAlgorithm</td> - <td>The endpoint identification algorithm to validate the server hostname using a server certificate.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslKeymanagerAlgorithm</td> - <td>The algorithm used by the key manager factory for SSL connections. The default value is the key manager factory algorithm configured for the Java Virtual Machine.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslSecureRandomImplementation</td> - <td>The SecureRandom PRNG implementation to use for SSL cryptography operations.</td> - <td>Optional</td> - </tr> - <tr> - <td>sslTrustmanagerAlgorithm</td> - <td>The algorithm used by the trust manager factory for SSL connections. The default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td> - <td>Optional</td> - </tr> - <tr> - <td>poolingEnabled</td> - <td>Indicates whether or not connection pooling is enabled. 
-        <tr>
-            <td>maxActiveConnections</td>
-            <td>Maximum number of active connections in the pool.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxIdleConnections</td>
-            <td>Maximum number of idle connections in the pool.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxWaitTime</td>
-            <td>Maximum amount of time, in milliseconds, that the pool waits for a connection to become available when the pool is exhausted.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>minEvictionTime</td>
-            <td>The minimum amount of time an object may remain idle in the pool before it is eligible for eviction.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>evictionCheckInterval</td>
-            <td>The number of milliseconds between runs of the object evictor.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>exhaustedAction</td>
-            <td>The behavior of the pool when the pool is exhausted (WHEN_EXHAUSTED_FAIL/WHEN_EXHAUSTED_BLOCK/WHEN_EXHAUSTED_GROW).</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    > **Performance Tuning Tip**: For better throughput, configure the parameter as follows in the configuration:
-    >
-    > ```
-    > <maxPoolSize>20</maxPoolSize>
-    > ```
-    >
-    > If you do not specify the maxPoolSize parameter in the configuration, a Kafka connection is created for each message request.
-
-    **Sample configuration**
-
-    Given below is a sample configuration to create a producer without security.
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-    </kafkaTransport.init>
-    ```
-
-    There is an additional security feature in Kafka versions 0.9.0.0 and above. You can configure it using the <kafkaTransport.init> element as shown in the sample below:
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-        <securityProtocol>SSL</securityProtocol>
-        <sslTruststoreLocation>/home/hariprasath/Desktop/kafkaNewJira/certKafka/kafka.server.truststore.jks</sslTruststoreLocation>
-        <sslTruststorePassword>test1234</sslTruststorePassword>
-        <sslKeystoreLocation>/home/hariprasath/Desktop/kafkaNewJira/certKafka/kafka.server.keystore.jks</sslKeystoreLocation>
-        <sslKeystorePassword>test1234</sslKeystorePassword>
-        <sslKeyPassword>test1234</sslKeyPassword>
-    </kafkaTransport.init>
-    ```
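-
-    For instance, to enable connection pooling for the producer, the pooling parameters documented above can be combined as in the following sketch (a minimal illustration only; the values shown are assumptions, not recommended defaults):
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-        <!-- Reuse pooled connections instead of creating one per message request -->
-        <poolingEnabled>true</poolingEnabled>
-        <maxActiveConnections>20</maxActiveConnections>
-        <maxIdleConnections>10</maxIdleConnections>
-    </kafkaTransport.init>
-    ```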
-
----
-
-### Publishing messages to Kafka
-
-??? note "publishMessages"
-    The publishMessages operation allows you to publish messages to the Kafka brokers via Kafka topics.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>topic</td>
-            <td>The name of the topic.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>partitionNo</td>
-            <td>The partition number of the topic.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    If required, you can add [custom headers](https://cwiki.apache.org/confluence/display/KAFKA/A+Case+for+Kafka+Headers) to the records in the publishMessages operation:
-
-    ```xml
-    <topic.Content-Type>Value</topic.Content-Type>
-    ```
-
-    You can add the parameter as follows in the publishMessages operation:
-
-    ```xml
-    <kafkaTransport.publishMessages configKey="kafka_init">
-        <topic>topicName</topic>
-        <partitionNo>partitionNo</partitionNo>
-        <topicName.Content-Type>Value</topicName.Content-Type>
-    </kafkaTransport.publishMessages>
-    ```
diff --git a/en/docs/reference/connectors/kafka-connector/enabling-security-for-kafka.md b/en/docs/reference/connectors/kafka-connector/enabling-security-for-kafka.md
deleted file mode 100644
index 81eac97c25..0000000000
--- a/en/docs/reference/connectors/kafka-connector/enabling-security-for-kafka.md
+++ /dev/null
@@ -1,182 +0,0 @@
-# Enabling Security for Kafka
-
-Security is an important aspect today because cyber-attacks have become a common occurrence and the threat of data breaches is a reality for businesses of all sizes. Before version 0.9, Kafka did not support built-in security. Although it was possible to lock down access at the network level, that approach was not viable for a large shared multi-tenant cluster used across a large company.
-
-There are a number of different ways to secure a Kafka cluster depending on your requirements. Let's have a look at how to secure a Kafka cluster using Transport Layer Security (TLS) authentication.
-
-For client/broker and inter-broker communication, you need to do the following:
-
-* Use TLS or Kerberos authentication
-* Encrypt network traffic via TLS
-* Perform authorization via access control lists (ACLs)
-
-Now let's take a look at how TLS authentication can be applied to Kafka brokers, producers, and consumers.
-
-* [Generating TLS keys and certificates](#generating-tls-keys-and-certificates)
-* [Configuring TLS authentication for the Kafka broker](#configuring-tls-authentication-for-the-kafka-broker)
-* [Configuring TLS authentication for Kafka clients/producers](#configuring-tls-authentication-for-kafka-clientsproducers)
-* [Configuring TLS authentication for the Kafka consumer](#configuring-tls-authentication-for-the-kafka-consumer)
-* [Analyzing the output](#analyzing-the-output)
-
-## Generating TLS keys and certificates
-Before you start, you need to generate a key and certificate for each broker and client in the cluster. The common name (CN) of the broker certificate must match the fully qualified domain name (FQDN) of the server because the client compares the CN with the DNS domain name to ensure that it is connecting to the desired broker, instead of a malicious one.
-
-Now that each broker has a public-private key pair and an unsigned certificate to identify itself, it is important for each certificate to be signed by a certificate authority (CA) to prevent forged certificates. As long as the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to authentic brokers.
-
-In contrast to the keystore, which stores each application’s identity, the truststore stores all the certificates that the application should trust.
Importing a certificate into a truststore also means trusting all certificates that are signed by that certificate. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that contains the CA certificate. That way all machines can authenticate all other machines. A slightly more complex alternative is to use two CAs, one to sign brokers’ keys and another to sign clients’ keys.
-
-Now let's see how you can generate your own CA, which is simply a public-private key pair and certificate. Then you can add the CA certificate to each client and broker’s truststore.
-
-The following bash script generates the keystore and truststore for brokers (kafka.server.keystore.jks and kafka.server.truststore.jks) and clients (kafka.client.keystore.jks and kafka.client.truststore.jks):
-
-**createCertificates**
-````
-#!/bin/bash
-PASSWORD=test1234
-VALIDITY=365
-keytool -keystore kafka.server.keystore.jks -alias localhost -validity $VALIDITY -genkey
-openssl req -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY
-keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
-keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert
-keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
-openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
-keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
-keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
-keytool -keystore kafka.client.keystore.jks -alias localhost -validity $VALIDITY -genkey
-keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-file
-openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
-keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert
-keytool -keystore kafka.client.keystore.jks -alias localhost -import -file cert-signed
-````
-
-## Configuring TLS authentication for the Kafka broker
-
-* Configure the required security protocols and ports in the <KAFKA_HOME>/config/server.properties file.
-````
-listeners=SSL://:9093
-````
-> **Note**: Do not enable an unsecured (PLAINTEXT) port because you need to ensure that all broker/client and inter-broker network communication is encrypted. You can select SSL as the security protocol for inter-broker communication (SASL_SSL is the other possible option for the configured listeners):
-> ````
-> security.inter.broker.protocol=SSL
-> ````
->
-> It is difficult to simultaneously upgrade all systems to the new secure clients. Therefore, you may need to support a mix of secure and unsecured clients.
->
-> To support a mix of secure and unsecured clients, you need to add a PLAINTEXT port to listeners, but ensure that you restrict access to this port to trusted clients only. Network segmentation and/or authorization ACLs can be used to restrict access to trusted IPs in such cases.
-
-Now let's take a look at how to apply protocol-specific configuration settings.
-
-* Configure the following in the <KAFKA-HOME>/config/server.properties file:
-
-**TLS configuration in Broker**
-````
-ssl.client.auth=required
-ssl.keystore.location={file-path}/kafka.server.keystore.jks
-ssl.keystore.password=test1234
-ssl.key.password=test1234
-ssl.truststore.location={file-path}/kafka.server.truststore.jks
-ssl.truststore.password=test1234
-````
-The above configuration includes the TLS client authentication and key configuration details, as well as the keystore and truststore details. Since you need to store passwords in the broker configuration, it is important to restrict access to the broker configuration via filesystem permissions.
-
-## Configuring TLS authentication for Kafka clients/producers
-
-Enabling TLS authentication for Kafka producers and consumers can be done by configuring a set of parameters. It does not require any code changes.
-
-> **Note**: Kafka versions 0.9.0.0 and above support TLS. The older APIs do not support TLS.
-
-##### TLS
-The parameters you need to specify to support TLS are the same for both producers and consumers. You need to specify the security protocol as well as the truststore and keystore information, since you are using mutual authentication.
-
-##### Console Clients
-The client configuration can differ slightly depending on whether you want the client to use TLS or SASL/Kerberos.
-
-Use the following configuration to create a producer that sends messages to the broker.
-
-**Proxy with Kafka Security**
-````
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="testKafka"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <kafkaTransport.init>
-            <bootstrapServers>localhost:9093</bootstrapServers>
-            <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-            <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-            <securityProtocol>SSL</securityProtocol>
-            <sslTruststoreLocation>{file-path}/kafka.client.truststore.jks</sslTruststoreLocation>
-            <sslTruststorePassword>test1234</sslTruststorePassword>
-            <sslKeystoreLocation>{file-path}/kafka.client.keystore.jks</sslKeystoreLocation>
-            <sslKeystorePassword>test1234</sslKeystorePassword>
-            <sslKeyPassword>test1234</sslKeyPassword>
-         </kafkaTransport.init>
-         <kafkaTransport.publishMessages>
-            <topic>test</topic>
-         </kafkaTransport.publishMessages>
-      </inSequence>
-   </target>
-   <description/>
-</proxy>
-````
-The console producer is a convenient way to send a small amount of data to the broker.
-
-Follow the sample scenario in the [Kafka configuration documentation]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-config/), and send the following messages to the Kafka broker:
-
-> **Note**: Be sure to include the following configuration in the proxy service when you are building the sample:
-
-```
-bootstrap.servers=localhost:9093
-security.protocol=SSL
-ssl.truststore.location={file-path}/kafka.client.truststore.jks
-ssl.truststore.password=test1234
-ssl.keystore.location={file-path}/kafka.client.keystore.jks
-ssl.keystore.password=test1234
-ssl.key.password=test1234
-```
-```
-{"test":"wso2"}
-{"test":"wso2"}
-{"test":"wso2"}
-```
-
-## Configuring TLS authentication for the Kafka consumer
-The console consumer is a convenient way to consume messages. You can either use the console consumer or the Kafka inbound endpoint to consume messages.
-
-* Execute the following command on the terminal to start the consumer with security:
-
-**Command to start the consumer**
-````
-bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic test --new-consumer --from-beginning --consumer.config {file-path}/consumer_ssl.properties
-````
-
-* Ensure that you include the following configuration to enable security.
-
-**consumer_ssl.properties**
-````
-bootstrap.servers=localhost:9093
-security.protocol=SSL
-ssl.truststore.location={file-path}/kafka.client.truststore.jks
-ssl.truststore.password=test1234
-ssl.keystore.location={file-path}/kafka.client.keystore.jks
-ssl.keystore.password=test1234
-ssl.key.password=test1234
-````
-Now that you have applied TLS authentication to Kafka brokers, producers, and consumers, let's analyze the output.
-
-## Analyzing the output
-
-You will see the following output on the consumer console:
-````
-{"test":"wso2"}
-{"test":"wso2"}
-{"test":"wso2"}
-````
\ No newline at end of file
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-connector-avro-producer-example.md b/en/docs/reference/connectors/kafka-connector/kafka-connector-avro-producer-example.md
deleted file mode 100644
index 5698a94eef..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-connector-avro-producer-example.md
+++ /dev/null
@@ -1,172 +0,0 @@
-# Avro Message with Kafka Connector Example
-
-Given below is a sample scenario that demonstrates how to send Apache Avro messages to a Kafka broker via Kafka topics. The `publishMessages` operation allows you to publish messages to the Kafka brokers via Kafka topics.
-
-## What you'll build
-
-Given below is a sample API that illustrates how you can connect to a Kafka broker with the `init` operation and then use the `publishMessages` operation to publish messages via the topic. It exposes Kafka functionalities as a RESTful service. Users can invoke the API using HTTP/HTTPS with the required information.
-
-The API has the `/publishMessages` context. It publishes messages via the topic to the Kafka server.
-
-## Set up Kafka
-
-Before you begin, set up Kafka by following the instructions in [Setting up Kafka](setting-up-kafka.md).
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-1. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API.
-
-2. Specify the API name as `KafkaTransport` and the API context as `/publishMessages`. You can go to the source view of the XML configuration file of the API and copy the following configuration (source view).
- - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <api context="/publishMessages" name="KafkaTransport" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST"> - <inSequence> - <property name="valueSchema" - expression="json-eval($.test)" - scope="default" - type="STRING"/> - <property name="value" - expression="json-eval($.value)" - scope="default" - type="STRING"/> - <property name="key" - expression="json-eval($.key)" - scope="default" - type="STRING"/> - <property name="topic" - expression="json-eval($.topic)" - scope="default" - type="STRING"/> - <kafkaTransport.init> - <name>Sample_Kafka</name> - <bootstrapServers>localhost:9092</bootstrapServers> - <keySerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</keySerializerClass> - <valueSerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</valueSerializerClass> - <schemaRegistryUrl>http://localhost:8081</schemaRegistryUrl> - <maxPoolSize>100</maxPoolSize> - </kafkaTransport.init> - <kafkaTransport.publishMessages> - <topic>{$ctx:topic}</topic> - <key>{$ctx:key}</key> - <value>{$ctx:value}</value> - <valueSchema>{$ctx:valueSchema}</valueSchema> - </kafkaTransport.publishMessages> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` -Now we can export the imported connector and the API into a single CAR application. The CAR application needs to be deployed during server runtime. - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Deployment - -Follow these steps to deploy the exported CApp in the Enterprise Integrator Runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API (http://localhost:8290/publishMessages) with the following payload, - -````json -{ - "test": { - "type": "record", - "name": "myrecord", - "fields": [ - { - "name": "f1", - "type": ["string", "int"] - } - ] - }, - "value": { - "f1": "sampleValue" - }, - "key": "sampleKey", - "topic": "myTopic" -} -```` - -**Expected Response**: - -Run the following command to verify the messages: -````bash -[confluent_home]/bin/kafka-avro-console-consumer.sh --topic myTopic --bootstrap-server localhost:9092 --property print.key=true --from-beginning -```` -See the following message content: -````json -{"f1":{"string":"sampleValue"}} -```` -Sample API configuration when the Confluent Schema Registry is secured with basic auth, - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<api context="/publishMessages" name="KafkaTransport" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST"> - <inSequence> - <property name="valueSchema" - expression="json-eval($.test)" - scope="default" - type="STRING"/> - <property name="value" - expression="json-eval($.value)" - scope="default" - type="STRING"/> - <property name="key" - expression="json-eval($.key)" - scope="default" - type="STRING"/> - <property name="topic" - expression="json-eval($.topic)" - scope="default" - type="STRING"/> - <kafkaTransport.init> - <name>Sample_Kafka</name> - <bootstrapServers>localhost:9092</bootstrapServers> - <keySerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</keySerializerClass> - <valueSerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</valueSerializerClass> - <schemaRegistryUrl>http://localhost:8081</schemaRegistryUrl> - <maxPoolSize>100</maxPoolSize> - <basicAuthCredentialsSource>USER_INFO</basicAuthCredentialsSource> - <basicAuthUserInfo>admin:admin</basicAuthUserInfo> - </kafkaTransport.init> - <kafkaTransport.publishMessages> - 
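-                <!-- The topic, key, value, and valueSchema below are resolved from the properties extracted from the request payload in the inSequence above -->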
-                <topic>{$ctx:topic}</topic>
-                <key>{$ctx:key}</key>
-                <value>{$ctx:value}</value>
-                <valueSchema>{$ctx:valueSchema}</valueSchema>
-            </kafkaTransport.publishMessages>
-        </inSequence>
-        <outSequence/>
-        <faultSequence/>
-    </resource>
-</api>
-```
-In the above example, the <b>basicAuthCredentialsSource</b> parameter is configured as <b>USER_INFO</b>. For example, consider a scenario where the <b>basicAuthCredentialsSource</b> parameter is set to <b>URL</b> as follows:
-
-````xml
-<basicAuthCredentialsSource>URL</basicAuthCredentialsSource>
-````
-
-Then, the <b>schemaRegistryUrl</b> parameter should be configured as shown below.
-
-````xml
-<schemaRegistryUrl>http://admin:admin@localhost:8081</schemaRegistryUrl>
-````
-Refer to the [Confluent documentation](https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-avro.html) for more details.
-
-This demonstrates how the Kafka connector publishes Avro messages to Kafka brokers.
-
-## What's next
-
-* To customize this example for your own scenario, see the [Kafka Connector Configuration](kafka-connector-config.md) documentation.
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-connector-config.md b/en/docs/reference/connectors/kafka-connector/kafka-connector-config.md
deleted file mode 100644
index a376e97973..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-connector-config.md
+++ /dev/null
@@ -1,501 +0,0 @@
-# Kafka Connector Reference
-
-The following operations allow you to work with the Kafka Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-To use the Kafka connector, add the `<kafkaTransport.init>` element in your configuration before carrying out any other Kafka operations. This can be with or without security depending on your requirements.
-
-??? note "kafkaTransport.init"
-    You can configure the kafkaTransport.init operation to set up your Kafka producer with or without security.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>name</td>
-            <td>Unique name to identify the connection.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>bootstrapServers</td>
-            <td>The Kafka brokers listed as host1:port1 and host2:port2.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>keySerializerClass</td>
-            <td>The serializer class for the key that implements the serializer interface.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>valueSerializerClass</td>
-            <td>The serializer class for the value that implements the serializer interface.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>schemaRegistryUrl</td>
-            <td>The URL of the Confluent Schema Registry. Only applicable when using the Apache Avro serializer class.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>basicAuthCredentialsSource</td>
-            <td>The source of basic auth credentials (e.g.,
USER_INFO or URL), when the schema registry is secured with basic auth.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>basicAuthUserInfo</td>
-            <td>The relevant basic auth credentials (should be used with basicAuthCredentialsSource).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>acks</td>
-            <td>The number of acknowledgments that the producer requires for the leader to receive before considering a request to be complete.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>bufferMemory</td>
-            <td>The total bytes of memory the producer can use to buffer records waiting to be sent to the server.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>compressionType</td>
-            <td>The compression type for the data generated by the producer.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>retries</td>
-            <td>Set a value greater than zero if you want the client to resend any records automatically when a request fails.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslKeyPassword</td>
-            <td>The password of the private key in the keystore file. Setting this for the client is optional.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslKeystoreLocation</td>
-            <td>The location of the key store file. Setting this for the client is optional. Set this when you want to have two-way authentication for the client.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslKeystorePassword</td>
-            <td>The store password for the keystore file. Setting this for the client is optional. Set it only if ssl.keystore.location is configured.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslTruststoreLocation</td>
-            <td>The location of the trust store file.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslTruststorePassword</td>
-            <td>The password for the trust store file.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>batchSize</td>
-            <td>Specify how many records the producer should batch together when multiple records are sent to the same partition.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>clientId</td>
-            <td>The client identifier that you pass to the server when making requests.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>connectionsMaxIdleTime</td>
-            <td>The duration in milliseconds after which idle connections should be closed.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>lingerTime</td>
-            <td>The time, in milliseconds, to wait before sending a record. Set this property when you want the client to reduce the number of requests sent when the load is moderate. This adds a small delay rather than immediately sending out a record.
Therefore, the producer waits up to the configured linger time to allow other records to be sent so that the requests can be batched together.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxBlockTime</td>
-            <td>The maximum time in milliseconds that the KafkaProducer.send() and the KafkaProducer.partitionsFor() methods can be blocked.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxRequestSize</td>
-            <td>The maximum size of a request in bytes.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>partitionerClass</td>
-            <td>The partitioner class that implements the partitioner interface.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>receiveBufferBytes</td>
-            <td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>requestTimeout</td>
-            <td>The maximum amount of time, in milliseconds, that a client waits for the server to respond.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslJaasConfig</td>
-            <td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslKerberosServiceName</td>
-            <td>The Kerberos principal name that Kafka runs as.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslMechanism</td>
-            <td>The mechanism used for SASL.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>securityProtocol</td>
-            <td>The protocol used to communicate with brokers.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sendBufferBytes</td>
-            <td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslEnabledProtocols</td>
-            <td>The list of protocols enabled for SSL connections.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslKeystoreType</td>
-            <td>The format of the keystore file. Setting this for the client is optional.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslProtocol</td>
-            <td>The SSL protocol used to generate the SSLContext.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslProvider</td>
-            <td>The name of the security provider used for SSL connections. The default value is the default security provider of the JVM.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslTruststoreType</td>
-            <td>The format of the trust store file.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>timeout</td>
-            <td>The maximum amount of time, in milliseconds, that the server waits for the acknowledgments from followers to meet the acknowledgment requirements that the producer has specified with the acks configuration.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>blockOnBufferFull</td>
-            <td>Set to true to stop accepting new records when the memory buffer is full.
When blocking is not desirable, set this property to false, which causes the producer to throw an exception if a record is sent to the memory buffer when it is full.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxInFlightRequestsPerConnection</td>
-            <td>The maximum number of unacknowledged requests that the client can send via a single connection before blocking.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>metadataFetchTimeout</td>
-            <td>The maximum amount of time, in milliseconds, to block and wait for the metadata fetch to succeed before throwing an exception to the client.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>metadataMaxAge</td>
-            <td>The period of time, in milliseconds, after which you should refresh metadata even if there were no partition leadership changes, to proactively discover any new brokers or partitions.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>metricReporters</td>
-            <td>A list of classes to use as metrics reporters.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>metricsNumSamples</td>
-            <td>The number of samples maintained to compute metrics.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>metricsSampleWindow</td>
-            <td>The window of time, in milliseconds, that a metrics sample is computed over.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>reconnectBackoff</td>
-            <td>The amount of time to wait before attempting to reconnect to a given host.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>retryBackoff</td>
-            <td>The amount of time, in milliseconds, to wait before attempting to retry a failed request to a given topic partition.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslKerberosKinitCmd</td>
-            <td>The Kerberos kinit command path.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslKerberosMinTimeBeforeRelogin</td>
-            <td>Login thread's sleep time, in milliseconds, between refresh attempts.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslKerberosTicketRenewJitter</td>
-            <td>Percentage of random jitter added to the renewal time.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>saslKerberosTicketRenewWindowFactor</td>
-            <td>The login thread sleeps until the specified window factor of time from the last refresh to the ticket's expiry is reached, after which it will try to renew the ticket.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslCipherSuites</td>
-            <td>A list of cipher suites.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslEndpointIdentificationAlgorithm</td>
-            <td>The endpoint identification algorithm to validate the server hostname using a server certificate.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslKeymanagerAlgorithm</td>
-            <td>The algorithm used by the key manager factory for SSL connections. The default value is the key manager factory algorithm configured for the Java Virtual Machine.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslSecureRandomImplementation</td>
-            <td>The SecureRandom PRNG implementation to use for SSL cryptography operations.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sslTrustmanagerAlgorithm</td>
-            <td>The algorithm used by the trust manager factory for SSL connections. The default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>poolingEnabled</td>
-            <td>Indicates whether or not connection pooling is enabled. Set to 'true' if pooling is enabled and 'false' otherwise.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxActiveConnections</td>
-            <td>Maximum number of active connections in the pool.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxIdleConnections</td>
-            <td>Maximum number of idle connections in the pool.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>maxWaitTime</td>
-            <td>Maximum amount of time, in milliseconds, that the pool waits for a connection to become available when the pool is exhausted.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>minEvictionTime</td>
-            <td>The minimum amount of time an object may remain idle in the pool before it is eligible for eviction.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>evictionCheckInterval</td>
-            <td>The number of milliseconds between runs of the object evictor.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>exhaustedAction</td>
-            <td>The behavior of the pool when the pool is exhausted (WHEN_EXHAUSTED_FAIL/WHEN_EXHAUSTED_BLOCK/WHEN_EXHAUSTED_GROW).</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    > **Performance Tuning Tip**: For better throughput, configure the parameter as follows in the configuration:
-    >
-    > ```
-    > <maxPoolSize>20</maxPoolSize>
-    > ```
-    >
-    > If you do not specify the maxPoolSize parameter in the configuration, a Kafka connection is created for each message request.
-
-    **Sample configuration**
-
-    Given below is a sample configuration to create a producer without security.
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-    </kafkaTransport.init>
-    ```
-
-    There is an additional security feature in Kafka versions 0.9.0.0 and above. You can configure it using the <kafkaTransport.init> element as shown in the sample below:
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-        <securityProtocol>SSL</securityProtocol>
-        <sslTruststoreLocation>/home/hariprasath/Desktop/kafkaNewJira/certKafka/kafka.server.truststore.jks</sslTruststoreLocation>
-        <sslTruststorePassword>test1234</sslTruststorePassword>
-        <sslKeystoreLocation>/home/hariprasath/Desktop/kafkaNewJira/certKafka/kafka.server.keystore.jks</sslKeystoreLocation>
-        <sslKeystorePassword>test1234</sslKeystorePassword>
-        <sslKeyPassword>test1234</sslKeyPassword>
-    </kafkaTransport.init>
-    ```
-    **Sample configurations for dealing with Apache Avro Serialization**
-
-    Given below is a sample configuration to create a producer for Kafka Avro Serialization:
-
-    ````xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</keySerializerClass>
-        <valueSerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</valueSerializerClass>
-        <schemaRegistryUrl>http://localhost:8081</schemaRegistryUrl>
-    </kafkaTransport.init>
-    ````
-
-    Sample init configuration when the Confluent Schema Registry is secured with basic auth:
-
-    ````xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</keySerializerClass>
-        <valueSerializerClass>io.confluent.kafka.serializers.KafkaAvroSerializer</valueSerializerClass>
-        <schemaRegistryUrl>http://localhost:8081</schemaRegistryUrl>
-        <basicAuthCredentialsSource>USER_INFO</basicAuthCredentialsSource>
-        <basicAuthUserInfo>admin:admin</basicAuthUserInfo>
-    </kafkaTransport.init>
-    ````
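-
-    As another illustration, a producer for a broker configured for SASL/PLAIN could be set up as in the following sketch (a hedged example that combines the securityProtocol, saslMechanism, and saslJaasConfig parameters documented above; the credentials are placeholders):
-
-    ```xml
-    <kafkaTransport.init>
-        <name>Sample_Kafka</name>
-        <bootstrapServers>localhost:9092</bootstrapServers>
-        <keySerializerClass>org.apache.kafka.common.serialization.StringSerializer</keySerializerClass>
-        <valueSerializerClass>org.apache.kafka.common.serialization.StringSerializer</valueSerializerClass>
-        <!-- SASL/PLAIN over an unencrypted channel; use SASL_SSL to add TLS -->
-        <securityProtocol>SASL_PLAINTEXT</securityProtocol>
-        <saslMechanism>PLAIN</saslMechanism>
-        <saslJaasConfig>org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";</saslJaasConfig>
-    </kafkaTransport.init>
-    ```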
----
-
-### Publishing messages to Kafka
-
-??? note "publishMessages"
-    The publishMessages operation allows you to publish messages to the Kafka brokers via Kafka topics.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>topic</td>
-            <td>The name of the topic.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>partitionNo</td>
-            <td>The partition number of the topic.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>key</td>
-            <td>Key of the Kafka message.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>keySchema</td>
-            <td>Schema of the provided key (applicable only with Kafka Avro Serialization).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>keySchemaId</td>
-            <td>Schema ID of the key schema that is stored in the Confluent Schema Registry (applicable only with Kafka Avro Serialization).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>value</td>
-            <td>The Kafka value/message.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>valueSchema</td>
-            <td>Schema of the Kafka value (applicable only with Kafka Avro Serialization).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>valueSchemaId</td>
-            <td>Schema ID of the value schema that is stored in the Confluent Schema Registry (applicable only with Kafka Avro Serialization).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>Content-Type</td>
-            <td>The Content-Type of the message.</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    If required, you can add [custom headers](https://cwiki.apache.org/confluence/display/KAFKA/A+Case+for+Kafka+Headers) to the records in the publishMessages operation:
-
-    ```xml
-    <topic.Content-Type>Value</topic.Content-Type>
-    ```
-
-    You can add the parameter as follows in the publishMessages operation:
-
-    ```xml
-    <kafkaTransport.publishMessages configKey="kafka_init">
-        <topic>topicName</topic>
-        <partitionNo>partitionNo</partitionNo>
-        <topicName.Content-Type>Value</topicName.Content-Type>
-    </kafkaTransport.publishMessages>
-    ```
-    When dealing with Avro Serialization, the key and value parameters can be configured as:
-
-    ```xml
-    <kafkaTransport.publishMessages>
-        <topic>topicName</topic>
-        <key>key of the message</key>
-        <keySchema>schema of the configured key</keySchema>
-        <value>value of the message</value>
-        <valueSchema>schema of the configured value</valueSchema>
-    </kafkaTransport.publishMessages>
-    ```
-    Sample configuration to retrieve the key/value schema from the Confluent Schema Registry:
-
-    ```xml
-    <kafkaTransport.publishMessages>
-        <topic>topicName</topic>
-        <key>key of the message</key>
-        <keySchemaId>schemaId of the configured key</keySchemaId>
-        <value>value of the message</value>
-        <valueSchemaId>schemaId of the configured value</valueSchemaId>
-    </kafkaTransport.publishMessages>
-    ```
-
-### Error codes related to Kafka Connector
-
-!!! note
-    With Kafka connector v3.1.2 and above, when an error occurs, one of the following error codes will be set in the message context. For details on how to access these error properties, refer to [Generic Properties]({{base_path}}/reference/mediators/property-reference/generic-properties/#error_code).
-
-
-| **Error Code** | **Detail**                                                |
-|----------------|-----------------------------------------------------------|
-| 700501         | Connection error.                                         |
-| 700502         | Invalid configuration.                                    |
-| 700503         | Error while serializing the Avro message in the producer. |
-| 700504         | Illegal type is used in an Avro message.                  |
-| 700505         | Error while building Avro schemas.                        |
-| 700506         | Error while parsing schemas and protocols.                |
-| 700507         | Expected contents of a union cannot be resolved.          |
-| 700508         | The request message cannot be processed.                  |
-| 700509         | Any other Kafka related error.                            |
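-
-For instance, a fault sequence such as the following sketch (a minimal, hedged illustration; the sequence name is an assumption) can log the generic error properties that the connector populates:
-
-```xml
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="kafka_fault_seq">
-    <!-- Log the error code and message set on the message context -->
-    <log level="custom">
-        <property name="kafkaErrorCode" expression="get-property('ERROR_CODE')"/>
-        <property name="kafkaErrorMessage" expression="get-property('ERROR_MESSAGE')"/>
-    </log>
-    <drop/>
-</sequence>
-```
-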
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-connector-overview.md b/en/docs/reference/connectors/kafka-connector/kafka-connector-overview.md
deleted file mode 100644
index 0e65eb5c88..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-connector-overview.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# Kafka Connector Overview
-
-Kafka is a distributed publish-subscribe messaging system that maintains feeds of messages in topics. Producers write data to topics and consumers read from topics. For more information on Apache Kafka, see the [Apache Kafka documentation](http://kafka.apache.org/documentation.html).
-
-Kafka mainly operates based on a topic model. A topic is a category or feed name to which records get published. Topics in Kafka are always multi-subscriber.
-
-To see the Kafka Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Kafka".
-
-<img src="{{base_path}}/assets/img/integrate/connectors/kafka-store.png" title="Kafka Connector Store" width="200" alt="Kafka Connector Store"/>
-
-## Compatibility
-
-| Connector Version | Supported product versions                         |
-|-------------------|----------------------------------------------------|
-| 3.2.0             | MI 4.2.0, MI 4.1.0, MI 4.0.0                       |
-| 3.1.0             | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0           |
-| 3.0.0             | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0           |
-| 2.0.9             | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
-
-For older versions, see the details in the connector store.
-
-## Kafka Connector documentation
-
-The Kafka connector allows you to access the Kafka Producer API from the integration sequence and acts as a message producer that facilitates message publishing. The Kafka connector sends messages to the Kafka brokers.
-
-Follow the topics given below to get started with the Kafka connector.
-
-* **[Setting up Kafka]({{base_path}}/reference/connectors/kafka-connector/setting-up-kafka/)**: This includes instructions on setting up Kafka and ZooKeeper.
-
-* **[Enabling Security for Kafka]({{base_path}}/reference/connectors/kafka-connector/enabling-security-for-kafka/)**: This includes a variety of security-related details that will be used to secure Kafka.
-
-* **[Kafka Connector Example]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-producer-example/)**: This example demonstrates how to send messages to a Kafka broker via Kafka topics.
-
-The following topics are specific to connector version 3.1.0 and later versions:
-
-!!! Tip
-    The Apache Avro message type is supported from connector version 3.1.0 onwards.
-
-* **[Kafka Connector Avro Message Producer Example]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-avro-producer-example/)**: This example demonstrates how to send Apache Avro messages to a Kafka broker via Kafka topics.
-
-* **[Kafka Connector Reference]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-config/)**: This documentation provides a reference guide for the Kafka Connector.
-
-The following topic is specific to connector version 3.0.0 and earlier versions:
-
-* **[Kafka Connector Reference]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-config/)**: This documentation provides a reference guide for the Kafka Connector.
-
-## Kafka Inbound Endpoint documentation
-
-The Kafka inbound endpoint acts as a message consumer. It creates a connection to ZooKeeper and requests messages for a topic. The inbound endpoint is bundled with the Kafka connector.
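-
-As a quick orientation, a minimal inbound endpoint declaration follows the shape of the sketch below (a hedged outline only; the parameter values are placeholders, and the complete, working configuration is given in the example linked below):
-
-```xml
-<inboundEndpoint name="KafkaListenerEP" sequence="kafka_process_seq" onError="fault"
-                 class="org.wso2.carbon.inbound.kafka.KafkaMessageConsumer"
-                 suspend="false" xmlns="http://ws.apache.org/ns/synapse">
-    <parameters>
-        <!-- Broker list, consumer group, and topic for the Kafka consumer -->
-        <parameter name="bootstrap.servers">localhost:9092</parameter>
-        <parameter name="group.id">sample-group</parameter>
-        <parameter name="topic.name">test</parameter>
-        <parameter name="contentType">application/json</parameter>
-        <parameter name="key.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
-        <parameter name="value.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
-        <parameter name="poll.timeout">100</parameter>
-    </parameters>
-</inboundEndpoint>
-```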
-
-* **[Kafka Inbound Endpoint Example]({{base_path}}/reference/connectors/kafka-connector/kafka-inbound-endpoint-example/)**: This sample demonstrates how one-way message bridging from Kafka to HTTP can be done using the inbound Kafka endpoint.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in one of the following repositories.
-
-* [Kafka Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-kafka)
-* [Kafka Inbound Endpoint GitHub repository](https://github.com/wso2-extensions/esb-inbound-kafka)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-connector-producer-example.md b/en/docs/reference/connectors/kafka-connector/kafka-connector-producer-example.md
deleted file mode 100644
index 5516cee5ec..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-connector-producer-example.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Kafka Connector Example
-
-Given below is a sample scenario that demonstrates how to send messages to a Kafka broker via Kafka topics. The publishMessages operation allows you to publish messages to the Kafka brokers via Kafka topics.
-
-## What you'll build
-
-Given below is a sample API that illustrates how you can connect to a Kafka broker with the `init` operation and then use the `publishMessages` operation to publish messages via the topic. It exposes Kafka functionalities as a RESTful service. Users can invoke the API using HTTP/HTTPS with the required information.
-
-The API has the context `/publishMessages`. It will publish messages via the topic to the Kafka server.
-
-The following diagram illustrates all the required functionality of the Kafka service that you are going to build.
-
-<a href="{{base_path}}/assets/img/integrate/connectors/kafkaconnectorpublishmessage.png"><img src="{{base_path}}/assets/img/integrate/connectors/kafkaconnectorpublishmessage.png" title="KafkaConnector" width="800" alt="KafkaConnector"/></a>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Set up Kafka
-
-Before you begin, set up Kafka by following the instructions in [Setting up Kafka]({{base_path}}/reference/connectors/kafka-connector/setting-up-kafka/).
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-5. Create a new Kafka connection by selecting a particular operation.
-
-    <a href="{{base_path}}/assets/img/integrate/connectors/filecon10.png"><img src="{{base_path}}/assets/img/integrate/connectors/filecon10.png" title="working directory" width="800" alt="working directory"/></a>
-
-
-1. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API.
-
-2. Specify the API name as `KafkaTransport` and the API context as `/publishMessages`. You can go to the source view of the XML configuration file of the API and copy the following configuration (source view).
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/publishMessages" name="KafkaTransport" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST">
-            <inSequence>
-                <kafkaTransport.publishMessages configKey="KAFKA_CONNECTION">
-                    <topic>test</topic>
-                </kafkaTransport.publishMessages>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-    </api>
-    ```
-Now we can export the imported connector and the API into a single CAR application. The CAR application needs to be deployed during server runtime.
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/kafka-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-**Create a topic**:
-
-Let’s create a topic named “test” with a single partition and only one replica.
-Navigate to the <KAFKA_HOME> and run the following command.
-
-```bash
-bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
-```
-
-**Sample Request**:
-
-Send a message to the Kafka broker using a CURL command or sample client.
-
-```bash
-curl -X POST -d '{"name":"sample"}' "http://localhost:8290/publishMessages" -H "Content-Type:application/json" -v
-```
-
-**Expected Response**:
-
-Navigate to the <KAFKA_HOME> and run the following command to verify the messages:
-
-```bash
-bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
-```
-
-See the following message content:
-
-```bash
-{"name":"sample"}
-```
-
-This demonstrates how the Kafka connector publishes messages to the Kafka brokers.
-
-## What's next
-
-* To customize this example for your own scenario, see the [Kafka Connector Configuration]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-config/) documentation.
\ No newline at end of file
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-config.md b/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-config.md
deleted file mode 100644
index 347e5a966b..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-config.md
+++ /dev/null
@@ -1,379 +0,0 @@
-# Kafka Inbound Endpoint Reference
-
-## Mandatory parameters for Kafka Inbound Endpoint
-
-The following parameters are required when configuring the Kafka Inbound Endpoint.
- -<table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - </tr> - <tr> - <td>bootstrap.servers</td> - <td>The Kafka brokers listed as host1:port1 and host2:port2</td> - </tr> - <tr> - <td>key.deserializer</td> - <td>Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface.</td> - </tr> - <tr> - <td>value.deserializer</td> - <td>Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface.</td> - </tr> - <tr> - <td>group.id</td> - <td>The consumer group ID.</td> - </tr> - <tr> - <td>poll.timeout</td> - <td>The max time to block in the consumer waiting for records.</td> - </tr> - <tr> - <td>topic.name</td> - <td>The name of the topic.</td> - </tr> - <tr> - <td>topic.pattern</td> - <td>The name pattern of the topic.</td> - </tr> - <tr> - <td>contentType</td> - <td>The content type of the message.</td> - </tr> -</table> - -## Optional parameters for Kafka Inbound Endpoint - -<table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Default Value</th> - </tr> - <tr> - <td>enable.auto.commit</td> - <td>Whether the consumer will automatically commit offsets periodically at the interval set by auto.commit.interval.ms.</td> - <td>true</td> - </tr> - <tr> - <td>auto.commit.interval.ms</td> - <td>Offsets are committed automatically with a frequency controlled by the config.</td> - <td>5000</td> - </tr> - <tr> - <td>session.timeout.ms</td> - <td>The timeout used to detect client failures when using Kafka’s group management facility.</td> - <td>10000</td> - </tr> - <tr> - <td>fetch.min.bytes</td> - <td>The minimum amount of data the server should return for a fetch request.</td> - <td>1</td> - </tr> - <tr> - <td>heartbeat.interval.ms</td> - <td>The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities.</td> - <td>3000</td> - </tr> - <tr> - <td>max.partition.fetch.bytes</td> - <td>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer.</td> - <td>1048576</td> - </tr> - <tr> - <td>key.delegate.deserializer</td> - <td>Property name for the delegate key deserializer.</td> - <td></td> - </tr> - <tr> - <td>value.delegate.deserializer</td> - <td>Property name for the delegate value deserializer.</td> - <td></td> - </tr> - <tr> - <td>schema.registry.url</td> - <td>Comma-separated list of URLs for Schema Registry instances that can be used to register or look up schemas.</td> - <td></td> - </tr> - <tr> - <td>basic.auth.credentials.source</td> - <td>Specify how to pick the credentials for the Basic authentication header.</td> - <td></td> - </tr> - <tr> - <td>basic.auth.user.info</td> - <td>Specify the user info for the Basic authentication in the form of {username}:{password}.</td> - <td></td> - </tr> - <tr> - <td>ssl.key.password</td> - <td>The password of the private key in the key store file or the PEM key specified in `ssl.keystore.key`.</td> - <td></td> - </tr> - <tr> - <td>ssl.keystore.location</td> - <td>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</td> - <td></td> - </tr> - <tr> - <td>ssl.keystore.password</td> - <td>The store password for the key store file. 
This is optional for client and only needed if ‘ssl.keystore.location’ is configured.</td> - <td></td> - </tr> - <tr> - <td>ssl.truststore.location</td> - <td>The location of the trust store file.</td> - <td></td> - </tr> - <tr> - <td>ssl.truststore.password</td> - <td>The password for the trust store file.</td> - <td></td> - </tr> - <tr> - <td>auto.offset.reset</td> - <td>Defines what to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server.</td> - <td>latest</td> - </tr> - <tr> - <td>connections.max.idle.ms</td> - <td>Close idle connections after the number of milliseconds specified by this config.</td> - <td>540000</td> - </tr> - <tr> - <td>exclude.internal.topics</td> - <td>Whether internal topics matching a subscribed pattern should be excluded from the subscription.</td> - <td>true</td> - </tr> - <tr> - <td>fetch.max.bytes</td> - <td>The maximum amount of data the server should return for a fetch request.</td> - <td>52428800</td> - </tr> - <tr> - <td>max.poll.interval.ms</td> - <td>The maximum delay between invocations of poll() when using consumer group management.</td> - <td>300000</td> - </tr> - <tr> - <td>max.poll.records</td> - <td>The maximum number of records returned in a single call to poll().</td> - <td>500</td> - </tr> - <tr> - <td>partition.assignment.strategy</td> - <td>A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used.</td> - <td>org.apache.kafka.clients.consumer.RangeAssignor</td> - </tr> - <tr> - <td>receive.buffer.bytes</td> - <td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.</td> - <td>65536</td> - </tr> - <tr> - <td>request.timeout.ms</td> - <td>The configuration controls the maximum amount of time the client will wait for the response of a request.</td> - <td>305000</td> - </tr> - <tr> - <td>sasl.jaas.config</td> - <td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files.</td> - <td></td> - </tr> - <tr> - <td>sasl.client.callback.handler.class</td> - <td>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</td> - <td></td> - </tr> - <tr> - <td>sasl.login.class</td> - <td>The fully qualified name of a class that implements the Login interface.</td> - <td></td> - </tr> - <tr> - <td>sasl.kerberos.service.name</td> - <td>The Kerberos principal name that Kafka runs as.</td> - <td></td> - </tr> - <tr> - <td>sasl.mechanism</td> - <td>SASL mechanism used for client connections.</td> - <td></td> - </tr> - <tr> - <td>security.protocol</td> - <td>Protocol used to communicate with brokers.</td> - <td></td> - </tr> - <tr> - <td>send.buffer.bytes</td> - <td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td> - <td>131072</td> - </tr> - <tr> - <td>ssl.enabled.protocols</td> - <td>The list of protocols enabled for SSL connections.</td> - <td></td> - </tr> - <tr> - <td>ssl.keystore.type</td> - <td>The file format of the key store file.</td> - <td></td> - </tr> - <tr> - <td>ssl.protocol</td> - <td>The SSL protocol used to generate the SSLContext.</td> - <td></td> - </tr> - <tr> - <td>ssl.provider</td> - <td>The name of the security provider used for SSL connections.</td> - <td></td> - </tr> - <tr> - <td>ssl.truststore.type</td> - <td>The file format of the trust store file.</td> - <td></td> - </tr> 
- <tr> - <td>check.crcs</td> - <td>Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred.</td> - <td>true</td> - </tr> - <tr> - <td>client.id</td> - <td>An id string to pass to the server when making requests.</td> - <td></td> - </tr> - <tr> - <td>fetch.max.wait.ms</td> - <td>The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy the requirement given by fetch.min.bytes.</td> - <td>500</td> - </tr> - <tr> - <td>interceptor.classes</td> - <td>A list of classes to use as interceptors.</td> - <td></td> - </tr> - <tr> - <td>metadata.max.age.ms</td> - <td>The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions.</td> - <td>300000</td> - </tr> - <tr> - <td>metric.reporters</td> - <td>A list of classes to use as metrics reporters.</td> - <td></td> - </tr> - <tr> - <td>metrics.num.samples</td> - <td>The number of samples maintained to compute metrics.</td> - <td>2</td> - </tr> - <tr> - <td>metrics.recording.level</td> - <td>The highest recording level for metrics.</td> - <td>INFO</td> - </tr> - <tr> - <td>metrics.sample.window.ms</td> - <td>The window of time a metrics sample is computed over.</td> - <td>30000</td> - </tr> - <tr> - <td>reconnect.backoff.ms</td> - <td>The base amount of time to wait before attempting to reconnect to a given host.</td> - <td>50</td> - </tr> - <tr> - <td>retry.backoff.ms</td> - <td>The amount of time to wait before attempting to retry a failed request to a given topic partition.</td> - <td>100</td> - </tr> - <tr> - <td>sasl.kerberos.kinit.cmd</td> - <td>Kerberos kinit command path.</td> - <td></td> - </tr> - <tr> - <td>sasl.kerberos.min.time.before.relogin</td> - <td>Login thread sleep time between refresh attempts.</td> - <td></td> - </tr> - <tr> - <td>sasl.kerberos.ticket.renew.jitter</td> - <td>Percentage of random jitter added to the renewal time.</td> - <td></td> - </tr> - <tr> - <td>sasl.kerberos.ticket.renew.window.factor</td> - <td>Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket.</td> - <td></td> - </tr> - <tr> - <td>ssl.cipher.suites</td> - <td>A list of cipher suites.</td> - <td></td> - </tr> - <tr> - <td>ssl.endpoint.identification.algorithm</td> - <td>The endpoint identification algorithm to validate server hostname using server certificate.</td> - <td></td> - </tr> - <tr> - <td>ssl.keymanager.algorithm</td> - <td>The algorithm used by key manager factory for SSL connections.</td> - <td></td> - </tr> - <tr> - <td>ssl.secure.random.implementation</td> - <td>The SecureRandom PRNG implementation to use for SSL cryptography operations.</td> - <td></td> - </tr> - <tr> - <td>ssl.trustmanager.algorithm</td> - <td>The algorithm used by trust manager factory for SSL connections.</td> - <td></td> - </tr> - <tr> - <td>sasl.oauthbearer.token.endpoint.url</td> - <td>The URL for the OAuth/OIDC identity provider.</td> - <td></td> - </tr> - <tr> - <td>sasl.oauthbearer.scope.claim.name</td> - <td>The OAuth claim for the scope is often named “scope”, but this (optional) setting can provide a different name to use for the scope included in the JWT payload’s claims if the OAuth/OIDC provider uses a different name for that claim.</td> - <td></td> - </tr> - <tr> - 
<td>sasl.login.callback.handler.class</td>
        <td>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface.</td>
        <td></td>
    </tr>
    <tr>
        <td>sasl.login.connect.timeout.ms</td>
        <td>The (optional) value in milliseconds for the external authentication provider connection timeout.</td>
        <td></td>
    </tr>
    <tr>
        <td>sasl.login.read.timeout.ms</td>
        <td>The (optional) value in milliseconds for the external authentication provider read timeout.</td>
        <td></td>
    </tr>
    <tr>
        <td>sasl.login.retry.backoff.ms</td>
        <td>The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider.</td>
        <td></td>
    </tr>
    <tr>
        <td>sasl.login.retry.backoff.max.ms</td>
        <td>The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider.</td>
        <td></td>
    </tr>
    <tr>
        <td>kafka.header.prefix</td>
        <td>The prefix for Kafka headers.</td>
        <td></td>
    </tr>
</table>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-example.md b/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-example.md
deleted file mode 100644
index 78347b7049..0000000000
--- a/en/docs/reference/connectors/kafka-connector/kafka-inbound-endpoint-example.md
+++ /dev/null
@@ -1,155 +0,0 @@
# Kafka Inbound Endpoint Example

The Kafka inbound endpoint acts as a message consumer. It creates a connection to ZooKeeper and requests messages for one or more topics, or for topic filters.

## What you'll build
This sample demonstrates how one-way message bridging from Kafka to HTTP can be done using the inbound Kafka endpoint.
See [Configuring Kafka Inbound Endpoint]({{base_path}}/reference/connectors/kafka-connector/kafka-inbound-endpoint-config/) for more information.

The following diagram illustrates all the required functionality of the Kafka service that you are going to build. In this example, you only need to consider the message consuming scenario.

<img src="{{base_path}}/assets/img/integrate/connectors/kafkainboundendpoint.png" title="Kafka inbound endpoint" width="800" alt="Kafka inbound endpoint"/>

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Set up Kafka

Before you begin, set up Kafka by following the instructions in [Setting up Kafka]({{base_path}}/reference/connectors/kafka-connector/setting-up-kafka/).

## Configure inbound endpoint using WSO2 Integration Studio

1. Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as shown below.
<img src="{{base_path}}/assets/img/integrate/connectors/solution-project.jpg" title="Creating a new Integration Project" width="800" alt="Creating a new Integration Project" />

2. Right click on **Source** -> **main** -> **synapse-config** -> **inbound-endpoints** and add a new **custom inbound endpoint**.</br>
<img src="{{base_path}}/assets/img/integrate/connectors/db-event-inbound-ep.png" title="Creating inbound endpoint" width="400" alt="Creating inbound endpoint" style="border:1px solid black"/>

3. Click on **Inbound Endpoint** in the design view and under the `properties` tab, update the class name to `org.wso2.carbon.inbound.kafka.KafkaMessageConsumer`.

4. Navigate to the source view and update it with the following configuration as required.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <inboundEndpoint name="KAFKAListenerEP" sequence="kafka_process_seq" onError="fault" class="org.wso2.carbon.inbound.kafka.KafkaMessageConsumer" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
        <parameters>
            <parameter name="sequential">true</parameter>
            <parameter name="interval">10</parameter>
            <parameter name="coordination">true</parameter>
            <parameter name="inbound.behavior">polling</parameter>
            <parameter name="value.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
            <parameter name="topic.name">test</parameter>
            <parameter name="poll.timeout">100</parameter>
            <parameter name="bootstrap.servers">localhost:9092</parameter>
            <parameter name="group.id">hello</parameter>
            <parameter name="contentType">application/json</parameter>
            <parameter name="key.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
        </parameters>
    </inboundEndpoint>
    ```
    Sequence to process the message:

    In this example, for simplicity, we will just log the message, but in a real-world use case this can be any type of message mediation.

    ```xml
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <sequence xmlns="http://ws.apache.org/ns/synapse" name="kafka_process_seq">
        <log level="full"/>
        <log level="custom">
            <property xmlns:ns="http://org.apache.synapse/xsd" name="partitionNo" expression="get-property('partitionNo')"/>
        </log>
        <log level="custom">
            <property xmlns:ns="http://org.apache.synapse/xsd" name="messageValue" expression="get-property('messageValue')"/>
        </log>
        <log level="custom">
            <property xmlns:ns="http://org.apache.synapse/xsd" name="offset" expression="get-property('offset')"/>
        </log>
    </sequence>
    ```

## Exporting Integration Logic as a CApp

**CApp (Carbon Application)** is the deployable artefact on the integration runtime. Let us see how we can export the integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` needs to be created. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can specifically create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**.

1. Right click on the Composite Application Project and click on **Export Composite Application Project**.</br>
  <img src="{{base_path}}/assets/img/integrate/connectors/capp-project1.jpg" title="Export as a Carbon Application" width="300" alt="Export as a Carbon Application" />

2. Select an **Export Destination** where you want to save the .car file.

3. In the next **Create a deployable CAR file** screen, select the inbound endpoint and sequence artifacts and click **Finish**. The CApp is created at the location you specified in the previous step.

## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/kafka-connector.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

## Deployment

1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `Kafka`. Click on `Kafka Inbound Endpoint` and download the .jar file by clicking on `Download Inbound Endpoint`. Copy this .jar file into the `<PRODUCT-HOME>/lib` folder.

2. Copy the exported carbon application to the `<PRODUCT-HOME>/repository/deployment/server/carbonapps` folder.

3. Start the integration server.

## Testing

   **Sample request**

   Run the following on the Kafka command line to create a topic named `test` with a single partition and only one replica:
   ```bash
   bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
   ```
   Run the following on the Kafka command line to send a message to the Kafka brokers. You can also use the WSO2 Kafka Producer connector to send the message to the Kafka brokers.
   ```bash
   bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
   ```
   Executing the above command will open up the console producer. Send the following message using the console:
   ```json
   {"test":"wso2"}
   ```
   **Expected response**

   You can see the following message content in the Micro Integrator:

   ```
   [2020-02-19 12:39:59,331]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: d130fb8f-5d77-43f8-b6e0-85b98bf0f8c1, Direction: request, Payload: {"test":"wso2"}
   [2020-02-19 12:39:59,335]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - partitionNo = 0
   [2020-02-19 12:39:59,336]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - messageValue = {"test":"wso2"}
   [2020-02-19 12:39:59,336]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - offset = 6
   ```
   The Kafka inbound endpoint gets the messages from the Kafka brokers and logs them in the Micro Integrator.

## Configure inbound endpoint with Kafka Avro message
You can set up the WSO2 Micro Integrator inbound endpoint with the Kafka Avro messaging format as well. Follow the instructions in [Setting up Kafka]({{base_path}}/reference/connectors/kafka-connector/setting-up-kafka/) to set up Kafka for the Micro Integrator. In the inbound endpoint XML configuration, change the `value.deserializer` parameter to `io.confluent.kafka.serializers.KafkaAvroDeserializer` and the `key.deserializer` parameter to `io.confluent.kafka.serializers.KafkaAvroDeserializer`. Add a new parameter, `schema.registry.url`, and specify the schema registry URL there.
The following is the modified sample of the Kafka inbound endpoint:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<inboundEndpoint name="KAFKAListenerEP" sequence="kafka_process_seq" onError="fault" class="org.wso2.carbon.inbound.kafka.KafkaMessageConsumer" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
    <parameters>
        <parameter name="sequential">true</parameter>
        <parameter name="interval">10</parameter>
        <parameter name="coordination">true</parameter>
        <parameter name="inbound.behavior">polling</parameter>
        <parameter name="value.deserializer">io.confluent.kafka.serializers.KafkaAvroDeserializer</parameter>
        <parameter name="topic.name">test</parameter>
        <parameter name="poll.timeout">100</parameter>
        <parameter name="bootstrap.servers">localhost:9092</parameter>
        <parameter name="group.id">hello</parameter>
        <parameter name="contentType">text/plain</parameter>
        <parameter name="key.deserializer">io.confluent.kafka.serializers.KafkaAvroDeserializer</parameter>
        <parameter name="schema.registry.url">http://localhost:8081/</parameter>
    </parameters>
</inboundEndpoint>
```

Add the following configurations when the Confluent Schema Registry is secured with basic auth:
```xml
<parameter name="basic.auth.credentials.source">source_of_basic_auth_credentials</parameter>
<parameter name="basic.auth.user.info">username:password</parameter>
```
Make sure to start the Kafka Schema Registry before starting up the Micro Integrator.

## What's next

* To customize this example for your own scenario, see the [Kafka Inbound Endpoint Configuration]({{base_path}}/reference/connectors/kafka-connector/kafka-inbound-endpoint-config/) documentation.
diff --git a/en/docs/reference/connectors/kafka-connector/setting-up-kafka.md b/en/docs/reference/connectors/kafka-connector/setting-up-kafka.md
deleted file mode 100644
index 399eedc40b..0000000000
--- a/en/docs/reference/connectors/kafka-connector/setting-up-kafka.md
+++ /dev/null
@@ -1,75 +0,0 @@
# Setting up Kafka

## For connector version 3.2.0 and later

To use the Kafka connector, download and install [Apache Kafka](http://kafka.apache.org/downloads.html). Before you start configuring Kafka, you also need the integration runtime; we refer to that location as `<PRODUCT_HOME>`.

> **Note**: The recommended version is Kafka 2.12-2.8.2. For all available versions of Kafka that you can download, see https://kafka.apache.org/downloads. The recommended Java version is 11.

To configure the Kafka connector, copy the following client libraries from the `<KAFKA_HOME>/lib` directory to the `<MI_HOME>/lib` directory.

* [kafka_2.12-2.8.2.jar](https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.12/2.8.2)
* [kafka-clients-2.8.2.jar](https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients/2.8.2)
* [metrics-core-2.2.0.jar](https://mvnrepository.com/artifact/com.yammer.metrics/metrics-core/2.2.0)
* [scala-library-2.12.13.jar](https://mvnrepository.com/artifact/org.scala-lang/scala-library/2.12.13)
* [zkclient-0.10.jar](https://mvnrepository.com/artifact/com.101tec/zkclient/0.10)
* [zookeeper-3.5.9.jar](https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper/3.5.9)

Copy the following additional client libraries to the `<MI_HOME>/lib` directory (these can be copied from the Confluent platform):

* [avro-1.11.3.jar](https://mvnrepository.com/artifact/org.apache.avro/avro/1.11.3)
* [common-config-5.4.0.jar](https://mvnrepository.com/artifact/io.confluent/common-config/5.4.0)
* [common-utils-5.4.0.jar](https://mvnrepository.com/artifact/io.confluent/common-utils/5.4.0)
* [kafka-avro-serializer-5.3.0.jar](https://mvnrepository.com/artifact/io.confluent/kafka-avro-serializer/5.3.0)
* [kafka-schema-registry-client-5.3.0.jar](https://mvnrepository.com/artifact/io.confluent/kafka-schema-registry-client/5.3.0)

Navigate to `<KAFKA_HOME>` and run the following command to start the ZooKeeper server:

```bash
bin/zookeeper-server-start.sh config/zookeeper.properties
```

From the `<KAFKA_HOME>` directory, run the following command to start the Kafka server:

```bash
bin/kafka-server-start.sh config/server.properties
```

Now that you have connected to Kafka, you can start publishing and consuming messages using the Kafka brokers. For more information, see [Publishing Messages using Kafka]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-producer-example/) and [Consuming Messages using Kafka]({{base_path}}/reference/connectors/kafka-connector/kafka-inbound-endpoint-example/).

## For connector version 3.1.x and below

To use the Kafka connector, download and install [Apache Kafka](http://kafka.apache.org/downloads.html). Before you start configuring Kafka, you also need the integration runtime; we refer to that location as `<PRODUCT_HOME>`.

> **Note**: The recommended version is Kafka_2.11-2.2.1. For all available versions of Kafka that you can download, see https://kafka.apache.org/downloads. The recommended Java version is 1.8.

To configure the Kafka connector, copy the following client libraries from the `<KAFKA_HOME>/lib` directory to the `<MI_HOME>/lib` directory.

* [kafka_2.11-2.2.1.jar](https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11/2.2.1)
* [kafka-clients-1.0.0.jar](https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients/1.0.0)
* [metrics-core-2.2.0.jar](https://mvnrepository.com/artifact/com.yammer.metrics/metrics-core/2.2.0)
* [scala-library-2.12.3.jar](https://mvnrepository.com/artifact/org.scala-lang/scala-library/2.12.3)
* [zkclient-0.10.jar](https://mvnrepository.com/artifact/com.101tec/zkclient/0.10)
* [zookeeper-3.4.10.jar](https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper/3.4.10)

Copy the following additional client libraries to the `<MI_HOME>/lib` directory (these can be copied from the Confluent platform):

* [avro-1.8.1.jar](https://mvnrepository.com/artifact/org.apache.avro/avro/1.8.1)
* [common-config-5.4.0.jar](https://mvnrepository.com/artifact/io.confluent/common-config/5.4.0)
* [common-utils-5.4.0.jar](https://mvnrepository.com/artifact/io.confluent/common-utils/5.4.0)
* [kafka-avro-serializer-5.3.0.jar](https://mvnrepository.com/artifact/io.confluent/kafka-avro-serializer/5.3.0)
* [kafka-schema-registry-client-5.3.0.jar](https://mvnrepository.com/artifact/io.confluent/kafka-schema-registry-client/5.3.0)

Navigate to `<KAFKA_HOME>` and run the following command to start the ZooKeeper server:

```bash
bin/zookeeper-server-start.sh config/zookeeper.properties
```

From the `<KAFKA_HOME>` directory, run the following command to start the Kafka server:

```bash
bin/kafka-server-start.sh config/server.properties
```

Now that you have connected to Kafka, you can start publishing and consuming messages using the Kafka brokers. For more information, see [Publishing Messages using Kafka]({{base_path}}/reference/connectors/kafka-connector/kafka-connector-producer-example/) and [Consuming Messages using Kafka]({{base_path}}/reference/connectors/kafka-connector/kafka-inbound-endpoint-example/).
diff --git a/en/docs/reference/connectors/ldap-connector/ldap-connector-example.md b/en/docs/reference/connectors/ldap-connector/ldap-connector-example.md
deleted file mode 100644
index 4ae7bb3a65..0000000000
--- a/en/docs/reference/connectors/ldap-connector/ldap-connector-example.md
+++ /dev/null
@@ -1,217 +0,0 @@
# LDAP Connector Example

Given below is a sample scenario that demonstrates how to perform CRUD operations on LDAP entries using the LDAP Connector.

## What you'll build

This example demonstrates how to use the LDAP connector to create and read LDAP entries in a student directory.
 ![image]({{base_path}}/assets/img/integrate/connectors/ldap_connector/ldap_connector_usecase.png)

This will have two API resources: `create` and `search`.

* `/create` : This creates a new LDAP entry in the LDAP server.

* `/search` : This performs a search for one or more LDAP entities with the specified search keys.

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.

## Configure the connector in WSO2 Integration Studio

Before you begin, see [Setting up LDAP]({{base_path}}/reference/connectors/ldap-connector/setting-up-ldap/) if you need to set up an LDAP server to try this out.

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

1. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API.

2. Provide the API name as `college_student_api` and the API context as `/student`. You can go to the source view of the XML configuration file of the API and copy the following configuration.
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <api context="/student" name="college_student_api" xmlns="http://ws.apache.org/ns/synapse">
        <resource methods="POST" url-mapping="/create">
            <inSequence>
                <sequence key="init_sequence"/>
                <sequence key="add_student_sequence"/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/search">
            <inSequence>
                <sequence key="init_sequence"/>
                <sequence key="search_student_sequence"/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
    </api>
    ```

3. Right click on the created Integration Project and select **New** -> **Sequence** to create the following sequences.

    * init_sequence - The `<ldap.init>` element authenticates with the LDAP server in order to gain access to perform various LDAP operations.
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="init_sequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
        <property expression="json-eval($.secureConnection)" name="secureConnection" scope="default" type="STRING"/>
        <property expression="json-eval($.disableSSLCertificateChecking)" name="disableSSLCertificateChecking" scope="default" type="STRING"/>
        <property expression="json-eval($.providerUrl)" name="providerUrl" scope="default" type="STRING"/>
        <property expression="json-eval($.securityPrincipal)" name="securityPrincipal" scope="default" type="STRING"/>
        <property expression="json-eval($.securityCredentials)" name="securityCredentials" scope="default" type="STRING"/>
        <ldap.init>
            <providerUrl>{$ctx:providerUrl}</providerUrl>
            <securityPrincipal>{$ctx:securityPrincipal}</securityPrincipal>
            <securityCredentials>{$ctx:securityCredentials}</securityCredentials>
            <secureConnection>{$ctx:secureConnection}</secureConnection>
            <disableSSLCertificateChecking>{$ctx:disableSSLCertificateChecking}</disableSSLCertificateChecking>
        </ldap.init>
    </sequence>
    ```

    * add_student_sequence - The `<ldap.addEntry>` element creates a new LDAP entry in the LDAP server.
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="add_student_sequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
        <property expression="json-eval($.content.objectClass)" name="objectClass" scope="default" type="STRING"/>
        <property expression="json-eval($.content.attributes)" name="attributes" scope="default" type="STRING"/>
        <property expression="json-eval($.content.dn)" name="dn" scope="default" type="STRING"/>
        <ldap.addEntry>
            <objectClass>{$ctx:objectClass}</objectClass>
            <attributes>{$ctx:attributes}</attributes>
            <dn>{$ctx:dn}</dn>
        </ldap.addEntry>
        <respond/>
    </sequence>
    ```

    * search_student_sequence - The `<ldap.searchEntry>` element searches for one or more LDAP entities based on the specified search keys.
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="search_student_sequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
        <property expression="json-eval($.content.objectClass)" name="objectClass" scope="default" type="STRING"/>
        <property expression="json-eval($.content.filters)" name="filters" scope="default" type="STRING"/>
        <property expression="json-eval($.content.attributes)" name="attributes" scope="default" type="STRING"/>
        <property expression="json-eval($.content.dn)" name="dn" scope="default" type="STRING"/>
        <ldap.searchEntry>
            <objectClass>{$ctx:objectClass}</objectClass>
            <limit>1000</limit>
            <filters>{$ctx:filters}</filters>
            <dn>{$ctx:dn}</dn>
            <attributes>{$ctx:attributes}</attributes>
        </ldap.searchEntry>
        <respond/>
    </sequence>
    ```

{!includes/reference/connectors/exporting-artifacts.md!}

## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/ldap_connector_project.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

### Create an entry in the LDAP server

1. Create a file named student_data.json with the following sample payload.
    ```json
    {
        "providerUrl":"ldap://localhost:10389/",
        "securityPrincipal":"uid=admin,ou=system",
        "securityCredentials":"admin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "content":{
            "objectClass":"identityPerson",
            "dn":"uid=triss.merigold,ou=Users,dc=wso2,dc=org",
            "attributes":{
                "mail":"triss@wso2.com",
                "userPassword":"geralt&triss",
                "sn":"dim",
                "cn":"dim",
                "manager":"cn=geralt,ou=Groups,dc=example,dc=com"
            }
        }
    }
    ```

2. Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
    ```
    curl -H "Content-Type: application/json" -X POST --data @student_data.json http://localhost:8290/student/create
    ```

**Expected Response**:
1. You should get a 'Success' response.
2. Open Apache Directory Studio; the DIT (Directory Information Tree) category shows the hierarchical content of the directory. Expand and collapse the tree to see the new entries. Select an entry to see its attributes and values in the Entry Editor.
    ![image]({{base_path}}/assets/img/integrate/connectors/ldap_connector/ldap-connector-directory-studio-view.png)

### Search LDAP entries

1. Create a file named search_student.json with the following sample payload.
    ```json
    {
        "providerUrl": "ldap://localhost:10389/",
        "securityPrincipal": "uid=admin,ou=system",
        "securityCredentials": "admin",
        "secureConnection": "false",
        "disableSSLCertificateChecking": "false",
        "application": "ldap",
        "operation": "searchEntity",
        "content": {
            "objectClass": "identityPerson",
            "filters": {
                "manager": "cn=geralt,ou=Groups,dc=example,dc=com"
            },
            "dn": "ou=Users,dc=wso2,dc=org",
            "attributes": "mail,uid"
        }
    }
    ```

2. Invoke the API as shown below using the curl command.
    ```
    curl -H "Content-Type: application/json" -X POST --data @search_student.json http://localhost:8290/student/search
    ```

**Expected Response**:
You should get all entries that match the provided filter. A sample response is as follows.
```json
    {
        "result": {
            "entry": [
                {
                    "dn": "uid=triss.merigold,ou=Users,dc=WSO2,dc=ORG",
                    "mail": "triss@wso2.com",
                    "uid": "triss.merigold"
                },
                {
                    "dn": "uid=yennefer.of.vengerberg,ou=Users,dc=WSO2,dc=ORG",
                    "mail": "yenna@wso2.com",
                    "uid": "yennefer.of.vengerberg"
                }
            ]
        }
    }
```
## What's Next

* To customize this example for your own scenario, see the [LDAP Connector Configuration]({{base_path}}/reference/connectors/ldap-connector/ldap-server-configuration/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/ldap-connector/ldap-connector-overview.md b/en/docs/reference/connectors/ldap-connector/ldap-connector-overview.md
deleted file mode 100644
index 27a7634388..0000000000
--- a/en/docs/reference/connectors/ldap-connector/ldap-connector-overview.md
+++ /dev/null
@@ -1,35 +0,0 @@
# LDAP Connector Overview

The LDAP connector allows you to connect to any LDAP server through a simple web services interface and perform CRUD
(Create, Read, Update, Delete) operations on LDAP entries. This connector uses the Java JNDI APIs to connect to a
required LDAP server.

To see the available LDAP connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "LDAP".

<img src="{{base_path}}/assets/img/integrate/connectors/ldap-store.png" title="LDAP Connector Store" width="200" alt="LDAP Connector Store"/>

## Compatibility

| Connector version | Supported product versions |
| ------------- |------------- |
| 1.0.11 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0, EI 6.4.0 |

For older versions, see the details in the connector store.

## LDAP Connector documentation

* **[Setting up an LDAP Server]({{base_path}}/reference/connectors/ldap-connector/setting-up-ldap/)**: This involves setting up an LDAP server.

* **[LDAP Connector Example]({{base_path}}/reference/connectors/ldap-connector/ldap-connector-example/)**: This example demonstrates how to use the LDAP connector to create and read LDAP entries in a student directory.

* **[LDAP Connector Reference]({{base_path}}/reference/connectors/ldap-connector/ldap-server-configuration/)**: This documentation provides a reference guide for the LDAP Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, please create a pull request in the following repository.

* [LDAP Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-ldap)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/ldap-connector/ldap-server-configuration.md b/en/docs/reference/connectors/ldap-connector/ldap-server-configuration.md
deleted file mode 100644
index 3c82512e75..0000000000
--- a/en/docs/reference/connectors/ldap-connector/ldap-server-configuration.md
+++ /dev/null
@@ -1,424 +0,0 @@
# LDAP Connector Reference

To use the LDAP connector, add the `<ldap.init>` element in your configuration before carrying out any other LDAP operations.

??? note "ldap.init"
    The ldap.init operation initializes the connector to interact with an LDAP server.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>providerUrl</td>
            <td>The URL of the LDAP server.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>securityPrincipal</td>
            <td>The Distinguished Name (DN) of the admin of the LDAP Server.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>securityCredentials</td>
            <td>The password of the LDAP admin.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>secureConnection</td>
            <td>A boolean value indicating whether to use a secure connection.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>disableSSLCertificateChecking</td>
            <td>A boolean value indicating whether SSL certificate checking should be disabled.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**
    ```xml
    <ldap.init>
        <providerUrl>{$ctx:providerUrl}</providerUrl>
        <securityPrincipal>{$ctx:securityPrincipal}</securityPrincipal>
        <securityCredentials>{$ctx:securityCredentials}</securityCredentials>
        <secureConnection>{$ctx:secureConnection}</secureConnection>
        <disableSSLCertificateChecking>{$ctx:disableSSLCertificateChecking}</disableSSLCertificateChecking>
    </ldap.init>
    ```


You can follow the steps below to import your LDAP certificate into the Micro Integrator client's keystore:

1. To encrypt the connections, you need to configure a certificate authority (https://www.digitalocean.com/community/tutorials/how-to-encrypt-openldap-connections-using-starttls)
and use it to sign the keys for the LDAP server.
2. Use the following command to import the certificate into the integration server's client keystore.
    ```bash
    keytool -importcert -file <certificate file> -keystore <PRODUCT_HOME>/repository/resources/security/client-truststore.jks -alias "LDAP"
    ```
3. Restart the server and deploy the LDAP configuration.

**Ensuring secure data**

Secure Vault is supported for encrypting passwords. See [Working with Secrets]({{base_path}}/install-and-setup/install-and-setup-overview/encrypting_plain_text) for instructions on integrating and using Secure Vault.

**Re-using LDAP configurations**

You can save the LDAP configuration as a [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries) and then easily reference it with the configKey attribute in your operations. For example, if you saved the above **<ldap.init>** entry as a local entry named MyLDAPConfig, you could reference it from an operation like addEntry as follows:

```xml
<ldap.addEntry configKey="MyLDAPConfig"/>
```

---

### User authentication

??? note "authenticate"
    LDAP authentication is a major requirement in most LDAP-based applications. The authenticate operation simplifies the LDAP authentication mechanism. This operation authenticates the provided Distinguished Name (DN) and password against the LDAP server, and returns a success or failure response depending on whether the authentication succeeded.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>dn</td>
            <td>The distinguished name of the user.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>password</td>
            <td>The password of the user.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <ldap.authenticate>
        <dn>{$ctx:dn}</dn>
        <password>{$ctx:password}</password>
    </ldap.authenticate>
    ```

    **Sample request**

    ```json
    {
        "providerUrl":"ldap://localhost:10389/",
        "securityPrincipal":"cn=admin,dc=wso2,dc=com",
        "securityCredentials":"comadmin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "application": "ldap",
        "operation":"authenticate",
        "content":{
            "dn":"uid=testDim20,ou=staff,dc=wso2,dc=com",
            "password":"12345"
        }
    }
    ```

    **Authentication success response**

    ```xml
    <Response xmlns="http://localhost/services/ldap">
        <result>
            <message>Success</message>
        </result>
    </Response>
    ```

    **Authentication failure response**

    ```xml
    <Response xmlns="http://localhost/services/ldap">
        <result>
            <message>Fail</message>
        </result>
    </Response>
    ```

    **Error codes**

    This section describes the connector error codes and their meanings.

    | Error Code | Description |
    | ------------- | ------------- |
    | 7000001 | An error occurred while searching an LDAP entry. |
    | 7000002 | LDAP root user's credentials are invalid. |
    | 7000003 | An error occurred while adding a new LDAP entry. |
    | 7000004 | An error occurred while updating an existing LDAP entry. |
    | 7000005 | An error occurred while deleting an LDAP entry. |
    | 7000006 | The LDAP entry that is required to perform the operation does not exist. |

    **Sample error response**

    ```xml
    <Fault xmlns="http://localhost/services/ldap">
        <error>
            <errorCode>700000X</errorCode>
            <errorMessage>Error Message</errorMessage>
        </error>
    </Fault>
    ```

### CRUD operations

??? note "addEntry"
    The addEntry operation creates a new LDAP entry in the LDAP server.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>objectClass</td>
            <td>The object class of the new entry.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>dn</td>
            <td>The distinguished name of the new entry. This should be a unique DN that does not already exist in the LDAP server.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>attributes</td>
            <td>Attributes of the entry other than the DN, specified as comma-separated key-value pairs.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <ldap.addEntry>
        <objectClass>{$ctx:objectClass}</objectClass>
        <dn>{$ctx:dn}</dn>
        <attributes>{$ctx:attributes}</attributes>
    </ldap.addEntry>
    ```

    **Sample request**

    ```json
    {
        "providerUrl":"ldap://localhost:10389/",
        "securityPrincipal":"cn=admin,dc=wso2,dc=com",
        "securityCredentials":"comadmin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "application":"ldap",
        "operation":"createEntity",
        "content":{
            "objectClass":"inetOrgPerson",
            "dn":"uid=testDim20,ou=staff,dc=wso2,dc=com",
            "attributes":{
                "mail":"testDim1s22c@wso2.com",
                "userPassword":"12345",
                "sn":"dim",
                "cn":"dim",
                "manager":"cn=dimuthuu,ou=Groups,dc=example,dc=com"
            }
        }
    }
    ```

??? note "searchEntry"
    The searchEntry operation performs a search for one or more LDAP entities based on the specified search keys.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>objectClass</td>
            <td>The object class of the entries to search for.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>filters</td>
            <td>The keywords to use in the search. The parameters should be in JSON format as follows:
            "filters":{ "uid":"john", "mail":"testDim2@gmail.com"}
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>dn</td>
            <td>The distinguished name of the entry you need to search.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>attributes</td>
            <td>The attributes of the LDAP entry that should be included in the search result.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>onlyOneReference</td>
            <td>A boolean value indicating whether to guarantee that only one matching entry is returned.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>limit</td>
            <td>This allows you to set a limit on the number of search results. If this property is not defined, the maximum number of search results is returned.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <ldap.searchEntry>
        <objectClass>{$ctx:objectClass}</objectClass>
        <dn>{$ctx:dn}</dn>
        <filters>{$ctx:filters}</filters>
        <attributes>{$ctx:attributes}</attributes>
        <onlyOneReference>{$ctx:onlyOneReference}</onlyOneReference>
        <limit>1000</limit>
    </ldap.searchEntry>
    ```

    **Sample request**

    ```json
    {
        "providerUrl":"ldap://server.example.com",
        "securityPrincipal":"cn=admin,dc=example,dc=com",
        "securityCredentials":"admin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "application":"ldap",
        "operation":"searchEntity",
        "content":{
            "dn":"ou=sales,dc=example,dc=com",
            "objectClass":"inetOrgPerson",
            "attributes":"mail,uid,givenName,manager,objectGUID",
            "filters":{
                "manager":"cn=sales-group,ou=sales,dc=example,dc=com","uid":"rajjaz","createTimestamp >":"20210412000000.0Z"},
            "onlyOneReference":"false"
        }
    }
    ```

??? note "updateEntry"
    The updateEntry operation updates an existing LDAP entry in the LDAP server based on the specified changes.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>mode</td>
            <td>The mode of the update operation. Possible values are as follows:
                <ul>
                    <li>replace : Replaces an existing attribute with the new attribute that is specified.</li>
                    <li>add : Adds a new attribute.</li>
                    <li>remove : Removes an existing attribute.</li>
                </ul>
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>dn</td>
            <td>The distinguished name of the entry.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>attributes</td>
            <td>Attributes of the entry to be updated.
            The attributes to be updated should be specified as comma-separated key-value pairs.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <ldap.updateEntry>
        <mode>{$ctx:mode}</mode>
        <dn>{$ctx:dn}</dn>
        <attributes>{$ctx:attributes}</attributes>
    </ldap.updateEntry>
    ```

    **Sample request**

    ```json
    {
        "providerUrl":"ldap://server.example.com",
        "securityPrincipal":"cn=admin,dc=example,dc=com",
        "securityCredentials":"admin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "application":"ldap",
        "operation":"updateEntity",
        "content":{
            "mode":"replace",
            "dn":"uid=testDim20,ou=staff,dc=wso2,dc=com",
            "attributes":{
                "mail":"updatedMail@wso2.com"
            }
        }
    }
    ```

??? note "deleteEntry"
    The deleteEntry operation deletes an existing LDAP entry from the LDAP server.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>dn</td>
            <td>The distinguished name of the entry to be deleted.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <ldap.deleteEntry>
        <dn>{$ctx:dn}</dn>
    </ldap.deleteEntry>
    ```

    **Sample request**

    ```json
    {
        "providerUrl":"ldap://server.example.com",
        "securityPrincipal":"cn=admin,dc=example,dc=com",
        "securityCredentials":"admin",
        "secureConnection":"false",
        "disableSSLCertificateChecking":"false",
        "application":"ldap",
        "operation":"deleteEntity",
        "content":{
            "dn":"uid=testDim20,ou=staff,dc=wso2,dc=com"
        }
    }
    ```
diff --git a/en/docs/reference/connectors/ldap-connector/setting-up-ldap.md b/en/docs/reference/connectors/ldap-connector/setting-up-ldap.md
deleted file mode 100644
index 820e38a779..0000000000
--- a/en/docs/reference/connectors/ldap-connector/setting-up-ldap.md
+++ /dev/null
@@ -1,16 +0,0 @@
# Setting up an LDAP Server

WSO2 Identity Server offers an embedded LDAP as a primary user store. Download Identity Server from [here](https://wso2.com/identity-and-access-management/) and start the server. See the [Quick Start Guide](https://is.docs.wso2.com/en/5.10.0/get-started/quick-start-guide/) for more information.

### Apache Directory Studio

1. Download Apache Directory Studio from [here](http://directory.apache.org/studio/) and open it.
2. Right click on the LDAP Servers tab found on the bottom left corner and click **New Connection**.<br>
    <img src="{{base_path}}/assets/img/integrate/connectors/ldap_connector/ds_create_new_connection.png" title="LDAP new connection" width="400" alt="LDAP new connection"/>
3. Configure the network parameters as follows and click **Next**.<br>
    <img src="{{base_path}}/assets/img/integrate/connectors/ldap_connector/creating_a_new_connection.png" title="LDAP new connection" width="600" alt="LDAP new connection"/>
4. Provide the authentication parameters as follows and click **Finish**.
    * Bind DN or user parameter - **uid=admin,ou=system**
    * Bind password - **admin**
5. Right click on the newly created connection and select **Open Connection**.<br>
    <img src="{{base_path}}/assets/img/integrate/connectors/ldap_connector/open_connection.png" title="LDAP new connection" width="400" alt="LDAP new connection"/>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-connector-example.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-connector-example.md
deleted file mode 100644
index a03bef3a44..0000000000
--- a/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-connector-example.md
+++ /dev/null
@@ -1,334 +0,0 @@
# Microsoft Azure Storage Connector Example

Given below is a sample scenario that demonstrates how to work with container and blob operations using the WSO2 Microsoft Azure Storage Connector.

## What you'll build

This example demonstrates how to use the Microsoft Azure Storage connector to:

1. Create a container (a location for storing employee details) in a Microsoft Azure Storage account.
2. Retrieve information about the created containers.
3. Upload text or binary employee details (blob) into the container.
4. Retrieve information about the uploaded employee details (blob).
5. Remove uploaded employee details (blob).
6. Remove the created container.
7. Retrieve the metadata from a specific file (blob).

All seven operations are exposed via an API. The API with the context `/resources` has seven resources:

* `/createcontainer` : Creates a new container in the Microsoft Azure Storage account with the specified container name for storing employee details.
* `/listcontainer` : Retrieves information about the created containers from the Microsoft Azure Storage account.
* `/adddetails` : Uploads text or binary employee data (blob) and stores it in the specified container.
* `/listdetails` : Retrieves information about the added employee data (blobs).
* `/deletedetails` : Removes the specified text or binary employee data (blob).
* `/deletecontainer` : Removes the created container from the Microsoft Azure Storage account.
* `/listmetadata` : Retrieves the metadata from a file (blob) stored in the Microsoft Azure Storage container.

For more information about these operations, please refer to the [Microsoft Azure Storage connector reference guide]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-reference/).

> **Note**: Before invoking the API, you need to create a **Storage Account** in your **Microsoft Azure Storage account**. See the [Azure Storage Configuration]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration/) documentation for more information.

The following diagram shows the overall solution. The user creates a container, stores text or binary employee data (blob) in the container, and then retrieves the data or the blob metadata. To invoke each operation, the user uses the same API.

<img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-connector.png" title="Microsoft Azure Storage Connector" width="800" alt="Microsoft Azure Storage Connector"/>

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
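
If you use the Azure CLI, you can also look up the access key that is used as the `accountKey` value in the configuration below from the command line. The following is a minimal sketch, assuming the Azure CLI is installed and the storage account already exists; `<resource-group>` and `<account-name>` are placeholders for your own values.

```bash
# List the access keys of the storage account and print the first key value.
# <resource-group> and <account-name> are placeholders for your own values.
az storage account keys list \
  --resource-group <resource-group> \
  --account-name <account-name> \
  --query "[0].value" \
  --output tsv
```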

## Configure the connector in WSO2 Integration Studio

Follow these steps to set up the ESB Solution Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

1. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API.

2. Specify the API name as `MSAzureStorage` and the API context as `/resources`. You can go to the XML configuration of the API (source view) and copy the following configuration.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <api context="/resources" name="MSAzureStorage" xmlns="http://ws.apache.org/ns/synapse">
        <resource methods="POST" url-mapping="/createcontainer">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.createContainer>
                    <containerName>{$ctx:containerName}</containerName>
                </msazurestorage.createContainer>
                <log level="full">
                    <property name="Container created" value="Container created"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/listcontainer">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.listContainers/>
                <log level="full">
                    <property name="List containers" value="List containers"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/adddetails">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <property expression="json-eval($.fileName)" name="fileName" scope="default" type="STRING"/>
                <property expression="json-eval($.filePath)" name="filePath" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.uploadBlob>
                    <containerName>{$ctx:containerName}</containerName>
                    <filePath>{$ctx:filePath}</filePath>
                    <fileName>{$ctx:fileName}</fileName>
                </msazurestorage.uploadBlob>
                <log level="full">
                    <property name="Uploaded employee details" value="Uploaded employee details"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/listdetails">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.listBlobs>
                    <containerName>{$ctx:containerName}</containerName>
                </msazurestorage.listBlobs>
                <log level="full">
                    <property name="List uploaded employee details" value="List uploaded employee details"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/deletedetails">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <property expression="json-eval($.fileName)" name="fileName" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.deleteBlob>
                    <containerName>{$ctx:containerName}</containerName>
                    <fileName>{$ctx:fileName}</fileName>
                </msazurestorage.deleteBlob>
                <log level="full">
                    <property name="Delete selected employee details" value="Delete selected employee details"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/deletecontainer">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.deleteContainer>
                    <containerName>{$ctx:containerName}</containerName>
                </msazurestorage.deleteContainer>
                <log level="full">
                    <property name="Delete selected container" value="Delete selected container"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/listmetadata">
            <inSequence>
                <property expression="json-eval($.accountName)" name="accountName" scope="default" type="STRING"/>
                <property expression="json-eval($.accountKey)" name="accountKey" scope="default" type="STRING"/>
                <property expression="json-eval($.containerName)" name="containerName" scope="default" type="STRING"/>
                <property expression="json-eval($.fileName)" name="fileName" scope="default" type="STRING"/>
                <msazurestorage.init>
                    <accountName>eiconnectortest</accountName>
                    <accountKey>bWt69gFpheoD6lwVsMgeV5io2/KxlXK1KUcod68PhzuV1xHxje0LBD4Bd+y+ESAOlH5BTAfvdDG5q4Hhg==</accountKey>
                </msazurestorage.init>
                <msazurestorage.listMetadata>
                    <containerName>{$ctx:containerName}</containerName>
                    <fileName>{$ctx:fileName}</fileName>
                </msazurestorage.listMetadata>
                <log level="full">
                    <property name="list Metadata" value="list Metadata"/>
                </log>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
    </api>
    ```

**Note**: Please modify the following properties of the configuration as applicable.

* As `accountKey`, use the access key obtained when setting up the Microsoft Azure Storage account.
* As `accountName`, use the name of the **Storage Account** created in the Microsoft Azure Storage account.

Now we can export the imported connector and the API into a single CAR application. The CAR application is what we deploy to the server runtime.

{!includes/reference/connectors/exporting-artifacts.md!}

## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/ms-azure-connector.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

!!! tip
    You may need to update the value of the credentials and make other such changes before deploying and running this project.

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).

1. Create a new container in Microsoft Azure Storage for storing employee details.

    **Sample request**

    `curl -v POST -d {"containerName":"employeedetails"} "http://localhost:8290/resources/createcontainer" -H "Content-Type:application/json"`

    **Expected Response**

    `{
        "success": true
    }`

2. Retrieve information about the created containers.

    **Sample request**

    `curl -v POST -d {} "http://localhost:8290/resources/listcontainer" -H "Content-Type:application/json"`

    **Expected Response**

    It will retrieve all the existing container names.

    `{
        "result":{
            "container":[
                "employeedetails",
                "employeefinancedetails"
            ]
        }
    }`

3. Upload text or binary employee data (blob).

    **Sample request**

    `curl -v POST -d {"containerName": "employeedetails","fileName": "sample.txt","filePath": "/home/kasun/Documents/MSAZURESTORAGE/sample.txt"} "http://localhost:8290/resources/adddetails" -H "Content-Type:application/json"`

    **Please note :** /home/kasun/Documents/MSAZURESTORAGE/sample.txt should be a valid path to a text file containing employee information.

    **Expected Response**

    `{
        "success": true
    }`

4. Retrieve information about the uploaded text or binary employee data (blob).

    **Sample request**

    `curl -v POST -d {"containerName": "employeedetails"} "http://localhost:8290/resources/listdetails" -H "Content-Type:application/json"`

    **Expected Response**

    It will retrieve the uploaded text or binary name and the file path.

    `{
        "result": {
            "blob": "http://eiconnectortest.blob.core.windows.net/employeedetails/sample.txt"
        }
    }`

5. Remove uploaded employee details (blob).

    **Sample request**

    `curl -v POST -d {"containerName": "employeedetails","fileName": "sample.txt"} "http://localhost:8290/resources/deletedetails" -H "Content-Type:application/json"`

    **Expected Response**

    `{
        "success": true
    }`

6. Remove the created container.

    **Sample request**

    `curl -v POST -d {"containerName": "employeedetails"} "http://localhost:8290/resources/deletecontainer" -H "Content-Type:application/json"`

    **Expected Response**

    `{
        "success": true
    }`

7. Retrieve blob metadata.

    **Sample request**

    `curl --location --request POST 'http://localhost:8290/resources/listmetadata' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "containerName": "employeedetails",
        "fileName":"sample.pdf"
    }'`

    **Expected Response**

    `{
        "result": {
            "metadata": {
                "metadataParameter1": "value1",
                "metadataParameter2": "value2"
            }
        }
    }`
diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-reference.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-reference.md
deleted file mode 100644
index 123aab84cb..0000000000
--- a/en/docs/reference/connectors/microsoft-azure-storage-connector/1.x/microsoft-azure-storage-reference.md
+++ /dev/null
@@ -1,264 +0,0 @@
# Microsoft Azure Storage Connector Reference

The following operations allow you to work with the Microsoft Azure Storage Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the Microsoft Azure Storage connector, add the <msazurestorage.init> element in your configuration before carrying out any other Microsoft Azure Storage operations.

> **Note**: To work with the Microsoft Azure Storage connector, you need to have a Microsoft Azure account. If you do not have one, go to [https://azure.microsoft.com/en-in/free/](https://azure.microsoft.com/en-in/free/) and create a Microsoft Azure account.

??? note "init"
    The init operation is used to initialize the connection to Microsoft Azure.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>accountName</td>
            <td>The name of the Azure storage account.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>accountKey</td>
            <td>The access key for the storage account.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>defaultEndpointsProtocol</td>
            <td>The type of protocol (HTTP/HTTPS) used to connect.</td>
            <td>No</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <msazurestorage.init>
        <accountName>{$ctx:accountName}</accountName>
        <accountKey>{$ctx:accountKey}</accountKey>
        <defaultEndpointsProtocol>{$ctx:defaultEndpointsProtocol}</defaultEndpointsProtocol>
    </msazurestorage.init>
    ```

---

### Blobs

??? note "uploadBlob"
    The uploadBlob operation uploads a Blob file into the storage. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>filePath</td> - <td>The path to a local file to be uploaded.</td> - <td>Yes</td> - </tr> - <tr> - <td>blobContentType</td> - <td>The Content-type of the file to be uploaded.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.uploadBlob> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - <filePath>{$ctx:filePath}</filePath> - <blobContentType>{$ctx:fileContentType}</blobContentType> - </msazurestorage.uploadBlob> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - "containerName": "sales", - "fileName": "sample.txt", - "filePath": "/home/user/Pictures/a.txt" - } - ``` - -??? note "deleteBlob" - The deleteBlob operation deletes a Blob file from the storage. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.deleteBlob> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - </msazurestorage.deleteBlob> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - "containerName": "sales", - "fileName": "sample.txt" - } - ``` - -??? note "listBlobs" - The listBlobs operation retrieves information about all Blobs in a container. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-java-how-to-use-blob-storage) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.listBlobs> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.listBlobs> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - "containerName": "sales" - } - ``` - ---- - -### Containers - -??? note "createContainer" - The createContainer operation creates a container in the storage. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/containers/storage-java-how-to-use-container-storage) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.createContainer> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.createContainer> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - "containerName": "sales" - } - ``` - -??? 
note "deleteContainer" - The deleteContainer operation deletes a container from the storage. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/containers/storage-java-how-to-use-container-storage) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.deleteContainer> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.deleteContainer> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - "containerName": "sales" - } - ``` - -??? note "listContainers" - The listContainers operation retrieves information about all containers in the storage. See the [related API documentation](https://docs.microsoft.com/en-us/azure/storage/containers/storage-java-how-to-use-container-storage) for more information. - - **Sample configuration** - - ```xml - <msazurestorage.listContainers/> - ``` - - **Sample request** - - ```json - { - "accountName": "test", - "accountKey": "=gCetnaQlvsXQG4PnlXxxxxXXXXsW37DsDKw5rnCg==", - } - ``` diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-connector-example.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-connector-example.md deleted file mode 100644 index 4fcc9d9acb..0000000000 --- a/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-connector-example.md +++ /dev/null @@ -1,338 +0,0 @@ -# Microsoft Azure Storage Connector Example - -Given below is a sample scenario that demonstrates how to work with container and blob operations using the WSO2 Microsoft Azure Storage Connector. - -## What you'll build - -This example demonstrates how to use Microsoft Azure Storage connector to: - -1. Create a container (a location for storing employee details) in Microsoft Azure Storage account. -2. Upload JSON employee details (blob) in to the container. -3. Download an employee details (blob). -4. Remove uploaded employee details (blob). -5. Retrieve the metadata from a specific file (blob). -6. Remove created container. - -For more information about these operations, please refer to the [Microsoft Azure Storage connector reference guide]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference/). - -> **Note**: Before invoking the API, you need to create a **Storage Account** in **Microsoft Azure Storage account**. See [Azure Storage Configuration]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration/) documentation for more information. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the ESB Solution Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -## Creating the Integration Logic - -1. Specify the API name as `MSAzureStorageTestAPI` and API context as `/azure`. - -2. First we will create the `/createcontainer` resource. This API resource will retrieve the container name from the incoming HTTP POST request and create a container in Microsoft Azure Storage. Right click on the API Resource and go to **Properties** view. 
We use a URL template called `/createcontainer` and POST HTTP method. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/adding_create_container_resource.png" title="Adding the createbucket resource" width="800" alt="Microsoft Azure Storage use case"/> - -3. Next drag and drop the 'createContainer' operation of the Azure Storage Connector to the Design View. - -4. Create a connection from the properties window by clicking on the '+' icon as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/create_new_connection_btn.png" title="Creating a new connection" width="800" alt="Microsoft Azure Storage use case"/> - - In the popup window, the following parameters must be provided. - - - Connection Name - Unique name to identify the connection by. - - Connection Type - Type of the connection that specifies the protocol to be used. - - Account Name - The name of the azure storage account. - - Client ID - The client ID of the application. - - Client Secret - The client Secret of the application. - - Tenant ID - The Tenant ID of the application. - - !!! note - You can either define the Account Access key or Client Credentials for authentication. For more information, please refer [Initialize the connector guide]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference/#initialize-the-connector). - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_new_connection.png" title="Configuring a new connection" width="500" alt="Microsoft Azure Storage use case"/> - -5. After the connection is successfully created, select the created connection as 'Connection' from the drop down menu in the properties window. - -6. Next, configure the following parameters in the properties window, - - - Container Name - json-eval($.containerName) - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_create_container_operation.png" title="Configuring create container operation" width="800" alt="Microsoft Azure Storage use case"/> - -7. Drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from creating the container as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/adding_respond_mediator.png" title="Adding a respond mediator" width="800" alt="Microsoft Azure Storage use case"/> - -8. Create the next API resource, which is `/addblob` by dragging and dropping another API resource to the design view. This API resource will retrieve information about the blob from the incoming HTTP POST request such as the container name, blob name and the file content and upload it to Microsoft Azure Storage. - -9. Drag and drop the ‘uploadBlob’ operation of the Microsoft Azure Storage Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties, - - Container Name - json-eval($.containerName) - - Blob name - json-eval($.fileName) - - Content Type - json-eval($.contentType) - - Text Content - json-eval($.textContent) - - Metadata - json-eval($.metadata) - -10. Drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from uploading the blob. 
- - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_add_blob_operation.png" title="Configuring upload blob operation" width="800" alt="Microsoft Azure Storage use case"/> - -11. Create the next API resource, which is `/downloadblob` by dragging and dropping another API resource to the design view. This API resource will retrieve information from the incoming HTTP POST request such as the container name and blob name and download from Microsoft Azure Storage. - -12. Next drag and drop the ‘downloadBlob’ operation of the Microsoft Azure Storage Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties, - - - Container Name - json-eval($.containerName) - - Blob name - json-eval($.fileName) - -13. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from the downloadBlob operation. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_blob_download_operation.png" title="Configuring download blob operation" width="800" alt="Microsoft Azure Storage use case"/> - -14. Create the next API resource, which is `/deleteblob` by dragging and dropping another API resource to the design view. This API resource will retrieve information from the incoming HTTP POST request such as the container name and blob name and delete the blob from Microsoft Azure Storage. - -15. Next drag and drop the ‘deleteBlob’ operation of the Microsoft Azure Storage Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties, - - - Container Name - json-eval($.containerName) - - Blob name - json-eval($.fileName) - -16. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from the deleteBlob operation. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_blob_delete_operation.png" title="Configuring delete blob operation" width="800" alt="Microsoft Azure Storage use case"/> - -17. Create the next API resource, which is `/listmetadata` by dragging and dropping another API resource to the design view. This API resource will retrieve information from the incoming HTTP POST request such as the container name and blob name and retrieve the metadata of the blob from Microsoft Azure Storage. - -18. Next drag and drop the ‘listMetadata’ operation of the Microsoft Azure Storage Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties, - - - Container Name - json-eval($.containerName) - - Blob name - json-eval($.fileName) - -19. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from the listMetadata operation. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_list_metadata_operation.png" title="Configuring list metadata operation" width="800" alt="Microsoft Azure Storage use case"/> - -20. Create the next API resource, which is `/deletecontainer` by dragging and dropping another API resource to the design view. 
This API resource will retrieve information from the incoming HTTP POST request such as the container name and delete the container from Microsoft Azure Storage. - -21. Next drag and drop the ‘deleteContainer’ operation of the Microsoft Azure Storage Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties, - - - Container Name - json-eval($.containerName) - -22. Finally, drag and drop the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator/) to send back the response from the deleteContainer operation. - - <img src="{{base_path}}/assets/img/integrate/connectors/msazure-connector-2x/configure_delete_container_operation.png" title="Configuring delete container operation" width="800" alt="Microsoft Azure Storage use case"/> - -23. You can find the complete API XML configuration below. You can go to the source view and copy paste the following config. - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<api context="/azure" name="MSAzureStorageTestAPI" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" uri-template="/createcontainer"> - <inSequence> - <msazurestorage.createContainer configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - </msazurestorage.createContainer> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/addblob"> - <inSequence> - <msazurestorage.uploadBlob configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - <textContent>{json-eval($.textContent)}</textContent> - <fileName>{json-eval($.fileName)}</fileName> - <blobContentType>{json-eval($.contentType)}</blobContentType> - <metadata>{json-eval($.metadata)}</metadata> - </msazurestorage.uploadBlob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/downloadblob"> - <inSequence> - <msazurestorage.downloadBlob configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - <fileName>{json-eval($.fileName)}</fileName> - </msazurestorage.downloadBlob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/deleteblob"> - <inSequence> - <msazurestorage.deleteBlob configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - <fileName>{json-eval($.fileName)}</fileName> - </msazurestorage.deleteBlob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/listmetadata"> - <inSequence> - <msazurestorage.listMetadata configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - <fileName>{json-eval($.fileName)}</fileName> - </msazurestorage.listMetadata> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/deletecontainer"> - <inSequence> - <msazurestorage.deleteContainer configKey="AZURE_CONNECTION"> - <containerName>{json-eval($.containerName)}</containerName> - </msazurestorage.deleteContainer> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> -</api> -``` - -Now we can export the imported connector and the API into a single CAR application. CAR application is the one we are going to deploy to server runtime. 
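The resources above declare empty `<faultSequence/>` elements, so connector failures are not reported back to the caller. If you want the API to return a meaningful error instead, a custom fault sequence along the following lines could be used. This is a minimal sketch: the `ERROR_CODE` and `ERROR_DETAIL` properties follow the error-handling notes in the connector reference, and the shape of the error payload is an assumption.

```xml
<faultSequence>
    <!-- Log the error information populated when a connector operation fails -->
    <log level="custom">
        <property name="errorCode" expression="get-property('ERROR_CODE')"/>
        <property name="errorDetail" expression="get-property('ERROR_DETAIL')"/>
    </log>
    <!-- Build a simple JSON error payload and send it back to the client -->
    <payloadFactory media-type="json">
        <format>{"result": {"success": false}}</format>
        <args/>
    </payloadFactory>
    <respond/>
</faultSequence>
```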
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-Now the exported CApp can be deployed in the integration runtime so that we can run and test it.
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/MSAzureStorageConnector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the credential values and make other such changes before deploying and running this project.
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-Invoke the API using the curl commands shown below. You can download curl from [here](https://curl.haxx.se/download.html).
-
-1. Create a new container in Microsoft Azure Storage to store employee details.
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST -d '{"containerName":"employeedetails"}' "http://localhost:8290/azure/createcontainer" -H "Content-Type:application/json"
-    ```
-
-    **Expected Response**
-
-    ```json
-    {
-        "result": {
-            "success": true
-        }
-    }
-    ```
-
-2. Upload JSON employee details.
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST 'http://localhost:8290/azure/addblob' --header 'Content-Type: application/json' -d '{"containerName": "employeedetails", "fileName": "employee1.json", "textContent": "{\"name\":\"John\", \"salary\": 1000, \"age\": 44}", "contentType": "application/json", "metadata": {"key1": "value1"}}'
-    ```
-
-    **Expected Response**
-
-    ```json
-    {
-        "result": {
-            "success": true
-        }
-    }
-    ```
-
-3. Download JSON employee details.
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST 'http://localhost:8290/azure/downloadblob' --header 'Content-Type: application/json' -d '{"containerName": "employeedetails", "fileName": "employee1.json"}'
-    ```
-
-    **Expected Response**
-
-    The content of the downloaded blob is returned.
-
-    ```json
-    {
-        "name": "John",
-        "salary": 1000,
-        "age": 44
-    }
-    ```
-
-4. Retrieve blob metadata.
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST 'http://localhost:8290/azure/listmetadata' --header 'Content-Type: application/json' -d '{"containerName": "employeedetails", "fileName": "employee1.json"}'
-    ```
-
-    **Expected Response**
-
-    ```json
-    {
-        "result": {
-            "metadata": {
-                "key1": "value1"
-            }
-        }
-    }
-    ```
-
-5. Remove the uploaded employee details (blob).
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST 'http://localhost:8290/azure/deleteblob' --header 'Content-Type: application/json' -d '{"containerName": "employeedetails", "fileName": "employee1.json"}'
-    ```
-
-    **Expected Response**
-
-    ```json
-    {
-        "result": {
-            "success": true
-        }
-    }
-    ```
-
-6. Remove the created container.
-
-    **Sample request**
-
-    ```curl
-    curl -v -X POST -d '{"containerName":"employeedetails"}' "http://localhost:8290/azure/deletecontainer" -H "Content-Type:application/json"
-    ```
-
-    **Expected Response**
-
-    ```json
-    {
-        "result": {
-            "success": true
-        }
-    }
-    ```
-
-## What's next
-
-* You can deploy and run your project on Docker or Kubernetes. See the instructions in [Running the Micro Integrator on Containers]({{base_path}}/install-and-setup/installation/run_in_containers).
diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference.md deleted file mode 100644 index 7665dba4da..0000000000 --- a/en/docs/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference.md +++ /dev/null @@ -1,459 +0,0 @@ -# Microsoft Azure Storage Connector Reference - -The following operations allow you to work with the Microsoft Azure Storage Connector. Click an operation name to see parameter details and samples on how to use it. - ---- - -## Initialize the connector - -To use the Microsoft Azure Storage connector, you need to initialize the configuration before carrying out any other Microsoft Azure Storage operations. - -To use the Microsoft Azure Storage connector, add the element in your configuration before carrying out any Azure Storage operations. This Microsoft Azure Storage configuration authenticates with Microsoft Azure Storage by Account access key or Client Credentials, which are used for every operation. - -> **Note**: To work with the Microsoft Azure Storage connector, you need to have a Microsoft Azure account. If you do not have a Microsoft Azure account, go to [https://azure.microsoft.com/en-in/free/](https://azure.microsoft.com/en-in/free/) and create a Microsoft Azure account. - -### Initialize using Account name and Access key - -??? note "init" - The init operation is used to initialize the connection to Microsoft Azure. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>accountName</td> - <td>The name of the Azure storage account.</td> - <td>Yes</td> - </tr> - <tr> - <td>accountKey</td> - <td>The access key for the storage account.</td> - <td>Yes</td> - </tr> - <tr> - <td>defaultEndpointsProtocol</td> - <td>Type of the protocol(HTTP/HTTPS) to connect.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.init> - <accountName>{$ctx:accountName}</accountName> - <accountKey>{$ctx:accountKey}</accountKey> - <defaultEndpointsProtocol>{$ctx:defaultEndpointsProtocol}</defaultEndpointsProtocol> - </msazurestorage.init> - ``` - ---- - -### Initialize using Client Credentials - -??? note "init" - The init operation is used to initialize the connection to Microsoft Azure. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>accountName</td> - <td>The name of the Azure storage account.</td> - <td>Yes</td> - </tr> - <tr> - <td>clientId</td> - <td>The client ID of the application.</td> - <td>Yes</td> - </tr> - <tr> - <td>clientSecret</td> - <td>The client secret of the application.</td> - <td>Yes</td> - </tr> - <tr> - <td>tenantId</td> - <td>The tenant ID of the application.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.init> - <accountName>{$ctx:accountName}</accountName> - <clientId>{$ctx:clientId}</clientId> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <tenantId>{$ctx:tenantId}</tenantId> - </msazurestorage.init> - ``` - ---- - -## Blobs - -??? note "uploadBlob" - The uploadBlob operation uploads a Blob file into the storage. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - - **Note**: Either `filePath` or `textContent` parameter is mandatory. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>filePath</td> - <td>The path to a local file to be uploaded.</td> - <td>No</td> - </tr> - <tr> - <td>textContent</td> - <td>Text content to be uploaded (without using a file).</td> - <td>No</td> - </tr> - <tr> - <td>blobContentType</td> - <td>The Content-type of the file to be uploaded.</td> - <td>No</td> - </tr> - <tr> - <td>metadata</td> - <td>The metadata of the blob.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.uploadBlob> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - <filePath>{$ctx:filePath}</filePath> - <blobContentType>{$ctx:fileContentType}</blobContentType> - <metadata>{$ctx:metadata}</metadata> - </msazurestorage.uploadBlob> - ``` - - **Sample request** - - ```json - { - "containerName": "sales", - "fileName": "sample.json", - "filePath": "/home/user/Pictures/a.json", - "blobContentType": "application/json", - "metadata": { - "key1":"value1" - } - } - ``` - -??? note "downloadBlob" - The downloadBlob operation download the Blob content. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - - **Note**: By default, the content of the blob will be written to the HTTP response. The `destinationFilePath` parameter can be used to download it to local storage. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>destinationFilePath</td> - <td>The local file path to download the blob. If the destination file already exists or if the file is not writable by the current user, an exception will be thrown.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.downloadBlob> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - </msazurestorage.downloadBlob> - ``` - - **Sample request** - - ```json - { - "containerName": "sales", - "fileName": "sample.txt" - } - ``` - -??? note "deleteBlob" - The deleteBlob operation deletes a Blob file from the storage. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.deleteBlob> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - </msazurestorage.deleteBlob> - ``` - - **Sample request** - - ```json - { - "containerName": "sales", - "fileName": "sample.txt" - } - ``` - -??? note "listBlobs" - The listBlobs operation retrieves information about all Blobs in a container. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.listBlobs> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.listBlobs> - ``` - - **Sample request** - - ```json - { - "containerName": "sales" - } - ``` - ---- - -## Containers - -??? note "createContainer" - The createContainer operation creates a container in the storage. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.createContainer> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.createContainer> - ``` - - **Sample request** - - ```json - { - "containerName": "sales" - } - ``` - -??? note "deleteContainer" - The deleteContainer operation deletes a container from the storage. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.deleteContainer> - <containerName>{$ctx:containerName}</containerName> - </msazurestorage.deleteContainer> - ``` - - **Sample request** - - ```json - { - "containerName": "sales" - } - ``` - -??? note "listContainers" - The listContainers operation retrieves information about all containers in the storage. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - - **Sample configuration** - - ```xml - <msazurestorage.listContainers/> - ``` - -## Metadata - -??? note "listMetadata" - The listMetadata operation list metadata for a given blob. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.listMetadata> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - </msazurestorage.listMetadata> - ``` - - **Sample request** - - ```json - { - "containerName": "sales", - "fileName": "sample.txt" - } - ``` - -??? note "uploadMetadata" - The uploadMetadata operation uploads a list of metadata for a given blob. See the [related API documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>containerName</td> - <td>The name of the container.</td> - <td>Yes</td> - </tr> - <tr> - <td>fileName</td> - <td>The name of the file.</td> - <td>Yes</td> - </tr> - <tr> - <td>metadata</td> - <td>The metadata of the blob.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <msazurestorage.uploadMetadata> - <containerName>{$ctx:containerName}</containerName> - <fileName>{$ctx:fileName}</fileName> - <metadata>{$ctx:metadata}</metadata> - </msazurestorage.uploadMetadata> - ``` - - **Sample request** - - ```json - { - "containerName": "sales", - "fileName": "sample.json", - "metadata": { - "key1":"value1" - } - } - ``` - -## Error codes related to Microsoft Azure Storage Connector - -| Error code | Error message | -| -------- | ------- | -| 700701 | MS_AZURE_BLOB:CONNECTION_ERROR | -| 700702 | MS_AZURE_BLOB:INVALID_CONFIGURATION | -| 700703 | MS_AZURE_BLOB:MISSING_PARAMETERS | -| 700704 | MS_AZURE_BLOB:AUTHENTICATION_ERROR | -| 700705 | MS_AZURE_BLOB:FILE_ALREADY_EXISTS_ERROR | -| 700706 | MS_AZURE_BLOB:FILE_IO_ERROR | -| 700707 | MS_AZURE_BLOB:BLOB_STORAGE_ERROR | -| 700708 | MS_AZURE_BLOB:FILE_PERMISSION_ERROR | -| 700709 | MS_AZURE_BLOB:GENERAL_ERROR | - -In addition to the above `ERROR_DETAIL` property will contain detail information about the error. For more information refer [Using Fault Sequences]({{base_path}}/integrate/examples/sequence_examples/using-fault-sequences/). \ No newline at end of file diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-overview.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-overview.md deleted file mode 100644 index f3af565804..0000000000 --- a/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-overview.md +++ /dev/null @@ -1,43 +0,0 @@ -# Microsoft Azure Storage Connector Overview - -The Microsoft Azure Storage Connector allows you to access the [Azure Storage services](https://azure.microsoft.com/en-us/) (using Microsoft Azure Storage Java SDK) from an integration sequence. Azure Storage is a Microsoft-managed cloud service that provides storage that is highly available, secure, durable, scalable and redundant. The Azure Storage consists of four primary Azure Storage types. They are blob storage, table storage, file storage, and queue storage. - -To see the available Microsoft Azure Storage connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Azure". - -<img src="{{base_path}}/assets/img/integrate/connectors/azure-store.png" title="Microsoft Azure Storage Connector Store" width="200" alt="Microsoft Azure Storage Connector Store"/> - -## Compatibility - -| Connector version | Supported product versions | -| ------------- |------------- | -| 2.x (latest) | MI 4.2.0, MI 4.1.0, MI 4.0.0 | -| 1.0.0 | MI 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.1.0 | - -For older versions, see the details in the connector store. - -## Microsoft Azure Storage Connector documentation (latest - 2.x version) - -!!! tip "What's New in 2.x?" - - Use Azure Blob Storage SDK v12.23.0. - - - New UI model in the Integration Studio. - - - Support Client credentials authentication. 
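For example, the client-credentials support in 2.x means a connection can be initialized without an account access key. The following snippet is reproduced from the 2.x connector reference; the `{$ctx:...}` values are context expressions you supply at runtime.

```xml
<msazurestorage.init>
    <!-- Authenticate with Azure AD client credentials instead of an account access key -->
    <accountName>{$ctx:accountName}</accountName>
    <clientId>{$ctx:clientId}</clientId>
    <clientSecret>{$ctx:clientSecret}</clientSecret>
    <tenantId>{$ctx:tenantId}</tenantId>
</msazurestorage.init>
```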
- -* **[Setting up the Microsoft Azure Storage Environment]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration/)**: This involves setting up a Microsoft Azure Storage account and obtaining access credentials. - -* **[Microsoft Azure Storage Connector Example]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-connector-example/)**: This example demonstrates how to work with container and blob operations using the WSO2 Microsoft Azure Storage Connector. - -* **[Microsoft Azure Storage Connector Reference]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/2.x/microsoft-azure-storage-reference/)**: This documentation provides a reference guide for the Microsoft Azure Storage Connector. - -For older versions, see the details in the relevant links. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. - -To contribute to the code for this connector, please create a pull request in the following repository. - -* [Microsoft Azure Storage Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-msazurestorage/) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration.md b/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration.md deleted file mode 100644 index b7ba287c13..0000000000 --- a/en/docs/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration.md +++ /dev/null @@ -1,130 +0,0 @@ -# Setting up the Microsoft Azure Storage Environment - -To work with the Microsoft Azure Storage connector, you need to have a Microsoft Azure account. If you do not have a Microsoft Azure account, you are prompted to create one when you sign up. - -## Signing Up for Microsoft Azure - -To sign up for Microsoft Azure: - - 1. Navigate to [Microsoft Azure](https://azure.microsoft.com/en-in/free/) and create a **Microsoft Azure account** using **Start free** button. - - 2. Follow the online instructions. - -Part of the sign-up procedure involves receiving a phone call and entering a verification code using the phone keypad. Microsoft Azure will notify you by email when your account is active and available for you to use. - -## Create Microsoft Azure Storage account - -Follow the steps below to obtain the access credentials from Microsoft Azure Storage account. - - 1. Go to [Microsoft Azure](https://azure.microsoft.com/en-in/free/), and sign in to the created Microsoft Azure account. On the Azure portal menu, select **All services**. In the list of resources, type **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-homepage.png" title="MS Azure Home Page" width="800" alt="MS Azure Home Page"/>MS-azure-storage-select-account.png - - 2. Go to the dashboard and click **Storage accounts** then click **Add** and fill the required details to create a new storage account. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-select-account.png" title="Select MS Azure storage account" width="800" alt="Select MS Azure storage account"/> - - 3. On the **Storage Accounts** window that appears, choose **Add**. 
- - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-add-storage-account.png" title="MS Azure add storage account" width="800" alt="MS Azure add storage account"/> - - 4. Select the subscription in which to create the storage account. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-basic-configurations.png" title="MS azure storage basic configurations" width="800" alt="MS azure storage basic configurations"/> - - 5. Under the **Resource group** field, select **Create new**. Enter a name for your new resource group. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-create-resource-group.png" title="Create resource group" width="800" alt="Create resource group"/> - - 6. Enter a name for your **storage account**. - - 7. Select a location for your storage account, or use the default location. - - 8. Leave these fields set to their default values: - - | Field | Value | - | ------------- |-------------| - |Deployment model |Resource Manager| - |Performance | Standard| - |Replication | Read-access geo-redundant storage (RA-GRS)| - |Access tier | Hot| - - 9. Select **Review + Create** to review your storage account settings and create the account. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-review-create.png" title="Review and create" width="800" alt="Review and create"/> - - 10. Select **Create**. - -## Obtaining the Client credentials - -!!! Note - If you are planning to use Access key for authentication, skip this and check [Obtaining the access credentials]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration/#obtaining-the-access-key) - - 1. Create an Azure Active Directory application and service principal. For more information refer [Create an Azure Active Directory application](https://learn.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal). - - 2. Assign an Azure role for access to blob data. For more information refer [Assign an Azure role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal) and [Assign an Azure role for access to blob data](https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal). - - 3. Obtain the Client ID, client Secret and Tenant ID. For more information refer [Create a new application secret](https://learn.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-3-create-a-new-application-secret) and [Active Directory tenant ID](https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/how-to-find-tenant). - -## Obtaining the Access Key - -!!! Note - If you are planning to use Client credentials for authentication, skip this and check [Obtaining the Client credentials]({{base_path}}/reference/connectors/microsoft-azure-storage-connector/microsoft-azure-storage-configuration/#obtaining-the-client-credentials) - - 1. Navigate to the created **storage account** and click it. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-select-created-storage.png" title="Select created storage account" width="800" alt="Select created storage account"/> - - 2. Click **Access keys** under **Settings**. - - <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-getting-accesskeys.png" title="Getting access keys" width="800" alt="Getting access keys"/> - - 3. Obtain the access key. 
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-copy-access-key.png" title="Copy access keys" width="800" alt="Copy access keys"/>
-
-> **Note**: The Azure storage account does not support plain HTTP requests. If you are using a storage key to access the storage account, set **Secure transfer required** to **Disabled** in the storage account configuration on the Azure Portal.
-    <img src="{{base_path}}/assets/img/integrate/connectors/ms-azure-storage-http-request.png" title="Secure transfer required setting" width="800" alt="Secure transfer required setting"/>
-
-## Setting up the Microsoft Azure Storage Connector
-
-Before you start configuring the Microsoft Azure Storage Connector, you also need the WSO2 integration runtime. We refer to its location as `<PRODUCT_HOME>`.
-
-!!! Note
-    If you are using the older **1.x.x** connector, add only the [azure-storage-6.1.0.jar](https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage/6.1.0) to the `<PRODUCT_HOME>/lib` directory and skip the following.
-
-To use the Microsoft Azure Storage connector, you need to download the following JARs and move them to the `<PRODUCT_HOME>/lib` directory.
-
-  - [azure-storage-blob-12.23.0.jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob/12.23.0)
-  - [azure-identity-1.9.2.jar](https://mvnrepository.com/artifact/com.azure/azure-identity/1.9.2)
-  - [azure-storage-common-12.22.0.jar](https://mvnrepository.com/artifact/com.azure/azure-storage-common/12.22.0)
-  - [azure-json-1.0.1.jar](https://mvnrepository.com/artifact/com.azure/azure-json/1.0.1)
-  - [azure-core-http-netty-1.13.5.jar](https://mvnrepository.com/artifact/com.azure/azure-core-http-netty/1.13.5)
-  - [azure-core-1.41.0.jar](https://mvnrepository.com/artifact/com.azure/azure-core/1.41.0)
-  - [msal4j-1.13.8.jar](https://mvnrepository.com/artifact/com.microsoft.azure/msal4j/1.13.8)
-  - [content-type-2.2.jar](https://mvnrepository.com/artifact/com.nimbusds/content-type/2.2)
-  - [netty-resolver-dns-4.1.95.Final.jar](https://mvnrepository.com/artifact/io.netty/netty-resolver-dns/4.1.95.Final)
-  - [reactive-streams-1.0.4.jar](https://mvnrepository.com/artifact/org.reactivestreams/reactive-streams/1.0.4)
-  - [reactor-netty-http-1.1.9.jar](https://mvnrepository.com/artifact/io.projectreactor.netty/reactor-netty-http/1.1.9)
-  - [jackson-dataformat-xml-2.13.5.jar](https://mvnrepository.com/artifact/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.13.5)
-  - [oauth2-oidc-sdk-10.7.1.jar](https://mvnrepository.com/artifact/com.nimbusds/oauth2-oidc-sdk)
-  - [reactor-core-3.4.30.jar](https://mvnrepository.com/artifact/io.projectreactor/reactor-core/3.4.30)
-  - [stax2-api-4.2.1.jar](https://mvnrepository.com/artifact/org.codehaus.woodstox/stax2-api/4.2.1)
-  - [reactor-netty-core-1.1.9.jar](https://mvnrepository.com/artifact/io.projectreactor.netty/reactor-netty-core/1.1.9)
-  - [woodstox-core-6.4.0.jar](https://mvnrepository.com/artifact/com.fasterxml.woodstox/woodstox-core/6.4.0)
-
-!!! Note
-    If you are using MI 4.0.0, in addition to the above, you need to add [netty-codec-http2-4.1.95.Final.jar](https://mvnrepository.com/artifact/io.netty/netty-codec-http2/4.1.95.Final) and [netty-handler-proxy-4.1.95.Final.jar](https://mvnrepository.com/artifact/io.netty/netty-handler-proxy/4.1.95.Final) to the `<PRODUCT_HOME>/lib` directory.
-
-!!! Note
-    By default, `INFO` logs are enabled for the Microsoft Azure SDKs. Therefore, you may need to update the `log4j2.properties` file of the WSO2 integration runtime (MI) to set the log level accordingly. The following configuration disables the logs printed by the SDK. Even though the SDK logs are disabled, MI will still print them in case of an error.
-
-    1. Add the following loggers.
-
-        logger.Azure.name = com.azure
-        logger.Azure.level = OFF
-
-        logger.Microsoft.name = com.microsoft
-        logger.Microsoft.level = OFF
-
-    2. Append `Azure` and `Microsoft` to the loggers list.
diff --git a/en/docs/reference/connectors/microsoft-dynamics365-connector/microsoft-dynamics365-configuration.md b/en/docs/reference/connectors/microsoft-dynamics365-connector/microsoft-dynamics365-configuration.md
deleted file mode 100644
index 059c86d8bb..0000000000
--- a/en/docs/reference/connectors/microsoft-dynamics365-connector/microsoft-dynamics365-configuration.md
+++ /dev/null
@@ -1,165 +0,0 @@
-# Setting up the Microsoft Dynamics365 Environment with Azure
-
-The Microsoft Dynamics 365 (Microsoft Dynamics CRM) Connector allows you to access the [Microsoft Dynamics 365 Web API](https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/mt593051(v=crm.8)?redirectedfrom=MSDN) through the WSO2 integration runtime. The Microsoft Dynamics CRM system (now known as Microsoft Dynamics 365) is a standalone CRM product from Microsoft that provides sales, marketing, and service management capabilities only via individual modules.
-
-To use Microsoft Dynamics 365, you must have the following accounts:
-
-* A Microsoft Dynamics 365 (online) system user account with the administrator role for the Microsoft Office 365 subscription
-* A Microsoft Azure subscription for application registration
-
-## Authentication to Dynamics 365 using Azure Apps
-
-Dynamics 365 authentication is recommended only through Azure AD (for online instances). To achieve this,
-
-1. Create and configure the app in Azure Active Directory.
-2. Create a user in Azure AD and configure it as an application user in Dynamics 365.
-3. Generate the access token and make requests to Dynamics 365 with the generated access token.
-
-* ## Setting Up an App in Azure
-
-    1. Navigate to the [Azure portal](https://portal.azure.com/) and select **Create an Azure Account**.
-
-        > **Note**: If you are creating an Azure account, you need a Microsoft Azure subscription for application registration. Purchase Azure services directly from Microsoft with [pay-as-you-go pricing](https://azure.microsoft.com/en-us/offers/ms-azr-0003p/). This offer is billed at the standard Pay-As-You-Go rates.
-
-    2. Log in to the created **Azure account**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/portal-azure-com.png" title="Azure Management Console" width="800" alt="Azure Management Console"/>
-
-    3. Navigate to **Azure Active Directory** –> **App Registration** –> **New Application registration**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/new-application-registration.png" title="Azure new application registration console" width="800" alt="Azure new application registration console"/>
-
-    4. Fill in the required fields as shown below and click **Register**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/register-an-application.png" title="Register an application" width="800" alt="Register an application"/>
-
-        > **Note**: The sign-on URL only matters for something like a single-page application; otherwise, a localhost URL is fine.
-
-    5. Select the created **application**.
- - <img src="{{base_path}}/assets/img/integrate/connectors/created-app.png" title="Created TestWebAPI application" width="800" alt="Created TestWebAPI application"/> - - 6. Now you have successfully created an Azure app. Double click the app and you will see its details as shown below. Copy the value of the **Application (client) ID** and **Directory (tenant) ID**. - - <img src="{{base_path}}/assets/img/integrate/connectors/application-id.jpg" title="Application ID" width="800" alt="Application ID"/> - - 7. You need to give permission to the app to access Dynamics 365. Navigate to **View API permissions**. - - <img src="{{base_path}}/assets/img/integrate/connectors/view-api-permissions.png" title="View API permissions" width="800" alt="View API permissions"/> - - 8. Click **Add a permission**. - - <img src="{{base_path}}/assets/img/integrate/connectors/add-permission.jpg" title="Add a permission" width="800" alt="Add a permission"/> - - 9. Then select **Dynamics CRM**. - - <img src="{{base_path}}/assets/img/integrate/connectors/select-api.png" title="Select an API" width="800" alt="Select an API"/> - - 10. Make sure to check the Delegated Permissions checkboxes as shown below. **Select permissions** and click **Add permission**. - - <img src="{{base_path}}/assets/img/integrate/connectors/select-permission.png" title="Select CRM permission" width="800" alt="Select CRM permission"/> - - 11. Click on **Grant Permissions** and click **Yes**. - - <img src="{{base_path}}/assets/img/integrate/connectors/grant-required-permission.png" title="Grant permission" width="800" alt="Grant permission"/> - - 12. After setting up CRM permissions you will see following console. - - <img src="{{base_path}}/assets/img/integrate/connectors/after-setup-permission.png" title="After setup CRM permissions" width="800" alt="After setup CRM permissions"/> - - 13. Now you need to create secret keys. Navigate to **Certificates & secrets**. - - <img src="{{base_path}}/assets/img/integrate/connectors/certificate-and-secrets.jpg" title="Create certificate and secrets" width="800" alt="Create certificate and secrets"/> - - 14. Click **New client secrets**. - - <img src="{{base_path}}/assets/img/integrate/connectors/new-client-secret.png" title="New client secret" width="800" alt="New client secret"/> - - 15. Add **Description** and **Expires** values. Click **Add** and copy the value. - - <img src="{{base_path}}/assets/img/integrate/connectors/add-client-secret.jpg" title="Add client secret" width="800" alt="Add client secret"/> - - 16. Search **Users** inside the Azure Active Directory and Create **New user** (this user would be linked to the Application User, which is created in the Dynamics 365 CRM). - - <img src="{{base_path}}/assets/img/integrate/connectors/add-new-user.jpg" title="Create new user" width="800" alt="Create new user"/> - - 17. Fill all mandatory fields and click **Create**. - - <img src="{{base_path}}/assets/img/integrate/connectors/create-user.png" title="Fill new user details" width="800" alt="Fill new user details"/> - -* ## Setting Up the Application user in Microsoft Dynamics 365 CRM - - 1. Navigate to [Microsoft Dynamics 365 account](https://portal.office.com/) and select **Create a Dynamics 365 Account**. - - 2. Log in to the created **Dynamics 365 account**. Click the **Admin** icon. - - <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-admin-icon.png" title="Dynamics365 admin center" width="800" alt="Dynamics365 admin center"/> - - 3. 
Click **Show all** from the dropdown in the left corner scroll bar.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-show-all.png" title="Dynamics365 show all" width="800" alt="Dynamics365 show all"/>
-
-    4. Click **All admin centers** and click the **Dynamics 365** icon. It navigates to the **Power Platform admin center**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-all-admin-center.png" title="Dynamics365 all admin center" width="800" alt="Dynamics365 all admin center"/>
-
-    5. Select the created environment and click **Open environment**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-select-environment.png" title="Dynamics365 select environment" width="800" alt="Dynamics365 select environment"/>
-
-    6. Navigate to **Settings**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-settings.png" title="Dynamics365 settings" width="800" alt="Dynamics365 settings"/>
-
-    7. Click **Security** -> **Users**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/msdynamics365-users.png" title="Dynamics365 users" width="800" alt="Dynamics365 users"/>
-
-    8. Choose **Application Users** in the view filter, and then select **New**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-filter-application-user.png" title="Filter application users" width="800" alt="Filter application users"/>
-
-    9. In the **Application User** form, enter the required information.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics-new-application-user.png" title="Dynamics365 new application user" width="800" alt="Dynamics365 new application user"/>
-
-        The user name must not match a user that exists in the Azure Active Directory.
-        In the **Application ID** field, enter the application ID of the app you registered earlier in the Azure AD.
-
-    10. After selecting **SAVE**, the **Application ID URI** and **Azure AD Object Id** fields will auto-populate with the correct values.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-created-application-user.png" title="Dynamics365 created application user" width="800" alt="Dynamics365 created application user"/>
-
-    11. Before exiting the user form, choose **MANAGE ROLES** and assign a security role to this application user so that the application user can access the desired organization data.
-
-## Generate Access Token
-
-After setting up Azure and Microsoft Dynamics 365 CRM, you can get the access token by invoking the following HTTP request.
-
-POST URL: https://login.microsoftonline.com/<tenant_id>/oauth2/token
-
-Header: Content-Type: application/x-www-form-urlencoded
-
-Body: x-www-form-urlencoded
-
-| Key | Value |
-| ------------- |-------------|
-| client_id | Application ID of the registered app in Azure. |
-| resource | https://trial.crm.dynamics.com (Dynamics 365 Online instance URL) |
-| client_secret | Key value from the registered app in Azure |
-| grant_type | client_credentials |
-
-Note that you need to replace `<tenant_id>` with the actual tenant ID of your registered app.
-
-Make a request using Postman as shown below.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/MSdynamics365-access-token.png" title="Obtaining access token" width="800" alt="Obtaining access token"/>
-
-You need to copy and save the following parameter values to proceed with configuring the WSO2 Microsoft Dynamics 365 Connector.
- -| Key | Value | -| ------------- |-------------| -| apiUrl | The instance URL for your organization.| -| accessToken| Value of the Access Token to access the Microsoft Dynamic CRM Web API via request.| -| clientSecret| The secret key of the application that is registered in the Azure AD.| -| resource| The App ID URI of the web API (E.g "https://kavi859.crm5.dynamics.com/).| \ No newline at end of file diff --git a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md deleted file mode 100644 index bf29012caf..0000000000 --- a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md +++ /dev/null @@ -1,1837 +0,0 @@ -# MongoDB Connector Reference - -The following operations allow you to work with the MongoDB Connector. - -## Connection configurations - -The MongoDB connector can be used to deal with two types of connections: - -- <b>Connection URI (URI)</b>: The connection URI used to connect to the MongoDB database. - -- <b>Connection Parameters</b>: The parameters used for creating the connection URI. Following protocols are supported by the MongoDB connector. - - - Standard Connection Parameters (STANDARD) - - DNS Seed List Connection Parameters (DSL) - -There are different connection configurations that can be used for the above protocols. They contain a common set of configurations and some additional configurations specific to the protocol. - -<img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-9.png" title="Types of MongoDB connections" width="800" alt="Types of MongoDB connections"/> - -The supported connection URI types and connection options are listed in the [MongoDB Connection String](https://docs.mongodb.com/manual/reference/connection-string/) documentation. - -### Common configs to all connection types - -<table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Connection Name - </td> - <td> - String - </td> - <td> - A unique name to identify the connection. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Connection Type - </td> - <td> - String - </td> - <td> - The protocol used to connect to the MongoDB database.</br> - <b>Possible values</b>: - <ul> - <li> - <b>STANDARD</b>: The standard format of the MongoDB connection URI. - </li> - <li> - <b>DSL</b>: The DNS-constructed seed list format of the MongoDB connection URI. - </li> - <li> - <b>URI</b>: The complete connection URI containing the server details, credentials and the connection options. - </li> - </ul> - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Database - </td> - <td> - String - </td> - <td> - The name of the database in the MongoDB server. - <td> - - - </td> - <td> - Yes - </td> - </tr> -</table> - -### URI connection configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Connection URI - </td> - <td> - String - </td> - <td> - The complete connection URI containing the server details, credentials, and the connection options. 
- </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> -</table> - -**Sample Configuration of URI configs** - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<localEntry key="uriConnection" xmlns="http://ws.apache.org/ns/synapse"> - <mongodb.init> - <name>uriConnection</name> - <connectionType>URI</connectionType> - <connectionURI>mongodb+srv://server.example.com/?connectTimeoutMS=300000&authSource=aDifferentAuthDB</connectionURI> - <database>users</database> - </mongodb.init> -</localEntry> -``` - -### Common configs for STANDARD and DSL types - -<table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Replica Set - </td> - <td> - String - </td> - <td> - If the mongod is a member of a replica set, this parameter specifies the name of the replica set. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Auth Source - </td> - <td> - String - </td> - <td> - The database name associated with the user’s credentials. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Auth Mechanism - </td> - <td> - String - </td> - <td> - The authentication mechanism that MongoDB will use for authenticating the connection. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Auth Mechanism Properties - </td> - <td> - String - </td> - <td> - Properties for the specified authorisation mechanism as a comma-separated list of colon-separated key-value pairs. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Gssapi Service Name - </td> - <td> - String - </td> - <td> - The Kerberos service name when connecting to Kerberized MongoDB instances. This value must match the service name set on MongoDB instances to which you are connecting. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - W - </td> - <td> - String - </td> - <td> - Corresponds to the write concern w Option. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - W Timeout MS - </td> - <td> - Number - </td> - <td> - The time limit (in milliseconds) of the write concern. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Journal - </td> - <td> - String - </td> - <td> - When this option used, the Micro Integrator requests an acknowledgement from MongoDB that the write operation has been written to the journal. This applies when the write concern is set to 'j'. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Maximum Pool Size - </td> - <td> - Number - </td> - <td> - The maximum number of connections in the connection pool. - </td> - <td> - 100 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Minimum Pool Size - </td> - <td> - Number - </td> - <td> - The minimum number of connections in the connection pool. - </td> - <td> - 0 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Maximum Idle Time MS - </td> - <td> - Number - </td> - <td> - The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Wait Queue Multiple - </td> - <td> - Number - </td> - <td> - The maximum pool size is multiplied by this value to calculate the maximum number of threads that are allowed to wait for a connection to become available in the pool. 
- </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Wait Queue Timeout MS - </td> - <td> - Number - </td> - <td> - The maximum time in milliseconds that a thread can wait for a connection to become available. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - SSL - </td> - <td> - Boolean - </td> - <td> - A boolean to enable or disables TLS/SSL for the connection. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - SSL Invalid Host Names Allowed - </td> - <td> - Boolean - </td> - <td> - User name used to connect with the file server. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Connect Timeout TS - </td> - <td> - Number - </td> - <td> - The time in milliseconds for attempting a connection before timing out. For most drivers, the default is to never timeout. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Socket Timeout MS - </td> - <td> - Number - </td> - <td> - The time in milliseconds for attempting a send or receive on a socket before the attempt times out. For most drivers, the default is to never timeout. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Compressors - </td> - <td> - String - </td> - <td> - Comma-delimited string of compressors to enable network compression for communication between this client and a mongod/mongos instance. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Zlib Compression Level - </td> - <td> - Number - </td> - <td> - An integer that specifies the compression level when zlib is used for network compression. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Read Concern Level - </td> - <td> - String - </td> - <td> - The level of isolation. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Read Preference - </td> - <td> - String - </td> - <td> - Specifies the read preferences for this connection. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Maximum Staleness Seconds - </td> - <td> - Number - </td> - <td> - The maximum time (in seconds) a connection can remain stale before the client stops using it for read operations. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Read Preference Tags - </td> - <td> - String - </td> - <td> - Document tags as a comma-separated list or colon-separated key-value pairs. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Local Threshold MS - </td> - <td> - Number - </td> - <td> - The latency (in milliseconds) that is allowed when selecting a suitable MongoDB instance from the list of available instances. - </td> - <td> - 15 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Server Selection Timeout MS - </td> - <td> - Number - </td> - <td> - The time (in milliseconds) that is allowed for server selection before an exception is thrown. - </td> - <td> - 30,000 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Server Selection Try Once - </td> - <td> - Boolean - </td> - <td> - When true, the driver scans the MongoDB deployment exactly once after server selection fails and then either selects a server or raises an error. When false, the driver searches for a server until the serverSelectionTimeoutMS value is reached. Only applies for single-threaded drivers. 
- </td> - <td> - true - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Heartbeat Frequency MS - </td> - <td> - Number - </td> - <td> - Controls the intervals between which the driver checks the state of the MongoDB deployment. The interval (in milliseconds) between checks, counted from the end of the previous check until the beginning of the next one. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - App Name - </td> - <td> - String - </td> - <td> - Specify a custom app name. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Retry Reads - </td> - <td> - Boolean - </td> - <td> - Enables retryable reads. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Retry Writes - </td> - <td> - Boolean - </td> - <td> - Enable retryable writes. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - UUID Representation - </td> - <td> - String - </td> - <td> - The type of UUID representation. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> -</table> - -### STANDARD connection configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Host - </td> - <td> - String - </td> - <td> - The name of the host. It identifies either a hostname, IP address, or unix domain socket. Defaults to 127.0.0.1 if not provided. - </td> - <td> - 127.0.0.1 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Port - </td> - <td> - Number - </td> - <td> - The port number. Defaults to 27017 if not provided. - </td> - <td> - 27017 - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Seed List - </td> - <td> - String - </td> - <td> - A seed list is used by drivers and clients (like the mongo shell) for initial discovery of the replica set configuration. Seed lists can be provided as host:port pairs. This is used in replica sets and shared clusters. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Username - </td> - <td> - String - </td> - <td> - The user name to authenticate the database associated with the user. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Password - </td> - <td> - String - </td> - <td> - The password to authenticate the database associated with the user. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> -</table> - -**Sample configuration of STANDARD configs** - -Sample configuration of STANDARD (standalone) configs. 
- -```xml -<?xml version="1.0" encoding="UTF-8"?> -<localEntry key="standaloneStandardConnection" xmlns="http://ws.apache.org/ns/synapse"> - <mongodb.init> - <name>standaloneStandardConnection</name> - <connectionType>STANDARD</connectionType> - <host>localhost</host> - <port>27017</port> - <database>users</database> - <username>administrator</username> - <password>1234</password> - </mongodb.init> -</localEntry> -``` - -Sample configuration of STANDARD (replica set) configs - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<localEntry key="replicaSetStandardConnection" xmlns="http://ws.apache.org/ns/synapse"> - <mongodb.init> - <name>replicaSetStandardConnection</name> - <connectionType>STANDARD</connectionType> - <seedList>mongodb1.example.com:27317,mongodb2.example.com:27017</seedList> - <database>users</database> - <username>administrator</username> - <password>1234</password> - <authSource>aDifferentAuthDB</authSource> - <ssl>true</ssl> - <w>majority</w> - <replicaSet>mySet</replicaSet> - <retryWrites>true</retryWrites> - </mongodb.init> -</localEntry> -``` - -### DSL connection configs - -<table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Host - </td> - <td> - String - </td> - <td> - The name of the host. It identifies either a hostname, IP address, or unix domain socket. - </td> - <td> - - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Username - </td> - <td> - String - </td> - <td> - The user name to authenticate the database associated with the user. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Password - </td> - <td> - String - </td> - <td> - The password to authenticate the database associated with the user. - </td> - <td> - - - </td> - <td> - No - </td> - </tr> -</table> - -**Sample Configuration of DSL configs** - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<localEntry key="dslConnection" xmlns="http://ws.apache.org/ns/synapse"> - <mongodb.init> - <name>dslConnection</name> - <connectionType>DSL</connectionType> - <host>server.example.com</host> - <database>users</database> - <username>administrator</username> - <password>1234</password> - <authSource>aDifferentAuthDB</authSource> - <retryWrites>true</retryWrites> - <w>majority</w> - </mongodb.init> -</localEntry> -``` - -## Operations - -The following operations allow you to work with the MongoDB connector. Click an operation name to see parameter details and samples on how to use it. - -??? note "insertOne" - Inserts a document into a collection. See the related [insertOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertOne/) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Document - </td> - <td> - JSON String - </td> - <td> - A document to insert into the collection. 
- </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - </table> - - **Sample Configuration** - - ```xml - <mongodb.insertOne configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <document>{json-eval($.document)}</document> - </mongodb.insertOne> - - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "document": { - "_id": "123", - "name": "John Doe" - } - } - ``` - -??? note "insertMany" - Inserts multiple documents into a collection. See the related [insertMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertMany) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Documents - </td> - <td> - JSON String - </td> - <td> - An array of documents to insert into the collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Ordered - </td> - <td> - Boolean - </td> - <td> - A boolean specifying whether the MongoDB instance should perform an ordered or unordered insert. - </td> - <td> - true - </td> - <td> - No - </td> - </tr> - </table> - - **Sample Configuration** - - ```xml - <mongodb.insertMany configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <documents>{json-eval($.documents)}</documents> - <ordered>True</ordered> - </mongodb.insertMany> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "documents": [ - { - "name": "Jane Doe", - "_id": "123" - }, - { - "name": "Jane Doe", - "_id": "1234" - }, - { - "name": "Jane Doe", - "_id": "12345" - } - ] - } - ``` - -??? note "findOne" - Returns one document that satisfies the specified query criteria on the collection. If multiple documents satisfy the query, this method returns the first document according to the [natural order](https://docs.mongodb.com/manual/reference/glossary/#term-natural-order). See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - Specifies query selection criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). To return the first document in a collection, omit this parameter or pass an empty document ({}). - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Projection - </td> - <td> - JSON String - </td> - <td> - Specifies the fields to return using [projection operators](https://docs.mongodb.com/manual/reference/operator/projection/). Omit this parameter to return all fields in the matching document. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. 
- </td> - <td> - - </td> - <td> - No - </td> - </tr> - </table> - - **Sample Configuration** - - ```xml - <mongodb.findOne configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - </mongodb.findOne> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "query": { - "name": "Jane Doe" - } - } - ``` - -??? note "find" - Selects documents in a collection or [view](https://docs.mongodb.com/manual/core/views/) and returns a [cursor](https://docs.mongodb.com/manual/reference/glossary/#term-cursor) to the selected documents. See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - Selection filter using [query operators](https://docs.mongodb.com/manual/reference/operator/). To return all documents in a collection, omit this parameter or pass an empty document ({}). - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Projection - </td> - <td> - JSON String - </td> - <td> - Specifies the fields to return using [projection operators](https://docs.mongodb.com/manual/reference/operator/projection/). Omit this parameter to return all fields in the matching document. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Sort - </td> - <td> - JSON String - </td> - <td> - A document that defines the sort order of the result set. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - </table> - - **Sample Configuration** - - ```xml - <mongodb.find configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - </mongodb.find> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "query": { - "name": "John Doe" - } - } - ``` - -??? note "updateOne" - Updates a single document within the collection based on the filter. See the related [updateOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateOne/) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - The selection criteria for the update. The same [query selectors](https://docs.mongodb.com/manual/reference/operator/query/#query-selectors) as in the [find()](https://docs.mongodb.com/manual/reference/method/db.collection.find/#db.collection.find) method are available. Specify an empty document {} to update the first document returned in the collection. - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Update - </td> - <td> - JSON String - </td> - <td> - The modifications to apply. 
- </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Upsert - </td> - <td> - Boolean - </td> - <td> - Creates a new document if no documents match the filter. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Array Filters - </td> - <td> - JSON String - </td> - <td> - An array of filter documents that determine which array elements to modify for an update operation on an array field. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - </table> - - !!! Info - Array Filters parameter should be in a JSON object format. See the example given below. - - ``` - { - "collection": "TestCollection", - "query": { - "grades": { - "$gte": 100 - } - }, - "update": { - "$set": { - "grades.$[element]": 100 - } - }, - "arrayFilters": { - "element": { - "$gte": 100 - } - } - } - ``` - - **Sample Configuration** - - ```xml - <mongodb.updateOne configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - <update>{json-eval($.update)}</update> - <upsert>False</upsert> - </mongodb.updateOne> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "query": { - "_id": "123" - }, - "update": { - "$set": { - "name": "Jane Doe" - } - } - } - ``` - -??? note "updateMany" - Updates all documents that match the specified filter for a collection. See the related [updateMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - The selection criteria for the update. The same [query selectors](https://docs.mongodb.com/manual/reference/operator/query/#query-selectors) as in the [find()](https://docs.mongodb.com/manual/reference/method/db.collection.find/#db.collection.find) method are available. Specify an empty document {} to update all documents in the collection. - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Update - </td> - <td> - JSON String - </td> - <td> - The modifications to apply. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Upsert - </td> - <td> - Boolean - </td> - <td> - Creates a new document if no documents match the filter. - </td> - <td> - false - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Array Filters - </td> - <td> - JSON String - </td> - <td> - An array of filter documents that determine which array elements to modify for an update operation on an array field. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - </table> - - !!! Info - Array filters parameter should be in a JSON object format. See the example given below. 
- - ``` - { - "collection": "TestCollection", - "query": { - "grades": { - "$gte": 100 - } - }, - "update": { - "$set": { - "grades.$[element]": 100 - } - }, - "arrayFilters": { - "element": { - "$gte": 100 - } - } - } - ``` - - **Sample Configuration** - - ```xml - <mongodb.updateMany configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - <update>{json-eval($.update)}</update> - <upsert>False</upsert> - </mongodb.updateMany> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "query": { - "_id": "123" - }, - "update": { - "$set": { - "name": "Jane Doe" - } - } - } - ``` - -??? note "deleteOne" - Removes a single document from a collection. See the related [deleteOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteOne/#db.collection.deleteOne) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - Specifies deletion criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). Specify an empty document {} to delete the first document returned in the collection. - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. - </td> - <td> - - </td> - <td> - No - </td> - </tr> - </table> - - **Sample Configuration** - - ```xml - <mongodb.deleteOne configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - </mongodb.deleteOne> - ``` - - **Sample Request** - - ```json - { - "collection": "TestCollection", - "query": { - "name": "Jane Doe" - } - } - ``` - -??? note "deleteMany" - Removes all documents that match the query from a collection. See the related [deleteMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteMany/#db.collection.deleteMany) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Description</th> - <th>Default Value</th> - <th>Required</th> - </tr> - <tr> - <td> - Collection - </td> - <td> - String - </td> - <td> - The name of the MongoDB collection. - </td> - <td> - - </td> - <td> - Yes - </td> - </tr> - <tr> - <td> - Query - </td> - <td> - JSON String - </td> - <td> - Specifies deletion criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). To delete all documents in a collection, pass in an empty document ({}). - </td> - <td> - {} - </td> - <td> - No - </td> - </tr> - <tr> - <td> - Collation - </td> - <td> - JSON String - </td> - <td> - Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. 
-    </td>
-    <td>
-        -
-    </td>
-    <td>
-        No
-    </td>
-    </tr>
-    </table>
-
-    **Sample Configuration**
-
-    ```xml
-    <mongodb.deleteMany configKey="connectionURI">
-        <collection>{json-eval($.collection)}</collection>
-        <query>{json-eval($.query)}</query>
-    </mongodb.deleteMany>
-    ```
-
-    **Sample Request**
-
-    ```json
-    {
-        "collection": "TestCollection",
-        "query": {
-            "name": "John Doe"
-        }
-    }
-    ```
-
-??? note "aggregate"
-    Processes data in a collection and returns computed results. For more information, see the documentation for [aggregate](https://www.mongodb.com/docs/manual/reference/method/db.collection.aggregate/#db.collection.aggregate).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Description</th>
-            <th>Default Value</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>
-                Collection
-            </td>
-            <td>
-                String
-            </td>
-            <td>
-                The name of the MongoDB collection.
-            </td>
-            <td>
-                -
-            </td>
-            <td>
-                Yes
-            </td>
-        </tr>
-        <tr>
-            <td>
-                Stages
-            </td>
-            <td>
-                JSON Array
-            </td>
-            <td>
-                The stages of the aggregation pipeline. Each stage is a document with a corresponding operator name, such as $match or $group.
-            </td>
-            <td>
-                -
-            </td>
-            <td>
-                Yes
-            </td>
-        </tr>
-    </table>
-
-    **Sample Configuration**
-
-    ```xml
-    <mongodb.aggregate configKey="connectionURI">
-        <collection>{json-eval($.collection)}</collection>
-        <stages>{json-eval($.stages)}</stages>
-    </mongodb.aggregate>
-    ```
-
-    **Sample Request**
-
-    ```json
-    {
-        "collection": "TestCollection",
-        "stages": [
-            {
-                "$match": {
-                    "category": "Bakery"
-                }
-            },
-            {
-                "$group": {
-                    "_id": "$star",
-                    "totalStarCount": {
-                        "$sum": {
-                            "$multiply": [
-                                "$star",
-                                "$count"
-                            ]
-                        }
-                    },
-                    "averageStar": {
-                        "$avg": "$star"
-                    }
-                }
-            }
-        ]
-    }
-    ```
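-
-    As a point of comparison, the same pipeline can be run directly against MongoDB with `mongosh`. This is a sketch that assumes a local MongoDB server and the `TestDatabase` and `TestCollection` names used elsewhere in this documentation:
-
-    ```bash
-    # Run the aggregation pipeline from the sample request above.
-    mongosh --quiet --eval '
-      db.getSiblingDB("TestDatabase").TestCollection.aggregate([
-        { $match: { category: "Bakery" } },
-        { $group: {
-            _id: "$star",
-            totalStarCount: { $sum: { $multiply: ["$star", "$count"] } },
-            averageStar: { $avg: "$star" }
-        } }
-      ])'
-    ```
diff --git a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md
deleted file mode 100644
index 58a30c9d7d..0000000000
--- a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md
+++ /dev/null
@@ -1,263 +0,0 @@
-# MongoDB Connector Example
-
-The MongoDB Connector can be used to perform CRUD operations in the local database as well as in MongoDB Atlas (the cloud version of MongoDB).
-
-## What you'll build
-
-This example explains how to use the MongoDB Connector to insert and find documents in a MongoDB database.
-
-The sample API given below demonstrates how the MongoDB connector can be used to connect to the MongoDB server and perform **insert many** and **find** operations on it.
-
-- `/insertmany`: The user sends a request payload that includes the connection information, the collection name, and the documents to be inserted. This request is sent to the integration runtime by invoking the MongodbConnector API. This inserts the documents into the MongoDB database.
-
-    <p><img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-1.png" title="Insert many function" width="800" alt="Insert many function" /></p>
-
-- `/find`: The user sends a request payload containing the connection information, the collection name, and the find query. This request is sent to the integration runtime by invoking the MongodbConnector API. Once the API is invoked, it returns the documents matching the find query.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-2.png" title="Find function" width="800" alt="Find function"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.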
- -## Before you begin - -If you want to connect to MongoDB Atlas, follow the steps mentioned below to get the connection string. - -1. In the Clusters view, click **Connect** for the cluster to which you want to connect. - -2. Click **Choose a connection method**. - -3. Click **Connect your application**. - -4. Select Java from the **Driver** menu. - -5. Select the correct driver version from the **Version** menu. - -6. Clear the **Include full driver code example** check box to get the connection string. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -## Creating the Integration Logic - -1. Create a new integration project named `MongodbConnector`. Be sure to enable a Connector Exporter. - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-3.png" title="Create project" width="500" alt="Create project"/> - -2. Right-click the created Integration Project and select, -> **New** -> **Rest API** to create the REST API. - - <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/> - -3. Provide the API name as `MongoConnector` and the API context as `/mongodbconnector`. - -4. First, create the `/insertmany` resource. This API resource inserts documents into the MongoDB database.<br/> - Right-click on the API Resource and go to the **Properties** view. Let's use a URL template called `/insertmany` as there are two API resources inside a single API. The method is `Post`. - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-4.png" title="Adding the API resource." width="800" alt="Adding the API resource."/> - -5. Drag the 'insertMany' operation of the MongoDB Connector to the Design view as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-5.png" title="Adding the insert many operation." width="800" alt="Adding the insert many operation."/> - -6. Create a connection from the Properties view by clicking the '+' icon as shown below. - - Following values can be provided when connecting to the MongoDB database. <br/> - - - Connection Name - connectionURI - - Connection Type - URI - - Connection URI - mongodb+srv://server.example.com/?connectTimeoutMS=300000&authSource=aDifferentAuthDB - - Database - TestDatabase - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-6.png" title="Adding the connection." width="800" alt="Adding the connection."/> - -7. After the connection is successfully created, you can select the new connection from the 'Connection' menu in the properties view. - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-7.png" title="Selecting the connection." width="800" alt="Selecting the connection."/> - -8. Next, provide JSON expressions for the following two properties. These expressions will retrieve the respective values from the JSON request payload. - - - Collection - json-eval($.collection) - - Documents - json-eval($.documents) - -9. Drag the [Respond Mediator](https://ei.docs.wso2.com/en/latest/micro-integrator/references/mediators/respond-Mediator/) to the canvas. This returns the response message to the client (after inserting documents) as shown below. - - <img src="{{base_path}}/assets/img/integrate/connectors/mongodb-conn-8.png" title="Adding the respond mediator." 
width="800" alt="Adding the respond mediator."/> - -10. Create the next API resource (which is `/find`) by dragging another API resource to the Design view. This API resource will find all the documents matching the find query given by the user. This will also be a `POST` request. - -11. Drag the find operation of the Email Connector to the Design view as shown below. - -12. Select 'connectionURI' as the connection from the 'Connection' menu in the properties view. - -13. Next, provide JSON expressions for the following two properties. These expressions will retrieve the respective values from the JSON request payload. - - - Collection - json-eval($.collection) - - Query - json-eval($.query) - -14. Drag the [Respond Mediator](https://ei.docs.wso2.com/en/latest/micro-integrator/references/mediators/respond-Mediator/) to the canvas. This returns the response message to the client (after retrieving documents) as shown below. - -15. You can find the complete API XML configuration below. You can go to the source view and copy paste the following config. - -``` -<?xml version="1.0" encoding="UTF-8"?> -<api context="/mongodbconnector" name="MongodbConnector" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" uri-template="/insertmany"> - <inSequence> - <mongodb.insertMany configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <documents>{json-eval($.documents)}</documents> - <ordered>True</ordered> - </mongodb.insertMany> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/find"> - <inSequence> - <mongodb.find configKey="connectionURI"> - <collection>{json-eval($.collection)}</collection> - <query>{json-eval($.query)}</query> - </mongodb.find> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> -</api> -``` - -{!includes/reference/connectors/exporting-artifacts.md!} - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/MongodbConnector.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -## Deployment - -Follow these steps to deploy the exported CApp to the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -??? note "Click here for instructions on removing the iterative mongodb server logs" - Add the configuration below to **remove** the iterative `org.mongodb.driver.cluster` server logs; - - 1. Add the following logger to the `log4j2.properties` file in the `<PRODUCT_HOME>/conf` folder. - - ```xml - logger.org-mongodb-driver-cluster.name = org.mongodb.driver.cluster - logger.org-mongodb-driver-cluster.level = WARN - ``` - - 2. Then, add `org-mongodb-driver-cluster` to the list of `loggers`. - -!!! Prerequisite - - 1. Download the Mongo java driver from [here](https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.12/mongo-java-driver-3.12.12.jar). - - 2. Add the driver to the `<PRODUCT_HOME>/dropins` folder. - - 3. Restart the server. - -## Testing - -### Insert Many Operation - -1. Create a file named `insertmany.json` with the following payload: - - ```json - { - "collection": "TestCollection", - "documents": [ - { - "name": "Jane Doe", - "_id": "123" - }, - { - "name": "John Doe", - "_id": "1234" - }, - { - "name": "Jane Doe", - "_id": "12345" - } - ] - } - ``` - -2. Invoke the API as shown below using the curl command. - - !!! 
Info - The Curl application can be downloaded from [here](https://curl.haxx.se/download.html). - - ```bash - curl -H "Content-Type: application/xml" --request POST --data @insertmany.json http://localhost:8290/mongodbconnector/insertmany - ``` - - **Expected Response** : You should get a response as given below and the data will be added to the database. - - ```json - { - "InsertManyResult": "Successful" - } - ``` - -### Find Operation - -!!! Note - In order to find documents by ObjectId, the find query payload should be in the following format: - - ```json - { - "query": { - "_id": { - "$oid": "6011b180007ce60ab2ad74a5" - } - } - } - ``` - -1. Create a file called `find.json` with the following payload. - - ```json - { - "collection": "TestCollection", - "query": { - "name": "Jane Doe" - } - } - ``` - -2. Invoke the API using the curl command shown below. - - !!! Info - Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - - ```bash - curl -H "Content-Type: application/xml" --request POST --data @find.json http://localhost:8290/mongodbconnector/find - ``` - - **Expected Response** : You should get a response similar to the one given below. - - ```json - [ - { - "_id": "123", - "name": "Jane Doe" - }, - { - "_id": "12345", - "name": "Jane Doe" - } - ] - ``` - -## What's Next - -- To customize this example for your own scenario, see [MongoDB Connector Configuration]({{base_path}}/reference/connectors/mongodb-connector/mongodb-connector-config/) documentation for all operation details of the connector. diff --git a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-overview.md b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-overview.md deleted file mode 100644 index 7c05083977..0000000000 --- a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-overview.md +++ /dev/null @@ -1,35 +0,0 @@ -# MongoDB Connector Overview - -The MongoDB Connector allows you to connect to the MongoDB database via different connection URI and perform CRUD operations on the database. - -The supported connection URI types and connection options are listed in the [MongoDB Connection String](https://docs.mongodb.com/manual/reference/connection-string/) documentation. - -To download the MongoDB Connector, go to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "MongoDB". - -It is always recommended to download the latest version of the connector. - -<img src="{{base_path}}/assets/img/integrate/connectors/mongodb-connector-store.png" title="MongoDB Connector Store" width="200" alt="MongoDB Connector Store"/> - -## Compatibility - -| Connector version | Supported product versions | -| ----------------- | ----------------------------- | -| 1.0.0 | APIM 4.0.0, EI 7.1.0, EI 6.6.0 | - -This connector was tested with MongoDB version 4.4.3. - -## MongoDB Connector documentation - -- **[MongoDB Connector Example]({{base_path}}/reference/connectors/mongodb-connector/mongodb-connector-example/)**: This example demonstrates how to use MongoDB connector to connect to the MongoDB database and perform CRUD operations on it. - -- **[MongoDB Connector Reference]({{base_path}}/reference/connectors/mongodb-connector/mongodb-connector-config/)**: This documentation provides a reference guide for the MongoDB Connector. - -## How to contribute - -As an open source project, WSO2 extensions welcome contributions from the community. 
- -To contribute to the code for this connector, please create a pull request in the following repository. - -- [MongoDB Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-mongodb) - -Check the issue tracker for open issues that interest you. We look forward to receiving your contributions. diff --git a/en/docs/reference/connectors/redis-connector/1.0.1/redis-connector-reference.md b/en/docs/reference/connectors/redis-connector/1.0.1/redis-connector-reference.md deleted file mode 100644 index 11871d553d..0000000000 --- a/en/docs/reference/connectors/redis-connector/1.0.1/redis-connector-reference.md +++ /dev/null @@ -1,1987 +0,0 @@ -# Redis Connector Reference - -To use the Redis connector, add the <redis.init> element in your configuration before carrying out any other Redis operations. - -??? note "redis.init" - The redis.init operation initializes the connector to interact with Redis. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisHost</td> - <td>The Redis host name (default localhost).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisPort</td> - <td>The port on which the Redis server is running (the default port is 6379).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - </redis.init> - ``` - ---- - -### Connection Commands - -??? note "echo" - The echo operation returns a specified string. See the [related documentation](https://redis.io/commands/echo) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisMessage</td> - <td>The message that you want to echo.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.echo> - <redisMessage>{$ctx:redisMessage}</redisMessage> - </redis.echo> - ``` - - **Sample request** - - ```json - { - "redisMessage":"sampleMessage" - } - ``` - -??? note "ping" - The ping operation pings the server to verify whether the connection is still alive. See the [related documentation](https://redis.io/commands/ping) for more information. - - **Sample configuration** - - ```xml - <redis.ping/> - ``` - - **Sample request** - - An empty request can be handled by the ping operation. - -??? note "quit" - The quit operation closes the connection to the server. See the [related documentation](https://redis.io/commands/quit) for more information. - - **Sample configuration** - - ```xml - <redis.quit/> - ``` - - **Sample request** - - An empty request can be handled by the quit operation. - -### Hashes - -??? note "hDel" - The hDel operation deletes one or more specified hash fields. See the [related documentation](https://redis.io/commands/hdel) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hDel> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField1 sampleField2" - } - ``` - -??? note "hExists" - The hExists operation determines whether a specified hash field exists. See the [related documentation](https://redis.io/commands/hexists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hExists> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hExists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGet" - The hGet operation retrieves the value of a particular field in a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hget) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to retrieve the value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hGet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGetAll" - The hGetAll operation retrieves all the fields and values of a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hgetall) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGetAll> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hGetAll> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "hIncrBy" - The hIncrBy operation increments the integer value of a hash field by the specified amount. See the [related documentation](https://redis.io/commands/hincrby) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisField</td>
-            <td>The hash field for which you want to increment the value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The amount by which you want to increment the hash field value.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hIncrBy>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisField>{$ctx:redisField}</redisField>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.hIncrBy>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisField":"sampleField",
-        "redisValue":"1"
-    }
-    ```
-
-??? note "hKeys"
-    The hKeys operation retrieves all the fields in a hash. See the [related documentation](https://redis.io/commands/hkeys) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hKeys>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hKeys>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hLen"
-    The hLen operation retrieves the number of fields in a hash. See the [related documentation](https://redis.io/commands/hlen) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hLen>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hLen>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hMGet"
-    The hMGet operation retrieves values associated with each of the specified fields in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hmget) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisFields</td>
-            <td>The hash fields for which you want to retrieve values.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hMGet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisFields>{$ctx:redisFields}</redisFields>
-    </redis.hMGet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisFields":"sampleField1 sampleField2"
-    }
-    ```
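-
-    As a quick illustration of the field-ordering semantics, the equivalent redis-cli commands look like this. This is a sketch that assumes a local Redis instance and reuses the sample key and field names from the request above:
-
-    ```bash
-    # Set two hash fields, then read them back in the requested order.
-    redis-cli HMSET sampleKey sampleField1 sampleValue1 sampleField2 sampleValue2
-    redis-cli HMGET sampleKey sampleField1 sampleField2
-    # 1) "sampleValue1"
-    # 2) "sampleValue2"
-    ```
-
-??? note "hMSet"
-    The hMSet operation sets specified fields to their respective values in the hash stored in a particular key. See the [related documentation](https://redis.io/commands/hmset) for more information.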
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFieldsValues</td> - <td>The fields you want to set and their respective values.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hMSet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFieldsValues>{$ctx:redisFieldsValues}</redisFieldsValues> - </redis.hMSet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFieldsValues":"sampleField1 sampleValue1 sampleField2 sampleValue2" - } - ``` - -??? note "hSet" - The hSet operation sets a specific field in a hash to a specified value. See the [related documentation](https://redis.io/commands/hset) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to set a value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>The amount by which you want to increment the hash field value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hSet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisField>{$ctx:redisField}</redisField> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.hSet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisField":"sampleField", - "redisValue":"1" - } - ``` - -??? note "hSetnX" - The hSetnX operation sets a specified field to a value, only if the field does not already exist in the hash. If field already exists, this operation has no effect. See the [related documentation](https://redis.io/commands/hsetnx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to set a value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>The amount by which you want to increment the hash field value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hSetnX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisField>{$ctx:redisField}</redisField> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.hSetnX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisField":"sampleField", - "redisValue":"1" - } - ``` - -??? note "hVals" - The hVals operation retrieves all values in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hvals) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hVals> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hVals> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -### Keys - -??? note "del" - The del operation deletes a specified key if it exists. See the [related documentation](https://redis.io/commands/del) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.del> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.del> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "exists" - The exists operation determines whether a specified key exists. See the [related documentation](https://redis.io/commands/exists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.exists> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.exists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "expire" - The expire operation sets a TTL(Time to live) for a key so that the key will automatically delete once it reaches the TTL. The TTL should be specified in seconds. See the [related documentation](https://redis.io/commands/expire) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to specify a TTL.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisSeconds</td> - <td>The number of seconds representing the TTL that you want to set for the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.expire> - <redisKey>{$ctx:redisKey}</redisKey> - <redisSeconds>{$ctx:redisSeconds}</redisSeconds> - </redis.expire> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisSeconds":"10" - } - ``` - -??? note "expireAt" - The expireAt operation sets the time after which an existing key should expire. Here the time should be specified as a UNIX timestamp. See the [related documentation](https://redis.io/commands/expireat) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to set an expiration.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisUnixTime</td> - <td>The time to expire specified in the UNIX timestamp format.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.expire> - <redisKey>{$ctx:redisKey}</redisKey> - <redisUnixTime>{$ctx:redisUnixTime}</redisUnixTime> - </redis.expire> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisUnixTime":"1293840000" - } - ``` - -??? note "keys" - The keys operation retrieves all keys that match a specified pattern. See the [related documentation](https://redis.io/commands/keys) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisPattern</td> - <td>The pattern that you want to match when retrieving keys.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.keys> - <redisPattern>{$ctx:redisPattern}</redisPattern> - </redis.keys> - ``` - - **Sample request** - - ```json - { - "redisPattern":"*" - } - ``` - -??? note "randomKey" - A sample request with an empty body can be handled by the randomKey operation. 
See the [related documentation](https://redis.io/commands/randomkey) for more information. - - **Sample configuration** - - ```xml - <redis.randomKey/> - ``` - - **Sample request** - - ```json - { - "redisPattern":"*" - } - ``` - -??? note "rename" - The rename operation renames an existing key to a new name that is specified. See the [related documentation](https://redis.io/commands/rename) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisOldKey</td> - <td>The name of an existing key that you want to rename.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisNewKey</td> - <td>The new name that you want the key to have.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rename> - <redisOldKey>{$ctx:redisOldKey}</redisOldKey> - <redisNewKey>{$ctx:redisNewKey}</redisNewKey> - </redis.rename> - ``` - - **Sample request** - - ```json - { - "redisOldKey":"sampleOldKey", - "redisNewKey":"sampleNewKey" - } - ``` - -??? note "renamenX" - The renamenX operation renames a key to a new key, only if the new key does not already exist. See the [related documentation](https://redis.io/commands/renamenx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisOldKey</td> - <td>The name of an existing key that you want to rename.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisNewKey</td> - <td>The new name that you want the key to have.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.renamenX> - <redisOldKey>{$ctx:redisOldKey}</redisOldKey> - <redisNewKey>{$ctx:redisNewKey}</redisNewKey> - </redis.renamenX> - ``` - - **Sample request** - - ```json - { - "redisOldKey":"sampleOldKey", - "redisNewKey":"sampleNewKey" - } - ``` - -??? note "ttl" - The ttl operation retrieves the TTL (Time to Live) value of a specified key. See the [related documentation](https://redis.io/commands/ttl) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key for which you want to retrieve the TTL.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.ttl> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.ttl> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "type" - The type operation retrieves the data type of a value stored in a specified key. See the [related documentation](https://redis.io/commands/type) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that the value is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.type> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.type> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -### Lists - -??? note "blPop" - The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information. 
-
-### Lists
-
-??? note "blPop"
- The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBlPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.blPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBlPopTimeout>{$ctx:redisBlPopTimeout}</redisBlPopTimeout>
- </redis.blPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBlPopTimeout":"0"
- }
- ```
-
-??? note "brPop"
- The brPop operation retrieves the last element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/brpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBrPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the tail of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.brPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
- </redis.brPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBrPopTimeout":"0"
- }
- ```
-
-??? note "lInsert"
- The lInsert operation inserts a specified element before or after an existing element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/linsert) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisWhere</td>
- <td>The place where you want to add an element. Possible values are BEFORE or AFTER. For example, whether you want to add an element before a particular element that exists in the list.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisPivot</td>
- <td>An existing element in the list that is used as the pivot element.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to insert into the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lInsert>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisWhere>{$ctx:redisWhere}</redisWhere>
- <redisPivot>{$ctx:redisPivot}</redisPivot>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lInsert>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisWhere":"BEFORE",
- "redisPivot":"samplePivotElement",
- "redisValue":"sampleInsertElement"
- }
- ```
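-
- As a sketch of how a blocking read might be wired up (assuming a Redis server on localhost:6379; the sequence name and timeout value are illustrative), the following sequence waits up to 5 seconds for an element to appear at the head of a list. Per the Redis BLPOP documentation, a timeout of 0 blocks indefinitely.
-
- ```xml
- <sequence name="blockingListConsumerSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Block for at most 5 seconds waiting for a head element -->
- <redis.blPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBlPopTimeout>5</redisBlPopTimeout>
- </redis.blPop>
- </sequence>
- ```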
-
-??? note "lLen"
- The lLen operation retrieves the length of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/llen) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lLen>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lLen>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPop"
- The lPop operation retrieves the first element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPush"
- The lPush operation inserts one or more elements to the head of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpush) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPush>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPush>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
-
-??? note "lPushX"
- The lPushX operation inserts one or more elements to the head of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/lpushx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPushX>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPushX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
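-
- Combining these operations, a simple FIFO queue can be sketched by pushing to the tail with rPush and popping from the head with lPop. This is illustrative only; it assumes a Redis server on localhost:6379, and the sequence name and key are hypothetical.
-
- ```xml
- <sequence name="simpleQueueSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Enqueue at the tail, then dequeue from the head (FIFO order) -->
- <redis.rPush>
- <redisKey>sampleQueue</redisKey>
- <redisStrings>sampleValue</redisStrings>
- </redis.rPush>
- <redis.lPop>
- <redisKey>sampleQueue</redisKey>
- </redis.lPop>
- </sequence>
- ```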
-
-??? note "lRange"
- The lRange operation retrieves a range of elements from a list. See the [related documentation](https://redis.io/commands/lrange) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRange>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lRange>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "lRem"
- The lRem operation removes elements from a list. See the [related documentation](https://redis.io/commands/lrem) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisCount</td>
- <td>The number of occurrences of the element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRem>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisCount>{$ctx:redisCount}</redisCount>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lRem>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisCount":"1",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lSet"
- The lSet operation sets the value of an element in a list by its index. See the [related documentation](https://redis.io/commands/lset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisIndex</td>
- <td>The index of the element that you want to set.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the element.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisIndex>{$ctx:redisIndex}</redisIndex>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisIndex":"0",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lTrim"
- The lTrim operation trims a list to a specified range. See the [related documentation](https://redis.io/commands/ltrim) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lTrim>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lTrim>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "rPopLPush"
- The rPopLPush operation removes the last element in a list, then inserts it into another list, and then returns it.
See the [related documentation](https://redis.io/commands/rpoplpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key from where the last element is retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of destination key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPopLPush> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - </redis.rPopLPush> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey" - } - ``` - -??? note "rPush" - The rPush operation inserts one or more elements to the tail of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/rpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisStrings</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPush> - <redisKey>{$ctx:redisKey}</redisKey> - <redisStrings>{$ctx:redisStrings}</redisStrings> - </redis.rPush> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisStrings":"sampleValues" - } - ``` - -??? note "rPushX" - The rPushX operation inserts one or more elements to the tail of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/rpushx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPushX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.rPushX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisValue":"sampleValue" - } - ``` - -### Server Commands - -??? note "flushAll" - The flushAll operation deletes all the keys from all existing databases. See the [related documentation](https://redis.io/commands/flushall) for more information. - - **Sample configuration** - - ```xml - <redis.flushAll/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushAll operation. - -??? note "flushDB" - The flushDB operation deletes all the keys from the currently selected database. See the [related documentation](https://redis.io/commands/flushdb) for more information. - - **Sample configuration** - - ```xml - <redis.flushDB/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushDB operation. - -### Sets - -??? note "sadd" - The sadd operation is used to add one or more members to a set. See the [related documentation](https://redis.io/commands/sadd) for more information. 
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMembers</td>
- <td>One or more members that you want to add to the set.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sadd>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisMembers>{$ctx:redisMembers}</redisMembers>
- </redis.sadd>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisMembers":"sampleValue"
- }
- ```
-
-??? note "sDiffStore"
- The sDiffStore operation is used to subtract multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sdiffstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sDiffStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sDiffStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-??? note "sInter"
- The sInter operation is used to intersect multiple sets. See the [related documentation](https://redis.io/commands/sinter) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sInter>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.sInter>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "sInterStore"
- The sInterStore operation is used to intersect multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sinterstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sInterStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sInterStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-??? note "sIsMember"
- The sIsMember operation is used to determine if a given value is a member of a set. See the [related documentation](https://redis.io/commands/sismember) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMembers</td>
- <td>The value that you want to check for membership in the set.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sIsMember>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisMembers>{$ctx:redisMembers}</redisMembers>
- </redis.sIsMember>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisMembers":"sampleValue"
- }
- ```
-
-??? note "sMembers"
- The sMembers operation is used to get all the members in a set.
See the [related documentation](https://redis.io/commands/smembers) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMembers> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sMembers> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sMove" - The sMove operation is used to move a member from one set to another. See the [related documentation](https://redis.io/commands/smove) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of the destination key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMember</td> - <td>The name of the member.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMove> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - <redisMember>{$ctx:redisMember}</redisMember> - </redis.sMove> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey", - "redisMember":"sampleMember" - } - ``` - -??? note "sPop" - The sPop operation is used to remove and return one or multiple random members from a set. See the [related documentation](https://redis.io/commands/spop) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sPop> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sPop> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRandMember" - The sRandMember operation is used to get one or multiple random members from a set. See the [related documentation](https://redis.io/commands/srandmember) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRandMember> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sRandMember> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRem" - The sRem operation is used to remove one or more members from a set. See the [related documentation](https://redis.io/commands/srem) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMembers</td> - <td>The name of a member in a key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRem> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMembers>{$ctx:redisMembers}</redisMembers> - </redis.sRem> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMembers":"sampleValue" - } - ``` - -??? note "sUnion" - The sUnion operation is used to add multiple sets. See the [related documentation](https://redis.io/commands/sunion) for more information. 
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sUnion>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.sUnion>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "sUnionStore"
- The sUnionStore operation is used to add multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sunionstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sUnionStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sUnionStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-### Sorted Sets
-
-??? note "zadd"
- The zadd operation adds one or more members to a sorted set, or updates the score of a member that already exists. See the [related documentation](https://redis.io/commands/zadd) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisScore</td>
- <td>The score to assign to the member.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMember</td>
- <td>The member that you want to add.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.zadd>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisScore>{$ctx:redisScore}</redisScore>
- <redisMember>{$ctx:redisMember}</redisMember>
- </redis.zadd>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisScore":"1.1",
- "redisMember":"sampleMember"
- }
- ```
-
-??? note "zCount"
- The zCount operation retrieves a count of members in a sorted set, with scores that are within the min and max values specified. See the [related documentation](https://redis.io/commands/zcount) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMin</td> - <td>The minimum score value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMax</td> - <td>The maximum score value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.zCount> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMin>{$ctx:redisMin}</redisMin> - <redisMax>{$ctx:redisMax}</redisMax> - </redis.zCount> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMin":"1.1", - "redisMax":"2.2" - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/redis-connector/2.1.x/redis-connector-reference.md b/en/docs/reference/connectors/redis-connector/2.1.x/redis-connector-reference.md deleted file mode 100644 index acd6a7fc5f..0000000000 --- a/en/docs/reference/connectors/redis-connector/2.1.x/redis-connector-reference.md +++ /dev/null @@ -1,2102 +0,0 @@ -# Redis Connector Reference - -To use the Redis connector, add the <redis.init> element in your configuration before carrying out any other Redis operations. - -??? note "redis.init - Standalone mode" - The redis.init operation initializes the connector to interact with Redis. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisHost</td> - <td>The Redis host name (default localhost).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisPort</td> - <td>The port on which the Redis server is running (the default port is 6379).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional</td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` - -??? note "redis.init - Cluster mode" - The redis.init operation initializes the connector to interact with Redis cluster. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisClusterEnabled</td> - <td>A flag to enable the redis cluster mode (Default is false).</td> - <td>Yes</td> - </tr> - <tr> - <td>clusterNodes</td> - <td>Comma separated list of the cluster nodes as Node1_hostname:Port,Node2_hostname:Port, etc. 
Example: 127.0.0.1:40001,127.0.0.1:40002</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>maxAttempts</td> - <td>The number of retries.</td> - <td>Optional. The default is 5. </td> - </tr> - <tr> - <td>clientName</td> - <td>Name of the client.</td> - <td>Optional. Default is empty</td> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional. </td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` ---- - -### Connection Commands - -??? note "echo" - The echo operation returns a specified string. See the [related documentation](https://redis.io/commands/echo) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisMessage</td> - <td>The message that you want to echo.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.echo> - <redisMessage>{$ctx:redisMessage}</redisMessage> - </redis.echo> - ``` - - **Sample request** - - ```json - { - "redisMessage":"sampleMessage" - } - ``` - -??? note "ping" - The ping operation pings the server to verify whether the connection is still alive. See the [related documentation](https://redis.io/commands/ping) for more information. - - **Sample configuration** - - ```xml - <redis.ping/> - ``` - - **Sample request** - - An empty request can be handled by the ping operation. - -??? note "quit" - The quit operation closes the connection to the server. See the [related documentation](https://redis.io/commands/quit) for more information. - - **Sample configuration** - - ```xml - <redis.quit/> - ``` - - **Sample request** - - An empty request can be handled by the quit operation. - -### Hashes - -??? note "hDel" - The hDel operation deletes one or more specified hash fields. See the [related documentation](https://redis.io/commands/hdel) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hDel> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField1 sampleField2" - } - ``` - -??? note "hExists" - The hExists operation determines whether a specified hash field exists. See the [related documentation](https://redis.io/commands/hexists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hExists> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hExists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGet" - The hGet operation retrieves the value of a particular field in a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hget) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to retrieve the value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hGet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGetAll" - The hGetAll operation retrieves all the fields and values of a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hgetall) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGetAll> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hGetAll> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "hIncrBy" - The hIncrBy operation increments the integer value of a hash field by the specified amount. See the [related documentation](https://redis.io/commands/hincrby) for more information. 
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The hash field for which you want to increment the value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The amount by which you want to increment the hash field value.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hIncrBy>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hIncrBy>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hKeys"
- The hKeys operation retrieves all the fields in a hash. See the [related documentation](https://redis.io/commands/hkeys) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hKeys>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hKeys>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "hLen"
- The hLen operation retrieves the number of fields in a hash. See the [related documentation](https://redis.io/commands/hlen) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hLen>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hLen>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "hMGet"
- The hMGet operation retrieves values associated with each of the specified fields in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hmget) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisFields</td>
- <td>The hash fields for which you want to retrieve values.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hMGet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisFields>{$ctx:redisFields}</redisFields>
- </redis.hMGet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisFields":"sampleField1 sampleField2"
- }
- ```
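-
- A common use of hIncrBy is maintaining counters in a hash. The following minimal sketch increments a per-page view count; it assumes a Redis server on localhost:6379, and the sequence name, key, and field are hypothetical.
-
- ```xml
- <sequence name="pageViewCounterSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Increment the view count for the "home" field of the pageViews hash -->
- <redis.hIncrBy>
- <redisKey>pageViews</redisKey>
- <redisField>home</redisField>
- <redisValue>1</redisValue>
- </redis.hIncrBy>
- </sequence>
- ```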
-
-??? note "hMSet"
- The hMSet operation sets specified fields to their respective values in the hash stored in a particular key. See the [related documentation](https://redis.io/commands/hmset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisFieldsValues</td>
- <td>The fields you want to set and their respective values.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hMSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisFieldsValues>{$ctx:redisFieldsValues}</redisFieldsValues>
- </redis.hMSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisFieldsValues":"sampleField1 sampleValue1 sampleField2 sampleValue2"
- }
- ```
-
-??? note "hSet"
- The hSet operation sets a specific field in a hash to a specified value. See the [related documentation](https://redis.io/commands/hset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The field for which you want to set a value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the field.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hSetnX"
- The hSetnX operation sets a specified field to a value, only if the field does not already exist in the hash. If the field already exists, this operation has no effect. See the [related documentation](https://redis.io/commands/hsetnx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The field for which you want to set a value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the field.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hSetnX>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hSetnX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hVals"
- The hVals operation retrieves all values in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hvals) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hVals>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hVals>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
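-
- Putting the hash read and write operations together, the following minimal sketch writes two fields with hMSet and reads them back with hMGet. It assumes a Redis server on localhost:6379; the sequence name, key, and field names are hypothetical.
-
- ```xml
- <sequence name="hashReadWriteSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Write two fields in one call, then read both back -->
- <redis.hMSet>
- <redisKey>sampleProfile</redisKey>
- <redisFieldsValues>name sampleName city sampleCity</redisFieldsValues>
- </redis.hMSet>
- <redis.hMGet>
- <redisKey>sampleProfile</redisKey>
- <redisFields>name city</redisFields>
- </redis.hMGet>
- </sequence>
- ```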
-
-### Keys
-
-??? note "del"
- The del operation deletes a specified key if it exists. See the [related documentation](https://redis.io/commands/del) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key that you want to delete.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.del>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.del>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "exists"
- The exists operation determines whether a specified key exists. See the [related documentation](https://redis.io/commands/exists) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key whose existence you want to determine.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.exists>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.exists>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "expire"
- The expire operation sets a TTL (Time to Live) for a key so that the key is automatically deleted once it reaches the TTL. The TTL should be specified in seconds. See the [related documentation](https://redis.io/commands/expire) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to specify a TTL.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisSeconds</td>
- <td>The number of seconds representing the TTL that you want to set for the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.expire>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisSeconds>{$ctx:redisSeconds}</redisSeconds>
- </redis.expire>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisSeconds":"10"
- }
- ```
-
-??? note "expireAt"
- The expireAt operation sets the time after which an existing key should expire. Here the time should be specified as a UNIX timestamp. See the [related documentation](https://redis.io/commands/expireat) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to set an expiration.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisUnixTime</td>
- <td>The time to expire, specified in the UNIX timestamp format.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.expireAt>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisUnixTime>{$ctx:redisUnixTime}</redisUnixTime>
- </redis.expireAt>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisUnixTime":"1293840000"
- }
- ```
-
-??? note "keys"
- The keys operation retrieves all keys that match a specified pattern. See the [related documentation](https://redis.io/commands/keys) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisPattern</td>
- <td>The pattern that you want to match when retrieving keys.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.keys>
- <redisPattern>{$ctx:redisPattern}</redisPattern>
- </redis.keys>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisPattern":"*"
- }
- ```
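-
- As an illustration only, the following minimal sequence sketch initializes the connector, checks whether a key exists, and then deletes it. It assumes a Redis server on localhost:6379; the sequence name and values are hypothetical.
-
- ```xml
- <sequence name="deleteIfExistsSequence" xmlns="http://ws.apache.org/ns/synapse">
- <!-- Initialize the connector against a local Redis server (assumed endpoint) -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- <redisTimeout>2000</redisTimeout>
- </redis.init>
- <!-- Check for the key, then delete it -->
- <redis.exists>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.exists>
- <redis.del>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.del>
- </sequence>
- ```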
-
-??? note "randomKey"
- The randomKey operation returns a random key from the currently selected database. See the [related documentation](https://redis.io/commands/randomkey) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.randomKey/>
- ```
-
- **Sample request**
-
- A sample request with an empty body can be handled by the randomKey operation.
-
-??? note "rename"
- The rename operation renames an existing key to a new name that is specified. See the [related documentation](https://redis.io/commands/rename) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisOldKey</td>
- <td>The name of an existing key that you want to rename.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisNewKey</td>
- <td>The new name that you want the key to have.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.rename>
- <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
- <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
- </redis.rename>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisOldKey":"sampleOldKey",
- "redisNewKey":"sampleNewKey"
- }
- ```
-
-??? note "renamenX"
- The renamenX operation renames a key to a new key, only if the new key does not already exist. See the [related documentation](https://redis.io/commands/renamenx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisOldKey</td>
- <td>The name of an existing key that you want to rename.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisNewKey</td>
- <td>The new name that you want the key to have.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.renamenX>
- <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
- <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
- </redis.renamenX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisOldKey":"sampleOldKey",
- "redisNewKey":"sampleNewKey"
- }
- ```
-
-??? note "ttl"
- The ttl operation retrieves the TTL (Time to Live) value of a specified key. See the [related documentation](https://redis.io/commands/ttl) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to retrieve the TTL.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.ttl>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.ttl>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "type"
- The type operation retrieves the data type of a value stored in a specified key. See the [related documentation](https://redis.io/commands/type) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the value is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.type>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.type>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
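-
- To tie the key commands together, here is a minimal sketch that sets a 60-second TTL on a key and then reads the remaining TTL back. It assumes a Redis server on localhost:6379; the sequence name and the literal TTL are illustrative.
-
- ```xml
- <sequence name="ttlInspectionSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Set a 60-second TTL, then query how many seconds remain -->
- <redis.expire>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisSeconds>60</redisSeconds>
- </redis.expire>
- <redis.ttl>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.ttl>
- </sequence>
- ```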
-
-### Lists
-
-??? note "blPop"
- The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBlPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.blPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBlPopTimeout>{$ctx:redisBlPopTimeout}</redisBlPopTimeout>
- </redis.blPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBlPopTimeout":"0"
- }
- ```
-
-??? note "brPop"
- The brPop operation retrieves the last element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/brpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBrPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the tail of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.brPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
- </redis.brPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBrPopTimeout":"0"
- }
- ```
-
-??? note "lInsert"
- The lInsert operation inserts a specified element before or after an existing element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/linsert) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisWhere</td>
- <td>The place where you want to add an element. Possible values are BEFORE or AFTER. For example, whether you want to add an element before a particular element that exists in the list.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisPivot</td>
- <td>An existing element in the list that is used as the pivot element.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to insert into the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lInsert>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisWhere>{$ctx:redisWhere}</redisWhere>
- <redisPivot>{$ctx:redisPivot}</redisPivot>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lInsert>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisWhere":"BEFORE",
- "redisPivot":"samplePivotElement",
- "redisValue":"sampleInsertElement"
- }
- ```
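-
- As a sketch of how a blocking read might be wired up (assuming a Redis server on localhost:6379; the sequence name and timeout value are illustrative), the following sequence waits up to 5 seconds for an element to appear at the head of a list. Per the Redis BLPOP documentation, a timeout of 0 blocks indefinitely.
-
- ```xml
- <sequence name="blockingListConsumerSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Block for at most 5 seconds waiting for a head element -->
- <redis.blPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBlPopTimeout>5</redisBlPopTimeout>
- </redis.blPop>
- </sequence>
- ```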
-
-??? note "lLen"
- The lLen operation retrieves the length of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/llen) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lLen>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lLen>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPop"
- The lPop operation retrieves the first element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPush"
- The lPush operation inserts one or more elements to the head of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpush) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPush>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPush>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
-
-??? note "lPushX"
- The lPushX operation inserts one or more elements to the head of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/lpushx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPushX>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPushX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
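-
- Combining these operations, a simple FIFO queue can be sketched by pushing to the tail with rPush and popping from the head with lPop. This is illustrative only; it assumes a Redis server on localhost:6379, and the sequence name and key are hypothetical.
-
- ```xml
- <sequence name="simpleQueueSequence" xmlns="http://ws.apache.org/ns/synapse">
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Enqueue at the tail, then dequeue from the head (FIFO order) -->
- <redis.rPush>
- <redisKey>sampleQueue</redisKey>
- <redisStrings>sampleValue</redisStrings>
- </redis.rPush>
- <redis.lPop>
- <redisKey>sampleQueue</redisKey>
- </redis.lPop>
- </sequence>
- ```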
-
-??? note "lRange"
- The lRange operation retrieves a range of elements from a list. See the [related documentation](https://redis.io/commands/lrange) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRange>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lRange>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "lRem"
- The lRem operation removes elements from a list. See the [related documentation](https://redis.io/commands/lrem) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisCount</td>
- <td>The number of occurrences of the element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRem>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisCount>{$ctx:redisCount}</redisCount>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lRem>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisCount":"1",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lSet"
- The lSet operation sets the value of an element in a list by its index. See the [related documentation](https://redis.io/commands/lset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisIndex</td>
- <td>The index of the element that you want to set.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the element.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisIndex>{$ctx:redisIndex}</redisIndex>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisIndex":"0",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lTrim"
- The lTrim operation trims a list to a specified range. See the [related documentation](https://redis.io/commands/ltrim) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lTrim>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lTrim>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "rPopLPush"
- The rPopLPush operation removes the last element in a list, then inserts it into another list, and then returns it.
See the [related documentation](https://redis.io/commands/rpoplpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key from where the last element is retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of destination key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPopLPush> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - </redis.rPopLPush> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey" - } - ``` - -??? note "rPush" - The rPush operation inserts one or more elements to the tail of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/rpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisStrings</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPush> - <redisKey>{$ctx:redisKey}</redisKey> - <redisStrings>{$ctx:redisStrings}</redisStrings> - </redis.rPush> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisStrings":"sampleValues" - } - ``` - -??? note "rPushX" - The rPushX operation inserts one or more elements to the tail of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/rpushx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPushX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.rPushX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisValue":"sampleValue" - } - ``` - -### Server Commands - -??? note "flushAll" - The flushAll operation deletes all the keys from all existing databases. See the [related documentation](https://redis.io/commands/flushall) for more information. - - **Sample configuration** - - ```xml - <redis.flushAll/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushAll operation. - -??? note "flushDB" - The flushDB operation deletes all the keys from the currently selected database. See the [related documentation](https://redis.io/commands/flushdb) for more information. - - **Sample configuration** - - ```xml - <redis.flushDB/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushDB operation. - -### Sets - -??? note "sadd" - The sadd operation is used to add one or more members to a set. See the [related documentation](https://redis.io/commands/sadd) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The value to be added to the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sDiffStore"
-    The sDiffStore operation is used to subtract multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sdiffstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sDiffStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sDiffStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sInter"
-    The sInter operation is used to intersect multiple sets. See the [related documentation](https://redis.io/commands/sinter) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInter>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sInter>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sInterStore"
-    The sInterStore operation is used to intersect multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sinterstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInterStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sInterStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sIsMember"
-    The sIsMember operation is used to determine if a given value is a member of a set. See the [related documentation](https://redis.io/commands/sismember) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The name of a member in a key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sIsMember>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sIsMember>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
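-
-    **Example composition**
-
-    The following is a minimal sketch (an editorial addition, not one of the connector's own samples) of a Synapse sequence that adds a member with sadd and then verifies it with sIsMember. The sequence name and the property values (`onlineUsers`, `user42`) are illustrative assumptions; in practice these would usually come from the incoming payload.
-
-    ```xml
-    <sequence name="redisMembershipCheck" xmlns="http://ws.apache.org/ns/synapse">
-        <!-- Illustrative values; normally taken from the request payload -->
-        <property name="redisKey" value="onlineUsers"/>
-        <property name="redisMembers" value="user42"/>
-        <!-- Add the member to the set -->
-        <redis.sadd>
-            <redisKey>{$ctx:redisKey}</redisKey>
-            <redisMembers>{$ctx:redisMembers}</redisMembers>
-        </redis.sadd>
-        <!-- Confirm that the member is now present -->
-        <redis.sIsMember>
-            <redisKey>{$ctx:redisKey}</redisKey>
-            <redisMembers>{$ctx:redisMembers}</redisMembers>
-        </redis.sIsMember>
-    </sequence>
-    ```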
-??? note "sMembers"
-    The sMembers operation is used to get all the members in a set. See the [related documentation](https://redis.io/commands/smembers) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sMembers>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sMembers>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sMove"
-    The sMove operation is used to move a member from one set to another. See the [related documentation](https://redis.io/commands/smove) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisSrckey</td>
-            <td>The name of the source key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMember</td>
-            <td>The name of the member.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sMove>
-        <redisSrckey>{$ctx:redisSrckey}</redisSrckey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-        <redisMember>{$ctx:redisMember}</redisMember>
-    </redis.sMove>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisSrckey":"sampleSourceKey",
-        "redisDstkey":"sampleDestinationKey",
-        "redisMember":"sampleMember"
-    }
-    ```
-
-??? note "sPop"
-    The sPop operation is used to remove and return one or multiple random members from a set. See the [related documentation](https://redis.io/commands/spop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sRandMember"
-    The sRandMember operation is used to get one or multiple random members from a set. See the [related documentation](https://redis.io/commands/srandmember) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sRandMember>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sRandMember>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sRem"
-    The sRem operation is used to remove one or more members from a set. See the [related documentation](https://redis.io/commands/srem) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The name of a member in a key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sRem>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sRem>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
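-
-    **Example composition**
-
-    The following is a minimal sketch (an editorial addition, not one of the connector's own samples) that removes a member with sRem and then lists the remaining members with sMembers. The sequence name and property values are illustrative assumptions.
-
-    ```xml
-    <sequence name="redisSetCleanup" xmlns="http://ws.apache.org/ns/synapse">
-        <!-- Illustrative values -->
-        <property name="redisKey" value="staleSessions"/>
-        <property name="redisMembers" value="session-17"/>
-        <!-- Remove the member from the set -->
-        <redis.sRem>
-            <redisKey>{$ctx:redisKey}</redisKey>
-            <redisMembers>{$ctx:redisMembers}</redisMembers>
-        </redis.sRem>
-        <!-- List the members that remain -->
-        <redis.sMembers>
-            <redisKey>{$ctx:redisKey}</redisKey>
-        </redis.sMembers>
-    </sequence>
-    ```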
-??? note "sUnion"
-    The sUnion operation is used to get the union of multiple sets. See the [related documentation](https://redis.io/commands/sunion) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnion>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sUnion>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sUnionStore"
-    The sUnionStore operation is used to get the union of multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sunionstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnionStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sUnionStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-### Sorted Sets
-
-??? note "zadd"
-    The zadd operation adds one or more members to a sorted set, or updates the score of a member that already exists. See the [related documentation](https://redis.io/commands/zadd) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisScore</td>
-            <td>The score to assign to the member.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMember</td>
-            <td>The name of the member you want to add.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.zadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisScore>{$ctx:redisScore}</redisScore>
-        <redisMember>{$ctx:redisMember}</redisMember>
-    </redis.zadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisScore":"1.1",
-        "redisMember":"sampleMember"
-    }
-    ```
-
-??? note "zCount"
-    The zCount operation retrieves a count of members in a sorted set, with scores that are within the min and max values specified. See the [related documentation](https://redis.io/commands/zcount) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMin</td> - <td>The minimum score value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMax</td> - <td>The maximum score value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.zCount> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMin>{$ctx:redisMin}</redisMin> - <redisMax>{$ctx:redisMax}</redisMax> - </redis.zCount> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMin":"1.1", - "redisMax":"2.2" - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/redis-connector/2.2.x/redis-connector-reference.md b/en/docs/reference/connectors/redis-connector/2.2.x/redis-connector-reference.md deleted file mode 100644 index 15961410ea..0000000000 --- a/en/docs/reference/connectors/redis-connector/2.2.x/redis-connector-reference.md +++ /dev/null @@ -1,2136 +0,0 @@ -# Redis Connector Reference - -To use the Redis connector, add the <redis.init> element in your configuration before carrying out any other Redis operations. - -??? note "redis.init - Standalone mode" - The redis.init operation initializes the connector to interact with Redis. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisHost</td> - <td>The Redis host name (default localhost).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisPort</td> - <td>The port on which the Redis server is running (the default port is 6379).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional</td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` - - If you prefer to use the connectionURI over above configuration, use the following init configuration. 
- - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisConnectionURI</td> - <td>The Redis connection URI in the form of redis://[user:password@]host[:port]/[database] or rediss://[user:password@]host[:port]/[database] to connect over TLS/SSL</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisConnectionURI>{$ctx:redisConnectionURI}</redisConnectionURI> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - -??? note "redis.init - Cluster mode" - The redis.init operation initializes the connector to interact with Redis cluster. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisClusterEnabled</td> - <td>A flag to enable the redis cluster mode (Default is false).</td> - <td>Yes</td> - </tr> - <tr> - <td>clusterNodes</td> - <td>Comma separated list of the cluster nodes as Node1_hostname:Port,Node2_hostname:Port, etc. Example: 127.0.0.1:40001,127.0.0.1:40002</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>maxAttempts</td> - <td>The number of retries.</td> - <td>Optional. The default is 5. </td> - </tr> - <tr> - <td>clientName</td> - <td>Name of the client.</td> - <td>Optional. Default is empty</td> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional. </td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` ---- - -### Connection Commands - -??? note "echo" - The echo operation returns a specified string. See the [related documentation](https://redis.io/commands/echo) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisMessage</td> - <td>The message that you want to echo.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.echo> - <redisMessage>{$ctx:redisMessage}</redisMessage> - </redis.echo> - ``` - - **Sample request** - - ```json - { - "redisMessage":"sampleMessage" - } - ``` - -??? note "ping" - The ping operation pings the server to verify whether the connection is still alive. See the [related documentation](https://redis.io/commands/ping) for more information. - - **Sample configuration** - - ```xml - <redis.ping/> - ``` - - **Sample request** - - An empty request can be handled by the ping operation. - -??? note "quit" - The quit operation closes the connection to the server. See the [related documentation](https://redis.io/commands/quit) for more information. - - **Sample configuration** - - ```xml - <redis.quit/> - ``` - - **Sample request** - - An empty request can be handled by the quit operation. - -### Hashes - -??? note "hDel" - The hDel operation deletes one or more specified hash fields. See the [related documentation](https://redis.io/commands/hdel) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hDel> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField1 sampleField2" - } - ``` - -??? note "hExists" - The hExists operation determines whether a specified hash field exists. See the [related documentation](https://redis.io/commands/hexists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hExists> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hExists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGet" - The hGet operation retrieves the value of a particular field in a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hget) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to retrieve the value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hGet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGetAll" - The hGetAll operation retrieves all the fields and values of a hash stored in a specified key. 
See the [related documentation](https://redis.io/commands/hgetall) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hGetAll>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hGetAll>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hIncrBy"
-    The hIncrBy operation increments the integer value of a hash field by the specified amount. See the [related documentation](https://redis.io/commands/hincrby) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisField</td>
-            <td>The hash field for which you want to increment the value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The amount by which you want to increment the hash field value.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hIncrBy>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisField>{$ctx:redisField}</redisField>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.hIncrBy>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisField":"sampleField",
-        "redisValue":"1"
-    }
-    ```
-
-??? note "hKeys"
-    The hKeys operation retrieves all the fields in a hash. See the [related documentation](https://redis.io/commands/hkeys) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hKeys>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hKeys>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hLen"
-    The hLen operation retrieves the number of fields in a hash. See the [related documentation](https://redis.io/commands/hlen) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hLen>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hLen>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hMGet"
-    The hMGet operation retrieves values associated with each of the specified fields in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hmget) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisFields</td>
-            <td>The hash fields for which you want to retrieve values.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hMGet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisFields>{$ctx:redisFields}</redisFields>
-    </redis.hMGet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisFields":"sampleField1 sampleField2"
-    }
-    ```
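-
-    **Example composition**
-
-    The following is a minimal sketch (an editorial addition, not one of the connector's own samples) of a sequence that reads a subset of fields from a profile hash with hMGet. The key `user:1001` and the space-separated field list follow the format of the sample request above and are illustrative assumptions.
-
-    ```xml
-    <sequence name="redisReadProfileFields" xmlns="http://ws.apache.org/ns/synapse">
-        <!-- Illustrative key and space-separated field names -->
-        <property name="redisKey" value="user:1001"/>
-        <property name="redisFields" value="name email"/>
-        <redis.hMGet>
-            <redisKey>{$ctx:redisKey}</redisKey>
-            <redisFields>{$ctx:redisFields}</redisFields>
-        </redis.hMGet>
-    </sequence>
-    ```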
note "hMSet" - The hMSet operation sets specified fields to their respective values in the hash stored in a particular key. See the [related documentation](https://redis.io/commands/hmset) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFieldsValues</td> - <td>The fields you want to set and their respective values.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hMSet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFieldsValues>{$ctx:redisFieldsValues}</redisFieldsValues> - </redis.hMSet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFieldsValues":"sampleField1 sampleValue1 sampleField2 sampleValue2" - } - ``` - -??? note "hSet" - The hSet operation sets a specific field in a hash to a specified value. See the [related documentation](https://redis.io/commands/hset) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to set a value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>The amount by which you want to increment the hash field value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hSet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisField>{$ctx:redisField}</redisField> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.hSet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisField":"sampleField", - "redisValue":"1" - } - ``` - -??? note "hSetnX" - The hSetnX operation sets a specified field to a value, only if the field does not already exist in the hash. If field already exists, this operation has no effect. See the [related documentation](https://redis.io/commands/hsetnx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to set a value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>The amount by which you want to increment the hash field value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hSetnX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisField>{$ctx:redisField}</redisField> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.hSetnX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisField":"sampleField", - "redisValue":"1" - } - ``` - -??? note "hVals" - The hVals operation retrieves all values in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hvals) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hVals> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hVals> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -### Keys - -??? 
note "del" - The del operation deletes a specified key if it exists. See the [related documentation](https://redis.io/commands/del) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.del> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.del> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "exists" - The exists operation determines whether a specified key exists. See the [related documentation](https://redis.io/commands/exists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.exists> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.exists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "expire" - The expire operation sets a TTL(Time to live) for a key so that the key will automatically delete once it reaches the TTL. The TTL should be specified in seconds. See the [related documentation](https://redis.io/commands/expire) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to specify a TTL.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisSeconds</td> - <td>The number of seconds representing the TTL that you want to set for the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.expire> - <redisKey>{$ctx:redisKey}</redisKey> - <redisSeconds>{$ctx:redisSeconds}</redisSeconds> - </redis.expire> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisSeconds":"10" - } - ``` - -??? note "expireAt" - The expireAt operation sets the time after which an existing key should expire. Here the time should be specified as a UNIX timestamp. See the [related documentation](https://redis.io/commands/expireat) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that you want to set an expiration.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisUnixTime</td> - <td>The time to expire specified in the UNIX timestamp format.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.expire> - <redisKey>{$ctx:redisKey}</redisKey> - <redisUnixTime>{$ctx:redisUnixTime}</redisUnixTime> - </redis.expire> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisUnixTime":"1293840000" - } - ``` - -??? note "keys" - The keys operation retrieves all keys that match a specified pattern. See the [related documentation](https://redis.io/commands/keys) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisPattern</td> - <td>The pattern that you want to match when retrieving keys.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.keys> - <redisPattern>{$ctx:redisPattern}</redisPattern> - </redis.keys> - ``` - - **Sample request** - - ```json - { - "redisPattern":"*" - } - ``` - -??? 
note "randomKey" - A sample request with an empty body can be handled by the randomKey operation. See the [related documentation](https://redis.io/commands/randomkey) for more information. - - **Sample configuration** - - ```xml - <redis.randomKey/> - ``` - - **Sample request** - - ```json - { - "redisPattern":"*" - } - ``` - -??? note "rename" - The rename operation renames an existing key to a new name that is specified. See the [related documentation](https://redis.io/commands/rename) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisOldKey</td> - <td>The name of an existing key that you want to rename.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisNewKey</td> - <td>The new name that you want the key to have.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rename> - <redisOldKey>{$ctx:redisOldKey}</redisOldKey> - <redisNewKey>{$ctx:redisNewKey}</redisNewKey> - </redis.rename> - ``` - - **Sample request** - - ```json - { - "redisOldKey":"sampleOldKey", - "redisNewKey":"sampleNewKey" - } - ``` - -??? note "renamenX" - The renamenX operation renames a key to a new key, only if the new key does not already exist. See the [related documentation](https://redis.io/commands/renamenx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisOldKey</td> - <td>The name of an existing key that you want to rename.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisNewKey</td> - <td>The new name that you want the key to have.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.renamenX> - <redisOldKey>{$ctx:redisOldKey}</redisOldKey> - <redisNewKey>{$ctx:redisNewKey}</redisNewKey> - </redis.renamenX> - ``` - - **Sample request** - - ```json - { - "redisOldKey":"sampleOldKey", - "redisNewKey":"sampleNewKey" - } - ``` - -??? note "ttl" - The ttl operation retrieves the TTL (Time to Live) value of a specified key. See the [related documentation](https://redis.io/commands/ttl) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key for which you want to retrieve the TTL.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.ttl> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.ttl> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "type" - The type operation retrieves the data type of a value stored in a specified key. See the [related documentation](https://redis.io/commands/type) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key that the value is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.type> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.type> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -### Lists - -??? note "blPop" - The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisBlPopTimeout</td>
-            <td>The amount of time to keep the connection blocked, waiting for an element to be available in the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.blPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisBlPopTimeout>{$ctx:redisBlPopTimeout}</redisBlPopTimeout>
-    </redis.blPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisBlPopTimeout":"0"
-    }
-    ```
-
-??? note "brPop"
-    The brPop operation retrieves the last element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/brpop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisBrPopTimeout</td>
-            <td>The amount of time to keep the connection blocked, waiting for an element to be available in the tail of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.brPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
-    </redis.brPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisBrPopTimeout":"0"
-    }
-    ```
-
-??? note "lInsert"
-    The lInsert operation inserts a specified element before or after an existing element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/linsert) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisWhere</td>
-            <td>The place where you want to add an element, relative to the pivot element. Possible values are BEFORE or AFTER.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisPivot</td>
-            <td>An existing element in the list that is used as the pivot element.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The element that you want to insert to the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lInsert>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisWhere>{$ctx:redisWhere}</redisWhere>
-        <redisPivot>{$ctx:redisPivot}</redisPivot>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lInsert>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisWhere":"BEFORE",
-        "redisPivot":"samplePivotElement",
-        "redisValue":"sampleInsertElement"
-    }
-    ```
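-
-**Example composition**
-
-The following is a minimal work-queue sketch (an editorial addition, not one of the connector's own samples) combining the lPush and brPop operations from this section: one sequence pushes a job onto the list, and another blocks until a job can be popped from the tail. The sequence names and property values are illustrative assumptions.
-
-```xml
-<!-- Producer: push a job onto the head of the queue -->
-<sequence name="redisQueueProducer" xmlns="http://ws.apache.org/ns/synapse">
-    <property name="redisKey" value="jobQueue"/>
-    <property name="redisStrings" value="job-1"/>
-    <redis.lPush>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStrings>{$ctx:redisStrings}</redisStrings>
-    </redis.lPush>
-</sequence>
-
-<!-- Consumer: block until a job is available at the tail (timeout 0 = wait indefinitely) -->
-<sequence name="redisQueueConsumer" xmlns="http://ws.apache.org/ns/synapse">
-    <property name="redisKey" value="jobQueue"/>
-    <property name="redisBrPopTimeout" value="0"/>
-    <redis.brPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
-    </redis.brPop>
-</sequence>
-```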
-??? note "lLen"
-    The lLen operation retrieves the length of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/llen) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lLen>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.lLen>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "lPop"
-    The lPop operation retrieves the first element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.lPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "lPush"
-    The lPush operation inserts one or more elements to the head of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpush) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStrings</td>
-            <td>One or more elements that you want to add to the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPush>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStrings>{$ctx:redisStrings}</redisStrings>
-    </redis.lPush>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStrings":"sampleValues"
-    }
-    ```
-
-??? note "lPushX"
-    The lPushX operation inserts one or more elements to the head of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/lpushx) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStrings</td>
-            <td>One or more elements that you want to add to the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPushX>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStrings>{$ctx:redisStrings}</redisStrings>
-    </redis.lPushX>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStrings":"sampleValues"
-    }
-    ```
-
-??? note "lRange"
-    The lRange operation retrieves a range of elements from a list. See the [related documentation](https://redis.io/commands/lrange) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStart</td>
-            <td>The starting index.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisEnd</td>
-            <td>The ending index.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lRange>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStart>{$ctx:redisStart}</redisStart>
-        <redisEnd>{$ctx:redisEnd}</redisEnd>
-    </redis.lRange>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStart":"0",
-        "redisEnd":"-1"
-    }
-    ```
-
-??? note "lRem"
-    The lRem operation removes elements from a list. See the [related documentation](https://redis.io/commands/lrem) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisCount</td>
-            <td>The number of occurrences of the element that you want to remove.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The element that you want to remove.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lRem>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisCount>{$ctx:redisCount}</redisCount>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lRem>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisCount":"1",
-        "redisValue":"sampleValue"
-    }
-    ```
-
-??? note "lSet"
-    The lSet operation sets the value of an element in a list by its index. See the [related documentation](https://redis.io/commands/lset) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisIndex</td>
-            <td>The index of the element whose value you want to set.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The value that you want to set.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lSet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisIndex>{$ctx:redisIndex}</redisIndex>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lSet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisIndex":"0",
-        "redisValue":"sampleValue"
-    }
-    ```
-
-??? note "lTrim"
-    The lTrim operation trims a list to a specified range. See the [related documentation](https://redis.io/commands/ltrim) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStart</td>
-            <td>The starting index.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisEnd</td>
-            <td>The ending index.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lTrim>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStart>{$ctx:redisStart}</redisStart>
-        <redisEnd>{$ctx:redisEnd}</redisEnd>
-    </redis.lTrim>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStart":"0",
-        "redisEnd":"-1"
-    }
-    ```
-
-??? note "rPopLPush"
-    The rPopLPush operation removes the last element in a list, then inserts it into another list, and then returns it.
See the [related documentation](https://redis.io/commands/rpoplpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key from where the last element is retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of destination key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPopLPush> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - </redis.rPopLPush> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey" - } - ``` - -??? note "rPush" - The rPush operation inserts one or more elements to the tail of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/rpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisStrings</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPush> - <redisKey>{$ctx:redisKey}</redisKey> - <redisStrings>{$ctx:redisStrings}</redisStrings> - </redis.rPush> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisStrings":"sampleValues" - } - ``` - -??? note "rPushX" - The rPushX operation inserts one or more elements to the tail of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/rpushx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPushX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.rPushX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisValue":"sampleValue" - } - ``` - -### Server Commands - -??? note "flushAll" - The flushAll operation deletes all the keys from all existing databases. See the [related documentation](https://redis.io/commands/flushall) for more information. - - **Sample configuration** - - ```xml - <redis.flushAll/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushAll operation. - -??? note "flushDB" - The flushDB operation deletes all the keys from the currently selected database. See the [related documentation](https://redis.io/commands/flushdb) for more information. - - **Sample configuration** - - ```xml - <redis.flushDB/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushDB operation. - -### Sets - -??? note "sadd" - The sadd operation is used to add one or more members to a set. See the [related documentation](https://redis.io/commands/sadd) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The value to be added to the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sDiffStore"
-    The sDiffStore operation is used to subtract multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sdiffstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sDiffStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sDiffStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sInter"
-    The sInter operation is used to intersect multiple sets. See the [related documentation](https://redis.io/commands/sinter) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInter>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sInter>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sInterStore"
-    The sInterStore operation is used to intersect multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sinterstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInterStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sInterStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sIsMember"
-    The sIsMember operation is used to determine if a given value is a member of a set. See the [related documentation](https://redis.io/commands/sismember) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The name of a member in a key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sIsMember>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sIsMember>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sMembers"
-    The sMembers operation is used to get all the members in a set.
See the [related documentation](https://redis.io/commands/smembers) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sMembers>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sMembers>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sMove"
-    The sMove operation is used to move a member from one set to another. See the [related documentation](https://redis.io/commands/smove) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisSrckey</td>
-            <td>The name of the source key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMember</td>
-            <td>The name of the member.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sMove>
-        <redisSrckey>{$ctx:redisSrckey}</redisSrckey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-        <redisMember>{$ctx:redisMember}</redisMember>
-    </redis.sMove>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisSrckey":"sampleSourceKey",
-        "redisDstkey":"sampleDestinationKey",
-        "redisMember":"sampleMember"
-    }
-    ```
-
-??? note "sPop"
-    The sPop operation is used to remove and return one or multiple random members from a set. See the [related documentation](https://redis.io/commands/spop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sRandMember"
-    The sRandMember operation is used to get one or multiple random members from a set. See the [related documentation](https://redis.io/commands/srandmember) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sRandMember>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sRandMember>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sRem"
-    The sRem operation is used to remove one or more members from a set. See the [related documentation](https://redis.io/commands/srem) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The name of a member in a key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sRem>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sRem>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sUnion"
-    The sUnion operation is used to get the union of multiple sets. See the [related documentation](https://redis.io/commands/sunion) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnion>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sUnion>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sUnionStore"
-    The sUnionStore operation is used to get the union of multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sunionstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnionStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sUnionStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-### Sorted Sets
-
-??? note "zadd"
-    The zadd operation adds one or more members to a sorted set, or updates the score of a member that already exists. See the [related documentation](https://redis.io/commands/zadd) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisScore</td>
-            <td>The score to assign to the member.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMember</td>
-            <td>The name of the member you want to add.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.zadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisScore>{$ctx:redisScore}</redisScore>
-        <redisMember>{$ctx:redisMember}</redisMember>
-    </redis.zadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisScore":"1.1",
-        "redisMember":"sampleMember"
-    }
-    ```
-
-??? note "zCount"
-    The zCount operation retrieves a count of members in a sorted set, with scores that are within the min and max values specified. See the [related documentation](https://redis.io/commands/zcount) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMin</td> - <td>The minimum score value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMax</td> - <td>The maximum score value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.zCount> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMin>{$ctx:redisMin}</redisMin> - <redisMax>{$ctx:redisMax}</redisMax> - </redis.zCount> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMin":"1.1", - "redisMax":"2.2" - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/redis-connector/2.4.x/redis-connector-reference.md b/en/docs/reference/connectors/redis-connector/2.4.x/redis-connector-reference.md deleted file mode 100644 index 7bf1e1ca2a..0000000000 --- a/en/docs/reference/connectors/redis-connector/2.4.x/redis-connector-reference.md +++ /dev/null @@ -1,2199 +0,0 @@ -# Redis Connector Reference - -To use the Redis connector, add the <redis.init> element in your configuration before carrying out any other Redis operations. - -??? note "redis.init - Standalone mode" - The redis.init operation initializes the connector to interact with Redis. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisHost</td> - <td>The Redis host name (default localhost).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisPort</td> - <td>The port on which the Redis server is running (the default port is 6379).</td> - <td>Yes</td> - </tr> - <tr> - <td>maxConnections</td> - <td>The maximum number of connections that are supported by the pool (which should be less than the max client connection limit of Redis)</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>redisConnectionPoolId</td> - <td>We are keeping separate pools for each artifact by using ARTIFACT_NAME as a unique name. If and only if the user wants to add 2 or more connectors to a single artifact (say 2 connectors per one API) then the user has to differentiate the Redis connectors within that artifact with-param.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional</td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. 
Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` - - If you prefer to use the connectionURI over above configuration, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisConnectionURI</td> - <td>The Redis connection URI in the form of redis://[user:password@]host[:port]/[database] or rediss://[user:password@]host[:port]/[database] to connect over TLS/SSL</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisConnectionURI>{$ctx:redisConnectionURI}</redisConnectionURI> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - -??? note "redis.init - Cluster mode" - The redis.init operation initializes the connector to interact with Redis cluster. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisClusterEnabled</td> - <td>A flag to enable the redis cluster mode (Default is false).</td> - <td>Yes</td> - </tr> - <tr> - <td>clusterNodes</td> - <td>Comma separated list of the cluster nodes as Node1_hostname:Port,Node2_hostname:Port, etc. Example: 127.0.0.1:40001,127.0.0.1:40002</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>maxAttempts</td> - <td>The number of retries.</td> - <td>Optional. The default is 5. </td> - </tr> - <tr> - <td>clientName</td> - <td>Name of the client.</td> - <td>Optional. Default is empty</td> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional. </td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. 
-
- **Sample configuration**
- ```xml
- <redis.init>
- <redisHost>{$ctx:redisHost}</redisHost>
- <redisPort>{$ctx:redisPort}</redisPort>
- <redisTimeout>{$ctx:redisTimeout}</redisTimeout>
- <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout>
- <maxAttempts>5</maxAttempts>
- <clientName>WSO2EI</clientName>
- <cacheKey>{$ctx:cacheKey}</cacheKey>
- <useSsl>{$ctx:useSsl}</useSsl>
- </redis.init>
- ```
-
-??? note "redis.init - sentinel mode"
- The redis.init operation initializes the connector to interact with a Redis Sentinel deployment.
- Sentinel password configuration is available from version 2.5.0 onwards.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>sentinelEnabled</td>
- <td>A flag to enable the sentinel cluster mode (this is false by default).</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>sentinels</td>
- <td>Comma separated list of the sentinel nodes in the following format: Node1_hostname:Port,Node2_hostname:Port, etc. For example: 172.18.0.4:26379,172.18.0.5:26379</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>sentinelSoTimeout</td>
- <td>The server TTL (Time to Live) in milliseconds.</td>
- <td>Optional. The default is 2000ms. </td>
- </tr>
- <tr>
- <td>sentinelConnectionTimeout</td>
- <td>The connection TTL (Time to Live) in milliseconds.</td>
- <td>Optional. The default equals the redisTimeout. </td>
- </tr>
- <tr>
- <td>sentinelPassword</td>
- <td>The password of the sentinel node (only if configured).</td>
- <td>Optional</td>
- </tr>
- </table>
-
- **Sample configuration**
- ```xml
- <redis.init>
- <sentinelUser>{$ctx:sentinelUser}</sentinelUser>
- <sentinelPassword>{$ctx:sentinelPassword}</sentinelPassword>
- <masterName>{$ctx:masterName}</masterName>
- <masterUser>{$ctx:masterUser}</masterUser>
- <masterPassword>{$ctx:masterPassword}</masterPassword>
- <sentinelEnabled>true</sentinelEnabled>
- <dbNumber>0</dbNumber>
- <sentinels>172.18.0.4:26379,172.18.0.5:26379,172.18.0.6:26379</sentinels>
- <sentinelClientName>{$ctx:sentinelClientName}</sentinelClientName>
- <sentinelConnectionTimeout>{$ctx:sentinelConnectionTimeout}</sentinelConnectionTimeout>
- <sentinelSoTimeout>{$ctx:sentinelSoTimeout}</sentinelSoTimeout>
- </redis.init>
- ```
----
-
-### Connection Commands
-
-??? note "echo"
- The echo operation returns a specified string. See the [related documentation](https://redis.io/commands/echo) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisMessage</td>
- <td>The message that you want to echo.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.echo>
- <redisMessage>{$ctx:redisMessage}</redisMessage>
- </redis.echo>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisMessage":"sampleMessage"
- }
- ```
-
-??? note "ping"
- The ping operation pings the server to verify whether the connection is still alive. See the [related documentation](https://redis.io/commands/ping) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.ping/>
- ```
-
- **Sample request**
-
- An empty request can be handled by the ping operation.
-
-??? note "quit"
- The quit operation closes the connection to the server. See the [related documentation](https://redis.io/commands/quit) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.quit/>
- ```
-
- **Sample request**
-
- An empty request can be handled by the quit operation.
-
-### Hashes
-
-??? 
note "hDel" - The hDel operation deletes one or more specified hash fields. See the [related documentation](https://redis.io/commands/hdel) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hDel> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField1 sampleField2" - } - ``` - -??? note "hExists" - The hExists operation determines whether a specified hash field exists. See the [related documentation](https://redis.io/commands/hexists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hExists> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hExists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGet" - The hGet operation retrieves the value of a particular field in a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hget) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to retrieve the value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hGet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGetAll" - The hGetAll operation retrieves all the fields and values of a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hgetall) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGetAll> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hGetAll> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "hIncrBy" - The hIncrBy operation increments the integer value of a hash field by the specified amount. See the [related documentation](https://redis.io/commands/hincrby) for more information. 
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The hash field for which you want to increment the value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The amount by which you want to increment the hash field value.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hIncrBy>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hIncrBy>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hKeys"
- The hKeys operation retrieves all the fields in a hash. See the [related documentation](https://redis.io/commands/hkeys) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hKeys>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hKeys>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "hLen"
- The hLen operation retrieves the number of fields in a hash. See the [related documentation](https://redis.io/commands/hlen) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hLen>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hLen>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "hMGet"
- The hMGet operation retrieves values associated with each of the specified fields in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hmget) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisFields</td>
- <td>The hash fields for which you want to retrieve values.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hMGet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisFields>{$ctx:redisFields}</redisFields>
- </redis.hMGet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisFields":"sampleField1 sampleField2"
- }
- ```
-
-??? note "hMSet"
- The hMSet operation sets specified fields to their respective values in the hash stored in a particular key. See the [related documentation](https://redis.io/commands/hmset) for more information.
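-
- Before the parameter details below, here is a hypothetical sketch that writes a small user profile as one hash in a single call. The key, the field-value string, and the server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="storeProfileSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Fields and values are passed as a space-separated list, as in the sample request below -->
- <redis.hMSet>
- <redisKey>user:1001</redisKey>
- <redisFieldsValues>name Alice age 30</redisFieldsValues>
- </redis.hMSet>
- <log level="full"/>
- </sequence>
- ```
-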
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisFieldsValues</td>
- <td>The fields you want to set and their respective values.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hMSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisFieldsValues>{$ctx:redisFieldsValues}</redisFieldsValues>
- </redis.hMSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisFieldsValues":"sampleField1 sampleValue1 sampleField2 sampleValue2"
- }
- ```
-
-??? note "hSet"
- The hSet operation sets a specific field in a hash to a specified value. See the [related documentation](https://redis.io/commands/hset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The field for which you want to set a value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the field.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hSetnX"
- The hSetnX operation sets a specified field to a value, only if the field does not already exist in the hash. If the field already exists, this operation has no effect. See the [related documentation](https://redis.io/commands/hsetnx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisField</td>
- <td>The field for which you want to set a value.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set for the field.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hSetnX>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisField>{$ctx:redisField}</redisField>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.hSetnX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisField":"sampleField",
- "redisValue":"1"
- }
- ```
-
-??? note "hVals"
- The hVals operation retrieves all values in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hvals) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the hash is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.hVals>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.hVals>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-### Keys
-
-??? note "del"
- The del operation deletes a specified key if it exists. See the [related documentation](https://redis.io/commands/del) for more information.
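-
- As a usage sketch before the parameter details below, the following hypothetical sequence evicts a cached entry after an update. The key naming scheme and the server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="evictCacheSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Remove the hypothetical cached entry for user 1001 -->
- <redis.del>
- <redisKey>cache:user:1001</redisKey>
- </redis.del>
- <log level="full"/>
- </sequence>
- ```
-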
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key that you want to delete.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.del>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.del>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "exists"
- The exists operation determines whether a specified key exists. See the [related documentation](https://redis.io/commands/exists) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key whose existence you want to check.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.exists>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.exists>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "expire"
- The expire operation sets a TTL (Time to Live) for a key so that the key is automatically deleted once the TTL elapses. The TTL should be specified in seconds. See the [related documentation](https://redis.io/commands/expire) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to set a TTL.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisSeconds</td>
- <td>The number of seconds representing the TTL that you want to set for the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.expire>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisSeconds>{$ctx:redisSeconds}</redisSeconds>
- </redis.expire>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisSeconds":"10"
- }
- ```
-
-??? note "expireAt"
- The expireAt operation sets the time after which an existing key should expire. Here the time should be specified as a UNIX timestamp. See the [related documentation](https://redis.io/commands/expireat) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to set an expiration.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisUnixTime</td>
- <td>The time to expire specified in the UNIX timestamp format.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.expireAt>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisUnixTime>{$ctx:redisUnixTime}</redisUnixTime>
- </redis.expireAt>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisUnixTime":"1293840000"
- }
- ```
-
-??? note "keys"
- The keys operation retrieves all keys that match a specified pattern. See the [related documentation](https://redis.io/commands/keys) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisPattern</td>
- <td>The pattern that you want to match when retrieving keys.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.keys>
- <redisPattern>{$ctx:redisPattern}</redisPattern>
- </redis.keys>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisPattern":"*"
- }
- ```
-
-??? note "randomKey"
- The randomKey operation returns a random key from the currently selected database. 
See the [related documentation](https://redis.io/commands/randomkey) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.randomKey/>
- ```
-
- **Sample request**
-
- A sample request with an empty body can be handled by the randomKey operation.
-
-??? note "rename"
- The rename operation renames an existing key to a new name that is specified. See the [related documentation](https://redis.io/commands/rename) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisOldKey</td>
- <td>The name of an existing key that you want to rename.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisNewKey</td>
- <td>The new name that you want the key to have.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.rename>
- <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
- <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
- </redis.rename>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisOldKey":"sampleOldKey",
- "redisNewKey":"sampleNewKey"
- }
- ```
-
-??? note "renamenX"
- The renamenX operation renames a key to a new key, only if the new key does not already exist. See the [related documentation](https://redis.io/commands/renamenx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisOldKey</td>
- <td>The name of an existing key that you want to rename.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisNewKey</td>
- <td>The new name that you want the key to have.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.renamenX>
- <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
- <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
- </redis.renamenX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisOldKey":"sampleOldKey",
- "redisNewKey":"sampleNewKey"
- }
- ```
-
-??? note "ttl"
- The ttl operation retrieves the TTL (Time to Live) value of a specified key. See the [related documentation](https://redis.io/commands/ttl) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key for which you want to retrieve the TTL.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.ttl>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.ttl>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "type"
- The type operation retrieves the data type of a value stored in a specified key. See the [related documentation](https://redis.io/commands/type) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the value is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.type>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.type>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-### Lists
-
-??? note "blPop"
- The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information.
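-
- As a usage sketch before the parameter details below, the following hypothetical sequence waits up to five seconds for a job to appear at the head of a queue list; a timeout of 0 would block indefinitely. The queue name and server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="consumeJobSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Block for up to 5 seconds waiting for an element at the head of 'jobQueue' -->
- <redis.blPop>
- <redisKey>jobQueue</redisKey>
- <redisBlPopTimeout>5</redisBlPopTimeout>
- </redis.blPop>
- <log level="full"/>
- </sequence>
- ```
-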
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBlPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.blPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBlPopTimeout>{$ctx:redisBlPopTimeout}</redisBlPopTimeout>
- </redis.blPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBlPopTimeout":"0"
- }
- ```
-
-??? note "brPop"
- The brPop operation retrieves the last element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/brpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisBrPopTimeout</td>
- <td>The amount of time to keep the connection blocked, waiting for an element to be available in the tail of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.brPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
- </redis.brPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisBrPopTimeout":"0"
- }
- ```
-
-??? note "lInsert"
- The lInsert operation inserts a specified element before or after an existing element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/linsert) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisWhere</td>
- <td>The place where you want to add an element. Possible values are BEFORE or AFTER. For example, whether you want to add an element before a particular element that exists in the list.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisPivot</td>
- <td>An existing element in the list that is used as the pivot element.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to insert to the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lInsert>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisWhere>{$ctx:redisWhere}</redisWhere>
- <redisPivot>{$ctx:redisPivot}</redisPivot>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lInsert>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisWhere":"BEFORE",
- "redisPivot":"samplePivotElement",
- "redisValue":"sampleInsertElement"
- }
- ```
-
-??? note "lLen"
- The lLen operation retrieves the length of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/llen) for more information.
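-
- Before the parameter details below, here is a hypothetical sketch that reads the current depth of a queue list, for example to decide whether more workers are needed. The key name and server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="queueDepthSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Retrieve the number of elements currently in the hypothetical 'jobQueue' list -->
- <redis.lLen>
- <redisKey>jobQueue</redisKey>
- </redis.lLen>
- <log level="full"/>
- </sequence>
- ```
-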
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lLen>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lLen>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPop"
- The lPop operation retrieves and removes the first element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpop) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPop>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.lPop>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "lPush"
- The lPush operation inserts one or more elements to the head of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpush) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPush>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPush>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
-
-??? note "lPushX"
- The lPushX operation inserts one or more elements to the head of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/lpushx) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStrings</td>
- <td>One or more elements that you want to add to the head of the list.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lPushX>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStrings>{$ctx:redisStrings}</redisStrings>
- </redis.lPushX>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStrings":"sampleValues"
- }
- ```
-
-??? note "lRange"
- The lRange operation retrieves a range of elements from a list. See the [related documentation](https://redis.io/commands/lrange) for more information.
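-
- As a usage sketch before the parameter details below, the following hypothetical sequence fetches the first ten entries of a list using the zero-based, inclusive indexes described below. The key name and server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="recentEntriesSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Indexes 0 through 9 return the first ten elements of the hypothetical 'recentEvents' list -->
- <redis.lRange>
- <redisKey>recentEvents</redisKey>
- <redisStart>0</redisStart>
- <redisEnd>9</redisEnd>
- </redis.lRange>
- <log level="full"/>
- </sequence>
- ```
-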
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRange>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lRange>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "lRem"
- The lRem operation removes elements from a list. See the [related documentation](https://redis.io/commands/lrem) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisCount</td>
- <td>The number of occurrences of the element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The element that you want to remove.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lRem>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisCount>{$ctx:redisCount}</redisCount>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lRem>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisCount":"1",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lSet"
- The lSet operation sets the value of an element in a list by its index. See the [related documentation](https://redis.io/commands/lset) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisIndex</td>
- <td>The index of the element whose value you want to set.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisValue</td>
- <td>The value that you want to set.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lSet>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisIndex>{$ctx:redisIndex}</redisIndex>
- <redisValue>{$ctx:redisValue}</redisValue>
- </redis.lSet>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisIndex":"0",
- "redisValue":"sampleValue"
- }
- ```
-
-??? note "lTrim"
- The lTrim operation trims a list to a specified range. See the [related documentation](https://redis.io/commands/ltrim) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key where the list is stored.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisStart</td>
- <td>The starting index.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisEnd</td>
- <td>The ending index.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.lTrim>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisStart>{$ctx:redisStart}</redisStart>
- <redisEnd>{$ctx:redisEnd}</redisEnd>
- </redis.lTrim>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisStart":"0",
- "redisEnd":"-1"
- }
- ```
-
-??? note "rPopLPush"
- The rPopLPush operation removes the last element in a list, then inserts it to another list, and then returns it. 
See the [related documentation](https://redis.io/commands/rpoplpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key from where the last element is retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of destination key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPopLPush> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - </redis.rPopLPush> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey" - } - ``` - -??? note "rPush" - The rPush operation inserts one or more elements to the tail of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/rpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisStrings</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPush> - <redisKey>{$ctx:redisKey}</redisKey> - <redisStrings>{$ctx:redisStrings}</redisStrings> - </redis.rPush> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisStrings":"sampleValues" - } - ``` - -??? note "rPushX" - The rPushX operation inserts one or more elements to the tail of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/rpushx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPushX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.rPushX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisValue":"sampleValue" - } - ``` - -### Server Commands - -??? note "flushAll" - The flushAll operation deletes all the keys from all existing databases. See the [related documentation](https://redis.io/commands/flushall) for more information. - - **Sample configuration** - - ```xml - <redis.flushAll/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushAll operation. - -??? note "flushDB" - The flushDB operation deletes all the keys from the currently selected database. See the [related documentation](https://redis.io/commands/flushdb) for more information. - - **Sample configuration** - - ```xml - <redis.flushDB/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushDB operation. - -### Sets - -??? note "sadd" - The sadd operation is used to add one or more members to a set. See the [related documentation](https://redis.io/commands/sadd) for more information. 
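-
- Before the parameter details below, here is a hypothetical sketch that tags an article by adding labels to a set; Redis ignores duplicates, which is what makes a set a good fit for tags. It assumes multiple members can be given as a space-separated list, following the convention used for fields elsewhere in this reference; the key and server address are also assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="tagArticleSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Add two tags to the hypothetical 'article:42:tags' set -->
- <redis.sadd>
- <redisKey>article:42:tags</redisKey>
- <redisMembers>redis integration</redisMembers>
- </redis.sadd>
- <log level="full"/>
- </sequence>
- ```
-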
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMembers</td>
- <td>The value to be added to the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sadd>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisMembers>{$ctx:redisMembers}</redisMembers>
- </redis.sadd>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisMembers":"sampleValue"
- }
- ```
-
-??? note "sDiffStore"
- The sDiffStore operation computes the difference of multiple sets and stores the resulting set in a key. See the [related documentation](https://redis.io/commands/sdiffstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sDiffStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sDiffStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-??? note "sInter"
- The sInter operation is used to intersect multiple sets. See the [related documentation](https://redis.io/commands/sinter) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sInter>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.sInter>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "sInterStore"
- The sInterStore operation is used to intersect multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sinterstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sInterStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sInterStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-??? note "sIsMember"
- The sIsMember operation is used to determine if a given value is a member of a set. See the [related documentation](https://redis.io/commands/sismember) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMembers</td>
- <td>The name of a member in a key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sIsMember>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisMembers>{$ctx:redisMembers}</redisMembers>
- </redis.sIsMember>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisMembers":"sampleValue"
- }
- ```
-
-??? note "sMembers"
- The sMembers operation is used to get all the members in a set. 
See the [related documentation](https://redis.io/commands/smembers) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMembers> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sMembers> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sMove" - The sMove operation is used to move a member from one set to another. See the [related documentation](https://redis.io/commands/smove) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of the destination key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMember</td> - <td>The name of the member.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMove> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - <redisMember>{$ctx:redisMember}</redisMember> - </redis.sMove> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey", - "redisMember":"sampleMember" - } - ``` - -??? note "sPop" - The sPop operation is used to remove and return one or multiple random members from a set. See the [related documentation](https://redis.io/commands/spop) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sPop> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sPop> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRandMember" - The sRandMember operation is used to get one or multiple random members from a set. See the [related documentation](https://redis.io/commands/srandmember) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRandMember> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sRandMember> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRem" - The sRem operation is used to remove one or more members from a set. See the [related documentation](https://redis.io/commands/srem) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMembers</td> - <td>The name of a member in a key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRem> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMembers>{$ctx:redisMembers}</redisMembers> - </redis.sRem> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMembers":"sampleValue" - } - ``` - -??? note "sUnion" - The sUnion operation is used to add multiple sets. See the [related documentation](https://redis.io/commands/sunion) for more information. 
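-
- Before the parameter details below, here is a hypothetical sketch that computes the union of two tag sets. The parameter table below lists a single redisKey, so this example assumes multiple source keys can be passed as a space-separated list; verify this against the connector version you use. The key names and server address are assumptions made for this example only.
-
- ```xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="mergeTagsSeq">
- <!-- Assumed standalone Redis server; adjust the host and port for your environment -->
- <redis.init>
- <redisHost>localhost</redisHost>
- <redisPort>6379</redisPort>
- </redis.init>
- <!-- Union of the hypothetical 'article:42:tags' and 'article:43:tags' sets -->
- <redis.sUnion>
- <redisKey>article:42:tags article:43:tags</redisKey>
- </redis.sUnion>
- <log level="full"/>
- </sequence>
- ```
-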
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sUnion>
- <redisKey>{$ctx:redisKey}</redisKey>
- </redis.sUnion>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey"
- }
- ```
-
-??? note "sUnionStore"
- The sUnionStore operation computes the union of multiple sets and stores the resulting set in a key. See the [related documentation](https://redis.io/commands/sunionstore) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisDstkey</td>
- <td>The name of the destination key.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.sUnionStore>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
- </redis.sUnionStore>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisDstkey":"sampleDestinationKey"
- }
- ```
-
-### Sorted Sets
-
-??? note "zadd"
- The zadd operation adds one or more members to a sorted set, or updates the score if a specified member already exists. See the [related documentation](https://redis.io/commands/zadd) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisKey</td>
- <td>The name of the key.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisScore</td>
- <td>The score of the sorted set.</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>redisMembers</td>
- <td>The name of a member you want to add.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.zadd>
- <redisKey>{$ctx:redisKey}</redisKey>
- <redisScore>{$ctx:redisScore}</redisScore>
- <redisMembers>{$ctx:redisMembers}</redisMembers>
- </redis.zadd>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisKey":"sampleKey",
- "redisScore":"1.1",
- "redisMembers":"sampleMember"
- }
- ```
-
-??? note "zCount"
- The zCount operation retrieves the number of members in a sorted set whose scores fall within the specified min and max values. See the [related documentation](https://redis.io/commands/zcount) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMin</td> - <td>The minimum score value.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMax</td> - <td>The maximum score value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.zCount> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMin>{$ctx:redisMin}</redisMin> - <redisMax>{$ctx:redisMax}</redisMax> - </redis.zCount> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMin":"1.1", - "redisMax":"2.2" - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/redis-connector/2.7.x/redis-connector-reference.md b/en/docs/reference/connectors/redis-connector/2.7.x/redis-connector-reference.md deleted file mode 100644 index 1907bff2e1..0000000000 --- a/en/docs/reference/connectors/redis-connector/2.7.x/redis-connector-reference.md +++ /dev/null @@ -1,2204 +0,0 @@ -# Redis Connector Reference - -To use the Redis connector, add the <redis.init> element in your configuration before carrying out any other Redis operations. - -??? note "redis.init - Standalone mode" - The redis.init operation initializes the connector to interact with Redis. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisHost</td> - <td>The Redis host name (default localhost).</td> - <td>Yes</td> - </tr> - <tr> - <td>redisPort</td> - <td>The port on which the Redis server is running (the default port is 6379).</td> - <td>Yes</td> - </tr> - <tr> - <td>maxConnections</td> - <td>The maximum number of connections that are supported by the pool (which should be less than the max client connection limit of Redis)</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>redisConnectionPoolId</td> - <td>We are keeping separate pools for each artifact by using ARTIFACT_NAME as a unique name. If and only if the user wants to add 2 or more connectors to a single artifact (say 2 connectors per one API) then the user has to differentiate the Redis connectors within that artifact with-param.</td> - <td>Optional</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional</td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. 
Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <cacheKey>{$ctx:cacheKey}</cacheKey> - <useSsl>{$ctx:useSsl}</useSsl> - </redis.init> - ``` - - If you prefer to use the connectionURI over above configuration, use the following init configuration. - - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisConnectionURI</td> - <td>The Redis connection URI in the form of redis://[user:password@]host[:port]/[database] or rediss://[user:password@]host[:port]/[database] to connect over TLS/SSL</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisConnectionURI>{$ctx:redisConnectionURI}</redisConnectionURI> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - </redis.init> - ``` - -??? note "redis.init - Cluster mode" - The redis.init operation initializes the connector to interact with Redis cluster. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>isJmxEnabled</td> - <td>A flag to enable JMX if required (Default is false).</td> - <td>No</td> - </tr> - <tr> - <td>redisClusterEnabled</td> - <td>A flag to enable the redis cluster mode (Default is false).</td> - <td>Yes</td> - </tr> - <tr> - <td>clusterNodes</td> - <td>Comma separated list of the cluster nodes as Node1_hostname:Port,Node2_hostname:Port, etc. Example: 127.0.0.1:40001,127.0.0.1:40002</td> - <td>Yes</td> - </tr> - <tr> - <td>redisTimeout</td> - <td>The server TTL (Time to Live) in milliseconds.</td> - <td>Optional. The default is 2000ms. </td> - </tr> - <tr> - <td>redisConnectionTimeout</td> - <td>The connection TTL (Time to live) in milliseconds.</td> - <td>Optional. The default equals to the redisTimeout. </td> - </tr> - <tr> - <td>maxAttempts</td> - <td>The number of retries.</td> - <td>Optional. The default is 5. </td> - </tr> - <tr> - <td>clientName</td> - <td>Name of the client.</td> - <td>Optional. Default is empty</td> - </tr> - <tr> - <td>cacheKey</td> - <td>Key of the cache (password).</td> - <td>Optional. </td> - </tr> - <tr> - <td>useSsl</td> - <td>A flag to switch between SSL and non-SSL.</td> - <td>Optional. Default is false.</td> - </tr> - </table> - - **Sample configuration** - ```xml - <redis.init> - <redisHost>{$ctx:redisHost}</redisHost> - <redisPort>{$ctx:redisPort}</redisPort> - <redisTimeout>{$ctx:redisTimeout}</redisTimeout> - <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout> - <maxAttempts>5</maxAttempts> - <clientName>WSO2EI</clientName> - </redis.init> - ``` - - If you are connecting using a cache key, use the following init configuration. 
-
- **Sample configuration**
- ```xml
- <redis.init>
- <redisHost>{$ctx:redisHost}</redisHost>
- <redisPort>{$ctx:redisPort}</redisPort>
- <redisTimeout>{$ctx:redisTimeout}</redisTimeout>
- <redisConnectionTimeout>{$ctx:redisConnectionTimeout}</redisConnectionTimeout>
- <maxAttempts>5</maxAttempts>
- <clientName>WSO2EI</clientName>
- <cacheKey>{$ctx:cacheKey}</cacheKey>
- <useSsl>{$ctx:useSsl}</useSsl>
- </redis.init>
- ```
-
-??? note "redis.init - sentinel mode"
- The redis.init operation initializes the connector to interact with a Redis Sentinel deployment.
- Sentinel password configuration is available from version 2.5.0 onwards.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>sentinelEnabled</td>
- <td>A flag to enable the sentinel cluster mode (this is false by default).</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>sentinels</td>
- <td>Comma separated list of the sentinel nodes in the following format: Node1_hostname:Port,Node2_hostname:Port, etc. For example: 172.18.0.4:26379,172.18.0.5:26379</td>
- <td>Yes</td>
- </tr>
- <tr>
- <td>sentinelSoTimeout</td>
- <td>The server TTL (Time to Live) in milliseconds.</td>
- <td>Optional. The default is 2000ms. </td>
- </tr>
- <tr>
- <td>sentinelConnectionTimeout</td>
- <td>The connection TTL (Time to Live) in milliseconds.</td>
- <td>Optional. The default equals the redisTimeout. </td>
- </tr>
- <tr>
- <td>sentinelPassword</td>
- <td>The password of the sentinel node (only if configured).</td>
- <td>Optional</td>
- </tr>
- </table>
-
- **Sample configuration**
- ```xml
- <redis.init>
- <sentinelUser>{$ctx:sentinelUser}</sentinelUser>
- <sentinelPassword>{$ctx:sentinelPassword}</sentinelPassword>
- <masterName>{$ctx:masterName}</masterName>
- <masterUser>{$ctx:masterUser}</masterUser>
- <masterPassword>{$ctx:masterPassword}</masterPassword>
- <sentinelEnabled>true</sentinelEnabled>
- <dbNumber>0</dbNumber>
- <sentinels>172.18.0.4:26379,172.18.0.5:26379,172.18.0.6:26379</sentinels>
- <sentinelClientName>{$ctx:sentinelClientName}</sentinelClientName>
- <sentinelConnectionTimeout>{$ctx:sentinelConnectionTimeout}</sentinelConnectionTimeout>
- <sentinelSoTimeout>{$ctx:sentinelSoTimeout}</sentinelSoTimeout>
- </redis.init>
- ```
----
-
-### Connection Commands
-
-??? note "echo"
- The echo operation returns a specified string. See the [related documentation](https://redis.io/commands/echo) for more information.
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Description</th>
- <th>Required</th>
- </tr>
- <tr>
- <td>redisMessage</td>
- <td>The message that you want to echo.</td>
- <td>Yes</td>
- </tr>
- </table>
-
- **Sample configuration**
-
- ```xml
- <redis.echo>
- <redisMessage>{$ctx:redisMessage}</redisMessage>
- </redis.echo>
- ```
-
- **Sample request**
-
- ```json
- {
- "redisMessage":"sampleMessage"
- }
- ```
-
-??? note "ping"
- The ping operation pings the server to verify whether the connection is still alive. See the [related documentation](https://redis.io/commands/ping) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.ping/>
- ```
-
- **Sample request**
-
- An empty request can be handled by the ping operation.
-
-??? note "quit"
- The quit operation closes the connection to the server. See the [related documentation](https://redis.io/commands/quit) for more information.
-
- **Sample configuration**
-
- ```xml
- <redis.quit/>
- ```
-
- **Sample request**
-
- An empty request can be handled by the quit operation.
-
-### Hashes
-
-??? 
note "hDel" - The hDel operation deletes one or more specified hash fields. See the [related documentation](https://redis.io/commands/hdel) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that you want to delete.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hDel> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField1 sampleField2" - } - ``` - -??? note "hExists" - The hExists operation determines whether a specified hash field exists. See the [related documentation](https://redis.io/commands/hexists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The fields that determine existence.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hExists> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hExists> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGet" - The hGet operation retrieves the value of a particular field in a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hget) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisFields</td> - <td>The field for which you want to retrieve the value.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGet> - <redisKey>{$ctx:redisKey}</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hGet> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisFields":"sampleField" - } - ``` - -??? note "hGetAll" - The hGetAll operation retrieves all the fields and values of a hash stored in a specified key. See the [related documentation](https://redis.io/commands/hgetall) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the hash is stored.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.hGetAll> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.hGetAll> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "hIncrBy" - The hIncrBy operation increments the integer value of a hash field by the specified amount. See the [related documentation](https://redis.io/commands/hincrby) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisField</td>
-            <td>The hash field for which you want to increment the value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The amount by which you want to increment the hash field value.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hIncrBy>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisField>{$ctx:redisField}</redisField>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.hIncrBy>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisField":"sampleField",
-        "redisValue":"1"
-    }
-    ```
-
-??? note "hKeys"
-    The hKeys operation retrieves all the fields in a hash. See the [related documentation](https://redis.io/commands/hkeys) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hKeys>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hKeys>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hLen"
-    The hLen operation retrieves the number of fields in a hash. See the [related documentation](https://redis.io/commands/hlen) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hLen>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hLen>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "hMGet"
-    The hMGet operation retrieves the values associated with each of the specified fields in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hmget) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisFields</td>
-            <td>The hash fields for which you want to retrieve values.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hMGet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisFields>{$ctx:redisFields}</redisFields>
-    </redis.hMGet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisFields":"sampleField1 sampleField2"
-    }
-    ```
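-
-    As a point of reference, the underlying HMGET command returns the requested values in the order the fields were specified; a field that does not exist comes back as nil. A quick `redis-cli` sketch (sample data assumed):
-
-    ```
-    127.0.0.1:6379> HMGET sampleKey sampleField1 sampleField2
-    1) "sampleValue1"
-    2) "sampleValue2"
-    ```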
-
-??? note "hMSet"
-    The hMSet operation sets the specified fields to their respective values in the hash stored in a particular key. See the [related documentation](https://redis.io/commands/hmset) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisFieldsValues</td>
-            <td>The fields you want to set and their respective values.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hMSet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisFieldsValues>{$ctx:redisFieldsValues}</redisFieldsValues>
-    </redis.hMSet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisFieldsValues":"sampleField1 sampleValue1 sampleField2 sampleValue2"
-    }
-    ```
-
-??? note "hSet"
-    The hSet operation sets a specific field in a hash to a specified value. See the [related documentation](https://redis.io/commands/hset) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisField</td>
-            <td>The field for which you want to set a value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The value that you want to set for the field.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hSet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisField>{$ctx:redisField}</redisField>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.hSet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisField":"sampleField",
-        "redisValue":"1"
-    }
-    ```
-
-??? note "hSetnX"
-    The hSetnX operation sets a specified field to a value, only if the field does not already exist in the hash. If the field already exists, this operation has no effect. See the [related documentation](https://redis.io/commands/hsetnx) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisField</td>
-            <td>The field for which you want to set a value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The value that you want to set for the field.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hSetnX>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisField>{$ctx:redisField}</redisField>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.hSetnX>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisField":"sampleField",
-        "redisValue":"1"
-    }
-    ```
-
-??? note "hVals"
-    The hVals operation retrieves all the values in a hash that is stored in a particular key. See the [related documentation](https://redis.io/commands/hvals) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the hash is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.hVals>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.hVals>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-### Keys
-
-??? note "del"
-    The del operation deletes a specified key if it exists. See the [related documentation](https://redis.io/commands/del) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key that you want to delete.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.del>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.del>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "exists"
-    The exists operation determines whether a specified key exists. See the [related documentation](https://redis.io/commands/exists) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key whose existence you want to check.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.exists>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.exists>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "expire"
-    The expire operation sets a TTL (Time to Live) for a key so that the key is automatically deleted once the TTL elapses. The TTL should be specified in seconds. See the [related documentation](https://redis.io/commands/expire) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key for which you want to set a TTL.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisSeconds</td>
-            <td>The number of seconds representing the TTL that you want to set for the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.expire>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisSeconds>{$ctx:redisSeconds}</redisSeconds>
-    </redis.expire>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisSeconds":"10"
-    }
-    ```
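-
-    To see the effect, you can watch the countdown with the TTL command in `redis-cli` (sample key and value assumed):
-
-    ```
-    127.0.0.1:6379> SET sampleKey "sampleValue"
-    OK
-    127.0.0.1:6379> EXPIRE sampleKey 10
-    (integer) 1
-    127.0.0.1:6379> TTL sampleKey
-    (integer) 10
-    ```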
-
-??? note "expireAt"
-    The expireAt operation sets the time after which an existing key should expire. Here the time should be specified as a UNIX timestamp. See the [related documentation](https://redis.io/commands/expireat) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key for which you want to set an expiration time.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisUnixTime</td>
-            <td>The time to expire, specified in the UNIX timestamp format.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.expireAt>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisUnixTime>{$ctx:redisUnixTime}</redisUnixTime>
-    </redis.expireAt>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisUnixTime":"1293840000"
-    }
-    ```
-
-??? note "keys"
-    The keys operation retrieves all keys that match a specified pattern. See the [related documentation](https://redis.io/commands/keys) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisPattern</td>
-            <td>The pattern that you want to match when retrieving keys.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.keys>
-        <redisPattern>{$ctx:redisPattern}</redisPattern>
-    </redis.keys>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisPattern":"*"
-    }
-    ```
-
-??? note "randomKey"
-    The randomKey operation returns a random key from the keyspace. See the [related documentation](https://redis.io/commands/randomkey) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <redis.randomKey/>
-    ```
-
-    **Sample request**
-
-    A sample request with an empty body can be handled by the randomKey operation.
-
-??? note "rename"
-    The rename operation renames an existing key to a new name that is specified. See the [related documentation](https://redis.io/commands/rename) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisOldKey</td>
-            <td>The name of an existing key that you want to rename.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisNewKey</td>
-            <td>The new name that you want the key to have.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.rename>
-        <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
-        <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
-    </redis.rename>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisOldKey":"sampleOldKey",
-        "redisNewKey":"sampleNewKey"
-    }
-    ```
-
-??? note "renamenX"
-    The renamenX operation renames a key to a new key, only if the new key does not already exist. See the [related documentation](https://redis.io/commands/renamenx) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisOldKey</td>
-            <td>The name of an existing key that you want to rename.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisNewKey</td>
-            <td>The new name that you want the key to have.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.renamenX>
-        <redisOldKey>{$ctx:redisOldKey}</redisOldKey>
-        <redisNewKey>{$ctx:redisNewKey}</redisNewKey>
-    </redis.renamenX>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisOldKey":"sampleOldKey",
-        "redisNewKey":"sampleNewKey"
-    }
-    ```
-
-??? note "ttl"
-    The ttl operation retrieves the TTL (Time to Live) value of a specified key. See the [related documentation](https://redis.io/commands/ttl) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key for which you want to retrieve the TTL.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.ttl>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.ttl>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "type"
-    The type operation retrieves the data type of a value stored in a specified key. See the [related documentation](https://redis.io/commands/type) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the value is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.type>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.type>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-### Lists
-
-??? note "blPop"
-    The blPop operation retrieves the first element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/blpop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisBlPopTimeout</td>
-            <td>The amount of time to keep the connection blocked, waiting for an element to be available in the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.blPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisBlPopTimeout>{$ctx:redisBlPopTimeout}</redisBlPopTimeout>
-    </redis.blPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisBlPopTimeout":"0"
-    }
-    ```
-
-??? note "brPop"
-    The brPop operation retrieves the last element in a list, if available, or blocks the connection for a specified amount of time until an element is available. See the [related documentation](https://redis.io/commands/brpop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisBrPopTimeout</td>
-            <td>The amount of time to keep the connection blocked, waiting for an element to be available in the tail of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.brPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisBrPopTimeout>{$ctx:redisBrPopTimeout}</redisBrPopTimeout>
-    </redis.brPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisBrPopTimeout":"0"
-    }
-    ```
-
-??? note "lInsert"
-    The lInsert operation inserts a specified element before or after an existing element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/linsert) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisWhere</td>
-            <td>The place where you want to add an element. Possible values are BEFORE or AFTER. For example, whether you want to add an element before a particular element that exists in the list.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisPivot</td>
-            <td>An existing element in the list that is used as the pivot element.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The element that you want to insert into the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lInsert>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisWhere>{$ctx:redisWhere}</redisWhere>
-        <redisPivot>{$ctx:redisPivot}</redisPivot>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lInsert>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisWhere":"BEFORE",
-        "redisPivot":"samplePivotElement",
-        "redisValue":"sampleInsertElement"
-    }
-    ```
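-
-    For clarity, this is how the underlying LINSERT command behaves in `redis-cli` (sample key and values assumed):
-
-    ```
-    127.0.0.1:6379> RPUSH sampleKey "a" "c"
-    (integer) 2
-    127.0.0.1:6379> LINSERT sampleKey BEFORE "c" "b"
-    (integer) 3
-    127.0.0.1:6379> LRANGE sampleKey 0 -1
-    1) "a"
-    2) "b"
-    3) "c"
-    ```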
-
-??? note "lLen"
-    The lLen operation retrieves the length of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/llen) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lLen>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.lLen>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "lPop"
-    The lPop operation retrieves the first element in a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpop) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPop>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.lPop>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "lPush"
-    The lPush operation inserts one or more elements to the head of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/lpush) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStrings</td>
-            <td>One or more elements that you want to add to the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPush>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStrings>{$ctx:redisStrings}</redisStrings>
-    </redis.lPush>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStrings":"sampleValues"
-    }
-    ```
-
-??? note "lPushX"
-    The lPushX operation inserts one or more elements to the head of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/lpushx) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStrings</td>
-            <td>One or more elements that you want to add to the head of the list.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lPushX>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStrings>{$ctx:redisStrings}</redisStrings>
-    </redis.lPushX>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStrings":"sampleValues"
-    }
-    ```
-
-??? note "lRange"
-    The lRange operation retrieves a range of elements from a list. See the [related documentation](https://redis.io/commands/lrange) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStart</td>
-            <td>The starting index.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisEnd</td>
-            <td>The ending index.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lRange>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStart>{$ctx:redisStart}</redisStart>
-        <redisEnd>{$ctx:redisEnd}</redisEnd>
-    </redis.lRange>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStart":"0",
-        "redisEnd":"-1"
-    }
-    ```
-
-??? note "lRem"
-    The lRem operation removes elements from a list. See the [related documentation](https://redis.io/commands/lrem) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisCount</td>
-            <td>The number of occurrences of the element that you want to remove.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The element that you want to remove.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lRem>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisCount>{$ctx:redisCount}</redisCount>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lRem>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisCount":"1",
-        "redisValue":"sampleValue"
-    }
-    ```
-
-??? note "lSet"
-    The lSet operation sets the value of an element in a list by its index. See the [related documentation](https://redis.io/commands/lset) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisIndex</td>
-            <td>The index of the element whose value you want to set.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisValue</td>
-            <td>The value that you want to set for the element.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lSet>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisIndex>{$ctx:redisIndex}</redisIndex>
-        <redisValue>{$ctx:redisValue}</redisValue>
-    </redis.lSet>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisIndex":"0",
-        "redisValue":"sampleValue"
-    }
-    ```
-
-??? note "lTrim"
-    The lTrim operation trims a list to a specified range. See the [related documentation](https://redis.io/commands/ltrim) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key where the list is stored.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisStart</td>
-            <td>The starting index.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisEnd</td>
-            <td>The ending index.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.lTrim>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisStart>{$ctx:redisStart}</redisStart>
-        <redisEnd>{$ctx:redisEnd}</redisEnd>
-    </redis.lTrim>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisStart":"0",
-        "redisEnd":"-1"
-    }
-    ```
-
-??? note "rPopLPush"
-    The rPopLPush operation removes the last element in a list, then inserts it into another list, and then returns it. 
See the [related documentation](https://redis.io/commands/rpoplpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key from where the last element is retrieved.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of destination key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPopLPush> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - </redis.rPopLPush> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey" - } - ``` - -??? note "rPush" - The rPush operation inserts one or more elements to the tail of a list that is stored in a specified key. See the [related documentation](https://redis.io/commands/rpush) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisStrings</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPush> - <redisKey>{$ctx:redisKey}</redisKey> - <redisStrings>{$ctx:redisStrings}</redisStrings> - </redis.rPush> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisStrings":"sampleValues" - } - ``` - -??? note "rPushX" - The rPushX operation inserts one or more elements to the tail of a list stored in a specified key, only if the key already exists and holds a list. See the [related documentation](https://redis.io/commands/rpushx) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key where the list is stored.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisValue</td> - <td>One or more elements that you want to add to the tail of the list.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.rPushX> - <redisKey>{$ctx:redisKey}</redisKey> - <redisValue>{$ctx:redisValue}</redisValue> - </redis.rPushX> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisValue":"sampleValue" - } - ``` - -### Server Commands - -??? note "flushAll" - The flushAll operation deletes all the keys from all existing databases. See the [related documentation](https://redis.io/commands/flushall) for more information. - - **Sample configuration** - - ```xml - <redis.flushAll/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushAll operation. - -??? note "flushDB" - The flushDB operation deletes all the keys from the currently selected database. See the [related documentation](https://redis.io/commands/flushdb) for more information. - - **Sample configuration** - - ```xml - <redis.flushDB/> - ``` - - **Sample request** - - A sample request with an empty body can be handled by the flushDB operation. - -### Sets - -??? note "sadd" - The sadd operation is used to add one or more members to a set. See the [related documentation](https://redis.io/commands/sadd) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The value to be added to the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sDiffStore"
-    The sDiffStore operation is used to subtract multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sdiffstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sDiffStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sDiffStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sInter"
-    The sInter operation is used to intersect multiple sets. See the [related documentation](https://redis.io/commands/sinter) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInter>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sInter>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sInterStore"
-    The sInterStore operation is used to intersect multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sinterstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sInterStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sInterStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-??? note "sIsMember"
-    The sIsMember operation is used to determine if a given value is a member of a set. See the [related documentation](https://redis.io/commands/sismember) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMembers</td>
-            <td>The name of a member in a key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sIsMember>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMembers>{$ctx:redisMembers}</redisMembers>
-    </redis.sIsMember>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMembers":"sampleValue"
-    }
-    ```
-
-??? note "sMembers"
-    The sMembers operation is used to get all the members in a set. 
See the [related documentation](https://redis.io/commands/smembers) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMembers> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sMembers> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sMove" - The sMove operation is used to move a member from one set to another. See the [related documentation](https://redis.io/commands/smove) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisSrckey</td> - <td>The name of the source key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisDstkey</td> - <td>The name of the destination key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMember</td> - <td>The name of the member.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sMove> - <redisSrckey>{$ctx:redisSrckey}</redisSrckey> - <redisDstkey>{$ctx:redisDstkey}</redisDstkey> - <redisMember>{$ctx:redisMember}</redisMember> - </redis.sMove> - ``` - - **Sample request** - - ```json - { - "redisSrckey":"sampleSourceKey", - "redisDstkey":"sampleDestinationKey", - "redisMember":"sampleMember" - } - ``` - -??? note "sPop" - The sPop operation is used to remove and return one or multiple random members from a set. See the [related documentation](https://redis.io/commands/spop) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sPop> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sPop> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRandMember" - The sRandMember operation is used to get one or multiple random members from a set. See the [related documentation](https://redis.io/commands/srandmember) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRandMember> - <redisKey>{$ctx:redisKey}</redisKey> - </redis.sRandMember> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey" - } - ``` - -??? note "sRem" - The sRem operation is used to remove one or more members from a set. See the [related documentation](https://redis.io/commands/srem) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>redisKey</td> - <td>The name of the key.</td> - <td>Yes</td> - </tr> - <tr> - <td>redisMembers</td> - <td>The name of a member in a key.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <redis.sRem> - <redisKey>{$ctx:redisKey}</redisKey> - <redisMembers>{$ctx:redisMembers}</redisMembers> - </redis.sRem> - ``` - - **Sample request** - - ```json - { - "redisKey":"sampleKey", - "redisMembers":"sampleValue" - } - ``` - -??? note "sUnion" - The sUnion operation is used to add multiple sets. See the [related documentation](https://redis.io/commands/sunion) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnion>
-        <redisKey>{$ctx:redisKey}</redisKey>
-    </redis.sUnion>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey"
-    }
-    ```
-
-??? note "sUnionStore"
-    The sUnionStore operation is used to add multiple sets and store the resulting set in a key. See the [related documentation](https://redis.io/commands/sunionstore) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisDstkey</td>
-            <td>The name of the destination key.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.sUnionStore>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisDstkey>{$ctx:redisDstkey}</redisDstkey>
-    </redis.sUnionStore>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisDstkey":"sampleDestinationKey"
-    }
-    ```
-
-### Sorted Sets
-
-??? note "zadd"
-    The zadd operation adds one or more members to a sorted set, or updates the score of a member that already exists. See the [related documentation](https://redis.io/commands/zadd) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisScore</td>
-            <td>The score to assign to the member.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMember</td>
-            <td>The name of the member you want to add.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.zadd>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisScore>{$ctx:redisScore}</redisScore>
-        <redisMember>{$ctx:redisMember}</redisMember>
-    </redis.zadd>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisScore":"1.1",
-        "redisMember":"sampleMember"
-    }
-    ```
-
-??? note "zCount"
-    The zCount operation retrieves a count of the members in a sorted set whose scores are within the specified min and max values. See the [related documentation](https://redis.io/commands/zcount) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>redisKey</td>
-            <td>The name of the key.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMin</td>
-            <td>The minimum score value.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>redisMax</td>
-            <td>The maximum score value.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <redis.zCount>
-        <redisKey>{$ctx:redisKey}</redisKey>
-        <redisMin>{$ctx:redisMin}</redisMin>
-        <redisMax>{$ctx:redisMax}</redisMax>
-    </redis.zCount>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "redisKey":"sampleKey",
-        "redisMin":"1.1",
-        "redisMax":"2.2"
-    }
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/redis-connector/redis-connector-configuration.md b/en/docs/reference/connectors/redis-connector/redis-connector-configuration.md
deleted file mode 100644
index 28553b5bb1..0000000000
--- a/en/docs/reference/connectors/redis-connector/redis-connector-configuration.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Setting up the Redis Environment
-
-The Redis connector allows you to access the Redis commands from an integration sequence. Redis stands for REmote DIctionary Server. It is a store/server that stores data as key-value pairs, and this key-value store can be used as a database.
-
-## Setting up the environment
-
-Before you start configuring the Redis connector, you need the WSO2 integration runtime. [Download](https://wso2.com/integration/micro-integrator/) the integration runtime and extract the ZIP file to a known location. In this setup guide, we refer to that location as <PRODUCT_HOME>.
-
-To configure the Redis connector, download the following client libraries from the given locations and copy them to the `<PRODUCT_HOME>/lib` directory.
-
-* For Redis connector v1.0.1 - [jedis-2.1.0.jar](https://mvnrepository.com/artifact/redis.clients/jedis/2.1.0)
-* For Redis connector v2.1.x and above - [jedis-3.6.0.jar](https://mvnrepository.com/artifact/redis.clients/jedis/3.6.0)
-
-## Setting up the Redis server
-
-1. Download the [Redis server](http://redis.io/download) and follow the steps given on that page to install it on your local machine.
-2. After setting up the **Redis server**, navigate to the location where you installed Redis and execute the **sudo make install** command.
-3. Enter the **redis-server** command to start the Redis server.
-4. In the command line, you can see the Redis **port** and **PID** as shown below.
-
-    <a href="{{base_path}}/assets/img/integrate/connectors/redis-server.png"><img src="{{base_path}}/assets/img/integrate/connectors/redis-server.png" title="Redis server" alt="Redis server"/></a>
-
-5. You can interact with Redis using the built-in client. In the command line, navigate to the location where you installed Redis and enter `redis-cli`.
-
-    <a href="{{base_path}}/assets/img/integrate/connectors/redis-client.png"><img src="{{base_path}}/assets/img/integrate/connectors/redis-client.png" title="Redis Client" width="60%" alt="Redis Client"/> </a>
diff --git a/en/docs/reference/connectors/redis-connector/redis-connector-example.md b/en/docs/reference/connectors/redis-connector/redis-connector-example.md
deleted file mode 100644
index 7d35a41bfc..0000000000
--- a/en/docs/reference/connectors/redis-connector/redis-connector-example.md
+++ /dev/null
@@ -1,413 +0,0 @@
-# Redis Connector Example
-
-The Redis connector allows you to access the Redis commands from an integration sequence.
-
-## What you'll build
-
-Given below is a sample scenario that demonstrates how to work with the WSO2 Redis Connector and access the Redis server, using Redis commands.
-
-The user sends a request to invoke an API to get stock details. This REST call is converted into a SOAP message and sent to the back-end service. While the response from the back-end service is converted back to JSON and sent to the API caller, the integration runtime extracts the stock volume details from the response and stores them in a configured Redis server.
-When users need to retrieve the collected stock volumes, they can invoke the `getstockvolumedetails` resource. This example also demonstrates how users can manipulate these stock volume details by removing unwanted items from the Redis server.
-
-> **Note**: In this scenario you need to set up the Redis server on your local machine. Please refer to the [Setting up the Redis Connector]({{base_path}}/reference/connectors/redis-connector/redis-connector-configuration/) documentation. Follow the steps listed under the `Setting up the Redis server` section to set up the Redis server and the `Set up the back-end service` section to set up the Stockquote service.
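-
-!!! Tip
-    Once the Redis server is running, you can quickly confirm that it is reachable before trying out the example (the default host and port, localhost:6379, are assumed here):
-
-    ```
-    $ redis-cli ping
-    PONG
-    ```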
-This example demonstrates how to use the Redis connector to:
-
-1. Retrieve stock volume details from the Stockquote back-end service. This is done while extracting the stock volume, creating a Redis hash map, and adding stock volume details to the Redis server. (In this example, Redis hashes are used to store the stock volume details of different companies. Since the “symbol” that is sent in the request is “WSO2”, the request is routed to the WSO2 endpoint. Once the response from the WSO2 endpoint is received, it is transformed according to the specified template and sent to the client. Then a hash map is created, and the extracted details are inserted into the Redis hash map.)
-2. Retrieve all stock volume details from the Redis server.
-3. Remove stock volume details from the Redis server.
-
-All three operations are exposed via the `StockQuoteAPI` API. The API with the context `/stockquote` has three resources:
-
-* `/getstockquote/{symbol}`: This is used to get stock volume details while extracting and sending details to the Redis hash map.
-* `/getstockvolumedetails` : Retrieve information about the inserted stock volume details.
-* `/deletestockvolumedetails` : Remove unwanted stock volume details.
-
-The following diagram shows the overall solution. The user creates a hash map, stores the WSO2 stock volume details in it, then retrieves them, and removes unwanted hash map items. To invoke each operation, the user uses the same API.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/redis-connector-example-updated.png" title="Redis connector example" width="800" alt="Redis connector example"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources.
-
-### Import the connector
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First create an API, which will be where we configure the integration logic. Right-click on the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `StockQuoteAPI` and the API context as `/stockquote`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-#### Configuring the API
-
-##### Configure a resource for the getstockquote operation
-
-Create a resource that sets up a Redis hash map and sets a specific field in a hash to a specified value. In this sample, the user sends the request to invoke the created API to get WSO2 stock volume details. To achieve this, add the following components to the configuration.
-
-1. Add an address endpoint using the send mediator to access SimpleStockQuoteService.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-address-endpoint.png" title="Address endpoint" width="500" alt="Address endpoint"/>
-
-2. Add a header to get a quote from the SimpleStockQuoteService.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-header.png" title="Add Header to get Quote" width="500" alt="Add Header to get Quote"/>
-
-3. Add a payload factory to extract the selected stock details. In this sample, we attempt to get WSO2 stock details from the SimpleStockQuoteService.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-payloadfactory.png" title="Add payloadfactory to extract WSO2 details" width="500" alt="Add payloadfactory to extract WSO2 details"/>
-
-4. In this example, we copy the original payload to a property using the Enrich mediator.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-enrich1.png" title="Add enrich mediator" width="500" alt="Add enrich mediator"/>
-
-    When we need the original payload, we replace the message body with this property value using the Enrich mediator as follows.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-enrich2.png" title="Add enrich mediator" width="500" alt="Add enrich mediator"/>
-
-5. Initialize the connector.
-
-    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Redis Connector** section. Then drag and drop the `init` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-init-drag-and-shop.png" title="Drag and drop init operation" width="500" alt="Drag and drop init operation"/>
-
-    2. Add the property values into the `init` operation as shown below. Replace the `redisHost`, `redisPort`, and `redisTimeout` values with your own.
-
-        - **redisHost**: The Redis host name (default localhost).
-        - **redisPort**: The port on which the Redis server is running (the default port is 6379).
-        - **redisTimeout** : The server TTL (Time to Live) in milliseconds.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-init-parameterspng.png" title="Add values to the init operation" width="800" alt="Add values to the init operation"/>
-
-6. Set up the hSet operation. This operation sets a specific field in a hash to a specified value.
-
-    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Redis Connector** section. Then drag and drop the `hSet` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-hset-drag-and-drop.png" title="Drag and drop hSet operation" width="500" alt="Drag and drop hSet operation"/>
-
-    2. In this operation, we are going to set a hash map in the Redis server. The hSet operation sets a specific field in a hash to a specified value.
-
-        - **redisKey** : The name of the key where the hash is stored.
-        - **redisField** : The field for which you want to set a value.
-        - **redisValue** : The value that you want to set for the field.
-
-        In this example, the `redisKey` value is configured as **StockVolume**. While invoking the API, the above `redisField` and `redisValue` parameter values are extracted from the response of the SimpleStockQuoteService and populated as input values for the Redis `hSet` operation, as shown in the snippet below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-hset-drag-and-drop-parameter.png" title="hSet parameters" width="500" alt="hSet parameters"/>
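-
-    In the source view, the configured `hSet` operation looks like the following (this matches the complete API configuration given later on this page):
-
-    ```xml
-    <redis.hSet>
-        <redisKey>StockVolume</redisKey>
-        <redisField>{$ctx:symbol}</redisField>
-        <redisValue>{$ctx:volume}</redisValue>
-    </redis.hSet>
-    ```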
-
-7. To get the input values into the `hSet` operation, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators onto the Design pane as shown below.
-    > **Note**: The properties should be added to the palette before creating the operation.
-
-    Configure the two Property mediators as follows:
-
-    1. Add a property mediator to capture the `symbol` value from the response of SimpleStockQuoteService. The 'symbol' contains the company name of the stock quote.
-
-        - **name** : symbol
-        - **value expression** : $body/soapenv:Envelope/soapenv:Body/ns:getQuoteResponse/ax21:symbol
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-getsymbol-properties1.png" title="Add property mediators to get symbol" width="600" alt="Add property mediators to get symbol"/>
-
-    2. Add a property mediator to capture the `volume` value. The 'volume' contains the stock quote volume of the selected company.
-
-        - **name** : volume
-        - **value expression** : $body/soapenv:Envelope/soapenv:Body/ns:getQuoteResponse/ax21:volume
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-getvolume-properties1.png" title="Add property mediators to get volume" width="600" alt="Add property mediators to get volume"/>
-
-8. Forward the backend response to the API caller.
-
-    When you invoke the created resource, the request message goes through the `/getstockquote` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-    1. Drag and drop the **Respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-    2. Once you have set up the resource, you can see the `getstockquote` resource as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/redis-createstockvolume-resource.png" title="Resource design view" width="600" alt="Resource design view"/>
-
-##### Configure a resource for the getstockvolumedetails operation
-
-1. Initialize the connector.
-
-    You can use the same configuration to initialize the connector. Please follow step 5 of the `getstockquote` resource for setting up the `init` operation.
-
-2. Set up the hGetAll operation.
-
-    Navigate into the **Palette** pane and select the graphical operations icons listed under the **Redis Connector** section. Then drag and drop the `hGetAll` operation into the Design pane. The `hGetAll` operation retrieves all the fields and values of a hash stored in a specified key.
-
-    - **redisKey** : The name of the key where the hash is stored.
-
-    You only need to send `redisKey` as a parameter. In this example, the `redisKey` value is configured as **StockVolume**.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/redis-hgetall-drag-and-drop.png" title="Drag and drop hGetAll operation" width="500" alt="Drag and drop hGetAll operation"/>
-
-3. Forward the backend response to the API caller. Please follow the steps given in section 8 of the `getstockquote` operation.
-
-##### Configure a resource for the deletestockvolumedetails operation
-
-1. Initialize the connector.
-
-    You can use the same configuration to initialize the connector. Please follow step 5 of the `getstockquote` resource for setting up the `init` operation.
-
-2. Set up the hDel operation.
-
-    Navigate into the **Palette** pane and select the graphical operations icons listed under the **Redis Connector** section. Then drag and drop the `hDel` operation into the Design pane. The `hDel` operation deletes one or more hash fields.
-
-    - **redisKey** : The name of the key where the hash is stored.
-    - **redisFields** : The fields that you want to delete.
- - <img src="{{base_path}}/assets/img/integrate/connectors/redis-hdell-drag-and-drop.png" title="Drag and drop hDell operation" width="500" alt="Drag and drop hDell operation"/> - -3. Forward the backend response to the API caller. Please follow the steps given in section 8 in the `getstockquote` operation. - -Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - -??? note "StockQuoteAPI.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <api context="/stockquote" name="StockQuoteAPI" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="GET" uri-template="/getstockquote/{symbol}"> - <inSequence> - <payloadFactory media-type="xml"> - <format> - <m0:getQuote xmlns:m0="http://services.samples"> - <m0:request> - <m0:symbol>$1</m0:symbol> - </m0:request> - </m0:getQuote> - </format> - <args> - <arg evaluator="xml" expression="get-property('uri.var.symbol')"/> - </args> - </payloadFactory> - <header name="Action" scope="default" value="urn:getQuote"/> - <call> - <endpoint> - <address format="soap11" uri="http://localhost:9000/services/SimpleStockQuoteService"> - <suspendOnFailure> - <initialDuration>-1</initialDuration> - <progressionFactor>1</progressionFactor> - </suspendOnFailure> - <markForSuspension> - <retriesBeforeSuspension>0</retriesBeforeSuspension> - </markForSuspension> - </address> - </endpoint> - </call> - <enrich> - <source clone="false" type="body"/> - <target property="ORIGINAL_PAYLOAD" type="property"/> - </enrich> - <property expression="$body/soapenv:Envelope/soapenv:Body/ns:getQuoteResponse/ax21:symbol" name="symbol" scope="default" type="STRING" xmlns:ax21="http://services.samples/xsd" xmlns:ns="http://services.samples" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"/> - <property expression="$body/soapenv:Envelope/soapenv:Body/ns:getQuoteResponse/ax21:volume" name="volume" scope="default" type="STRING" xmlns:ax21="http://services.samples/xsd" xmlns:ns="http://services.samples" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"/> - <redis.init> - <redisHost>127.0.0.1</redisHost> - <redisPort>6379</redisPort> - <redisTimeout>10000000000</redisTimeout> - </redis.init> - <redis.hSet> - <redisKey>StockVolume</redisKey> - <redisField>{$ctx:symbol}</redisField> - <redisValue>{$ctx:volume}</redisValue> - </redis.hSet> - <enrich> - <source clone="false" property="ORIGINAL_PAYLOAD" type="property"/> - <target type="body"/> - </enrich> - <log level="full"/> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="GET" uri-template="/getstockvolumedetails"> - <inSequence> - <redis.init> - <redisHost>127.0.0.1</redisHost> - <redisPort>6379</redisPort> - <redisTimeout>10000000000</redisTimeout> - </redis.init> - <redis.hGetAll> - <redisKey>StockVolume</redisKey> - </redis.hGetAll> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/deletestockvolumedetails"> - <inSequence> - <property expression="json-eval($.redisFields)" name="redisFields" scope="default" type="STRING"/> - <redis.init> - <redisHost>127.0.0.1</redisHost> - <redisPort>6379</redisPort> - <redisTimeout>10000000000</redisTimeout> - </redis.init> - <redis.hDel> - <redisKey>StockVolume</redisKey> - <redisFields>{$ctx:redisFields}</redisFields> - </redis.hDel> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` -## Get the project - -You can download the ZIP file and extract the contents 
to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/redis-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-Invoke the API as shown below using curl. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-
-1. Retrieve stock volume details from the Stockquote back-end service.
-
-    **Sample request 1**
-
-```
-curl -v -X GET "http://localhost:8290/stockquote/getstockquote/WSO2" -H "Content-Type:application/json"
-```
-
-    **Expected Response**
-
-```json
-    {
-        "Envelope": {
-            "Body": {
-                "getQuoteResponse": {
-                    "change": -2.86843917118114,
-                    "earnings": -8.540305401672558,
-                    "high": -176.67958828498735,
-                    "last": 177.66987465262923,
-                    "low": -176.30898912339075,
-                    "marketCap": 56495579.98178506,
-                    "name": "WSO2 Company",
-                    "open": 185.62740369461244,
-                    "peRatio": 24.341353665128693,
-                    "percentageChange": -1.4930577008849097,
-                    "prevClose": 192.11844053187397,
-                    "symbol": "WSO2",
-                    "volume": 7791
-                }
-            }
-        }
-    }
-```
-
-    **Sample request 2**
-
-```
-curl -v -X GET "http://localhost:8290/stockquote/getstockquote/IBM" -H "Content-Type:application/json"
-```
-
-    **Expected Response**
-
-```json
-    {
-        "Envelope": {
-            "Body": {
-                "getQuoteResponse": {
-                    "change": -2.86843917118114,
-                    "earnings": -8.540305401672558,
-                    "high": -176.67958828498735,
-                    "last": 177.66987465262923,
-                    "low": -176.30898912339075,
-                    "marketCap": 56495579.98178506,
-                    "name": "IBM Company",
-                    "open": 185.62740369461244,
-                    "peRatio": 24.341353665128693,
-                    "percentageChange": -1.4930577008849097,
-                    "prevClose": 192.11844053187397,
-                    "symbol": "IBM",
-                    "volume": 7791
-                }
-            }
-        }
-    }
-```
-
-    **The inserted hash map can be checked using `redis-cli`**
-
-    Log in to the `redis-cli` and execute the `HGETALL StockVolume` command to retrieve the inserted hash map details.
-
-```
-    127.0.0.1:6379> HGETALL StockVolume
-    1) "IBM"
-    2) "7791"
-    3) "WSO2"
-    4) "7791"
-    127.0.0.1:6379>
-```
-2. Retrieve all stock volume details from the Redis server.
-
-    **Sample request**
-
-```
-curl -v -X GET "http://localhost:8290/stockquote/getstockvolumedetails" -H "Content-Type:application/json"
-```
-
-    **Expected Response**
-
-```json
-    {
-        "output": "{IBM=7791, WSO2=7791}"
-    }
-```
-3. Remove stock volume details.
-
-    **Sample request 1**
-
-```
-curl -v -X POST -d '{"redisFields":"WSO2"}' "http://localhost:8290/stockquote/deletestockvolumedetails" -H "Content-Type:application/json"
-```
-
-    **Expected Response**
-
-```json
-    {
-        "output": 1
-    }
-```
-
-    **Sample request 2 : Check the remaining stock volume details**
-
-```
-curl -v -X GET "http://localhost:8290/stockquote/getstockvolumedetails" -H "Content-Type:application/json"
-```
-
-    **Expected Response**
-
-```json
-    {
-        "output": "{IBM=7791}"
-    }
-```
-
-    **The remaining entries can be checked using `redis-cli`**
-
-    Log in to the `redis-cli` and execute the `HGETALL StockVolume` command to retrieve the remaining hash map details.
-
-```
-    127.0.0.1:6379> HGETALL StockVolume
-    1) "IBM"
-    2) "7791"
-    127.0.0.1:6379>
-```
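-
-!!! Tip
-    The hDel operation accepts a space-separated list of fields (as shown in the connector reference), so several entries can be removed in a single call. For example, a request such as the following (illustrative values) would remove both companies from the **StockVolume** hash:
-
-    ```
-    curl -v -X POST -d '{"redisFields":"WSO2 IBM"}' "http://localhost:8290/stockquote/deletestockvolumedetails" -H "Content-Type:application/json"
-    ```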
diff --git a/en/docs/reference/connectors/redis-connector/redis-connector-overview.md b/en/docs/reference/connectors/redis-connector/redis-connector-overview.md
deleted file mode 100644
index d5e5b32633..0000000000
--- a/en/docs/reference/connectors/redis-connector/redis-connector-overview.md
+++ /dev/null
@@ -1,37 +0,0 @@
# Redis Connector Overview

Redis is an open source (BSD-licensed), in-memory data structure store, used as a **database**, **cache**, and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries.

This connector enables developers to use an external Redis server as a cache or a database in the mediation logic.

To see the available Redis connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Redis".

<img src="{{base_path}}/assets/img/integrate/connectors/redis-store.png" title="Redis Connector Store" width="200" alt="Redis Connector Store"/>

## Compatibility

| Connector version | Supported product versions |
| ------------- |------------- |
| 1.0.1 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
| 2.1.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
| 2.2.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

For older versions, see the details in the connector store.

## Redis Connector documentation

* **[Setting up the Redis Environment]({{base_path}}/reference/connectors/redis-connector/redis-connector-configuration/)**: This involves setting up the Redis server and a backend to test the flow.

* **[Redis Connector Example]({{base_path}}/reference/connectors/redis-connector/redis-connector-example/)**: This example demonstrates how to work with the Redis Connector and access the Redis server using Redis commands.

* **[Redis Connector Reference]({{base_path}}/reference/connectors/redis-connector/redis-connector-reference/)**: This documentation provides a reference guide for the Redis Connector.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for this connector, please create a pull request in the following repository.

* [Redis Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-redis)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforce-soap-reference.md b/en/docs/reference/connectors/salesforce-connectors/salesforce-soap-reference.md
deleted file mode 100644
index ec07b4903b..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforce-soap-reference.md
+++ /dev/null
@@ -1,1881 +0,0 @@
# Salesforce SOAP Connector Reference

The following operations allow you to work with the Salesforce SOAP Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

To use the Salesforce SOAP connector, add the `<salesforce.init>` element in your configuration before carrying out any other Salesforce SOAP operations.

??? note "salesforce.init"
    The salesforce.init operation initializes the connector to interact with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>username</td>
            <td>The username to access the Salesforce account.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>password</td>
            <td>The password provided here is a concatenation of the user password and the security token provided by Salesforce.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>loginUrl</td>
            <td>The login URL to access the Salesforce account.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>blocking</td>
            <td>Indicates whether the connector needs to perform blocking invocations to Salesforce. (Supported in WSO2 ESB 4.9.0 and later.)</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <salesforce.init>
        <loginUrl>{$ctx:loginUrl}</loginUrl>
        <username>{$ctx:username}</username>
        <password>{$ctx:password}</password>
        <blocking>{$ctx:blocking}</blocking>
    </salesforce.init>
    ```

    **Sample request**

    ```xml
    <salesforce.init>
        <username>MyUsername</username>
        <password>MyPassword</password>
        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
        <blocking>false</blocking>
    </salesforce.init>
    ```
---

## Working with emails

??? note "sendEmail"
    The salesforce.sendEmail operation creates and sends an email using Salesforce based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_sendemail.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>sendEmail</td>
            <td>XML representation of the email.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <salesforce.sendEmail>
        <sendEmail xmlns:sfdc="sfdc">{//sfdc:emailWrapper}</sendEmail>
    </salesforce.sendEmail>
    ```

    **Sample request**

    Given below is a sample request that can be handled by the sendEmail operation.

    ```xml
    <payloadFactory>
        <format>
            <sfdc:emailWrapper xmlns:sfdc="sfdc">
                <sfdc:messages type="urn:SingleEmailMessage">
                    <sfdc:bccSender>true</sfdc:bccSender>
                    <sfdc:emailPriority>High</sfdc:emailPriority>
                    <sfdc:replyTo>123@gmail.com</sfdc:replyTo>
                    <sfdc:saveAsActivity>false</sfdc:saveAsActivity>
                    <sfdc:senderDisplayName>wso2</sfdc:senderDisplayName>
                    <sfdc:subject>test</sfdc:subject>
                    <sfdc:useSignature>false</sfdc:useSignature>
                    <sfdc:targetObjectId>00390000001PBFn</sfdc:targetObjectId>
                    <sfdc:plainTextBody>Hello, this is a holiday greeting!</sfdc:plainTextBody>
                </sfdc:messages>
            </sfdc:emailWrapper>
        </format>
        <args/>
    </payloadFactory>

    <salesforce.sendEmail>
        <sendEmail xmlns:sfdc="sfdc">{//sfdc:emailWrapper}</sendEmail>
    </salesforce.sendEmail>
    ```
    **Sample response**

    Given below is a sample response for the sendEmail operation.

    ```xml
    <?xml version='1.0' encoding='utf-8'?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com">
        <soapenv:Header>
            <LimitInfoHeader>
                <limitInfo>
                    <current>67</current>
                    <limit>15000</limit>
                    <type>API REQUESTS</type>
                </limitInfo>
            </LimitInfoHeader>
        </soapenv:Header>
        <soapenv:Body>
            <sendEmailResponse>
                <result>
                    <success>true</success>
                </result>
            </sendEmailResponse>
        </soapenv:Body>
    </soapenv:Envelope>
    ```

???
note "sendEmailMessage" - The salesforcebulk.sendEmailMessage method sends emails that have already been drafted in Salesforce. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_send_email_message.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sendEmailMessage</td> - <td>XML representation of the email IDs to send.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.sendEmailMessage config-ref="connectorConfig"> - <sendEmailMessage xmlns:sfdc="sfdc">{//sfdc:emails}</sendEmailMessage> - </salesforce.sendEmailMessage> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the sendEmailMessage operation. - - ```xml - <payloadFactory> - <format> - <sfdc:emails xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkK</sfdc:Ids> - <sfdc:Ids>0019000000bbMkK</sfdc:Ids> - </sfdc:emails> - </format> - <args/> - </payloadFactory> - - <salesforce.sendEmailMessage config-ref="connectorConfig"> - <sendEmailMessage xmlns:sfdc="sfdc">{//sfdc:emails}</sendEmailMessage> - </salesforce.sendEmailMessage> - ``` - - **Sample response** - - Given below is a sample response for the sendEmail operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>67</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <sendEmailResponse> - <result> - <success>true</success> - </result> - </sendEmailResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - ---- - -## Working with records - -??? note "salesforcebulk.create" - The salesforcerest.create operation creates one or more record with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_create.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to add.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.create configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.create> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the create operation. 
- - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Name>wso2123</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Name>abc123</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.create> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.create> - ``` - **Sample response** - - Given below is a sample response that can be handled by the create operation - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>9</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <createResponse> - <result> - <id>0036F00002mdwl2QAA</id> - <success>true</success> - </result> - </createResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.update" - The salesforcerest.update operation updates one or more existing records with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_update.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to add.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.update configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.update> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the create operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkZ</sfdc:Id> - <sfdc:Name>newname01</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkP</sfdc:Id> - <sfdc:Name>newname02</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.update> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.update> - ``` - **Sample response** - - Given below is a sample response that can be handled by the update operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>53</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <updateResponse> - <result> - <id>0016F00002S4Wj0QAF</id> - <success>true</success> - </result> - </updateResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? 
note "salesforcebulk.upsert" - The salesforcerest.upsert operation update existing records and insert new records in a single operation, with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_upsert.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>externalId</td> - <td>The field containing the record ID, that is used by Salesforce to determine whether to update an existing record or create a new one. This is done by matching the ID to the record IDs in Salesforce. By default, the field is assumed to be named "Id".</td> - <td>Yes</td> - </tr> - <tr> - <td>sObjects</td> - <td>XML representation of the records to update and insert. When inserting a new record, you do not specify sfdc:Id.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.upsert configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>Id</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` - **Sample request** - - Set the externalId field : If you need to give any existing externalId field of sObject to externalId then the payload should be with that externalId field and value as follows in sample. - - Sample to set ExternalId field and value - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:sample__c>{any value}</sfdc:sample__c> - <sfdc:Name>newname001</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.upsert> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>sample__c</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` - Given below is a sample request that can be handled by the create operation. - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkZ</sfdc:Id> - <sfdc:Name>newname001</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Name>newname002</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.upsert> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>Id</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the upsert operation. 
- - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>54</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <upsertResponse> - <result> - <created>false</created> - <id>0016F00002S4Wj0QAF</id> - <success>true</success> - </result> - <result> - <created>true</created> - <id>0016F00002pUVTMQA4</id> - <success>true</success> - </result> - </upsertResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.search" - The salesforcerest.search operation searchs for records, use salesforce.search and specify the search string. If you already know the record IDs, use retrieve instead. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_search.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>searchString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.search configKey="MySFConfig"> - <searchString>FIND {map*} IN ALL FIELDS RETURNING Account (Id, Name), Contact, Opportunity, Lead</searchString> - </salesforce.search> - ``` - **Sample response** - - Given below is a sample response that can be handled by the search operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:sf="urn:sobject.partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>56</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <searchResponse> - <result> - <searchRecords> - <record xsi:type="sf:sObject"> - <sf:type>Account</sf:type> - <sf:Id>0016F00002SN7qiQAD</sf:Id> - <sf:Id>0016F00002SN7qiQAD</sf:Id> - <sf:Name>GenePoint</sf:Name> - </record> - </searchRecords> - </result> - </searchResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.query" - The salesforcerest.query operation retrieve data from an object, use salesforce.query with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_query.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>batchSize</td> - <td>The number of records to return. If more records are available than the batch size, you can use the queryMore operation to get additional results.</td> - <td>Yes</td> - </tr> - <tr> - <td>queryString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Note : If you want your search results to include deleted records that are available in the Recycle Bin, use salesforce.queryAll in place of salesforce.query. - - ```xml - <salesforce.query configKey="MySFConfig"> - <batchSize>200</batchSize> - <queryString>select id,name from Account</queryString> - </salesforce.query> - ``` - - **Sample request** - - Following is a sample configuration to query records. 
It also illustrates the use of queryMore operation to get additional results: - - - ```xml - <salesforce.query> - <batchSize>200</batchSize> - <queryString>select id,name from Account</queryString> - </salesforce.query> - <!-- Execute the following to get the other batches --> - <iterate xmlns:sfdc="http://wso2.org/salesforce/adaptor" continueParent="true" expression="//sfdc:iterator"> - <target> - <sequence> - <salesforce.queryMore> - <batchSize>200</batchSize> - </salesforce.queryMore> - </sequence> - </target> - </iterate> - ``` - **Sample response** - - Given below is a sample response for the query operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:sf="urn:sobject.partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>58</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <queryResponse> - <result xsi:type="QueryResult"> - <done>true</done> - <queryLocator xsi:nil="true"/> - <records xsi:type="sf:sObject"> - <sf:type>Account</sf:type> - <sf:Id>0016F00002SasNYQAZ</sf:Id> - <sf:Id>0016F00002SasNYQAZ</sf:Id> - <sf:Name>wso2New</sf:Name> - </records> - . - . - <size>129</size> - </result> - </queryResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.retrieve" - The salesforcerest.retrieve operation IDs of the records you want to retrieve with the Salesforce SOAP API. If you do not know the record IDs, use query instead. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_retrieve.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>fieldList</td> - <td>A comma-separated list of the fields you want to retrieve from the records.</td> - <td>Yes</td> - </tr> - <tr> - <td>objectType</td> - <td> The object type of the records.</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to retrieve.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.retrieve configKey="MySFConfig"> - <fieldList>id,name</fieldList> - <objectType>Account</objectType> - <objectIDS xmlns:sfdc="sfdc">{//sfdc:sObjects}</objectIDS> - </salesforce.retrieve> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the retrieve operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkK</sfdc:Ids> - <sfdc:Ids>0019000000aaMjl</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.retrieve configKey="MySFConfig"> - <fieldList>id,name</fieldList> - <objectType>Account</objectType> - <objectIDS xmlns:sfdc="sfdc">{//sfdc:sObjects}</objectIDS> - </salesforce.retrieve> - ``` - **Sample response** - - Given below is a sample response for the retrieve operation. 
- - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:sf="urn:sobject.partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>60</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <retrieveResponse> - <result xsi:type="sf:sObject"> - <sf:type>Account</sf:type> - <sf:Id>0016F00002S4Wj0QAF</sf:Id> - <sf:Id>0016F00002S4Wj0QAF</sf:Id> - <sf:Name>newname01</sf:Name> - </result> - </retrieveResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.delete" - The salesforcerest.delete operation delete one or more records. If you do not know the record IDs, use query instead. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_delete.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to delete, as shown in the following example.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.delete configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.delete> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the delete operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkZ</sfdc:Ids> - <sfdc:Ids>0019000000aaMkP</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.delete> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.delete> - ``` - **Sample response** - - Given below is a sample response that can be handled by the delete operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>63</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <deleteResponse> - <result> - <id>0016F00002S4Wj0QAF</id> - <success>true</success> - </result> - </deleteResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.undelete" - The salesforcerest.undelete operation restore records that were previously deleted. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_undelete.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to delete, as shown in the following example.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.undelete configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.undelete> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the undelete operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkZ</sfdc:Ids> - <sfdc:Ids>0019000000aaMkP</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.undelete> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.undelete> - ``` - **Sample response** - - Given below is a sample response that can be handled by the undelete operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>64</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <undeleteResponse> - <result> - <id>0016F00002S4Wj0QAF</id> - <success>true</success> - </result> - </undeleteResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.getDeleted" - The salesforcerest.getDeleted operation retrieve the list of records that were previously deleted. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getdeleted.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sObjectType</td> - <td>sObjectType from which we need to retrieve deleted records</td> - <td>Yes</td> - </tr> - <tr> - <td>startDate</td> - <td>start date and time for deleted records lookup</td> - <td>Yes</td> - </tr> - <tr> - <td>endDate</td> - <td>end date and time for deleted records lookup</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.getDeleted configKey="MySFConfig"> - <sObjectType>{$ctx:sObjectType}</sObjectType> - <startDate>{$ctx:startDate}</startDate> - <endDate>{$ctx:endDate}</endDate> - </salesforce.getDeleted> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getDeleted operation. - - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl> - <urn:username>XXXXXXXXXX</urn:username> - <urn:password>XXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:sObjectType>Account</urn:sObjectType> - <urn:startDate>2020-06-15T05:05:53+0000</urn:startDate> - <urn:endDate>2020-06-30T05:05:53+0000</urn:endDate> - </soapenv:Body> - </soapenv:Envelope> - ``` - **Sample response** - - Given below is a sample response that can be handled by the getDeleted operation. 
- - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>55</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <getDeletedResponse> - <result> - <deletedRecords> - <deletedDate>2020-06-18T04:10:20.000Z</deletedDate> - <id>0012x000007RqnHAAS</id> - </deletedRecords> - <earliestDateAvailable>2020-04-27T13:43:00.000Z</earliestDateAvailable> - <latestDateCovered>2020-06-30T05:05:00.000Z</latestDateCovered> - </result> - </getDeletedResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.getUpdated" - The salesforcerest.getUpdated operation retrieve the list of records that were previously updated. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getupdated.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sObjectType</td> - <td>sObjectType from which we need to retrieve deleted records</td> - <td>Yes</td> - </tr> - <tr> - <td>startDate</td> - <td>start date and time for deleted records lookup</td> - <td>Yes</td> - </tr> - <tr> - <td>endDate</td> - <td>end date and time for deleted records lookup</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.getUpdated configKey="MySFConfig"> - <sObjectType>{$ctx:sObjectType}</sObjectType> - <startDate>{$ctx:startDate}</startDate> - <endDate>{$ctx:endDate}</endDate> - </salesforce.getUpdated> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUpdated operation. - - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl> - <urn:username>XXXXXXXXXX</urn:username> - <urn:password>XXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:sObjectType>Account</urn:sObjectType> - <urn:startDate>2020-06-15T05:05:53+0000</urn:startDate> - <urn:endDate>2020-06-30T05:05:53+0000</urn:endDate> - </soapenv:Body> - </soapenv:Envelope> - ``` - **Sample response** - - Given below is a sample response that can be handled by the getUpdated operation. - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>66</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <getUpdatedResponse> - <result> - <ids>0012x000007RVCcAAO</ids> - <ids>0012x000007RVD1AAO</ids> - <ids>0012x000007RVG8AAO</ids> - <ids>0012x000007RVw7AAG</ids> - <ids>0012x000007RW3uAAG</ids> - <latestDateCovered>2020-06-30T05:05:00.000Z</latestDateCovered> - </result> - </getUpdatedResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.findDuplicates" - The salesforcerest.findDuplicates operation retrieve the list of records that are duplicate entries. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_findduplicates.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sobjects</td> - <td>sObjectType from which we need to retrieve duplicate records</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.findDuplicates configKey="MySFConfig"> - <sobjects xmlns:ns="wso2.connector.salesforce">{//ns:sObjects}</sobjects> - </salesforce.findDuplicates> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the findDuplicates operation. - - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl> - <urn:username>XXXXXXXXXXXX</urn:username> - <urn:password>XXXXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:sObjects> - <urn:sObject> - <urn:type>Account</urn:type> - <urn:fieldsToNull>name</urn:fieldsToNull> - <urn:fieldsToNull>id</urn:fieldsToNull> - </urn:sObject> - </urn:sObjects> - </soapenv:Body> - </soapenv:Envelope> - ``` - **Sample response** - - Given below is a sample response that can be handled by the findDuplicates operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>11</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <findDuplicatesResponse> - <result> - <duplicateResults> - <allowSave>false</allowSave> - <duplicateRule>Standard_Account_Duplicate_Rule</duplicateRule> - <duplicateRuleEntityType>Account</duplicateRuleEntityType> - <errorMessage xsi:nil="true"/> - <matchResults> - <entityType>Account</entityType> - <matchEngine>FuzzyMatchEngine</matchEngine> - <rule>Standard_Account_Match_Rule_v1_0</rule> - <size>0</size> - <success>true</success> - </matchResults> - </duplicateResults> - <success>true</success> - </result> - </findDuplicatesResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.findDuplicatesByIds" - The salesforcerest.findDuplicatesByIds operation retrieves the list of records that are duplicate entries by using ids. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_findduplicatesbyids.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>ids</td> - <td>ids for which duplicate records need to be found</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.findDuplicatesByIds configKey="MySFConfig"> - <ids xmlns:ns="wso2.connector.salesforce">{//ns:ids}</ids> - </salesforce.findDuplicatesByIds> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the findDuplicatesByIds operation. 
- - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl> - <urn:username>XXXXXXXXXX</urn:username> - <urn:password>XXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:ids> - <urn:id>0012x000005mqKuAAI</urn:id> - <urn:id>0012x000005orjlAAA</urn:id> - </urn:ids> - </soapenv:Body> - </soapenv:Envelope> - ``` - **Sample response** - - Given below is a sample response that can be handled by the findDuplicatesByIds operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>53</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <findDuplicatesByIdsResponse> - <result> - <duplicateResults> - <allowSave>false</allowSave> - <duplicateRule>Standard_Account_Duplicate_Rule</duplicateRule> - <duplicateRuleEntityType>Account</duplicateRuleEntityType> - <errorMessage xsi:nil="true"/> - <matchResults> - <entityType>Account</entityType> - <matchEngine>FuzzyMatchEngine</matchEngine> - <rule>Standard_Account_Match_Rule_v1_0</rule> - <size>0</size> - <success>true</success> - </matchResults> - </duplicateResults> - <success>true</success> - </result> - </findDuplicatesByIdsResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.merge" - The salesforcerest.merge operation merge records into one master record. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_merge.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>mergerequests</td> - <td>The merge requests according to the format defined in to Salesforce docs (See Related Salesforce documentation section)</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.merge configKey="MySFConfig"> - <mergerequests xmlns:ns="wso2.connector.salesforce">{//ns:requests}</mergerequests> - </salesforce.merge> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the merge operation. - - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl> - <urn:password>XXXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:requests> - <urn:request> - <urn:masterRecord> - <urn:type>Account</urn:type> - <urn:Id>0012x000008un5bAAA</urn:Id> - </urn:masterRecord> - <urn:recordToMergeIds>0012x000008un5lAAA</urn:recordToMergeIds> - </urn:request> - </urn:requests> - </soapenv:Body> - </soapenv:Envelope> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the merge operation. 
- - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>70</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <mergeResponse> - <result> - <id>0012x000008un5bAAA</id> - <mergedRecordIds>0012x000008un5lAAA</mergedRecordIds> - <success>true</success> - </result> - </mergeResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.convertLead" - The salesforcerest.convertLead operation convert a lead into an account. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_merge.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>leadconvertrequests</td> - <td>The lead convert requests according to the format defined in to Salesforce docs (See Related Salesforce documentation section)</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.convertLead configKey="MySFConfig"> - <leadconvertrequests xmlns:ns="wso2.connector.salesforce">{//ns:leadconvertrequests}</leadconvertrequests> - </salesforce.convertLead> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the merge operation. - - - ```xml - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" - xmlns:urn="wso2.connector.salesforce"> - <soapenv:Header/> - <soapenv:Body> - <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl> - <urn:username>XXXXXXXXXX</urn:username> - <urn:password>XXXXXXXXXX</urn:password> - <urn:blocking>false</urn:blocking> - <urn:leadconvertrequests> - <urn:leadConverts> - <urn:accountId>0012x000005mqKuAAI</urn:accountId> - <urn:leadId>00Q2x00000AH981EAD</urn:leadId> - <urn:convertedStatus>Closed - Converted</urn:convertedStatus> - </urn:leadConverts> - </urn:leadconvertrequests> - </soapenv:Body> - </soapenv:Envelope> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the merge operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>128</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <convertLeadResponse> - <result> - <accountId>0012x000005mqKuAAI</accountId> - <contactId>0032x000006I2xYAAS</contactId> - <leadId>00Q2x00000AH981EAD</leadId> - <opportunityId>0062x0000053r8FAAQ</opportunityId> - <success>true</success> - </result> - </convertLeadResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` ---- - -## Working with Recycle Bin - -??? note "salesforcebulk.emptyRecycleBin" - The Recycle Bin allows you to view and restore recently deleted records for a maximum of 15 days before they are permanently deleted. To purge records from the Recycle Bin so that they cannot be restored, use salesforce.emptyRecycleBin and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_emptyrecyclebin.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to add.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.emptyRecycleBin config-ref="connectorConfig"> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.emptyRecycleBin> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the emptyRecycleBin operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkZ</sfdc:Ids> - <sfdc:Ids>0019000000aaMkP</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.emptyRecycleBin> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.emptyRecycleBin> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the emptyRecycleBin operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>27</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <emptyRecycleBinResponse> - <result> - <id>0016F00002S4WaGQAV</id> - <success>true</success> - </result> - </emptyRecycleBinResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` ---- - -## Working with sObjects - -??? note "salesforcebulk.describeGlobal" - The salesforcerest.describeGlobal operation retrieve a list of objects that are available in the system. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describeglobal.htm) for more information. - - - **Sample configuration** - - ```xml - <salesforce.describeGlobal configKey="MySFConfig"/> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the describeGlobal operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>29</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <describeGlobalResponse> - <result> - <encoding>UTF-8</encoding> - <maxBatchSize>200</maxBatchSize> - <sobjects> - <activateable>false</activateable> - <createable>false</createable> - <custom>false</custom> - <customSetting>false</customSetting> - <deletable>false</deletable> - <deprecatedAndHidden>false</deprecatedAndHidden> - <feedEnabled>false</feedEnabled> - <keyPrefix xsi:nil="true"/> - <label>Accepted Event Relation</label> - <labelPlural>Accepted Event Relations</labelPlural> - <layoutable>false</layoutable> - <mergeable>false</mergeable> - <name>AcceptedEventRelation</name> - <queryable>true</queryable> - <replicateable>false</replicateable> - <retrieveable>true</retrieveable> - <searchable>false</searchable> - <triggerable>false</triggerable> - <undeletable>false</undeletable> - <updateable>false</updateable> - </sobjects> - . 
- . - </result> - </describeGlobalResponse> - </soapenv:Body> - </soapenv:Envelope> - - ``` - -??? note "salesforcebulk.describeSobject" - The salesforcerest.describeSobject operation retrieve metadata (such as name, label, and fields, including the field properties) for a specific object type. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobject.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sobject</td> - <td> The object type of where you want to retrieve the metadata.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.describeSObject configKey="MySFConfig"> - <sobject>Account</sobject> - </salesforce.describeSObject> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the describeSobject operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>31</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <describeSObjectResponse> - <result> - <activateable>false</activateable> - <childRelationships> - <cascadeDelete>false</cascadeDelete> - <childSObject>Account</childSObject> - <deprecatedAndHidden>false</deprecatedAndHidden> - <field>ParentId</field> - <relationshipName>ChildAccounts</relationshipName> - </childRelationships> - . - . - </result> - </describeSObjectResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.describeSobjects" - The salesforcerest.describeSobjects operation retrieve metadata (such as name, label, and fields, including the field properties) for multiple object types returned as an array. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sobjects</td> - <td>An XML representation of the object types of where you want to retrieve the metadata.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.describeSobjects configKey="MySFConfig"> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.describeSobjects> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the describeSobjects operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:sObjectType>Account</sfdc:sObjectType> - <sfdc:sObjectType>Contact</sfdc:sObjectType> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.describeSobjects configKey="MySFConfig"> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.describeSobjects> - ``` - - **Sample response** - - Given below is a sample response that can be handled by the describeSobjects operation. 
    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
        <soapenv:Header>
            <LimitInfoHeader>
                <limitInfo>
                    <current>51</current>
                    <limit>15000</limit>
                    <type>API REQUESTS</type>
                </limitInfo>
            </LimitInfoHeader>
        </soapenv:Header>
        <soapenv:Body>
            <describeSObjectsResponse>
                <result>
                    <activateable>false</activateable>
                    <childRelationships>
                        <cascadeDelete>false</cascadeDelete>
                        <childSObject>Account</childSObject>
                        <deprecatedAndHidden>false</deprecatedAndHidden>
                        <field>ParentId</field>
                        <relationshipName>ChildAccounts</relationshipName>
                    </childRelationships>
                    .
                    .
                </result>
                <result>
                    <activateable>false</activateable>
                    <childRelationships>
                        <cascadeDelete>false</cascadeDelete>
                        <childSObject>AcceptedEventRelation</childSObject>
                        <deprecatedAndHidden>false</deprecatedAndHidden>
                        <field>RelationId</field>
                        <relationshipName>AcceptedEventRelations</relationshipName>
                    </childRelationships>
                    .
                    .
                </result>
            </describeSObjectsResponse>
        </soapenv:Body>
    </soapenv:Envelope>
    ```

---

## Working with User

??? note "salesforce.getUserInfo"
    To retrieve information about the user who is currently logged in, use salesforce.getUserInfo. The information provided includes the name, ID, and contact information of the user. See the [Salesforce documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getuserinfo_getuserinforesult.htm) for details of the information that is returned using this operation. If you want to get additional information about the user that is not returned by this operation, use the retrieve operation on the User object, providing the ID returned from getUserInfo. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getuserinfo.htm) for more information.

    **Sample configuration**

    ```xml
    <salesforce.getUserInfo configKey="MySFConfig"/>
    ```

    **Sample response**

    Given below is a sample response that can be handled by the getUserInfo operation.
- - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>11</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <getUserInfoResponse> - <result> - <accessibilityMode>false</accessibilityMode> - <currencySymbol>$</currencySymbol> - <orgAttachmentFileSizeLimit>5242880</orgAttachmentFileSizeLimit> - <orgDefaultCurrencyIsoCode>USD</orgDefaultCurrencyIsoCode> - <orgDisallowHtmlAttachments>false</orgDisallowHtmlAttachments> - <orgHasPersonAccounts>false</orgHasPersonAccounts> - <organizationId>00D6F000002SofgUAC</organizationId> - <organizationMultiCurrency>false</organizationMultiCurrency> - <organizationName>john</organizationName> - <profileId>00e6F000003GTmYQAW</profileId> - <roleId xsi:nil="true"/> - <sessionSecondsValid>7200</sessionSecondsValid> - <userDefaultCurrencyIsoCode xsi:nil="true"/> - <userEmail>iamjohn@gmail.com</userEmail> - <userFullName>john doe</userFullName> - <userId>0056F000009wCJgQAM</userId> - <userLanguage>en_US</userLanguage> - <userLocale>en_US</userLocale> - <userName>iamjohn@gmail.com</userName> - <userTimeZone>America/Los_Angeles</userTimeZone> - <userType>Standard</userType> - <userUiSkin>Theme3</userUiSkin> - </result> - </getUserInfoResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` - -??? note "salesforcebulk.setPassword" - The salesforcerest.setPassword operation change the user password by specifying the password. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_setpassword.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>userId</td> - <td> The user's Salesforce ID.</td> - <td>Yes</td> - </tr> - <tr> - <td>password</td> - <td>If using setPassword, the new password to assign to the user.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - setPassword - - ```xml - <salesforce.setPassword configKey="MySFConfig"> - <userId>0056F000009wCJgQAM</userId> - <password>abc123</password> - </salesforce.setPassword> - ``` - - resetPassword - - ```xml - <salesforce.resetPassword configKey="MySFConfig"> - <userId>0056F000009wCJgQAM</userId> - </salesforce.resetPassword> - ``` - - **Sample setPassword** - - Given below is a sample response that can be handled by the setPassword operation. - - - ```xml - <?xml version='1.0' encoding='utf-8'?> - <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"> - <soapenv:Header> - <LimitInfoHeader> - <limitInfo> - <current>23</current> - <limit>15000</limit> - <type>API REQUESTS</type> - </limitInfo> - </LimitInfoHeader> - </soapenv:Header> - <soapenv:Body> - <resetPasswordResponse> - <result> - <password>H5fj8A6M</password> - </result> - </resetPasswordResponse> - </soapenv:Body> - </soapenv:Envelope> - ``` ---- - -## Working with Utility - -??? note "salesforcebulk.getServerTimestamp" - The salesforcerest.getServerTimestamp operation retrieve the timestamp of the server. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getservertimestamp.htm) for more information. 
    **Sample configuration**

    ```xml
    <salesforce.getServerTimestamp configKey="MySFConfig"/>
    ```

    **Sample request**

    Given below is a sample request that can be handled by the getServerTimestamp operation.

    ```xml
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:urn="wso2.connector.salesforce">
        <soapenv:Header/>
        <soapenv:Body>
            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl>
            <urn:username>XXXXXXXXXX</urn:username>
            <urn:password>XXXXXXXXXX</urn:password>
            <urn:blocking>false</urn:blocking>
        </soapenv:Body>
    </soapenv:Envelope>
    ```

    **Sample response**

    Given below is a sample response that can be handled by the getServerTimestamp operation.

    ```xml
    <?xml version='1.0' encoding='utf-8'?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com">
        <soapenv:Header>
            <LimitInfoHeader>
                <limitInfo>
                    <current>58</current>
                    <limit>15000</limit>
                    <type>API REQUESTS</type>
                </limitInfo>
            </LimitInfoHeader>
        </soapenv:Header>
        <soapenv:Body>
            <getServerTimestampResponse>
                <result>
                    <timestamp>2020-07-03T09:14:41.321Z</timestamp>
                </result>
            </getServerTimestampResponse>
        </soapenv:Body>
    </soapenv:Envelope>
    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-configuration.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-configuration.md
deleted file mode 100644
index a64263fccb..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-configuration.md
+++ /dev/null
@@ -1,36 +0,0 @@
# Setting up the SalesforceBulk Environment

The SalesforceBulk connector allows you to access the [SalesforceBulk REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/) from an integration sequence. SalesforceBulk is a RESTful API that allows you to either quickly load large sets of your organization's data into Salesforce or delete large sets of your organization's data from Salesforce.

> **Note**: To work with the Salesforce Bulk connector, you need to have a Salesforce account. If you do not have a Salesforce account, go to [https://developer.salesforce.com/signup](https://developer.salesforce.com/signup) and create a Salesforce developer account.

Salesforce uses the OAuth protocol to allow application users to securely access data without having to reveal their user credentials. For more information on how authentication is done in Salesforce, see [Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_oauth_and_connected_apps.htm).

### Obtaining user credentials

Follow the steps below to create a connected application using Salesforce and to obtain the consumer key as well as the consumer secret for the created connected application.

{!includes/reference/connectors/salesforce-connectors/sf-access-token-generation.md!}

### Configuring Axis2 configurations

Be sure to add and enable the following Axis2 configurations in the `<PRODUCT_HOME>/conf/axis2/axis2.xml` file.

* **Required message formatters**

    ```
    <messageFormatter contentType="text/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="zip/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="zip/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="text/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="text/html" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    ```

* **Required message builders**

    ```
    <messageBuilder contentType="text/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="zip/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="zip/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="text/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="text/html" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    ```
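With the environment ready, the connector is initialized in your mediation sequence before any SalesforceBulk operation is invoked. The snippet below is a minimal, illustrative sketch rather than a full configuration: the four property names match those used in the SalesforceBulk example in this documentation, the `{$ctx:...}` expressions assume the values have been set as message context properties, and your connector version may accept additional parameters.

```xml
<salesforcebulk.init>
    <!-- Credentials of the connected application registered in Salesforce -->
    <clientId>{$ctx:clientId}</clientId>
    <clientSecret>{$ctx:clientSecret}</clientSecret>
    <!-- Tokens generated for that connected application -->
    <accessToken>{$ctx:accessToken}</accessToken>
    <refreshToken>{$ctx:refreshToken}</refreshToken>
</salesforcebulk.init>
```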
* **Required message formatters**

    ```
    <messageFormatter contentType="text/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="zip/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="zip/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="text/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    <messageFormatter contentType="text/html" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
    ```
* **Required message builders**

    ```
    <messageBuilder contentType="text/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="zip/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="zip/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="text/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    <messageBuilder contentType="text/html" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
    ```
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md
deleted file mode 100644
index 7f73519bb9..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md
+++ /dev/null
@@ -1,339 +0,0 @@
# Salesforce Bulk Connector Example

The Salesforce Bulk Connector allows you to access the [Salesforce Bulk REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm) from an integration sequence. SalesforceBulk is a RESTful API that allows you to quickly load large sets of your organization's data into Salesforce or delete large sets of your organization's data from Salesforce. You can use SalesforceBulk to query, insert, update, upsert, or delete a large number of records asynchronously by submitting the records in batches. Salesforce can process these batches in the background.

## What you'll build

This example demonstrates how to use the Salesforce Bulk connector to:

1. Insert employee details (job and batch) into Salesforce.
2. Get the status of the inserted employee details.

Both operations are exposed via an API. The API with the context `/salesforce` has two resources.

* `/insertEmployeeBulkRecords` : Creates a new job in the Salesforce account and inserts employee details.
* `/getStatusOfBatch` : Retrieves the status of the created batch from the Salesforce account.

In this example, the user sends a request to invoke an API that inserts employee details in bulk into the Salesforce account. When the `insertEmployeeBulkRecords` resource is invoked, a new job is created based on the properties that you specify. The CSV data file is read using the WSO2 File Connector, and the extracted dataset is inserted as a batch. Afterwards, a response is generated according to the specified template and sent back to the client. Finally, the user can retrieve the batch status using the `getStatusOfBatch` resource.

<img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-connector.png" title="Using Salesforce Bulk Connector" width="800" alt="Using Salesforce Bulk Connector"/>

If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
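For reference, the CSV file that the File Connector reads in this example (referred to later as `SFBulk.csv`) is assumed to contain the same `Name` and `description` columns used by the `addBatch` samples in this documentation. A minimal illustrative file could look like the following; the rows are placeholders, not real data:

```
Name,description
Tom Dameon,Created from Bulk API
Anne Perera,Created from Bulk API
```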
## Configure the connector in WSO2 Integration Studio

Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences.

### Import the connector

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

### Add integration logic

First, create an API in which we will configure the integration logic. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `Salesforcebulk-API` and the API context as `/salesforce`.

<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" alt="Adding a Rest API"/>

#### Configure a resource for the insertEmployeeBulkRecords

Now follow the steps below to add configurations to the `insertEmployeeBulkRecords` resource.

1. Initialize the connector.

    1. Follow these steps to [generate the Access Tokens for Salesforce]({{base_path}}/reference/connectors/salesforce-connectors/salesforcebulk-connector-configuration/) and obtain the Client Id, Client Secret, Access Token, and Refresh Token.

    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `init` operation into the Design pane.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-init.png" title="Drag and drop init operation" width="500" alt="Drag and drop init operation"/>

    3. Add the property values into the `init` operation as shown below. Replace the `clientSecret`, `clientId`, `accessToken`, and `refreshToken` with the values obtained in the above steps.

        - **clientSecret** : Value of your client secret given when you registered your application with Salesforce.
        - **clientId** : Value of your client ID given when you registered your application with Salesforce.
        - **accessToken** : Value of the access token to access the API via request.
        - **refreshToken** : Value of the refresh token.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-init-operation-parameters.png" title="Add values to the init operation" width="800" alt="Add values to the init operation"/>

2. Set up the `createJob` operation.

    1. Set up the `createJob` configurations. In this operation we are going to create a job in the Salesforce account. The `createJob` operation parameters are listed here.

        - **operation** : The processing operation that the job should perform.
        - **object** : The object type of data that is to be processed by the job.
        - **contentType** : The content type of the job.

        While invoking the API, the above `object` parameter value comes as user input.

    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `createJob` operation into the Design pane.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-createjob.png" title="Drag and drop createJob operation" width="500" alt="Drag and drop createJob operations"/>

    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator).
Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="800" alt="Add property mediators"/>

        The parameters available for configuring the Property mediator are as follows:

        > **Note**: The properties should be added to the palette before creating the operation.

    4. Add the property mediator to capture the `objectName` value. This is the object type of data that is to be processed by the job.

        - **name** : objectName
        - **expression** : //object/text()
        - **type** : STRING

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-property1-value1.png" title="Add values to capture ObjectName value" width="600" alt="Add values to capture ObjectName value"/>

3. Set up the fileconnector operation.

    1. Set up the `fileconnector.read` configurations. In this operation we are going to read the CSV file content by using the [WSO2 File Connector]({{base_path}}/reference/connectors/file-connector/file-connector-overview).

        - **contentType** : Content type of the files processed by the connector.
        - **source** : The location of the file. This can be a file on the local physical file system or a file on an FTP server.
        - **filePattern** : The pattern of the file to be read.

        While invoking the API, the above `source` parameter value comes as user input.

        > **Note**: When configuring this `source` parameter on the Windows operating system, you need to set the property as shown below: `<source>C:\\Users\Name\Desktop\Salesforcebulk-connector\SFBulk.csv</source>`.

    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Fileconnector Connector** section. Then drag and drop the `read` operation into the Design pane.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-file-read.png" title="Drag and drop file read operation" width="500" alt="Drag and drop file read operations"/>

    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane, following the steps given in section 2.3 for the `createJob` operation.

    4. Add the property mediator to capture the `source` value. The source is the location of the file. This can be a file on the local physical file system or a file on an FTP server.

        - **name** : source
        - **expression** : //source/text()
        - **type** : STRING

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-source-property1-value1.png" title="Add values to capture source value" width="600" alt="Add values to capture source value"/>

4. Set up the addBatch operation.

    1. Initialize the connector. Please follow the steps given in section 1 for the `createJob` operation.

    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `addBatch` operation into the Design pane.

        - **objects** : A list of records to process.
        - **jobId** : The unique identifier of the job to which you want to add a new batch.
        - **isQuery** : Set to true if the operation is a query.
        - **contentType** : The content type of the batch data. The content type you specify should be compatible with the content type of the associated job. Possible values are application/xml and text/csv.

        While invoking the API, the above `jobId` and `objects` parameter values come as user input. A property mediator extracts the `jobId` from the `createJob` response and passes it into the configured `addBatch` operation.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-addBatch.png" title="Drag and drop addBatch operation" width="500" alt="Drag and drop addBatch operations"/>

    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane, following the steps given in section 2.3 for the `createJob` operation.

    4. Add the property mediator to capture the `jobId` value.

        - **name** : jobId
        - **expression** : //n0:jobInfo/n0:id
        - **type** : STRING

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-jobid-property1-value1.png" title="Add values to capture jobid value" width="600" alt="Add values to capture jobid value"/>

    5. To extract the `objects` from the file read operation, we use the [data mapper]({{base_path}}/reference/mediators/data-mapper-mediator). It grabs the CSV file content and inserts it into the `addBatch` operation.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-datamapper.png" title="Drag and drop data mapper operation" width="500" alt="Drag and drop data mapper operations"/>

5. Forward the backend response to the API caller.

    When you invoke the created resource, the request message goes through the `/insertEmployeeBulkRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.

    1. Drag and drop the **respond mediator** to the **Design view**.

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>

#### Configure a resource for the getStatusOfBatch

1. Initialize the connector.

    You can use the generated tokens to initialize the connector. Please follow the steps given in section 1 of insertEmployeeBulkRecords for setting up the `init` operation.

2. Set up the getBatchStatus operation.

    1. To retrieve the status of a created batch from the added batches in the Salesforce account, you need to add the `getBatchStatus` operation.

    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `getBatchStatus` operation into the Design pane.

        - **jobId** : The unique identifier of the job to which the batch you specify belongs.
        - **batchId** : The unique identifier of the batch for which you want to retrieve the status.

        While invoking the API, the above `jobId` and `batchId` parameter values come as user input.
        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-getbatchstatus-drag-and-drop-query.png" title="Add query operation to getBatchStatus" width="500" alt="Add query operation to getBatchStatus"/>

    3. Add the property mediator to capture the `jobId` value.

        - **name** : jobId
        - **expression** : //jobId/text()
        - **type** : STRING

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-jobidgetstatus-property1-value1.png" title="Add values to capture jobid value" width="600" alt="Add values to capture jobid value"/>

    4. Add the property mediator to capture the `batchId` value.

        - **name** : batchId
        - **expression** : //batchId/text()
        - **type** : STRING

        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-batchidgetstatus-property1-value1.png" title="Add values to capture batchId value" width="600" alt="Add values to capture batchId value"/>

3. Forward the backend response to the API caller.

    When you invoke the created resource, the request message goes through the `/getStatusOfBatch` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.

    1. Drag and drop the **respond mediator** to the **Design view**.

Now you can switch into the Source view and check the XML configuration files of the created API and sequences.

??? note "create.xml"
    ```
    <?xml version="1.0" encoding="UTF-8"?>
    <api context="/salesforce" name="Salesforcebulk-API" xmlns="http://ws.apache.org/ns/synapse">
        <resource methods="POST" url-mapping="/insertEmployeeBulkRecords">
            <inSequence>
                <property expression="//object/text()" name="objectName" scope="default" type="STRING"/>
                <property expression="//source/text()" name="source" scope="default" type="STRING"/>
                <salesforcebulk.init>
                    <apiUrl>https://ap17.salesforce.com</apiUrl>
                    <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
                    <apiVersion>34</apiVersion>
                    <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
                    <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
                    <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
                </salesforcebulk.init>
                <salesforcebulk.createJob>
                    <operation>insert</operation>
                    <object>{$ctx:objectName}</object>
                    <contentType>XML</contentType>
                </salesforcebulk.createJob>
                <property expression="//n0:jobInfo/n0:id" name="jobId" scope="default" type="STRING" xmlns:n0="http://www.force.com/2009/06/asyncapi/dataload"/>
                <fileconnector.read>
                    <source>{$ctx:source}</source>
                    <contentType>text/plain</contentType>
                    <filePattern>.*.csv</filePattern>
                </fileconnector.read>
                <datamapper config="gov:datamapper/NewConfig.dmc" inputSchema="gov:datamapper/NewConfig_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/NewConfig_outputSchema.json" outputType="XML" xsltStyleSheet="gov:datamapper/NewConfig_xsltStyleSheet.xml"/>
                <salesforcebulk.init>
                    <apiUrl>https://ap17.salesforce.com</apiUrl>
                    <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
                    <apiVersion>34</apiVersion>
                    <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
                    <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
                    <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
                </salesforcebulk.init>
                <salesforcebulk.addBatch>
                    <objects>{//values}</objects>
                    <jobId>{$ctx:jobId}</jobId>
                    <isQuery>false</isQuery>
                    <contentType>application/xml</contentType>
                </salesforcebulk.addBatch>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
        <resource methods="POST" url-mapping="/getStatusOfBatch">
            <inSequence>
                <property expression="//jobId/text()" name="jobId" scope="default" type="STRING"/>
                <property expression="//batchId/text()" name="batchId" scope="default" type="STRING"/>
                <salesforcebulk.init>
                    <apiUrl>https://ap17.salesforce.com</apiUrl>
                    <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
                    <apiVersion>34</apiVersion>
                    <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
                    <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
                    <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
                </salesforcebulk.init>
                <salesforcebulk.getBatchStatus>
                    <jobId>{$ctx:jobId}</jobId>
                    <batchId>{$ctx:batchId}</batchId>
                </salesforcebulk.getBatchStatus>
                <respond/>
            </inSequence>
            <outSequence/>
            <faultSequence/>
        </resource>
    </api>
    ```
## Get the project

You can download the ZIP file and extract the contents to get the project code.

<a href="{{base_path}}/assets/attachments/connectors/salesforcebulk.zip">
    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
</a>

!!! tip
    You may need to update the value of the access token and make other such changes before deploying and running this project.

## Deployment

Follow these steps to deploy the exported CApp in the integration runtime.

{!includes/reference/connectors/deploy-capp.md!}

## Testing

Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).

1. Create a new job in the Salesforce account and insert employee details.

    **Sample request**

    `curl -v -X POST -d '<insertRecord><object>Account</object><source>/home/kasun/Documents/SFbulk.csv</source></insertRecord>' "http://localhost:8290/salesforce/insertEmployeeBulkRecords" -H "Content-Type:application/xml"`

    **Expected Response**

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <batchInfo
        xmlns="http://www.force.com/2009/06/asyncapi/dataload">
        <id>7512x000002ywZNAAY</id>
        <jobId>7502x000002ypCDAAY</jobId>
        <state>Queued</state>
        <createdDate>2020-07-16T06:41:53.000Z</createdDate>
        <systemModstamp>2020-07-16T06:41:53.000Z</systemModstamp>
        <numberRecordsProcessed>2</numberRecordsProcessed>
        <numberRecordsFailed>2</numberRecordsFailed>
        <totalProcessingTime>93</totalProcessingTime>
        <apiActiveProcessingTime>2</apiActiveProcessingTime>
        <apexProcessingTime>0</apexProcessingTime>
    </batchInfo>
    ```

2. Get the status of the inserted employee details.
    **Sample request**

    `curl -v -X POST -d '<getBatchStatus><jobId>7502x000002yp73AAA</jobId><batchId>7512x000002ywWrAAI</batchId></getBatchStatus>' "http://localhost:8290/salesforce/getStatusOfBatch" -H "Content-Type:application/xml"`

    **Expected Response**

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <batchInfo
        xmlns="http://www.force.com/2009/06/asyncapi/dataload">
        <id>7512x000002ywWrAAI</id>
        <jobId>7502x000002yp73AAA</jobId>
        <state>Failed</state>
        <stateMessage>InvalidBatch : Records not found</stateMessage>
        <createdDate>2020-07-16T06:14:36.000Z</createdDate>
        <systemModstamp>2020-07-16T06:14:37.000Z</systemModstamp>
        <numberRecordsProcessed>2</numberRecordsProcessed>
        <numberRecordsFailed>0</numberRecordsFailed>
        <totalProcessingTime>93</totalProcessingTime>
        <apiActiveProcessingTime>3</apiActiveProcessingTime>
        <apexProcessingTime>0</apexProcessingTime>
    </batchInfo>
    ```
## What's Next

* To customize this example for your own scenario, see the [SalesforceBulk Connector Reference]({{base_path}}/reference/connectors/salesforce-connectors/salesforcebulk-reference/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-reference.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-reference.md
deleted file mode 100644
index 94ddd48532..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-reference.md
+++ /dev/null
@@ -1,663 +0,0 @@
# SalesforceBulk Connector Reference

The following operations allow you to work with the Salesforce Bulk Connector. Click an operation name to see parameter details and samples on how to use it.

---

## Initialize the connector

The Salesforce Bulk API uses the OAuth protocol to allow application users to securely access data without having to reveal their user credentials. For more information on how authentication is done in Salesforce, see [Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_authentication.htm).

To use the Salesforce Bulk connector, add the `<salesforcebulk.init>` element in your configuration before carrying out any other Salesforce Bulk operations.

??? note "salesforcebulk.init"
    The salesforcebulk.init operation initializes the connector to interact with the Salesforce Bulk API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>apiVersion</td>
            <td>The version of the Salesforce API.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>accessToken</td>
            <td>The access token to authenticate your API calls.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>apiUrl</td>
            <td>The instance URL for your organization.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>tokenEndpointHostname</td>
            <td>The instance URL of the OAuth 2.0 token endpoint used when issuing authentication requests in your application.
If you haven't set any token endpoint hostname, the default hostname [https://login.salesforce.com](https://login.salesforce.com) will be set.</td>
            <td>No</td>
        </tr>
        <tr>
            <td>refreshToken</td>
            <td>The refresh token that you received to refresh the API access token.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>clientId</td>
            <td>The consumer key of the connected application that you created.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>clientSecret</td>
            <td>The consumer secret of the connected application that you created.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>intervalTime</td>
            <td>The time interval in milliseconds, after which you need to check the validity of the access token.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>registryPath</td>
            <td>The registry path of the connector. You must specify the registry path as follows: registryPath = "connectors/salesforcebulk"</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <salesforcebulk.init>
        <apiVersion>{$ctx:apiVersion}</apiVersion>
        <accessToken>{$ctx:accessToken}</accessToken>
        <apiUrl>{$ctx:apiUrl}</apiUrl>
        <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname>
        <refreshToken>{$ctx:refreshToken}</refreshToken>
        <clientId>{$ctx:clientId}</clientId>
        <clientSecret>{$ctx:clientSecret}</clientSecret>
        <intervalTime>{$ctx:intervalTime}</intervalTime>
        <registryPath>{$ctx:registryPath}</registryPath>
    </salesforcebulk.init>
    ```

    **Sample request**

    ```xml
    <salesforcebulk.init>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap17.salesforce.com</apiUrl>
        <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
    </salesforcebulk.init>
    ```
---

## Working with Jobs

??? note "createJob"
    The salesforcebulk.createJob method creates a new job based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_create_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>operation</td>
            <td>The processing operation that the job should perform.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>contentType</td>
            <td>The content type of the job.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>object</td>
            <td>The object type of data that is to be processed by the job.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>externalIdFieldName</td>
            <td>The name of the external ID field of the object.</td>
            <td>No</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the createJob operation.
    ```xml
    <salesforcebulk.createJob>
        <operation>{$ctx:operation}</operation>
        <contentType>{$ctx:contentType}</contentType>
        <object>{$ctx:object}</object>
        <externalIdFieldName>{$ctx:externalIdFieldName}</externalIdFieldName>
    </salesforcebulk.createJob>
    ```

    **Sample request**

    ```xml
    <createJob>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <operation>insert</operation>
        <contentType>CSV</contentType>
        <object>Contact</object>
        <externalIdFieldName>Languages__c</externalIdFieldName>
    </createJob>
    ```

??? note "updateJob"
    The salesforcebulk.updateJob method closes or aborts a job that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_close_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The ID of the job that you either want to close or abort.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>state</td>
            <td>The state of processing of the job.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the updateJob operation.

    ```xml
    <salesforcebulk.updateJob>
        <jobId>{$ctx:jobId}</jobId>
        <state>{$ctx:state}</state>
    </salesforcebulk.updateJob>
    ```

    **Sample request**

    ```xml
    <updateJob>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <jobId>75028000000MCtIAAW</jobId>
        <state>Closed</state>
    </updateJob>
    ```

??? note "getJob"
    The salesforcebulk.getJob method retrieves all details of an existing job based on the job ID that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_jobs_get_details.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The ID of the job whose details you want to retrieve.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the getJob operation.
    ```xml
    <salesforcebulk.getJob>
        <jobId>{$ctx:jobId}</jobId>
    </salesforcebulk.getJob>
    ```

    **Sample request**

    ```xml
    <getJob>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <jobId>75028000000MCqEAAW</jobId>
    </getJob>
    ```

## Working with Batches

??? note "addBatch"
    The salesforcebulk.addBatch method adds a new batch to a job based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_create.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The ID of the job to which you want to add the new batch.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>objects</td>
            <td>A list of records to process.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>contentType</td>
            <td>The content type of the batch data. The content type you specify should be compatible with the content type of the associated job. Possible values are application/xml and text/csv.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>isQuery</td>
            <td>Set to true if the operation is a query.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    ```xml
    <salesforcebulk.addBatch>
        <jobId>{$ctx:jobId}</jobId>
        <objects>{$ctx:objects}</objects>
        <contentType>{$ctx:contentType}</contentType>
        <isQuery>{$ctx:isQuery}</isQuery>
    </salesforcebulk.addBatch>
    ```

    **Sample request**

    Following is a sample request that can be handled by the addBatch operation, where the content type of the batch data is in application/xml format.

    ```xml
    <addBatch>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <apiVersion>34.0</apiVersion>
        <accessToken>5Aep861TSESvWeug_xOdumSVTdDsD7OrADzhKVu9YrPFLB1zce_I21lnWIBR7uaGvedTTXJ4uPswE676H2pQpCZ</accessToken>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <contentType>application/xml</contentType>
        <isQuery>false</isQuery>
        <jobId>75028000000McSwAAK</jobId>
        <objects>
            <values>
                <sObject>
                    <description>Created from Bulk API on Tue Apr 14 11:15:59 PDT 2009</description>
                    <name>Account 711 (batch 0)</name>
                </sObject>
                <sObject>
                    <description>Created from Bulk API on Tue Apr 14 11:15:59 PDT 2009</description>
                    <name>Account 37811 (batch 5)</name>
                </sObject>
            </values>
        </objects>
    </addBatch>
    ```
    Following is a sample request that can be handled by the addBatch operation, where the content type of the batch data is in text/csv format.
    ```xml
    <addBatch>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <apiVersion>34.0</apiVersion>
        <accessToken>5Aep861TSESvWeug_xOdumSVTdDsD7OrADzhKVu9YrPFLB1zce_I21lnWIBR7uaGvedTTXJ4uPswE676H2pQpCZ</accessToken>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <contentType>text/csv</contentType>
        <isQuery>false</isQuery>
        <jobId>75028000000McSwAAK</jobId>
        <objects>
            <values>Name,description
            Tom Dameon,Created from Bulk API
            </values>
        </objects>
    </addBatch>
    ```
    Following is a sample request that can be handled by the addBatch operation, where the operation is a query and the content type of the bulk query results is in application/xml format.

    ```xml
    <addBatch>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <apiVersion>34.0</apiVersion>
        <accessToken>5Aep861TSESvWeug_xOdumSVTdDsD7OrADzhKVu9YrPFLB1zce_I21lnWIBR7uaGvedTTXJ4uPswE676H2pQpCZ</accessToken>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <contentType>application/xml</contentType>
        <isQuery>true</isQuery>
        <jobId>75028000000McSwAAK</jobId>
        <objects>
            <values>SELECT Id, Name FROM Account LIMIT 100</values>
        </objects>
    </addBatch>
    ```

??? note "getBatchStatus"
    The salesforcebulk.getBatchStatus method retrieves the status of a batch based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_check_status.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The unique identifier of the job to which the batch you specify belongs.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>batchId</td>
            <td>The unique identifier of the batch for which you want to retrieve the status.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the getBatchStatus operation.

    ```xml
    <salesforcebulk.getBatchStatus>
        <jobId>{$ctx:jobId}</jobId>
        <batchId>{$ctx:batchId}</batchId>
    </salesforcebulk.getBatchStatus>
    ```

    **Sample request**

    ```xml
    <getBatchStatus>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <accessToken>5Aep861TSESvWeug_xOdumSVTdDsD7OrADzhKVu9YrPFLB1zce_I21lnWIBR7uaGvedTTXJ4uPswE676H2pQpCZ</accessToken>
        <apiVersion>34.0</apiVersion>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <jobId>75028000000M5X0</jobId>
        <batchId>75128000000OZzq</batchId>
    </getBatchStatus>
    ```
note "getBatchResults" - The salesforcebulk.getBatchResults method retrieves results of a batch that has completed processing. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_results.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job to which the batch you specify belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve results.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBatchResults operation. - - ```xml - <salesforcebulk.getBatchRequest> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - </salesforcebulk.getBatchRequest> - ``` - - **Sample request** - - ```xml - <getBatchResults> - <apiUrl>https://ap2.salesforce.com</apiUrl> - <apiVersion>34.0</apiVersion> - <accessToken>5Aep861TSESvWeug_xOdumSVTdDsD7OrADzhKVu9YrPFLB1zce_I21lnWIBR7uaGvedTTXJ4uPswE676H2pQpCZ</accessToken> - <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken> - <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId> - <clientSecret>5437293348319318299</clientSecret> - <intervalTime>1000000</intervalTime> - <registryPath>connectors/SalesforceBulk</registryPath> - <jobId>75028000000M5X0</jobId> - <batchId>75128000000OZzq</batchId> - </getBatchResults> - ``` - -??? note "getBatchRequest" - The salesforcebulk.getBatchRequest method retrieves a batch request based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_request.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job to which the batch you specify belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve the batch request.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBatchRequest operation. - - ```xml - <salesforcebulk.getBatchRequest> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - </salesforcebulk.getBatchRequest> - ``` - - **Sample request** - - ```xml - <getBatchRequest> - <apiVersion>34.0</apiVersion> - <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken> - <apiUrl>https://ap2.salesforce.com</apiUrl> - <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken> - <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId> - <clientSecret>5437293348319318299</clientSecret> - <intervalTime>1000000</intervalTime> - <registryPath>connectors/SalesforceBulk</registryPath> - <jobId>75028000000MCtIAAW</jobId> - <batchId>75128000000OpZFAA0</batchId> - </getBatchRequest> - ``` - -??? note "listBatches" - The salesforcebulk.listBatches method retrieves details of all batches in a job that you specify. 
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_info_all.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The unique identifier of the job for which you want to retrieve batch details.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the listBatches operation.

    ```xml
    <salesforcebulk.listBatches>
        <jobId>{$ctx:jobId}</jobId>
    </salesforcebulk.listBatches>
    ```

    **Sample request**

    ```xml
    <listBatches>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <jobId>75028000000MCqEAAW</jobId>
    </listBatches>
    ```

??? note "getBulkQueryResults"
    The salesforcebulk.getBulkQueryResults method retrieves the bulk query results that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_code_curl_walkthrough.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The unique identifier of the job for which you want to retrieve query results.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>batchId</td>
            <td>The unique identifier of the batch for which you want to retrieve results.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>resultsId</td>
            <td>The unique identifier of the result set that you want to retrieve.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the getBulkQueryResults operation.

    ```xml
    <salesforcebulk.getBulkQueryResults>
        <jobId>{$ctx:jobId}</jobId>
        <batchId>{$ctx:batchId}</batchId>
        <resultsId>{$ctx:resultsId}</resultsId>
    </salesforcebulk.getBulkQueryResults>
    ```

    **Sample request**

    ```xml
    <getBulkQueryResults>
        <apiVersion>34.0</apiVersion>
        <accessToken>00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV</accessToken>
        <apiUrl>https://ap2.salesforce.com</apiUrl>
        <refreshToken>5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL</refreshToken>
        <clientId>3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY</clientId>
        <clientSecret>5437293348319318299</clientSecret>
        <intervalTime>1000000</intervalTime>
        <registryPath>connectors/SalesforceBulk</registryPath>
        <jobId>75028000000MCqEAAW</jobId>
        <batchId>7510K00000Kzb6XQAR</batchId>
        <resultsId>7520K000006xofz</resultsId>
    </getBulkQueryResults>
    ```

---

## Working with Binary Attachments


??? note "createJobToUploadBatchFile"
    The salesforcebulk.createJobToUploadBatchFile method creates a job for batches that contain attachment records.
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/binary_create_job.htm) for more information.

    **Sample configuration**

    Following is a sample configuration of the createJobToUploadBatchFile operation. It creates a job for batches that contain attachment records.

    ```xml
    <salesforcebulk.createJobToUploadBatchFile>
    </salesforcebulk.createJobToUploadBatchFile>
    ```

    **Sample request**

    ```xml
    http://localhost:8280/services/salesforcebulk_uploadBatchFile?apiUrl=https://ap2.salesforce.com&accessToken=00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV&apiVersion=34.0&refreshToken=5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL&clientId=3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY&clientSecret=5437293348319318299&intervalTime=1000000&jobId=75028000000MCv9AAG
    ```

??? note "uploadBatchFile"
    The salesforcebulk.uploadBatchFile method creates a batch of attachment records. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/binary_create_batch.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>jobId</td>
            <td>The ID of the job for which you want to create a batch of attachment records.</td>
            <td>Yes</td>
        </tr>
    </table>

    **Sample configuration**

    Following is a sample configuration of the uploadBatchFile operation. It creates a batch of attachment records in the job that you specify.

    ```xml
    <salesforcebulk.uploadBatchFile>
        <jobId>{$url:jobId}</jobId>
    </salesforcebulk.uploadBatchFile>
    ```

    **Sample request**

    ```xml
    http://localhost:8280/services/salesforcebulk_uploadBatchFile?apiUrl=https://ap2.salesforce.com&accessToken=00D280000011oQO!ARwAQFPbKzWInyf.4veB3NY0hiKNQTxaSiZnPh9AybHplDpix34y_UOdwiKcL3e1_IquaUuO3A54A4thmSplNUQei9ARsNFV&apiVersion=34.0&refreshToken=5Aep861TSESvWeug_wHqvFVePrOMjj7CUFncs.cGdlPln68mKYpAbAJ9l7A5FTFsmqFY8Jl0m6fkIMWkIKc4WKL&clientId=3MVG9ZL0ppGP5UrDGNWmP9oSpiNtudQv6b06Ru7K6UPW5xQhd6vakhfjA2HUGsLSpDOQmO8JGozttODpABcnY&clientSecret=5437293348319318299&intervalTime=1000000&jobId=75028000000MCv9AAG
    ```
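The sample requests above invoke a proxy service named `salesforcebulk_uploadBatchFile` that passes the credentials and the job ID through the query string. The proxy itself is not shown in this reference; a minimal sketch of what such a proxy might look like, assuming the `$url` expressions map one-to-one to the query parameters in the sample URL, is given below:

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="salesforcebulk_uploadBatchFile" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Credentials are read from the query string, matching the sample request above -->
            <salesforcebulk.init>
                <apiVersion>{$url:apiVersion}</apiVersion>
                <accessToken>{$url:accessToken}</accessToken>
                <apiUrl>{$url:apiUrl}</apiUrl>
                <refreshToken>{$url:refreshToken}</refreshToken>
                <clientId>{$url:clientId}</clientId>
                <clientSecret>{$url:clientSecret}</clientSecret>
                <intervalTime>{$url:intervalTime}</intervalTime>
                <registryPath>connectors/SalesforceBulk</registryPath>
            </salesforcebulk.init>
            <!-- The attachment records in the request body are uploaded as a batch of the given job -->
            <salesforcebulk.uploadBatchFile>
                <jobId>{$url:jobId}</jobId>
            </salesforcebulk.uploadBatchFile>
            <respond/>
        </inSequence>
    </target>
</proxy>
```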
\ No newline at end of file
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-connector-example.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-connector-example.md
deleted file mode 100644
index 0611e25140..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-connector-example.md
+++ /dev/null
@@ -1,760 +0,0 @@
# Salesforce Bulk v2.0 Connector Example

The **Salesforce Bulk v2.0 Connector** provides seamless integration with the [Salesforce Bulk v2.0 REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm), enabling easy and efficient handling of large volumes of data. The SalesforceBulk API operates on a RESTful architecture, offering a fast and reliable method to load or delete vast amounts of data from your organization's Salesforce account. With SalesforceBulk, you can perform asynchronous operations like querying, inserting, updating, upserting, or deleting a considerable number of records by submitting them in batches. These batches can be processed by Salesforce in the background, ensuring minimal disruption to your workflow.

## What you'll build

The following example demonstrates how to use the Salesforce Bulk v2.0 Connector for performing various operations on your Salesforce data:

1. Insert account records into Salesforce.
2. Insert account records from a file into Salesforce.
3. Get the created bulk job information.
4. Get the successfully processed records.
5. Get the unprocessed records to a file.
6. Delete the bulk job.
7. Create a query job to get account details.
8. Get the created query job information.
9. Get the successful results of the created query job to a file.

You can use the following resources to achieve your requirements.

1. `/createJobAndUploadData` :
    - Create a new bulk ingest job for the insert operation.
    - Upload the CSV content passed through the request body.
    - Close the job to denote that the upload is completed.
2. `/createJobAndUploadFile` :
    - Create a new bulk ingest job for the insert operation.
    - Read a CSV file using the [File Connector]({{base_path}}/reference/connectors/file-connector/file-connector-overview/).
    - Upload the CSV content read from the file.
    - Close the job to denote that the upload is completed.
3. `/getJobInfo` :
    - Get the bulk job info identified by the jobId passed through the request body.
4. `/getSuccessfulResults` :
    - Retrieve the successful results of the bulk job identified by the `jobId`.
5. `/getUnprocessedResults` :
    - Retrieve the unprocessed records of the bulk job identified by the `jobId`.
    - Store the results in a CSV file.
6. `/deleteJob` :
    - Delete the bulk job identified by the jobId passed through the request body.
7. `/createQuery` :
    - Create a query job in Salesforce.
8. `/getQueryJobInfo` :
    - Get the query job info identified by the jobId passed through the request body.
9. `/getSuccessfulQueryResults` :
    - Retrieve the successful results of the bulk query job identified by the `queryJobId`.
    - Store them in a CSV file.

## Setting up the environment

By default, the `text/csv` message formatter and message builder are not configured in the Micro Integrator settings. To enable this connector to function correctly with `text/csv` data, you will need to follow these steps to add the necessary message formatter and message builder configurations.

1. Open `[MI_Root]/conf/axis2/axis2.xml` using a text editor.
2. Navigate to the `Message Formatters` section.
3. Add a new message formatter for the type `text/csv`.
    - `<messageFormatter contentType="text/csv" class="org.apache.axis2.format.PlainTextFormatter"/>`
4. Navigate to the `Message Builders` section.
5. Add a new message builder for the type `text/csv`.
    - `<messageBuilder contentType="text/csv" class="org.apache.axis2.format.PlainTextBuilder"/>`
6. Save the file and restart the Micro Integrator.
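For quick reference, the two entries added in steps 3 and 5 look like this in `axis2.xml` (each goes under its respective section):

```xml
<!-- Under the Message Formatters section -->
<messageFormatter contentType="text/csv" class="org.apache.axis2.format.PlainTextFormatter"/>
<!-- Under the Message Builders section -->
<messageBuilder contentType="text/csv" class="org.apache.axis2.format.PlainTextBuilder"/>
```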
## Configure the connector in WSO2 Integration Studio

Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences.

### Import the connector

Follow these steps to set up the Integration Project and the Connector Exporter Project.

{!includes/reference/connectors/importing-connector-to-integration-studio.md!}

### Add integration logic

First create a REST API called `Salesforce` in your project.

| Name | Context |
| ---------------- | ---------------- |
| Salesforce | /salesforce |

Create the following resources in the `Salesforce` REST API.

| uri-template | method|
| ---------------- | ------|
| /createJobAndUploadData | POST |
| /createJobAndUploadFile | GET, POST |
| /getJobInfo | POST |
| /getSuccessfulResults | POST |
| /getUnprocessedResults | POST |
| /deleteJob | POST |
| /createQuery | GET, POST |
| /getQueryJobInfo | POST |
| /getSuccessfulQueryResults | POST |


Let's add the operations to the resources in the `Salesforce` API.

#### - /createJobAndUploadData

Users can utilize this resource to send CSV content for upload via the request body. The API uses an `enrich` mediator to store the CSV content in a `csvContent` property. The `uploadJobData` operation then uploads the `csvContent`. After uploading the content, the `closeJob` operation is used to change the job status to `UploadComplete`.

1. In the API insequence, drag and drop the Enrich mediator. Using the Enrich mediator, clone the body content to a property called `csvContent`.
    Enrich source:

    ```xml
    <enrich>
        <source clone="true" type="body"/>
        <target property="csvContent" type="property"/>
    </enrich>
    ```

2. Drag and drop the `createJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double click the operation. It will show you the properties section.
    2. In the properties section, in the General section, click on the `+` button next to `Salesforce Configuration`.
        1. In the `Connection configuration` section, give a name for `Salesforce Connection Name`.
        2. Provide your Salesforce instance URL in the `Instance URL` text box.
        3. Provide your Salesforce connected app's client ID in the `Client ID` text box.
        4. Provide your Salesforce connected app's client secret in the `Client Secret` text box.
        5. Provide your Salesforce connected app's refresh token in the `Refresh Token` text box.
        6. Provide your Salesforce connected app's access token in the `Access Token` text box.
            - We recommend not using the `Access Token`.
            - If you are using an `Access Token`, please update it promptly upon expiration.
            - If you are providing an `Access Token` along with the `Client ID`, `Client Secret`, and `Refresh Token`, and the `Access Token` has expired, kindly remove the expired `Access Token`. An invalid `Access Token` could lead to poor connector performance.
        7. Click finish.
    3. In the properties section, under `Basic`, select `INSERT` in the Operation dropdown.
    4. Input `Account` in the `Object` text box.
    5. Select `COMMA` in the `Column Delimiter` dropdown.
    6. Select `LF` or `CRLF` in the `Line Ending` dropdown based on your operating system. If Windows: `CRLF`; for Unix-based systems: `LF`.

3. Drag and drop a property mediator. Using this mediator, we will extract the jobId from the response and use it in the other operations in this sequence.

    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```

4. Drag and drop the `uploadJobData` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double click the operation. It will show you the properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the `Job ID` text box, enter `$ctx:jobId` as the expression.
    4. For `Input Data`, enter `$ctx:csvContent` as the expression.

    ```xml
    <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
        <inputData>{$ctx:csvContent}</inputData>
    </salesforce_bulkapi_v2.uploadJobData>
    ```

5. Drag and drop the `closeJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.

6. Drag and drop the 'Respond' mediator.

#### - /createJobAndUploadFile

Users can utilize this resource to send CSV content for upload via a CSV file. The API uses the File Connector to store the CSV content in a `csvContent` property. The `uploadJobData` operation then uploads the `csvContent`. After uploading the content, the `closeJob` operation is used to change the job status to `UploadComplete`. A consolidated sketch of this sequence is shown after these steps.

1. Drag and drop the `createJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the properties section, under `Basic`, select `INSERT` in the Operation dropdown.
    4. Input `Account` in the `Object` text box.
    5. Select `COMMA` in the `Column Delimiter` dropdown.
    6. Select `LF` or `CRLF` in the `Line Ending` dropdown based on your operating system. If Windows: `CRLF`; for Unix-based systems: `LF`.

2. Drag and drop a property mediator. Using this mediator, we will extract the jobId from the response and use it in the other operations in this sequence.

    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```

3. Drag and drop the `read` operation from the **[File_Connector]({{base_path}}/reference/connectors/file-connector/file-connector-config/#operations)** section.
    1. Prior to this step, you must configure the **File Connector**. For setup instructions, please refer to the [File Connector Documentation]({{base_path}}/reference/connectors/file-connector/file-connector-overview/).
    2. Create a File Connection and select it.
    3. In the `Basic` section, enter the file path.
    4. In the `Operation Result` section, select `Add Result To` as "Message Property".
    5. Set the `Property Name` as "csvContent".

4. Drag and drop the `uploadJobData` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double click the operation. It will show you the properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the `Job ID` text box, enter `$ctx:jobId` as the expression.
    4. For `Input Data`, enter `$ctx:csvContent` as the expression.

    ```xml
    <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
        <inputData>{$ctx:csvContent}</inputData>
    </salesforce_bulkapi_v2.uploadJobData>
    ```

5. Drag and drop the `closeJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.

6. Drag and drop the 'Respond' mediator.
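Putting the above steps together, the `inSequence` of the `/createJobAndUploadFile` resource could look roughly like the following sketch. The connection name `SF_CONNECTION_CONFIG_NAME_1` and the parameter names of the `createJob` operation are assumptions based on the snippets above and on the `createQueryJob` configuration shown later; the File Connector read step is abbreviated to a comment because its exact configuration depends on your file connection:

```xml
<resource methods="GET POST" uri-template="/createJobAndUploadFile">
    <inSequence>
        <!-- 1. Create a bulk ingest job for the Account object (parameter names assumed) -->
        <salesforce_bulkapi_v2.createJob configKey="SF_CONNECTION_CONFIG_NAME_1">
            <operation>INSERT</operation>
            <object>Account</object>
            <columnDelimiter>COMMA</columnDelimiter>
            <lineEnding>LF</lineEnding>
        </salesforce_bulkapi_v2.createJob>
        <!-- 2. Remember the job ID returned by createJob -->
        <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
        <!-- 3. Read the CSV file into the csvContent property via the File Connector (configuration omitted) -->
        <!-- 4. Upload the CSV content to the job -->
        <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1">
            <jobId>{$ctx:jobId}</jobId>
            <inputData>{$ctx:csvContent}</inputData>
        </salesforce_bulkapi_v2.uploadJobData>
        <!-- 5. Mark the upload as complete so Salesforce starts processing -->
        <salesforce_bulkapi_v2.closeJob configKey="SF_CONNECTION_CONFIG_NAME_1">
            <jobId>{$ctx:jobId}</jobId>
        </salesforce_bulkapi_v2.closeJob>
        <respond/>
    </inSequence>
</resource>
```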
-

#### - /getJobInfo

 Using this resource, users can get the job information.

 1. Drag and drop a 'Property' mediator. This mediator will extract the jobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `getJobInfo` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.


    ```xml
    <salesforce_bulkapi_v2.getJobInfo configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.getJobInfo>
    ```

 3. Drag and drop the 'Respond' mediator.

#### - /getSuccessfulResults

 Using this resource, users can retrieve the successfully processed records of a particular bulk job.

 1. Drag and drop a 'Property' mediator. This mediator will extract the jobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `getSuccessfulResults` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.
    4. In the 'Output Type' dropdown, select `JSON` or `CSV`.


    ```xml
    <salesforce_bulkapi_v2.getSuccessfulResults configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
        <outputType>JSON</outputType>
        <includeResultTo>BODY</includeResultTo>
    </salesforce_bulkapi_v2.getSuccessfulResults>
    ```

 3. Drag and drop the 'Respond' mediator.

#### - /getUnprocessedResults

 Using this resource, users can retrieve the unprocessed records of a particular bulk job.

 1. Drag and drop a 'Property' mediator. This mediator will extract the jobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `getUnprocessedResults` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.
    4. In the 'Output Type' dropdown, select `CSV`.


    ```xml
    <salesforce_bulkapi_v2.getUnprocessedResults configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
        <outputType>CSV</outputType>
        <includeResultTo>BODY</includeResultTo>
    </salesforce_bulkapi_v2.getUnprocessedResults>
    ```


 3. Drag and drop the `write` operation from the **[File_Connector]({{base_path}}/reference/connectors/file-connector/file-connector-config/#operations)** section.
    1. In the `General` section of the properties, select the File Connection configuration you created.
    2. In the `Basic` section, enter the file path.

 4. Drag and drop the 'Respond' mediator.
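 For reference, the `write` step appears as follows in the source view. This is the configuration used in the complete API source shown later in this guide; the connection name `MY_CONN` and the file path are example values:

    ```xml
    <file.write configKey="MY_CONN">
        <filePath>path/to/folder/out.csv</filePath>
        <mimeType>Automatic</mimeType>
        <writeMode>Append</writeMode>
        <enableStreaming>false</enableStreaming>
        <appendNewLine>false</appendNewLine>
        <enableLock>false</enableLock>
        <includeResultTo>Message Body</includeResultTo>
        <updateLastModified>true</updateLastModified>
    </file.write>
    ```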
#### - /deleteJob

 Using this resource, users can delete a particular bulk job.

 1. Drag and drop a 'Property' mediator. This mediator will extract the jobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `deleteJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.


    ```xml
    <salesforce_bulkapi_v2.deleteJob configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.deleteJob>
    ```

 3. Drag and drop the 'Respond' mediator.

#### - /createQuery

 Using this resource, users can create a bulk query job in Salesforce.

 1. Drag and drop the `createQueryJob` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the properties section, under `Basic`, select `QUERY` in the Operation dropdown.
    4. Input `SELECT Id, name FROM Account` in the `Query` text box.
    5. Select `COMMA` in the `Column Delimiter` dropdown.
    6. Select `LF` or `CRLF` in the `Line Ending` dropdown based on your operating system: `CRLF` for Windows, `LF` for Unix-based systems.


    ```xml
    <salesforce_bulkapi_v2.createQueryJob configKey="SF_CONFIG_1">
        <query>SELECT Name FROM Account</query>
        <operation>QUERY</operation>
        <columnDelimiter>COMMA</columnDelimiter>
        <lineEnding>LF</lineEnding>
    </salesforce_bulkapi_v2.createQueryJob>
    ```

 2. Drag and drop the 'Respond' mediator.

#### - /getQueryJobInfo

 Using this resource, users can get the query job information.

 1. Drag and drop a 'Property' mediator. This mediator will extract the jobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `getQueryJobInfo` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:jobId`.


    ```xml
    <salesforce_bulkapi_v2.getQueryJobInfo configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.getQueryJobInfo>
    ```

 3. Drag and drop the 'Respond' mediator.

#### - /getSuccessfulQueryResults

 Using this resource, users can get the successful query results from Salesforce.

 1. Drag and drop a 'Property' mediator. This mediator will extract the queryJobId from the request payload and enable its use in other operations within this sequence.
    ```xml
    <property expression="json-eval($.id)" name="queryJobId" scope="default" type="STRING"/>
    ```
 2. Drag and drop the `getQueryJobResults` operation from the **Salesforce_bulkapi_v2_Connector** section.
    1. Double-click the operation to view its properties section.
    2. In the 'General' section of the properties, select the Salesforce connection configuration you created.
    3. In the 'Job ID' text box, enter the expression `$ctx:queryJobId`.
- ```xml - <salesforce_bulkapi_v2.getQueryJobResults configKey="SF_CONNECTION_CONFIG_NAME_1"> - <queryJobId>{$ctx:queryJobId}</queryJobId> - <outputType>JSON</outputType> - <includeResultTo>FILE</includeResultTo> - <filePath>/path/to/file/out.json</filePath> - </salesforce_bulkapi_v2.getQueryJobResults> - ``` - - > **Note:** The includeResultTo 'FILE' feature is `deprecated`. - - 3. Drag and drop 'Respond' mediator. - - -??? info "The resources are now ready to be tested. The API source should resemble the following. Expand to see." - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <api context="/salesforce" name="createjob" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" uri-template="/createJobAndUploadData"> - <inSequence> - <enrich> - <source clone="true" type="body"/> - <target property="csvContent" type="property"/> - </enrich> - <salesforce_bulkapi_v2.createJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <operation>INSERT</operation> - <object>Account</object> - <columnDelimiter>COMMA</columnDelimiter> - <lineEnding>LF</lineEnding> - </salesforce_bulkapi_v2.createJob> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - <inputData>{$ctx:csvContent}</inputData> - </salesforce_bulkapi_v2.uploadJobData> - <salesforce_bulkapi_v2.closeJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - </salesforce_bulkapi_v2.closeJob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST GET" uri-template="/createJobAndUploadFile"> - <inSequence> - <salesforce_bulkapi_v2.createJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <operation>INSERT</operation> - <object>Account</object> - <columnDelimiter>COMMA</columnDelimiter> - <lineEnding>LF</lineEnding> - </salesforce_bulkapi_v2.createJob> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <file.read configKey="MY_CONN"> - <path>data.csv</path> - <readMode>Complete File</readMode> - <startLineNum>0</startLineNum> - <lineNum>0</lineNum> - <includeResultTo>Message Property</includeResultTo> - <resultPropertyName>csvContent</resultPropertyName> - <enableStreaming>false</enableStreaming> - <enableLock>false</enableLock> - </file.read> - <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - <inputData>{$ctx:csvContent}</inputData> - </salesforce_bulkapi_v2.uploadJobData> - <salesforce_bulkapi_v2.closeJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - </salesforce_bulkapi_v2.closeJob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/getJobInfo"> - <inSequence> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.getJobInfo configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - </salesforce_bulkapi_v2.getJobInfo> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/getSuccessfulResults"> - <inSequence> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.getSuccessfulResults configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - <outputType>JSON</outputType> - <includeResultTo>BODY</includeResultTo> - 
</salesforce_bulkapi_v2.getSuccessfulResults> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/getUnprocessedResults"> - <inSequence> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.getUnprocessedResults configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - <outputType>CSV</outputType> - <includeResultTo>BODY</includeResultTo> - </salesforce_bulkapi_v2.getUnprocessedResults> - <file.write configKey="MY_CONN"> - <filePath>path/to/folder/out.csv</filePath> - <mimeType>Automatic</mimeType> - <writeMode>Append</writeMode> - <enableStreaming>false</enableStreaming> - <appendNewLine>false</appendNewLine> - <enableLock>false</enableLock> - <includeResultTo>Message Body</includeResultTo> - <updateLastModified>true</updateLastModified> - </file.write> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/deleteJob"> - <inSequence> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.deleteJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <jobId>{$ctx:jobId}</jobId> - </salesforce_bulkapi_v2.deleteJob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST GET" uri-template="/createQuery"> - <inSequence> - <salesforce_bulkapi_v2.createQueryJob configKey="SF_CONNECTION_CONFIG_NAME_1"> - <query>SELECT Id, name FROM Account</query> - <operation>QUERY</operation> - <columnDelimiter>COMMA</columnDelimiter> - <lineEnding>LF</lineEnding> - </salesforce_bulkapi_v2.createQueryJob> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/getQueryJobInfo"> - <inSequence> - <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/> - <salesforce_bulkapi_v2.getQueryJobInfo configKey="SF_CONFIG_1"> - <queryJobId>{$ctx:jobId}</queryJobId> - </salesforce_bulkapi_v2.getQueryJobInfo> - <log level="full"/> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - <resource methods="POST" uri-template="/getSuccessfulQueryResults"> - <inSequence> - <property expression="json-eval($.id)" name="queryJobId" scope="default" type="STRING"/> - <log level="custom"> - <property expression="$ctx:queryJobId" name="testprop1"/> - </log> - <salesforce_bulkapi_v2.getQueryJobResults configKey="SF_CONNECTION_CONFIG_NAME_1"> - <queryJobId>{$ctx:queryJobId}</queryJobId> - <outputType>JSON</outputType> - <includeResultTo>FILE</includeResultTo> - <filePath>path/to/file/out.json</filePath> - </salesforce_bulkapi_v2.getQueryJobResults> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` - -### Testing the resources - -Let's test the API. Start the MI and deploy the API. - -1. Let's create a bulk ingest job using our `/createJobAndUploadData` resource. 
To invoke the resource, use the following curl command: - ```bash - curl --location 'http://localhost:8290/salesforce/createJobAndUploadData' \ - --header 'Content-Type: text/plain' \ - --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \ - --data 'Name,ShippingCity,NumberOfEmployees,AnnualRevenue,Website,Description - Lorem Ipsum,Milano,2676,912260031,https://ft.com/lacus/at.jsp,"Lorem ipsum dolor sit amet"' - ``` - You will receive a response similar to the following: - ```json - { - "id": "7508d00000Ihhl5AAB", - "operation": "insert", - "object": "Account", - "createdById": "0058d000006mtd1AAA", - "createdDate": "2023-03-16T06:43:09.000+0000", - "systemModstamp": "2023-03-16T06:43:09.000+0000", - "state": "UploadComplete", - "concurrencyMode": "Parallel", - "contentType": "CSV", - "apiVersion": 57.0 - } - ``` - Note down the `id` from the response. - -2. Let's create a bulk ingest job using our `/createJobAndUploadFile` resource. To invoke the resource, use the following curl command: - ```bash - curl --location 'http://localhost:8290/salesforce/createJobAndUploadFile' \ - --header 'Content-Type: text/plain' \ - --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' - ``` - You will receive a response similar to the following: - ```json - { - "id": "7508d00000Ahhl5AAB", - "operation": "insert", - "object": "Account", - "createdById": "0058d000006mtd1AAA", - "createdDate": "2023-03-16T06:43:09.000+0000", - "systemModstamp": "2023-03-16T06:43:09.000+0000", - "state": "UploadComplete", - "concurrencyMode": "Parallel", - "contentType": "CSV", - "apiVersion": 57.0 - } - ``` - Note down the `id` from the response. - -3. Let's get the job information of the bulk job using our `/getJobInfo` resource. To invoke the resource, please use the following curl command: - ```bash - curl --location 'http://localhost:8290/salesforce/getJobInfo' \ - --header 'Content-Type: application/json' \ - --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \ - --data '{ - "id" : "7508d00000Ihhl5AAB" - }' - ``` - Make sure you replace the `id` value. You will receive a response similar to the following: - - ```json - { - "id": "7508d00000Ihhl5AAB", - "operation": "insert", - "object": "Account", - "createdById": "0058d000006mtd1AAA", - "createdDate": "2023-03-16T06:43:09.000+0000", - "systemModstamp": "2023-03-16T06:43:13.000+0000", - "state": "JobComplete", - "concurrencyMode": "Parallel", - "contentType": "CSV", - "apiVersion": 57.0, - "jobType": "V2Ingest", - "lineEnding": "LF", - "columnDelimiter": "COMMA", - "numberRecordsProcessed": 1, - "numberRecordsFailed": 0, - "retries": 0, - "totalProcessingTime": 139, - "apiActiveProcessingTime": 81, - "apexProcessingTime": 0 - } - ``` - -4. Let's get the successfully processed records using our `/getSuccessfulResults` resource. To invoke the resource, please use the following curl command: - ```bash - curl --location 'http://localhost:8290/salesforce/getSuccessfulResults' \ - --header 'Content-Type: application/json' \ - --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \ - --data '{ - "id" : "7508d00000Ihhl5AAB" - }' - ``` - Make sure you replace the `id` value. 
You will receive a response similar to the following:

    ```json
    [
        {
            "sf__Id": "0018d00000UVCjuAAH",
            "sf__Created": "true",
            "Name": "Lorem Ipsum",
            "ShippingCity": "Milano",
            "NumberOfEmployees": "2676",
            "AnnualRevenue": "9.12260031E8",
            "Website": "https://ft.com/lacus/at.jsp",
            "Description": "Lorem ipsum dolor sit amet"
        }
    ]
    ```

5. Let's get the unprocessed records using our `/getUnprocessedResults` resource. To invoke the resource, please use the following curl command:
    ```bash
    curl --location 'http://localhost:8290/salesforce/getUnprocessedResults' \
    --header 'Content-Type: application/json' \
    --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \
    --data '{
        "id" : "7508d00000Ihhl5AAB"
    }'
    ```
    Make sure you replace the `id` value. Upon successful execution, you will receive a `200 OK` response, and the output will be written to the designated file.
    ```json
    {
        "result": "success"
    }
    ```

6. Let's delete the bulk job using our `/deleteJob` resource. To invoke the resource, please use the following curl command:

    ```bash
    curl --location 'http://localhost:8290/salesforce/deleteJob' \
    --header 'Content-Type: application/json' \
    --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \
    --data '{
        "id" : "7508d00000Ihhl5AAB"
    }'
    ```
    Make sure you replace the `id` value.
    Upon successful execution, you will receive a response similar to the following:
    ```json
    {
        "result": "success"
    }
    ```
    If the provided job ID does not exist, the API responds with a `404 Not Found` response.

7. Let's create a bulk query job using our `/createQuery` resource. To invoke the resource, please use the following curl command:

    ```bash
    curl --location --request POST 'http://localhost:8290/salesforce/createQuery' \
    --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1'
    ```
    You will receive a response similar to the following:

    ```json
    {
        "id": "7508d00000IhhkKAAR",
        "operation": "query",
        "object": "Account",
        "createdById": "0058d000006mtd1AAA",
        "createdDate": "2023-03-16T06:37:50.000+0000",
        "systemModstamp": "2023-03-16T06:37:50.000+0000",
        "state": "UploadComplete",
        "concurrencyMode": "Parallel",
        "contentType": "CSV",
        "apiVersion": 57.0,
        "lineEnding": "LF",
        "columnDelimiter": "COMMA"
    }
    ```
    Note down the `id` from the response.

8. Let's get the job information of the query job using our `/getQueryJobInfo` resource. To invoke the resource, please use the following curl command:
    ```bash
    curl --location 'http://localhost:8290/salesforce/getQueryJobInfo' \
    --header 'Content-Type: application/json' \
    --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \
    --data '{
        "id" : "7508d00000Ihhl5AAB"
    }'
    ```
    Make sure you replace the `id` value. You will receive a response similar to the following:

    ```json
    {
        "id":"7508d00000Ihhl5AAB",
        "operation":"query",
        "object":"Account",
        "createdById":"0055j000008dizPAAQ",
        "createdDate":"2023-08-23T16:12:50.000+0000",
        "systemModstamp":"2023-08-23T16:12:50.000+0000",
        "state":"JobComplete",
        "concurrencyMode":"Parallel",
        "contentType":"CSV",
        "apiVersion":57.0,
        "jobType":"V2Query",
        "lineEnding":"LF",
        "columnDelimiter":"COMMA",
        "numberRecordsProcessed":28,
        "retries":0,
        "totalProcessingTime":255
    }
    ```
9. Let's get the query results using our `/getSuccessfulQueryResults` resource. To invoke the resource, please use the following curl command:

    ```bash
    curl --location 'http://localhost:8290/salesforce/getSuccessfulQueryResults' \
    --header 'Content-Type: application/json' \
    --header 'Cookie: CookieConsentPolicy=0:1; LSKey-c$CookieConsentPolicy=0:1' \
    --data '{
        "id" : "7508d00000IhhkKAAR"
    }'
    ```
    Make sure you replace the `id` value. You will receive a response similar to the following:
    ```json
    [
        {
            "Id": "0018d00000SIDcyAAH",
            "Name": "Sample Account for Entitlements"
        }
    ]
    ```

## What's Next

- To customize this example for your own scenario, see the [Salesforce bulk V2 Connector Configuration]({{base_path}}/reference/connectors/salesforce-connectors/salesforcebulk-v2-reference/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-reference.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-reference.md
deleted file mode 100644
index 12e1c65174..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-v2-reference.md
+++ /dev/null
@@ -1,552 +0,0 @@
-# SalesforceBulkV2 Connector Reference

The following operations allow you to work with the Salesforce Bulk V2 Connector. Click an operation name to see parameter details and samples on how to use it.

The Salesforce Bulk API uses the OAuth protocol to allow application users to securely access data without having to reveal their user credentials. For more information on how authentication is done in Salesforce, see [Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_authentication.htm).


## Bulk API 2.0 Connector Configuration

??? note "Connection configuration"
    In the 'Properties' section of each operation, users can configure connection-related information. Once the configuration is created, it can be reused in other operations.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce Configuration Name</td>
            <td>Name of the configuration.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Instance URL</td>
            <td>Salesforce instance URL.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Client ID</td>
            <td>Salesforce connected app's client ID.</td>
            <td>No. The connector renews the access token if it receives a 4xx response and clientId, clientSecret, and refreshToken are configured.</td>
        </tr>
        <tr>
            <td>Client Secret</td>
            <td>Salesforce connected app's client secret.</td>
            <td>No. The connector renews the access token if it receives a 4xx response and clientId, clientSecret, and refreshToken are configured.</td>
        </tr>
        <tr>
            <td>Refresh Token</td>
            <td>Salesforce connected app's refresh token.</td>
            <td>No. The connector renews the access token if it receives a 4xx response and clientId, clientSecret, and refreshToken are configured.</td>
        </tr>
        <tr>
            <td>Access Token</td>
            <td>Salesforce connected app's access token.</td>
            <td>Optional if clientId, clientSecret, and refreshToken are configured. Required otherwise.</td>
        </tr>
    </table>

    > **Note:** It is recommended to use the OAuth client credentials (`Client ID` and `Client Secret`) along with the `Refresh Token` instead of an `Access Token`.
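    As a minimal sketch, each operation references this connection configuration through its `configKey` attribute. For example, the `deleteJob` invocation from the example earlier in this document, where `SF_CONNECTION_CONFIG_NAME_1` is the example configuration name:

    ```xml
    <salesforce_bulkapi_v2.deleteJob configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.deleteJob>
    ```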
## Bulk API 2.0 Ingest

??? note "salesforce_bulkapi_v2.abortJob"
    The `salesforce_bulkapi_v2.abortJob` operation aborts a bulk ingest job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/close_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.createJob"
    The `salesforce_bulkapi_v2.createJob` operation creates a bulk ingest job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/create_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Operation</td>
            <td>The processing operation for the job, for example insert, update, upsert, delete, or hardDelete.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Object</td>
            <td>The object type for the data being processed. Use only a single object type per job.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Column Delimiter</td>
            <td>The column delimiter used for CSV job data. The default value is COMMA. Valid values are:
                <ul>
                    <li>BACKQUOTE—backquote character (``)</li>
                    <li>CARET—caret character (^)</li>
                    <li>COMMA—comma character (,) which is the default delimiter</li>
                    <li>PIPE—pipe character (|)</li>
                    <li>SEMICOLON—semicolon character (;)</li>
                    <li>TAB—tab character</li>
                </ul>
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Line Ending</td>
            <td>The line ending used for CSV job data, marking the end of a data row. The default is LF. Valid values are:
                <ul>
                    <li>LF—linefeed character</li>
                    <li>CRLF—carriage return character followed by a linefeed character</li>
                </ul>
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Assignment Rule ID</td>
            <td>The ID of an assignment rule to run for a Case or a Lead. The assignment rule can be active or inactive. The ID can be retrieved by using the Lightning Platform SOAP API or the Lightning Platform REST API to query the AssignmentRule object.</td>
            <td>No</td>
        </tr>
        <tr>
            <td>External ID Field Name</td>
            <td>The external ID field in the object being updated. Only needed for upsert operations. Field values must also exist in CSV job data.</td>
            <td>No</td>
        </tr>
    </table>
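    For reference, a complete ingest flow from the example earlier in this document chains `createJob`, `uploadJobData`, and `closeJob`; the connection name and property names are example values from that guide:

    ```xml
    <salesforce_bulkapi_v2.createJob configKey="SF_CONNECTION_CONFIG_NAME_1">
        <operation>INSERT</operation>
        <object>Account</object>
        <columnDelimiter>COMMA</columnDelimiter>
        <lineEnding>LF</lineEnding>
    </salesforce_bulkapi_v2.createJob>
    <property expression="json-eval($.id)" name="jobId" scope="default" type="STRING"/>
    <salesforce_bulkapi_v2.uploadJobData configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
        <inputData>{$ctx:csvContent}</inputData>
    </salesforce_bulkapi_v2.uploadJobData>
    <salesforce_bulkapi_v2.closeJob configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.closeJob>
    ```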
??? note "salesforce_bulkapi_v2.closeJob"
    The `salesforce_bulkapi_v2.closeJob` operation closes a bulk ingest job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/close_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.deleteJob"
    The `salesforce_bulkapi_v2.deleteJob` operation deletes a bulk ingest job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/delete_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.getAllJobInfo"
    The `salesforce_bulkapi_v2.getAllJobInfo` operation retrieves information about all bulk ingest jobs from Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_all_jobs.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>IsPKChunkingEnabled</td>
            <td>If set to true, the request only returns information about jobs where PK Chunking is enabled. This only applies to Bulk API (not Bulk API 2.0) jobs.</td>
            <td>No</td>
        </tr>
        <tr>
            <td>Job Type</td>
            <td>Gets information only about jobs matching the specified job type. Possible values are:
                <ul>
                    <li>Classic—Bulk API jobs. This includes both query jobs and ingest jobs.</li>
                    <li>BigObjectIngest—BigObjects ingest jobs.</li>
                    <li>V2Ingest—Bulk API 2.0 ingest (upload and upsert) jobs.</li>
                    <li>All—Gets information about all job types.</li>
                </ul>
            </td>
            <td>No</td>
        </tr>
        <tr>
            <td>Query Locator</td>
            <td>Gets information about jobs starting with that locator value.</td>
            <td>No</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.getFailedResults"
    The `salesforce_bulkapi_v2.getFailedResults` operation retrieves the failed records of a specific bulk job from Salesforce using the Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_failed_results.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Output Type</td>
            <td>The response content type.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Add Results To</td>
            <td>Store the result in FILE or BODY.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>File Path</td>
            <td>The file path to store the results.</td>
            <td>Required if `FILE` is selected in `Add Results To`.</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.getJobInfo"
    The `salesforce_bulkapi_v2.getJobInfo` operation retrieves bulk ingest job information from Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_info.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
    </table>
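    For reference, the `getJobInfo` operation as used in the example earlier in this document:

    ```xml
    <salesforce_bulkapi_v2.getJobInfo configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.getJobInfo>
    ```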
??? note "salesforce_bulkapi_v2.getSuccessfulResults"
    The `salesforce_bulkapi_v2.getSuccessfulResults` operation retrieves the successfully processed records of a specific bulk job from Salesforce using the Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_successful_results.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Output Type</td>
            <td>The response content type.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Add Results To</td>
            <td>Store the result in FILE or BODY.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>File Path</td>
            <td>The file path to store the results.</td>
            <td>Required if `FILE` is selected in `Add Results To`.</td>
        </tr>
    </table>


??? note "salesforce_bulkapi_v2.getUnprocessedResults"
    The `salesforce_bulkapi_v2.getUnprocessedResults` operation retrieves the unprocessed records of a specific bulk job from Salesforce using the Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_unprocessed_results.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Output Type</td>
            <td>The response content type.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Add Results To</td>
            <td>Store the result in FILE or BODY.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>File Path</td>
            <td>The file path to store the results.</td>
            <td>Required if `FILE` is selected in `Add Results To`.</td>
        </tr>
    </table>



??? note "salesforce_bulkapi_v2.uploadJobData"
    The `salesforce_bulkapi_v2.uploadJobData` operation uploads the CSV records to a bulk job in Salesforce using the Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/upload_job_data.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Bulk job ID</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Input Data</td>
            <td>The CSV content that needs to be uploaded.</td>
            <td>Yes</td>
        </tr>
    </table>
## Bulk API 2.0 Query

??? note "salesforce_bulkapi_v2.createQueryJob"
    The `salesforce_bulkapi_v2.createQueryJob` operation creates a bulk query job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_create_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Operation</td>
            <td>The type of query. Possible values are: QUERY, QUERY_ALL.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Query</td>
            <td>The SOQL query for the job.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Column Delimiter</td>
            <td>The column delimiter used for CSV job data. The default value is COMMA. Valid values are:
                <ul>
                    <li>BACKQUOTE—backquote character (``)</li>
                    <li>CARET—caret character (^)</li>
                    <li>COMMA—comma character (,) which is the default delimiter</li>
                    <li>PIPE—pipe character (|)</li>
                    <li>SEMICOLON—semicolon character (;)</li>
                    <li>TAB—tab character</li>
                </ul>
            </td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Line Ending</td>
            <td>The line ending used for CSV job data, marking the end of a data row. The default is LF. Valid values are: LF—linefeed character, CRLF—carriage return character followed by a linefeed character.</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.abortQueryJob"
    The `salesforce_bulkapi_v2.abortQueryJob` operation aborts a bulk query job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_abort_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Query Job ID</td>
            <td>Bulk Query job ID</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.deleteQueryJob"
    The `salesforce_bulkapi_v2.deleteQueryJob` operation deletes a bulk query job in Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_delete_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Query Job ID</td>
            <td>Bulk Query job ID</td>
            <td>Yes</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.getQueryJobInfo"
    The `salesforce_bulkapi_v2.getQueryJobInfo` operation retrieves bulk query job information from Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_get_one_job.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>Query job ID</td>
            <td>Yes</td>
        </tr>
    </table>
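    For reference, the `getQueryJobInfo` operation as used in the example earlier in this document:

    ```xml
    <salesforce_bulkapi_v2.getQueryJobInfo configKey="SF_CONNECTION_CONFIG_NAME_1">
        <jobId>{$ctx:jobId}</jobId>
    </salesforce_bulkapi_v2.getQueryJobInfo>
    ```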
??? note "salesforce_bulkapi_v2.getAllQueryJobInfo"
    The `salesforce_bulkapi_v2.getAllQueryJobInfo` operation retrieves information about all bulk query jobs from Salesforce using Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_get_all_jobs.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>IsPKChunkingEnabled</td>
            <td>If set to true, the request only returns information about jobs where PK Chunking is enabled. This only applies to Bulk API (not Bulk API 2.0) jobs.</td>
            <td>No</td>
        </tr>
        <tr>
            <td>Job Type</td>
            <td>Gets information only about jobs matching the specified job type. Possible values are:
                <ul>
                    <li>Classic—Bulk API jobs. This includes both query jobs and ingest jobs.</li>
                    <li>V2Query—Bulk API 2.0 query jobs.</li>
                    <li>V2Ingest—Bulk API 2.0 ingest (upload and upsert) jobs.</li>
                    <li>All—Gets information about all job types.</li>
                </ul>
            </td>
            <td>No</td>
        </tr>
        <tr>
            <td>Query Locator</td>
            <td>Gets information about jobs starting with that locator value.</td>
            <td>No</td>
        </tr>
    </table>

??? note "salesforce_bulkapi_v2.getQueryJobResults"
    The `salesforce_bulkapi_v2.getQueryJobResults` operation retrieves the results of a specified bulk query job from Salesforce using the Salesforce Bulk API v2. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_get_job_results.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
        </tr>
        <tr>
            <td>Salesforce configuration</td>
            <td>The Salesforce configuration to store OAuth related data.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Job ID</td>
            <td>The ID of the bulk query job.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Locator</td>
            <td>A string that identifies a specific set of query results. Providing a value for this parameter returns only that set of results.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>Max Records</td>
            <td>The maximum number of records to retrieve per set of results for the query.</td>
            <td>Optional</td>
        </tr>
        <tr>
            <td>Output Type</td>
            <td>The response content type.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>Add Results To</td>
            <td>Store the result in FILE or BODY.</td>
            <td>Yes</td>
        </tr>
        <tr>
            <td>File Path</td>
            <td>The file path to store the results.</td>
            <td>Required if `FILE` is selected in `Add Results To`.</td>
        </tr>
    </table>
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration.md b/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration.md
deleted file mode 100644
index df3b8d732f..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Setting up the PushTopic in Salesforce

This documentation explains how to set up the Salesforce environment to connect with the WSO2 Salesforce Inbound Endpoint. Please follow the steps given below:

* Create a custom or standard object in Salesforce.
* Create a PushTopic.
* Subscribe to the PushTopic channel.
* Test the PushTopic channel.
* Reset the security token.

## Create a custom or standard object in Salesforce

As the first step, you need to [create a custom object in Salesforce](https://developer.salesforce.com/docs/atlas.en-us.202.0.api_streaming.meta/api_streaming/create_object.htm). In this scenario, we use the standard `Account` object to store the records.

## Creating a PushTopic

The [PushTopic](https://developer.salesforce.com/docs/atlas.en-us.202.0.api_streaming.meta/api_streaming/create_a_pushtopic.htm) record contains a SOQL query. Event notifications are generated for updates that match the query. Alternatively, you can also use Workbench to create a PushTopic. In this sample, we use the Salesforce Developer Console to create a PushTopic.
1. **Login** to the **Salesforce account**. Navigate to the top right corner of the **Home page** and click the **Setup** icon. Then select **Developer Console**.

    <img src="{{base_path}}/assets/img/integrate/connectors/open-the-developer-console-updated.png" title="Open the Developer Console." width="500" alt="Open the Developer Console."/>

2. In the Developer Console, click **Debug** -> Open **Execute Anonymous Window**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-execute-anonymous-window-updated.png" title="Open the Anonymous Window." width="500" alt="Open the Anonymous Window."/>

3. Add the following entry in the **Enter Apex Code** window and click **Execute**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-enter-apex-code-updated.png" title="Enter Apex code." width="700" alt="Enter Apex code."/>

    ```
    PushTopic pushTopic = new PushTopic();
    pushTopic.Name = 'Account';
    pushTopic.Query = 'SELECT Id, Name FROM Account';
    pushTopic.ApiVersion = 37.0;
    pushTopic.NotifyForOperationCreate = true;
    pushTopic.NotifyForOperationUpdate = true;
    pushTopic.NotifyForOperationUndelete = true;
    pushTopic.NotifyForOperationDelete = true;
    pushTopic.NotifyForFields = 'Referenced';
    insert pushTopic;
    ```
    We are essentially creating a SOQL query with a few extra parameters that watch for changes in a specified object. Because this PushTopic is created for Salesforce's Account object, once it executes successfully, Salesforce is ready to post a notification to the WSO2 Salesforce Inbound Endpoint whenever a change is made to the Account object.

## Subscribing to the PushTopic Channel

In this step, we [subscribe](https://developer.salesforce.com/docs/atlas.en-us.202.0.api_streaming.meta/api_streaming/subscribe_to_pushtopic_channel.htm) to the channel that we created with the PushTopic record in the previous step. This can be done through Workbench. Workbench is a free, open-source, community-supported tool that helps administrators and developers interact with Salesforce for data insert, update, upsert, delete, and export purposes.

> **Note**: Salesforce provides a hosted instance of Workbench for demonstration purposes only. Salesforce recommends that you do not use this hosted instance of Workbench to access data in a production database.

1. Using your browser, navigate to [Workbench](https://developer.salesforce.com/page/Workbench).

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-slaesforce-login-workbench.png" title="Login Workbench" width="70%" alt="Login Workbench"/>

2. Select **Environment** as **Production** and **API Version** as **37.0**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-select-environment-updated.png" title="Select Environment" width="70%" alt="Select Environment"/>

3. **Accept the terms of service**, and click **Login with Salesforce**.

4. After logging in with Salesforce, you establish a connection to your database and land on the **Select page**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-landing-page.png" title="Select page." width="700" alt="Select page."/>

5. Select **queries** -> **Streaming Push Topics**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-streaming-push-topics-updated.png" title="Streaming PushTopic" width="70%" alt="Streaming PushTopic"/>
6. In the **Push Topic** field, select **Account**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-select-pushtopic-account-updated.png" title="Select created PushTopic." width="70%" alt="Select created PushTopic"/>

7. Click **Subscribe**. You’ll see the connection and response information and a response like `Subscribed to /topic/Account`.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-subscribe-updated.png" title="Subscribe to the PushTopic" width="70%" alt="Subscribe to the PushTopic"/>

    > **Note**: Keep this browser window open and make sure that the connection does not time out. You’ll be able to see the event notifications triggered by the Account record you create when testing the PushTopic channel.

## Testing the PushTopic Channel

1. Open a new browser window, navigate to [Workbench](https://developer.salesforce.com/page/Workbench), and log in with the same username and password, following `Step 1` of the `Subscribing to the PushTopic Channel` section above.

2. Select **data** -> **Insert**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-data-insert-updated.png" title="Insert data to test the PushTopic" width="70%" alt="Insert data to test the PushTopic"/>

3. For **Object Type**, select *Account*. Ensure that the **Single Record** field is selected, and click **Next**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-select-single-record-updated.png" title="Select single record" width="70%" alt="Select single record"/>

4. Type in a value for the `Name` field. Then click **Confirm Insert**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-insert-values-to-name-updated.png" title="Insert value to the object" width="50%" alt="Insert value to the object"/>

5. Switch to your **Streaming Push Topics** browser window. You’ll see a notification that the *Account* record was created. The notification returns the `Id` and `Name` fields that we defined in the SELECT statement of our **PushTopic query**. The notification message is shown below.

    ```
    Message received from: /topic/Account
    {
      "data": {
        "event": {
          "createdDate": "2020-04-21T13:02:56.967Z",
          "replayId": 11,
          "type": "created"
        },
        "sobject": {
          "Id": "0012x0000048qhUAAQ",
          "Name": "Doctor"
        }
      },
      "channel": "/topic/Account"
    }
    ```
## Reset Security Token

1. **Login** to the **Salesforce account**. Navigate to the top right corner of the **Home page** and click **Settings**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-click-settings-updated.png" title="Select Settings." width="40%" alt="Select Settings"/>

2. Select **Reset My Security Token** and then click the **Reset Security Token** button.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-resetsecurity-token-updated.png" title="Reset Security Token" width="70%" alt="Reset Security Token"/>

    When setting up the Inbound Endpoint, you need to provide the Salesforce password in the following manner: the password provided here is a concatenation of the user password and the security token provided by Salesforce. For more information, see [information on creating a security token in Salesforce](https://help.salesforce.com/articleView?id=user_security_token.htm&type=5).
-

    Example:

    | Field | Value |
    | ------------------ |-----------------|
    | Salesforce password | test123 |
    | Security Token | XXXXXXXXXX |

    ```
    <parameter name="connection.salesforce.password">test123XXXXXXXXXX</parameter>
    ```
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-example.md b/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-example.md
deleted file mode 100644
index ed94782ac4..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-example.md
+++ /dev/null
@@ -1,124 +0,0 @@
-# Salesforce Inbound Endpoint Example

The Salesforce streaming Inbound Endpoint allows you to perform various operations on Salesforce streaming data.

The [Salesforce streaming API](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/intro_stream.htm) receives notifications based on the changes that happen to Salesforce data with respect to a SOQL (Salesforce Object Query Language) query you define, in a secured and scalable way. For more information, see the [Salesforce streaming documentation](https://developer.salesforce.com/docs/atlas.en-us.202.0.api_streaming.meta/api_streaming/quick_start_workbench.htm).

## What you'll build

The Salesforce inbound endpoint is a listening inbound endpoint that can consume messages from Salesforce. It injects messages into an integration sequence. However, for the simplicity of this example, we will just log the message. You can extend the sample as required using WSO2 [mediators]({{base_path}}/reference/mediators/about-mediators/).

In this example, we can trigger notifications to the Salesforce Inbound Endpoint by creating `Platform Events` or `PushTopics`. Note that our example configurations are based on the `PushTopic` method. You can use the instructions given in the [sf-rest inbound endpoint configuration]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration/) documentation.

The following diagram illustrates all the required functionality of the Salesforce inbound operations that you are going to build.

For example, we are building an integrated example driven through the [Salesforce connector]({{base_path}}/reference/connectors/salesforce-connectors/sf-rest-connector-example/) and the Salesforce Inbound Endpoint. The user calls the Salesforce REST API. It invokes the **create** sequence and creates a new account in Salesforce. Then, through the **retrieve** sequence, it displays all the existing account details to the user.

Now that you have configured the Salesforce Inbound Endpoint, use the following Inbound Endpoint configuration to retrieve account details from your Salesforce account. The Salesforce inbound endpoint acts as a message receiver. You can inject that message into the mediation flow for getting the required output.

<a href="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-example.png"><img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-example.png" title="Salesforce Inbound Endpoint" alt="Salesforce Inbound Endpoint"/></a>

## Configure inbound endpoint using WSO2 Integration Studio

1. Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as below.
-

    <img src="{{base_path}}/assets/img/integrate/connectors/integration-project.png" title="Creating a new Integration Project" width="800" alt="Creating a new Integration Project" />

2. Right-click on the **created Integration Project** -> **New** -> **Inbound Endpoint** -> **Create A New Inbound Endpoint** -> select **custom** as the **Inbound Endpoint Creation Type** -> click **Next**.

    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-inboundep-create-new-ie.png" title="Creating inbound endpoint" width="400" alt="Creating inbound endpoint" style="border:1px solid black"/>

3. Click on **Inbound Endpoint** in the design view and, under the `properties` tab, update the class name to `org.wso2.carbon.inbound.salesforce.poll.SalesforceStreamData`.

4. Navigate to the source view and update it with the following configuration as required.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                     name="SaleforceInboundEP"
                     sequence="test"
                     onError="fault"
                     class="org.wso2.carbon.inbound.salesforce.poll.SalesforceStreamData"
                     suspend="false">
       <parameters>
          <parameter name="inbound.behavior">polling</parameter>
          <parameter name="interval">100</parameter>
          <parameter name="sequential">true</parameter>
          <parameter name="coordination">true</parameter>
          <parameter name="connection.salesforce.replay">false</parameter>
          <parameter name="connection.salesforce.EventIDStoredFilePath">/home/kasun/Documents/SalesForceConnector/a.txt</parameter>
          <parameter name="connection.salesforce.packageVersion">37.0</parameter>
          <parameter name="connection.salesforce.salesforceObject">/topic/Account</parameter>
          <parameter name="connection.salesforce.loginEndpoint">https://login.salesforce.com</parameter>
          <parameter name="connection.salesforce.userName">Username</parameter>
          <parameter name="connection.salesforce.password">test123XXXXXXXXXX</parameter>
          <parameter name="connection.salesforce.waitTime">5000</parameter>
          <parameter name="connection.salesforce.connectionTimeout">20000</parameter>
          <parameter name="connection.salesforce.soapApiVersion">22.0</parameter>
       </parameters>
    </inboundEndpoint>
    ```
    Sequence to process the message:

    In this example, for simplicity, we will just log the message, but in a real-world use case, this can be any type of message mediation.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="test" onError="fault" xmlns="http://ws.apache.org/ns/synapse">
        <log level="full"/>
        <drop/>
    </sequence>
    ```
> **Note**: To configure the `connection.salesforce.password` parameter value, use the steps given under the `Reset Security Token` topic in the [Salesforce inbound endpoint configuration]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration/) document.

## Exporting Integration Logic as a CApp

**CApp (Carbon Application)** is the deployable artifact on the integration runtime. Let us see how we can export the integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` needs to be created. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**.
1. Right-click on the Composite Application Project and click **Export Composite Application Project**.</br>
    <img src="{{base_path}}/assets/img/integrate/connectors/capp-project1.jpg" title="Export as a Carbon Application" width="300" alt="Export as a Carbon Application" />

2. Select an **Export Destination** where you want to save the .car file.

3. In the next **Create a deployable CAR file** screen, select the inbound endpoint and sequence artifacts and click **Finish**. The CApp will be created at the location provided in the previous step.

## Deployment

1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `SalesforceRest`. Click on `Salesforce Inbound Endpoint` and download the .jar file by clicking on `Download Inbound Endpoint`. Copy this .jar file into the <PRODUCT-HOME>/lib folder.

2. Copy the exported carbon application to the `<PRODUCT-HOME>/repository/deployment/server/carbonapps` folder.

3. Start the integration server.

## Testing

> **Note**: If you want to test this scenario by inserting data manually into the created object records, follow the steps given under the `Testing the PushTopic Channel` topic in the [Salesforce inbound endpoint configuration document]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration/).

   Use the [Salesforce REST Connector example]({{base_path}}/reference/connectors/salesforce-connectors/sf-rest-connector-example/) testing steps to test this Inbound Endpoint scenario:

   Save a file called data.json with the following payload (change the value of the `Name` field to `Manager`).
   ```
   {
       "sObject":"Account",
       "fieldAndValue": {
           "name": "Manager",
           "description":"This Account belongs to WSO2"
       }
   }
   ```
   Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).

   ```
   curl -X POST -d @data.json http://localhost:8280/salesforcerest --header "Content-Type:application/json"
   ```
   You will get a set of account names and the respective IDs as the output. At the same time, in the server console, you can see the following message.

   **Expected response**

   ```
   To: , MessageID: urn:uuid:2D8F9AFA30E66278831587368713372, Direction: request, Payload: {"event":{"createdDate":"2020-04-20T07:45:12.686Z","replayId":4,"type":"created"},"sobject":{"Id":"0012x0000048j9mAAA","Name":"Manager"}}
   ```
## What's next

* You can deploy and run your project using [different Micro Integrator installation options]({{base_path}}/install-and-setup/install/installation-options/).
* To customize this example for your own scenario, see the [Salesforce Inbound Endpoint Reference]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration.md b/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration.md
deleted file mode 100644
index 291d6b009b..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Salesforce Inbound Endpoint Reference

The following configurations allow you to configure the Salesforce Inbound Endpoint for your scenario.
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}
.tg th{font-family:Arial, sans-serif;font-size:20px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}
.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top}
</style>
<table class="tg">
  <tr>
    <th class="tg-0pky">Parameter</th>
    <th class="tg-0pky">Description</th>
    <th class="tg-0pky">Required</th>
    <th class="tg-0pky">Possible Values</th>
    <th class="tg-0pky">Default Value</th>
  </tr>
  <tr>
    <td class="tg-0pky">sequential</td>
    <td class="tg-0pky">Whether the messages should be polled and injected sequentially.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">true, false</td>
    <td class="tg-0pky">true</td>
  </tr>
  <tr>
    <td class="tg-0pky">replay</td>
    <td class="tg-0pky">Enabling this will read the event ID stored in the Registry DB or in a text file stored on the local machine.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">true, false</td>
    <td class="tg-0pky">false</td>
  </tr>
  <tr>
    <td class="tg-0pky">packageVersion</td>
    <td class="tg-0pky">The version of the Salesforce API.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">37.0</td>
    <td class="tg-0pky">-</td>
  </tr>
  <tr>
    <td class="tg-0pky">salesforceObject</td>
    <td class="tg-0pky">The name of the Push Topic or the Platform Event that is added to the Salesforce account.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">/topic/Account</td>
    <td class="tg-0pky">-</td>
  </tr>
  <tr>
    <td class="tg-0pky">loginEndpoint</td>
    <td class="tg-0pky">The endpoint of the Salesforce account.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">https://login.salesforce.com</td>
    <td class="tg-0pky">https://login.salesforce.com</td>
  </tr>
  <tr>
    <td class="tg-0pky">userName</td>
    <td class="tg-0pky">The username for accessing the Salesforce account.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">-</td>
    <td class="tg-0pky">-</td>
  </tr>
  <tr>
    <td class="tg-0pky">password</td>
    <td class="tg-0pky">The password provided here is a concatenation of the user password and the security token provided by Salesforce. For more information, see <a href="https://help.salesforce.com/articleView?id=user_security_token.htm&type=5">Information on creating a security token in Salesforce</a>.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">eitest123xxxxxxx</td>
    <td class="tg-0pky">-</td>
  </tr>
  <tr>
    <td class="tg-0pky">waitTime</td>
    <td class="tg-0pky">The time, in milliseconds, to wait to connect to the Salesforce account.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">5000</td>
    <td class="tg-0pky">5 * 1000 ms</td>
  </tr>
  <tr>
    <td class="tg-0pky">connectionTimeout</td>
    <td class="tg-0pky">The time, in milliseconds, to wait to connect to the client.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">20000</td>
    <td class="tg-0pky">20 * 1000 ms</td>
  </tr>
  <tr>
    <td class="tg-0pky">soapApiVersion</td>
    <td class="tg-0pky">The version of the Salesforce SOAP API.</td>
    <td class="tg-0pky">Yes</td>
    <td class="tg-0pky">22.0</td>
    <td class="tg-0pky">-</td>
  </tr>
  <tr>
    <td class="tg-0pky">EventIDStoredFilePath</td>
    <td class="tg-0pky">Used when replay is enabled. Leave this property blank to replay from the last event ID stored in the config Registry DB (property: the name of the Salesforce object; resource path: connector/salesforce/event). To replay from the event ID stored in a local text file instead, specify the path of that text file.</td>
    <td class="tg-0pky">No</td>
    <td class="tg-0pky">/home/kasun/Documents/SalesForceConnector/a.txt</td>
    <td class="tg-0pky">-</td>
  </tr>
</table>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-overview.md b/en/docs/reference/connectors/salesforce-connectors/sf-overview.md
deleted file mode 100644
index 5ecea51de3..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-overview.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Salesforce Connectors Overview

Salesforce is a Customer Relationship Management (CRM) solution that helps bridge the gap between customers and enterprises. WSO2 connectors, which interact with the available Salesforce APIs, enable you to integrate with Salesforce and perform various actions with ease.

## Types of Salesforce connectors

To see the available Salesforce connectors, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "Salesforce". You will see the following connectors:

<img src="{{base_path}}/assets/img/integrate/connectors/sf-connector-store.png" title="Salesforce Connector Store" width="800" alt="Salesforce Connector Store"/>

### Salesforce Connector

The Salesforce connector allows you to work with records in Salesforce. You can use it to create, query, retrieve, update, and delete records in your organization's Salesforce data. This connector is typically used when sending XML requests, and it uses the [Salesforce SOAP API](http://www.salesforce.com/us/developer/docs/api/) to interact with Salesforce. The Salesforce streaming Inbound Endpoint allows you to consume Salesforce streaming data through the WSO2 integration runtime.

* **[Configuring Salesforce Connector Operations](https://docs.wso2.com/display/ESBCONNECTORS/Configuring+Salesforce+Connector+Operations)**: Includes an overview of the connector and links to associated documentation.
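For example, once a connection has been initialized, a sequence can retrieve records with the connector's `query` operation. The following is a minimal sketch (the SOQL string and batch size are illustrative, and it assumes `salesforce.init` has already been configured in the same sequence):

```xml
<!-- Illustrative sketch: fetch Account records in batches of 200 via the SOAP-based connector. -->
<salesforce.query>
   <batchSize>200</batchSize>
   <queryString>select id,name from Account</queryString>
</salesforce.query>
```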
### Salesforce REST Connector

The **Salesforce REST Connector** uses the [Salesforce REST API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_what_is_rest_api.htm) to interact with Salesforce. This connector is more useful when sending JSON requests. The Salesforce REST Connector allows you to work with records in Salesforce, a web-based service that allows organizations to manage Customer Relationship Management (CRM) data. You can use the Salesforce connector to create, query, retrieve, update, and delete records in your organization's Salesforce data.

* **[Salesforce Access Token Generation]({{base_path}}/includes/reference/connectors/salesforce-connectors/sf-access-token-generation/)**: This section explains how to obtain OAuth2 tokens from the Salesforce REST API.

* **[Salesforce Rest API Connector Example]({{base_path}}/reference/connectors/salesforce-connectors/sf-rest-connector-example/)**: This example explains how to use the Salesforce client to connect with the Salesforce instance and perform the **create** and **retrieve** operations.

* **[Salesforce Rest API Connector Reference]({{base_path}}/reference/connectors/salesforce-connectors/sf-rest-connector-config/)**: This documentation provides a reference guide for the Salesforce REST API operations.

The following table lists the compatibility information for the Salesforce REST Connector.

| Connector version | Supported Salesforce REST API version | Supported WSO2 product versions |
| ------------- | ------------- | ------------- |
| [1.0.8](https://github.com/wso2-extensions/esb-connector-salesforcerest/tree/org.wso2.carbon.connector.salesforcerest-1.0.8) | v32.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

### Salesforce Bulk Connector

The Salesforce Bulk connector allows you to access the [Salesforce Bulk REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/) from an integration sequence. As the name implies, this is used for bulk operations when adding multiple entries into Salesforce. Salesforce Bulk is a RESTful API that allows you to quickly load or delete large sets of your organization's data in Salesforce. You can use the Salesforce Bulk connector to query, insert, update, upsert, or delete a large number of records asynchronously by submitting the records in batches. Salesforce can process these batches in the background.

* **[Salesforce Bulk README](https://github.com/wso2-extensions/esb-connector-salesforcebulk/tree/org.wso2.carbon.connector.salesforcebulk-1.0.3/docs)**: Includes an overview of the connector and links to associated documentation.

### Salesforce Inbound Endpoint

The **Salesforce Inbound Endpoint** uses the [Salesforce streaming API](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/intro_stream.htm) to receive notifications. It is bundled with, and can be obtained from, the Salesforce connector available in the store. The Salesforce Inbound Endpoint receives notifications, in a secure and scalable way, based on the changes that happen to Salesforce data with respect to a SOQL (Salesforce Object Query Language) query you define.

* **[Setting up the PushTopic in Salesforce]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-configuration/)**: This documentation explains how to set up the Salesforce environment to connect with the WSO2 Salesforce Inbound Endpoint.

* **[Salesforce Inbound Endpoint Example]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-example/)**: This example explains how the Salesforce Inbound Endpoint acts as a message consumer: the WSO2 integration runtime uses this listening inbound endpoint to consume messages from Salesforce.

* **[Salesforce Inbound Endpoint Reference]({{base_path}}/reference/connectors/salesforce-connectors/sf-inbound-endpoint-reference-configuration/)**: This documentation provides a reference guide for the Salesforce Inbound Endpoint.

The following table lists the compatibility information for the Salesforce Inbound Endpoint Connector.

| Inbound version | Supported Salesforce API version | Supported WSO2 product versions |
| ------------- | ------------- | ------------- |
| 2.0.1 | 22.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |

### Salesforce Wave Analytics

The Salesforce Wave Analytics Connector allows you to work with records in Salesforce. You can use the Salesforce connector to create, query, retrieve, and update records in your organization's Salesforce data. The connector uses the [Analytics REST API](https://developer.salesforce.com/docs/atlas.en-us.bi_dev_guide_rest.meta/bi_dev_guide_rest/bi_rest_overview.htm) to interact with Salesforce.

### Salesforce Desk Connector

The Salesforce Desk connector allows you to access the [Salesforce Desk REST API](http://dev.desk.com/API/using-the-api/#general) from an integration sequence. Salesforce Desk is a customer service application that helps small businesses provide exceptional, multi-channel customer service.

* **[Salesforce Desk Connector documentation](https://docs.wso2.com/display/ESBCONNECTORS/Salesforce+Desk+Connector)**: Includes an overview of the connector and links to associated documentation.

### Pardot

The Pardot connector allows you to access the Pardot REST API through the WSO2 integration runtime. Pardot, B2B marketing automation by Salesforce, offers a marketing automation solution that allows marketing and sales departments to create, deploy, and manage online marketing campaigns.

* **[Pardot Connector documentation](https://docs.wso2.com/display/ESBCONNECTORS/Pardot+Connector)**: Includes an overview of the connector and links to associated documentation.

## How to contribute

As an open source project, WSO2 extensions welcome contributions from the community.

To contribute to the code for these connectors, create a pull request in one of the following repositories.

* [Salesforce REST API Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-salesforcerest)
* [Salesforce Inbound Endpoint GitHub repository](https://github.com/wso2-extensions/esb-inbound-salesforce)
* [Salesforce SOAP API Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-salesforce)
* [Salesforce Bulk API Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-salesforcebulk)
* [Salesforce Wave Analytics Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-salesforcewaveanalytics)
* [Salesforce Desk Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-salesforcedesk)
* [Pardot Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-pardot)

Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md b/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md
deleted file mode 100644
index e5637af4f5..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md
+++ /dev/null
@@ -1,4060 +0,0 @@
-# Salesforce REST Connector Reference

The following operations allow you to work with the Salesforce REST Connector. Click an operation name to see parameter details and samples of how to use it.

---

## Initialize the connector

The Salesforce REST API uses the OAuth protocol to allow application users to securely access data without having to reveal
their user credentials. For more information on how authentication is done in Salesforce, see
[Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_authentication.htm).
You can provide only an access token and use it until it expires; after expiry, you are responsible for obtaining a new
access token and using it. Alternatively, you can provide a refresh token, client secret, and client ID, which the
connector itself uses to obtain an access token initially and after every expiry. In that case, you are not required to
handle access token expiry.

There is also an option to use basic authentication with a username and password.

To use the Salesforce REST connector, add the `<salesforcerest.init>` element in your configuration before carrying out any other Salesforce REST operations.

??? note "salesforcerest.init"
    The salesforcerest.init operation initializes the connector to interact with the Salesforce REST API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_web_server_oauth_flow.htm) for more information.
    <table>
        <tr>
            <th>Parameter Name</th>
            <th>Description</th>
            <th>Required</th>
            <th>Sample Value</th>
        </tr>
        <tr>
            <td>apiVersion</td>
            <td>The version of the Salesforce API.</td>
            <td>Yes</td>
            <td>v32.0</td>
        </tr>
        <tr>
            <td>accessToken</td>
            <td>The access token to authenticate your API calls.</td>
            <td>No</td>
            <td>XXXXXXXXXXXX (Replace with your access token)</td>
        </tr>
        <tr>
            <td>apiUrl</td>
            <td>The instance URL for your organization.</td>
            <td>Yes</td>
            <td>https://ap2.salesforce.com</td>
        </tr>
        <tr>
            <td>hostName</td>
            <td>The Salesforce OAuth endpoint to use when issuing authentication requests in your application.</td>
            <td>Yes</td>
            <td>https://login.salesforce.com</td>
        </tr>
        <tr>
            <td>refreshToken</td>
            <td>The refresh token that you received to refresh the API access token.</td>
            <td>No</td>
            <td>XXXXXXXXXXXX (Replace with your refresh token)</td>
        </tr>
        <tr>
            <td>tokenEndpointHostname</td>
            <td>The token endpoint that you invoke to refresh the API access token.
</td> - <td>No</td> - <td>XXXXXXXXXXXX (Replace this with your refresh token endpoint)</td> - </tr> - <tr> - <td>clientId</td> - <td>The consumer key of the connected application that you created.</td> - <td>No</td> - <td>XXXXXXXXXXXX (Replace with your client ID)</td> - </tr> - <tr> - <td>clientSecret</td> - <td>The consumer secret of the connected application that you created.</td> - <td>No</td> - <td>XXXXXXXXXXXX (Replace with your client secret)</td> - </tr> - <tr> - <td>blocking</td> - <td>Indicates whether the connector needs to perform blocking invocations to Salesforce.</td> - <td>Yes</td> - <td>false</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.init> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <hostName>{$ctx:hostName}</hostName> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <blocking>{$ctx:blocking}</blocking> - </salesforcerest.init> - ``` - - **Sample request** - - ```json - { - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "blocking" : "false" - } - ``` - - Or if you want conector to handle token expiry - - **Sample configuration** - - ```xml - <salesforcerest.init> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <hostName>{$ctx:hostName}</hostName> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientId>{$ctx:clientId}</clientId> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <blocking>{$ctx:blocking}</blocking> - </salesforcerest.init> - ``` - - **Sample request** - - ```json - { - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "refreshToken":"XXXXXXXXXXXX (Replace with your refresh token)", - "apiUrl":"https://(your_instance).salesforce.com", - "clientId": "XXXXXXXXXXXX (Replace with your client ID)", - "clientSecret": "XXXXXXXXXXXX (Replace with your client secret)", - "blocking" : "false" - } - ``` - - -??? note "salesforcerest.init for username/password flow" - The salesforcerest.init operation initializes the connector to interact with the Salesforce REST API using a username/password flow. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_username_password_oauth_flow.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>apiVersion</td> - <td>The version of the Salesforce API.</td> - <td>Yes</td> - <td>v32.0</td> - </tr> - <tr> - <td>apiUrl</td> - <td>The instance URL for your organization.</td> - <td>Yes</td> - <td>https://ap2.salesforce.com</td> - </tr> - <tr> - <td>hostName</td> - <td>SalesforceOAuth endpoint when issuing authentication requests in your application.</td> - <td>Yes</td> - <td>https://login.salesforce.com</td> - </tr> - <tr> - <td>clientId</td> - <td>The consumer key of the connected application that you created.</td> - <td>Yes</td> - <td>XXXXXXXXXXXX (Replace with your client ID)</td> - </tr> - <tr> - <td>clientSecret</td> - <td>The consumer secret of the connected application that you created.</td> - <td>Yes</td> - <td>XXXXXXXXXXXX (Replace with your client secret)</td> - </tr> - <tr> - <td>username</td> - <td>The username for Salesforce.</td> - <td>Yes</td> - <td>youruser@gmail.com</td> - </tr> - <tr> - <td>password</td> - <td>The password for Salesforce (need to append the password with security key).</td> - <td>Yes</td> - <td>xxxxxxxxxxxxxxxxxxxxxx</td> - </tr> - <tr> - <td>blocking</td> - <td>Indicates whether the connector needs to perform blocking invocations to Salesforce.</td> - <td>Yes</td> - <td>false</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.init> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <clientId>{$ctx:clientId}</clientId> - <clientSecret>{$ctx:clientSecret}</clientSecret> - <hostName>{$ctx:hostName}</hostName> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <username>{$ctx:username}</username> - <password>{$ctx:password}</password> - <blocking>{$ctx:blocking}</blocking> - </salesforcerest.init> - ``` - - **Sample request** - - ```json - { - "clientId": "xxxxxxxxxxxxxxxxxxxxxxxx", - "clientSecret": "xxxxxxxxxxxxxxxxxxxxxxxx", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "username": "youruser@gmail.com", - "password": "xxxxxxxxxxxxxxxxxxxxxx", - "apiUrl":"https://(your_instance).salesforce.com", - "blocking" : "false" - } - ``` - ---- - -### AppMenu - -??? note "listItemsInMenu" - To retrieve the list of items in either the Salesforce app drop-down menu or the Salesforce1 navigation menu, use salesforcerest.listItemsInMenu and specify the following property. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_appmenu.htm?search_text=menu) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>menuType</td> - <td>The type of the menu, either AppSwitcher or Salesforce.</td> - <td>Yes</td> - <td>AppSwitcher, Salesforce</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.listItemsInMenu> - <menuType>{$ctx:menuType}</menuType> - </salesforcerest.listItemsInMenu> - ``` - - **Sample request** - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "menuType": "AppSwitcher", - } - ``` - - **Sample response** - - ```json - {"NetworkTabs":"/services/data/v32.0/appMenu/NetworkTabs","Salesforce1":"/services/data/v32.0/appMenu/Salesforce1","AppSwitcher":"/services/data/v32.0/appMenu/AppSwitcher"} - ``` - -??? 
note "tabs" - To retrieve a list of all tabs, use salesforcerest.tabs. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_tabs.htm?search_text=tabs) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.tabs/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the tabs operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample response** - - Given below is a sample response for the tabs operation. - - ```json - {"output":"[{\"colors\":[{\"color\":\"4dca76\",\"context\":\"primary\",\"theme\":\"theme4\"},{\"color\":\"319431\",\"context\":\"primary\",\"theme\":\"theme3\"}],\"custom\":true,\"iconUrl\":\"https://sampletest-dev-ed.my.salesforce.com/img/icon/form32.png\",..} - ``` - -??? note "themes" - To retrieve a list of icons and colors used by themes in the Salesforce application, use salesforcerest.themes. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_themes.htm?search_text=themes) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.themes/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the themes operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample response** - - Given below is a sample response for the themes operation. - - ```json - { - "themeItems":[ - { - "name":"Account", - "icons":[ - { - "width":32, - "theme":"theme3", - "contentType":"image/png", - "url":"https://kesavan-dev-ed.my.salesforce.com/img/icon/accounts32.png", - "height":32 - } - ] - } - ] - } - ``` - ---- - -### Approvals - -??? note "listApprovals" - To retrieve the list of approvals in Salesforce, use salesforcerest.listApprovals. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_process_approvals.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.listApprovals/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listApprovals operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample response** - - Given below is a sample response for the listApprovals operation. - - ```json - { - "approvals":{ - - } - } - ``` - ---- - -### Event Monitoring - -??? note "describeEventMonitoring" - To retrieve the description of the event monitoring log, use salesforcerest.describeEventMonitoring. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_event_log_file_describe.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.describeEventMonitoring/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the describeEventMonitoring operation. 
- - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample response** - - Given below is a sample response for the describeEventMonitoring operation. - - ```json - { - "updateable":false, - "activateable":false, - "childRelationships":[ - - ], - "recordTypeInfos":[ - - ], - "deprecatedAndHidden":false, - "searchLayoutable":false, - "deletable":false, - "replicateable":false, - "actionOverrides":[ - - ], - . - . - ], - "labelPlural":"Event Log Files", - "triggerable":false - } - ``` - -??? note "queryEventMonitoringData" - To retrieve the field values from a record, use salesforcerest.queryEventMonitoringData and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_event_log_file_query.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>queryStringForEventMonitoringData</td> - <td>The query string to use to get the field values from the log.</td> - <td>Yes</td> - <td>SELECT+Id+,+EventType+,+LogFile+,+LogDate+,+LogFileLength+FROM+EventLogFile+WHERE+LogDate+>+Yesterday+AND+EventType+=+'API' - </td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.queryEventMonitoringData> - <queryStringForEventMonitoringData>{$ctx:queryStringForEventMonitoringData}</queryStringForEventMonitoringData> - </salesforcerest.queryEventMonitoringData> - ``` - - **Sample request** - - The following is a sample request that can be handled by the queryEventMonitoringData operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "clientId": "3MVG9ZL0ppGP5UrBztM9gSLYyUe7VwAVhD9.yQnZX2mmCu_48Uwc._doxrBTgY4jqmOSDhxRAiUBf8gCr2mk7", - "refreshToken": "5Aep861TSESvWeug_ztpnAk6BGQxRdovMLhHso81iyYKO6hTm45JVxz3FLewCKgI4BbUp19OzGfqG2TdCfqa2ZU", - "clientSecret": "1187341468789253319", - "hostName": "https://login.salesforce.com", - "apiVersion": "v34.0", - "queryStringForEventMonitoringData": "SELECT+Id+,+EventType+,+LogFile+,+LogDate+,+LogFileLength+FROM+EventLogFile+WHERE+LogDate+>+Yesterday+AND+EventType+=+'API'", - } - ``` - - **Sample response** - - Given below is a sample response for the queryEventMonitoringData operation. - - ```json - { - "totalSize" : 4, - "done" : true, - "records" : [ { - "attributes" : { - "type" : "EventLogFile", - "url" : "/services/data/v32.0/sobjects/EventLogFile/0ATD000000001bROAQ" } - "Id" : "0ATD000000001bROAQ", - "EventType" : "API", - "LogFile" : "/services/data/v32.0/sobjects/EventLogFile/0ATD000000001bROAQ/LogFile", - "LogDate" : "2014-03-14T00:00:00.000+0000", - "LogFileLength" : 2692.0 - }, - . - ] - } - ``` - ---- - -### Invocable Actions - -??? note "getListOfAction" - To retrieve the list of general action types for the current organization, use salesforcerest.getListOfAction and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_actions_invocable.htm?search_text=action) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>actionType</td> - <td>The type of the invocable action.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.getListOfAction/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the getListOfAction operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample response** - - Given below is a sample response for the getListOfAction operation. - - ```json - { - "standard":"/services/data/v32.0/actions/standard", - "custom":"/services/data/v32.0/actions/custom" - } - ``` - -??? note "getSpecificListOfAction" - To retrieve an attribute of a single action, use salesforcerest.getAttributeOfSpecificAction and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_actions_invocable_standard.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>actionType</td> - <td>The type of the invocable action.</td> - <td>Yes</td> - <td>standard</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.getSpecificListOfAction> - <actionType>{$ctx:actionType}</actionType> - </salesforcerest.getSpecificListOfAction> - ``` - - **Sample request** - - The following is a sample request that can be handled by the getSpecificListOfAction operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "actionType": "standard", - } - ``` - - **Sample response** - - Given below is a sample response for the getSpecificListOfAction operation. - - ```json - { - "standard":"/services/data/v32.0/actions/standard", - "custom":"/services/data/v32.0/actions/custom" - } - ``` - -??? note "getAttributeOfSpecificAction" - To retrieve an attribute of a single action, use salesforcerest.getAttributeOfSpecificAction and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_actions_invocable_standard.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>actionType</td> - <td>The type of the invocable action.</td> - <td>Yes</td> - <td>standard</td> - </tr> - <tr> - <td>attribute</td> - <td>The attribute whose details you want to retrieve.</td> - <td>Yes</td> - <td>emailSimple</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.getAttributeOfSpecificAction> - <actionType>{$ctx:actionType}</actionType> - <attribute>{$ctx:attribute}</attribute> - </salesforcerest.getAttributeOfSpecificAction> - ``` - - **Sample request** - - The following is a sample request that can be handled by the getAttributeOfSpecificAction operation. 
- - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "actionType": "standard", - "attribute": "emailSimple", - } - ``` - - **Sample response** - - Given below is a sample response for the getAttributeOfSpecificAction operation. - - ```json - { - "actions":[ - { - "name":"chatterPost", - "label":"Post to Chatter", - "type":"CHATTERPOST" - }, - { - "name":"emailSimple", - "label":"Send Email", - "type":"EMAILSIMPLE" - } - . - ] - } - ``` - -### Layouts - -??? note "sObjectLayouts" - To retrieve a list of layouts and descriptions (including for actions) for a specific object, use salesforcerest.sObjectLayouts and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_layouts.htm?search_text=layouts) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose layouts and descriptions you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectLayouts> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.sObjectLayouts> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectLayouts operation. - - ```json - "layouts":[ - { - "detailLayoutSections":[ - { - "heading":"Account Information", - "columns":2, - "tabOrder":"TopToBottom", - "useCollapsibleSection":false, - "rows":8, - "useHeading":false, - "layoutRows":[ - { - "layoutItems":[ - { - "editableForUpdate":false, - "editableForNew":false, - "layoutComponents":[ - { - "tabOrder":1, - "details":{ - "defaultValue":null, - "precision":0, - "nameField":false, - "type":"reference", - "restrictedDelete":false, - "relationshipName":"Owner", - "calculatedFormula":null, - "controllerName":null, - "namePointing":false, - "defaultValueFormula":null, - "calculated":false, - "writeRequiresMasterRead":false, - "inlineHelpText":null, - "picklistValues":[ - - ] - } - } - ] - } - . - } - ``` - -??? note "globalSObjectLayouts" - To retrieve descriptions of global publisher layouts, use salesforcerest.globalSObjectLayouts. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_layouts.htm?search_text=layouts) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.globalSObjectLayouts/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the globalSObjectLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the globalSObjectLayouts operation. 
- - ```json - { - "layouts":[ - { - "detailLayoutSections":[ - - ], - "relatedContent":null, - "editLayoutSections":[ - - ], - "relatedLists":[ - - ], - "buttonLayoutSection":null, - "id":"00h28000001hExeAAE", - "offlineLinks":[ - - ], - . - . - } - } - ``` - -??? note "compactLayouts" - To retrieve a list of compact layouts for multiple objects, use salesforcerest.compactLayouts and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_compact_layouts.htm?search_text=layouts) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sObjectNameList</td> - <td>A comma-separated list of the objects whose compact layouts you want to retrieve.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.compactLayouts/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the compactLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectNameList":"Account,User", - } - ``` - - **Sample Response** - - Given below is a sample response for the compactLayouts operation. - - ```json - { - "Account":{ - "name":"SYSTEM", - "id":null, - "label":"System Default", - "actions":[ - { - "showsStatus":false, - "custom":false, - "label":"Call", - "overridden":false, - "encoding":null, - "icons":[ - { - "width":0, - "theme":"theme4", - "contentType":"image/svg+xml", - "url":"https://kesavan-dev-ed.my.salesforce.com/img/icon/t4v32/action/call.svg", - "height":0 - }, - ], - "windowPosition":null, - "colors":[ - { - "color":"F2CF5B", - "context":"primary", - "theme":"theme4" - } - ], - . - . - ], - "objectType":"User" - } - } - ``` - -??? note "sObjectApprovalLayouts" - To retrieve a list of approval layouts for a specified object, use salesforcerest.sObjectApprovalLayouts and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_approvallayouts.htm?search_text=layouts) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose layouts you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectApprovalLayouts> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.sObjectApprovalLayouts> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectApprovalLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectApprovalLayouts operation. - - ```json - {"approvalLayouts":[]} - ``` - -??? note "sObjectCompactLayouts" - To retrieve a list of compact layouts for a specific object, use salesforcerest.sObjectCompactLayouts and specify the following properties. 
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_compactlayouts.htm?search_text=layouts) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose layouts you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectCompactLayouts> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.sObjectCompactLayouts> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectCompactLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectCompactLayouts operation. - - ```json - { - "compactLayouts":[ - { - "name":"SYSTEM", - "id":null, - "label":"System Default", - "actions":[ - { - "showsStatus":false, - "custom":false, - "label":"Call", - "overridden":false, - "encoding":null, - "icons":[ - { - "width":0, - "theme":"theme4", - "contentType":"image/svg+xml", - "url":"https://kesavan-dev-ed.my.salesforce.com/img/icon/t4v32/action/call.svg", - "height":0 - } - ], - "defaultCompactLayoutId":null - . - ] - } - ``` - -??? note "sObjectNamedLayouts" - To retrieve information about alternative named layouts for a specific object, use salesforcerest.sObjectNamedLayouts and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_named_layouts.htm?search_text=layouts) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose layouts you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>layoutName</td> - <td>The type of layout.</td> - <td>Yes</td> - <td>UserAlt</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectNamedLayouts> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <layoutName>{$ctx:layoutName}</layoutName> - </salesforcerest.sObjectNamedLayouts> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectCompactLayouts operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - "layoutName": "UserAlt", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectCompactLayouts operation. 
- - ```json - { - "layouts":[ - { - "detailLayoutSections":[ - { - "heading":"About", - "columns":2, - "tabOrder":"LeftToRight", - "useCollapsibleSection":false, - "rows":2, - "useHeading":false, - "layoutRows":[ - { - "layoutItems":[ - { - "editableForUpdate":false, - "editableForNew":false, - "layoutComponents":[ - { - "components":[ - { - "tabOrder":2, - "details":{ - "defaultValue":null, - "precision":0, - "nameField":false, - "type":"string", - "restrictedDelete":false, - "relationshipName":null, - "calculatedFormula":null, - "controllerName":null, - "namePointing":false, - "defaultValueFormula":null, - "calculated":false, - "writeRequiresMasterRead":false, - "inlineHelpText":null, - "picklistValues":[ - - ] - } - } - ] - } - . - } - ``` - -### List Views - -??? note "listViews" - To retrieve a list of list views for the specific sObject, use salesforcerest.listViews and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_listviews.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose list views you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.listViews> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.listViews> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listViews operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the listViews operation. - - ```json - { - "nextRecordsUrl":null, - "size":7, - "listviews":[ - { - "resultsUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/results", - "soqlCompatible":true, - "id":"00B280000032AihEAE", - "label":"New This Week", - "describeUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/describe", - "developerName":"NewThisWeek", - "url":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE" - }, - . - . - ], - "done":true, - "sobjectType":"Account" - } - ``` - -??? note "listViewById" - To retrieve the basic information about one list view for the specific sObject, use salesforcerest.listViewById and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_listviews.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose list of list views you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>listViewId</td> - <td>The ID of the specific list view whose information you want to return. 
This can be obtained by `listViews` operation</td> - <td>Yes</td> - <td>00B28000002yqeVEAQ</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.listViewById> - <listViewID>{$ctx:listViewID}</listViewID> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.listViewById> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listViewById operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - "listViewID":"00B28000002yqeVEAQ", - } - ``` - - **Sample Response** - - Given below is a sample response for the listViewById operation. - - ```json - { - "resultsUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/results", - "soqlCompatible":true, - "id":"00B280000032AihEAE", - "label":"New This Week", - "describeUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/describe", - "developerName":"NewThisWeek", - "url":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE" - } - ``` - -??? note "recentListViews" - To retrieve the list of recently used list views for the given sObject type, use salesforcerest.recentListViews and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_recentlistviews.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object whose recently used list views you want to return.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.recentListViews> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.recentListViews> - ``` - - **Sample request** - - The following is a sample request that can be handled by the recentListViews operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the recentListViews operation. - - ```json - { - "nextRecordsUrl":null, - "size":2, - "listviews":[ - { - "resultsUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/results", - "soqlCompatible":true, - "id":"00B280000032AihEAE", - "label":"New This Week", - "describeUrl":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE/describe", - "developerName":"NewThisWeek", - "url":"/services/data/v32.0/sobjects/Account/listviews/00B280000032AihEAE" - } - . - . - ], - "done":true, - "sobjectType":"Account" - } - ``` - -??? note "describeListViewById" - To retrieve detailed information (ID, columns, and SOQL query) about a specific list view for the given sObject type, use salesforcerest.describeListViewById and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_listviewdescribe.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object to which the list view applies.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>listViewID</td> - <td>The ID of the list view.</td> - <td>Yes</td> - <td>00B28000002yqeVEAQ (obtained by `listViews` operation)</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.describeListViewById> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <listViewID>{$ctx:listViewID}</listViewID> - </salesforcerest.describeListViewById> - ``` - - **Sample request** - - The following is a sample request that can be handled by the describeListViewById operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - "listViewID":"00B28000002yqeVEAQ", - } - ``` - - **Sample Response** - - Given below is a sample response for the describeListViewById operation. - - ```json - { - "whereCondition":{ - "field":"CreatedDate", - "values":[ - "THIS_WEEK" - ], - "operator":"equals" - }, - "columns":[ - { - "fieldNameOrPath":"Name", - "sortDirection":"ascending", - "hidden":false, - "sortIndex":0, - "ascendingLabel":"Z-A", - "label":"Account Name", - "sortable":true, - "type":"string", - "descendingLabel":"A-Z", - "selectListItem":"Name" - }, - . - . - ], - "query":"SELECT Name, Site, BillingState, Phone, toLabel(Type), Owner.Alias, Id, CreatedDate, LastModifiedDate, SystemModstamp FROM Account WHERE CreatedDate = THIS_WEEK ORDER BY Name ASC NULLS FIRST, Id ASC NULLS FIRST", - "scope":null, - "orderBy":[ - { - "fieldNameOrPath":"Name", - "sortDirection":"ascending", - "nullsPosition":"first" - }, - { - "fieldNameOrPath":"Id", - "sortDirection":"ascending", - "nullsPosition":"first" - } - ], - "id":"00B280000032Aih", - "sobjectType":"Account" - } - ``` - -??? note "listViewResults" - To execute the SOQL query for the list view and return the resulting data and presentation information, use salesforcerest.listViewResults and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_listviewresults.htm?search_text=list%20view) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object to which the list view applies.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>listViewID</td> - <td>The ID of the list view (obtained by `listViews` operation).</td> - <td>Yes</td> - <td>00B28000002yqeVEAQ</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.listViewResults> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <listViewID>{$ctx:listViewID}</listViewID> - </salesforcerest.listViewResults> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listViewResults operation. 
- - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - "listViewID":"00B28000002yqeVEAQ", - } - ``` - - **Sample Response** - - Given below is a sample response for the listViewResults operation. - - ```json - { - "size":0, - "records":[ - - ], - "columns":[ - { - "fieldNameOrPath":"Name", - "sortDirection":"ascending", - "hidden":false, - "sortIndex":0, - "ascendingLabel":"Z-A", - "label":"Account Name", - "sortable":true, - "type":"string", - "descendingLabel":"A-Z", - "selectListItem":"Name" - }, - . - . - ], - "id":"00B280000032Aih", - "label":"New This Week", - "developerName":"NewThisWeek", - "done":true - } - ``` - -### Process Rules - -??? note "listProcessRules" - To retrieve the list of process rules in the organization, use salesforcerest.listProcessRules. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_process_rules.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.listProcessRules/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listProcessRules operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the listProcessRules operation. - - ```json - { - "rules":{ - - } - } - ``` - -??? note "getSpecificProcessRule" - To retrieve the metadata for a specific sObject process rule, use salesforcerest.getSpecificProcessRule and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_process_rules_particular.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object whose process rule you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>workflowRuleId</td> - <td>The ID of the process rule. You can get IDs using operation `listProcessRules`.</td> - <td>Yes</td> - <td>01QD0000000APli</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.getSpecificProcessRule> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <workflowRuleId>{$ctx:workflowRuleId}</workflowRuleId> - </salesforcerest.getSpecificProcessRule> - ``` - - **Sample request** - - The following is a sample request that can be handled by the getSpecificProcessRule operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - "workflowRuleId": "01QD0000000APli", - } - ``` - - **Sample Response** - - Given below is a sample response for the getSpecificProcessRule operation. - - ```json - { - "actions" : [ { - "id" : "01VD0000000D2w7", - "name" : "ApprovalProcessTask", - "type" : "Task" - } ], - "description" : null, - "id" : "01QD0000000APli", - "name" : "My Rule", - "namespacePrefix" : null, - "object" : "Account" - } - ``` - -### Queries - -??? 
note "query" - To retrieve data from an object, use salesforcerest.query and specify the following properties. If you want your results to include deleted records in the Recycle Bin, use salesforcerest.queryAll in place of salesforcerest.query. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value<th> - </tr> - <tr> - <td>queryString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - <td>select id, name from Account</td> - </tr> - </table> - - **Sample configuration** - - query: - - ```xml - <salesforcerest.query> - <queryString>{$ctx:queryString}</queryString> - </salesforcerest.query> - ``` - - queryAll: - - ```xml - <salesforcerest.queryAll> - <queryString>{$ctx:queryString}</queryString> - </salesforcerest.queryAll> - ``` - - **Sample request** - - The following is a sample request that can be handled by the query operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "queryString": "select id, name from Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the query operation. - - ```json - { - "done" : false, - "totalSize" : 2014, - "nextRecordsUrl" : "/services/data/v20.0/query/01gD0000002HU6KIAW-2000", - "records" : - [ - { - "attributes" : - { - "type" : "Account", - "url" : "/services/data/v20.0/sobjects/Account/001D000000IRFmaIAH" - }, - "Name" : "Test 1" - }, - { - "attributes" : - { - "type" : "Account", - "url" : "/services/data/v20.0/sobjects/Account/001D000000IomazIAB" - }, - "Name" : "Test 2" - }, - - ... - - ] - } - ``` - -??? note "queryMore" - If the results from the query or queryAll operations are too large, the first batch of results is returned along with an ID that you can use with salesforcerest.queryMore to get additional results. If you want your results to include deleted records in the Recycle Bin, use salesforcerest.queryAllMore in place of salesforcerest.queryMore. See the [related API documentation for queryMore](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm) and [queryAllMore](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>nextRecordsUrl</td> - <td>The query identifier for retrieving additional results.</td> - <td>Yes</td> - <td>QWE45HUJ39D9UISD00</td> - </tr> - </table> - - **Sample configuration** - - queryMore: - - ```xml - <salesforcerest.queryMore> - <nextRecordsUrl>{$ctx:nextRecordsUrl}</nextRecordsUrl> - </salesforcerest.queryMore> - ``` - - queryAllMore: - - ```xml - <salesforcerest.queryAllMore> - <nextRecordsUrl>{$ctx:nextRecordsUrl}</nextRecordsUrl> - </salesforcerest.queryAllMore> - ``` - - **Sample request** - - The following is a sample request that can be handled by the queryMore operation. 
- - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "nextRecordsUrl": "QWE45HUJ39D9UISD00", - } - ``` - - **Sample Response** - - Given below is a sample response for the queryMore operation. - - ```json - { - "done" : true, - "totalSize" : 3214, - "records" : [...] - } - ``` - -??? note "queryPerformanceFeedback" - To get feedback on how Salesforce will execute your query, use the salesforcerest.queryPerformanceFeedback operation. It uses the Query resource along with the explain parameter to get feedback. Salesforce analyzes each query to find the optimal approach to obtain the query results. Depending on the query and query filters, an index or internal optimization might be used. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query_explain.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>queryString</td> - <td>The SQL query to use to get feedback for a query.</td> - <td>Yes</td> - <td>select id, name from Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.queryPerformanceFeedback> - <queryString>{$ctx:queryString}</queryString> - </salesforcerest.queryPerformanceFeedback> - ``` - - **Sample request** - - The following is a sample request that can be handled by the queryPerformanceFeedback operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "queryString": "select id, name from Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the queryPerformanceFeedback operation. - - ```json - { - "plans":[ - { - "leadingOperationType":"TableScan", - "relativeCost":2.8324836601307193, - "sobjectCardinality":2549, - "fields":[ - - ], - "cardinality":2549, - "sobjectType":"Account" - } - ] - } - ``` - -??? note "listviewQueryPerformanceFeedback" - For retrieving query performance feedback on a report or list view, use salesforcerest.listviewQueryPerformanceFeedback and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query_explain.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>listViewID</td> - <td>The ID of the report or list view to get feedback for a query.</td> - <td>Yes</td> - <td>00B28000002yqeVEAQ</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.listviewQueryPerformanceFeedback> - <listViewID>{$ctx:listViewID}</listViewID> - </salesforcerest.listviewQueryPerformanceFeedback> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listviewQueryPerformanceFeedback operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "listViewID":"00B28000002yqeVEAQ", - } - ``` - - **Sample Response** - - Given below is a sample response for the listviewQueryPerformanceFeedback operation. 
- - ```json - { - "plans":[ - { - "leadingOperationType":"Index", - "relativeCost":0, - "sobjectCardinality":2549, - "fields":[ - "CreatedDate" - ], - "cardinality":0, - "sobjectType":"Account" - }, - . - . - ] - } - ``` - -### Quick Actions - -??? note "quickActions" - To retrieve a list of global actions, use salesforcerest.quickActions. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_quickactions.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>queryString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.quickActions/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the quickActions operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the quickActions operation. - - ```json - { - "output":"[ - {\"label\":\"Log a Call\", - \"name\":\"LogACall\", - \"type\":\"LogACall\", - \"urls\":{\"defaultValues\":\"/services/data/v32.0/quickActions/LogACall/defaultValues\",\"quickAction\":\"/services/data/v32.0/quickActions/LogACall\",\"describe\":\"/services/data/v32.0/quickActions/LogACall/describe\",\"defaultValuesTemplate\":\"/services/data/v32.0/quickActions/LogACall/defaultValues/{ID}\"}}, - . - . - ]" - } - ``` - -??? note "sObjectAction" - To retrieve a list of object-specific actions, use salesforcerest.sObjectAction and specify the following property. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_quickactions.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object for which you want to retrieve a list of quick actions.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectAction> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.sObjectAction> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectAction operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName": "Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectAction operation. - - ```json - { - "output":"[ - {\"label\":\"Log a Call\", - \"name\":\"LogACall\",\"type\":\"LogACall\", - \"urls\":{\"defaultValues\":\"/services/data/v32.0/quickActions/LogACall/defaultValues\", - \"quickAction\":\"/services/data/v32.0/quickActions/LogACall\", - \"describe\":\"/services/data/v32.0/quickActions/LogACall/describe\", - \"defaultValuesTemplate\":\"/services/data/v32.0/quickActions/LogACall/defaultValues/{ID}\"}}, - . - . - ]" - } - ``` - -??? note "getSpecificAction" - To retrieve a specific action, use salesforcerest.getSpecificAction and specify the following properties. 
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_quickactions.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>actionName</td>
-            <td>The name of the action to return.</td>
-            <td>Yes</td>
-            <td>hariprasath__LogACall</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getSpecificAction>
-        <actionName>{$ctx:actionName}</actionName>
-    </salesforcerest.getSpecificAction>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getSpecificAction operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "actionName":"hariprasath__LogACall"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the getSpecificAction operation.
-
-    ```json
-    {
-        "iconName":null,
-        "targetRecordTypeId":null,
-        "targetSobjectType":"Task",
-        "canvasApplicationName":null,
-        "label":"Log a Call",
-        "accessLevelRequired":null,
-        "icons":[
-            {
-                "width":0,
-                "theme":"theme4",
-                "contentType":"image/svg+xml",
-                "url":"https://kesavan-dev-ed.my.salesforce.com/img/icon/t4v32/action/log_a_call.svg",
-                "height":0
-            },
-            .
-            .
-        ],
-        "targetParentField":null,
-        "iconUrl":"https://kesavan-dev-ed.my.salesforce.com/img/icon/log_a_call_32.png",
-        "height":null
-    }
-    ```
-
-??? note "getDescribeSpecificAction"
-    To retrieve the description of a specific action, use salesforcerest.getDescribeSpecificAction and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_quickactions.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>actionName</td>
-            <td>The action whose description you want to return.</td>
-            <td>Yes</td>
-            <td>hariprasath__LogACall</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getDescribeSpecificAction>
-        <actionName>{$ctx:actionName}</actionName>
-    </salesforcerest.getDescribeSpecificAction>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getDescribeSpecificAction operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "actionName":"hariprasath__LogACall"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the getDescribeSpecificAction operation.
-
-    ```json
-    {
-        "iconName":null,
-        "targetRecordTypeId":null,
-        "targetSobjectType":"Task",
-        "canvasApplicationName":null,
-        "label":"Log a Call",
-        "accessLevelRequired":null,
-        "icons":[
-            {
-                "width":0,
-                "theme":"theme4",
-                "contentType":"image/svg+xml",
-                "url":"https://kesavan-dev-ed.my.salesforce.com/img/icon/t4v32/action/log_a_call.svg",
-                "height":0
-            },
-            .
-            .
-        ],
-        "targetParentField":null,
-        "iconUrl":"https://kesavan-dev-ed.my.salesforce.com/img/icon/log_a_call_32.png",
-        "height":null
-    }
-    ```
-
-??? 
note "getDefaultValueOfAction" - To return a specific action’s default values, including default field values, use salesforcerest.getDefaultValueOfAction and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_quickactions.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>actionName</td> - <td>The specific action.</td> - <td>Yes</td> - <td>hariprasath__LogACall</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.getDefaultValueOfAction> - <actionName>{$ctx:actionName}</actionName> - </salesforcerest.getDefaultValueOfAction> - ``` - - **Sample request** - - The following is a sample request that can be handled by the getDefaultValueOfAction operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "actionName":"hariprasath__LogACall", - } - ``` - - **Sample Response** - - Given below is a sample response for the getDefaultValueOfAction operation. - - ```json - { - "WhoId":null, - "Description":null, - "WhatId":null, - "attributes":{ - "type":"Task" - }, - "Subject":"Call" - } - ``` - -### Records - -??? note "create" - To create a record, use salesforcerest.create and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_create.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object for which you will create a record.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>fieldAndValue</td> - <td>The .json format property used to create the record. Include all mandatory fields according to the requirements for the specified sObject.</td> - <td>Yes</td> - <td><pre>{ - "name": "wso2", - "description":"This Account belongs to WSO2"} - </pre></td> - </tr> - </table> - - > **Note**: For example, if you are creating a record for the Account sObject, "name" is a mandatory parameter, and you might want to include the optional description, so the fieldAndValue property would look like this: - > ```json - > { - > "name":"wso2", - > "description":"This account belongs to WSO2" - > } - > ``` - - **Sample configuration** - - ```xml - <salesforcerest.create> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue> - </salesforcerest.create> - ``` - - **Sample request** - - The following is a sample request that can be handled by the create operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account",, - "fieldAndValue": { - "name": "wso2", - "description":"This Account belongs to WSO2" - } - } - ``` - - **Sample Response** - - Given below is a sample response for the create operation. - - ```json - { - "success":true, - "id":"0010K00001uiAn8QAE", - "errors":[ - - ] - } - ``` - -??? 
note "createMultipleRecords" - To create multiple records of the same sObject type, use salesforcerest.createMultipleRecords and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_composite_sobject_tree_flat.htm#topic-title) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object for which you will create a record.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>fieldAndValue</td> - <td>The .json format property, which specifies each record as an entry within the records array. Include all mandatory fields according to the requirements for the specified sObject.</td> - <td>Yes</td> - <td><pre>{ - "records": [ - { - "attributes": {"type": "Account", "referenceId": "ref1"}, - "name": "wso2", - "phone": "1111111", - "website": "www.salesforce1.com" - }, - { - "attributes": {"type": "Account", "referenceId": "ref2"}, - "name": "slwso2", - "phone": "22222222", - "website": "www.salesforce2.com" - }] - } - </pre></td> - </tr> - </table> - - > **Note**: For example, if you are creating a record for the Account sObject, "name" is a mandatory parameter, and you might want to include the optional description, so the fieldAndValue property would look like this: - > ```json - > { - > "records": [ - > { - > "attributes": {"type": "Account", "referenceId": "ref1"}, - > "name": "wso2", - > "phone": "1111111", - > "website": "www.salesforce1.com" - > }, - > { - > "attributes": {"type": "Account", "referenceId": "ref2"}, - > "name": "slwso2", - > "phone": "22222222", - > "website": "www.salesforce2.com" - > }] - > } - > ``` - - **Sample configuration** - - ```xml - <salesforcerest.createMultipleRecords> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue> - </salesforcerest.createMultipleRecords> - ``` - - **Sample request** - - The following is a sample request that can be handled by the createMultipleRecords operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account",, - "fieldAndValue": { - "records": [ - { - "attributes": {"type": "Account", "referenceId": "ref1"}, - "name": "wso2", - "phone": "1111111", - "website": "www.salesforce1.com" - }, - { - "attributes": {"type": "Account", "referenceId": "ref2"}, - "name": "slwso2", - "phone": "22222222", - "website": "www.salesforce2.com" - }] - } - } - ``` - - **Sample Response** - - Given below is a sample response for the createMultipleRecords operation. - - ```json - { - "hasErrors" : false, - "results" : [{ - "referenceId" : "ref1", - "id" : "001D000000K1YFjIAN" - },{ - "referenceId" : "ref2", - "id" : "001D000000K1YFkIAN" - },{ - "referenceId" : "ref3", - "id" : "001D000000K1YFlIAN" - },{ - "referenceId" : "ref4", - "id" : "001D000000K1YFmIAN" - }] - } - ``` - -??? note "createNestedRecords" - To create nested records for a specific sObject, use salesforcerest.createNestedRecords and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_composite_sobject_tree_create.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The type of object for which you will create a record.</td> - <td>Yes</td> - <td></td> - </tr> - <tr> - <td>fieldAndValue</td> - <td>The .json format property, which specifies each record as an entry within the records array. Include all mandatory fields according to the requirements for the specified sobject.</td> - <td>Yes</td> - <td><pre>{ - "records" :[{ - "attributes" : {"type" : "Account", "referenceId" : "ref1"}, - "name" : "SampleAccount1", - "phone" : "1234567890", - "website" : "www.salesforce.com", - "numberOfEmployees" : "100", - "type" : "Analyst", - "industry" : "Banking", - "Contacts" : { - "records" : [{ - "attributes" : {"type" : "Contact", "referenceId" : "ref2"}, - "lastname" : "Smith", - "Title" : "President", - "email" : "sample@salesforce.com" - },{ - "attributes" : {"type" : "Account", "referenceId" : "ref3"}, - "lastname" : "Evans", - "title" : "Vice President", - "email" : "sample@salesforce.com" - }] - } - },{ - "attributes" : {"type" : "Account", "referenceId" : "ref4"}, - "name" : "SampleAccount2", - "phone" : "1234567890", - "website" : "www.salesforce.com", - "numberOfEmployees" : "52000", - "type" : "Analyst", - "industry" : "Banking", - "childAccounts" : { - "records" : [{ - "attributes" : {"type" : "Account", "referenceId" : "ref5"}, - "name" : "SampleChildAccount1", - "phone" : "1234567890", - "website" : "www.salesforce.com", - "numberOfEmployees" : "100", - "type" : "Analyst", - "industry" : "Banking" - }] - }, - "Contacts" : { - "records" : [{ - "attributes" : {"type" : "Contact", "referenceId" : "ref6"}, - "lastname" : "Jones", - "title" : "President", - "email" : "sample@salesforce.com" - }] - } - }] - } - </pre></td> - </tr> - </table> - - > **Note**: For example, if you are creating records for the Account sObject, "name" is a mandatory parameter, and you might want to include additional optional values for each record, so the fieldAndValue property might look like this: - > ```json - > { - > "records" :[{ - > "attributes" : {"type" : "Account", "referenceId" : "ref1"}, - > "name" : "SampleAccount1", - > "phone" : "1234567890", - > "website" : "www.salesforce.com", - > "numberOfEmployees" : "100", - > "type" : "Analyst", - > "industry" : "Banking", - > "Contacts" : { - > "records" : [{ - > "attributes" : {"type" : "Contact", "referenceId" : "ref2"}, - > "lastname" : "Smith", - > "Title" : "President", - > "email" : "sample@salesforce.com" - > },{ - > "attributes" : {"type" : "Contact", "referenceId" : "ref3"}, - > "lastname" : "Evans", - > "title" : "Vice President", - > "email" : "sample@salesforce.com" - > }] - > } - > },{ - > "attributes" : {"type" : "Account", "referenceId" : "ref4"}, - > "name" : "SampleAccount2", - > "phone" : "1234567890", - > "website" : "www.salesforce.com", - > "numberOfEmployees" : "52000", - > "type" : "Analyst", - > "industry" : "Banking", - > "childAccounts" : { - > "records" : [{ - > "attributes" : {"type" : "Account", "referenceId" : "ref5"}, - > "name" : "SampleChildAccount1", - > "phone" : "1234567890", - > "website" : "www.salesforce.com", - > "numberOfEmployees" : "100", - > "type" : "Analyst", - > "industry" : "Banking" - > }] - > }, - > "Contacts" : { - > "records" : [{ - > "attributes" : {"type" : "Contact", "referenceId" : "ref6"}, - > "lastname" : "Jones", - > "title" : "President", - > "email" : "sample@salesforce.com" - > }] - > } - > }] - > } - > ``` - - 
**Sample configuration**
-
-    ```xml
-    <salesforcerest.createNestedRecords>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
-    </salesforcerest.createNestedRecords>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the createNestedRecords operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "fieldAndValue":
-        {
-            "records" :[{
-                "attributes" : {"type" : "Account", "referenceId" : "ref1"},
-                "name" : "SampleAccount1",
-                "phone" : "1234567890",
-                "website" : "www.salesforce.com",
-                "numberOfEmployees" : "100",
-                "type" : "Analyst",
-                "industry" : "Banking",
-                "Contacts" : {
-                    "records" : [{
-                        "attributes" : {"type" : "Contact", "referenceId" : "ref2"},
-                        "lastname" : "Smith",
-                        "title" : "President",
-                        "email" : "sample@salesforce.com"
-                    },{
-                        "attributes" : {"type" : "Contact", "referenceId" : "ref3"},
-                        "lastname" : "Evans",
-                        "title" : "Vice President",
-                        "email" : "sample@salesforce.com"
-                    }]
-                }
-            },{
-                "attributes" : {"type" : "Account", "referenceId" : "ref4"},
-                "name" : "SampleAccount2",
-                "phone" : "1234567890",
-                "website" : "www.salesforce.com",
-                "numberOfEmployees" : "52000",
-                "type" : "Analyst",
-                "industry" : "Banking",
-                "childAccounts" : {
-                    "records" : [{
-                        "attributes" : {"type" : "Account", "referenceId" : "ref5"},
-                        "name" : "SampleChildAccount1",
-                        "phone" : "1234567890",
-                        "website" : "www.salesforce.com",
-                        "numberOfEmployees" : "100",
-                        "type" : "Analyst",
-                        "industry" : "Banking"
-                    }]
-                },
-                "Contacts" : {
-                    "records" : [{
-                        "attributes" : {"type" : "Contact", "referenceId" : "ref6"},
-                        "lastname" : "Jones",
-                        "title" : "President",
-                        "email" : "sample@salesforce.com"
-                    }]
-                }
-            }]
-        }
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the createNestedRecords operation.
-
-    ```json
-    {
-        "hasErrors" : false,
-        "results" : [{
-            "referenceId" : "ref1",
-            "id" : "001D000000K0fXOIAZ"
-        },{
-            "referenceId" : "ref4",
-            "id" : "001D000000K0fXPIAZ"
-        },{
-            "referenceId" : "ref2",
-            "id" : "003D000000QV9n2IAD"
-        },{
-            "referenceId" : "ref3",
-            "id" : "003D000000QV9n3IAD"
-        },{
-            "referenceId" : "ref5",
-            "id" : "001D000000K0fXQIAZ"
-        },{
-            "referenceId" : "ref6",
-            "id" : "003D000000QV9n4IAD"
-        }]
-    }
-    ```
-
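-    If you need to correlate the IDs that Salesforce generates with the referenceId values you submitted, the response can be inspected with standard Synapse mediators. The following is a minimal, illustrative sketch (it is not part of the connector documentation) that assumes the response shape shown above:
-
-    ```xml
-    <!-- Illustrative only: log the outcome of the nested create using the response above -->
-    <log level="custom">
-        <!-- false when every record in the tree was created successfully -->
-        <property name="treeCreateHasErrors" expression="json-eval($.hasErrors)"/>
-        <!-- in the sample response above, results[0] is the record submitted as referenceId "ref1" -->
-        <property name="ref1RecordId" expression="json-eval($.results[0].id)"/>
-    </log>
-    ```
-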
-??? note "update"
-    To update a record, use salesforcerest.update and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_update_fields.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object type of the record you are updating.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>fieldAndValue</td>
-            <td>The .json format property with the new definition for the record.</td>
-            <td>Yes</td>
-            <td><pre>{
-    "name": "wso2",
-    "description":"This Account belongs to WSO2"
-}
-</pre></td>
-        </tr>
-        <tr>
-            <td>Id</td>
-            <td>The ID of the record you are updating.</td>
-            <td>Yes</td>
-            <td>00128000002OOhD</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.update>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
-        <Id>{$ctx:Id}</Id>
-    </salesforcerest.update>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the update operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "Id":"00128000002OOhD",
-        "fieldAndValue": {
-            "name": "wso2",
-            "description":"This Account belongs to WSO2"
-        }
-    }
-    ```
-
-??? note "delete"
-    To delete a record, use salesforcerest.delete and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_delete_record.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object type of the record.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>Id</td>
-            <td>The ID of the record you are deleting.</td>
-            <td>Yes</td>
-            <td>00128000002OOhD</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.delete>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <Id>{$ctx:Id}</Id>
-    </salesforcerest.delete>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the delete operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "Id":"00128000002OOhD"
-    }
-    ```
-
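-    As a rough sketch of how this operation is typically wired up (illustrative only, not from the connector documentation), the object type and record ID can be copied from an incoming JSON payload into context properties before the call:
-
-    ```xml
-    <!-- Illustrative sketch: assumes an inbound payload such as
-         {"sObjectName":"Account","Id":"00128000002OOhD"} -->
-    <property name="sObjectName" expression="json-eval($.sObjectName)"/>
-    <property name="Id" expression="json-eval($.Id)"/>
-    <salesforcerest.delete>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <Id>{$ctx:Id}</Id>
-    </salesforcerest.delete>
-    ```
-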
-??? note "recentlyViewedItem"
-    To retrieve the recently viewed items, use salesforcerest.recentlyViewedItem and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_recent_items.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>limit</td>
-            <td>The maximum number of records to be returned.</td>
-            <td>Yes</td>
-            <td>5</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.recentlyViewedItem>
-        <limit>{$ctx:limit}</limit>
-    </salesforcerest.recentlyViewedItem>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the recentlyViewedItem operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "limit":"5"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the recentlyViewedItem operation.
-
-    ```json
-    {
-        "output":"[{\"attributes\":
-            {\"type\":\"User\",
-            \"url\":\"/services/data/v32.0/sobjects/User/00528000000ToIrAAK\"},
-            \"Id\":\"00528000000ToIrAAK\",
-            \"Name\":\"kesan yoga\"},
-        .
-        .
-        ]"
-    }
-    ```
-
-??? note "retrieveFieldValues"
-    To retrieve specific field values for a specific sObject, use salesforcerest.retrieveFieldValues and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_get_field_values.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object type of the record whose field values you want to retrieve.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>rowId</td>
-            <td>The ID of the record whose values you want to retrieve.</td>
-            <td>Yes</td>
-            <td>00128000005YjDnAAK</td>
-        </tr>
-        <tr>
-            <td>fields</td>
-            <td>A comma-separated list of fields whose values you want to retrieve.</td>
-            <td>Yes</td>
-            <td>AccountNumber,BillingPostalCode</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.retrieveFieldValues>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <rowId>{$ctx:rowId}</rowId>
-        <fields>{$ctx:fields}</fields>
-    </salesforcerest.retrieveFieldValues>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the retrieveFieldValues operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName": "Account",
-        "rowId":"00128000005YjDnAAK",
-        "fields":"AccountNumber,BillingPostalCode"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the retrieveFieldValues operation.
-
-    ```json
-    {
-        "AccountNumber" : "CD656092",
-        "BillingPostalCode" : "27215"
-    }
-    ```
-
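-    For reference, this operation maps to the sObject Rows resource of the Salesforce REST API with a `fields` query parameter. With the sample values above (and assuming API version v32.0), the underlying HTTP call would look roughly as follows:
-
-    ```bash
-    GET https://(your_instance).salesforce.com/services/data/v32.0/sobjects/Account/00128000005YjDnAAK?fields=AccountNumber,BillingPostalCode
-    ```
-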
-??? note "upsert"
-    To create or update (upsert) a record using an external ID, use salesforcerest.upsert and specify the following properties. This operation creates a record or updates an existing record based on the value of the specified external ID field. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_upsert.htm) for more information.
-
-    * If the specified value does not exist, a new record is created.
-    * If a record does exist with that value, the field values specified in the request body are updated.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object type whose value you want to upsert.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>externalIDField</td>
-            <td>The external ID field of the sObject.</td>
-            <td>Yes</td>
-            <td>sample__c</td>
-        </tr>
-        <tr>
-            <td>Id</td>
-            <td>The value of the external ID field.</td>
-            <td>Yes</td>
-            <td>15222</td>
-        </tr>
-        <tr>
-            <td>fieldAndValue</td>
-            <td>The .json format property/payload used to create or update the record.</td>
-            <td>Yes</td>
-            <td><pre>{
-    "Name":"john"
-}
-</pre></td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.upsert>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <externalIDField>{$ctx:externalIDField}</externalIDField>
-        <Id>{$ctx:Id}</Id>
-        <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue>
-    </salesforcerest.upsert>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the upsert operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "externalIDField":"sample__c",
-        "Id":"15222",
-        "fieldAndValue":
-        {
-            "Name":"john"
-        }
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the upsert operation.
-
-    ```json
-    {
-        "id" : "00190000001pPvHAAU",
-        "errors" : [ ],
-        "success" : true
-    }
-    ```
-
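-    To make the create-or-update behaviour concrete: the request above creates an Account whose `sample__c` value is `15222` if none exists. Sending a second, hypothetical request with the same external ID but a different payload (illustrative values below) updates that same record instead of creating a new one:
-
-    ```json
-    {
-        "sObjectName":"Account",
-        "externalIDField":"sample__c",
-        "Id":"15222",
-        "fieldAndValue":
-        {
-            "Name":"john-updated"
-        }
-    }
-    ```
-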
-??? note "getDeleted"
-    To retrieve a list of individual records that have been deleted within the given timespan for the specified object, use salesforcerest.getDeleted. The date and time should be provided in ISO 8601 format: YYYY-MM-DDThh:mm:ss+hh:mm. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_getdeleted.htm) for more information.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object where you want to look for deleted records.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>startTime</td>
-            <td>Starting date/time (Coordinated Universal Time (UTC), not the local timezone) of the timespan for which to retrieve the data.</td>
-            <td>Yes</td>
-            <td>2015-10-05T12:30:30+05:30</td>
-        </tr>
-        <tr>
-            <td>endTime</td>
-            <td>Ending date/time (Coordinated Universal Time (UTC), not the local timezone) of the timespan for which to retrieve the data.</td>
-            <td>Yes</td>
-            <td>2015-10-10T20:30:30+05:30</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getDeleted>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <startTime>{$ctx:startTime}</startTime>
-        <endTime>{$ctx:endTime}</endTime>
-    </salesforcerest.getDeleted>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getDeleted operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "startTime":"2015-10-05T12:30:30+05:30",
-        "endTime":"2015-10-10T20:30:30+05:30"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the getDeleted operation.
-
-    ```json
-    {
-        "earliestDateAvailable":"2018-09-20T07:52:00.000+0000",
-        "deletedRecords":[
-
-        ],
-        "latestDateCovered":"2018-10-27T15:00:00.000+0000"
-    }
-    ```
-
-??? note "getUpdated"
-    To retrieve a list of individual records that have been updated within the given timespan for the specified object, use salesforcerest.getUpdated. The date and time should be provided in ISO 8601 format: YYYY-MM-DDThh:mm:ss+hh:mm. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_getupdated.htm) for more information.
-
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The object where you want to look for updated records.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-        <tr>
-            <td>startTime</td>
-            <td>Starting date/time (Coordinated Universal Time (UTC), not the local timezone) of the timespan for which to retrieve the data.</td>
-            <td>Yes</td>
-            <td>2015-10-05T12:30:30+05:30</td>
-        </tr>
-        <tr>
-            <td>endTime</td>
-            <td>Ending date/time (Coordinated Universal Time (UTC), not the local timezone) of the timespan for which to retrieve the data.</td>
-            <td>Yes</td>
-            <td>2015-10-10T20:30:30+05:30</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getUpdated>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-        <startTime>{$ctx:startTime}</startTime>
-        <endTime>{$ctx:endTime}</endTime>
-    </salesforcerest.getUpdated>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getUpdated operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName":"Account",
-        "startTime":"2015-10-05T12:30:30+05:30",
-        "endTime":"2015-10-10T20:30:30+05:30"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the getUpdated operation.
-
-    ```json
-    {
-        "ids":[
-
-        ],
-        "latestDateCovered":"2018-10-27T15:00:00.000+0000"
-    }
-    ```
-
-### sObjects
-
-??? note "describeGlobal"
-    To retrieve a list of the objects that are available in the system, use salesforcerest.describeGlobal. You can then get metadata for an object or objects as described in the next sections. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_describeGlobal.htm) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.describeGlobal/>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the describeGlobal operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the describeGlobal operation. 
- - ```json - { - "maxBatchSize":200, - "sobjects":[ - { - "updateable":false, - "activateable":false, - "deprecatedAndHidden":false, - "layoutable":false, - "custom":false, - "deletable":false, - "replicateable":false, - "undeletable":false, - "label":"Accepted Event Relation", - "keyPrefix":null, - "searchable":false, - "queryable":true, - "mergeable":false, - "urls":{ - "rowTemplate":"/services/data/v32.0/sobjects/AcceptedEventRelation/{ID}", - "describe":"/services/data/v32.0/sobjects/AcceptedEventRelation/describe", - "sobject":"/services/data/v32.0/sobjects/AcceptedEventRelation" - }, - "createable":false, - "feedEnabled":false, - "retrieveable":true, - "name":"AcceptedEventRelation", - "customSetting":false, - "labelPlural":"Accepted Event Relations", - "triggerable":false - }, - . - . - ], - "encoding":"UTF-8" - } - ``` - -??? note "describeSObject" - To get metadata (such as name, label, and fields, including the field properties) for a specific object type, use salesforcerest.describeSObject and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_describe.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object type whose metadata you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.describeSObject> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.describeSObject> - ``` - - **Sample request** - - The following is a sample request that can be handled by the describeSObject operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the describeSObject operation. - - ```json - { - "updateable":true, - "activateable":false, - "childRelationships":[ - { - "relationshipName":"ChildAccounts", - "field":"ParentId", - "deprecatedAndHidden":false, - "childSObject":"Account", - "cascadeDelete":false, - "restrictedDelete":false - }, - { - "relationshipName":"AccountCleanInfos", - "field":"AccountId", - "deprecatedAndHidden":false, - "childSObject":"AccountCleanInfo", - "cascadeDelete":true, - "restrictedDelete":false - }, - . - ] - } - ``` - -??? note "listAvailableApiVersion" - To retrieve a list of summary information about each REST API version that is currently available, use salesforcerest.listAvailableApiVersion. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_versions.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.listAvailableApiVersion/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listAvailableApiVersion operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the listAvailableApiVersion operation. 
- - ```json - { - "output":"[ - {\"label\":\"Winter '11\",\"url\":\"/services/data/v20.0\",\"version\":\"20.0\"}, - . - . - ]" - } - ``` - -??? note "listOrganizationLimits" - To retrieve the limit information for your organization, use salesforcerest.listOrganizationLimits. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_limits.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.listOrganizationLimits/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listOrganizationLimits operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the listOrganizationLimits operation. - - ```json - { - "DailyApiRequests":{ - "Dataloader Bulk":{ - "Max":0, - "Remaining":0 - }, - "test":{ - "Max":0, - "Remaining":0 - }, - "Max":5000, - "Salesforce Mobile Dashboards":{ - "Max":0, - "Remaining":0 - }, - . - . - } - ``` - -??? note "listResourcesByApiVersion" - To retrieve the resources that are available in the specified API version, use salesforcerest.listResourcesByApiVersion. You can then get the details of those resources. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_discoveryresource.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.listResourcesByApiVersion/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the listResourcesByApiVersion operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the listResourcesByApiVersion operation. - - ```json - { - "tooling":"/services/data/v32.0/tooling", - "folders":"/services/data/v32.0/folders", - "eclair":"/services/data/v32.0/eclair", - "prechatForms":"/services/data/v32.0/prechatForms", - "chatter":"/services/data/v32.0/chatter", - "tabs":"/services/data/v32.0/tabs", - "appMenu":"/services/data/v32.0/appMenu", - "quickActions":"/services/data/v32.0/quickActions", - "queryAll":"/services/data/v32.0/queryAll", - "commerce":"/services/data/v32.0/commerce", - . - } - ``` - -??? note "sObjectBasicInfo" - To retrieve the individual metadata for the specified object, use salesforcerest.sObjectBasicInfo. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_basic_info.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object type whose metadata you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectBasicInfo> - <sObjectName>{$ctx:sObjectName}</sObjectName> - </salesforcerest.sObjectBasicInfo> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectBasicInfo operation. 
- - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectBasicInfo operation. - - ```json - { - "objectDescribe":{ - "updateable":true, - "activateable":false, - "deprecatedAndHidden":false, - "layoutable":true, - "custom":false, - "deletable":true, - "replicateable":true, - "undeletable":true, - "label":"Account", - "keyPrefix":"001", - "searchable":true, - "queryable":true, - "mergeable":true, - "urls":{ - "compactLayouts":"/services/data/v32.0/sobjects/Account/describe/compactLayouts", - "rowTemplate":"/services/data/v32.0/sobjects/Account/{ID}" - }, - "createable":true, - "feedEnabled":true, - "retrieveable":true, - "name":"Account", - "customSetting":false, - "labelPlural":"Accounts", - "triggerable":true - }, - . - } - ``` - -??? note "sObjectGetDeleted" - To retrieve a list of individual records that have been deleted within the given timespan for the specified object, use salesforcerest.sObjectGetDeleted. The date and time should be provided in ISO 8601 format:YYYY-MM-DDThh:mm:ss+hh:mm. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_basic_info.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object type whose metadata you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>startTime</td> - <td>Starting date/time (Coordinated Universal Time (UTC)—not local—timezone) of the timespan for which to retrieve the data.</td> - <td>Yes</td> - <td>2015-10-05T12:30:30+05:30</td> - </tr> - <tr> - <td>endTime</td> - <td>Ending date/time (Coordinated Universal Time (UTC)—not local—timezone) of the timespan for which to retrieve the data.</td> - <td>Yes</td> - <td>2015-10-10T20:30:30+05:30</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectGetDeleted> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <startTime>{$ctx:startTime}</startTime> - <endTime>{$ctx:endTime}</endTime> - </salesforcerest.sObjectGetDeleted> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectGetDeleted operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - "startTime":"2015-10-05T12:30:30+05:30", - "endTime":"2015-10-10T20:30:30+05:30", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectGetDeleted operation. 
- - ```json - { - "objectDescribe":{ - "updateable":true, - "activateable":false, - "deprecatedAndHidden":false, - "layoutable":true, - "custom":false, - "deletable":true, - "replicateable":true, - "undeletable":true, - "label":"Account", - "keyPrefix":"001", - "searchable":true, - "queryable":true, - "mergeable":true, - "urls":{ - "compactLayouts":"/services/data/v32.0/sobjects/Account/describe/compactLayouts", - "rowTemplate":"/services/data/v32.0/sobjects/Account/{ID}" - }, - "createable":true, - "feedEnabled":true, - "retrieveable":true, - "name":"Account", - "customSetting":false, - "labelPlural":"Accounts", - "triggerable":true - }, - . - } - ``` - -??? note "sObjectGetUpdated" - To retrieve a list of individual records that have been updated within the given timespan for the specified object, use salesforcerest.sObjectGetUpdated. The date and time should be provided in ISO 8601 format:YYYY-MM-DDThh:mm:ss+hh:mm. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_getupdated.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object type whose metadata you want to retrieve.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>startTime</td> - <td>Starting date/time (Coordinated Universal Time (UTC)—not local—timezone) of the timespan for which to retrieve the data.</td> - <td>Yes</td> - <td>2015-10-05T12:30:30+05:30</td> - </tr> - <tr> - <td>endTime</td> - <td>Ending date/time (Coordinated Universal Time (UTC)—not local—timezone) of the timespan for which to retrieve the data.</td> - <td>Yes</td> - <td>2015-10-10T20:30:30+05:30</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectGetUpdated> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <startTime>{$ctx:startTime}</startTime> - <endTime>{$ctx:endTime}</endTime> - </salesforcerest.sObjectGetUpdated> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectGetUpdated operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - "startTime":"2015-10-05T12:30:30+05:30", - "endTime":"2015-10-10T20:30:30+05:30", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectGetUpdated operation. - - ```json - { - "ids":[ - - ], - "latestDateCovered":"2018-10-27T15:00:00.000+0000" - } - ``` - -??? note "sObjectPlatformAction" - To retrieve the description of the PlatformAction, use salesforcerest.sObjectPlatformAction. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_sobject_platformaction.htm?search_text=PlatformAction) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.sObjectPlatformAction/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectPlatformAction operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectPlatformAction operation. 
- - ```json - { - "objectDescribe":{ - "updateable":false, - "activateable":false, - "deprecatedAndHidden":false, - "layoutable":false, - "custom":false, - "deletable":false, - "replicateable":false, - "undeletable":false, - "label":"Platform Action", - "keyPrefix":"0JV", - "searchable":false, - "queryable":true, - "mergeable":false, - "urls":{ - "rowTemplate":"/services/data/v32.0/sobjects/PlatformAction/{ID}", - "describe":"/services/data/v32.0/sobjects/PlatformAction/describe", - "sobject":"/services/data/v32.0/sobjects/PlatformAction" - }, - "createable":false, - "feedEnabled":false, - "retrieveable":false, - "name":"PlatformAction", - "customSetting":false, - "labelPlural":"Platform Actions", - "triggerable":false - }, - "recentItems":[ - - ] - } - ``` - -??? note "sObjectRows" - To retrieve details of a specific record, use salesforcerest.sObjectRows. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_retrieve.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectName</td> - <td>The object type of the record.</td> - <td>Yes</td> - <td>Account</td> - </tr> - <tr> - <td>rowId</td> - <td>The ID of the record whose details you want to retrieve.</td> - <td>Yes</td> - <td>00128000005YjDnAAK</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.sObjectRows> - <sObjectName>{$ctx:sObjectName}</sObjectName> - <rowId>{$ctx:rowId}</rowId> - </salesforcerest.sObjectRows> - ``` - - **Sample request** - - The following is a sample request that can be handled by the sObjectRows operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectName":"Account", - "rowId":"00128000005YjDnAAK", - } - ``` - - **Sample Response** - - Given below is a sample response for the sObjectRows operation. - - ```json - { - "AccountNumber" : "CD656092", - "BillingPostalCode" : "27215" - } - ``` - -### Search - -??? note "search" - To search for records, use salesforcerest.search and specify the search string. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_search.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>searchString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - <td>sample string</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.search> - <searchString>{$ctx:searchString}</searchString> - </salesforcerest.search> - ``` - - **Sample request** - - The following is a sample request that can be handled by the search operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "searchString": "FIND {map*} IN ALL FIELDS RETURNING Account (Id, Name), Contact, Opportunity, Lead", - } - ``` - - **Sample Response** - - Given below is a sample response for the search operation. 
- - ```json - { - {"output":"[{\"attributes\":{\"type\":\"Account\",\"url\":\"/services/data/v32.0/sobjects/Account/00128000005dMcSAAU\"},\"Id\":\"00128000005dMcSAAU\",\"Name\":\"GenePoint\"}]"} - } - ``` - -??? note "searchScopeAndOrder" - To retrieve the search scope and order for the currently logged-in user, use salesforcerest.searchScopeAndOrder. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_search_scope_order.htm) for more information. - - **Sample configuration** - - ```xml - <salesforcerest.searchScopeAndOrder/> - ``` - - **Sample request** - - The following is a sample request that can be handled by the searchScopeAndOrder operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - } - ``` - - **Sample Response** - - Given below is a sample response for the searchScopeAndOrder operation. - - ```json - { - {"output":"[]"} - } - ``` - -??? note "searchResultLayout" - To retrieve the search result layouts for one or more sObjects, use salesforcerest.searchResultLayout and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_retrieve_search_layouts.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - <th>Sample Value</th> - </tr> - <tr> - <td>sObjectNameList</td> - <td>A comma-delimited list of the objects whose search result layouts you want to retrieve.</td> - <td>Yes</td> - <td>Account,User</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcerest.searchResultLayout> - <sObjectNameList>{$ctx:sObjectNameList}</sObjectNameList> - </salesforcerest.searchResultLayout> - ``` - - **Sample request** - - The following is a sample request that can be handled by the searchResultLayout operation. - - ```json - { - "accessToken":"XXXXXXXXXXXX (Replace with your access token)", - "apiUrl":"https://(your_instance).salesforce.com", - "hostName": "https://login.salesforce.com", - "apiVersion": "v32.0", - "sObjectNameList": "Account,User", - } - ``` - - **Sample Response** - - Given below is a sample response for the searchResultLayout operation. - - ```json - { - {"output":"[{\"errorMsg\":null,\"label\":\"Search Results\",\"limitRows\":25,\"objectType\":\"Account\",\"searchColumns\":[{\"field\":\"Account.Name\",\"format\":null,\"label\":\"Account Name\",\"name\":\"Name\"},{\"field\":\"Account.Site\",\"format\":null,\"label\":\"Account Site\",\"name\":\"Site\"},.]"} - } - ``` - -??? note "searchSuggestedRecords" - To return a list of suggested records whose names match the user’s search string, use salesforcerest.searchSuggestedRecords and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/resources_search_suggest_records.htm?search_text=search%20Suggested%20records) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>stringForSearch</td>
-            <td>The search string for which to return suggested records.</td>
-            <td>Yes</td>
-            <td>hari</td>
-        </tr>
-        <tr>
-            <td>sObjectName</td>
-            <td>The sObject type that the search is scoped to.</td>
-            <td>Yes</td>
-            <td>Account</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.searchSuggestedRecords>
-        <stringForSearch>{$ctx:stringForSearch}</stringForSearch>
-        <sObjectName>{$ctx:sObjectName}</sObjectName>
-    </salesforcerest.searchSuggestedRecords>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the searchSuggestedRecords operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "sObjectName": "Account",
-        "stringForSearch": "hari"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the searchSuggestedRecords operation.
-
-    ```json
-    {"autoSuggestResults":[],"hasMoreResults":false}
-    ```
-
-### Users
-
-??? note "getUserInformation"
-    To retrieve information about a specific user, use salesforcerest.getUserInformation and specify the following property. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.198.0.api_rest.meta/api_rest/dome_process_rules.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-            <th>Sample Value</th>
-        </tr>
-        <tr>
-            <td>userId</td>
-            <td>The ID of the user whose information you want to retrieve.</td>
-            <td>Yes</td>
-            <td>00528000000yl7j</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getUserInformation>
-        <userId>{$ctx:userId}</userId>
-    </salesforcerest.getUserInformation>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getUserInformation operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "userId": "00528000000yl7j"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the getUserInformation operation.
-
-    ```json
-    {
-        "ProfileId":"00e28000000xIEQAA2",
-        "LastModifiedDate":"2016-11-29T05:40:45.000+0000",
-        "Address":{
-            "country":"LK",
-            "city":null,
-            "street":null,
-            "latitude":null,
-            "postalCode":null,
-            "geocodeAccuracy":null,
-            "state":null,
-            "longitude":null
-        },
-        "LanguageLocaleKey":"en_US",
-        "EmailPreferencesAutoBccStayInTouch":false,
-        .
-        .
-    }
-    ```
-
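-    As an illustrative sketch (the API resource below is hypothetical, not part of the connector documentation), this operation can be exposed through a REST API resource that maps a URI variable onto the `userId` property. It assumes the connection has already been initialized earlier in the sequence:
-
-    ```xml
-    <resource methods="GET" uri-template="/users/{userId}">
-        <inSequence>
-            <!-- copy the URI variable into the context property the operation reads -->
-            <property name="userId" expression="get-property('uri.var.userId')"/>
-            <salesforcerest.getUserInformation>
-                <userId>{$ctx:userId}</userId>
-            </salesforcerest.getUserInformation>
-            <respond/>
-        </inSequence>
-    </resource>
-    ```
-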
-    <table>
-    <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-    <th>Required</th>
-    <th>Sample Value</th>
-    </tr>
-    <tr>
-    <td>userId</td>
-    <td>The ID of the user whose password you want to reset.</td>
-    <td>Yes</td>
-    <td>00528000000yl7j</td>
-    </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.resetPassword>
-        <userId>{$ctx:userId}</userId>
-    </salesforcerest.resetPassword>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the resetPassword operation.
-
-    ```json
-    {
-        "accessToken":"XXXXXXXXXXXX (Replace with your access token)",
-        "apiUrl":"https://(your_instance).salesforce.com",
-        "hostName": "https://login.salesforce.com",
-        "apiVersion": "v32.0",
-        "userId": "00528000000yl7j"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response for the resetPassword operation.
-
-    ```json
-    {
-        "NewPassword" : "myNewPassword1234"
-    }
-    ```
-
-### Reports
-
-??? note "getReport"
-    To retrieve information about a report, use salesforcerest.getReport and specify the following property. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.234.0.api_analytics.meta/api_analytics/sforce_analytics_rest_api_getreportrundata.htm) for more information.
-
-    > **Note**: This operation is available only with Salesforce REST Connector v1.1.2 and above.
-
-    <table>
-    <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-    <th>Required</th>
-    <th>Sample Value</th>
-    </tr>
-    <tr>
-    <td>reportId</td>
-    <td>The ID of the report that you want to retrieve.</td>
-    <td>Yes</td>
-    <td>00O8d000004MWaGEAW</td>
-    </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforcerest.getReport>
-        <reportId>{$ctx:reportId}</reportId>
-    </salesforcerest.getReport>
-    ```
-
-    **Sample request**
-
-    The following is a sample request that can be handled by the getReport operation.
-
-    ```json
-    {
-        "reportId": "00O8d000004MWaGEAW"
-    }
-    ```
-
-    **Sample Response**
-
-    Given below is a sample response from the getReport operation.
-
-    ```json
-    {
-        "attributes": {
-            "describeUrl": "/services/data/v55.0/analytics/reports/00O8d000004MWaGEAW/describe",
-            "instancesUrl": "/services/data/v55.0/analytics/reports/00O8d000004MWaGEAW/instances",
-            "reportId": "00O8d000004MWaGEAW",
-            "reportName": "SampleReport",
-            "type": "Report"
-        },
-        "allData": true,
-        "factMap": {
-            "T!T": {
-                "aggregates": [
-                    {
-                        "label": "13",
-                        "value": 13
-                    }
-                ],
-                "rows": [
-                    {
-                        "dataCells": [
-                            {
-                                "label": "Customer - Direct",
-                                "recordId": "0018d00000FgQblAAF",
-                                "value": "Customer - Direct"
-                            },
-                            {
-                                "label": "Warm",
-                                "recordId": "0018d00000FgQblAAF",
-                                "value": "Warm"
-                            },
-                            {
-                                "label": "-",
-                                "recordId": "0018d00000FgQblAAF",
-                                "value": null
-                            },
-                            {
-                                "label": "16/08/2022",
-                                "value": "2022-08-15"
-                            },
-                            .
-                            .
-                            .
-                        ]
-                    }
-                ]
-            }
-        }
-    }
-    ```
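-
-As a closing illustration, a minimal sketch of an API resource that wires the getReport operation into an integration flow might look like the following. The API name, the payload shape, and all credential values are placeholders, not working values.
-
-```xml
-<api context="/reports" name="SampleReportAPI" xmlns="http://ws.apache.org/ns/synapse">
-    <resource methods="POST">
-        <inSequence>
-            <!-- Capture the report ID from a hypothetical {"reportId": "..."} payload -->
-            <property expression="json-eval($.reportId)" name="reportId" scope="default" type="STRING"/>
-            <!-- Placeholder credentials: replace with the values for your connected app -->
-            <salesforcerest.init>
-                <accessToken>XXXXXXXXXXXX</accessToken>
-                <apiUrl>https://(your_instance).salesforce.com</apiUrl>
-                <hostName>https://login.salesforce.com</hostName>
-                <apiVersion>v55.0</apiVersion>
-            </salesforcerest.init>
-            <salesforcerest.getReport>
-                <reportId>{$ctx:reportId}</reportId>
-            </salesforcerest.getReport>
-            <respond/>
-        </inSequence>
-        <outSequence/>
-        <faultSequence/>
-    </resource>
-</api>
-```
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-example.md b/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-example.md
deleted file mode 100644
index 49ec8f3f20..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-example.md
+++ /dev/null
@@ -1,247 +0,0 @@
-# Salesforce Rest API Connector Example
-
-The Salesforce REST Connector allows you to work with records in Salesforce, a web-based service that allows organizations to manage customer relationship management (CRM) data. 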
You can use the Salesforce connector to create, query, retrieve, update, and delete records in your organization's Salesforce data. The connector uses the [Salesforce REST API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_what_is_rest_api.htm) to interact with Salesforce.
-
-## What you'll build
-
-This example explains how to use the Salesforce client to connect with the Salesforce instance and perform the following operations:
-
-* Create an account.
-
-    The user sends the request payload that includes sObjects (any object that can be stored in the Lightning platform database), to create a new Account object in Salesforce. This request is sent to the integration runtime by invoking the Salesforce connector API.
-
-* Execute a SOQL query to retrieve the Account Name and ID in all the existing accounts.
-
-    In this example, we use the Salesforce Object Query Language (SOQL) to search the stored Salesforce data for specific information that is created under `sObjects`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/salesforce.png" title="Using Salesforce Rest Connector" width="800" alt="Using Salesforce Rest Connector"/>
-
-The user calls the Salesforce REST API. It invokes the **create** sequence and creates a new account in Salesforce. Then, through the **retrieve** sequence, it displays all the existing account details to the user.
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences.
-
-### Import the connector
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First, create an API, which is where we configure the integration logic. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `salesforcerest` and the API context as `/salesforcerest`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" alt="Adding a Rest API"/>
-
-#### Configure a sequence for the create operation
-
-Create the sequence needed to create a Salesforce object. We will create two defined sequences called `create.xml` and `retrieve.xml` to create an account and retrieve data. Right-click the created Integration Project and select **New** -> **Sequence** to create the sequence.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/add-sequence.jpg" title="Adding a Sequence" width="500" alt="Adding a Sequence"/>
-
-Now follow the steps below to add configurations to the sequence.
-
-1. Initialize the connector.
-
-    1. Follow these steps to [generate the Access Tokens for Salesforce]({{base_path}}/reference/connectors/salesforce-connectors/sf-access-token-generation/) and obtain the Client Id, Client Secret, Access Token, and Refresh Token.
-
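-        For reference, the drag-and-drop steps that follow produce an `init` element in the sequence source roughly like the sketch below. All credential values are placeholders that you must replace with the values obtained above; the complete generated configuration appears later in this guide under the source view listings.
-
-        ```xml
-        <!-- Sketch only: placeholder credentials, not working values -->
-        <salesforcerest.init>
-            <accessToken>REPLACE_WITH_ACCESS_TOKEN</accessToken>
-            <apiVersion>v44.0</apiVersion>
-            <hostName>https://login.salesforce.com</hostName>
-            <refreshToken>REPLACE_WITH_REFRESH_TOKEN</refreshToken>
-            <clientSecret>REPLACE_WITH_CLIENT_SECRET</clientSecret>
-            <clientId>REPLACE_WITH_CLIENT_ID</clientId>
-            <apiUrl>https://ap16.salesforce.com</apiUrl>
-            <registryPath>connectors/SalesforceRest</registryPath>
-        </salesforcerest.init>
-        ```
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `init` operation into the Design pane. 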
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-drag-and-drop-init.png" title="Drag and drop init operation" width="500" alt="Drag and drop init operation"/>
-
-    3. Add the property values into the `init` operation as shown below. Replace the `clientSecret`, `clientId`, `accessToken`, and `refreshToken` values with those obtained in the steps above.
-
-        - **clientSecret** : Value of your client secret given when you registered your application with Salesforce.
-        - **clientId** : Value of your client ID given when you registered your application with Salesforce.
-        - **accessToken** : Value of the access token to access the API via request.
-        - **refreshToken** : Value of the refresh token.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-init-operation-sequnce1.png" title="Add values to the init operation" width="800" alt="Add values to the init operation"/>
-
-
-2. Set up the create operation.
-
-    1. Set up the `create` sequence configurations. In this operation, we are going to create an `sObject` in the Salesforce account. An `sObject` represents a specific table in the database that you can discretely query; it describes the individual metadata for the specified object. The `create` operation parameters are listed here.
-
-        - **sObjectName** : Name of the sObject that you need to create in Salesforce.
-        - **fieldAndValue** : The field and value you need to store in the created Salesforce sObject.
-
-        While invoking the API, the values of the above two parameters come in as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `create` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-drag-and-drop-create.png" title="Drag and drop create operation" width="500" alt="Drag and drop create operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator/). Navigate into the **Palette** pane and select the graphical mediator icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="800" alt="Add property mediators"/>
-
-        The parameters available for configuring the Property mediator are as follows:
-
-        > **Note**: The properties should be added to the palette before creating the operation.
-
-    4. Add the property mediator to capture the `sObjectName` value. The sObjectName type can be used to retrieve the metadata for the Account object using the GET method, or create a new Account object using the POST method. In this example, we are going to create a new Account object using the POST method.
-
-        - **name** : sObjectName
-        - **expression** : json-eval($.sObject)
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-property-mediator-property1-value1.png" title="Add values to capture sObjectName value" width="600" alt="Add values to capture sObjectName value"/>
-
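-        In the source view, the property mediators in step 4 and step 5 (below) correspond to configuration roughly like the following sketch. Note that the final `create.xml` listing later in this guide names the first property `sObject` and the create operation references it as `{$ctx:sObject}`, so keep the property name and the operation's expression consistent with each other.
-
-        ```
-        <!-- Sketch of the two property mediators; names must match what the create operation references -->
-        <property expression="json-eval($.sObject)" name="sObject" scope="default" type="STRING"/>
-        <property expression="json-eval($.fieldAndValue)" name="fieldAndValue" scope="default" type="STRING"/>
-        ```
-
-    5. Add the property mediator to capture the `fieldAndValue` values. The fieldAndValue contains the object fields and values that the user needs to store. 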
-        - **name** : fieldAndValue
-        - **expression** : json-eval($.fieldAndValue)
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-property-mediator-property2-value2.png" title="Add values to capture fieldAndValue value" width="600" alt="Add values to capture fieldAndValue value"/>
-
-#### Configure a sequence for the retrieve operation
-
-Create the sequence to retrieve the Salesforce objects created.
-
-1. Initialize the connector.
-
-    You can use the generated tokens to initialize the connector. Please follow the steps given under *Configure a sequence for the create operation* for setting up the `init` operation in the `retrieve.xml` sequence.
-
-2. Set up the retrieve operation.
-
-    1. To retrieve data from the created objects in the Salesforce account, you need to add the `query` operation to the `retrieve` sequence.
-
-        - **queryString** : This variable contains the specified SOQL query. In this sample, the SOQL query retrieves the `id` and `name` fields from the created `Account` objects. If the query results are too large, the response contains only the first batch of results.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `query` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-drag-and-drop-query.png" title="Add query operation to retrieve sequence" width="500" alt="Add query operation to retrieve sequence"/>
-
-    3. Select the query operation and add the `select id, name from Account` query to the properties section as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-retrive-query-operation-sequnce1.png" title="Add query to the query operation in retrieve sequence" width="800" alt="Add query to the query operation in retrieve sequence"/>
-
-#### Configuring the API
-
-1. Configure the `salesforcerest` API using the created `create` and `retrieve` sequences.
-
-    Now you can select the API that we created initially. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Defined Sequences** section. Drag and drop the created `create` and `retrieve` sequences to the Design pane.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-drag-and-drop-sequencestothe-designpane.png" title="Drag and drop sequences to the Design view" width="500" alt="Drag and drop sequences to the Design view"/>
-
-2. Return a response to the user.
-
-    When you invoke the created API, the request message goes through the `create` and `retrieve` sequences. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-    1. Drag and drop the **Respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-drag-and-drop-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-    2. Once you have set up the sequences and the API, you can see the `salesforcerest` API as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-design-view.png" title="API Design view" width="600" alt="API Design view"/>
-
-3. Now you can switch into the Source view and check the XML configuration files of the created API and sequences.
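-    These configurations expect the API to be invoked with a JSON payload of the following shape, matching the `json-eval` expressions used above (the same payload is used later in the Testing section):
-
-    ```json
-    {
-        "sObject": "Account",
-        "fieldAndValue": {
-            "name": "Engineers",
-            "description": "This Account belongs to WSO2"
-        }
-    }
-    ```
-
-    ??? 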
note "create.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="create" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <property expression="json-eval($.sObject)" name="sObject" scope="default" type="STRING"/> - <property expression="json-eval($.fieldAndValue)" name="fieldAndValue" scope="default" type="STRING"/> - <salesforcerest.init> - <accessToken></accessToken> - <apiVersion>v44.0</apiVersion> - <hostName>https://login.salesforce.com</hostName> - <refreshToken></refreshToken> - <clientSecret></clientSecret> - <clientId></clientId> - <apiUrl>https://ap16.salesforce.com</apiUrl> - <registryPath>connectors/SalesforceRest</registryPath> - </salesforcerest.init> - <salesforcerest.create> - <sObjectName>{$ctx:sObject}</sObjectName> - <fieldAndValue>{$ctx:fieldAndValue}</fieldAndValue> - </salesforcerest.create> - </sequence> - ``` - - ??? note "retrieve.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="retrieve" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <salesforcerest.init> - <accessToken></accessToken> - <apiVersion>v44.0</apiVersion> - <hostName>https://login.salesforce.com</hostName> - <refreshToken></refreshToken> - <clientSecret></clientSecret> - <clientId></clientId> - <apiUrl>https://ap16.salesforce.com</apiUrl> - <registryPath>connectors/SalesforceRest</registryPath> - </salesforcerest.init> - <salesforcerest.query> - <queryString>select id, name from Account</queryString> - </salesforcerest.query> - </sequence> - ``` - - ??? note "salesforcerest.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <api context="/salesforcerest" name="salesforcerest" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST"> - <inSequence> - <sequence key="create"/> - <sequence key="retrieve"/> - <respond/> - </inSequence> - <outSequence/> - <faultSequence/> - </resource> - </api> - ``` - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/salesforcerest.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the value of the access token and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing -Save a file called data.json with the following payload. - -```json -{ - "sObject":"Account", - "fieldAndValue": { - "name": "Engineers", - "description":"This Account belongs to WSO2" - } -} -``` - -Invoke the API as shown below using the curl command. Curl application can be downloaded from [here](https://curl.haxx.se/download.html). - - -``` -curl -X POST -d @data.json http://localhost:8280/salesforcerest --header "Content-Type:application/json" -``` - -You will get a set of account names and the respective IDs as the output. - -## What's Next - -* To customize this example for your own scenario, see [Salesforce REST Connector Configuration]({{base_path}}/reference/connectorssalesforce-connectors/sf-rest-connector-config/) documentation for all operation details of the connector. 
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-config.md b/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-config.md deleted file mode 100644 index 38aa25241b..0000000000 --- a/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-config.md +++ /dev/null @@ -1,53 +0,0 @@ -# Salesforce SOAP Connector Configuration - -The Salesforce SOAP connector allows you to access the [Salesforce SOAP API](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm?search_text=SOAP%20API%20Developer%20Guide) from the integration sequence. - -## Setting up the Salesforce account - -1. To work with the Salesforce SOAP connector, you need to have a Salesforce account. If you do not have a Salesforce account, go to [https://developer.salesforce.com/signup](https://developer.salesforce.com/signup) and create a Salesforce developer account. - -2. After creating a Salesforce account you will get a [Salesforce security token](https://help.salesforce.com/articleView?id=user_security_token.htm&type=5). - -3. To configure the Salesforce SOAP Connector you need to save and keep the **username**, **password**, and **security token** of your Salesforce account. - -## Importing the Salesforce Certificate - -To use the Salesforce connector, add the `<salesforce.init>` element to your configuration before carrying out any other Salesforce operations. - -Before you start configuring the connector, import the **Salesforce certificate** to your integration runtime's **client keystore**. - -Follow the steps below to import the Salesforce certificate into the integration runtime's client keystore: - -1. To view the certificate, log in to your Salesforce account in your browser. -2. Search the **Certificate and Key Management** in the search box. - - <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-certificste-and-key-management.png" title="salesforcesoap-certificste-and-key-management" width="90%" alt="salesforcesoap-certificste-and-key-management"/> - -3. Export the certificate to the file system. -4. Import the certificate to the integration runtime's client keystore using the following [command]({{base_path}}/install-and-setup/security/importing_ssl_certificate/). - - ``` - keytool -importcert -file <certificate file> -keystore <PRODUCT_HOME>/repository/resources/security/client-truststore.jks -alias "Salesforce" - ``` - -5. Restart the server and deploy the following Salesforce configuration: - - ``` - <salesforce.init> - <username>MyUsername</username> - <password>MyPassword</password> - <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl> - <blocking>false</blocking> - </salesforce.init> - ``` - -> **Note**: Secure Vault is supported for [encrypting passwords]({{base_path}}/install-and-setup/security/encrypting_plain_text/). See, Working with Passwords on integrating and using Secure Vault. - -## Re-using Salesforce configurations - -You can save the Salesforce connection configuration as a [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries/) and then easily reference it with the configKey attribute in your operations. 
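-
-Saved as a local entry, such a configuration might look roughly like the following sketch (the entry name and all credential values are placeholders):
-
-```
-<localEntry key="MySFConfig" xmlns="http://ws.apache.org/ns/synapse">
-    <salesforce.init>
-        <username>MyUsername</username>
-        <password>MyPasswordAndSecurityToken</password>
-        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
-        <blocking>false</blocking>
-    </salesforce.init>
-</localEntry>
-```
-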
For example, if you saved the above <salesforce.init> entry as a local entry named MySFConfig, you could reference it from an operation like getUserInformation as follows:
-
-```
-<salesforce.getUserInformation configKey="MySFConfig"/>
-```
-The Salesforce connector operation examples use this convention to show how to specify the connection configuration for that operation. In all cases, the configKey attribute is optional if the connection to Salesforce has already been established and is required only if you need to specify a different connection from the current connection.
diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-example.md b/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-example.md
deleted file mode 100644
index caf44491c0..0000000000
--- a/en/docs/reference/connectors/salesforce-connectors/sf-soap-connector-example.md
+++ /dev/null
@@ -1,270 +0,0 @@
-# Salesforce SOAP Connector Example
-
-The Salesforce SOAP Connector allows you to work with records in Salesforce, a web-based service that allows organizations to manage customer relationship management (CRM) data. You can use the Salesforce connector to create, query, retrieve, update, and delete records in your organization's Salesforce data. The connector uses the [Salesforce SOAP API](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm?search_text=SOAP%20API%20Developer%20Guide) to interact with Salesforce.
-
-## What you'll build
-
-This example explains how to use the Salesforce client to connect with the Salesforce instance and perform the create sObjects operation, and then execute a SOQL query to retrieve the Account Names of all the existing accounts. All operations are handled as SOAP messages.
-
-* Create an sObject in Salesforce.
-
-    The user sends the request payload that includes sObjects (any object that can be stored in the Lightning platform database), to create a new Account object in Salesforce. This request is sent to the integration runtime by invoking the Salesforce SOAP connector API.
-
-* Execute a SOQL query to retrieve the Account Names of all the existing accounts.
-
-    In this example, we use the Salesforce Object Query Language (SOQL) to search the stored Salesforce data for specific information that is created under `sObjects`.
-
-Both operations are exposed via a single `salesforce-soap-API` API. The API with the context `/salesforce` has two resources:
-
-* `/createRecords`: Creates a new `Account` object in Salesforce.
-* `/queryRecords`: Retrieves the Account Names of all the existing accounts in Salesforce.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-connector.png" title="Using Salesforcesoap SOAP Connector" width="800" alt="Using Salesforcesoap SOAP Connector"/>
-
-The user calls the Salesforce SOAP API. It invokes the **createRecords** resource and creates a new account in Salesforce. Then, through the **queryRecords** resource, it displays all the existing account details to the user.
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences. 
-
-### Import the connector
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First, create an API, which is where we configure the integration logic. Right-click the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `salesforce-soap-API` and the API context as `/salesforce`, matching the configuration shown later in this guide.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" alt="Adding a Rest API"/>
-
-#### Configuring the createRecords resource
-
-Now follow the steps below to add configurations to the resource.
-
-1. Initialize the connector.
-
-    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `init` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-init-drag-and-drop.png" title="Drag and drop init operation" width="80%" alt="Drag and drop init operation"/>
-
-    2. Add the property values into the `init` operation as shown below. Replace the `username`, `password`, `loginUrl`, and `blocking` values with your own.
-
-        - **username**: The username to access the Salesforce account.
-        - **password**: The password provided here is a concatenation of the user password and the security token provided by Salesforce.
-        - **loginUrl** : The login URL to access the Salesforce account.
-        - **blocking** : Indicates whether the connector needs to perform blocking invocations to Salesforce. (Supported in WSO2 ESB 4.9.0 and later.)
-
-2. Set up the salesforce.create operation.
-
-    1. Set up the `create` configurations.
-
-        In this operation, we are going to create an `sObject` in the Salesforce account. An `sObject` represents a specific table in the database that you can discretely query; it describes the individual metadata for the specified object. The `create` operation parameters are listed here.
-
-        - **sObjectName** : XML representation of the records to add.
-        - **allowFieldTruncate** : Whether to truncate strings that exceed the field length (see Common Parameters).
-        - **allOrNone** : Whether to rollback changes if an object fails (see Common Parameters).
-
-        While invoking the API, the values of the above three parameters come in as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `create` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-drag-and-drop-create.png" title="Drag and drop create operation" width="80%" alt="Drag and drop create operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediator icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="70%" alt="Add property mediators"/>
-
-        The parameters available for configuring the Property mediator are as follows:
-
-        > **Note**: The properties should be added to the palette before creating the operation.
-
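-        For orientation, steps 4 and 5 below correspond to source configuration roughly like the following sketch, taken from the complete API source shown later in this guide:
-
-        ```
-        <property expression="//Name/text()" name="Name" scope="default" type="STRING"/>
-        <payloadFactory media-type="xml">
-            <format>
-                <sfdc:sObjects type="Account" xmlns:sfdc="sfdc">
-                    <sfdc:sObject>
-                        <sfdc:Name>{$ctx:Name}</sfdc:Name>
-                    </sfdc:sObject>
-                </sfdc:sObjects>
-            </format>
-            <args/>
-        </payloadFactory>
-        ```
-
-    4. 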
Add the property mediator to capture the sObject `Name` value. In this example, we are going to create a new Account object using the POST method.
-
-        - **name** : Name
-        - **expression** : //Name/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-api-property-mediator-property1-value1.png" title="Add values to capture sObjectName value" width="80%" alt="Add values to capture sObjectName value"/>
-
-    5. Add the [payload factory]({{base_path}}/reference/mediators/payloadfactory-mediator) mediator to capture the sObject content.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-payloadfactory-mediator-property1-value1.png" title="Add values to capture sObject value" width="80%" alt="Add values to capture sObject value"/>
-
-    6. Forward the backend response to the API caller.
-
-        When you invoke the created resource, the request message goes through the `/createRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-        Drag and drop the **Respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-#### Configuring the queryRecords resource
-
-1. Initialize the connector.
-
-    1. You can use the same configuration to initialize the connector. Please follow steps 1.1 and 1.2 of the `createRecords` resource for setting up the `init` operation.
-
-2. Set up the salesforce.query operation.
-
-    1. Set up the `query` configurations.
-
-        In this operation, we are going to retrieve data from an object. Use `salesforce.query` and specify the following properties. If you already know the record IDs, you can use the retrieve operation instead.
-
-        - **batchSize** : The number of records to return. If more records are available than the batch size, you can use the queryMore operation to get additional results.
-        - **queryString** : The SOQL query used to search for records.
-
-        While invoking the API, the values of the above two parameters come in as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `query` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-drag-and-drop-query.png" title="Drag and drop query operation" width="80%" alt="Drag and drop query operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediator icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-    4. Add the property mediator to capture the `queryString` value. In this example, we are going to query the names of all the existing accounts.
-
-        - **name** : queryString
-        - **expression** : //queryString/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-api-property-querystring-mediator-property1-value1.png" title="Add values to capture queryString value" width="80%" alt="Add values to capture queryString value"/>
-
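-        As with the create flow, the property mediator and the query operation correspond to source configuration roughly like the following sketch (from the complete API source shown later in this guide):
-
-        ```
-        <property expression="//queryString/text()" name="queryString" scope="default" type="STRING"/>
-        <salesforce.query>
-            <batchSize>200</batchSize>
-            <queryString>{$ctx:queryString}</queryString>
-        </salesforce.query>
-        ```
-
-    5. 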
Forward the backend response to the API caller. - - When you are invoking the created resource, the request of the message is going through the `/createRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond Mediator stops the processing on the current message and sends the message back to the client as a response. - - Drag and drop **respond mediator** to the **Design view**. - - <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/> - - Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - - ??? note "create.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <api context="/salesforce" name="salesforce-soap-API" xmlns="http://ws.apache.org/ns/synapse"> - <resource methods="POST" url-mapping="/createRecords"> - <inSequence> - <property expression="//Name/text()" name="Name" scope="default" type="STRING"/> - <salesforce.init> - <username>kasunXXX@wso2.com</username> - <password>eiconnectortestXXXnO9Nz4Qpiz5Us4N7ijj9zyA</password> - <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl> - <blocking>false</blocking> - </salesforce.init> - <payloadFactory media-type="xml"> - <format> - <sfdc:sObjects type="Account" xmlns:sfdc="sfdc"> - <sfdc:sObject> - <sfdc:Name>{$ctx:Name}</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - <salesforce.create> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.create> - <respond/> - </inSequence> - <outSequence> - <send/> - </outSequence> - <faultSequence/> - </resource> - <resource methods="POST" url-mapping="/queryRecords"> - <inSequence> - <property expression="//queryString/text()" name="queryString" scope="default" type="STRING"/> - <salesforce.init> - <username>kasunXXX@wso2.com</username> - <password>eiconnectortestXXXnO9Nz4Qpiz5Us4N7ijj9zyA</password> - <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl> - <blocking>false</blocking> - </salesforce.init> - <salesforce.query> - <batchSize>200</batchSize> - <queryString>{$ctx:queryString}</queryString> - </salesforce.query> - <respond/> - </inSequence> - <outSequence> - <send/> - </outSequence> - <faultSequence/> - </resource> - </api> - ``` -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/salesforcesoap.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the value of the access token and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -1. Create a new `Account` object in Salesforce. 
-
-    **Sample request**
-
-    ```
-    curl -v -X POST -d '<Name>Engineers</Name>' "http://172.17.0.1:8290/salesforce/createRecords" -H "Content-Type:text/xml"
-    ```
-
-    **Expected Response**
-
-    ```xml
-    <?xml version='1.0' encoding='utf-8'?>
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com">
-        <soapenv:Header>
-            <LimitInfoHeader>
-                <limitInfo>
-                    <current>55</current>
-                    <limit>15000</limit>
-                    <type>API REQUESTS</type>
-                </limitInfo>
-            </LimitInfoHeader>
-        </soapenv:Header>
-        <soapenv:Body>
-            <createResponse>
-                <result>
-                    <id>0012x00000Am4kXAAR</id>
-                    <success>true</success>
-                </result>
-            </createResponse>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-
-2. Retrieve the Account Names in all the existing accounts in Salesforce.
-
-    **Sample request**
-
-    ```
-    curl -v -X POST -d '<queryString>select id,name from Account</queryString>' "http://172.17.0.1:8290/salesforce/queryRecords" -H "Content-Type:text/xml"
-    ```
-    **Expected Response**
-
-    You will get a set of account names and the respective IDs as the output.
-
-## What's Next
-
-* To customize this example for your own scenario, see [Salesforce SOAP Connector Configuration]({{base_path}}/reference/connectors/salesforce-connectors/sf-soap-connector-config/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforce-soap-connector/salesforce-soap-reference.md b/en/docs/reference/connectors/salesforce-soap-connector/salesforce-soap-reference.md
deleted file mode 100644
index 747edd4a25..0000000000
--- a/en/docs/reference/connectors/salesforce-soap-connector/salesforce-soap-reference.md
+++ /dev/null
@@ -1,1121 +0,0 @@
-# Salesforce SOAP Connector Reference
-
-The following operations allow you to work with the Salesforce SOAP Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-## Initialize the connector
-
-To use the Salesforce SOAP connector, add the `<salesforce.init>` element in your configuration before carrying out any other Salesforce SOAP operations.
-
-??? note "salesforce.init"
-    The salesforce.init operation initializes the connector to interact with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm) for more information.
-    <table>
-    <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-    <th>Required</th>
-    </tr>
-    <tr>
-    <td>username</td>
-    <td>The username to access the Salesforce account.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>password</td>
-    <td>The password provided here is a concatenation of the user password and the security token provided by Salesforce.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>loginUrl</td>
-    <td>The login URL to access the Salesforce account.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>blocking</td>
-    <td>Indicates whether the connector needs to perform blocking invocations to Salesforce. 
(Supported in WSO2 ESB 4.9.0 and later.)</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.init> - <loginUrl>{$ctx:loginUrl}</loginUrl> - <username>{$ctx:username}</username> - <password>{$ctx:password}</password> - <blocking>{$ctx:blocking}</blocking> - </salesforce.init> - ``` - - **Sample request** - - ```xml - <salesforce.init> - <username>MyUsername</username> - <password>MyPassword</password> - <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl> - <blocking>false</blocking> - </salesforce.init> - ``` ---- - -## Working with emails - -??? note "emails" - The salesforcebulk.emails method creates and sends an email using Salesforce based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_sendemail.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sendEmail</td> - <td>XML representation of the email.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.sendEmail> - <sendEmail xmlns:sfdc="sfdc">{//sfdc:emailWrapper}</sendEmail> - </salesforce.sendEmail> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the sendEmail operation. - - ```xml - <payloadFactory> - <format> - <sfdc:emailWrapper xmlns:sfdc="sfdc"> - <sfdc:messages type="urn:SingleEmailMessage"> - <sfdc:bccSender>true</sfdc:bccSender> - <sfdc:emailPriority>High</sfdc:emailPriority> - <sfdc:replyTo>123@gmail.com</sfdc:replyTo> - <sfdc:saveAsActivity>false</sfdc:saveAsActivity> - <sfdc:senderDisplayName>wso2</sfdc:senderDisplayName> - <sfdc:subject>test</sfdc:subject> - <sfdc:useSignature>false</sfdc:useSignature> - <sfdc:targetObjectId>00390000001PBFn</sfdc:targetObjectId> - <sfdc:plainTextBody>Hello, this is a holiday greeting!</sfdc:plainTextBody> - </sfdc:messages> - </sfdc:emailWrapper> - </format> - <args/> - </payloadFactory> - - <salesforce.sendEmail> - <sendEmail xmlns:sfdc="sfdc">{//sfdc:emailWrapper}</sendEmail> - </salesforce.sendEmail> - ``` - -??? note "sendEmailMessage" - The salesforcebulk.sendEmailMessage method sends emails that have already been drafted in Salesforce. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_send_email_message.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>sendEmailMessage</td> - <td>XML representation of the email IDs to send.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.sendEmailMessage config-ref="connectorConfig"> - <sendEmailMessage xmlns:sfdc="sfdc">{//sfdc:emails}</sendEmailMessage> - </salesforce.sendEmailMessage> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the sendEmailMessage operation. - - ```xml - <payloadFactory> - <format> - <sfdc:emails xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkK</sfdc:Ids> - <sfdc:Ids>0019000000bbMkK</sfdc:Ids> - </sfdc:emails> - </format> - <args/> - </payloadFactory> - - <salesforce.sendEmailMessage config-ref="connectorConfig"> - <sendEmailMessage xmlns:sfdc="sfdc">{//sfdc:emails}</sendEmailMessage> - </salesforce.sendEmailMessage> - ``` - ---- - -## Working with records - -??? 
note "salesforcebulk.create" - The salesforcerest.create operation creates one or more record with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_create.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to add.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.create configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.create> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the create operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Name>wso2123</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Name>abc123</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.create> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.create> - ``` -??? note "salesforcebulk.update" - The salesforcerest.update operation updates one or more existing records with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_update.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to add.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.update configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.update> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the create operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkZ</sfdc:Id> - <sfdc:Name>newname01</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkP</sfdc:Id> - <sfdc:Name>newname02</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.update> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.update> - ``` -??? note "salesforcebulk.upsert" - The salesforcerest.upsert operation update existing records and insert new records in a single operation, with the Salesforce SOAP API. 
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_upsert.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>allowFieldTruncate</td> - <td>Whether to truncate strings that exceed the field length (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>externalId</td> - <td>The field containing the record ID, that is used by Salesforce to determine whether to update an existing record or create a new one. This is done by matching the ID to the record IDs in Salesforce. By default, the field is assumed to be named "Id".</td> - <td>Yes</td> - </tr> - <tr> - <td>sObjects</td> - <td>XML representation of the records to update and insert. When inserting a new record, you do not specify sfdc:Id.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.upsert configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>Id</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` - **Sample request** - - Set the externalId field : If you need to give any existing externalId field of sObject to externalId then the payload should be with that externalId field and value as follows in sample. - - Sample to set ExternalId field and value - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:sample__c>{any value}</sfdc:sample__c> - <sfdc:Name>newname001</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.upsert> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>sample__c</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` - Given below is a sample request that can be handled by the create operation. - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc" type="Account"> - <sfdc:sObject> - <sfdc:Id>0019000000aaMkZ</sfdc:Id> - <sfdc:Name>newname001</sfdc:Name> - </sfdc:sObject> - <sfdc:sObject> - <sfdc:Name>newname002</sfdc:Name> - </sfdc:sObject> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.upsert> - <allOrNone>0</allOrNone> - <allowFieldTruncate>0</allowFieldTruncate> - <externalId>Id</externalId> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.upsert> - ``` -??? note "salesforcebulk.search" - The salesforcerest.search operation searchs for records, use salesforce.search and specify the search string. If you already know the record IDs, use retrieve instead. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_search.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>searchString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.search configKey="MySFConfig"> - <searchString>FIND {map*} IN ALL FIELDS RETURNING Account (Id, Name), Contact, Opportunity, Lead</searchString> - </salesforce.search> - ``` -??? 
note "salesforcebulk.query" - The salesforcerest.query operation retrieve data from an object, use salesforce.query with the Salesforce SOAP API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_query.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>batchSize</td> - <td>The number of records to return. If more records are available than the batch size, you can use the queryMore operation to get additional results.</td> - <td>Yes</td> - </tr> - <tr> - <td>queryString</td> - <td>The SQL query to use to search for records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Note : If you want your search results to include deleted records that are available in the Recycle Bin, use salesforce.queryAll in place of salesforce.query. - - ```xml - <salesforce.query configKey="MySFConfig"> - <batchSize>200</batchSize> - <queryString>select id,name from Account</queryString> - </salesforce.query> - ``` - - **Sample request** - - Following is a sample configuration to query records. It also illustrates the use of queryMore operation to get additional results: - - - ```xml - <salesforce.query> - <batchSize>200</batchSize> - <queryString>select id,name from Account</queryString> - </salesforce.query> - <!-- Execute the following to get the other batches --> - <iterate xmlns:sfdc="http://wso2.org/salesforce/adaptor" continueParent="true" expression="//sfdc:iterator"> - <target> - <sequence> - <salesforce.queryMore> - <batchSize>200</batchSize> - </salesforce.queryMore> - </sequence> - </target> - </iterate> - ``` -??? note "salesforcebulk.retrieve" - The salesforcerest.retrieve operation IDs of the records you want to retrieve with the Salesforce SOAP API. If you do not know the record IDs, use query instead. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_retrieve.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>fieldList</td> - <td>A comma-separated list of the fields you want to retrieve from the records.</td> - <td>Yes</td> - </tr> - <tr> - <td>objectType</td> - <td> The object type of the records.</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to retrieve.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.retrieve configKey="MySFConfig"> - <fieldList>id,name</fieldList> - <objectType>Account</objectType> - <objectIDS xmlns:sfdc="sfdc">{//sfdc:sObjects}</objectIDS> - </salesforce.retrieve> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the retrieve operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkK</sfdc:Ids> - <sfdc:Ids>0019000000aaMjl</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.retrieve configKey="MySFConfig"> - <fieldList>id,name</fieldList> - <objectType>Account</objectType> - <objectIDS xmlns:sfdc="sfdc">{//sfdc:sObjects}</objectIDS> - </salesforce.retrieve> - ``` -??? note "salesforcebulk.delete" - The salesforcerest.delete operation delete one or more records. If you do not know the record IDs, use query instead. 
See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_delete.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to delete, as shown in the following example.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.delete configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.delete> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the retrieve operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkZ</sfdc:Ids> - <sfdc:Ids>0019000000aaMkP</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.delete> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.delete> - ``` -??? note "salesforcebulk.undelete" - The salesforcerest.undelete operation restore records that were previously deleted. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_undelete.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>allOrNone</td> - <td>Whether to rollback changes if an object fails (see Common Parameters).</td> - <td>Yes</td> - </tr> - <tr> - <td>sobjects</td> - <td>XML representation of the records to delete, as shown in the following example.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforce.undelete configKey="MySFConfig"> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.undelete> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the undelete operation. - - - ```xml - <payloadFactory> - <format> - <sfdc:sObjects xmlns:sfdc="sfdc"> - <sfdc:Ids>0019000000aaMkZ</sfdc:Ids> - <sfdc:Ids>0019000000aaMkP</sfdc:Ids> - </sfdc:sObjects> - </format> - <args/> - </payloadFactory> - - <salesforce.undelete> - <allOrNone>0</allOrNone> - <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects> - </salesforce.undelete> - ``` -??? note "salesforcebulk.getDeleted" - The salesforcerest.getDeleted operation retrieve the list of records that were previously deleted. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getdeleted.htm) for more information. 
-    <table>
-    <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-    <th>Required</th>
-    </tr>
-    <tr>
-    <td>sObjectType</td>
-    <td>The sObject type from which to retrieve the deleted records.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>startDate</td>
-    <td>The start date and time for the deleted-records lookup.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>endDate</td>
-    <td>The end date and time for the deleted-records lookup.</td>
-    <td>Yes</td>
-    </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.getDeleted configKey="MySFConfig">
-        <sObjectType>{$ctx:sObjectType}</sObjectType>
-        <startDate>{$ctx:startDate}</startDate>
-        <endDate>{$ctx:endDate}</endDate>
-    </salesforce.getDeleted>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the getDeleted operation.
-
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:sObjectType>Account</urn:sObjectType>
-            <urn:startDate>2020-06-15T05:05:53+0000</urn:startDate>
-            <urn:endDate>2020-06-30T05:05:53+0000</urn:endDate>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-??? note "salesforce.getUpdated"
-    The salesforce.getUpdated operation retrieves the list of records that were previously updated. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getupdated.htm) for more information.
-    <table>
-    <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-    <th>Required</th>
-    </tr>
-    <tr>
-    <td>sObjectType</td>
-    <td>The sObject type from which to retrieve the updated records.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>startDate</td>
-    <td>The start date and time for the updated-records lookup.</td>
-    <td>Yes</td>
-    </tr>
-    <tr>
-    <td>endDate</td>
-    <td>The end date and time for the updated-records lookup.</td>
-    <td>Yes</td>
-    </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.getUpdated configKey="MySFConfig">
-        <sObjectType>{$ctx:sObjectType}</sObjectType>
-        <startDate>{$ctx:startDate}</startDate>
-        <endDate>{$ctx:endDate}</endDate>
-    </salesforce.getUpdated>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the getUpdated operation.
-
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:sObjectType>Account</urn:sObjectType>
-            <urn:startDate>2020-06-15T05:05:53+0000</urn:startDate>
-            <urn:endDate>2020-06-30T05:05:53+0000</urn:endDate>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-??? note "salesforce.findDuplicates"
-    The salesforce.findDuplicates operation retrieves the list of records that are duplicate entries. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_findduplicates.htm) for more information. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>sobjects</td>
-            <td>The sObjects for which you want to find duplicate records.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.findDuplicates configKey="MySFConfig">
-        <sobjects xmlns:ns="wso2.connector.salesforce">{//ns:sObjects}</sobjects>
-    </salesforce.findDuplicates>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the findDuplicates operation.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:sObjects>
-                <urn:sObject>
-                    <urn:type>Account</urn:type>
-                    <urn:fieldsToNull>name</urn:fieldsToNull>
-                    <urn:fieldsToNull>id</urn:fieldsToNull>
-                </urn:sObject>
-            </urn:sObjects>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-??? note "salesforce.findDuplicatesByIds"
-    The salesforce.findDuplicatesByIds operation retrieves the list of duplicate records for the given record IDs. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_findduplicatesbyids.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>ids</td>
-            <td>The record IDs for which duplicate records need to be found.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.findDuplicatesByIds configKey="MySFConfig">
-        <ids xmlns:ns="wso2.connector.salesforce">{//ns:ids}</ids>
-    </salesforce.findDuplicatesByIds>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the findDuplicatesByIds operation.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:ids>
-                <urn:id>0012x000005mqKuAAI</urn:id>
-                <urn:id>0012x000005orjlAAA</urn:id>
-            </urn:ids>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-??? note "salesforce.merge"
-    The salesforce.merge operation merges records into one master record. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_merge.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>mergerequests</td>
-            <td>The merge requests, in the format defined in the Salesforce documentation (see the related API documentation link above).</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.merge configKey="MySFConfig">
-        <mergerequests xmlns:ns="wso2.connector.salesforce">{//ns:requests}</mergerequests>
-    </salesforce.merge>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the merge operation.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:requests>
-                <urn:request>
-                    <urn:masterRecord>
-                        <urn:type>Account</urn:type>
-                        <urn:Id>0012x000008un5bAAA</urn:Id>
-                    </urn:masterRecord>
-                    <urn:recordToMergeIds>0012x000008un5lAAA</urn:recordToMergeIds>
-                </urn:request>
-            </urn:requests>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-??? note "salesforce.convertLead"
-    The salesforce.convertLead operation converts a lead into an account. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_convertlead.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>leadconvertrequests</td>
-            <td>The lead convert requests, in the format defined in the Salesforce documentation (see the related API documentation link above).</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.convertLead configKey="MySFConfig">
-        <leadconvertrequests xmlns:ns="wso2.connector.salesforce">{//ns:leadconvertrequests}</leadconvertrequests>
-    </salesforce.convertLead>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the convertLead operation.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/48.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:leadconvertrequests>
-                <urn:leadConverts>
-                    <urn:accountId>0012x000005mqKuAAI</urn:accountId>
-                    <urn:leadId>00Q2x00000AH981EAD</urn:leadId>
-                    <urn:convertedStatus>Closed - Converted</urn:convertedStatus>
-                </urn:leadConverts>
-            </urn:leadconvertrequests>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
----
-
-## Working with Recycle Bin
-
-??? note "salesforce.emptyRecycleBin"
-    The Recycle Bin allows you to view and restore recently deleted records for a maximum of 15 days before they are permanently deleted. To purge records from the Recycle Bin so that they cannot be restored, use salesforce.emptyRecycleBin and specify the following properties. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_emptyrecyclebin.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>allOrNone</td>
-            <td>Whether to roll back changes if an object fails (see Common Parameters).</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>sobjects</td>
-            <td>XML representation of the records to purge from the Recycle Bin.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.emptyRecycleBin configKey="MySFConfig">
-        <allOrNone>0</allOrNone>
-        <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
-    </salesforce.emptyRecycleBin>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the emptyRecycleBin operation.
-
-    ```xml
-    <payloadFactory>
-        <format>
-            <sfdc:sObjects xmlns:sfdc="sfdc">
-                <sfdc:Ids>0019000000aaMkZ</sfdc:Ids>
-                <sfdc:Ids>0019000000aaMkP</sfdc:Ids>
-            </sfdc:sObjects>
-        </format>
-        <args/>
-    </payloadFactory>
-
-    <salesforce.emptyRecycleBin>
-        <allOrNone>0</allOrNone>
-        <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
-    </salesforce.emptyRecycleBin>
-    ```
-
----
-
-## Working with sObjects
-
-??? note "salesforce.describeGlobal"
-    The salesforce.describeGlobal operation retrieves the list of objects that are available in the system. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describeglobal.htm) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.describeGlobal configKey="MySFConfig"/>
-    ```
-??? note "salesforce.describeSobject"
-    The salesforce.describeSobject operation retrieves metadata (such as name, label, and fields, including the field properties) for a specific object type. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobject.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>sobject</td>
-            <td>The object type for which you want to retrieve the metadata.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.describeSObject configKey="MySFConfig">
-        <sobject>Account</sobject>
-    </salesforce.describeSObject>
-    ```
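-
-    **Sample request**
-
-    Given below is a hypothetical sample request for the describeSobject operation, written to follow the convention of the other sample requests in this document. The `urn:sobject` element name is an assumption based on the operation's `sobject` parameter; adjust it to match the proxy service you use.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-            <urn:sobject>Account</urn:sobject>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```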
-??? note "salesforce.describeSobjects"
-    The salesforce.describeSobjects operation retrieves metadata (such as name, label, and fields, including the field properties) for multiple object types, returned as an array. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>sobjects</td>
-            <td>An XML representation of the object types for which you want to retrieve the metadata.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.describeSobjects configKey="MySFConfig">
-        <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
-    </salesforce.describeSobjects>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the describeSobjects operation.
-
-    ```xml
-    <payloadFactory>
-        <format>
-            <sfdc:sObjects xmlns:sfdc="sfdc">
-                <sfdc:sObjectType>Account</sfdc:sObjectType>
-                <sfdc:sObjectType>Contact</sfdc:sObjectType>
-            </sfdc:sObjects>
-        </format>
-        <args/>
-    </payloadFactory>
-
-    <salesforce.describeSobjects configKey="MySFConfig">
-        <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
-    </salesforce.describeSobjects>
-    ```
-
----
-
-## Working with User
-
-??? note "salesforce.getUserInfo"
-    To retrieve information about the user who is currently logged in, use salesforce.getUserInfo. The information provided includes the name, ID, and contact information of the user. See the [Salesforce documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getuserinfo_getuserinforesult.htm) for details of the information that is returned by this operation. If you want to get additional information about the user that is not returned by this operation, use the retrieve operation on the User object, providing the ID returned by getUserInfo. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getuserinfo.htm) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.getUserInfo configKey="MySFConfig"/>
-    ```
-
-??? note "salesforce.setPassword"
-    The salesforce.setPassword operation changes the user password to a value that you specify, and the salesforce.resetPassword operation resets it to a system-generated value. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_setpassword.htm) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>userId</td>
-            <td>The user's Salesforce ID.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>password</td>
-            <td>If using setPassword, the new password to assign to the user.</td>
-            <td>Yes</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    setPassword
-
-    ```xml
-    <salesforce.setPassword configKey="MySFConfig">
-        <userId>0056F000009wCJgQAM</userId>
-        <password>abc123</password>
-    </salesforce.setPassword>
-    ```
-
-    resetPassword
-
-    ```xml
-    <salesforce.resetPassword configKey="MySFConfig">
-        <userId>0056F000009wCJgQAM</userId>
-    </salesforce.resetPassword>
-    ```
----
-
-## Working with Utility
-
-??? note "salesforce.getServerTimestamp"
-    The salesforce.getServerTimestamp operation retrieves the timestamp of the server. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_getservertimestamp.htm) for more information.
-
-    **Sample configuration**
-
-    ```xml
-    <salesforce.getServerTimestamp configKey="MySFConfig"/>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the getServerTimestamp operation.
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-        xmlns:urn="wso2.connector.salesforce">
-        <soapenv:Header/>
-        <soapenv:Body>
-            <urn:loginUrl>https://login.salesforce.com/services/Soap/u/30.0</urn:loginUrl>
-            <urn:username>XXXXXXXXXX</urn:username>
-            <urn:password>XXXXXXXXXX</urn:password>
-            <urn:blocking>false</urn:blocking>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-config.md b/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-config.md
deleted file mode 100644
index 75647420c7..0000000000
--- a/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-config.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Salesforce SOAP Connector Configuration
-
-The Salesforce SOAP connector allows you to access the [Salesforce SOAP API](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm?search_text=SOAP%20API%20Developer%20Guide) from an integration sequence.
-
-## Setting up the Salesforce account
-
-1. To work with the Salesforce SOAP connector, you need to have a Salesforce account. If you do not have a Salesforce account, go to [https://developer.salesforce.com/signup](https://developer.salesforce.com/signup) and create a Salesforce developer account.
-
-2. After creating a Salesforce account, you will receive a [Salesforce security token](https://help.salesforce.com/articleView?id=user_security_token.htm&type=5).
-
-3. To configure the Salesforce SOAP Connector, keep the **username**, **password**, and **security token** of your Salesforce account saved and at hand.
-
-## Importing the Salesforce Certificate
-
-To use the Salesforce connector, add the <salesforce.init> element to your configuration before carrying out any other Salesforce operations.
-
-Before you start configuring the connector, import the **Salesforce certificate** to your integration runtime's **client keystore**.
-
-Follow the steps below to import the Salesforce certificate into the EI client keystore:
-
-1. To view the certificate, log in to your Salesforce account in your browser.
-
-2. Search for **Certificate and Key Management** in the search box.
-
-    [![salesforcesoap-certificste-and-key-management]({{base_path}}/assets/img/integrate/connectors/salesforcesoap-certificste-and-key-management.png)]({{base_path}}/assets/img/integrate/connectors/salesforcesoap-certificste-and-key-management.png)
-
-3. Export the certificate to the file system.
-
-4. Import the certificate to the EI client keystore using either the following [command]({{base_path}}/install-and-setup/setup/mi-setup/setup/security/importing_ssl_certificate) or the EI Management Console.
-
-    ```
-    keytool -importcert -file <certificate file> -keystore <EI>/repository/resources/security/client-truststore.jks -alias "Salesforce"
-    ```
-
-5. Restart the server and deploy the following Salesforce configuration:
-
-    ```
-    <salesforce.init>
-        <username>MyUsername</username>
-        <password>MyPassword</password>
-        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
-        <blocking>false</blocking>
-    </salesforce.init>
-    ```
-
-> **Note**: Secure Vault is supported for [encrypting passwords](../../../../setup/security/encrypting_plain_text/). See Working with Passwords for details on integrating and using Secure Vault.
-
-## Re-using Salesforce configurations
-
-You can save the Salesforce connection configuration as a [local entry]({{base_path}}/integrate/develop/creating-artifacts/registry/creating-local-registry-entries/) and then easily reference it with the configKey attribute in your operations. For example, if you saved the above <salesforce.init> entry as a local entry named MySFConfig, you could reference it from an operation like getUserInfo as follows:
-
-```
-<salesforce.getUserInfo configKey="MySFConfig"/>
-```
-The Salesforce connector operation examples use this convention to show how to specify the connection configuration for that operation. In all cases, the configKey attribute is optional if the connection to Salesforce has already been established, and it is required only if you need to specify a different connection from the current connection.
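-
-For reference, a minimal sketch of such a local entry is shown below, assuming it is saved under the name MySFConfig. The element names follow the <salesforce.init> configuration above; replace the credential values with your own, where the password is the user password concatenated with the security token.
-
-```
-<localEntry key="MySFConfig" xmlns="http://ws.apache.org/ns/synapse">
-    <salesforce.init>
-        <username>MyUsername</username>
-        <password>MyPasswordMySecurityToken</password>
-        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
-        <blocking>false</blocking>
-    </salesforce.init>
-</localEntry>
-```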
diff --git a/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-example.md b/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-example.md
deleted file mode 100644
index dc584d0d14..0000000000
--- a/en/docs/reference/connectors/salesforce-soap-connector/sf-soap-connector-example.md
+++ /dev/null
@@ -1,270 +0,0 @@
-# Salesforce SOAP Connector Example
-
-The Salesforce SOAP Connector allows you to work with records in Salesforce, a web-based service that allows organizations to manage customer relationship management (CRM) data. You can use the Salesforce connector to create, query, retrieve, update, and delete records in your organization's Salesforce data. The connector uses the [Salesforce SOAP API](https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_quickstart_intro.htm?search_text=SOAP%20API%20Developer%20Guide) to interact with Salesforce.
-
-## What you'll build
-
-This example explains how to use the Salesforce SOAP connector to connect to a Salesforce instance, perform the create sObjects operation, and then execute a SOQL query to retrieve the account names of all the existing accounts. All operations are handled as SOAP messages.
-
-* Create an sObject in Salesforce.
-
-    The user sends the request payload that includes sObjects (any object that can be stored in the Lightning platform database) to create a new Account object in Salesforce. This request is sent to the integration runtime by invoking the Salesforce SOAP connector API.
-
-* Execute a SOQL query to retrieve the account names of all the existing accounts.
-
-    In this example, the Salesforce Object Query Language (SOQL) is used to search the stored Salesforce data for specific information that was created under `sObjects`.
-
-Both operations are exposed via the `salesforce-soap-API` API. The API with the context `/salesforce` has two resources:
-
-* `/createRecords`: Creates a new `Account` object in Salesforce.
-* `/queryRecords` : Retrieves the account names of all the existing accounts in Salesforce.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-connector.png" title="Using Salesforcesoap SOAP Connector" width="800" alt="Using Salesforcesoap SOAP Connector"/>
-
-The user calls the Salesforce SOAP API. It invokes the **createRecords** resource and creates a new account in Salesforce. Then, through the **queryRecords** resource, it displays all the existing account details to the user.
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences.
-
-### Import the connector
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First create an API, which will be where we configure the integration logic. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `salesforce-soap-API` and the API context as `/salesforce`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-#### Configuring the createRecords resource
-
-Now follow the steps below to add configurations to the resource.
-
-1. Initialize the connector.
-
-    1. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `init` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-init-drag-and-drop.png" title="Drag and drop init operation" width="500" alt="Drag and drop init operation"/>
-
-    2. Add the property values into the `init` operation as shown below. Replace the `username`, `password`, `loginUrl`, and `blocking` values with your own.
-
-        - **username**: The username to access the Salesforce account.
-        - **password**: The password provided here is a concatenation of the user password and the security token provided by Salesforce.
-        - **loginUrl** : The login URL to access the Salesforce account.
-        - **blocking** : Indicates whether the connector needs to perform blocking invocations to Salesforce. (Supported in WSO2 ESB 4.9.0 and later.)
-
-2. Set up the salesforce.create operation.
-
-    1. Set up the `create` configurations.
-
-        In this operation, we create an `sObject` in the Salesforce account. An `sObject` represents a specific table in the database that you can discretely query, and it describes the individual metadata for the specified object. The `create` operation parameters are listed here.
-
-        - **sObjectName** : XML representation of the records to add.
-        - **allowFieldTruncate** : Whether to truncate strings that exceed the field length (see Common Parameters).
-        - **allOrNone** : Whether to roll back changes if an object fails (see Common Parameters).
-
-        When invoking the API, the values of the above three parameters come as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `create` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-drag-and-drop-create.png" title="Drag and drop create operation" width="500" alt="Drag and drop create operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforce-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="800" alt="Add property mediators"/>
-
-        The parameters available for configuring the Property mediator are as follows:
-
-        > **Note**: The properties should be added to the palette before creating the operation.
-
-    4. Add the property mediator to capture the sObject `Name` value. In this example, we create a new Account object using the POST method.
-
-        - **name** : Name
-        - **expression** : //Name/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-api-property-mediator-property1-value1.png" title="Add values to capture sObjectName value" width="600" alt="Add values to capture sObjectName value"/>
-    5. Add the [payload factory]({{base_path}}/reference/mediators/payloadfactory-mediator) mediator to capture the sObject content.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-payloadfactory-mediator-property1-value1.png" title="Add values to capture sObject value" width="600" alt="Add values to capture sObject value"/>
-
-    6. Forward the backend response to the API caller.
-
-        When you invoke the created resource, the request message is routed through the `/createRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-        Drag and drop the **respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-#### Configuring the queryRecords resource
-
-1. Initialize the connector.
-
-    1. You can use the same configuration to initialize the connector. Please follow the steps given in sections 1.1 and 1.2 of the `createRecords` resource for setting up the `init` operation.
-
-2. Set up the salesforce.query operation.
-
-    1. Set up the `query` configurations.
-
-        In this operation, we retrieve data from an object using `salesforce.query`, specifying the following properties. If you already know the record IDs, you can use the retrieve operation instead.
-
-        - **batchSize** : The number of records to return. If more records are available than the batch size, you can use the queryMore operation to get additional results.
-        - **queryString** : The SOQL query used to search for records.
-
-        When invoking the API, the values of the above two parameters come as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforce Connector** section. Then drag and drop the `query` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-drag-and-drop-query.png" title="Drag and drop query operation" width="500" alt="Drag and drop query operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-    4. Add the property mediator to capture the `queryString` value. In this example, we retrieve the stored account records using a SOQL query sent with the POST method.
-
-        - **name** : queryString
-        - **expression** : //queryString/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-api-property-querystring-mediator-property1-value1.png" title="Add values to capture queryString value" width="600" alt="Add values to capture queryString value"/>
-
-    5. Forward the backend response to the API caller.
-
-        When you invoke the created resource, the request message is routed through the `/queryRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-        Drag and drop the **respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcesoap-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-    Now you can switch into the Source view and check the XML configuration files of the created API and sequences.
-
-    ??? note "create.xml"
-        ```
-        <?xml version="1.0" encoding="UTF-8"?>
-        <api context="/salesforce" name="salesforce-soap-API" xmlns="http://ws.apache.org/ns/synapse">
-            <resource methods="POST" url-mapping="/createRecords">
-                <inSequence>
-                    <property expression="//Name/text()" name="Name" scope="default" type="STRING"/>
-                    <salesforce.init>
-                        <username>kasunXXX@wso2.com</username>
-                        <password>eiconnectortestXXXnO9Nz4Qpiz5Us4N7ijj9zyA</password>
-                        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
-                        <blocking>false</blocking>
-                    </salesforce.init>
-                    <payloadFactory media-type="xml">
-                        <format>
-                            <sfdc:sObjects type="Account" xmlns:sfdc="sfdc">
-                                <sfdc:sObject>
-                                    <sfdc:Name>{$ctx:Name}</sfdc:Name>
-                                </sfdc:sObject>
-                            </sfdc:sObjects>
-                        </format>
-                        <args/>
-                    </payloadFactory>
-                    <salesforce.create>
-                        <allOrNone>0</allOrNone>
-                        <allowFieldTruncate>0</allowFieldTruncate>
-                        <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
-                    </salesforce.create>
-                    <respond/>
-                </inSequence>
-                <outSequence>
-                    <send/>
-                </outSequence>
-                <faultSequence/>
-            </resource>
-            <resource methods="POST" url-mapping="/queryRecords">
-                <inSequence>
-                    <property expression="//queryString/text()" name="queryString" scope="default" type="STRING"/>
-                    <salesforce.init>
-                        <username>kasunXXX@wso2.com</username>
-                        <password>eiconnectortestXXXnO9Nz4Qpiz5Us4N7ijj9zyA</password>
-                        <loginUrl>https://login.salesforce.com/services/Soap/u/42.0</loginUrl>
-                        <blocking>false</blocking>
-                    </salesforce.init>
-                    <salesforce.query>
-                        <batchSize>200</batchSize>
-                        <queryString>{$ctx:queryString}</queryString>
-                    </salesforce.query>
-                    <respond/>
-                </inSequence>
-                <outSequence>
-                    <send/>
-                </outSequence>
-                <faultSequence/>
-            </resource>
-        </api>
-        ```
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/salesforcesoap.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the value of the access token and make other such changes before deploying and running this project.
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-
-1. Create a new `Account` object in Salesforce.
-
-    **Sample request**
-
-    ```
-    curl -v -X POST -d '<Name>Engineers</Name>' "http://172.17.0.1:8290/salesforce/createRecords" -H "Content-Type:text/xml"
-    ```
-
-    **Expected Response**
-
-    ```xml
-    <?xml version='1.0' encoding='utf-8'?>
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com">
-        <soapenv:Header>
-            <LimitInfoHeader>
-                <limitInfo>
-                    <current>55</current>
-                    <limit>15000</limit>
-                    <type>API REQUESTS</type>
-                </limitInfo>
-            </LimitInfoHeader>
-        </soapenv:Header>
-        <soapenv:Body>
-            <createResponse>
-                <result>
-                    <id>0012x00000Am4kXAAR</id>
-                    <success>true</success>
-                </result>
-            </createResponse>
-        </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-2. Retrieve the account names of all the existing accounts in Salesforce.
-
-    **Sample request**
-
-    ```
-    curl -v -X POST -d '<queryString>select id,name from Account</queryString>' "http://172.17.0.1:8290/salesforce/queryRecords" -H "Content-Type:text/xml"
-    ```
-
-    **Expected Response**
-
-    You will get a set of account names and the respective IDs as the output.
-
-## What's Next
-
-- To customize this example for your own scenario, see the [Salesforce SOAP Connector Configuration]({{base_path}}/reference/connectors/salesforce-connectors/salesforce-soap-connector/sf-soap-connector-config/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-configuration.md b/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-configuration.md
deleted file mode 100644
index e9c5ed86f8..0000000000
--- a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-configuration.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Setting up the SalesforceBulk Environment
-
-The SalesforceBulk connector allows you to access the [SalesforceBulk REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/) from an integration sequence. SalesforceBulk is a RESTful API that allows you to either quickly load large sets of your organization's data into Salesforce or delete large sets of your organization's data from Salesforce.
-
-> **Note**: To work with the Salesforce Bulk connector, you need to have a Salesforce account. If you do not have a Salesforce account, go to [https://developer.salesforce.com/signup](https://developer.salesforce.com/signup) and create a Salesforce developer account.
-
-Salesforce uses the OAuth protocol to allow application users to securely access data without having to reveal their user credentials. For more information on how authentication is done in Salesforce, see [Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_oauth_and_connected_apps.htm).
-
-### Obtaining user credentials
-
-Follow the steps below to create a connected application in Salesforce and to obtain the consumer key as well as the consumer secret for the created connected application.
-
-{!includes/reference/connectors/salesforce-connectors/sf-access-token-generation.md!}
-
-### Configuring Axis2 configurations
-
-Be sure to add and enable the following Axis2 configurations in the `<PRODUCT_HOME>/conf/axis2/axis2.xml` file.
-
-* **Required message formatters**
-
-```
-<messageFormatter contentType="text/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
-<messageFormatter contentType="zip/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
-<messageFormatter contentType="zip/csv" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
-<messageFormatter contentType="text/xml" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
-<messageFormatter contentType="text/html" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
-```
-
-* **Required message builders**
-
-```
-<messageBuilder contentType="text/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
-<messageBuilder contentType="zip/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
-<messageBuilder contentType="zip/csv" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
-<messageBuilder contentType="text/xml" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
-<messageBuilder contentType="text/html" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
-```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-example.md b/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-example.md
deleted file mode 100644
index 5f4739f2a4..0000000000
--- a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-connector-example.md
+++ /dev/null
@@ -1,341 +0,0 @@
-# Salesforce Bulk Connector Example
-
-The Salesforce Bulk Connector allows you to access the [Salesforce Bulk REST API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm) from an integration sequence. SalesforceBulk is a RESTful API that allows you to quickly load large sets of your organization's data into Salesforce or delete large sets of your organization's data from Salesforce. You can use SalesforceBulk to query, insert, update, upsert, or delete a large number of records asynchronously by submitting the records in batches. Salesforce can process these batches in the background.
-
-## What you'll build
-
-This example demonstrates how to use the Salesforce Bulk connector to:
-
-1. Insert employee details (job and batch) into Salesforce.
-2. Get the status of the inserted employee details.
-
-Both operations are exposed via an API. The API with the context `/salesforce` has two resources.
-
-* `/insertEmployeeBulkRecords` : Creates a new job in the Salesforce account and inserts employee details.
-* `/getStatusOfBatch` : Retrieves the status of the created batch from the Salesforce account.
-
-In this example, the user sends the request to invoke an API to insert employee details in bulk to the Salesforce account. When invoking the `insertEmployeeBulkRecords` resource, it creates a new job based on the properties that you specify. The CSV data file is read using the WSO2 File Connector, and the extracted dataset is inserted as a batch. A response is then generated according to the specified template and sent back to the client. Finally, the user can retrieve the batch status using the `getStatusOfBatch` resource.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-connector.png" title="Using Salesforce Bulk Connector" width="800" alt="Using Salesforce Bulk Connector"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
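-
-As a reference, a minimal sketch of the kind of CSV data file this example reads is shown below. The file content and column names here are hypothetical; the columns must match the fields of the Salesforce object that the batch targets (for example, `Account`).
-
-```
-Name,Phone
-John Doe,0123456789
-Jane Smith,0987654321
-```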
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your sequences.
-
-### Import the connector
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First create an API, which will be where we configure the integration logic. Right click on the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `Salesforcebulk-API` and the API context as `/salesforce`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" alt="Adding a Rest API"/>
-
-#### Configure a resource for the insertEmployeeBulkRecords
-
-Now follow the steps below to add configurations to the `insertEmployeeBulkRecords` resource.
-
-1. Initialize the connector.
-
-    1. Follow these steps to [generate the Access Tokens for Salesforce](salesforcebulk-connector-configuration/) and obtain the Client ID, Client Secret, Access Token, and Refresh Token.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `init` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-init.png" title="Drag and drop init operation" width="60%" alt="Drag and drop init operation"/>
-
-    3. Add the property values into the `init` operation as shown below. Replace the `clientSecret`, `clientId`, `accessToken`, and `refreshToken` with the values obtained in the above steps.
-
-        - **clientSecret** : Value of your client secret given when you registered your application with Salesforce.
-        - **clientId** : Value of your client ID given when you registered your application with Salesforce.
-        - **accessToken** : Value of the access token to access the API via request.
-        - **refreshToken** : Value of the refresh token.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-init-operation-parameters.png" title="Add values to the init operation" width="60%" alt="Add values to the init operation"/>
-
-2. Set up the `createJob` operation.
-
-    1. Set up the `createJob` configurations. In this operation, we create a job in the Salesforce account. The `createJob` operation parameters are listed here.
-
-        - **operation** : The processing operation that the job should perform.
-        - **object** : The object type of data that is to be processed by the job.
-        - **contentType** : The content type of the job.
-
-        When invoking the API, the value of the above `object` parameter comes as user input.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `createJob` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-createjob.png" title="Drag and drop createJob operation" width="60%" alt="Drag and drop createJob operations"/>
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="60%" alt="Add property mediators"/>
-
-        The parameters available for configuring the Property mediator are as follows:
-
-        > **Note**: The properties should be added to the palette before creating the operation.
-
-    4. Add the property mediator to capture the `objectName` value. This is the object type of data that is to be processed by the job.
-
-        - **name** : objectName
-        - **expression** : //object/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-property1-value1.png" title="Add values to capture ObjectName value" width="60%" alt="Add values to capture ObjectName value"/>
-
-3. Set up the fileconnector operation.
-
-    1. Set up the `fileconnector.read` configurations. In this operation, we read the CSV file content by using the [WSO2 File Connector]({{base_path}}/reference/connectors/file-connector/file-connector-overview).
-
-        - **contentType** : Content type of the files processed by the connector.
-        - **source** : The location of the file. This can be a file on the local physical file system or a file on an FTP server.
-        - **filePattern** : The pattern of the file to be read.
-
-        When invoking the API, the value of the above `source` parameter comes as user input.
-
-        > **Note**: When configuring the `source` parameter on the Windows operating system, set the property as shown below: `<source>C:\\Users\Kasun\Desktop\Salesforcebulk-connector\SFBulk.csv</source>`.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Fileconnector Connector** section. Then drag and drop the `read` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-file-read.png" title="Drag and drop file read operation" width="70%" alt="Drag and drop file read operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane, following the steps given in section 2.3 of the `createJob` operation.
-
-    4. Add the property mediator to capture the `source` value. The source is the location of the file. This can be a file on the local physical file system or a file on an FTP server.
-
-        - **name** : source
-        - **expression** : //source/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-source-property1-value1.png" title="Add values to capture source value" width="600" alt="Add values to capture source value"/>
-
-4. Set up the addBatch operation.
-
-    1. Initialize the connector. Please follow the steps given in section 1 of the `createJob` operation.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `addBatch` operation into the Design pane.
-
-        - **objects** : A list of records to process.
-        - **jobId** : The unique identifier of the job to which you want to add a new batch.
-        - **isQuery** : Set to true if the operation is a query.
-        - **contentType** : The content type of the batch data. The content type you specify should be compatible with the content type of the associated job. Possible values are application/xml and text/csv.
-
-        When invoking the API, the values of the above `jobId` and `objects` parameters come as user input. A property mediator extracts the `jobId` from the `createJob` response and stores it for use in the configured `addBatch` operation.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-addbatch.png" title="Drag and drop addBatch operation" width="70%" alt="Drag and drop addBatch operations"/>
-
-    3. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediators icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane, following the steps given in section 2.3 of the `createJob` operation.
-
-    4. Add the property mediator to capture the `jobId` value.
-
-        - **name** : jobId
-        - **expression** : //n0:jobInfo/n0:id
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-jobid-property1-value1.png" title="Add values to capture jobid value" width="600" alt="Add values to capture jobid value"/>
-
-    5. To extract the `objects` from the file read operation, we use the [data mapper]({{base_path}}/reference/mediators/data-mapper-mediator). It grabs the CSV file content and inserts it into the `addBatch` operation.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-drag-and-drop-datamapper.png" title="Drag and drop data mapper operation" width="70%" alt="Drag and drop data mapper operations"/>
-
-5. Forward the backend response to the API caller.
-
-    When you invoke the created resource, the request message is routed through the `/insertEmployeeBulkRecords` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-    1. Drag and drop the **respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-respond-mediator.png" title="Add Respond mediator" width="800" alt="Add Respond mediator"/>
-
-#### Configure a resource for the getStatusOfBatch
-
-1. Initialize the connector.
-
-    You can use the generated tokens to initialize the connector. Please follow the steps given in section 1 of `insertEmployeeBulkRecords` for setting up the `init` operation.
-
-2. Set up the getBatchStatus operation.
-
-    1. To retrieve the status of a created batch from the Salesforce account, you need to add the `getBatchStatus` operation.
-
-    2. Navigate into the **Palette** pane and select the graphical operations icons listed under the **Salesforcebulk Connector** section. Then drag and drop the `getBatchStatus` operation into the Design pane.
-
-        - **jobId** : The unique identifier of the job to which the batch you specify belongs.
-        - **batchId** : The unique identifier of the batch for which you want to retrieve the status.
-
-        When invoking the API, the values of the above `jobId` and `batchId` parameters come as user input.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-getbatchstatus-drag-and-drop-query.png" title="Add query operation to getBatchStatus" width="70%" alt="Add query operation to getBatchStatus"/>
-
-    3. Add the property mediator to capture the `jobId` value.
-
-        - **name** : jobId
-        - **expression** : //jobId/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-jobidgetstatus-property1-value1.png" title="Add values to capture jobid value" width="600" alt="Add values to capture jobid value"/>
-
-    4. Add the property mediator to capture the `batchId` value.
-
-        - **name** : batchId
-        - **expression** : //batchId/text()
-        - **type** : STRING
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/salesforcebulk-api-property-mediator-batchidgetstatus-property1-value1.png" title="Add values to capture batchId value" width="600" alt="Add values to capture batchId value"/>
-
-3. Forward the backend response to the API caller.
-
-    When you invoke the created resource, the request message is routed through the `/getStatusOfBatch` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-    1. Drag and drop the **respond mediator** to the **Design view**.
-
-    Now you can switch into the Source view and check the XML configuration files of the created API and sequences.
-
-    ??? note "create.xml"
-        ```
-        <?xml version="1.0" encoding="UTF-8"?>
-        <api context="/salesforce" name="Salesforcebulk-API" xmlns="http://ws.apache.org/ns/synapse">
-            <resource methods="POST" url-mapping="/insertEmployeeBulkRecords">
-                <inSequence>
-                    <property expression="//object/text()" name="objectName" scope="default" type="STRING"/>
-                    <property expression="//source/text()" name="source" scope="default" type="STRING"/>
-                    <salesforcebulk.init>
-                        <apiUrl>https://ap17.salesforce.com</apiUrl>
-                        <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
-                        <apiVersion>34</apiVersion>
-                        <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
-                        <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
-                        <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
-                    </salesforcebulk.init>
-                    <salesforcebulk.createJob>
-                        <operation>insert</operation>
-                        <object>{$ctx:objectName}</object>
-                        <contentType>XML</contentType>
-                    </salesforcebulk.createJob>
-                    <property expression="//n0:jobInfo/n0:id" name="jobId" scope="default" type="STRING" xmlns:n0="http://www.force.com/2009/06/asyncapi/dataload"/>
-                    <fileconnector.read>
-                        <source>{$ctx:source}</source>
-                        <contentType>text/plain</contentType>
-                        <filePattern>.*.csv</filePattern>
-                    </fileconnector.read>
-                    <datamapper config="gov:datamapper/NewConfig.dmc" inputSchema="gov:datamapper/NewConfig_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/NewConfig_outputSchema.json" outputType="XML" xsltStyleSheet="gov:datamapper/NewConfig_xsltStyleSheet.xml"/>
-                    <salesforcebulk.init>
-                        <apiUrl>https://ap17.salesforce.com</apiUrl>
-                        <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
-                        <apiVersion>34</apiVersion>
-                        <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
-                        <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
-                        <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
-                    </salesforcebulk.init>
-                    <salesforcebulk.addBatch>
-                        <objects>{//values}</objects>
-                        <jobId>{$ctx:jobId}</jobId>
-                        <isQuery>false</isQuery>
-                        <contentType>application/xml</contentType>
-                    </salesforcebulk.addBatch>
-                    <respond/>
-                </inSequence>
-                <outSequence/>
-                <faultSequence/>
-            </resource>
-            <resource methods="POST" url-mapping="/getStatusOfBatch">
-                <inSequence>
-                    <property expression="//jobId/text()" name="jobId" scope="default" type="STRING"/>
-                    <property expression="//batchId/text()" name="batchId" scope="default" type="STRING"/>
-                    <salesforcebulk.init>
-                        <apiUrl>https://ap17.salesforce.com</apiUrl>
-                        <accessToken>00D2x000000pIxA!AR0AQJxgll8UgZqaocCP_U516yo.bpzV19USOFzw4tFsvjbdE6x_ccIKrZgQXLQesOt_VX6FeuSrGq_VxyLdrjvryqh8EBas</accessToken>
-                        <apiVersion>34</apiVersion>
-                        <refreshToken>5Aep861Xq7VoDavIt5QG2vWIHGbv.B1Q.4rMXb9o3DFmhvbChN3tF24fOGHvUcOU2iMWSF06w5bWFjmHgu0bA5s</refreshToken>
-                        <clientSecret>37D9E930DEEB0BAF7842124352065F6DB2D90219D9DB06238978590665EDEFEC</clientSecret>
-                        <clientId>3MVG97quAmFZJfVyr_k_q7IC1iEc71lap9m4ayJWpUrkVe85mnF0GNjsIu2G4__FGC4NOzS.3o10Eh_H81xX8</clientId>
-                    </salesforcebulk.init>
-                    <salesforcebulk.getBatchStatus>
-                        <jobId>{$ctx:jobId}</jobId>
-                        <batchId>{$ctx:batchId}</batchId>
-                    </salesforcebulk.getBatchStatus>
-                    <respond/>
-                </inSequence>
-                <outSequence/>
-                <faultSequence/>
-            </resource>
-        </api>
-        ```
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/salesforcebulk.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the value of the access token and make other such changes before deploying and running this project.
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-
-1. Create a new job in the Salesforce account and insert employee details.
-
-    **Sample request**
-
-    `curl -v -X POST -d '<insertRecord><object>Account</object><source>/home/kasun/Documents/SFbulk.csv</source></insertRecord>' "http://localhost:8290/salesforce/insertEmployeeBulkRecords" -H "Content-Type:application/xml"`
-
-    **Expected Response**
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <batchInfo
-        xmlns="http://www.force.com/2009/06/asyncapi/dataload">
-        <id>7512x000002ywZNAAY</id>
-        <jobId>7502x000002ypCDAAY</jobId>
-        <state>Queued</state>
-        <createdDate>2020-07-16T06:41:53.000Z</createdDate>
-        <systemModstamp>2020-07-16T06:41:53.000Z</systemModstamp>
-        <numberRecordsProcessed>2</numberRecordsProcessed>
-        <numberRecordsFailed>2</numberRecordsFailed>
-        <totalProcessingTime>93</totalProcessingTime>
-        <apiActiveProcessingTime>2</apiActiveProcessingTime>
-        <apexProcessingTime>0</apexProcessingTime>
-    </batchInfo>
-    ```
-
-2. Get the status of the inserted employee details.
-
-    **Sample request**
-
-    `curl -v -X POST -d '<getBatchStatus><jobId>7502x000002yp73AAA</jobId><batchId>7512x000002ywWrAAI</batchId></getBatchStatus>' "http://localhost:8290/salesforce/getStatusOfBatch" -H "Content-Type:application/xml"`
-
-    **Expected Response**
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <batchInfo
-        xmlns="http://www.force.com/2009/06/asyncapi/dataload">
-        <id>7512x000002ywWrAAI</id>
-        <jobId>7502x000002yp73AAA</jobId>
-        <state>Failed</state>
-        <stateMessage>InvalidBatch : Records not found</stateMessage>
-        <createdDate>2020-07-16T06:14:36.000Z</createdDate>
-        <systemModstamp>2020-07-16T06:14:37.000Z</systemModstamp>
-        <numberRecordsProcessed>2</numberRecordsProcessed>
-        <numberRecordsFailed>0</numberRecordsFailed>
-        <totalProcessingTime>93</totalProcessingTime>
-        <apiActiveProcessingTime>3</apiActiveProcessingTime>
-        <apexProcessingTime>0</apexProcessingTime>
-    </batchInfo>
-    ```
-
-## What's Next
-
-- To customize this example for your own scenario, see the [SalesforceBulk Connector Reference]({{base_path}}/reference/connectors/salesforce-connectors/salesforcebulk-reference/) documentation for all operation details of the connector.
diff --git a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-reference.md b/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-reference.md
deleted file mode 100644
index c730096af7..0000000000
--- a/en/docs/reference/connectors/salesforcebulk-connector/salesforcebulk-reference.md
+++ /dev/null
@@ -1,628 +0,0 @@
-# SalesforceBulk Connector Reference
-
-The following operations allow you to work with the Salesforce Bulk Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-## Initialize the connector
-
-Salesforce Bulk API uses the OAuth protocol to allow application users to securely access data without having to reveal their user credentials. For more information on how authentication is done in Salesforce, see [Understanding Authentication](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_authentication.htm). You can provide only the access token and use it until it expires; after expiry, you are responsible for obtaining and using a new access token. Alternatively, you can provide the refresh token, client secret, and client ID, which the connector itself uses to obtain an access token initially and after every expiry. In this case, you are not required to handle access token expiry.
-
-To use the Salesforce Bulk connector, add the `<salesforcebulk.init>` element in your configuration before carrying out any other Salesforce Bulk operations.
-
-??? note "salesforcebulk.init"
-    The salesforcebulk.init operation initializes the connector to interact with the Salesforce Bulk API. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm) for more information.
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>apiVersion</td> - <td>The version of the Salesforce API.</td> - <td>Yes</td> - </tr> - <tr> - <td>accessToken</td> - <td>The access token to authenticate your API calls.</td> - <td>No</td> - </tr> - <tr> - <td>apiUrl</td> - <td>The instance URL for your organization.</td> - <td>Yes</td> - </tr> - <tr> - <td>tokenEndpointHostname</td> - <td>The instance url for OAuth 2.0 token endpoint when issuing authentication requests in your application. - If you haven't set any token endpoint hostname, the default hostname [https://login.salesforce.com](https://login.salesforce.com) - will be set.</td> - <td>No</td> - </tr> - <tr> - <td>refreshToken</td> - <td>The refresh token that you received to refresh the API access token.</td> - <td>No</td> - </tr> - <tr> - <td>clientId</td> - <td>The consumer key of the connected application that you created.</td> - <td>No</td> - </tr> - <tr> - <td>clientSecret</td> - <td>The consumer secret of the connected application that you created.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcebulk.init> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <accessToken>{$ctx:accessToken}</accessToken> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname> - </salesforcebulk.init> - ``` - - **Sample request** - - ```xml - <salesforcebulk.init> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://ap17.salesforce.com</apiUrl> - <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname> - </salesforcebulk.init> - ``` - - Or if you want the connector to handle token expiry - - **Sample configuration** - - ```xml - <salesforcebulk.init> - <apiVersion>{$ctx:apiVersion}</apiVersion> - <apiUrl>{$ctx:apiUrl}</apiUrl> - <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname> - <refreshToken>{$ctx:refreshToken}</refreshToken> - <clientId>{$ctx:clientId}</clientId> - <clientSecret>{$ctx:clientSecret}</clientSecret> - </salesforcebulk.init> - ``` - - **Sample request** - - ```xml - <salesforcebulk.init> - <apiVersion>34.0</apiVersion> - <apiUrl>https://ap17.salesforce.com</apiUrl> - <tokenEndpointHostname>{$ctx:tokenEndpointHostname}</tokenEndpointHostname> - <refreshToken>XXXXXXXXXXXX (Replace with your refresh token)</refreshToken> - <clientId>XXXXXXXXXXXX (Replace with your client ID)</clientId> - <clientSecret>XXXXXXXXXXXX (Replace with your client secret)</clientSecret> - </salesforcebulk.init> - ``` ---- - -## Working with Jobs - -??? note "createJob" - The salesforcebulk.createJob method creates a new job based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_create_job.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>operation</td> - <td>The processing operation that the job should perform.</td> - <td>Yes</td> - </tr> - <tr> - <td>contentType</td> - <td>The content type of the job.</td> - <td>Yes</td> - </tr> - <tr> - <td>object</td> - <td>The object type of data that is to be processed by the job.</td> - <td>Yes</td> - </tr> - <tr> - <td>externalIdFieldName</td> - <td>The id of the external object.</td> - <td>No</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the createJob operation. - - ```xml - <salesforcebulk.createJob> - <operation>{$ctx:operation}</operation> - <contentType>{$ctx:contentType}</contentType> - <object>{$ctx:object}</object> - <externalIdFieldName>{$ctx:externalIdFieldName}</externalIdFieldName> - </salesforcebulk.createJob> - ``` - - **Sample request** - - ```xml - <createJob> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <operation>insert</operation> - <contentType>CSV</contentType> - <object>Contact</object> - <externalIdFieldName>Languages__c</externalIdFieldName> - </createJob> - ``` - -??? note "updateJob" - The salesforcebulk.updateJob method closes or aborts a job that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_close_job.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The ID of the job that you either want to close or abort.</td> - <td>Yes</td> - </tr> - <tr> - <td>state</td> - <td>The state of processing of the job.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the updateJob operation. - - ```xml - <salesforcebulk.updateJob> - <jobId>{$ctx:jobId}</jobId> - <state>{$ctx:state}</state> - </salesforcebulk.updateJob> - ``` - - **Sample request** - - ```xml - <updateJob> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <jobId>75028000000MCtIAAW</jobId> - <state>Closed</state> - </updateJob> - ``` - -??? note "getJob" - The salesforcebulk.getJob method retrieves all details of an existing job based on the job ID that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_jobs_get_details.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td> The ID of the job that you either want to close or abort.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getJob operation. - - ```xml - <salesforcebulk.getJob> - <jobId>{$ctx:jobId}</jobId> - </salesforcebulk.getJob> - ``` - - **Sample request** - - ```xml - <getJob> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <jobId>75028000000MCqEAAW</jobId> - </getJob> - ``` - -## Working with Batches - -??? 
note "addBatch" - The salesforcebulk.addBatch method adds a new batch to a job based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_create.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The ID of the job that you either want to close or abort.</td> - <td>Yes</td> - </tr> - <tr> - <td>objects</td> - <td>A list of records to process.</td> - <td>Yes</td> - </tr> - <tr> - <td>contentType</td> - <td>The content type of the batch data. The content type you specify should be compatible with the content type of the associated job. Possible values are application/xml and text/csv.</td> - <td>Yes</td> - </tr> - <tr> - <td>isQuery</td> - <td>Set to true if the operation is query.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <salesforcebulk.addBatch> - <jobId>{$ctx:jobId}</jobId> - <objects>{$ctx:objects}</objects> - <contentType>{$ctx:contentType}</contentType> - <isQuery>{$ctx:isQuery}</isQuery> - </salesforcebulk.addBatch> - ``` - - **Sample request** - - Following is a sample request that can be handled by the addBatch operation, where the content type of the batch data is in application/xml format. - - ```xml - <addBatch> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <contentType>application/xml</contentType> - <isQuery>false</isQuery> - <jobId>75028000000McSwAAK</jobId> - <objects> - <values> - <sObject> - <description>Created from Bulk API on Tue Apr 14 11:15:59 PDT 2009</description> - <name>Account 711 (batch 0)</name> - </sObject> - <sObject> - <description>Created from Bulk API on Tue Apr 14 11:15:59 PDT 2009</description> - <name>Account 37811 (batch 5)</name> - </sObject> - </values> - </objects> - </addBatch> - ``` - Following is a sample request that can be handled by the addBatch operation, where the content type of the batch data is in text/csv format. - - ```xml - <addBatch> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <contentType>text/csv</contentType> - <isQuery>false</isQuery> - <jobId>75028000000McSwAAK</jobId> - <objects> - <values>Name,description - Tom Dameon,Created from Bulk API - </values> - </objects> - </addBatch> - ``` - Following is a sample request that can be handled by the addBatch operation, where the operation is query and the content type of the bulk query results is in application/xml format. - - ```xml - <addBatch> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <contentType>application/xml</contentType> - <isQuery>true</isQuery> - <jobId>75028000000McSwAAK</jobId> - <objects> - <values>SELECT Id, Name FROM Account LIMIT 100</values> - </objects> - </addBatch> - ``` - -??? note "getBatchStatus" - The salesforcebulk.getBatchStatus method retrieves the status of a batch based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_quickstart_check_status.htm) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job to which the batch you specify belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve the status.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBatchStatus operation. - - ```xml - <salesforcebulk.getBatchStatus> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - </salesforcebulk.getBatchStatus> - ``` - - **Sample request** - - ```xml - <getBatchStatus> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiVersion>34.0</apiVersion> - <jobId>75028000000M5X0</jobId> - <batchId>75128000000OZzq</batchId> - </getBatchStatus> - ``` - -??? note "getBatchResults" - The salesforcebulk.getBatchResults method retrieves results of a batch that has completed processing. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_results.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job to which the batch you specify belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve results.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBatchResults operation. - - ```xml - <salesforcebulk.getBatchRequest> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - </salesforcebulk.getBatchRequest> - ``` - - **Sample request** - - ```xml - <getBatchResults> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <jobId>75028000000M5X0</jobId> - <batchId>75128000000OZzq</batchId> - </getBatchResults> - ``` - -??? note "getBatchRequest" - The salesforcebulk.getBatchRequest method retrieves a batch request based on the properties that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_request.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job to which the batch you specify belongs.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve the batch request.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBatchRequest operation. - - ```xml - <salesforcebulk.getBatchRequest> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - </salesforcebulk.getBatchRequest> - ``` - - **Sample request** - - ```xml - <getBatchRequest> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <jobId>75028000000MCtIAAW</jobId> - <batchId>75128000000OpZFAA0</batchId> - </getBatchRequest> - ``` - -??? 
note "listBatches" - The salesforcebulk.listBatches method retrieves details of all batches in a job that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_get_info_all.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job for which you want to retrieve batch details.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the listBatches operation. - - ```xml - <salesforcebulk.listBatches> - <jobId>{$ctx:jobId}</jobId> - </salesforcebulk.listBatches> - ``` - - **Sample request** - - ```xml - <listBatches> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <jobId>75028000000MCqEAAW</jobId> - </listBatches> - ``` - -??? note "getBulkQueryResults" - The salesforcebulk.getBulkQueryResults method retrieves the bulk query results that you specify. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_code_curl_walkthrough.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The unique identifier of the job for which you want to retrieve batch details.</td> - <td>Yes</td> - </tr> - <tr> - <td>batchId</td> - <td>The unique identifier of the batch for which you want to retrieve the batch request.</td> - <td>Yes</td> - </tr> - <tr> - <td>resultId</td> - <td>The unique identifier of the results for which you want to retrieve.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the getBulkQueryResults operation. - - ```xml - <salesforcebulk.getBulkQueryResults> - <jobId>{$ctx:jobId}</jobId> - <batchId>{$ctx:batchId}</batchId> - <resultId>{$ctx:resultId}</resultId> - </salesforcebulk.getBulkQueryResults> - ``` - - **Sample request** - - ```xml - <getBulkQueryResults> - <apiVersion>34.0</apiVersion> - <accessToken>XXXXXXXXXXXX (Replace with your access token)</accessToken> - <apiUrl>https://(your_instance).salesforce.com</apiUrl> - <jobId>75028000000MCqEAAW</jobId> - <batchId>7510K00000Kzb6XQAR</batchId> - <resultId>7520K000006xofz</resultId> - </getBulkQueryResults> - ``` - ---- - -## Working with Binary Attachments - - -??? note "createJobToUploadBatchFile" - The salesforcebulk.createJobToUploadBatchFile method creates a job for batches that contain attachment records. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/binary_create_job.htm) for more information. - - **Sample configuration** - - Following is a sample request that can be handled by the createJobToUploadBatchFile operation. It creates a job for batches that contain attachment records.. - - ```xml - <salesforcebulk.createJobToUploadBatchFile> - </salesforcebulk.createJobToUploadBatchFile> - ``` - - **Sample request** - - ```xml - http://localhost:8280/services/salesforcebulk_uploadBatchFile?apiUrl=https://(your_instance).salesforce.com&accessToken=XXXXXXXXXXXXXXXXX&apiVersion=34.0&refreshToken=XXXXXXXXXXXXXXXXX&clientId=XXXXXXXXXXXXXXXXX&clientSecret=XXXXXXXXXXXXXXXXX&jobId=75028000000MCv9AAG - ``` - -??? 
note "getBulkQueryResults" - The salesforcebulk.getBulkQueryResults method creates a batch of attachment records. See the [related API documentation](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/binary_create_batch.htm) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>jobId</td> - <td>The ID of the job for which you want to create a batch of attachment records.</td> - <td>Yes</td> - </tr> - </table> - - **Sample configuration** - - Following is a sample request that can be handled by the uploadBatchFile operation.It creates a job for batches that contain attachment records. - - ```xml - <salesforcebulk.uploadBatchFile> - <jobId>{$url:jobId}</jobId> - </salesforcebulk.uploadBatchFile> - ``` - - **Sample request** - - ```xml - http://localhost:8280/services/salesforcebulk_uploadBatchFile?apiUrl=https://(your_instance).salesforce.com&accessToken=XXXXXXXXXXXXXXXXX&apiVersion=34.0&refreshToken=XXXXXXXXXXXXXXXXX&clientId=XXXXXXXXXXXXXXXXX&clientSecret=XXXXXXXXXXXXXXXXX&jobId=75028000000MCv9AAG - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/servicenow-connector/servicenow-connector-config.md b/en/docs/reference/connectors/servicenow-connector/servicenow-connector-config.md deleted file mode 100644 index c6d88b7e3d..0000000000 --- a/en/docs/reference/connectors/servicenow-connector/servicenow-connector-config.md +++ /dev/null @@ -1,268 +0,0 @@ -# ServiceNow Connector Reference - -The following operations allow you to work with the ServiceNow Connector. Click an operation name to see parameter details and samples on how to use it. - ---- - -## Initialize the connector - -To use the ServiceNow connector, add the <servicenow.init> element in your configuration before carrying out any other ServiceNow operations. - -The ServiceNow API requires all requests to be authenticated as a user. User has to create a own instance with his user credentials. When u create a account in ServiceNow Developer page then you are enable to create your own instance. For more information, see [the ServiceNow Developer page](https://developer.servicenow.com/app.do#!/home). - -??? note "servicenow.init" - The servicenow.init operation initializes the connector to interact with the ServiceNow API. For more information, see [the API documentation](http://wiki.servicenow.com/index.php?title=REST_API#gsc.tab=0). - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>serviceNowInstanceURL</td> - <td>The base endpoint URL of the ServiceNow API. </td> - <td>Yes.</td> - </tr> - <tr> - <td>username</td> - <td>The user Name of the own instance.</td> - <td>Yes.</td> - </tr> - <tr> - <td>password</td> - <td>The Password of the own instance.</td> - <td>Yes.</td> - </tr> - </table> - - **Sample configurations** - - ```xml - <servicenow.init> - <serviceNowInstanceURL>{$ctx:serviceNowInstanceURL}</serviceNowInstanceURL> - <username>{$ctx:username}</username> - <password>{$ctx:password}</password> - </servicenow.init> - ``` - - **Sample request** - - ```json - { - "serviceNowInstanceURL":"https://dev17686.service-now.com", - "username":"admin", - "password":"12345" - } - ``` - ---- - -### Aggregate API - -??? note "servicenow.getAggregateRecord" - The getAggregateRecord operation allows you to compute aggregate statistics about existing table and column data. 
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>tableName</td>
-            <td>Name of the table from which you want to retrieve aggregate statistics.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysparmAvgFields</td>
-            <td>A comma-separated list of fields for which to calculate the average value.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysparmMinFields</td>
-            <td>A comma-separated list of fields for which to calculate the minimum value.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysparmMaxFields</td>
-            <td>A comma-separated list of fields for which to calculate the maximum value.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysparmCount</td>
-            <td>Set this parameter to true to return the number of records matched by the query.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysparmSumFields</td>
-            <td>A comma-separated list of fields for which to calculate the sum of the values.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <servicenow.getAggregateRecord>
-        <tableName>{$ctx:tableName}</tableName>
-        <sysparmAvgFields>{$ctx:sysparmAvgFields}</sysparmAvgFields>
-        <sysparmMinFields>{$ctx:sysparmMinFields}</sysparmMinFields>
-        <sysparmMaxFields>{$ctx:sysparmMaxFields}</sysparmMaxFields>
-        <sysparmCount>{$ctx:sysparmCount}</sysparmCount>
-        <sysparmSumFields>{$ctx:sysparmSumFields}</sysparmSumFields>
-    </servicenow.getAggregateRecord>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "serviceNowInstanceURL":"https://dev17686.service-now.com",
-        "username":"admin",
-        "password":"12345",
-        "tableName":"incident",
-        "sysparmAvgFields":"category,active",
-        "sysparmMinFields":"number",
-        "sysparmMaxFields":"number",
-        "sysparmCount":"true",
-        "sysparmSumFields":"priority"
-    }
-    ```
-
----
-
-### Import Set API
-
-??? note "servicenow.getRecordStagingTable"
-    The getRecordStagingTable operation retrieves the associated record and the resulting transformation result.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>tableNameStaging</td>
-            <td>Name of the staging table from which you want to retrieve records.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>sysIdStaging</td>
-            <td>The sys_id that is automatically generated by ServiceNow. It is a unique value for each record.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <servicenow.getRecordsStagingTable>
-        <tableNameStaging>{$ctx:tableNameStaging}</tableNameStaging>
-        <sysIdStaging>{$ctx:sysIdStaging}</sysIdStaging>
-    </servicenow.getRecordsStagingTable>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "serviceNowInstanceURL":"https://dev17686.service-now.com",
-        "username":"admin",
-        "password":"12345",
-        "tableNameStaging":"imp_computer",
-        "sysIdStaging":"XXXXXXXXXXXX (Replace with the sys_id of the record)"
-    }
-    ```
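-
-    As a minimal, illustrative sketch (assuming the caller passes the staging table name and the generated sys_id in the JSON payload), the operation can be wired into a sequence as follows:
-
-    ```xml
-    <!-- Extract the staging table name and record sys_id from the request -->
-    <property name="tableNameStaging" expression="json-eval($.tableNameStaging)"/>
-    <property name="sysIdStaging" expression="json-eval($.sysIdStaging)"/>
-    <servicenow.getRecordsStagingTable>
-        <tableNameStaging>{$ctx:tableNameStaging}</tableNameStaging>
-        <sysIdStaging>{$ctx:sysIdStaging}</sysIdStaging>
-    </servicenow.getRecordsStagingTable>
-    ```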
-
-??? note "servicenow.postRecordStagingTable"
-    The postRecordStagingTable operation inserts incoming data into a specified staging table and triggers transformation based on predefined transform maps in the import set table.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>tableNameStaging</td>
-            <td>Name of the staging table into which you want to insert records.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>serialNumber</td>
-            <td>This is an attribute in the table. Specify the row value for serialNumber.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>cpuCount</td>
-            <td>This is an attribute in the table. Specify the row value for cpuCount.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>manufacturer</td>
-            <td>This is an attribute in the table. Specify the row value for manufacturer.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>name</td>
-            <td>This is an attribute in the table. Specify the row value for name.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>operatingSystem</td>
-            <td>This is an attribute in the table. Specify the row value for operatingSystem.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>diskSpace</td>
-            <td>This is an attribute in the table. Specify the row value for diskSpace.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>ram</td>
-            <td>This is an attribute in the table. Specify the row value for ram.</td>
-            <td>Yes.</td>
-        </tr>
-        <tr>
-            <td>apiColumns</td>
-            <td>The attribute values of your table in your instance.</td>
-            <td>Yes.</td>
-        </tr>
-    </table>
-
-    **Sample configurations**
-
-    ```xml
-    <servicenow.postRecordStagingTable>
-        <tableNameStaging>{$ctx:tableNameStaging}</tableNameStaging>
-        <serialNumber>{$ctx:serialNumber}</serialNumber>
-        <cpuCount>{$ctx:cpuCount}</cpuCount>
-        <manufacturer>{$ctx:manufacturer}</manufacturer>
-        <name>{$ctx:name}</name>
-        <operatingSystem>{$ctx:operatingSystem}</operatingSystem>
-        <diskSpace>{$ctx:diskSpace}</diskSpace>
-        <ram>{$ctx:ram}</ram>
-        <apiColumns>{$ctx:apiColumns}</apiColumns>
-    </servicenow.postRecordStagingTable>
-    ```
-
-    **Sample request**
-
-    ```json
-    {
-        "serviceNowInstanceURL":"https://dev17686.service-now.com",
-        "username":"admin",
-        "password":"12345",
-        "tableNameStaging":"imp_computer",
-        "serialNumber":"282",
-        "cpuCount":"234",
-        "name":"Mac",
-        "operatingSystem":"ubuntu",
-        "manufacturer":"IBM",
-        "diskSpace":"400Gb",
-        "ram":"ram 1500",
-        "apiColumns": {"sys_mod_count":"2","sys_import_state_comment":"wwww"}
-    }
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/servicenow-connector/servicenow-connector-example.md b/en/docs/reference/connectors/servicenow-connector/servicenow-connector-example.md
deleted file mode 100644
index 774ab415b9..0000000000
--- a/en/docs/reference/connectors/servicenow-connector/servicenow-connector-example.md
+++ /dev/null
@@ -1,264 +0,0 @@
-# ServiceNow Connector Example
-
-The WSO2 ServiceNow connector allows you to access the ServiceNow REST API from an integration sequence. Using the ServiceNow connector, you can work with the Aggregate API, Import Set API, and Table API in ServiceNow. You can read more about the ServiceNow REST APIs [here](https://developer.servicenow.com/dev.do#!/reference/api/orlando/rest/c_TableAPI).
-
-## What you'll build
-
-This example explains how to use the ServiceNow connector to create records in a table and retrieve their information. Assume your organization uses ServiceNow support and you need to create an incident. To do that, you can use the Table API, which is a REST API. This can be done easily using the WSO2 ServiceNow connector: whenever you need to raise an incident in ServiceNow, the API can be called with the required information.
-
-It will have two HTTP API resources, which are `postRecord` and `readRecord`.
-
-[![ServiceNow scenario]({{base_path}}/assets/img/integrate/connectors/servicenow-scenario.png)]({{base_path}}/assets/img/integrate/connectors/servicenow-scenario.png)
-
-* `/postRecord`: It creates a new record in the existing incident table in the ServiceNow instance.
- -* `/readRecord `: It reads the detailed information about the created incident record in the incident table. - -If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it. - -## Setting up the environment - -Please follow the steps mentioned in the [Setting up ServiceNow Instance]({{base_path}}/reference/connectors/servicenow-connector/settingup-servicenow-instance/) document in order to create a ServiceNow Instance and obtain the credentials. Keep them saved to be used in the next steps. - -## Configure the connector in WSO2 Integration Studio - -Follow these steps to set up the Integration Project and the Connector Exporter Project. - -{!includes/reference/connectors/importing-connector-to-integration-studio.md!} - -1. First let's create postRecord sequence and ReadRecord sequences. Right click on the created Integration Project and select, -> **New** -> **Sequence** to create the Sequence. - - <a href="{{base_path}}/assets/img/integrate/connectors/add-sequence.jpg"><img src="{{base_path}}/assets/img/integrate/connectors/add-sequence.jpg" title="Adding a Sequence" width="800" alt="Adding a Sequence"/></a> - -2. Provide the Sequence name as PostRecord. You can go to the source view of the XML configuration file of the API and copy the following configuration. - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="PostRecord" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <servicenow.init> - <serviceNowInstanceURL>https://dev55707.service-now.com</serviceNowInstanceURL> - <username>admin</username> - <password>Diazo123@</password> - </servicenow.init> - <servicenow.postRecord> - <tableName>incident</tableName> - <sysparmDisplayValue>true</sysparmDisplayValue> - <sysparmFields>short_description,number,sys_id</sysparmFields> - <sysparmView>short_description,number,sys_id</sysparmView> - <sysparmInputDisplayValue>true</sysparmInputDisplayValue> - <number>34</number> - <shortDescription>{$ctx:shortDescription}</shortDescription> - <active>true</active> - <approval>owner</approval> - <category>inquiry</category> - <contactType>{$ctx:contactType}</contactType> - </servicenow.postRecord> - <property expression="json-eval($.result.sys_id)" name="sysId" scope="default" type="STRING"/> - </sequence> - ``` -3. Create the ReadRecord sequence as below. - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="ReadRecord" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <servicenow.init> - <serviceNowInstanceURL>https://dev55707.service-now.com</serviceNowInstanceURL> - <username>admin</username> - <password>Diazo123@</password> - </servicenow.init> - <servicenow.getRecordById> - <sysId>{$ctx:sysId}</sysId> - <tableName>incident</tableName> - </servicenow.getRecordById> - </sequence> - ``` -4. Now right click on the created Integration Project and select **New** -> **Rest API** to create the REST API. - -5. Provide the API name as ServiceNowAPI and the API context as `/servicenow`. You can go to the source view of the XML configuration file of the API and copy the following configuration. 
```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <api context="/servicenow" name="ServiceNowAPI" xmlns="http://ws.apache.org/ns/synapse">
-        <resource methods="POST" uri-template="/postRecord">
-            <inSequence>
-                <property expression="json-eval($.shortDescription)" name="shortDescription" scope="default" type="STRING"/>
-                <property expression="json-eval($.contactType)" name="contactType" scope="default" type="STRING"/>
-                <sequence key="PostRecord"/>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-        <resource methods="POST" uri-template="/readRecord">
-            <inSequence>
-                <property expression="json-eval($.sysId)" name="sysId" scope="default" type="STRING"/>
-                <sequence key="ReadRecord"/>
-                <respond/>
-            </inSequence>
-            <outSequence/>
-            <faultSequence/>
-        </resource>
-    </api>
-    ```
-
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/servicenow.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the instance details and make other such changes before deploying and running this project.
-
-## Deployment
-
-Follow these steps to deploy the exported CApp in the integration runtime.
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-
-### Post Record Operation
-
-1. Create a file called data.json with the following payload. You can further refer to the parameters from [here](https://docs.servicenow.com/bundle/orlando-application-development/page/integrate/inbound-rest/concept/c_TableAPI.html#c_TableAPI).
-    ```json
-    {
-        "shortDescription":"Incident type: L2",
-        "contactType":"email"
-    }
-    ```
-2. Invoke the API as shown below using the curl command. curl can be downloaded from [here](https://curl.haxx.se/download.html).
-    ```
-    curl -H "Content-Type: application/json" --request POST --data @data.json http://localhost:8290/servicenow/postRecord
-    ```
-**Expected Response**:
-You should get the following response; save the returned `sys_id` for the next operation.
-```json
-{
-    "result": {
-        "short_description": "Incident type: L2",
-        "number": "34",
-        "sys_id": "fd7e0271073f801036baf03c7c1ed0ff"
-    }
-}
-```
-
-### Read Record Operation
-
-1. Create a file called data.json with the following payload. Make sure you paste the previously saved sys_id as the sysId value below.
-    ```json
-    {
-        "sysId":"fd7e0271073f801036baf03c7c1ed0ff"
-    }
-    ```
-2. Invoke the API as shown below using the curl command. curl can be downloaded from [here](https://curl.haxx.se/download.html).
-    ```
-    curl -H "Content-Type: application/json" --request POST --data @data.json http://localhost:8290/servicenow/readRecord
-    ```
-
-**Expected Response**:
-You should get the following text returned.
- - ``` - { - "result":{ - "parent":"", - "made_sla":"true", - "caused_by":"", - "watch_list":"", - "upon_reject":"cancel", - "sys_updated_on":"2020-03-27 17:45:43", - "child_incidents":"0", - "hold_reason":"", - "approval_history":"", - "number":"34", - "resolved_by":"", - "sys_updated_by":"admin", - "opened_by":{ - "link":"https://dev55707.service-now.com/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441", - "value":"6816f79cc0a8016401c5a33be04be441" - }, - "user_input":"", - "sys_created_on":"2020-03-27 17:45:43", - "sys_domain":{ - "link":"https://dev55707.service-now.com/api/now/table/sys_user_group/global", - "value":"global" - }, - "state":"1", - "sys_created_by":"admin", - "knowledge":"false", - "order":"", - "calendar_stc":"", - "closed_at":"", - "cmdb_ci":"", - "delivery_plan":"", - "contract":"", - "impact":"3", - "active":"true", - "work_notes_list":"", - "business_service":"", - "priority":"5", - "sys_domain_path":"/", - "rfc":"", - "time_worked":"", - "expected_start":"", - "opened_at":"2020-03-27 17:45:43", - "business_duration":"", - "group_list":"", - "work_end":"", - "caller_id":"", - "reopened_time":"", - "resolved_at":"", - "approval_set":"", - "subcategory":"", - "work_notes":"", - "short_description":"Incident type: L2", - "close_code":"", - "correlation_display":"", - "delivery_task":"", - "work_start":"", - "assignment_group":"", - "additional_assignee_list":"", - "business_stc":"", - "description":"", - "calendar_duration":"", - "close_notes":"", - "notify":"1", - "service_offering":"", - "sys_class_name":"incident", - "closed_by":"", - "follow_up":"", - "parent_incident":"", - "sys_id":"fd7e0271073f801036baf03c7c1ed0ff", - "contact_type":"", - "reopened_by":"", - "incident_state":"1", - "urgency":"3", - "problem_id":"", - "company":"", - "reassignment_count":"0", - "activity_due":"", - "assigned_to":"", - "severity":"3", - "comments":"", - "approval":"not requested", - "sla_due":"", - "comments_and_work_notes":"", - "due_date":"", - "sys_mod_count":"0", - "reopen_count":"0", - "sys_tags":"", - "escalation":"0", - "upon_approval":"proceed", - "correlation_id":"", - "location":"", - "category":"inquiry" - } - } - ``` - -## What's Next - -* To customize this example for your own scenario, see [ServiceNow Connector Configuration]({{base_path}}/reference/servicenow-connector/servicenow-connector-config/) documentation for all operation details of the connector. diff --git a/en/docs/reference/connectors/servicenow-connector/servicenow-overview.md b/en/docs/reference/connectors/servicenow-connector/servicenow-overview.md deleted file mode 100644 index 1f43119f89..0000000000 --- a/en/docs/reference/connectors/servicenow-connector/servicenow-overview.md +++ /dev/null @@ -1,35 +0,0 @@ -# ServiceNow Connector Overview - -ServiceNow is an application platform as a service, which is a cloud-based computing model that provides the infrastructure needed to develop, run, and manage applications. It offers activities of an organization such as data collection, storage, workflow automation, and reporting through a single user interface. This software as a service (SaaS) platform contains a number of modular applications that can vary by instance and user. It focuses on service-orientation toward the tasks, activities, and processes. - -The WSO2 ServiceNow connector allows you to access the ServiceNow REST API from an integration sequence. Using ServiceNow connector you can work with Aggregate API, Import Set API and Table API in ServiceNow. 
You can further read about the ServiceNow REST APIs [here](https://developer.servicenow.com/dev.do#!/reference/api/orlando/rest/c_TableAPI).
-
-To see the available ServiceNow connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "ServiceNow".
-
-<img src="{{base_path}}/assets/img/integrate/connectors/servicenow-store.png" title="ServiceNow Connector Store" width="200" alt="ServiceNow Connector Store"/>
-
-## Compatibility
-
-| Connector version | Supported product versions |
-| ------------- |------------- |
-| 1.0.2 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0 |
-
-For older versions, see the details in the connector store.
-
-## ServiceNow Connector documentation
-
-* **[Setting up the ServiceNow Instance]({{base_path}}/reference/connectors/servicenow-connector/settingup-servicenow-instance/)**: This involves creating and setting up a developer account and instance.
-
-* **[ServiceNow Connector Example]({{base_path}}/reference/connectors/servicenow-connector/servicenow-connector-example/)**: This example explains how to use the ServiceNow connector to create records in a table and retrieve their information.
-
-* **[ServiceNow Connector Reference]({{base_path}}/reference/connectors/servicenow-connector/servicenow-connector-config/)**: This documentation provides a reference guide for the ServiceNow connector.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, please create a pull request in the following repository.
-
-* [ServiceNow Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-servicenow)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/servicenow-connector/settingup-servicenow-instance.md b/en/docs/reference/connectors/servicenow-connector/settingup-servicenow-instance.md
deleted file mode 100644
index 94ecc19493..0000000000
--- a/en/docs/reference/connectors/servicenow-connector/settingup-servicenow-instance.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Setting up the ServiceNow Instance
-
-The ServiceNow connector allows you to access the ServiceNow REST API from an integration sequence. To use the REST API, you need a [ServiceNow](https://www.servicenow.com/) account.
-
-## Signing Up for ServiceNow
-
-1. Visit the [ServiceNow](https://www.servicenow.com/) site, sign up for an account, and complete the verification step.
-
-2. Now you need to create an instance. To do that, visit the [ServiceNow Developer site](https://developer.servicenow.com/dev.do) and log in with the credentials you obtained in the previous step. Click on the **Request Instance** button.
-    <br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/servicenow-2.png" title="Login to Developer Account" width="800" alt="Login to Developer Account"/>
-
-3. Once you reach the **Request an Instance** page, you can choose the ServiceNow instance version you prefer.
-    <br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/servicenow-3.png" title="Choose the release" width="800" alt="Choose the release"/>
-
-4. You will receive the instance details. Make a note of these for future reference.
-    <br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/servicenow-4.png" title="Instance Details" width="800" alt="Instance Details"/>
-
-5. Log in to the instance URL provided in step 4 with the credentials provided.
At this point, you need to change your password.
-    <br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/servicenow-4.png" title="Instance Details" width="800" alt="Instance Details"/>
-
-6. With the changed password, you can log in again to the instance, and you will be redirected to the Dashboard.
-    <br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/servicenow-5.png" title="Dashboard" width="800" alt="Dashboard"/>
-
diff --git a/en/docs/reference/connectors/smpp-connector/smpp-connector-config.md b/en/docs/reference/connectors/smpp-connector/smpp-connector-config.md
deleted file mode 100644
index e7b88e1d5e..0000000000
--- a/en/docs/reference/connectors/smpp-connector/smpp-connector-config.md
+++ /dev/null
@@ -1,1035 +0,0 @@
-# SMPP Connector Reference
-
-The following operations allow you to work with the SMPP Connector. Click an operation name to see parameter details and samples on how to use it.
-
-## Create SMSC Connection
-
-To use the SMPP connector, you need an SMSC connection. To create an SMSC connection, add the `<SMPP.init>` element as a local entry configuration before carrying out any other SMPP operation. This is used to bind with the SMSC (Short Message Service Center). Once a connection is defined, it can be reused among other SMPP operations.
-
-??? note "init"
-    The init operation creates the connection to the SMSC.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>host</td>
-            <td>IP address of the SMSC.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>port</td>
-            <td>Port to access the SMSC.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>systemId</td>
-            <td>Username to access the SMSC.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>password</td>
-            <td>Password to access the SMSC.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>enquireLinkTimer</td>
-            <td>Used to check the connectivity between the SMPP connector and the SMSC.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>transactionTimer</td>
-            <td>Time elapsed between an SMPP connector request and the corresponding response.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>systemType</td>
-            <td>It is used to categorize the type of ESME that is binding to the SMSC. Examples include “CP” (Content providers), “VMS” (voice mail system) and “OTA” (over-the-air activation system).</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>addressTon</td>
-            <td>Indicates Type of Number of the ESME address.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>addressNpi</td>
-            <td>Numbering Plan Indicator for ESME address.</td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <localEntry key="SMSC_CONFIG_1" xmlns="http://ws.apache.org/ns/synapse">
-        <SMPP.init>
-            <host>{$ctx:host}</host>
-            <port>{$ctx:port}</port>
-            <systemId>{$ctx:systemId}</systemId>
-            <password>{$ctx:password}</password>
-            <enquireLinkTimer>{$ctx:enquireLinkTimer}</enquireLinkTimer>
-            <transactionTimer>{$ctx:transactionTimer}</transactionTimer>
-            <systemType>{$ctx:systemType}</systemType>
-            <addressTon>{$ctx:addressTon}</addressTon>
-            <addressNpi>{$ctx:addressNpi}</addressNpi>
-            <connectionType>init</connectionType>
-            <name>SMSC_CONFIG_1</name>
-        </SMPP.init>
-    </localEntry>
-    ```
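-
-    Once this local entry is deployed, any SMPP operation can reuse the connection by referencing the entry name through its `configKey` attribute. The following is an illustrative sketch only; the full set of sendSMS parameters is described later on this page:
-
-    ```xml
-    <!-- Reuse the SMSC connection defined in the SMSC_CONFIG_1 local entry -->
-    <SMPP.sendSMS configKey="SMSC_CONFIG_1">
-        <sourceAddress>{$ctx:sourceAddress}</sourceAddress>
-        <destinationAddress>{$ctx:destinationAddress}</destinationAddress>
-        <message>{$ctx:message}</message>
-    </SMPP.sendSMS>
-    ```
-
-    **Sample request**
-
-    Following is a sample REST/JSON request that can be handled by the init operation.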
-    ```json
-    {
-        "host": "127.0.0.1",
-        "port": 2775,
-        "systemId": "DAMIEN",
-        "password": "neimad",
-        "systemType": "UNKNOWN",
-        "addressTon": "UNKNOWN",
-        "addressNpi": "UNKNOWN",
-        "enquireLinkTimer": "50000"
-    }
-    ```
-
-## Send SMS Message
-
-??? note "sendSMS"
-    Used to send an SMS message to the SMSC (Short Message Service Center).
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Description</th>
-            <th>Required</th>
-        </tr>
-        <tr>
-            <td>serviceType</td>
-            <td>
-                Indicates the SMS application service. The following generic service_types are defined:
-                <table>
-                    <tr>
-                        <td>"" (NULL)</td>
-                        <td>Default</td>
-                    </tr>
-                    <tr>
-                        <td>"CMT"</td>
-                        <td>Cellular Messaging</td>
-                    </tr>
-                    <tr>
-                        <td>"CPT"</td>
-                        <td>Cellular Paging</td>
-                    </tr>
-                    <tr>
-                        <td>"VMN"</td>
-                        <td>Voice Mail Notification</td>
-                    </tr>
-                    <tr>
-                        <td>"VMA"</td>
-                        <td>Voice Mail Alerting</td>
-                    </tr>
-                    <tr>
-                        <td>"WAP"</td>
-                        <td>Wireless Application Protocol</td>
-                    </tr>
-                    <tr>
-                        <td>"USSD"</td>
-                        <td>Unstructured Supplementary Services Data</td>
-                    </tr>
-                </table>
-            </td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sourceAddressTon</td>
-            <td>Type of number for source address.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sourceAddressNpi</td>
-            <td>Numbering plan indicator for source address.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>sourceAddress</td>
-            <td>Source address of the SMS message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>destinationAddressTon</td>
-            <td>Type of number for destination. Used as a default for the destination address.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>destinationAddressNpi</td>
-            <td>Numbering plan indicator for destination.</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>destinationAddress</td>
-            <td>
-                Destination address of the SMS message.
-                Source address TON, Destination address TON
-                <table>
-                    <tr>
-                        <th>TON</th>
-                        <th>VALUE</th>
-                    </tr>
-                    <tr>
-                        <td>Unknown</td>
-                        <td>0</td>
-                    </tr>
-                    <tr>
-                        <td>International</td>
-                        <td>1</td>
-                    </tr>
-                    <tr>
-                        <td>National</td>
-                        <td>2</td>
-                    </tr>
-                    <tr>
-                        <td>Network Specific</td>
-                        <td>3</td>
-                    </tr>
-                    <tr>
-                        <td>Subscriber Number</td>
-                        <td>4</td>
-                    </tr>
-                    <tr>
-                        <td>Alphanumeric</td>
-                        <td>5</td>
-                    </tr>
-                    <tr>
-                        <td>Abbreviated</td>
-                        <td>6</td>
-                    </tr>
-                    <tr>
-                        <td colspan="2">All other values reserved</td>
-                    </tr>
-                </table>
-                Source address NPI, Destination address NPI
-                <table>
-                    <tr>
-                        <th>NPI</th>
-                        <th>VALUE</th>
-                    </tr>
-                    <tr>
-                        <td>Data (X.121)</td>
-                        <td>2</td>
-                    </tr>
-                    <tr>
-                        <td>ERMES</td>
-                        <td>16</td>
-                    </tr>
-                    <tr>
-                        <td>Internet (IP)</td>
-                        <td>20</td>
-                    </tr>
-                    <tr>
-                        <td>ISDN (E163/E164)</td>
-                        <td>1</td>
-                    </tr>
-                    <tr>
-                        <td>Land Mobile (E.212)</td>
-                        <td>4</td>
-                    </tr>
-                    <tr>
-                        <td>National</td>
-                        <td>8</td>
-                    </tr>
-                    <tr>
-                        <td>Private</td>
-                        <td>9</td>
-                    </tr>
-                    <tr>
-                        <td>Telex (F.69)</td>
-                        <td>3</td>
-                    </tr>
-                    <tr>
-                        <td>Unknown</td>
-                        <td>0</td>
-                    </tr>
-                    <tr>
-                        <td>WAP Client Id (to be defined by WAP Forum)</td>
-                        <td>24</td>
-                    </tr>
-                </table>
-            </td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>message</td>
-            <td>Content of the SMS message.</td>
-            <td>Yes</td>
-        </tr>
-        <tr>
-            <td>esmClass</td>
-            <td>
-                The esmClass parameter is used to indicate special message attributes associated with the short message (message mode and type).
-                <table>
-                    <tr>
-                        <th>Bits 7 6 5 4 3 2 1 0</th>
-                        <th>Meaning</th>
-                    </tr>
-                    <tr>
-                        <td>
-                            x x x x x x 0 0<br>
-                            x x x x x x 0 1<br>
-                            x x x x x x 1 0<br>
-                            x x x x x x 1 1<br>
-                        </td>
-                        <td>Messaging Mode (bits 1-0)<br>
-                            Default SMSC Mode (e.g.
Store and Forward)<br> - Datagram mode<br> - Forward (i.e. Transaction) mode<br> - Store and Forward mode<br> - (use to select Store and Forward mode if Default SMSC Mode is non Store and Forward)<br> - </td> - </tr> - <tr> - <td>x x 0 0 0 0 x x<br> - x x 0 0 1 0 x x<br> - x x 0 1 0 0 x x<br> - </td> - <td>Message Type (bits 5-2)<br> - Default message Type (i.e. normal message)<br> - Short Message contains ESME Delivery Acknowledgement<br> - Short Message contains ESME Manual/User Acknowledgement<br> - </td> - </tr> - <tr> - <td>0 0 x x x x x x<br> - 0 1 x x x x x x<br> - 1 0 x x x x x x<br> - 1 1 x x x x x x<br> - </td> - <td>GSM Network Specific Features (bits 7-6)<br> - No specific features selected<br> - UDHI Indicator (only relevant for MT short messages)<br> - Set Reply Path (only relevant for GSM network)<br> - Set UDHI and Reply Path (only relevant for GSM network)<br> - </td> - </tr> - </table> - </td> - <td>Optional</td> - </tr> - <tr> - <td>protocolId</td> - <td>protocol identifier (network specific).<br> - GSM - Set according to GSM 03.40 [ GSM 03.40]<br> - ANSI-136 (TDMA)<br> - For mobile terminated messages, this field is not used and is therefore ignored by the SMSC.<br> - For ANSI-136 mobile originated messages, the SMSC should set this value to NULL.<br> - IS-95 (CDMA)<br> - For mobile terminated messages, this field is not used and is therefore ignored by the SMSC.<br> - For IS-95 mobile originated messages, the SMSC should set this value to NULL.<br> - </td> - <td>Optional</td> - </tr> - <tr> - <td>priorityFlag</td> - <td> - sets the priority of the message. - <table> - <tr> - <th>Priority Level</th> - <th>GSM</th> - <th>ANSI-136</th> - <th>IS-95</th> - </tr> - <tr> - <td>0</td> - <td>Non-priority</td> - <td>Bulk</td> - <td>Normal</td> - </tr> - <tr> - <td>1</td> - <td>Priority</td> - <td>Normal</td> - <td>Interactive</td> - </tr> - <tr> - <td>2</td> - <td>Priority</td> - <td>Urgent</td> - <td>Urgent</td> - </tr> - <tr> - <td>3</td> - <td>Priority</td> - <td>Very Urgent</td> - <td>Emergency</td> - </tr> - <tr>All other values reserved - </tr> - </table> - Priority<br> - There are two types of priority. - <ol> - <li>Delivery priority - Message delivery is attempted even if the mobile is temporarily absent. - E.g., Temporarily out of reach or another short message is being delivered at the same time. - </li> - <li>Content priority - No free message memory capacity. - E.g., The user does not delete any received message and maximum storage space has been reached. - </li> - </ol> - Non-priority<br> - It will attempt delivery if the mobile has not been identified as temporarily absent. - </td> - <td>Optional</td> - </tr> - <tr> - <td>scheduleDeliveryTime</td> - <td>This parameter specifies the scheduled time at which the message delivery should be first attempted. Set to NULL for immediate delivery.</td> - <td>Optional</td> - </tr> - <tr> - <td>validityPeriod</td> - <td>The validity_period parameter indicates the SMSC expiration time, after which the message should be discarded if not delivered to the destination. 
It can be defined in absolute time format or relative time format.</td> - <td>Optional</td> - </tr> - <tr> - <td>registeredDelivery</td> - <td>Indicator to signify if an SMSC delivery receipt or acknowledgment is required - Value other than 0 represents delivery report request.</td> - <td>Optional</td> - </tr> - <tr> - <td>validityPeriod</td> - <td>The validity_period parameter indicates the SMSC expiration time, after which the message should be discarded if not delivered to the destination. It can be defined in absolute time format or relative time format.</td> - <td>Optional</td> - </tr> - <tr> - <td>replaceIfPresentFlag</td> - <td> - The replace_if_present_flag parameter is used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service_type match the same fields in the new message. - <table> - <tr> - <th>Value</th> - <th>Description</th> - </tr> - <tr> - <td>0</td> - <td>Don't replace (default)</td> - </tr> - <tr> - <td>1</td> - <td>Replace</td> - </tr> - <tr> - <td>2-255</td> - <td>reserved</td> - </tr> - </table> - <td>Optional</td> - </tr> - <tr> - <td>alphabet</td> - <td> - Alphabet is used in the data encoding of SMS message. Following alphabets are supported. - <ol> - <li>ALPHA_DEFAULT</li> - <li>ALPHA_8_BIT</li> - <li>ALPHA_UCS2</li> - <li>ALPHA_RESERVED</li> - </ol> - </td> - <td>Optional</td> - </tr> - <tr> - <td>charset</td> - <td> - Charset is used when decoding the message in SMSC. Following charsets are supported. - <ol> - <li>UTF-8</li> - <li>UTF-16</li> - </ol> - </td> - <td>Optional</td> - </tr> - <tr> - <td>isCompressed</td> - <td>It allows SMS message compression.</td> - <td>Optional</td> - </tr> - <tr> - <td>messageClass</td> - <td> - <table> - <tr> - <th>Value</th> - <th>Message Class</th> - </tr> - <tr> - <td>CLASS0</td> - <td>Flash messages. Display only not store into the phone</td> - </tr> - <tr> - <td>CLASS1</td> - <td>ME specific - the SMS is stored in the mobile phone memory</td> - </tr> - <tr> - <td>CLASS2</td> - <td>SIM specific - the SMS is stored on the SIM</td> - </tr> - <tr> - <td>CLASS3</td> - <td>TE specific - this means the SMS is sent to a computer attached to the receiving mobile phone</td> - </tr> - </table> - Data encoding - defines the encoding scheme of the SMS message. You can find general data coding scheme from [here](https://en.wikipedia.org/wiki/Data_Coding_Scheme) for different combination of alphabet, message class, isCompressed values. 
</td>
-            <td>Optional</td>
-        </tr>
-        <tr>
-            <td>submitDefaultMsgId</td>
-            <td>
-                Indicates the short message to send from a predefined list of messages stored on the SMSC.<br>
-                <table>
-                    <tr>
-                        <th>Value</th>
-                        <th>Description</th>
-                    </tr>
-                    <tr>
-                        <td>0</td>
-                        <td>reserved</td>
-                    </tr>
-                    <tr>
-                        <td>1 - 254</td>
-                        <td>Allowed values</td>
-                    </tr>
-                    <tr>
-                        <td>255</td>
-                        <td>reserved</td>
-                    </tr>
-                </table>
-            </td>
-            <td>Optional</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <SMPP.sendSMS configKey="SMSC_CONFIG_1">
-        <serviceType>{$ctx:serviceType}</serviceType>
-        <sourceAddressTon>{$ctx:sourceAddressTon}</sourceAddressTon>
-        <sourceAddressNpi>{$ctx:sourceAddressNpi}</sourceAddressNpi>
-        <sourceAddress>{$ctx:sourceAddress}</sourceAddress>
-        <destinationAddressTon>{$ctx:destinationAddressTon}</destinationAddressTon>
-        <destinationAddressNpi>{$ctx:destinationAddressNpi}</destinationAddressNpi>
-        <destinationAddress>{$ctx:destinationAddress}</destinationAddress>
-        <alphabet>{$ctx:alphabet}</alphabet>
-        <charset>{$ctx:charset}</charset>
-        <message>{$ctx:message}</message>
-        <smscDeliveryReceipt>{$ctx:smscDeliveryReceipt}</smscDeliveryReceipt>
-        <messageClass>{$ctx:messageClass}</messageClass>
-        <isCompressed>{$ctx:isCompressed}</isCompressed>
-        <esmclass>{$ctx:esmclass}</esmclass>
-        <protocolid>{$ctx:protocolid}</protocolid>
-        <priorityflag>{$ctx:priorityflag}</priorityflag>
-        <replaceIfPresentFlag>{$ctx:replaceIfPresentFlag}</replaceIfPresentFlag>
-        <submitDefaultMsgId>{$ctx:submitDefaultMsgId}</submitDefaultMsgId>
-        <validityPeriod>{$ctx:validityPeriod}</validityPeriod>
-    </SMPP.sendSMS>
-    ```
-
-    **Sample request**
-
-    Following is a sample REST/JSON request that can be handled by the sendSMS operation.
-    ```json
-    {
-        "serviceType": "CMT",
-        "sourceAddressTon": "NETWORK_SPECIFIC",
-        "sourceAddressNpi": "INTERNET",
-        "sourceAddress": "16116",
-        "destinationAddressTon": "SUBSCRIBER_NUMBER",
-        "destinationAddressNpi": "LAND_MOBILE",
-        "destinationAddress": "628176504657",
-        "messageClass": "CLASS1",
-        "alphabet": "ALPHA_DEFAULT",
-        "charset": "UTF-8",
-        "isCompressed": "true",
-        "esmclass": "0",
-        "protocolid": "0",
-        "priorityflag": "1",
-        "replaceIfPresentFlag": "0",
-        "submitDefaultMsgId": "1",
-        "validityPeriod": "020610233429000R",
-        "message": "hi hru",
-        "smscDeliveryReceipt": "SUCCESS_FAILURE",
-        "enquireLinkTimer": "50000",
-        "transactionTimer": "100"
-    }
-    ```
-
-### Sample configuration in a scenario
-
-The following is a sample proxy service that illustrates how to connect to the SMPP connector and use the sendSMS operation to send an SMS message to the SMSC (Short Message Service Center). You can use this sample as a template for using other operations in this category.
- -**Sample Proxy** -```xml -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="SMPP" - transports="http,https,local" - statistics="disable" - trace="disable" - startOnLoad="true"> - <target> - <inSequence> - <property name="OUT_ONLY" value="true"/> - <property name="serviceType" expression="json-eval($.serviceType)"/> - <property name="sourceAddressTon" expression="json-eval($.sourceAddressTon)"/> - <property name="sourceAddressNpi" expression="json-eval($.sourceAddressNpi)"/> - <property name="sourceAddress" expression="json-eval($.sourceAddress)"/> - <property name="destinationAddressTon" expression="json-eval($.destinationAddressTon)"/> - <property name="destinationAddressNpi" expression="json-eval($.destinationAddressNpi)"/> - <property name="destinationAddress" expression="json-eval($.destinationAddress)"/> - <property name="alphabet" expression="json-eval($.alphabet)"/> - <property name="message" expression="json-eval($.message)"/> - <property name="smscDeliveryReceipt" expression="json-eval($.smscDeliveryReceipt)"/> - <property name="messageClass" expression="json-eval($.messageClass)"/> - <property name="isCompressed" expression="json-eval($.isCompressed)"/> - <property name="esmclass" expression="json-eval($.esmclass)"/> - <property name="protocolid" expression="json-eval($.protocolid)"/> - <property name="priorityflag" expression="json-eval($.priorityflag)"/> - <property name="replaceIfPresentFlag" expression="json-eval($.replaceIfPresentFlag)"/> - <property name="submitDefaultMsgId" expression="json-eval($.submitDefaultMsgId)"/> - <property name="validityPeriod" expression="json-eval($.validityPeriod)"/> - <property name="enquireLinkTimer" expression="json-eval($.enquireLinkTimer)"/> - <property name="transactionTimer" expression="json-eval($.transactionTimer)"/> - <SMPP.sendSMS configKey="SMSC_CONFIG_1"> - <serviceType>{$ctx:serviceType}</serviceType> - <sourceAddressTon>{$ctx:sourceAddressTon}</sourceAddressTon> - <sourceAddressNpi>{$ctx:sourceAddressNpi}</sourceAddressNpi> - <sourceAddress>{$ctx:sourceAddress}</sourceAddress> - <destinationAddressTon>{$ctx:destinationAddressTon}</destinationAddressTon> - <destinationAddressNpi>{$ctx:destinationAddressNpi}</destinationAddressNpi> - <destinationAddress>{$ctx:destinationAddress}</destinationAddress> - <alphabet>{$ctx:alphabet}</alphabet> - <charset>{$ctx:charset}</charset> - <message>{$ctx:message}</message> - <smscDeliveryReceipt>{$ctx:smscDeliveryReceipt}</smscDeliveryReceipt> - <messageClass>{$ctx:messageClass}</messageClass> - <isCompressed>{$ctx:isCompressed}</isCompressed> - <esmclass>{$ctx:esmclass}</esmclass> - <protocolid>{$ctx:protocolid}</protocolid> - <priorityflag>{$ctx:priorityflag}</priorityflag> - <replaceIfPresentFlag>{$ctx:replaceIfPresentFlag}</replaceIfPresentFlag> - <submitDefaultMsgId>{$ctx:submitDefaultMsgId}</submitDefaultMsgId> - <validityPeriod>{$ctx:validityPeriod}</validityPeriod> - </SMPP.sendSMS> - <respond/> - </inSequence> - </target> - <description/> -</proxy> -``` -**Note**: For more information on how this works in an actual scenario, see [SMPP Connector Example]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-example/). - -## Send bulk SMS message - -??? note "sendBulkSMS" - Used to send SMS messages to multiple destinations. - <table> - <tr> - <th>Parameter Name</th> - <th>Description</th> - <th>Required</th> - </tr> - <tr> - <td>serviceType</td> - <td> - Indicates the SMS application service used. 
The following generic service_types are defined:
-            <table>
-                <tr>
-                    <td>"" (NULL)</td>
-                    <td>Default</td>
-                </tr>
-                <tr>
-                    <td>"CMT"</td>
-                    <td>Cellular Messaging</td>
-                </tr>
-                <tr>
-                    <td>"CPT"</td>
-                    <td>Cellular Paging</td>
-                </tr>
-                <tr>
-                    <td>"VMN"</td>
-                    <td>Voice Mail Notification</td>
-                </tr>
-                <tr>
-                    <td>"VMA"</td>
-                    <td>Voice Mail Alerting</td>
-                </tr>
-                <tr>
-                    <td>"WAP"</td>
-                    <td>Wireless Application Protocol</td>
-                </tr>
-                <tr>
-                    <td>"USSD"</td>
-                    <td>Unstructured Supplementary Services Data</td>
-                </tr>
-            </table>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>sourceAddressTon</td>
-        <td>Type of number for the source address.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>sourceAddressNpi</td>
-        <td>Numbering plan indicator for the source address.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>sourceAddress</td>
-        <td>Source address of the SMS message.</td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>destinationAddresses</td>
-        <td>
-            Destination addresses can be defined in the following 3 formats.
-            <ol>
-                <li>Default values: the default numbering plan and type will be added for all numbers.</li>
-                <code>
-                {"mobileNumbers": ["+94715XXXXXX", "+1434XXXXXX"]}
-                </code>
-                <li>Custom numbering plan and type for all numbers. The specified plan and type will be set for all numbers.</li>
-                <pre>
-                {
-                    "type": "NATIONAL",
-                    "numberingPlan":"NATIONAL",
-                    "mobileNumbers": [
-                        "+94713089759",
-                        "+189718674"
-                    ]
-                }
-                </pre>
-                <li>Each number is assigned different properties. Numbers without properties will be assigned the default values (you can set a custom type and numbering plan as the default, as mentioned in the previous format). Properties can be set for numbers individually.</li>
-                <pre>
-                {
-                    "mobileNumbers":
-                        ["+9471XXXXXX", "+189718674",{ "type": "INTERNATIONAL", "numberingPlan": "NATIONAL", "mobileNumber": "222333" }]
-                }
-                </pre>
-            </ol>
-        </td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>message</td>
-        <td>Content of the SMS message.</td>
-        <td>Yes</td>
-    </tr>
-    <tr>
-        <td>esmClass</td>
-        <td>
-            The esmClass parameter is used to indicate special message attributes associated with the short message (message mode and type).
-            <table>
-                <tr>
-                    <th>Bits 7 6 5 4 3 2 1 0</th>
-                    <th>Meaning</th>
-                </tr>
-                <tr>
-                    <td>
-                        x x x x x x 0 0<br>
-                        x x x x x x 0 1<br>
-                        x x x x x x 1 0<br>
-                        x x x x x x 1 1<br>
-                    </td>
-                    <td>Messaging Mode (bits 1-0)<br>
-                        Default SMSC Mode (e.g. Store and Forward)<br>
-                        Datagram mode<br>
-                        Forward (i.e. Transaction) mode<br>
-                        Store and Forward mode<br>
-                        (use to select Store and Forward mode if Default SMSC Mode is non Store and Forward)<br>
-                    </td>
-                </tr>
-                <tr>
-                    <td>x x 0 0 0 0 x x<br>
-                        x x 0 0 1 0 x x<br>
-                        x x 0 1 0 0 x x<br>
-                    </td>
-                    <td>Message Type (bits 5-2)<br>
-                        Default message Type (i.e. normal message)<br>
-                        Short Message contains ESME Delivery Acknowledgement<br>
-                        Short Message contains ESME Manual/User Acknowledgement<br>
-                    </td>
-                </tr>
-                <tr>
-                    <td>0 0 x x x x x x<br>
-                        0 1 x x x x x x<br>
-                        1 0 x x x x x x<br>
-                        1 1 x x x x x x<br>
-                    </td>
-                    <td>GSM Network Specific Features (bits 7-6)<br>
-                        No specific features selected<br>
-                        UDHI Indicator (only relevant for MT short messages)<br>
-                        Set Reply Path (only relevant for GSM network)<br>
-                        Set UDHI and Reply Path (only relevant for GSM network)<br>
-                    </td>
-                </tr>
-            </table>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>protocolId</td>
-        <td>Protocol identifier (network specific).<br>
-            GSM - Set according to GSM 03.40.<br>
-            ANSI-136 (TDMA)<br>
-            For mobile terminated messages, this field is not used and is therefore ignored by the SMSC.<br>
-            For ANSI-136 mobile originated messages, the SMSC should set this value to NULL.<br>
-            IS-95 (CDMA)<br>
-            For mobile terminated messages, this field is not used and is therefore ignored by the SMSC.<br>
-            For IS-95 mobile originated messages, the SMSC should set this value to NULL.<br>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>priorityFlag</td>
-        <td>
-            Sets the priority of the message.
-            <table>
-                <tr>
-                    <th>Priority Level</th>
-                    <th>GSM</th>
-                    <th>ANSI-136</th>
-                    <th>IS-95</th>
-                </tr>
-                <tr>
-                    <td>0</td>
-                    <td>Non-priority</td>
-                    <td>Bulk</td>
-                    <td>Normal</td>
-                </tr>
-                <tr>
-                    <td>1</td>
-                    <td>Priority</td>
-                    <td>Normal</td>
-                    <td>Interactive</td>
-                </tr>
-                <tr>
-                    <td>2</td>
-                    <td>Priority</td>
-                    <td>Urgent</td>
-                    <td>Urgent</td>
-                </tr>
-                <tr>
-                    <td>3</td>
-                    <td>Priority</td>
-                    <td>Very Urgent</td>
-                    <td>Emergency</td>
-                </tr>
-                <tr>
-                    <td colspan="4">All other values reserved</td>
-                </tr>
-            </table>
-            Priority<br>
-            There are two types of priority.
-            <ol>
-                <li>Delivery priority - Message delivery is attempted even if the mobile is temporarily absent,
-                e.g., temporarily out of reach, or another short message is being delivered at the same time.
-                </li>
-                <li>Content priority - No free message memory capacity,
-                e.g., the user does not delete any received messages and the maximum storage space has been reached.
-                </li>
-            </ol>
-            Non-priority<br>
-            Delivery will be attempted only if the mobile has not been identified as temporarily absent.
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>scheduleDeliveryTime</td>
-        <td>This parameter specifies the scheduled time at which the message delivery should be first attempted. Set to NULL for immediate delivery.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>validityPeriod</td>
-        <td>The validity_period parameter indicates the SMSC expiration time, after which the message should be discarded if not delivered to the destination. It can be defined in absolute time format or relative time format.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>registeredDelivery</td>
-        <td>Indicator to signify if an SMSC delivery receipt or acknowledgment is required - a value other than 0 represents a delivery report request.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>replaceIfPresentFlag</td>
-        <td>
-            The replace_if_present_flag parameter is used to request the SMSC to replace a previously submitted message that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address, and service_type match the same fields in the new message.
-            <table>
-                <tr>
-                    <th>Value</th>
-                    <th>Description</th>
-                </tr>
-                <tr>
-                    <td>0</td>
-                    <td>Don't replace (default)</td>
-                </tr>
-                <tr>
-                    <td>1</td>
-                    <td>Replace</td>
-                </tr>
-                <tr>
-                    <td>2-255</td>
-                    <td>reserved</td>
-                </tr>
-            </table>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>alphabet</td>
-        <td>
-            The alphabet is used in the data encoding of the SMS message. The following alphabets are supported.
-            <ol>
-                <li>ALPHA_DEFAULT</li>
-                <li>ALPHA_8_BIT</li>
-                <li>ALPHA_UCS2</li>
-                <li>ALPHA_RESERVED</li>
-            </ol>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>charset</td>
-        <td>
-            The charset is used when decoding the message in the SMSC. The following charsets are supported.
-            <ol>
-                <li>UTF-8</li>
-                <li>UTF-16</li>
-            </ol>
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>isCompressed</td>
-        <td>Allows SMS message compression.</td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>messageClass</td>
-        <td>
-            <table>
-                <tr>
-                    <th>Value</th>
-                    <th>Message Class</th>
-                </tr>
-                <tr>
-                    <td>CLASS0</td>
-                    <td>Flash messages. These are display only and not stored in the phone</td>
-                </tr>
-                <tr>
-                    <td>CLASS1</td>
-                    <td>ME specific - the SMS is stored in the mobile phone memory</td>
-                </tr>
-                <tr>
-                    <td>CLASS2</td>
-                    <td>SIM specific - the SMS is stored on the SIM</td>
-                </tr>
-                <tr>
-                    <td>CLASS3</td>
-                    <td>TE specific - this means the SMS is sent to a computer attached to the receiving mobile phone</td>
-                </tr>
-            </table>
-            Data encoding - defines the encoding scheme of the SMS message. You can find the general data coding scheme [here](https://en.wikipedia.org/wiki/Data_Coding_Scheme) for different combinations of alphabet, message class, and isCompressed values.
-        </td>
-        <td>Optional</td>
-    </tr>
-    <tr>
-        <td>submitDefaultMsgId</td>
-        <td>
-            Indicates a short message to send from a predefined list of messages stored on the SMSC.<br>
-            <table>
-                <tr>
-                    <th>Value</th>
-                    <th>Description</th>
-                </tr>
-                <tr>
-                    <td>0</td>
-                    <td>reserved</td>
-                </tr>
-                <tr>
-                    <td>1 - 254</td>
-                    <td>Allowed values</td>
-                </tr>
-                <tr>
-                    <td>255</td>
-                    <td>reserved</td>
-                </tr>
-            </table>
-        </td>
-        <td>Optional</td>
-    </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <SMPP.sendBulkSMS configKey="SMSC_CONFIG_1">
-        <serviceType>{$ctx:serviceType}</serviceType>
-        <sourceAddressTon>{$ctx:sourceAddressTon}</sourceAddressTon>
-        <sourceAddressNpi>{$ctx:sourceAddressNpi}</sourceAddressNpi>
-        <sourceAddress>{$ctx:sourceAddress}</sourceAddress>
-        <destinationAddress>{$ctx:destinationAddresses}</destinationAddress>
-        <alphabet>{$ctx:alphabet}</alphabet>
-        <charset>{$ctx:charset}</charset>
-        <message>{$ctx:message}</message>
-        <smscDeliveryReceipt>{$ctx:smscDeliveryReceipt}</smscDeliveryReceipt>
-        <messageClass>{$ctx:messageClass}</messageClass>
-        <isCompressed>{$ctx:isCompressed}</isCompressed>
-        <esmclass>{$ctx:esmclass}</esmclass>
-        <protocolid>{$ctx:protocolid}</protocolid>
-        <priorityflag>{$ctx:priorityflag}</priorityflag>
-        <replaceIfPresentFlag>{$ctx:replaceIfPresentFlag}</replaceIfPresentFlag>
-        <submitDefaultMsgId>{$ctx:submitDefaultMsgId}</submitDefaultMsgId>
-        <validityPeriod>{$ctx:validityPeriod}</validityPeriod>
-    </SMPP.sendBulkSMS>
-    ```
-
-    **Sample request**
-
-    The following is a sample REST/JSON request that can be handled by the sendBulkSMS operation.
-
-    ```json
-    {
-        "serviceType": "CMT",
-        "sourceAddressTon": "NETWORK_SPECIFIC",
-        "sourceAddressNpi": "INTERNET",
-        "sourceAddress": "16116",
-        "destinationAddresses": {
-            "type": "ALPHANUMERIC",
-            "numberingPlan": "LAND_MOBILE",
-            "mobileNumbers": ["+189718785", "+189718674"]
-        },
-        "messageClass":"CLASS1",
-        "alphabet": "ALPHA_DEFAULT",
-        "charset": "UTF-8",
-        "isCompressed":"true",
-        "esmclass": "0",
-        "protocolid": "0",
-        "priorityflag":"1",
-        "replaceIfPresentFlag": "0",
-        "submitDefaultMsgId": "1",
-        "validityPeriod": "020610233429000R",
-        "message": "hi hru",
-        "smscDeliveryReceipt": "SUCCESS_FAILURE",
-        "enquireLinkTimer": "50000",
-        "transactionTimer": "100"
-    }
-    ```
\ No newline at end of file
diff --git a/en/docs/reference/connectors/smpp-connector/smpp-connector-configuration.md b/en/docs/reference/connectors/smpp-connector/smpp-connector-configuration.md
deleted file mode 100644
index a0d97f7ffc..0000000000
--- a/en/docs/reference/connectors/smpp-connector/smpp-connector-configuration.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Setting up the SMPP Connector
-
-The SMPP (Short Message Peer-to-Peer Protocol) Connector allows you to send an SMS from an integration sequence. You need to set up the environment and the SMSC simulator before using it.
-
-## Setting up the environment
-
-Before you start configuring the SMPP connector, you also need WSO2 MI; we refer to its location as <PRODUCT_HOME>.
-
-To configure the SMPP connector, copy the following client libraries from the given locations to the `<PRODUCT_HOME>/repository/components/lib` directory.
-
-* [jsmpp-3.0.0.jar](https://repo1.maven.org/maven2/org/jsmpp/jsmpp/3.0.0/jsmpp-3.0.0.jar)
-
-## Configure the SMSC (Short Message Service Center) simulator
-
-For testing purposes, it is not always practical to connect to a real SMSC. An SMSC simulator is an application that can act like an SMSC. Using a simulator, we can test our scenario without having access to a real SMSC. For real production servers, we have to use a real SMSC. In this example scenario, we will be using the [logica-smpp-sim](https://github.com/smn/logica-smpp-sim) simulator.
-
-JSMPP is a Java implementation of the SMPP protocol, and it provides an API to communicate with an SMSC simulator as well. The SMPP server in the SMSC has all the ESME (External Short Messaging Entity) addresses. An ESME is an external application that connects to an SMSC over an active connection.
-
-1. Navigate to [logica-smpp-sim](https://github.com/smn/logica-smpp-sim) and clone or download the repository.
-
-2. Make sure that **Java** is installed and set up on your machine.
-
-3. Navigate to the cloned **logica-smpp-sim** -> **users.txt** and edit the `username` and `password` as you wish.
-
-4. After setting up **users.txt**, you can start the simulator by executing the **./start.sh** script.
-
-5. In the terminal, you will see the following list of options. **Enter 1** to start the simulation.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/smpp-simulator.png" title="SMSC Simulator Console" width="600" alt="SMSC Simulator Console"/>
-
-6. After you enter 1 for the simulation, it will ask for a **port number**. In this example, we use port 2775.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/smpp-simulator-port.png" title="SMSC Simulator Port" width="600" alt="SMSC Simulator Port"/>
-
-7. Once you set up WSO2 MI and invoke the `SmppTestApi` API, you will be able to see logs in your simulator as shown below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/smpp-simulator-output.png" title="SMSC Simulator Console Output" width="600" alt="SMSC Simulator Console Output"/>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/smpp-connector/smpp-connector-example.md b/en/docs/reference/connectors/smpp-connector/smpp-connector-example.md
deleted file mode 100644
index 79425643b3..0000000000
--- a/en/docs/reference/connectors/smpp-connector/smpp-connector-example.md
+++ /dev/null
@@ -1,206 +0,0 @@
-# SMPP Connector Example
-
-The SMPP (Short Message Peer-to-Peer Protocol) Connector allows you to send an SMS from an integration sequence. It uses the [jsmpp API](https://jsmpp.org/) to communicate with an SMSC (Short Message Service Center), which is used to store, forward, convert, and deliver Short Message Service (SMS) messages. jsmpp is a Java implementation of the SMPP protocol.
-
-## What you'll build
-
-Given below is a sample scenario that demonstrates how to work with the WSO2 SMPP Connector and send SMS messages via the SMPP protocol.
-
-The SMPP server in the SMSC has all the ESME (External Short Messaging Entity) addresses. An ESME is an external application that connects to an SMSC over an active connection. When you send an SMS to a destination, it comes to the SMSC. Then one of the modules in the SMSC checks whether the destination address is available. If it is available, the SMSC creates a connection object that is responsible for sending the SMS message.
-There are many SMPP gateways available in the world, and now almost all message centers support SMPP. It is not always practical to connect to a real SMSC. Therefore, in this scenario we will try it with an **SMSC simulator**. Refer to the [Setting up the SMPP Connector]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-configuration/) documentation.
-
-The following `sendSMS` operation is exposed via an API. The API with the context `/send` has one resource.
-
-* `/send` : Used to send an SMS message to the Short Message Service Center.
-
-The following diagram shows the overall solution. There is an HTTP API that you can invoke with an HTTP call with JSON. The API sends an SMS carrying the message given in the JSON request to the number given in the JSON request.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/smpp-connector-example.png" title="SMPP connector example" width="800" alt="smpp connector example"/>
-
-If you do not want to configure this yourself, you can simply [get the project](#get-the-project) and run it.
-
-## Configure the connector in WSO2 Integration Studio
-
-Connectors can be added to integration flows in [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Once added, the operations of the connector can be dragged onto your canvas and added to your resources.
-
-### Import the connector
-
-Follow these steps to set up the ESB Solution Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-### Add integration logic
-
-First create an API, which will be where we configure the integration logic. Right-click on the created Integration Project and select **New** -> **Rest API** to create the REST API. Specify the API name as `SmppTestApi` and the API context as `/send`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
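-
-At this point, Integration Studio generates a skeleton REST API artifact. As a rough sketch (assuming default artifact generation; the exact generated defaults may differ slightly between Studio versions, and the resource is shown here with the `POST` method that this example uses), the empty `SmppTestApi` resembles the following, which the steps below fill in with property mediators and the `sendSMS` operation:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<api context="/send" name="SmppTestApi" xmlns="http://ws.apache.org/ns/synapse">
-    <resource methods="POST">
-        <!-- The integration logic for the resource is added here in the next steps. -->
-        <inSequence/>
-        <outSequence/>
-        <faultSequence/>
-    </resource>
-</api>
-```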
-
-#### Configuring the API
-
-Create a resource to send an SMS to the Short Message Service Center.
-
-1. Set up the sendSMS operation.
-
-    1. Navigate into the **Palette** pane and select the graphical operation icons listed under the **SMPP Connector** section. Then drag and drop the `sendSMS` operation into the Design pane.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-drag-and-drop-sendsms.png" title="Drag and drop send operation" width="500" alt="Drag and drop send operation"/>
-
-    2. Go to the property values of `sendSMS` and click the `+` sign to create a new SMSC connection. Replace the `host`, `port`, `systemId`, and `password` with your values. You can reuse the SMSC connection among other operations.
-        <br/>
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-connection.png" title="Create SMPP connection" width="500" alt="Create SMPP connection"/>
-        <br/>
-
-        - **host** : IP address of the SMSC.
-        - **port** : Port to access the SMSC.
-        - **systemId** : username to access the SMSC.
-        - **password** : password to access the SMSC.
-        - **systemType [Optional]** : It is used to categorize the type of ESME that is binding to the SMSC. Examples include “CP” (Content providers), “VMS” (voice mail system) and “OTA” (over-the-air activation system).
-        - **addressTon [Optional]** : Indicates the Type of Number of the ESME address.
-        - **addressNpi [Optional]** : Numbering Plan Indicator for the ESME address.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-init-operation.png" title="Add values to the init operation" width="400" height="400" alt="Add values to the init operation"/>
-
-    3. In this operation, we are going to send SMS messages peer-to-peer using the SMPP protocol. It provides a flexible data communications interface for the transfer of short message data between a Message Center, such as a Short Message Service Center (SMSC), a GSM Unstructured Supplementary Services Data (USSD) server, or another type of Message Center, and an SMS application system, such as a WAP proxy server, email gateway, or other messaging gateway. The mandatory `sendSMS` operation parameters are listed here.
-
-        - **sourceAddress** : Source address of the SMS message.
-        - **destinationAddress** : Destination address of the SMS message.
-        - **message** : Content of the SMS message.
-
-        When invoking the API, the values of the above three parameters come in as user input.
-
-    4. To get the input values into the API, we can use the [property mediator]({{base_path}}/reference/mediators/property-mediator). Navigate into the **Palette** pane and select the graphical mediator icons listed under the **Mediators** section. Then drag and drop the `Property` mediators into the Design pane as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-drag-and-drop-property-mediator.png" title="Add property mediators" width="800" alt="Add property mediators"/>
-
-        The parameters available for configuring the Property mediator are as follows:
-
-        > **Note**: The properties should be added to the palette before creating the operation.
-
-    5. Add the property mediator to capture the `sourceAddress` value. The sourceAddress contains the source address of the SMS message.
-
-        - **name** : sourceAddress
-        - **expression** : json-eval($.sourceAddress)
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-property-mediator-property1-value1.png" title="Add property mediators sourceAddress" width="600" alt="Add property mediators sourceAddress"/>
-
-    6. Add the property mediator to capture the `message` value. The message contains the content of the SMS message.
-
-        - **name** : message
-        - **expression** : json-eval($.message)
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-property-mediator-property2-value2.png" title="Add values to capture message" width="600" alt="Add values to capture message"/>
-
-    7. Add the property mediator to capture the `destinationAddress` value. The destinationAddress contains the destination address of the SMS message.
-
-        - **name** : destinationAddress
-        - **expression** : json-eval($.destinationAddress)
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-property-mediator-property3-value3.png" title="Add values to capture destinationAddress" width="600" alt="Add values to capture destinationAddress"/>
-
-2. Return a response to the user.
-
-    When you invoke the created API, the request message goes through the `/send` resource. Finally, it is passed to the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator/). The Respond mediator stops the processing on the current message and sends the message back to the client as a response.
-
-    1. Drag and drop the **Respond mediator** to the **Design view**.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-drag-and-drop-respond-mediator.png" title="Add Respond mediator" width="600" alt="Add Respond mediator"/>
-
-    2. Once you have set up the sequences and the API, you can see the `SmppTestApi` API as shown below.
-
-        <img src="{{base_path}}/assets/img/integrate/connectors/smpp-api-design-view.png" title="API Design view" width="600" alt="API Design view"/>
-
-3. Now you can switch into the Source view and check the XML configuration files of the created API and sequences.
-
-    ??? note "SmppTestApi.xml"
-        ```
-        <?xml version="1.0" encoding="UTF-8"?>
-        <api context="/send" name="SmppTestApi" xmlns="http://ws.apache.org/ns/synapse">
-            <resource methods="POST">
-                <inSequence>
-                    <property expression="json-eval($.destinationAddress)" name="destinationAddress" scope="default" type="STRING"/>
-                    <property expression="json-eval($.message)" name="message" scope="default" type="STRING"/>
-                    <property expression="json-eval($.sourceAddress)" name="sourceAddress" scope="default" type="STRING"/>
-                    <SMPP.sendSMS configKey="SMSC_CONFIG_1">
-                        <sourceAddress>{$ctx:sourceAddress}</sourceAddress>
-                        <destinationAddress>{$ctx:destinationAddress}</destinationAddress>
-                        <message>{$ctx:message}</message>
-                    </SMPP.sendSMS>
-                    <log level="full">
-                        <property name="Message delivered successfully" value="Message delivered successfully"/>
-                    </log>
-                    <respond/>
-                </inSequence>
-                <outSequence/>
-                <faultSequence/>
-            </resource>
-        </api>
-        ```
-
-    ??? note "SMSC_CONFIG_1.xml"
note "SMSC_CONFIG_1.xml" - ``` - <?xml version="1.0" encoding="UTF-8"?> - <localEntry key="SMSC_CONFIG_1" xmlns="http://ws.apache.org/ns/synapse"> - <SMPP.init> - <systemId>kasun</systemId> - <connectionType>init</connectionType> - <addressTon>INTERNATIONAL</addressTon> - <password>kasun</password> - <port>10003</port> - <host>localhost</host> - <systemType>SMS1009</systemType> - <name>SMSC_CONFIG_1</name> - <addressNpi>ISDN</addressNpi> - </SMPP.init> - </localEntry> - ``` - -## Get the project - -You can download the ZIP file and extract the contents to get the project code. - -<a href="{{base_path}}/assets/attachments/connectors/smpp-connector.zip"> - <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP"> -</a> - -!!! tip - You may need to update the simulator details and make other such changes before deploying and running this project. - -## Deployment - -Follow these steps to deploy the exported CApp in the integration runtime. - -{!includes/reference/connectors/deploy-capp.md!} - -## Testing - -Invoke the API as shown below using the curl command. Curl Application can be downloaded from [here](https://curl.haxx.se/download.html). - -**Sample request** - - ``` - curl -v POST -d '{"sourceAddress":"16111", "message":"Hi! This is the first test SMS message.","distinationAddress":"071XXXXXXX"}' "http://172.17.0.1:8290/send" -H "Content-Type:application/json" - ``` -**You will receive the `messageId` as expected response** - - ``` - {"messageId":"Smsc2001"} - ``` -**Expected Response in SMSC simulator console** - - ``` - 06:33:09 [sys] new connection accepted - 06:33:09 [] client request: (bindreq: (pdu: 40 2 0 1) kasun kasun SMS1009 52 (addrrang: 1 1 ) ) - 06:33:09 [kasun] authenticated kasun - 06:33:09 [kasun] server response: (bindresp: (pdu: 0 80000002 0 1) Smsc Simulator) - 06:33:09 [kasun] client request: (submit: (pdu: 106 4 0 2) (addr: 1 1 16111) (addr: 1 1 071XXXXXXX) (sm: msg: Hi! This is the first test SMS message.) (opt: ) ) - 06:33:09 [kasun] putting message into message store - 06:33:09 [kasun] server response: (submit_resp: (pdu: 0 80000004 0 2) Smsc2001 ) - 06:33:59 [kasun] client request: (enquirelink: (pdu: 16 15 0 3) ) - 06:33:59 [kasun] server response: (enquirelink_resp: (pdu: 0 80000015 0 3) ) - 06:34:49 [kasun] client request: (enquirelink: (pdu: 16 15 0 4) ) - 06:34:49 [kasun] server response: (enquirelink_resp: (pdu: 0 80000015 0 4) ) - ``` diff --git a/en/docs/reference/connectors/smpp-connector/smpp-connector-overview.md b/en/docs/reference/connectors/smpp-connector/smpp-connector-overview.md deleted file mode 100644 index e26d051a2a..0000000000 --- a/en/docs/reference/connectors/smpp-connector/smpp-connector-overview.md +++ /dev/null @@ -1,44 +0,0 @@ -# SMPP Connector Overview - -SMPP (Short Message Peer-to-Peer Protocol) is an open, industry standard protocol designed to provide a flexible data communications interface for transfer of short message data between SMSCs (Short Message Service Center). There are many SMPP gateways available in the world and now almost all the Message Centers support SMPP. - -To see the available SMPP connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "SMPP". 
-
-<img src="{{base_path}}/assets/img/integrate/connectors/smpp-store.png" title="SMPP Connector Store" width="200" alt="SMPP Connector Store"/>
-
-## Compatibility
-
-| Connector version | Supported product versions |
-| ------------- |------------- |
-| 1.1.0 | APIM 4.0.0, EI 7.1.0, EI 7.0.x, EI 6.6.0, EI 6.5.0, MI 4.0.0, MI 4.1.0, MI 4.2.0 |
-
-For older versions, see the details in the connector store.
-
-## SMPP Connector documentation
-
-The SMPP Connector allows you to send an SMS from an integration sequence. It uses the [jsmpp API](https://jsmpp.org/) to communicate with an SMSC, which is used to store, forward, convert, and deliver Short Message Service (SMS) messages. JSMPP is a Java implementation of the SMPP protocol.
-
-* **[Setting up the SMPP Connector]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-configuration/)**: You need to set up the environment and the SMSC simulator before using the connector.
-
-* **[SMPP Connector Example]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-example/)**: This example demonstrates how to work with the WSO2 SMPP Connector and send SMS messages via the SMPP protocol.
-
-* **[SMPP Connector Reference]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-config/)**: This documentation provides a reference guide for SMPP.
-
-## SMPP Inbound Endpoint documentation
-
-The SMPP inbound endpoint allows you to consume messages from the SMSC in an integration sequence. The WSO2 SMPP inbound endpoint acts as a message consumer. It creates a connection with the SMSC, then listens over a port to consume only SMS messages from the SMSC and injects the messages into the integration sequence. It receives alert notifications or is notified when a data short message is accepted.
-
-* **[SMPP Inbound Endpoint Example]({{base_path}}/reference/connectors/smpp-connector/smpp-inbound-endpoint-example/)**: This scenario demonstrates how the SMPP inbound endpoint works as a message consumer.
-
-* **[SMPP Inbound Endpoint Reference]({{base_path}}/reference/connectors/smpp-connector/smpp-inbound-endpoint-config/)**: This documentation provides a reference guide for the SMPP Inbound Endpoint.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, please create a pull request in the following repository.
-
-* [SMPP Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-smpp)
-* [SMPP Inbound Endpoint GitHub repository](https://github.com/wso2-extensions/esb-inbound-smpp)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-config.md b/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-config.md
deleted file mode 100644
index e5dab0b8f9..0000000000
--- a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-config.md
+++ /dev/null
@@ -1,149 +0,0 @@
-# SMPP Inbound Endpoint Reference
-
-The following configurations allow you to configure the SMPP Inbound Endpoint for your scenario.
- -<style type="text/css"> -.tg {border-collapse:collapse;border-spacing:0;} -.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;} -.tg th{font-family:Arial, sans-serif;font-size:20px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;} -.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top} -</style> -<table class="tg"> - <tr> - <th class="tg-0pky">Parameter</th> - <th class="tg-0pky">Description</th> - <th class="tg-0pky">Required</th> - <th class="tg-0pky">Possible Values</th> - <th class="tg-0pky">Default Value</th> - </tr> - <tr> - <td class="tg-0pky">host</td> - <td class="tg-0pky"> IP address of the SMSC.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">N/A</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">port</td> - <td class="tg-0pky">Port to access the SMSC.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">N/A</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">systemType</td> - <td class="tg-0pky">Identifies the type of ESME system requesting to bind as a receiver with the SMSC.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">"" - (NULL)<br> - CMT - Cellular Messaging<br> - CPT - Cellular Paging<br> - VMN - Voice Mail Notification<br> - VMA - Voice Mail Alerting<br> - WAP - Wireless Application Protocol<br> - USSD - Unstructured Supplementary Services Data</td> - <td class="tg-0pky">"" - (NULL)</td> - </tr> - <tr> - <td class="tg-0pky">systemId</td> - <td class="tg-0pky">Identifies the ESME system requesting to bind as a receiver with the SMSC.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">N/A</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">password</td> - <td class="tg-0pky">The password may be used by the SMSC to authenticate the ESME requesting to bind.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">N/A</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">addressNpi</td> - <td class="tg-0pky">Numbering Plan Indicator for ESME address.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">Unknown<br> - ISDN (E163/E164) - Data (X.121) - Telex (F.69)<br> - Land Mobile (E.212)<br> - National - Private - ERMES<br> - Internet (IP)<br> - WAP Client Id (to be defined by WAP Forum)</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">addressTon</td> - <td class="tg-0pky">Indicates Type of Number of the ESME address.</a></td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">Unknown<br> - International - National - Network Specific<br> - Subscriber Number<br> - Alphanumeric - Abbreviated</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">bindType</td> - <td class="tg-0pky">An ESME bound as a Receiver or Transceiver is authorised to receive short messages from the SMSC.</td> - <td class="tg-0pky">Yes</td> - <td class="tg-0pky">BIND_RX<br> - BIND_TRX</td> - <td class="tg-0pky">N/A</td> - </tr> - <tr> - <td class="tg-0pky">addressRange</td> - <td class="tg-0pky">A single ESME address or a range of ESME addresses served via this SMPP receiver session.</td> - <td class="tg-0pky">No</td> - <td class="tg-0pky">N/A</td> - <td class="tg-0pky">null</td> - </tr> - <tr> - <td class="tg-0pky">enquireLinktimer</td> - <td class="tg-0pky">Used to check whether SMSC is connected or not.</td> - <td class="tg-0pky">No</td> - 
<td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">10000</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">transactiontimer</td>
-    <td class="tg-0pky">Time elapsed between an SMPP request and the corresponding response.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">200</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">reconnectInterval</td>
-    <td class="tg-0pky">The initial retry interval to reconnect with the SMSC while the SMSC is not available.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">3000ms</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">retryCount</td>
-    <td class="tg-0pky">The number of times to retry to connect with the SMSC while the connection with the SMSC is closed. If you want to retry forever, set the retry count to a value less than 0.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">5</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">exponentialFactor</td>
-    <td class="tg-0pky">Start with the initial reconnectInterval delay until the first retry attempt is made; if that attempt fails, wait (reconnectInterval * exponentialFactor) times longer before the next one. For example,<br>
-    let's say we start with an exponentialFactor of 2 and a 100ms delay until the first retry attempt is<br>
-    made; if that one fails as well, we wait two times longer (200ms), and later 400ms, 800ms, and so on.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">5</td>
-  </tr>
-  <tr>
-    <td class="tg-0pky">maximumBackoffTime</td>
-    <td class="tg-0pky">The above is an exponential function that can grow very fast, so it is useful to cap the backoff time at some reasonable level, e.g., 10 seconds.</td>
-    <td class="tg-0pky">No</td>
-    <td class="tg-0pky">N/A</td>
-    <td class="tg-0pky">10000ms</td>
-  </tr>
-</table>
\ No newline at end of file
diff --git a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md b/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md
deleted file mode 100644
index 3f9edadd17..0000000000
--- a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# SMPP Inbound Endpoint Example
-
-The SMPP inbound endpoint allows you to consume messages from the SMSC. The WSO2 SMPP inbound endpoint acts as a message consumer. It creates a connection with the SMSC, then listens over a port to consume only SMS messages from the SMSC and injects the messages into the integration sequence. It receives alert notifications or is notified when a data short message is accepted.
-
-## What you'll build
-
-This scenario demonstrates how the SMPP inbound endpoint works as a message consumer. In this scenario, you need connectivity with an SMSC (Short Message Service Center) via the SMPP protocol. For this, we use an **SMSC simulator**. Refer to the [Setting up the SMPP Connector]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-configuration/) documentation for more information.
-
-The SMPP inbound endpoint listens to the Short Message Service Center and consumes messages using the port number defined in the inbound endpoint configuration. If the SMSC generates a message by itself, or a user injects SMS messages into the SMSC, the WSO2 SMPP inbound endpoint receives them and is notified. This example then simply logs the SMS message content; in your own scenarios, you can inject that message into the mediation flow to get the required output.
-
-The following diagram shows the overall solution we are going to build. The SMSC will generate or receive messages from the outside, while the SMPP inbound endpoint will consume messages based on the updates.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/smpp-inboundep-example.png" title="SMPP Inbound Endpoint" width="800" alt="SMPP Inbound Endpoint"/>
-
-## Configure inbound endpoint using WSO2 Integration Studio
-
-1. Download [WSO2 Integration Studio](https://wso2.com/integration/integration-studio/). Create an **Integration Project** as shown below.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/integration-project.png" title="Creating a new Integration Project" width="800" alt="Creating a new Integration Project" />
-
-2. Right-click on the created Integration Project and select **New** -> **Inbound Endpoint**. Select **Create A New Inbound Endpoint**, set the **Inbound Endpoint Creation Type** to **custom**, and click **Next**.
-
-    <img src="{{base_path}}/assets/img/integrate/connectors/smpp-inboundep-create-new-ie.png" title="Creating inbound endpoint" width="400" alt="Creating inbound endpoint" style="border:1px solid black"/>
-
-3. Click on the **Inbound Endpoint** in the design view and, under the `properties` tab, update the class name to `org.wso2.carbon.inbound.smpp.SMPPListeningConsumer`.
-
-4. Navigate to the source view and update it with the following configuration as required.
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
-                     name="SMPP"
-                     sequence="request"
-                     onError="fault"
-                     class="org.wso2.carbon.inbound.smpp.SMPPListeningConsumer"
-                     suspend="false">
-       <parameters>
-          <parameter name="inbound.behavior">eventBased</parameter>
-          <parameter name="sequential">true</parameter>
-          <parameter name="coordination">true</parameter>
-          <parameter name="port">2775</parameter>
-          <parameter name="addressNpi">UNKNOWN</parameter>
-          <parameter name="host">localhost</parameter>
-          <parameter name="reconnectInterval">3000</parameter>
-          <parameter name="addressTon">UNKNOWN</parameter>
-          <parameter name="systemType">CPT</parameter>
-          <parameter name="retryCount">-1</parameter>
-          <parameter name="bindType">BIND_RX</parameter>
-          <parameter name="addressRange">null</parameter>
-          <parameter name="systemId">kasun</parameter>
-          <parameter name="password">kasun</parameter>
-          <parameter name="exponentialFactor">5</parameter>
-          <parameter name="maximumBackoffTime">10000</parameter>
-       </parameters>
-    </inboundEndpoint>
-    ```
-    **Sequence to process the message**
-
-    In this example, for simplicity, we just log the message, but in a real-world use case this can be any type of message mediation.
-
-    ```xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <sequence xmlns="http://ws.apache.org/ns/synapse" name="request" onError="fault">
-        <log level="custom">
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="MessageId"
-                      expression="get-property('SMPP_MessageId')"/>
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="SourceAddress"
-                      expression="get-property('SMPP_SourceAddress')"/>
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="DataCoding"
-                      expression="get-property('SMPP_DataCoding')"/>
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="ScheduleDeliveryTime"
-                      expression="get-property('SMPP_ScheduleDeliveryTime')"/>
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="SequenceNumber"
-                      expression="get-property('SMPP_SequenceNumber')"/>
-            <property xmlns:ns="http://org.apache.synapse/xsd"
-                      name="ServiceType"
-                      expression="get-property('SMPP_ServiceType')"/>
-        </log>
-        <log level="full"/>
-    </sequence>
-    ```
-> **Note**: To configure the `systemId` and `password` parameter values, use the steps given under the topic `Configure the SMSC (Short Message Service Center) simulator` in the [Setting up the SMPP Connector]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-configuration/) documentation.
-> - **systemId** : username to access the SMSC
-> - **password** : password to access the SMSC
-
-## Exporting Integration Logic as a CApp
-
-**CApp (Carbon Application)** is the deployable artifact on the integration runtime. Let us see how we can export the integration logic we developed into a CApp. To export the `Solution Project` as a CApp, a `Composite Application Project` needs to be created. Usually, when a solution project is created, this project is automatically created by Integration Studio. If not, you can specifically create it by navigating to **File** -> **New** -> **Other** -> **WSO2** -> **Distribution** -> **Composite Application Project**.
-
-1. Right-click on the Composite Application Project and click on **Export Composite Application Project**.</br>
-    <img src="{{base_path}}/assets/img/integrate/connectors/capp-project1.jpg" title="Export as a Carbon Application" width="300" alt="Export as a Carbon Application" />
-
-2. Select an **Export Destination** where you want to save the .car file.
-
-3. In the next **Create a deployable CAR file** screen, select the inbound endpoint and sequence artifacts and click **Finish**. The CApp will be created at the location specified in the previous step.
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/smpp-inbound-endpoint.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-!!! tip
-    You may need to update the simulator details and make other such changes before deploying and running this project.
-
-## Deployment
-
-1. Navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for `SMPP Connector`. Click on `SMPP Inbound Endpoint` and download the .jar file by clicking on `Download Inbound Endpoint`. Copy this .jar file into the **<PRODUCT-HOME>/lib** folder.
-
-2. Download [jsmpp-2.1.0-RELEASE.jar](http://central.maven.org/maven2/com/googlecode/jsmpp/jsmpp/2.1.0-RELEASE/) and [asyncretry-jdk7-0.0.6.jar](https://mvnrepository.com/artifact/com.nurkiewicz.asyncretry/asyncretry-jdk7/0.0.6) and copy them into the **<PRODUCT-HOME>/lib** folder.
-
-3. Copy the exported carbon application to the **<PRODUCT-HOME>/repository/deployment/server/carbonapps** folder.
-
-4. [Start the integration server]({{base_path}}/get-started/quick-start-guide/integration-qsg#start-the-micro-integrator).
-
-## Testing
-
-   Use the [smpp-connector-example]({{base_path}}/reference/connectors/smpp-connector/smpp-connector-example/) testing steps to test this inbound endpoint scenario. You need to send the SMS message to the SMSC via the SMPP connector example API (SmppTestApi.xml).
-
-   **Sample request**
-
-   ```
-   curl -v -X POST -d '{"sourceAddress":"16111", "message":"Hi! This is the first test SMS message.","destinationAddress":"071XXXXXXX"}' "http://172.17.0.1:8290/send" -H "Content-Type:application/json"
-   ```
-   The SMPP inbound endpoint will consume the message from the SMSC.
-
-   **Expected response**
-
-   ```
-   [2020-05-18 10:56:05,495]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - MessageId = 0, SourceAddress = null, DataCoding = 0, ScheduleDeliveryTime = null, SequenceNumber = 7, ServiceType = null
-   [2020-05-18 10:56:05,506]  INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:F767BC9689D3D2221B1589779565430, Direction: request, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><text xmlns="http://ws.apache.org/commons/ns/payload">Hi! This is the first test SMS message.</text></soapenv:Body></soapenv:Envelope>
-   ```
diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-configuration.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-configuration.md
deleted file mode 100644
index 2218766199..0000000000
--- a/en/docs/reference/connectors/twitter-connector/twitter-connector-configuration.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Setting up the Twitter Connector in Integration Runtime
-
-Before you start configuring the Twitter connector, you need to configure the integration runtime.
-
-## Adding message builders
-
-Consider the root of the Micro Integrator / Enterprise Integrator as `<PRODUCT_HOME>`.
-
-If you are using **Micro Integrator 4.2.0**, you need to add the following message builder to the **`<PRODUCT_HOME>`/conf/deployment.toml** file. For more information, refer to the [Working with Message Builders and Formatters](https://ei.docs.wso2.com/en/latest/micro-integrator/setup/message_builders_formatters/message-builders-and-formatters/) and [Product Configurations]({{base_path}}/reference/config-catalog-mi/) documentation.
-
-```toml
-[[custom_message_builders]]
-class="org.wso2.micro.integrator.core.json.JsonStreamBuilder"
-content_type = "application/problem+json"
-```
-
-If you are using an **EI 6.x** version, you can enable this property by adding the following Axis2 configuration to the **`<PRODUCT_HOME>`/repository/conf/axis2/axis2.xml** and **`<PRODUCT_HOME>`/repository/conf/axis2/axis2_blocking_client.xml** files.
-
-**messageBuilders**
-
-```xml
-<messageBuilder contentType="application/problem+json"
-                class="org.wso2.carbon.integrator.core.json.JsonStreamBuilder"/>
-```
diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-credentials.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-credentials.md
deleted file mode 100644
index d4dbf750dc..0000000000
--- a/en/docs/reference/connectors/twitter-connector/twitter-connector-credentials.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Creating the Client ID, Access Token and Refresh Token
-
-In this documentation, you will learn how to create the Client ID, Access Token, and Refresh Token for the Twitter connector using the Twitter developer portal.
-
-For the Twitter connector 3.0.x version, the **OAuth 2.0 Authorization Code Flow with PKCE** is used to authenticate the user. Therefore, obtaining credentials is different from the Twitter connector 2.0.x version, which uses the **OAuth 1.0a** authentication mechanism. For more information about the authentication mechanism, see the [Twitter OAuth 2 guide](https://developer.twitter.com/en/docs/authentication/oauth-2-0/authorization-code).
-
-## Steps to follow
-
-1. To get started with the new Twitter API, you need a developer account. If you do not have one yet, you can [sign up](https://developer.twitter.com/en/portal/petition/essential/basic-info) for one.
-
-2. Then log into the [developer portal](https://developer.twitter.com/en/portal/dashboard).
-
-!!! info
-    The Twitter **Free tier** subscription is only sufficient for the **createTweet, deleteTweet**, and **getMe** operations. If you want to use other operations, you need to upgrade your subscription to the **Basic tier**.
-
-3. Create a new project and create an app inside the project.
-    <img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-newproject.png" title="New Project" width="800" alt="New Project" />
-
-4. In the app, you will need to set up OAuth 2.0, as the WSO2 Twitter connector uses this authentication mechanism.
-    <img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-auth-setup.png" title="OAuth setup" width="800" alt="OAuth setup" />
-
-5. Provide the necessary values. The access tokens and refresh tokens will be sent to the callback URL.
-    <img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-callbackurl.png" title="Callback URL" width="400" alt="Callback URL" />
-
-6. After successfully setting up the user authentication, you can obtain the client ID of the Twitter app, which is used for the Twitter connector configuration.
-    <img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-clientid.png" title="Client ID" width="800" alt="Client ID" />
-
-7. To obtain the Access Token and Refresh Token, follow the [Twitter OAuth 2 guide](https://developer.twitter.com/en/docs/authentication/oauth-2-0/user-access-token#:~:text=Steps%20to%20connect%20using%20OAuth%202.0).
-
-!!! info
-    The Twitter access token is valid for 2 hours. The refresh token is valid until a new access token is created from the refresh token.
-
-!!! warning
-    By default, the Twitter app provides an access token for the OAuth 1.0a flow, which is not used in the Twitter connector. You need to create a new access token for the OAuth 2.0 flow.
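-
-As a rough sketch of step 7 (assuming the standard Twitter OAuth 2.0 Authorization Code Flow with PKCE; the placeholder values such as `<CLIENT_ID>`, `<AUTH_CODE>`, `<CALLBACK_URL>`, and `<CODE_VERIFIER>` come from your own app and authorization request), the token exchange boils down to two calls against the `https://api.twitter.com/2/oauth2/token` endpoint. The refresh call assumes you requested the `offline.access` scope.
-
-```bash
-# Exchange the authorization code received at the callback URL for an
-# access token and a refresh token (public client, PKCE flow).
-curl --request POST 'https://api.twitter.com/2/oauth2/token' \
-  --header 'Content-Type: application/x-www-form-urlencoded' \
-  --data-urlencode 'grant_type=authorization_code' \
-  --data-urlencode 'client_id=<CLIENT_ID>' \
-  --data-urlencode 'code=<AUTH_CODE>' \
-  --data-urlencode 'redirect_uri=<CALLBACK_URL>' \
-  --data-urlencode 'code_verifier=<CODE_VERIFIER>'
-
-# Once the access token expires (after about 2 hours), obtain a new one
-# with the refresh token (requires the offline.access scope).
-curl --request POST 'https://api.twitter.com/2/oauth2/token' \
-  --header 'Content-Type: application/x-www-form-urlencoded' \
-  --data-urlencode 'grant_type=refresh_token' \
-  --data-urlencode 'client_id=<CLIENT_ID>' \
-  --data-urlencode 'refresh_token=<REFRESH_TOKEN>'
-```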
-
\ No newline at end of file
diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md
deleted file mode 100644
index f970285e9b..0000000000
--- a/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Twitter API Connector Example
-
-This example explains how to use the Twitter client to connect with the Twitter platform and perform operations. The connector uses the [Twitter API](https://developer.twitter.com/en/docs/twitter-api) to interact with Twitter.
-
-## What you'll build
-
-In this guide, you will build a project to perform the following operation.
-
-* Create a Tweet.
-
-    The user sends a request payload that includes the necessary parameters for a Tweet to create a new Tweet on Twitter. This request is sent to the integration runtime by invoking the Twitter connector API.
-
-<center><img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-store.png" title="Using Twitter Rest Connector" width="200" alt="Using Twitter Rest Connector"/></center>
-
-The user calls the Twitter REST API. It invokes the **createTweet** sequence and creates a new Tweet on Twitter.
-
-## Configure the connector in WSO2 Integration Studio
-
-Follow these steps to set up the Integration Project and the Connector Exporter Project.
-
-{!includes/reference/connectors/importing-connector-to-integration-studio.md!}
-
-2. Right-click on the created Integration Project and select **New** -> **Rest API** to create the REST API.
-    <img src="{{base_path}}/assets/img/integrate/connectors/adding-an-api.jpg" title="Adding a Rest API" width="800" alt="Adding a Rest API"/>
-
-3. Follow these steps to [configure the Twitter API]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-credentials/) and obtain the Client Id, Access Token, and Refresh Token.
-
-4. Provide the API name as **createTweet**. You can go to the source view of the XML configuration file of the API and copy the following configuration.
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<api context="/createtweet" name="createTweet" xmlns="http://ws.apache.org/ns/synapse">
-    <resource methods="POST">
-        <inSequence>
-            <property expression="json-eval($.clientId)" name="clientId"/>
-            <property expression="json-eval($.accessToken)" name="accessToken"/>
-            <property expression="json-eval($.id)" name="id"/>
-            <property expression="json-eval($.text)" name="text"/>
-            <property expression="json-eval($.for_super_followers_only)" name="for_super_followers_only"/>
-            <property expression="json-eval($.poll)" name="poll"/>
-            <twitter.init>
-                <accessToken>{$ctx:accessToken}</accessToken>
-                <clientId>{$ctx:clientId}</clientId>
-            </twitter.init>
-            <twitter.createTweet>
-                <for_super_followers_only>{$ctx:for_super_followers_only}</for_super_followers_only>
-                <poll>{$ctx:poll}</poll>
-                <text>{$ctx:text}</text>
-            </twitter.createTweet>
-            <respond/>
-        </inSequence>
-        <outSequence>
-            <send/>
-        </outSequence>
-        <faultSequence/>
-    </resource>
-</api>
-```
-
-5. Follow these steps to export the artifacts.
-{!includes/reference/connectors/exporting-artifacts.md!}
-
-## Get the project
-
-You can download the ZIP file and extract the contents to get the project code.
-
-<a href="{{base_path}}/assets/attachments/connectors/twitter-connector.zip">
-    <img src="{{base_path}}/assets/img/integrate/connectors/download-zip.png" width="200" alt="Download ZIP">
-</a>
-
-
-## Deployment
-
-!!! attention
    Before deploying, you will have to configure the runtime. If you have not followed the [Configuring Integration Runtime]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-configuration/) guide, please follow it before deploying the CApp.
-
-Follow these steps to deploy the exported CApp in the integration runtime.<br>
-
-{!includes/reference/connectors/deploy-capp.md!}
-
-## Testing
-Invoke the API as shown below using the curl command. The curl application can be downloaded from [here](https://curl.haxx.se/download.html).
-
-```
-curl --location 'http://<HOST_NAME>:<PORT>/createtweet' \
---header 'Content-Type: application/json' \
---data '{
-    "clientId": "ZW82OS1rYkJnOEhmUUpjSDNnS246MTpjaQ",
-    "accessToken": "eENYRW5OczRKbFZCd2JRcm9EejFVUVp4N1JIcmNHY1RCLVBmckpHMjQycE1nOjE2ODczMjcxMzk4NjY6MTowOmF0OjE",
-    "text": "Hello from WSO2",
-    "for_super_followers_only": false,
-    "poll": {"options": ["yes", "maybe", "no"], "duration_minutes": 120}
-}'
-```
-
-If you are using MI 4.2.0 in your local environment with the default configuration, `<HOST_NAME> = localhost` and `<PORT> = 8290`.
-
-A response similar to the following will be received.
-```json
-{
-    "data": {
-        "edit_history_tweet_ids": [
-            "1667035675894640640"
-        ],
-        "id": "1667035675894640640",
-        "text": "Hello from WSO2"
-    }
-}
-```
-
-## What's Next
-
-* To explore the Twitter connector operations further, see the [Twitter Connector Reference]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-reference/) documentation.
diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-overview.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-overview.md
deleted file mode 100644
index fc46935e31..0000000000
--- a/en/docs/reference/connectors/twitter-connector/twitter-connector-overview.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Twitter Connector Overview
-
-The Twitter Connector allows you to work with Twitter, a social networking site where users broadcast short posts known as Tweets. You can use the Twitter connector to work with Tweets, users, lists, and direct messages. The connector uses the [Twitter API v2](https://developer.twitter.com/en/docs/twitter-api) to interact with Twitter.
-
-!!! info
-    If your Twitter application is using **Twitter API v1.1**, you have to use the [Twitter v2.0.7 Connector](https://docs.wso2.com/display/ESBCONNECTORS/Twitter+Connector+and+Inbound) to interact with Twitter.
-
-To see the Twitter Connector, navigate to the [connector store](https://store.wso2.com/store/assets/esbconnector/list) and search for "twitter".
-
-<center><img src="{{base_path}}/assets/img/integrate/connectors/twitter-connector-store.png" title="Twitter Connector Store" width="200" alt="Twitter Connector Store"/></center>
-
-## Compatibility
-
-| Connector Version | Supported product versions |
-| ------------- |-------------|
-| 3.0.0 | MI 4.2.x, EI 6.6.0, EI 6.4.0 |
-
-For older versions, see the details in the connector store.
-
-## Twitter Connector documentation
-
-* **[Creating the Client ID and Access Token]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-credentials/)**: You need to first create Twitter credentials for the connector to interact with Twitter.
-
-* **[Configuring Integration Runtime]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-configuration/)**: Secondly, you need to configure MI/EI for the Twitter connector to work.
-
-* **[Twitter Connector Example]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-example/)**: This example demonstrates a scenario of creating a Tweet using the WSO2 Twitter Connector.
-
-* **[Twitter Connector Reference]({{base_path}}/reference/connectors/twitter-connector/twitter-connector-reference/)**: This documentation provides a reference guide for the Twitter Connector.
-
-## How to contribute
-
-As an open-source project, WSO2 extensions welcome contributions from the community.
-
-To contribute to the code for this connector, create a pull request in the following repository.
-
-* [Twitter Connector GitHub repository](https://github.com/wso2-extensions/esb-connector-twitter)
-
-Check the issue tracker for open issues that interest you. We look forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-reference.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-reference.md
deleted file mode 100644
index c8a04277a4..0000000000
--- a/en/docs/reference/connectors/twitter-connector/twitter-connector-reference.md
+++ /dev/null
@@ -1,3462 +0,0 @@
-# Twitter Connector Reference
-
-The following operations allow you to work with the Twitter Connector. Click an operation name to see parameter details and samples on how to use it.
-
----
-
-## Initialize the connector
-
-To use the Twitter connector, add the `<twitter.init>` element in your configuration before carrying out any other Twitter operations.
-
-??? note "twitter.init"
-    The twitter.init operation initializes the connector to interact with the Twitter API. See the [related API documentation](https://developer.twitter.com/en/docs/authentication/oauth-2-0/authorization-code) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>accessToken</td>
-            <td>String</td>
-            <td>Yes, if the refresh token is not present</td>
-            <td>The access token of the OAuth 2.0 Twitter app. Not to be mistaken with the OAuth 1.0 access token.</td>
-        </tr>
-        <tr>
-            <td>refreshToken</td>
-            <td>String</td>
-            <td>Yes, if the access token is not present</td>
-            <td>The refresh token of the OAuth 2.0 Twitter app.
-        </tr>
-        <tr>
-            <td>clientId</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>The Client ID of the OAuth 2.0 Twitter app, which identifies the app when authenticating with OAuth 2.0.</td>
-        </tr>
-        <tr>
-            <td>apiUrl</td>
-            <td>String</td>
-            <td>No, the default value is https://api.twitter.com</td>
-            <td>The URL of the Twitter REST API.</td>
-        </tr>
-        <tr>
-            <td>timeout</td>
-            <td>Integer</td>
-            <td>No, the default value is 5000</td>
-            <td>The timeout duration of the API request.</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <twitter.init>
-        <clientId>{$ctx:clientId}</clientId>
-        <accessToken>{$ctx:accessToken}</accessToken>
-        <refreshToken>{$ctx:refreshToken}</refreshToken>
-        <apiUrl>{$ctx:apiUrl}</apiUrl>
-        <timeout>{$ctx:timeout}</timeout>
-    </twitter.init>
-    ```
-
-    **Sample request**
-
-    ```xml
-    <twitter.init>
-        <clientId>"rG9n6402A3dbUJKzXTNX4oWHJ"</clientId>
-        <accessToken>"MFpJRmFlbGJTZHVDdkNIbDN4WURTYTFiUmZtRV9HckdsUmlmd1ZxVjRvWHVUOjE2ODY1NDIwMjM5MTk6MTowOmF0OjE"</accessToken>
-        <refreshToken>"bWRWa3gzdnk3WHRGU1o0bmRRcTJ5VUxWX1lZTDdJSUtmaWcxbTVxdEFXcW5tOjE2MjIxNDc3NDM5MTQ6MToxOnJ0OjE"</refreshToken>
-    </twitter.init>
-    ```
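-
-    The parameter table above indicates that the connector can also be initialized with only the Client ID and a refresh token. The following is a minimal, illustrative sketch of such an init configuration (an assumption drawn from the table, not an additional documented operation):
-
-    ```xml
-    <!-- Hypothetical variant: initialize with a refresh token only, so the
-         connector can obtain a fresh access token when needed. -->
-    <twitter.init>
-        <clientId>{$ctx:clientId}</clientId>
-        <refreshToken>{$ctx:refreshToken}</refreshToken>
-    </twitter.init>
-    ```
----
-
-## Working with Tweets
-
-The following operations allow you to work with tweets. To be authorized for the following endpoints, you will need an access token with the correct scopes. Please refer to the [Twitter authentication map](https://developer.twitter.com/en/docs/authentication/guides/v2-authentication-mapping) to get the required scopes for the access token.
-
-??? note "createTweet"
-    The twitter.createTweet method creates a Tweet. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/manage-tweets/api-reference/post-tweets) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>text</td>
-            <td>String</td>
-            <td>Yes if media field is not present</td>
-            <td>The text of your Tweet. Up to 280 characters are permitted.</td>
-        </tr>
-        <tr>
-            <td>direct_message_deep_link</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Tweets a link directly to a Direct Message conversation with an account.</td>
-        </tr>
-        <tr>
-            <td>for_super_followers_only</td>
-            <td>Boolean</td>
-            <td>No</td>
-            <td>Allows you to Tweet exclusively for Super Followers.</td>
-        </tr>
-        <tr>
-            <td>geo</td>
-            <td>JSON Object</td>
-            <td>No</td>
-            <td>A JSON object that contains location information for a Tweet. You can only add a location to Tweets if you have geo enabled in your profile settings. If you don't have geo enabled, you can still add a location parameter in your request body, but it won't get attached to your Tweet.</td>
-        </tr>
-        <tr>
-            <td>media</td>
-            <td>JSON Object</td>
-            <td>No</td>
-            <td>A JSON object that contains media information being attached to the created Tweet. This is mutually exclusive from Quote Tweet ID and Poll.</td>
-        </tr>
-        <tr>
-            <td>poll</td>
-            <td>JSON Object</td>
-            <td>No</td>
-            <td>A JSON object that contains options for a Tweet with a poll. This is mutually exclusive from Media and Quote Tweet ID.</td>
-        </tr>
-        <tr>
-            <td>quote_tweet_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Link to the Tweet being quoted.</td>
-        </tr>
-        <tr>
-            <td>reply</td>
-            <td>JSON Object</td>
-            <td>No</td>
-            <td>A JSON object that contains information of the Tweet being replied to.</td>
-        </tr>
-        <tr>
-            <td>reply_settings</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Settings to indicate who can reply to the Tweet.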
Valid values are: `mentionedUsers, following`. If the field isn’t specified, it will default to everyone.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.createTweet> - <text>{$ctx:text}</text> - <direct_message_deep_link>{$ctx:direct_message_deep_link}</direct_message_deep_link> - <for_super_followers_only>{$ctx:for_super_followers_only}</for_super_followers_only> - <geo>{$ctx:geo}</geo> - <media>{$ctx:media}</media> - <poll>{$ctx:poll}</poll> - <quote_tweet_id>{$ctx:quote_tweet_id}</quote_tweet_id> - <reply>{$ctx:reply}</reply> - <reply_settings>{$ctx:reply_settings}</reply_settings> - </twitter.createTweet> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the createTweet operation. - - ```xml - <twitter.createTweet> - <text>"Hello World!"</text> - <for_super_followers_only>true</for_super_followers_only> - <poll>{"options": ["yes", "maybe", "no"], "duration_minutes": 120}</poll> - <reply>{"in_reply_to_tweet_id": "1455953449422516226", "exclude_reply_user_ids": ["6253282"]}</reply> - <reply_settings>"mentionedUsers"</reply_settings> - </twitter.createTweet> - ``` - **Sample response** - - Given below is a sample response for the createTweet operation. - - ```json - { - "data": { - "edit_history_tweet_ids": [ - "1667035675894640640" - ], - "id": "1667035675894640640", - "text": "Hello World!" - } - } - ``` - -??? note "deleteTweet" - The twitter.deleteTweet method deletes a Tweet when given the Tweet ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/manage-tweets/api-reference/delete-tweets-id) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The Tweet ID you are deleting.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.deleteTweet> - <id>{$ctx:id}</id> - </twitter.deleteTweet> - - ``` - - **Sample request** - - Given below is a sample request that can be handled by the deleteTweet operation. - - ```xml - <twitter.deleteTweet> - <id>"1667035675894640640"</id> - </twitter.deleteTweet> - ``` - - **Sample response** - - Given below is a sample response for the deleteTweet operation. - - ```json - { - "data": { - "deleted": true - } - } - ``` - -??? note "getTweetById" - The twitter.getTweetById method retrieves information about a single Tweet specified by the requested ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/lookup/api-reference/get-tweets-id) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>Unique identifier of the Tweet to request.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. 
Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td> - </tr> - <tr> - <td>media_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td> - </tr> - <tr> - <td>place_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td> - </tr> - <tr> - <td>poll_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. 
While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getTweetById> - <id>{$ctx:id}</id> - <expansions>{$ctx:expansions}</expansions> - <media_fields>{$ctx:media_fields}</media_fields> - <place_fields>{$ctx:place_fields}</place_fields> - <poll_fields>{$ctx:poll_fields}</poll_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getTweetById> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getTweetById operation. - - ```xml - <twitter.getTweetById> - <id>"1460323737035677698"</id> - <expansions>"attachments.media_keys,author_id"</expansions> - <media_fields>"duration_ms,media_key"</media_fields> - <tweet_fields>"lang"</tweet_fields> - </twitter.getTweetById> - ``` - - **Sample response** - - Given below is a sample response for the getTweetById operation. - - ```json - { - "data": { - "lang": "en", - "author_id": "2244994945", - "text": "Introducing a new era for the Twitter Developer Platform! \n\n The Twitter API v2 is now the primary API and full of new features\n⏱Immediate access for most use cases, or apply to get more access for free", - "attachments": { - "media_keys": [ - "7_1460322142680072196" - ] - }, - "id": "1460323737035677698", - "edit_history_tweet_ids": [ - "1460323737035677698" - ] - } - } - - ``` - -??? note "getTweetsLookup" - The twitter.getTweetsLookup method retrieves information about one or more Tweets specified by the requested IDs. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/lookup/api-reference/get-tweets) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>ids</td> - <td>String</td> - <td>Yes</td> - <td>A comma separated list of Tweet IDs. Up to 100 are allowed in a single request. Make sure to not include a space between commas and fields.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td> - </tr> - <tr> - <td>media_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. 
The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td> - </tr> - <tr> - <td>place_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td> - </tr> - <tr> - <td>poll_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. 
Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <twitter.getTweetsLookup>
-        <ids>{$ctx:ids}</ids>
-        <expansions>{$ctx:expansions}</expansions>
-        <media_fields>{$ctx:media_fields}</media_fields>
-        <place_fields>{$ctx:place_fields}</place_fields>
-        <poll_fields>{$ctx:poll_fields}</poll_fields>
-        <tweet_fields>{$ctx:tweet_fields}</tweet_fields>
-        <user_fields>{$ctx:user_fields}</user_fields>
-    </twitter.getTweetsLookup>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the getTweetsLookup operation.
-
-    ```xml
-    <twitter.getTweetsLookup>
-        <ids>"1460323737035677698,1519781379172495360,1519781381693353984"</ids>
-        <expansions>"attachments.poll_ids,author_id"</expansions>
-        <poll_fields>"duration_minutes"</poll_fields>
-    </twitter.getTweetsLookup>
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the getTweetsLookup operation.
-
-    ```json
-    {
-        "data": [
-            {
-                "text": "Introducing a new era for the Twitter Developer Platform! \n\n📣The Twitter API v2 is now the primary API and full of new features\n⏱Immediate access for most use cases, or apply to get more access for free\n📖Removed certain restrictions in the Policy\nhttps://t.co/Hrm15bkBWJ https://t.co/YFfCDErHsg",
-                "edit_history_tweet_ids": [
-                    "1460323737035677698"
-                ],
-                "id": "1460323737035677698",
-                "lang": "en",
-                "author_id": "2244994945"
-            },
-            {
-                "text": "Our mission remains just as important as ever: to deliver an open platform that serves the public conversation. We’re continuing to innovate on the Twitter API v2 and invest in our developer community 🧵\n\nhttps://t.co/Rug1l46sUc",
-                "edit_history_tweet_ids": [
-                    "1519781379172495360"
-                ],
-                "id": "1519781379172495360",
-                "lang": "en",
-                "author_id": "2244994945"
-            },
-            {
-                "text": "Catch up on recent launches and build with the core elements of the Twitter experience:\n🔖 New Bookmarks endpoints\n💬 New Quote Tweets lookup endpoints\n🔼 New sort_order parameter on search endpoints, and improvements to the Likes and Retweets endpoints",
-                "edit_history_tweet_ids": [
-                    "1519781381693353984"
-                ],
-                "id": "1519781381693353984",
-                "lang": "en",
-                "author_id": "2244994945"
-            }
-        ]
-    }
-    ```
-
-??? note "searchTweets"
-    The twitter.searchTweets method retrieves a collection of tweets that meet the specified search criteria. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>query</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>Query for matching Tweets. For more information, see the [Twitter query guide](https://developer.twitter.com/en/docs/twitter-api/tweets/search/integrate/build-a-query).</td>
-        </tr>
-        <tr>
-            <td>start_time</td>
-            <td>String</td>
-            <td>No</td>
-            <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The oldest UTC timestamp (from most recent seven days) from which the Tweets will be provided. Timestamp is in second granularity and is inclusive (for example, 12:00:01 includes the first second of the minute). If included with the same request as a since_id parameter, only since_id will be used. By default, a request will return Tweets from up to seven days ago if you do not include this parameter.</td>
-        </tr>
-        <tr>
-            <td>end_time</td>
-            <td>String</td>
-            <td>No</td>
-            <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The newest, most recent UTC timestamp to which the Tweets will be provided. Timestamp is in second granularity and is exclusive (for example, 12:00:01 excludes the first second of the minute). By default, a request will return Tweets from as recent as 30 seconds ago if you do not include this parameter.</td>
-        </tr>
-        <tr>
-            <td>since_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID greater than (that is, more recent than) the specified ID. The ID specified is exclusive and responses will not include it. If included with the same request as a start_time parameter, only since_id will be used.</td>
-        </tr>
-        <tr>
-            <td>until_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID less than (that is, older than) the specified 'until' Tweet ID. There are limits to the number of Tweets that can be accessed through the API. If the limit of Tweets has occurred since the until_id, the until_id will be forced to the most recent ID available.</td>
-        </tr>
-        <tr>
-            <td>sort_order</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to specify the order in which you want the Tweets returned. By default, a request will return the most recent Tweets first (sorted by recency). Valid values for this parameter are: `recency, relevancy`</td>
-        </tr>
-        <tr>
-            <td>max_results</td>
-            <td>Integer</td>
-            <td>No</td>
-            <td>The maximum number of results to be returned per page. This can be a number between 1 and 1000. By default, each page will return 100 results.</td>
-        </tr>
-        <tr>
-            <td>next_token</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to get the next 'page' of results. The value used with the parameter is pulled directly from the response provided by the API, and should not be modified.</td>
-        </tr>
-        <tr>
-            <td>expansions</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td>
-        </tr>
-        <tr>
-            <td>media_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td>
-        </tr>
-        <tr>
-            <td>place_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet.
Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td> - </tr> - <tr> - <td>poll_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.searchTweets> - <query>{$ctx:query}</query> - <start_time>{$ctx:start_time}</start_time> - <end_time>{$ctx:end_time}</end_time> - <since_id>{$ctx:since_id}</since_id> - <until_id>{$ctx:until_id}</until_id> - <sort_order>{$ctx:sort_order}</sort_order> - <max_results>{$ctx:max_results}</max_results> - <next_token>{$ctx:next_token}</next_token> - <expansions>{$ctx:expansions}</expansions> - <media_fields>{$ctx:media_fields}</media_fields> - <place_fields>{$ctx:place_fields}</place_fields> - <poll_fields>{$ctx:poll_fields}</poll_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.searchTweets> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the searchTweets operation. 
-
-    ```xml
-    <twitter.searchTweets>
-        <query>"(from:TwitterDev) new -is:retweet"</query>
-        <start_time>"2020-01-01T00:00:00Z"</start_time>
-        <sort_order>"recency"</sort_order>
-        <max_results>10</max_results>
-        <tweet_fields>"created_at,lang,conversation_id"</tweet_fields>
-    </twitter.searchTweets>
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the searchTweets operation.
-
-    ```json
-    {
-        "data": [
-            {
-                "text": "Looking to get started with the Twitter API but new to APIs in general? @jessicagarson will walk you through everything you need to know in APIs 101 session. She’ll use examples using our v2 endpoints, Tuesday, March 23rd at 1 pm EST.",
-                "author_id": "2244994945",
-                "id": "1373001119480344583",
-                "edit_history_tweet_ids": [
-                    "1373001119480344583"
-                ],
-                "lang": "en",
-                "conversation_id": "1373001119480344583",
-                "created_at": "2021-03-19T19:59:10.000Z"
-            }
-        ]
-    }
-    ```
-
-??? note "likeTweet"
-    The twitter.likeTweet method likes a tweet. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/likes/api-reference/post-users-id-likes) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>user_id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>The user ID who you are liking a Tweet on behalf of.</td>
-        </tr>
-        <tr>
-            <td>tweet_id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>The ID of the Tweet that you want to Like.</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <twitter.likeTweet>
-        <user_id>{$ctx:user_id}</user_id>
-        <tweet_id>{$ctx:tweet_id}</tweet_id>
-    </twitter.likeTweet>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the likeTweet operation.
-
-    ```xml
-    <twitter.likeTweet>
-        <user_id>"1655515285577936899"</user_id>
-        <tweet_id>"1521887626935947265"</tweet_id>
-    </twitter.likeTweet>
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the likeTweet operation.
-
-    ```json
-    {
-        "data": {
-            "liked": true
-        }
-    }
-    ```
-
-??? note "unlikeTweet"
-    The twitter.unlikeTweet method unlikes a tweet. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/likes/api-reference/delete-users-id-likes-tweet_id) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>user_id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>The user ID who you are unliking a Tweet on behalf of.</td>
-        </tr>
-        <tr>
-            <td>tweet_id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>The ID of the Tweet that you want to unlike.</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <twitter.unlikeTweet>
-        <user_id>{$ctx:user_id}</user_id>
-        <tweet_id>{$ctx:tweet_id}</tweet_id>
-    </twitter.unlikeTweet>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the unlikeTweet operation.
-
-    ```xml
-    <twitter.unlikeTweet>
-        <user_id>"1655515285577936899"</user_id>
-        <tweet_id>"1521887626935947265"</tweet_id>
-    </twitter.unlikeTweet>
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the unlikeTweet operation.
-
-    ```json
-    {
-        "data": {
-            "liked": false
-        }
-    }
-    ```
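-
-The following is a minimal, illustrative sketch of how the like operation can be wired into a mediation flow. The sequence and property names are assumptions, and the credential properties are expected to be set earlier in the flow. It reads the user and Tweet IDs from the incoming JSON payload and likes the Tweet:
-
-```xml
-<!-- Hypothetical sequence: names and payload structure are illustrative. -->
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="LikeTweetSequence">
-    <!-- Read the user and Tweet IDs from the incoming JSON payload -->
-    <property name="user_id" expression="json-eval($.user_id)"/>
-    <property name="tweet_id" expression="json-eval($.tweet_id)"/>
-    <!-- clientId and accessToken properties are assumed to be set earlier in the flow -->
-    <twitter.init>
-        <clientId>{$ctx:clientId}</clientId>
-        <accessToken>{$ctx:accessToken}</accessToken>
-    </twitter.init>
-    <twitter.likeTweet>
-        <user_id>{$ctx:user_id}</user_id>
-        <tweet_id>{$ctx:tweet_id}</tweet_id>
-    </twitter.likeTweet>
-    <respond/>
-</sequence>
-```
-
-??? note "getLikedTweetsList"
-    The twitter.getLikedTweetsList method retrieves a list of liked Tweets of the specified user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/likes/api-reference/get-users-id-liked_tweets) for more information.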
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>User ID of the user to request liked Tweets for.</td>
-        </tr>
-        <tr>
-            <td>max_results</td>
-            <td>Integer</td>
-            <td>No</td>
-            <td>The maximum number of results to be returned per page. This can be a number between 1 and 1000. By default, each page will return 100 results.</td>
-        </tr>
-        <tr>
-            <td>pagination_token</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td>
-        </tr>
-        <tr>
-            <td>expansions</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td>
-        </tr>
-        <tr>
-            <td>media_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td>
-        </tr>
-        <tr>
-            <td>place_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td>
-        </tr>
-        <tr>
-            <td>poll_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields.
The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getLikedTweetsList> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <media_fields>{$ctx:media_fields}</media_fields> - <place_fields>{$ctx:place_fields}</place_fields> - <poll_fields>{$ctx:poll_fields}</poll_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getLikedTweetsList> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getLikedTweetsList operation. - - ```xml - <twitter.getLikedTweetsList> - <id>"1655515285577936899"</id> - <max_results>10</max_results> - </twitter.getLikedTweetsList> - ``` - - **Sample response** - - Given below is a sample response for the getLikedTweetsList operation. - - ```json - { - "data": [ - { - "edit_history_tweet_ids": [ - "1519781381693353984" - ], - "id": "1519781381693353984", - "text": "Catch up on recent launches and build with the core elements of the Twitter experience:\n🔖 New Bookmarks endpoints\n💬 New Quote Tweets lookup endpoints\n🔼 New sort_order parameter on search endpoints, and improvements to the Likes and Retweets endpoints" - }, - { - "edit_history_tweet_ids": [ - "1519781379172495360" - ], - "id": "1519781379172495360", - "text": "Our mission remains just as important as ever: to deliver an open platform that serves the public conversation. 
We’re continuing to innovate on the Twitter API v2 and invest in our developer community 🧵\n\nhttps://t.co/Rug1l46sUc" - }, - { - "edit_history_tweet_ids": [ - "1460323737035677698" - ], - "id": "1460323737035677698", - "text": "Introducing a new era for the Twitter Developer Platform! \n\n📣The Twitter API v2 is now the primary API and full of new features\n⏱Immediate access for most use cases, or apply to get more access for free\n📖Removed certain restrictions in the Policy\nhttps://t.co/Hrm15bkBWJ https://t.co/YFfCDErHsg" - } - ], - "meta": { - "result_count": 3, - "next_token": "7140dibdnow9c7btw482mq8hweo1bqos2tvjtvo5vftx2" - } - } - ``` - -??? note "createRetweet" - The twitter.createRetweet method retweets a Tweet. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/retweets/api-reference/post-users-id-retweets) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID who you are Retweeting a Tweet on behalf of.</td> - </tr> - <tr> - <td>tweet_id</td> - <td>String</td> - <td>Yes</td> - <td>The ID of the Tweet that you would like to Retweet.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.createRetweet> - <user_id>{$ctx:user_id}</user_id> - <tweet_id>{$ctx:tweet_id}</tweet_id> - </twitter.createRetweet> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the createRetweet operation. - - ```xml - <twitter.createRetweet> - <user_id>"1655515285577936899"</user_id> - <tweet_id>"1519781381693353984"</tweet_id> - </twitter.createRetweet> - ``` - - **Sample response** - - Given below is a sample response for the createRetweet operation. - - ```json - { - "data": { - "retweeted": true - } - } - ``` - -??? note "getUserHomeTimeline" - The twitter.getUserHomeTimeline method retrieves a collection of the most recent Tweets and Retweets posted by you and users you follow. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/timelines/api-reference/get-users-id-reverse-chronological) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>Unique identifier of the user that is requesting their chronological home timeline.</td> - </tr> - <tr> - <td>start_time</td> - <td>String</td> - <td>No</td> - <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The oldest UTC timestamp (from most recent seven days) from which the Tweets will be provided. Timestamp is in second granularity and is inclusive (for example, 12:00:01 includes the first second of the minute). If included with the same request as a since_id parameter, only since_id will be used. By default, a request will return Tweets from up to seven days ago if you do not include this parameter.</td> - </tr> - <tr> - <td>end_time</td> - <td>String</td> - <td>No</td> - <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The newest, most recent UTC timestamp to which the Tweets will be provided. Timestamp is in second granularity and is exclusive (for example, 12:00:01 excludes the first second of the minute). 
By default, a request will return Tweets from as recent as 30 seconds ago if you do not include this parameter.</td>
-        </tr>
-        <tr>
-            <td>since_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID greater than (that is, more recent than) the specified ID. The ID specified is exclusive and responses will not include it. If included with the same request as a start_time parameter, only since_id will be used.</td>
-        </tr>
-        <tr>
-            <td>until_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID less than (that is, older than) the specified 'until' Tweet ID. There are limits to the number of Tweets that can be accessed through the API. If the limit of Tweets has occurred since the until_id, the until_id will be forced to the most recent ID available.</td>
-        </tr>
-        <tr>
-            <td>sort_order</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to specify the order in which you want the Tweets returned. By default, a request will return the most recent Tweets first (sorted by recency). Valid values for this parameter are: `recency, relevancy`</td>
-        </tr>
-        <tr>
-            <td>max_results</td>
-            <td>Integer</td>
-            <td>No</td>
-            <td>The maximum number of results to be returned per page. This can be a number between 1 and 1000. By default, each page will return 100 results.</td>
-        </tr>
-        <tr>
-            <td>pagination_token</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to move forwards or backwards through 'pages' of results, based on the value of the next_token or previous_token in the response. The value used with the parameter is pulled directly from the response provided by the API, and should not be modified.</td>
-        </tr>
-        <tr>
-            <td>exclude</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Comma-separated list of the types of Tweets to exclude from the response. Valid values for this parameter are: `retweets, replies`</td>
-        </tr>
-        <tr>
-            <td>expansions</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td>
-        </tr>
-        <tr>
-            <td>media_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object.
Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td> - </tr> - <tr> - <td>place_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td> - </tr> - <tr> - <td>poll_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. 
Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td>
-        </tr>
-    </table>
-
-    **Sample configuration**
-
-    ```xml
-    <twitter.getUserHomeTimeline>
-        <id>{$ctx:id}</id>
-        <start_time>{$ctx:start_time}</start_time>
-        <end_time>{$ctx:end_time}</end_time>
-        <since_id>{$ctx:since_id}</since_id>
-        <until_id>{$ctx:until_id}</until_id>
-        <sort_order>{$ctx:sort_order}</sort_order>
-        <max_results>{$ctx:max_results}</max_results>
-        <pagination_token>{$ctx:pagination_token}</pagination_token>
-        <exclude>{$ctx:exclude}</exclude>
-        <expansions>{$ctx:expansions}</expansions>
-        <media_fields>{$ctx:media_fields}</media_fields>
-        <place_fields>{$ctx:place_fields}</place_fields>
-        <poll_fields>{$ctx:poll_fields}</poll_fields>
-        <tweet_fields>{$ctx:tweet_fields}</tweet_fields>
-        <user_fields>{$ctx:user_fields}</user_fields>
-    </twitter.getUserHomeTimeline>
-    ```
-
-    **Sample request**
-
-    Given below is a sample request that can be handled by the getUserHomeTimeline operation.
-
-    ```xml
-    <twitter.getUserHomeTimeline>
-        <id>"1655515285577936899"</id>
-        <start_time>"2020-01-01T00:00:00Z"</start_time>
-        <max_results>10</max_results>
-        <tweet_fields>"created_at,lang,conversation_id"</tweet_fields>
-    </twitter.getUserHomeTimeline>
-    ```
-
-    **Sample response**
-
-    Given below is a sample response for the getUserHomeTimeline operation.
-
-    ```json
-    {
-        "data": [
-            {
-                "created_at": "2022-05-12T17:00:00.000Z",
-                "text": "Today marks the launch of Devs in the Details, a technical video series made for developers by developers building with the Twitter API. 🚀\n\nIn this premiere episode, @jessicagarson walks us through how she built @FactualCat #WelcomeToOurTechTalk\n⬇️\n\nhttps://t.co/nGa8JTQVBJ",
-                "author_id": "2244994945",
-                "edit_history_tweet_ids": [
-                    "1524796546306478083"
-                ],
-                "id": "1524796546306478083"
-            },
-            {
-                "created_at": "2022-05-11T19:16:40.000Z",
-                "text": "📢 Join @jessicagarson @alanbenlee and @i_am_daniele tomorrow, May 12 | 5:30 ET / 2:30pm PT as they discuss the future of bots https://t.co/sQ2bIO1fz6",
-                "author_id": "2244994945",
-                "edit_history_tweet_ids": [
-                    "1524468552404668416"
-                ],
-                "id": "1524468552404668416"
-            },
-            {
-                "created_at": "2022-05-09T20:12:01.000Z",
-                "text": "Do you make bots with the Twitter API? 🤖\n\nJoin @jessicagarson @alanbenlee and @iamdaniele on Thursday, May 12 | 5:30 ET / 2:30pm PT as they discuss the future of bots and answer any questions you might have. 🎙📆⬇️\n\nhttps://t.co/2uVt7hCcdG",
-                "author_id": "2244994945",
-                "edit_history_tweet_ids": [
-                    "1523757705436958720"
-                ],
-                "id": "1523757705436958720"
-            },
-            {
-                "created_at": "2022-05-06T18:19:54.000Z",
-                "text": "If you’d like to apply, or would like to nominate someone else for the program, please feel free to fill out the following form:\n\nhttps://t.co/LUuWj24HLu",
-                "author_id": "2244994945",
-                "edit_history_tweet_ids": [
-                    "1522642324781633536"
-                ],
-                "id": "1522642324781633536"
-            },
-            {
-                "created_at": "2022-05-06T18:19:53.000Z",
-                "text": "We’ve gone into more detail on each Insider in our forum post. \n\nJoin us in congratulating the new additions! 🥳\n\nhttps://t.co/0r5maYEjPJ",
-                "author_id": "2244994945",
-                "edit_history_tweet_ids": [
-                    "1522642323535847424"
-                ],
-                "id": "1522642323535847424"
-            }
-        ],
-        "includes": {
-            "users": [
-                {
-                    "created_at": "2013-12-14T04:35:55.000Z",
-                    "name": "Twitter Dev",
-                    "username": "TwitterDev",
-                    "id": "2244994945"
-                }
-            ]
-        },
-        "meta": {
-            "result_count": 5,
-            "newest_id": "1524796546306478083",
-            "oldest_id": "1522642323535847424",
-            "next_token": "7140dibdnow9c7btw421dyz6jism75z99gyxd8egarsc4"
-        }
-    }
-    ```
-
-??? note "getUserMentionsTimeline"
-    The twitter.getUserMentionsTimeline method retrieves Tweets mentioning a single user specified by the requested user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/timelines/api-reference/get-users-id-mentions) for more information.
-    <table>
-        <tr>
-            <th>Parameter Name</th>
-            <th>Type</th>
-            <th>Required</th>
-            <th>Description</th>
-        </tr>
-        <tr>
-            <td>id</td>
-            <td>String</td>
-            <td>Yes</td>
-            <td>Unique identifier of the user for whom to return Tweets mentioning the user.</td>
-        </tr>
-        <tr>
-            <td>start_time</td>
-            <td>String</td>
-            <td>No</td>
-            <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The oldest UTC timestamp (from most recent seven days) from which the Tweets will be provided. Timestamp is in second granularity and is inclusive (for example, 12:00:01 includes the first second of the minute). If included with the same request as a since_id parameter, only since_id will be used. By default, a request will return Tweets from up to seven days ago if you do not include this parameter.</td>
-        </tr>
-        <tr>
-            <td>end_time</td>
-            <td>String</td>
-            <td>No</td>
-            <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The newest, most recent UTC timestamp to which the Tweets will be provided. Timestamp is in second granularity and is exclusive (for example, 12:00:01 excludes the first second of the minute). By default, a request will return Tweets from as recent as 30 seconds ago if you do not include this parameter.</td>
-        </tr>
-        <tr>
-            <td>since_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID greater than (that is, more recent than) the specified ID. The ID specified is exclusive and responses will not include it. If included with the same request as a start_time parameter, only since_id will be used.</td>
-        </tr>
-        <tr>
-            <td>until_id</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Returns results with a Tweet ID less than (that is, older than) the specified 'until' Tweet ID. There are limits to the number of Tweets that can be accessed through the API. If the limit of Tweets has occurred since the until_id, the until_id will be forced to the most recent ID available.</td>
-        </tr>
-        <tr>
-            <td>sort_order</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to specify the order in which you want the Tweets returned. By default, a request will return the most recent Tweets first (sorted by recency). Valid values for this parameter are: `recency, relevancy`</td>
-        </tr>
-        <tr>
-            <td>max_results</td>
-            <td>Integer</td>
-            <td>No</td>
-            <td>The maximum number of results to be returned per page. This can be a number between 1 and 1000.
By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>This parameter is used to move forwards or backwards through 'pages' of results, based on the value of the next_token or previous_token in the response. The value used with the parameter is pulled directly from the response provided by the API, and should not be modified.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td> - </tr> - <tr> - <td>media_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`</td> - </tr> - <tr> - <td>place_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td> - </tr> - <tr> - <td>poll_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. 
You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getUserMentionsTimeline> - <id>{$ctx:id}</id> - <start_time>{$ctx:start_time}</start_time> - <end_time>{$ctx:end_time}</end_time> - <since_id>{$ctx:since_id}</since_id> - <until_id>{$ctx:until_id}</until_id> - <sort_order>{$ctx:sort_order}</sort_order> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <media_fields>{$ctx:media_fields}</media_fields> - <place_fields>{$ctx:place_fields}</place_fields> - <poll_fields>{$ctx:poll_fields}</poll_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getUserMentionsTimeline> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUserMentionsTimeline operation. - - ```xml - <twitter.getUserMentionsTimeline> - <id>"1655515285577936899"</id> - <start_time>"2020-01-01T00:00:00Z"</start_time> - <max_results>10</max_results> - <tweet_fields>"created_at,lang,conversation_id"</tweet_fields> - </twitter.getUserMentionsTimeline> - ``` - - **Sample response** - - Given below is a sample response for the getUserMentionsTimeline operation. - - ```json - { - "data": [ - { - "public_metrics": { - "retweet_count": 5, - "reply_count": 2, - "like_count": 22, - "quote_count": 0 - }, - "text": "Live now! 
https://t.co/9BbWekeWq2", - "author_id": "2244994945", - "id": "1374405406261268481", - "edit_history_tweet_ids": [ - "1374405406261268481" - ], - "created_at": "2021-03-23T16:59:18.000Z", - "context_annotations": [ - { - "domain": { - "id": "46", - "name": "Brand Category", - "description": "Categories within Brand Verticals that narrow down the scope of Brands" - }, - "entity": { - "id": "781974596752842752", - "name": "Services" - } - }, - { - "domain": { - "id": "47", - "name": "Brand", - "description": "Brands and Companies" - }, - "entity": { - "id": "10045225402", - "name": "Twitter" - } - } - ], - "conversation_id": "1374405406261268481" - }, - { - "public_metrics": { - "retweet_count": 7, - "reply_count": 1, - "like_count": 21, - "quote_count": 2 - }, - "text": "Hope to see you tomorrow at 1 pm EST for APIs 101! nhttps://t.co/GrtBOXyHmB https://t.co/YyQfmgiLlL", - "author_id": "2244994945", - "id": "1374104599456534531", - "edit_history_tweet_ids": [ - "1374104599456534531" - ], - "created_at": "2021-03-22T21:04:00.000Z", - "context_annotations": [ - { - "domain": { - "id": "46", - "name": "Brand Category", - "description": "Categories within Brand Verticals that narrow down the scope of Brands" - }, - "entity": { - "id": "781974596752842752", - "name": "Services" - } - }, - { - "domain": { - "id": "47", - "name": "Brand", - "description": "Brands and Companies" - }, - "entity": { - "id": "10045225402", - "name": "Twitter" - } - } - ], - "conversation_id": "1374104599456534531" - } - ] - } - ``` - -??? note "getUserTweetsTimeline" - The twitter.getUserTweetsTimeline method retrieves Tweets composed by a single user, specified by the requested user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/tweets/timelines/api-reference/get-users-id-tweets) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>Unique identifier of the user who composed the Tweets.</td> - </tr> - <tr> - <td>start_time</td> - <td>String</td> - <td>No</td> - <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The oldest UTC timestamp (from most recent seven days) from which the Tweets will be provided. Timestamp is in second granularity and is inclusive (for example, 12:00:01 includes the first second of the minute). If included with the same request as a since_id parameter, only since_id will be used. By default, a request will return Tweets from up to seven days ago if you do not include this parameter.</td> - </tr> - <tr> - <td>end_time</td> - <td>String</td> - <td>No</td> - <td>`YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). The newest, most recent UTC timestamp to which the Tweets will be provided. Timestamp is in second granularity and is exclusive (for example, 12:00:01 excludes the first second of the minute). By default, a request will return Tweets from as recent as 30 seconds ago if you do not include this parameter.</td> - </tr> - <tr> - <td>since_id</td> - <td>String</td> - <td>No</td> - <td>Returns results with a Tweet ID greater than (that is, more recent than) the specified ID. The ID specified is exclusive and responses will not include it. If included with the same request as a start_time parameter, only since_id will be used.</td> - </tr> - <tr> - <td>until_id</td> - <td>String</td> - <td>No</td> - <td>Returns results with a Tweet ID less than (that is, older than) the specified 'until' Tweet ID. 
There are limits to the number of Tweets that can be accessed through the API. If more Tweets than can be accessed have been posted since the until_id, the until_id will be forced to the most recent ID available.</td>
-        </tr>
-        <tr>
-            <td>sort_order</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Specifies the order in which you want the Tweets returned. By default, a request returns the most recent Tweets first (sorted by recency). Valid values for this parameter are: `recency, relevancy`.</td>
-        </tr>
-        <tr>
-            <td>max_results</td>
-            <td>Integer</td>
-            <td>No</td>
-            <td>The maximum number of results to be returned per page. This can be a number between 1 and 1000. By default, each page will return 100 results.</td>
-        </tr>
-        <tr>
-            <td>pagination_token</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This parameter is used to move forwards or backwards through 'pages' of results, based on the value of the next_token or previous_token in the response. The value used with the parameter is pulled directly from the response provided by the API and should not be modified.</td>
-        </tr>
-        <tr>
-            <td>expansions</td>
-            <td>String</td>
-            <td>No</td>
-            <td>Expansions enable you to request additional data objects that relate to the originally returned Tweets. Submit a list of desired expansions in a comma-separated list without spaces. The ID that represents the expanded data object will be included directly in the Tweet data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. Valid values for this parameter are: `attachments.poll_ids, attachments.media_keys, author_id, edit_history_tweet_ids, entities.mentions.username, geo.place_id, in_reply_to_user_id, referenced_tweets.id, referenced_tweets.id.author_id`.</td>
-        </tr>
-        <tr>
-            <td>media_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific media fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return media fields if the Tweet contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. While the media ID will be located in the Tweet object, you will find this ID and all additional media fields in the includes data object. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, non_public_metrics, organic_metrics, promoted_metrics, alt_text, variants`.</td>
-        </tr>
-        <tr>
-            <td>place_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific place fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The response will contain the selected fields only if you've also included the expansions=geo.place_id query parameter in your request. Valid values for this parameter are: `contained_within, country, country_code, full_name, geo, id, name, place_type`.</td>
-        </tr>
-        <tr>
-            <td>poll_fields</td>
-            <td>String</td>
-            <td>No</td>
-            <td>This fields parameter enables you to select which specific poll fields will deliver in each returned Tweet.
Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet will only return poll fields if the Tweet contains a poll and if you've also included the expansions=attachments.poll_ids query parameter in your request. While the poll ID will be located in the Tweet object, you will find this ID and all additional poll fields in the includes data object. Valid values for this parameter are: `duration_minutes, end_datetime, id, options, voting_status`.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned Tweet object. Specify the desired fields in a comma-separated list without spaces between commas and fields. You can also pass the expansions=referenced_tweets.id expansion to return the specified fields for both the original Tweet and any included referenced Tweets. The requested Tweet fields will display in both the original Tweet data object, as well as in the referenced Tweet expanded data object that will be located in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver in each returned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the original Tweet object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getUserTweetsTimeline> - <id>{$ctx:id}</id> - <start_time>{$ctx:start_time}</start_time> - <end_time>{$ctx:end_time}</end_time> - <since_id>{$ctx:since_id}</since_id> - <until_id>{$ctx:until_id}</until_id> - <sort_order>{$ctx:sort_order}</sort_order> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <media_fields>{$ctx:media_fields}</media_fields> - <place_fields>{$ctx:place_fields}</place_fields> - <poll_fields>{$ctx:poll_fields}</poll_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getUserTweetsTimeline> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUserTweetsTimeline operation. - - ```xml - <twitter.getUserTweetsTimeline> - <id>"1655515285577936899"</id> - <start_time>"2020-01-01T00:00:00Z"</start_time> - <max_results>10</max_results> - <tweet_fields>"created_at,lang,conversation_id"</tweet_fields> - </twitter.getUserTweetsTimeline> - ``` - - **Sample response** - - Given below is a sample response for the getUserTweetsTimeline operation. - - ```json - { - "data": [ - { - "public_metrics": { - "retweet_count": 5, - "reply_count": 2, - "like_count": 22, - "quote_count": 0 - }, - "text": "Live now! 
https://t.co/9BbWekeWq2", - "author_id": "2244994945", - "id": "1374405406261268481", - "edit_history_tweet_ids": [ - "1374405406261268481" - ], - "created_at": "2021-03-23T16:59:18.000Z", - "context_annotations": [ - { - "domain": { - "id": "46", - "name": "Brand Category", - "description": "Categories within Brand Verticals that narrow down the scope of Brands" - }, - "entity": { - "id": "781974596752842752", - "name": "Services" - } - }, - { - "domain": { - "id": "47", - "name": "Brand", - "description": "Brands and Companies" - }, - "entity": { - "id": "10045225402", - "name": "Twitter" - } - } - ], - "conversation_id": "1374405406261268481" - } - ] - } - ``` ---- - -## Working with Users - -The following operations allow you to work with users in Twitter. To be authorized for the following endpoints, you will need an access token with the correct scopes. Please refer the [Twitter authentication map](https://developer.twitter.com/en/docs/authentication/guides/v2-authentication-mapping) to get the required scopes for the access token. - -??? note "getMe" - The twitter.getMe method retrieves information about the authorized user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/lookup/api-reference/get-users-me) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. 
Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getMe> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getMe> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getMe operation. - - ```xml - <twitter.getMe> - <expansions>"pinned_tweet_id"</expansions> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getMe> - ``` - - **Sample response** - - Given below is a sample response for the getMe operation. - - ```json - { - "data": { - "name": "GrawKraken", - "username": "GrawKraken", - "pinned_tweet_id": "1667091290889256961", - "id": "1655515285577936899", - "created_at": "2023-05-08T10:09:55.000Z" - }, - "includes": { - "tweets": [ - { - "public_metrics": { - "retweet_count": 0, - "reply_count": 0, - "like_count": 0, - "quote_count": 0, - "bookmark_count": 0, - "impression_count": 0 - }, - "edit_history_tweet_ids": [ - "1667091290889256961" - ], - "text": "Hi", - "id": "1667091290889256961" - } - ] - } - } - ``` - -??? note "getUserById" - The twitter.getUserById method retrieves information about a single user specified by the requested ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/lookup/api-reference/get-users-id) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The ID of the user to lookup.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. 
Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getUserById> - <id>{$ctx:id}</id> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getUserById> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUserById operation. - - ```xml - <twitter.getUserById> - <id>"1655515285577936899"</id> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getUserById> - ``` - - **Sample response** - - Given below is a sample response for the getUserById operation. - - ```json - { - "data": { - "pinned_tweet_id": "1667091290889256961", - "name": "GrawKraken", - "id": "1655515285577936899", - "created_at": "2023-05-08T10:09:55.000Z", - "username": "GrawKraken" - }, - "includes": { - "tweets": [ - { - "id": "1667091290889256961", - "edit_history_tweet_ids": [ - "1667091290889256961" - ], - "public_metrics": { - "retweet_count": 0, - "reply_count": 0, - "like_count": 0, - "quote_count": 0, - "bookmark_count": 0, - "impression_count": 0 - }, - "text": "Hi" - } - ] - } - } - ``` - -??? note "getUserByUsername" - The twitter.getUserByUsername method retrieves information about a single user specified by the requested username. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/lookup/api-reference/get-users-by-username-username) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>username</td> - <td>String</td> - <td>Yes</td> - <td> The Twitter username (handle) of the user.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. 
Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getUserByUsername> - <username>{$ctx:username}</username> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getUserByUsername> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUserByUsername operation. - - ```xml - <twitter.getUserByUsername> - <username>"GrawKraken"</username> - <tweet_fields>"public_metrics"</tweet_fields> - </twitter.getUserByUsername> - ``` - - **Sample response** - - Given below is a sample response for the getUserByUsername operation. - - ```json - { - "data": { - "name": "GrawKraken", - "username": "GrawKraken", - "pinned_tweet_id": "1667091290889256961", - "id": "1655515285577936899", - "created_at": "2023-05-08T10:09:55.000Z" - }, - "includes": { - "tweets": [ - { - "public_metrics": { - "retweet_count": 0, - "reply_count": 0, - "like_count": 0, - "quote_count": 0, - "bookmark_count": 0, - "impression_count": 0 - }, - "edit_history_tweet_ids": [ - "1667091290889256961" - ], - "text": "Hi", - "id": "1667091290889256961" - } - ] - } - } - ``` - -??? note "getUsersLookup" - The twitter.getUsersLookup method retrieves information about one or more users specified by the requested IDs. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/lookup/api-reference/get-users) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>ids</td> - <td>String</td> - <td>Yes</td> - <td> A comma separated list of user IDs. Up to 100 are allowed in a single request. Make sure to not include a space between commas and fields.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. 
The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getUsersLookup> - <ids>{$ctx:ids}</ids> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getUsersLookup> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getUsersLookup operation. - - ```xml - <twitter.getUsersLookup> - <ids>"1655515285577936899,15594932"</ids> - <expansions>"pinned_tweet_id"</expansions> - <tweet_fields>"created_at"</tweet_fields> - </twitter.getUsersLookup> - ``` - - **Sample response** - - Given below is a sample response for the getUsersLookup operation. - - ```json - { - "data": [ - { - "pinned_tweet_id": "1667091290889256961", - "username": "GrawKraken", - "name": "GrawKraken", - "id": "1655515285577936899" - }, - { - "username": "wso2", - "name": "WSO2", - "id": "15594932" - } - ], - "includes": { - "tweets": [ - { - "edit_history_tweet_ids": [ - "1667091290889256961" - ], - "text": "Hi", - "created_at": "2023-06-09T08:48:31.000Z", - "id": "1667091290889256961" - } - ] - } - } - ``` - -??? note "followUser" - The twitter.followUser method follows a specified user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/follows/api-reference/post-users-source_user_id-following) for more information. 
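-
-    As a usage sketch (the operation's parameters are described in the table below), the proxy service here shows one way twitter.followUser might be wired up. It is illustrative only: the `twitter.init` element and its `accessToken` parameter are placeholders for whatever connection configuration your connector version uses, and the user IDs are assumed to arrive in the incoming JSON payload.
-
-    ```xml
-    <proxy xmlns="http://ws.apache.org/ns/synapse" name="FollowUserProxy" transports="http https" startOnLoad="true">
-        <target>
-            <inSequence>
-                <!-- Read the acting user ID and the target user ID from the incoming JSON payload -->
-                <property name="id" expression="json-eval($.id)"/>
-                <property name="target_user_id" expression="json-eval($.target_user_id)"/>
-                <!-- Placeholder connection setup; replace with your connector's actual init configuration -->
-                <twitter.init>
-                    <accessToken>REPLACE_WITH_ACCESS_TOKEN</accessToken>
-                </twitter.init>
-                <!-- Follow the target user on behalf of the authenticated user -->
-                <twitter.followUser>
-                    <id>{$ctx:id}</id>
-                    <target_user_id>{$ctx:target_user_id}</target_user_id>
-                </twitter.followUser>
-                <respond/>
-            </inSequence>
-        </target>
-    </proxy>
-    ```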
- <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the following on behalf. You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to follow.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.followUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.followUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the followUser operation. - - ```xml - <twitter.followUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.followUser> - ``` - - **Sample response** - - Given below is a sample response for the followUser operation. - - ```json - { - "data": { - "following": true, - "pending_follow": false - } - } - ``` - -??? note "getFollowingUsers" - The twitter.getFollowingUsers method retrieves a list of users who are followed by the specified user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/follows/api-reference/get-users-id-following) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose following you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and the 1000. By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. 
Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getFollowingUsers> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getFollowingUsers> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getFollowingUsers operation. - - ```xml - <twitter.getFollowingUsers> - <id>"1655515285577936899"</id> - </twitter.getFollowingUsers> - ``` - - **Sample response** - - Given below is a sample response for the getFollowingUsers operation. - - ```json - { - "data": [ - { - "pinned_tweet_id": "1293595870563381249", - "id": "6253282", - "username": "TwitterAPI", - "name": "Twitter API" - }, - { - "pinned_tweet_id": "1293593516040269825", - "id": "2244994945", - "username": "TwitterDev", - "name": "Twitter Dev" - }, - { - "id": "783214", - "username": "Twitter", - "name": "Twitter" - }, - { - "pinned_tweet_id": "1271186240323432452", - "id": "95731075", - "username": "TwitterSafety", - "name": "Twitter Safety" - }, - { - "id": "3260518932", - "username": "TwitterMoments", - "name": "Twitter Moments" - }, - { - "pinned_tweet_id": "1293216056274759680", - "id": "373471064", - "username": "TwitterMusic", - "name": "Twitter Music" - }, - { - "id": "791978718", - "username": "OfficialPartner", - "name": "Twitter Official Partner" - }, - { - "pinned_tweet_id": "1289000334497439744", - "id": "17874544", - "username": "TwitterSupport", - "name": "Twitter Support" - }, - { - "pinned_tweet_id": "1283543147444711424", - "id": "234489024", - "username": "TwitterComms", - "name": "Twitter Comms" - }, - { - "id": "1526228120", - "username": "TwitterData", - "name": "Twitter Data" - } - ] - } - ``` - -??? note "getFollowers" - The twitter.getFollowers method retrieves a list of users who are followers of the specified user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/follows/api-reference/get-users-id-followers) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose followers you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and the 1000. 
By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getFollowers> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getFollowers> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getFollowers operation. - - ```xml - <twitter.getFollowers> - <id>"1655515285577936899"</id> - </twitter.getFollowers> - ``` - - **Sample response** - - Given below is a sample response for the getFollowers operation. 
- - ```json - { - "data": [ - { - "pinned_tweet_id": "1293595870563381249", - "id": "6253282", - "username": "TwitterAPI", - "name": "Twitter API" - }, - { - "pinned_tweet_id": "1293593516040269825", - "id": "2244994945", - "username": "TwitterDev", - "name": "Twitter Dev" - }, - { - "id": "783214", - "username": "Twitter", - "name": "Twitter" - }, - { - "pinned_tweet_id": "1271186240323432452", - "id": "95731075", - "username": "TwitterSafety", - "name": "Twitter Safety" - }, - { - "id": "3260518932", - "username": "TwitterMoments", - "name": "Twitter Moments" - }, - { - "pinned_tweet_id": "1293216056274759680", - "id": "373471064", - "username": "TwitterMusic", - "name": "Twitter Music" - }, - { - "id": "791978718", - "username": "OfficialPartner", - "name": "Twitter Official Partner" - }, - { - "pinned_tweet_id": "1289000334497439744", - "id": "17874544", - "username": "TwitterSupport", - "name": "Twitter Support" - }, - { - "pinned_tweet_id": "1283543147444711424", - "id": "234489024", - "username": "TwitterComms", - "name": "Twitter Comms" - }, - { - "id": "1526228120", - "username": "TwitterData", - "name": "Twitter Data" - } - ] - } - ``` - -??? note "unfollowUser" - The twitter.unfollowUser method unfollows a specified user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/follows/api-reference/delete-users-source_id-following) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the unfollowing on behalf. You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to unfollow.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.unfollowUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.unfollowUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the unfollowUser operation. - - ```xml - <twitter.unfollowUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.unfollowUser> - ``` - - **Sample response** - - Given below is a sample response for the unfollowUser operation. - - ```json - { - "data": { - "following": false - } - } - ``` - -??? note "blockUser" - The twitter.blockUser method blocks a specified user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/blocks/api-reference/post-users-user_id-blocking) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the blocking on behalf. You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to block.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.blockUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.blockUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the blockUser operation. 
- - ```xml - <twitter.blockUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.blockUser> - ``` - - **Sample response** - - Given below is a sample response for the blockUser operation. - - ```json - { - "data": { - "blocking": true - } - } - ``` - -??? note "getBlockedUsers" - The twitter.getBlockedUsers method retrieves a list of users who are blocked by the specified user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/blocks/api-reference/get-users-blocking) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose blocked users you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and the 1000. By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. 
Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getBlockedUsers> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getBlockedUsers> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getBlockedUsers operation. - - ```xml - <twitter.getBlockedUsers> - <id>"1655515285577936899"</id> - </twitter.getBlockedUsers> - ``` - - **Sample response** - - Given below is a sample response for the getBlockedUsers operation. - - ```json - { - "data": [ - { - "pinned_tweet_id": "1293595870563381249", - "id": "6253282", - "username": "TwitterAPI", - "name": "Twitter API" - }, - { - "pinned_tweet_id": "1293593516040269825", - "id": "2244994945", - "username": "TwitterDev", - "name": "Twitter Dev" - }, - { - "id": "783214", - "username": "Twitter", - "name": "Twitter" - }, - { - "pinned_tweet_id": "1271186240323432452", - "id": "95731075", - "username": "TwitterSafety", - "name": "Twitter Safety" - }, - { - "id": "3260518932", - "username": "TwitterMoments", - "name": "Twitter Moments" - }, - { - "pinned_tweet_id": "1293216056274759680", - "id": "373471064", - "username": "TwitterMusic", - "name": "Twitter Music" - }, - { - "id": "791978718", - "username": "OfficialPartner", - "name": "Twitter Official Partner" - }, - { - "pinned_tweet_id": "1289000334497439744", - "id": "17874544", - "username": "TwitterSupport", - "name": "Twitter Support" - }, - { - "pinned_tweet_id": "1283543147444711424", - "id": "234489024", - "username": "TwitterComms", - "name": "Twitter Comms" - }, - { - "id": "1526228120", - "username": "TwitterData", - "name": "Twitter Data" - } - ] - } - ``` - -??? note "unblockUser" - The twitter.unblockUser method unblocks a specified user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/blocks/api-reference/delete-users-user_id-blocking) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the unblocking on behalf. You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to unblock.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.unblockUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.unblockUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the unblockUser operation. - - ```xml - <twitter.unblockUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.unblockUser> - ``` - - **Sample response** - - Given below is a sample response for the unblockUser operation. - - ```json - { - "data": { - "blocking": false - } - } - ``` - -??? note "muteUser" - The twitter.muteUser method mutes a specified user. 
See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/mutes/api-reference/post-users-user_id-muting) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the muting on behalf. You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to mute.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.muteUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.muteUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the muteUser operation. - - ```xml - <twitter.muteUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.muteUser> - ``` - - **Sample response** - - Given below is a sample response for the muteUser operation. - - ```json - { - "data": { - "muting": true - } - } - ``` - -??? note "getMutedUsers" - The twitter.getMutedUsers method retrieves a list of users who are muted by the specified user ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/mutes/api-reference/get-users-muting) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose muted users you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and the 1000. By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned users. The ID that represents the expanded data object will be included directly in the user data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original Tweet object. At this time, the only expansion available to endpoints that primarily return user objects is expansions=`pinned_tweet_id`. You will find the expanded Tweet data object living in the includes response object.</td> - </tr> - <tr> - <td>tweet_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific Tweet fields will deliver in each returned pinned Tweet. Specify the desired fields in a comma-separated list without spaces between commas and fields. The Tweet fields will only return if the user has a pinned Tweet and if you've also included the expansions=pinned_tweet_id query parameter in your request. 
While the referenced Tweet ID will be located in the original Tweet object, you will find this ID and all additional Tweet fields in the includes data object. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, non_public_metrics, public_metrics, organic_metrics, promoted_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with each returned users objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified user fields will display directly in the user data objects. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, verified_type, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getMutedUsers> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getMutedUsers> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getMutedUsers operation. - - ```xml - <twitter.getMutedUsers> - <id>"1655515285577936899"</id> - </twitter.getMutedUsers> - ``` - - **Sample response** - - Given below is a sample response for the getMutedUsers operation. - - ```json - { - "data": [ - { - "pinned_tweet_id": "1293595870563381249", - "id": "6253282", - "username": "TwitterAPI", - "name": "Twitter API" - }, - { - "pinned_tweet_id": "1293593516040269825", - "id": "2244994945", - "username": "TwitterDev", - "name": "Twitter Dev" - }, - { - "id": "783214", - "username": "Twitter", - "name": "Twitter" - }, - { - "pinned_tweet_id": "1271186240323432452", - "id": "95731075", - "username": "TwitterSafety", - "name": "Twitter Safety" - }, - { - "id": "3260518932", - "username": "TwitterMoments", - "name": "Twitter Moments" - }, - { - "pinned_tweet_id": "1293216056274759680", - "id": "373471064", - "username": "TwitterMusic", - "name": "Twitter Music" - }, - { - "id": "791978718", - "username": "OfficialPartner", - "name": "Twitter Official Partner" - }, - { - "pinned_tweet_id": "1289000334497439744", - "id": "17874544", - "username": "TwitterSupport", - "name": "Twitter Support" - }, - { - "pinned_tweet_id": "1283543147444711424", - "id": "234489024", - "username": "TwitterComms", - "name": "Twitter Comms" - }, - { - "id": "1526228120", - "username": "TwitterData", - "name": "Twitter Data" - } - ] - } - ``` - -??? note "unmuteUser" - The twitter.unmuteUser method unmutes a specified user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/users/mutes/api-reference/delete-users-user_id-muting) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The authenticated user ID of whom you would like to initiate the unmuting on behalf. 
You must pass the Access Tokens that relate to this user when authenticating the request.</td> - </tr> - <tr> - <td>target_user_id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID of the user that you would like to unmute.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.unmuteUser> - <id>{$ctx:id}</id> - <target_user_id>{$ctx:target_user_id}</target_user_id> - </twitter.unmuteUser> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the unmuteUser operation. - - ```xml - <twitter.unmuteUser> - <id>"1655515285577936899"</id> - <target_user_id>"15594932"</target_user_id> - </twitter.unmuteUser> - ``` - - **Sample response** - - Given below is a sample response for the unmuteUser operation. - - ```json - { - "data": { - "muting": false - } - } - ``` ---- - -## Working with Lists - -The following operations allow you to work with lists in Twitter. To be authorized for the following endpoints, you will need an access token with the correct scopes. Please refer the [Twitter authentication map](https://developer.twitter.com/en/docs/authentication/guides/v2-authentication-mapping) to get the required scopes for the access token. - -??? note "createList" - The twitter.createList method creates a new list for the authenticated user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/manage-lists/api-reference/post-lists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>name</td> - <td>String</td> - <td>Yes</td> - <td>The name of the List you wish to create.</td> - </tr> - <tr> - <td>description</td> - <td>String</td> - <td>No</td> - <td>Description of the List.</td> - </tr> - <tr> - <td>private</td> - <td>Boolean</td> - <td>No</td> - <td>Determine whether the List should be private.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.createList> - <name>{$ctx:name}</name> - <description>{$ctx:description}</description> - <private>{$ctx:private}</private> - </twitter.createList> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the createList operation. - - ```xml - <twitter.createList> - <name>"test list"</name> - <description>"list for testing"</description> - <private>true</private> - </twitter.createList> - ``` - - **Sample response** - - Given below is a sample response for the createList operation. - - ```json - { - "data": { - "id": "1667124005638397955", - "name": "test list" - } - } - ``` - -??? note "updateList" - The twitter.updateList method updates an existing list for the authenticated user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/manage-lists/api-reference/put-lists-id) for more information. 
- <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The ID of the List to be updated.</td> - </tr> - <tr> - <td>name</td> - <td>String</td> - <td>No</td> - <td>The new name of the List you wish to update.</td> - </tr> - <tr> - <td>description</td> - <td>String</td> - <td>No</td> - <td>Description of the List.</td> - </tr> - <tr> - <td>private</td> - <td>Boolean</td> - <td>No</td> - <td>Determine whether the List should be private.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.updateList> - <id>{$ctx:id}</id> - <name>{$ctx:name}</name> - <description>{$ctx:description}</description> - <private>{$ctx:private}</private> - </twitter.updateList> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the updateList operation. - - ```xml - <twitter.updateList> - <id>"1669209684962865153"</id> - <description>"list for testing"</description> - <private>true</private> - </twitter.updateList> - ``` - - **Sample response** - - Given below is a sample response for the updateList operation. - - ```json - { - "data": { - "updated": true - } - } - ``` - -??? note "deleteList" - The twitter.deleteList method deletes a list for the authenticated user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/manage-lists/api-reference/delete-lists-id) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The ID of the List you wish to delete.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.deleteList> - <id>{$ctx:id}</id> - </twitter.deleteList> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the deleteList operation. - - ```xml - <twitter.deleteList> - <id>"1669209684962865153"</id> - </twitter.deleteList> - ``` - - **Sample response** - - Given below is a sample response for the deleteList operation. - - ```json - { - "data": { - "deleted": true - } - } - ``` - -??? note "getListById" - The twitter.getListById method retrieves information about a single list specified by the requested ID. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/list-lookup/api-reference/get-lists-id) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The ID of the list to lookup.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned List. The ID that represents the expanded data object will be included directly in the List data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original user object. At this time, the only expansion available to endpoints that primarily return List objects is expansions=`owner_id`. You will find the expanded user data object living in the includes response object.</td> - </tr> - <tr> - <td>list_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific List fields will deliver with each returned List objects. 
Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified List fields will display directly in the List data objects. Valid values for this parameter are: `created_at, follower_count, member_count, private, description, owner_id`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with the users object. Specify the desired fields in a comma-separated list without spaces between commas and fields. The user fields will only be returned if you have included expansions=owner_id query parameter in your request. You will find this ID and all additional user fields in the included data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getListById> - <id>{$ctx:id}</id> - <expansions>{$ctx:expansions}</expansions> - <list_fields>{$ctx:list_fields}</list_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getListById> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getListById operation. - - ```xml - <twitter.getListById> - <id>"1667124005638397955"</id> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getListById> - ``` - - **Sample response** - - Given below is a sample response for the getListById operation. - - ```json - { - "data": { - "id": "1667124005638397955", - "name": "test list", - "owner_id": "1655515285577936899" - }, - "includes": { - "users": [ - { - "id": "1655515285577936899", - "name": "GrawKraken", - "username": "GrawKraken" - } - ] - } - } - ``` -??? note "getFollowingLists" - The twitter.getFollowingLists method retrieves all lists the authenticating or specified user is following, including their own. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/list-follows/api-reference/get-users-id-followed_lists) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose followed Lists you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and 100. By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned List. The ID that represents the expanded data object will be included directly in the List data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original user object. 
At this time, the only expansion available to endpoints that primarily return List objects is expansions=`owner_id`. You will find the expanded user data object living in the includes response object.</td> - </tr> - <tr> - <td>list_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific List fields will deliver with each returned List objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified List fields will display directly in the List data objects. Valid values for this parameter are: `created_at, follower_count, member_count, private, description, owner_id`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with the users object. Specify the desired fields in a comma-separated list without spaces between commas and fields. The user fields will only be returned if you have included expansions=owner_id query parameter in your request. You will find this ID and all additional user fields in the included data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getFollowingLists> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <list_fields>{$ctx:list_fields}</list_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getFollowingLists> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getFollowingLists operation. - - ```xml - <twitter.getFollowingLists> - <id>"1655515285577936899"</id> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getFollowingLists> - ``` - - **Sample response** - - Given below is a sample response for the getFollowingLists operation. - - ```json - { - "data": [ - { - "follower_count": 123, - "id": "1630685563471", - "name": "Test List", - "owner_id": "1324848235714736129" - } - ], - "includes": { - "users": [ - { - "username": "alanbenlee", - "id": "1324848235714736129", - "created_at": "2009-08-28T18:30:45.000Z", - "name": "Alan Lee" - } - ] - }, - "meta": { - "result_count": 1 - } - } - ``` - -??? note "getListsMemberships" - The twitter.getListsMemberships method retrieves all Lists a specified user is a member of. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/lists/list-members/api-reference/get-users-id-list_memberships) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>id</td> - <td>String</td> - <td>Yes</td> - <td>The user ID whose List memberships you would like to retrieve.</td> - </tr> - <tr> - <td>max_results</td> - <td>Integer</td> - <td>No</td> - <td>The maximum number of results to be returned per page. This can be a number between 1 and 100. By default, each page will return 100 results.</td> - </tr> - <tr> - <td>pagination_token</td> - <td>String</td> - <td>No</td> - <td>Used to request the next page of results if all results weren't returned with the latest request, or to go back to the previous page of results. 
To return the next page, pass the next_token returned in your previous response. To go back one page, pass the previous_token returned in your previous response.</td> - </tr> - <tr> - <td>expansions</td> - <td>String</td> - <td>No</td> - <td> Expansions enable you to request additional data objects that relate to the originally returned List. The ID that represents the expanded data object will be included directly in the List data object, but the expanded object metadata will be returned within the includes response object, and will also include the ID so that you can match this data object to the original user object. At this time, the only expansion available to endpoints that primarily return List objects is expansions=`owner_id`. You will find the expanded user data object living in the includes response object.</td> - </tr> - <tr> - <td>list_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific List fields will deliver with each returned List objects. Specify the desired fields in a comma-separated list without spaces between commas and fields. These specified List fields will display directly in the List data objects. Valid values for this parameter are: `created_at, follower_count, member_count, private, description, owner_id`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will deliver with the users object. Specify the desired fields in a comma-separated list without spaces between commas and fields. The user fields will only be returned if you have included expansions=owner_id query parameter in your request. You will find this ID and all additional user fields in the included data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getListsMemberships> - <id>{$ctx:id}</id> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <list_fields>{$ctx:list_fields}</list_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getListsMemberships> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getListsMemberships operation. - - ```xml - <twitter.getListsMemberships> - <id>"1655515285577936899"</id> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getListsMemberships> - ``` - - **Sample response** - - Given below is a sample response for the getListsMemberships operation. - - ```json - { - "data": [ - { - "description": "list for editing and testing", - "id": "1667130158023860224", - "name": "test listss", - "owner_id": "1655515285577936899" - } - ], - "includes": { - "users": [ - { - "id": "1655515285577936899", - "name": "GrawKraken", - "username": "GrawKraken", - "created_at": "2023-05-08T10:09:55.000Z" - } - ] - }, - "meta": { - "result_count": 1 - } - } - ``` ---- - -## Working with Direct Messages - -The following operations allow you to work with direct messages in Twitter. To be authorized for the following endpoints, you will need an access token with the correct scopes. 
Please refer the [Twitter authentication map](https://developer.twitter.com/en/docs/authentication/guides/v2-authentication-mapping) to get the required scopes for the access token. - -??? note "sendNewDirectMessage" - The twitter.sendNewDirectMessage method sends a new direct message to the specified user from the authenticating user. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/direct-messages/manage/api-reference/post-dm_conversations-with-participant_id-messages) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>participant_id</td> - <td>String</td> - <td>Yes</td> - <td>The User ID of the account this one-to-one Direct Message is to be sent to.</td> - </tr> - <tr> - <td>attachments</td> - <td>String</td> - <td>Yes if text is not present</td> - <td>A single Media ID being attached to the Direct Message. Currently, Twitter supports only 1 attachment.</td> - </tr> - <tr> - <td>text</td> - <td>String</td> - <td>Yes if attachments is not present</td> - <td>Text of the Direct Message being created. Text messages support up to 10,000 characters.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.sendNewDirectMessage> - <participant_id>{$ctx:participant_id}</participant_id> - <attachments>{$ctx:attachments}</attachments> - <text>{$ctx:text}</text> - </twitter.sendNewDirectMessage> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the sendNewDirectMessage operation. - - ```xml - <twitter.sendNewDirectMessage> - <participant_id>"1668111685234708487"</participant_id> - <text>"Test message!"</text> - </twitter.sendNewDirectMessage> - ``` - - **Sample response** - - Given below is a sample response for the sendNewDirectMessage operation. - - ```json - { - "data": { - "dm_conversation_id": "1655515285577936899-1668111685234708487", - "dm_event_id": "1668112397700067333" - } - } - ``` - -??? note "addDirectMessage" - The twitter.addDirectMessage method creates a Direct Message on behalf of an authenticated user, and adds it to the specified conversation. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/direct-messages/manage/api-reference/post-dm_conversations-dm_conversation_id-messages) for more information. - <table> - <tr> - <th>Parameter Name</th> - <th>Type</th> - <th>Required</th> - <th>Description</th> - </tr> - <tr> - <td>dm_conversation_id</td> - <td>String</td> - <td>Yes</td> - <td>The dm_conversation_id of the conversation to add the Direct Message to. Supports both 1-1 and group conversations.</td> - </tr> - <tr> - <td>attachments</td> - <td>String</td> - <td>Yes if text is not present</td> - <td>A single Media ID being attached to the Direct Message. Currently, Twitter supports only 1 attachment.</td> - </tr> - <tr> - <td>text</td> - <td>String</td> - <td>Yes if attachments is not present</td> - <td>Text of the Direct Message being created. Text messages support up to 10,000 characters.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.addDirectMessage> - <dm_conversation_id>{$ctx:dm_conversation_id}</dm_conversation_id> - <attachments>{$ctx:attachments}</attachments> - <text>{$ctx:text}</text> - </twitter.addDirectMessage> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the addDirectMessage operation. 
-
-    ```xml
-    <twitter.addDirectMessage>
-        <dm_conversation_id>"1655515285577936899-1668111685234708487"</dm_conversation_id>
-        <text>"Second Test message!"</text>
-    </twitter.addDirectMessage>
-    ```

-    **Sample response**

-    Given below is a sample response for the addDirectMessage operation.

-    ```json
-    {
-      "data": {
-        "dm_conversation_id": "1655515285577936899-1668111685234708487",
-        "dm_event_id": "1668112397700067333"
-      }
-    }
-    ```

-??? note "getDirectMessages"
-    The twitter.getDirectMessages method retrieves a list of Direct Messages for the authenticated user, both sent and received. See the [related API documentation](https://developer.twitter.com/en/docs/twitter-api/direct-messages/lookup/api-reference/get-dm_events) for more information.
-    <table>
-      <tr>
-        <th>Parameter Name</th>
-        <th>Type</th>
-        <th>Required</th>
-        <th>Description</th>
-      </tr>
-      <tr>
-        <td>event_types</td>
-        <td>String</td>
-        <td>No</td>
-        <td>The type of Direct Message event to return. If not included, all types are returned. Valid values for this parameter are: `MessageCreate, ParticipantsJoin, ParticipantsLeave`.</td>
-      </tr>
-      <tr>
-        <td>max_results</td>
-        <td>Integer</td>
-        <td>No</td>
-        <td>The maximum number of results to be returned in a page. Must be between 1 and 100. The default is 100.</td>
-      </tr>
-      <tr>
-        <td>pagination_token</td>
-        <td>String</td>
-        <td>No</td>
-        <td>Contains either the next_token or previous_token value.</td>
-      </tr>
-      <tr>
-        <td>expansions</td>
-        <td>String</td>
-        <td>No</td>
-        <td>Expansions enable you to request additional data objects that relate to the returned Direct Message conversation events. Submit a list of desired expansions in a comma-separated list without spaces. The IDs that represent the expanded data objects will be included directly in the event data object, and the expanded object metadata will be returned within the includes response object. Valid values for this parameter are: `attachments.media_keys, referenced_tweets.id, sender_id, participant_ids`.</td>
-      </tr>
-      <tr>
-        <td>dm_event_fields</td>
-        <td>String</td>
-        <td>No</td>
-        <td>Extra fields to include in the event payload. id and event_type are returned by default. The text value isn't included for ParticipantsJoin and ParticipantsLeave events. Valid values for this parameter are: `id, text, event_type, created_at, dm_conversation_id, sender_id, participant_ids, referenced_tweets, attachments`.</td>
-      </tr>
-      <tr>
-        <td>media_fields</td>
-        <td>String</td>
-        <td>No</td>
-        <td>This fields parameter enables you to select which specific media fields will be delivered in Direct Message 'MessageCreate' events. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the media ID will be located in the event object, you will find this ID and all additional media fields in the includes data object. The event object will only include media fields if the Direct Message contains media and if you've also included the expansions=attachments.media_keys query parameter in your request. Valid values for this parameter are: `duration_ms, height, media_key, preview_image_url, type, url, width, public_metrics, alt_text, variants`.</td>
-      </tr>
-      <tr>
-        <td>tweet_fields</td>
-        <td>String</td>
-        <td>No</td>
-        <td>This fields parameter enables you to select which specific Tweet fields will be delivered in each returned Direct Message 'MessageCreate' event object that contains a Tweet reference.
Specify the desired fields in a comma-separated list without spaces between commas and fields. While the Tweet ID will be in the event object, you will find this ID and all additional Tweet fields in the includes data object. The event object will include Tweet fields only if the Direct Message references a Tweet and the expansions=referenced_tweet.id query parameter is included in the request. Valid values for this parameter are: `attachments, author_id, context_annotations, conversation_id, created_at, edit_controls, entities, geo, id, in_reply_to_user_id, lang, public_metrics, possibly_sensitive, referenced_tweets, reply_settings, source, text, withheld`.</td> - </tr> - <tr> - <td>user_fields</td> - <td>String</td> - <td>No</td> - <td>This fields parameter enables you to select which specific user fields will be delivered for Direct Message conversation events that reference a sender or participant ID. Specify the desired fields in a comma-separated list without spaces between commas and fields. While the user ID will be located in the event object, you will find this ID and all additional user fields in the includes data object. Valid values for this parameter are: `created_at, description, entities, id, location, name, pinned_tweet_id, profile_image_url, protected, public_metrics, url, username, verified, withheld`.</td> - </tr> - </table> - - **Sample configuration** - - ```xml - <twitter.getDirectMessages> - <event_types>{$ctx:event_types}</event_types> - <max_results>{$ctx:max_results}</max_results> - <pagination_token>{$ctx:pagination_token}</pagination_token> - <expansions>{$ctx:expansions}</expansions> - <dm_event_fields>{$ctx:dm_event_fields}</dm_event_fields> - <media_fields>{$ctx:media_fields}</media_fields> - <tweet_fields>{$ctx:tweet_fields}</tweet_fields> - <user_fields>{$ctx:user_fields}</user_fields> - </twitter.getDirectMessages> - ``` - - **Sample request** - - Given below is a sample request that can be handled by the getDirectMessages operation. - - ```xml - <twitter.getDirectMessages> - <dm_event_fields>"event_type,sender_id"</dm_event_fields> - <user_fields>"created_at,username,id,name"</user_fields> - </twitter.getDirectMessages> - ``` - - **Sample response** - - Given below is a sample response for the getDirectMessages operation. - - ```json - { - "data": [ - { - "text": "Test message!", - "id": "1668113164393672708", - "sender_id": "1655515285577936899", - "event_type": "MessageCreate" - }, - { - "text": "Test DM", - "id": "1668112842107547653", - "sender_id": "1655515285577936899", - "event_type": "MessageCreate" - } - ], - "meta": { - "result_count": 2 - } - } - ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/utility-module/utility-module-config.md b/en/docs/reference/connectors/utility-module/utility-module-config.md deleted file mode 100644 index 2c316189d8..0000000000 --- a/en/docs/reference/connectors/utility-module/utility-module-config.md +++ /dev/null @@ -1,521 +0,0 @@ -# Utility Module Reference - -The Utility Module in WSO2 Enterprise Integrator helps to perform basic utility functions such as math, string, date, and signature. The connector will compute the result and save it to a property. - -The following operations can be performed with this module. - -## string.Length - -You can use the `string.Length` operation to retrieve the length of a string. 
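-
-The computed length is saved to the configured target property, which you can read later in the mediation sequence. The following is a minimal sketch (assuming the default `length` target property; the `computedLength` log label is illustrative):
-
-```xml
-<utility.string.Length>
-    <inputString>{json-eval($.string)}</inputString>
-    <target>length</target>
-</utility.string.Length>
-<!-- Read the saved property; the property name 'length' matches the target above -->
-<log level="custom">
-    <property name="computedLength" expression="$ctx:length"/>
-</log>
-```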
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Input String</td>
-        <td>inputString</td>
-        <td></td>
-        <td>The string whose length should be calculated. The string can contain any characters, and whitespace characters are also counted toward the length.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>length</code></td>
-        <td>Specify the property name to which the result should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"utility module"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.string.Length>
-        <inputString>{json-eval($.string)}</inputString>
-        <target>length</target>
-    </utility.string.Length>
-    ```
-
-=== "Response"
-    ```
-    length=14
-    ```
-
-## string.LowerCase
-
-You can use the `string.LowerCase` operation to convert a string to lowercase.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Input String</td>
-        <td>inputString</td>
-        <td></td>
-        <td>The string that needs to be transformed to lowercase.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>lower</code></td>
-        <td>Specify the property name to which the result should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"UTILITY MODULE"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.string.LowerCase>
-        <inputString>{json-eval($.string)}</inputString>
-        <target>lowercase</target>
-    </utility.string.LowerCase>
-    ```
-
-=== "Response"
-    ```
-    lowercase="utility module"
-    ```
-
-## string.UpperCase
-
-You can use the `string.UpperCase` operation to convert a string to uppercase.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Input String</td>
-        <td>inputString</td>
-        <td></td>
-        <td>The string that needs to be transformed to uppercase.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>upper</code></td>
-        <td>Specify the property name to which the result should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"utility module"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.string.UpperCase>
-        <inputString>{json-eval($.string)}</inputString>
-        <target>uppercase</target>
-    </utility.string.UpperCase>
-    ```
-
-=== "Response"
-    ```
-    uppercase="UTILITY MODULE"
-    ```
-
-## string.RegexMatcher
-
-You can use the `string.RegexMatcher` operation to check whether a given string is in the desired format. It returns true if the string matches the given regular expression (regex).
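-
-Because the result is saved to the target property, you can branch on it later in the sequence. The following is a minimal sketch (assuming a target property named `isMatching`; the regex, Filter mediator branches, and log values are illustrative):
-
-```xml
-<utility.string.RegexMatcher>
-    <regex>^[a-z ]+$</regex>
-    <inputString>{json-eval($.string)}</inputString>
-    <target>isMatching</target>
-</utility.string.RegexMatcher>
-<!-- Branch on the saved result property -->
-<filter xpath="$ctx:isMatching = 'true'">
-    <then>
-        <log level="custom"><property name="result" value="input matches the pattern"/></log>
-    </then>
-    <else>
-        <log level="custom"><property name="result" value="input does not match"/></log>
-    </else>
-</filter>
-```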
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Input String</td>
-        <td>inputString</td>
-        <td></td>
-        <td>The string that needs to be checked against the regular expression.</td>
-    </tr>
-    <tr>
-        <td>Regular Expression</td>
-        <td>regex</td>
-        <td></td>
-        <td>The regular expression that the string should match.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>match</code></td>
-        <td>Specify the property name to which the result should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"utility module"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.string.RegexMatcher>
-        <regex>u.*m.*e</regex>
-        <inputString>{json-eval($.string)}</inputString>
-        <target>isMatching</target>
-    </utility.string.RegexMatcher>
-    ```
-
-=== "Response"
-    ```
-    isMatching="true"
-    ```
-
-## string.UUID
-
-You can use the `string.UUID` operation to generate a random UUID.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>uuid</code></td>
-        <td>Specify the property name to which the generated random UUID should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample configuration and response.
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.string.UUID>
-        <target>uuid</target>
-    </utility.string.UUID>
-    ```
-
-=== "Response"
-    ```
-    uuid="07801d34-bbaf-43aa-8d70-98b4ead1b198"
-    ```
-
-## date.GetDate
-
-You can use the `date.GetDate` operation to get the current date and time in a preferred date format.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Date Format</td>
-        <td>format</td>
-        <td><code>yyyy-MM-dd HH:mm:ss</code></td>
-        <td>The format in which the date is needed. Refer to the Java date format patterns.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>date</code></td>
-        <td>Specify the property name to which the current date should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample Synapse configuration and response.
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.date.GetDate>
-        <format>yy/MM/dd HH:mm:ss</format>
-        <target>date</target>
-    </utility.date.GetDate>
-    ```
-
-=== "Response"
-    ```
-    date="22/02/01 08:32:40"
-    ```
-
-## math.GetRandomInt
-
-You can use the `math.GetRandomInt` operation to get a random integer in a given range.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Lower Bound</td>
-        <td>lowerBound</td>
-        <td></td>
-        <td>Lower bound for the random integer. If it is kept blank, the lower bound will be considered as 0.</td>
-    </tr>
-    <tr>
-        <td>Upper Bound</td>
-        <td>upperBound</td>
-        <td></td>
-        <td>Upper bound for the random integer.
If it is kept blank, the upper bound will be considered as infinity.</td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>random</code></td>
-        <td>Specify the property name to which the generated random integer should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample Synapse configuration and response.
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.math.GetRandomInt>
-        <lowerBound>100</lowerBound>
-        <upperBound>1000</upperBound>
-        <target>random</target>
-    </utility.math.GetRandomInt>
-    ```
-
-=== "Response"
-    ```
-    random=785
-    ```
-
-## signature.Generate
-
-You can use the `signature.Generate` operation to generate an HMAC signature for the payload of the request.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Payload</td>
-        <td>payload</td>
-        <td><code>Body</code></td>
-        <td>Dropdown menu to select whether the payload is taken from the body of the request or from a custom payload.<br/>The following are the supported HTTP MIME types:<br/><code>application/json</code><br/><code>application/xml</code><br/><code>text/plain</code></td>
-    </tr>
-    <tr>
-        <td>Custom Payload</td>
-        <td>customPayload</td>
-        <td></td>
-        <td>The field to enter a custom payload when the payload is selected as <code>Custom Payload</code>.</td>
-    </tr>
-    <tr>
-        <td>Secret</td>
-        <td>secret</td>
-        <td></td>
-        <td>The secret used to generate the signature for the payload using an algorithm.</td>
-    </tr>
-    <tr>
-        <td>Algorithm</td>
-        <td>algorithm</td>
-        <td><code>HMACSHA1</code></td>
-        <td>The algorithm that is used to generate the signature.<br/>The following are the supported algorithms:<br/><code>HMACSHA1</code><br/><code>HMACSHA256</code><br/><code>HMACSHA384</code><br/><code>HMACSHA512</code><br/><code>HMACMD5</code></td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>sign</code></td>
-        <td>Specify the property name to which the signature should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"utility module"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.signature.Generate>
-        <payload>Body</payload>
-        <secret>123</secret>
-        <algorithm>HMACSHA1</algorithm>
-        <target>signature</target>
-    </utility.signature.Generate>
-    ```
-
-=== "Response"
-    ```
-    signature="32423411140bdebed0b017e738797be452481dbb"
-    ```
-
-## signature.Verify
-
-You can use the `signature.Verify` operation to verify the payload using the HMAC signature in the header of the request. This ensures that the payload has not been modified.
-
-### Operation details
-
-<table>
-<thead>
-    <tr>
-        <th><b>Name</b></th>
-        <th><b>Parameter</b></th>
-        <th><b>Default Value</b></th>
-        <th><b>Description</b></th>
-    </tr>
-</thead>
-<tbody>
-    <tr>
-        <td>Payload</td>
-        <td>payload</td>
-        <td><code>Body</code></td>
-        <td>Dropdown menu to select whether the payload is taken from the body of the request or from a custom payload.<br/>The following are the supported HTTP MIME types:<br/><code>application/json</code><br/><code>application/xml</code><br/><code>text/plain</code></td>
-    </tr>
-    <tr>
-        <td>Custom Payload</td>
-        <td>customPayload</td>
-        <td></td>
-        <td>The field to enter a custom payload when the payload is selected as <code>Custom Payload</code>.</td>
-    </tr>
-    <tr>
-        <td>Signature</td>
-        <td>signature</td>
-        <td></td>
-        <td>The HMAC signature of the payload.</td>
-    </tr>
-    <tr>
-        <td>Secret</td>
-        <td>secret</td>
-        <td></td>
-        <td>The secret used to generate the signature for the payload using an algorithm.</td>
-    </tr>
-    <tr>
-        <td>Algorithm</td>
-        <td>algorithm</td>
-        <td><code>HMACSHA1</code></td>
-        <td>The algorithm that is used to generate the signature.<br/>The following are the supported algorithms:<br/><code>HMACSHA1</code><br/><code>HMACSHA256</code><br/><code>HMACSHA384</code><br/><code>HMACSHA512</code><br/><code>HMACMD5</code></td>
-    </tr>
-    <tr>
-        <td>Target Property</td>
-        <td>target</td>
-        <td><code>verify</code></td>
-        <td>Specify the property name to which the verification result should be saved.</td>
-    </tr>
-</tbody>
-</table>
-
-### Sample configuration
-
-The following is a sample request, Synapse configuration, and response for the given request.
-
-=== "Request"
-    ```
-    {"string":"utility module"}
-    ```
-
-=== "Synapse Configuration"
-    ```xml
-    <utility.signature.Verify>
-        <payload>Body</payload>
-        <signature>32423411140bdebed0b017e738797be452481dbb</signature>
-        <secret>123</secret>
-        <algorithm>HMACSHA1</algorithm>
-        <target>verify</target>
-    </utility.signature.Verify>
-    ```
-
-=== "Response"
-    ```
-    verify="true"
-    ```
diff --git a/en/docs/reference/connectors/utility-module/utility-module-overview.md b/en/docs/reference/connectors/utility-module/utility-module-overview.md
deleted file mode 100644
index dcd7bc03b9..0000000000
--- a/en/docs/reference/connectors/utility-module/utility-module-overview.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Utility Module Overview
-
-The Utility Module allows you to do the following.
-
-- Perform math functions (e.g., generate a random integer).
-- Perform string functions (e.g., transform a string to uppercase or lowercase).
-- Obtain the length of a string.
-- Generate a random UUID.
-- Check a string against a regular expression.
-
-To access the Utility Module, navigate to the [Connector Store](https://store.wso2.com/store/assets/esbconnector/list) and search for `Utility`.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/utility-store.png" title="Utility Module" width="200" alt="Utility Module logo in Connector Store"/>
-
-## Compatibility
-
-| **Connector version** | **Supported Product Versions** |
-| ------------- |------------- |
-| [1.0.1](https://github.com/wso2-extensions/mediation-utility-module) | MI 4.1.0<br/>MI 4.0.0 |
-| [1.0.0](https://github.com/wso2-extensions/mediation-utility-module) | MI 4.1.0<br/>MI 4.0.0 |
-
-For older versions, see the details in the Connector Store.
-
-## Utility Module documentation
-
-* **[Utility Module Reference]({{base_path}}/reference/connectors/utility-module/utility-module-config/)**: This documentation provides a reference guide for the Utility Module.
-
-## How to contribute
-
-As an open source project, WSO2 extensions welcome contributions from the community.
-
-- To contribute to the code for this connector, please create a pull request in the following repository.
-    - [Utility Module GitHub repository](https://github.com/wso2-extensions/mediation-utility-module)
-
-- Check the issue tracker for open issues that interest you.
-
-WSO2 looks forward to receiving your contributions.
diff --git a/en/docs/reference/connectors/why-connectors.md b/en/docs/reference/connectors/why-connectors.md
deleted file mode 100644
index a3a968bbb5..0000000000
--- a/en/docs/reference/connectors/why-connectors.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# When to Use Integration Connectors
-
-A connector is a set of operations that can be used in an integration flow to access a specific service or functionality. This can be a third-party HTTP API, a remote SOAP service, a legacy system with a proprietary protocol, or even a local library function.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/why-connectors.png" title="Why Connectors" width="500" alt="Why Connectors"/>
-
-## Why and when connectors are useful
-
-### Enables hybrid integration
-
-Hybrid integration is a popular topic in the integration arena due to the rapid growth of cloud computing and cloud platforms. In the past, integration was limited to on-premise applications, and the platform could only provide the functionality available in on-premise systems. Nowadays, SaaS applications enable a much broader application landscape. From established on-premise systems to newly adopted software-as-a-service (SaaS) applications, integration is a critical, yet increasingly complicated, step toward digital business transformation.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/why-connectors2.png" title="Hybrid Integration" width="500" alt="Hybrid Integration"/>
-
-When the WSO2 integration runtime is used as the integration core, connectors are the enablers of hybrid integration. Different SaaS applications have different connectors, and they all fit into the same integration runtime. Mediators, endpoints, and data services that integrate on-premise data also fit into the integration runtime. As a result, the WSO2 integration platform as a whole becomes a bridge between on-premise and cloud applications.
-
-### Reusable modules
-
-Imagine that, within your enterprise, you need to connect to a legacy system with a custom protocol. Several teams work on different integrations, but they all need to connect to that system. One team can develop a software module that enables the WSO2 integration runtime to connect to that system. The question is: how can they develop it so that other teams can also reuse it?
-
-Developing a connector is the solution. The other teams can use the operations it exposes in whatever way they need. Connectors are like libraries for the mediation engine. A connector project can be versioned and maintained, and when integration logic is compiled into a deployable artifact, the relevant connector versions can be imported.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/why-connectors3.png" title="Reusable modules" width="500" alt="Reusable modules"/>
-
-CAppA and CAppB are developed by two different teams and contain different integration logic. However, underneath, they share the same connector.
-
-### Legacy modernization and custom integration
-
-The WSO2 integration runtime is shipped with a set of inbound and outbound transports and with support for popular protocols (i.e., HTTP, JMS, AMQP, and SMTP).
However, there are instances where it needs to support custom protocols or custom logic that the runtime does not support by default. In such instances, writing a connector is one of the available extension points, and developers can plug connectors in and out as needed.
-
-This brings the capability to fulfill legacy modernization requirements, of which brownfield integration is a major part. As enterprises look to accelerate their digital transformation, they tend to integrate new technologies with legacy technologies rather than waiting until all legacy technologies are replaced with new ones.
-
-## WSO2 Integration Studio support
-
-WSO2 Integration Studio is the tooling (IDE) that developers use to code their integration logic. Integration connectors can be easily imported and immediately used in WSO2 Integration Studio. When the connector UI model is provided with the connector, all custom operations and their properties are rendered in WSO2 Integration Studio automatically. As a result, any integration developer can use the connector in WSO2 Integration Studio without extra effort from the connector developer, as long as the connector development rules are met.
-
-<img src="{{base_path}}/assets/img/integrate/connectors/why-connectors4.png" title="Integration Studio Connectors" width="600" alt="Integration Studio Connectors"/>
-
-The operations of an imported connector are listed on the right-side panel. Developers can drag and drop connector operations to construct the integration logic. Input parameters for the operations can be provided as static values or expressions using the properties panel that appears when a connector operation is clicked.
-
-## What's Next?
-
-* [Learn how to write a connector from scratch]({{base_path}}/reference/connectors/develop-connectors/)
-* Publication process for connectors
-* [Connector best practices and Integration Studio]({{base_path}}/reference/connectors/connector-usage/)
\ No newline at end of file
diff --git a/en/docs/reference/mediators/about-mediators.md b/en/docs/reference/mediators/about-mediators.md
deleted file mode 100644
index dcd11b5f01..0000000000
--- a/en/docs/reference/mediators/about-mediators.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# About Mediators
-
-Mediators are individual processing units that perform a specific function on messages that pass through the Micro Integrator. A mediator takes the message received by the proxy service or REST API, carries out some predefined actions on it (such as transforming, enriching, or filtering), and outputs the modified message.
-
-For example, the [Clone]({{base_path}}/reference/mediators/clone-mediator) mediator splits a message into several clones, the [Send]({{base_path}}/reference/mediators/send-mediator) mediator sends the messages, and the [Aggregate]({{base_path}}/reference/mediators/aggregate-mediator) mediator collects and merges the responses before sending them back to the client.
-
-Mediators also include functionality to match incompatible protocols, data formats, and interaction patterns across different resources. [XQuery]({{base_path}}/reference/mediators/xquery-mediator) and [XSLT]({{base_path}}/reference/mediators/xslt-mediator) mediators allow rich transformations on the messages. Content-based routing using XPath filtering is supported in different flavors, giving users the most convenient configuration experience. The built-in capability to handle transactions allows message mediation to be done transactionally inside the Micro Integrator.
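-
-For example, the following is a minimal sketch of a sequence that chains a few mediators together (the sequence name and property values are hypothetical; the mediators used are described in the list further below):
-
-```xml
-<sequence name="SimpleLogSequence" xmlns="http://ws.apache.org/ns/synapse">
-    <!-- Property mediator: attaches a value to the message context (content-unaware) -->
-    <property name="origin" value="qsg-demo" scope="default"/>
-    <!-- Log mediator: logs only a custom property, so it stays content-unaware -->
-    <log level="custom">
-        <property name="status" value="message received"/>
-    </log>
-    <!-- Respond mediator: sends the message back to the client -->
-    <respond/>
-</sequence>
-```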
-
-Mediators are always defined within a [mediation sequence]({{base_path}}/reference/synapse-properties/sequence-properties).
-
-## Classification of Mediators
-
-Mediators are classified as follows based on whether or not they access the message's content:
-
-<table>
-  <col width="140">
-  <tr>
-    <th>Classification</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td><b>Content-Aware</b> mediators</td>
-    <td>
-      These mediators always access the message content when mediating messages (e.g., the <a href="{{base_path}}/reference/mediators/enrich-mediator">Enrich</a> mediator).
-    </td>
-  </tr>
-  <tr>
-    <td><b>Content-Unaware</b> mediators</td>
-    <td>
-      These mediators never access the message content when mediating messages (e.g., the <a href="{{base_path}}/reference/mediators/send-mediator">Send</a> mediator).
-    </td>
-  </tr>
-  <tr>
-    <td><b>Conditionally Content-Aware</b> mediators</td>
-    <td>
-      These mediators can be either content-aware or content-unaware depending on their exact instance configuration. For example, a simple <a href="{{base_path}}/reference/mediators/log-mediator">Log</a> mediator instance (i.e., configured as <code>&lt;log/&gt;</code>) is content-unaware. However, a Log mediator configured as <code>&lt;log level="full"/&gt;</code> would be content-aware, since it is expected to log the message payload.
    </td>
-  </tr>
-</table>
-
-## List of Mediators
-
-WSO2 Micro Integrator includes a comprehensive library of mediators that provide functionality for implementing widely used **Enterprise Integration Patterns** (EIPs). You can also easily write a custom mediator to provide additional functionality using various technologies such as Java, scripting, and Spring.
-
-**Core Mediators**
-
-[Call]({{base_path}}/reference/mediators/call-mediator) | [Send]({{base_path}}/reference/mediators/send-mediator) | [Loopback]({{base_path}}/reference/mediators/loopback-mediator) | [Sequence]({{base_path}}/reference/mediators/sequence-mediator) | [Respond]({{base_path}}/reference/mediators/respond-mediator) | [Drop]({{base_path}}/reference/mediators/drop-mediator) | [Call Template]({{base_path}}/reference/mediators/call-template-mediator) | [Enrich]({{base_path}}/reference/mediators/enrich-mediator) | [Property]({{base_path}}/reference/mediators/property-mediator) | [Property Group]({{base_path}}/reference/mediators/property-group-mediator) | [Log]({{base_path}}/reference/mediators/log-mediator) |
-
-**Filter Mediators**
-
-[Filter]({{base_path}}/reference/mediators/filter-mediator) | [Validate]({{base_path}}/reference/mediators/validate-mediator) | [Switch]({{base_path}}/reference/mediators/switch-mediator) |
-
-**Transform Mediators**
-
-[XSLT]({{base_path}}/reference/mediators/xslt-mediator) | [FastXSLT]({{base_path}}/reference/mediators/fastxslt-mediator) | [URLRewrite]({{base_path}}/reference/mediators/urlrewrite-mediator) | [XQuery]({{base_path}}/reference/mediators/xquery-mediator) | [Header]({{base_path}}/reference/mediators/header-mediator) | [Fault]({{base_path}}/reference/mediators/fault-mediator) | [PayloadFactory]({{base_path}}/reference/mediators/payloadfactory-mediator) | [JSONTransform]({{base_path}}/reference/mediators/json-transform-mediator) |
-
-**Advanced Mediators**
-
-[Cache]({{base_path}}/reference/mediators/cache-mediator) | [ForEach]({{base_path}}/reference/mediators/foreach-mediator) | [Clone]({{base_path}}/reference/mediators/clone-mediator) | [Store]({{base_path}}/reference/mediators/store-mediator) | [Iterate]({{base_path}}/reference/mediators/iterate-mediator) | [Aggregate]({{base_path}}/reference/mediators/aggregate-mediator) |
[Callout]({{base_path}}/reference/mediators/callout-mediator) | [Transaction]({{base_path}}/reference/mediators/transaction-mediator) | [Throttle]({{base_path}}/reference/mediators/throttle-mediator) | [DBReport]({{base_path}}/reference/mediators/db-report-mediator) | [DBLookup]({{base_path}}/reference/mediators/dblookup-mediator) | [EJB]({{base_path}}/reference/mediators/ejb-mediator) | [Builder]({{base_path}}/reference/mediators/builder-mediator) | [Entitlement]({{base_path}}/reference/mediators/entitlement-mediator) | [OAuth]({{base_path}}/reference/mediators/oauth-mediator) | [Smooks]({{base_path}}/reference/mediators/smooks-mediator) | [Data Mapper]({{base_path}}/reference/mediators/data-mapper-mediator) |
-
-**Extension Mediators**
-
-[Class]({{base_path}}/reference/mediators/class-mediator) | [Script]({{base_path}}/reference/mediators/script-mediator) |
diff --git a/en/docs/reference/mediators/aggregate-mediator.md b/en/docs/reference/mediators/aggregate-mediator.md
deleted file mode 100644
index f9414e8c2b..0000000000
--- a/en/docs/reference/mediators/aggregate-mediator.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Aggregate Mediator
-
-The **Aggregate mediator** implements the [Aggregator enterprise integration pattern](https://docs.wso2.com/display/EIP/Aggregator). It
-combines (aggregates) the **response messages** of messages that were split by the [Clone]({{base_path}}/reference/mediators/clone-mediator) or
-[Iterate]({{base_path}}/reference/mediators/iterate-mediator) mediator. Note that the responses are not necessarily aggregated in the same order that the requests were sent,
-even if you set the `sequential` attribute to `true` on the Iterate mediator.
-
-!!! Info
-    The Aggregate mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.
-
-## Syntax
-
-```xml
-<aggregate>
-    <correlateOn expression="xpath | json-eval(JSON-Path)"/>?
-    <completeCondition [timeout="time-in-seconds"]>
-        <messageCount min="int-min" max="int-max"/>?
-    </completeCondition>?
-    <onComplete expression="xpath | json-eval(JSON-Path)" [sequence="sequence-ref"]>
-        (mediator +)?
-    </onComplete>
-</aggregate>
-```
-
-## Configuration
-
-The parameters available for configuring the Aggregate mediator are as follows.
-
-<table>
-<thead>
-<tr class="header">
-<th>Parameter Name</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><strong>Aggregate ID</strong></td>
-<td>This optional attribute can be used to aggregate only the responses for split messages that are created by a specific clone/iterate mediator. The Aggregate ID should be the same as the ID of the corresponding clone/iterate mediator that creates the split messages. This is particularly useful when aggregating responses for messages that are created using nested clone/iterate mediators.</td>
-</tr>
-<tr class="even">
-<td><strong>Aggregation Expression</strong></td>
-<td>An XPath expression specifying which elements should be aggregated. The set of messages selected for aggregation is determined by the value specified in the <strong>Correlation Expression</strong> field.</td>
-</tr>
-<tr class="odd">
-<td><strong>Completion Timeout</strong></td>
-<td>The number of seconds the Aggregate mediator waits for messages. When this time duration elapses, the aggregation will be completed.
If the number of response messages reaches the number specified in the <strong>Completion Max-messages</strong> field, the aggregation will be completed even if the time duration specified for the <strong>Completion Timeout</strong> field has not elapsed.</td> -</tr> -<tr class="even"> -<td><strong>Completion Max-messages</strong></td> -<td>Maximum number of messages that can exist in an aggregation. When the number of response messages received reaches this number, the aggregation will be completed.</td> -</tr> -<tr class="odd"> -<td><strong>Completion Min-messages</strong></td> -<td>Minimum number of messages required for the aggregation to complete. When the time duration entered in the <strong>Completion Timeout</strong> field is elapsed, the aggregation will be completed even if the number of minimum response messages specified has not been received. If no value is entered in the <strong>Completion Timeout</strong> field, the aggregation will not be completed until the number of response messages entered in the <strong>Completion Min-messages</strong> field is received.</td> -</tr> -<tr class="even"> -<td><strong>Correlation Expression</strong></td> -<td><div class="content-wrapper"> -<p>This is an XPath expression which provides the basis on which response messages should be selected for aggregation. This is done by specifying a set of elements for which the messages selected should have matching values. A specific aggregation condition is set via the <strong>Aggregation Expression</strong> field.</p> - <p>You can click <strong>NameSpaces</strong> to add namespaces if you are providing an expression. Then the <strong>Namespace Editor</strong> panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression.</p> - <p>If a correlation expression is included, all the data is aggregated and filtered by the correlation expression, regardless of the request which the data originates from, and once the completion condition is achieved the specified action is performed on the aggregated data.</p> -</div></td> -</tr> -<tr class="odd"> -<td><strong>Enclosing Element Property</strong></td> -<td>This parameter is used to accumulate the aggregated messages inside a single property. The name of the relevant property is entered in this field.</td> -</tr> -<tr class="even"> -<td><strong>On Complete</strong></td> -<td><p>The sequence to run when the aggregation is complete. You can select one of the following options:</p> -<ul> -<li><strong>Anonymous</strong>: Select this value if you want to specify the sequence to run by adding child mediators to the Aggregate mediator instead of selecting an existing sequence. For example, if you want to send the aggregated message via the <a href="{{base_path}}/reference/mediators/send-mediator">Send mediator</a>, you can add the Send mediator as a child mediator.</li> -<li><strong>Pick from Registry</strong>: Select this option if you want to specify a sequence which is already defined and saved in the registry. 
You can select the sequence from the Configuration Registry or Governance Registry.</li>
-</ul></td>
-</tr>
-</tbody>
-</table>
-
-## Examples
-
-### Example 1 - Sending aggregated messages through the send mediator
-
-``` xml
-<outSequence>
-    <aggregate>
-        <onComplete expression="//m0:getQuoteResponse"
-                    xmlns:m0="http://services.samples">
-            <send/>
-        </onComplete>
-    </aggregate>
-</outSequence>
-```
-
-In this example, the mediator aggregates the responses coming into the Micro Integrator, and on completion it sends the aggregated message through
-the Send mediator.
-
-### Example 2 - Sending aggregated messages with the enclosing element
-
-The following example shows how to configure the Aggregate mediator to
-annotate the responses sent from multiple backends before forwarding
-them to the client.
-
-``` xml
-<outSequence>
-    <property name="info" scope="default">
-        <ns:Information xmlns:ns="www.asankatechtalks.com" />
-    </property>
-    <aggregate id="sa">
-        <completeCondition />
-        <onComplete expression="$body/*[1]" enclosingElementProperty="info">
-            <send />
-        </onComplete>
-    </aggregate>
-</outSequence>
-```
-
-The above configuration includes the following:
-<table>
-<thead>
-<tr class="header">
-<th>Parameter</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><pre><code><property name="info" scope="default">
-    <ns:Information xmlns:ns="www.asankatechtalks.com" />
-</property></code></pre></td>
-<td>This creates the property named <code>info</code> of the <code>OM</code> type in which all the aggregated responses are accumulated.</td>
-</tr>
-<tr class="even">
-<td><code><aggregate id="sa"></code></td>
-<td>The ID of the corresponding Clone mediator that splits the messages to be aggregated by the Aggregate mediator.</td>
-</tr>
-<tr class="odd">
-<td><code><onComplete expression="$body/*[1]" enclosingElementProperty="info"></code></td>
-<td>This expression adds the <code>info</code> property (created earlier in this configuration) to the payload of the message, and accumulates all the aggregated messages from the different endpoints inside the tag created by this property.</td>
-</tr>
-<tr class="even">
-<td><code><send /></code></td>
-<td>This is the Send mediator, added as a child mediator of the Aggregate mediator in order to send the aggregated and annotated messages back to the client once the aggregation is complete.</td>
-</tr>
-</tbody>
-</table>
diff --git a/en/docs/reference/mediators/builder-mediator.md b/en/docs/reference/mediators/builder-mediator.md
deleted file mode 100644
index 662d982927..0000000000
--- a/en/docs/reference/mediators/builder-mediator.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Builder Mediator
-
-The **Builder Mediator** can be used to build the actual SOAP message from a message coming into the Micro Integrator through the Binary Relay. One common usage is to apply it before logging the actual message in case of an error. The Builder mediator in the Micro Integrator can also be configured to build some of the messages while passing the others along.
-
-!!! Info
-    In order to use the Builder mediator, `BinaryRelayBuilder` should be specified as the message builder in the `MI_HOME/conf/deployment.toml` file for at least one content type. The message formatter specified for the same content types should be `ExpandingMessageFormatter`. Unlike other message builders, the BinaryRelayBuilder works by passing through a binary stream of the received content. The Builder mediator is used in conjunction with the BinaryRelayBuilder when we need to build the binary stream into a particular content type during mediation. We can specify the message builder that should be used to build the binary stream using the Builder mediator.
-
-By default, the Builder Mediator uses the default `axis2` message builders for the content types. Users can override those by using the optional `messageBuilder` configuration. For more information, see [Configuring Message Builders and Formatters]({{base_path}}/install-and-setup/setup/mi-setup/message_builders_formatters/message-builders-and-formatters/).
-
-A user has to specify the content type and the implementation class of the `messageBuilder`. Users can also specify the message `formatter` for this content type. This is used by the `ExpandingMessageFormatter` to format the message before sending it to the destination.
-
-## Syntax
-
-``` xml
-<builder>
-    <messageBuilder contentType="" class="" [formatterClass=""]/>
-</builder>
-```
-
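-As a quick sketch of typical usage, the following proxy builds the binary stream received through the Binary Relay before logging it. The proxy name, the content type, the endpoint address, and the choice of `SOAPBuilder` here are illustrative assumptions, not values from the page above.
-
-``` xml
-<proxy xmlns="http://ws.apache.org/ns/synapse" name="BuildBeforeLogProxy" transports="http" startOnLoad="true">
-    <target>
-        <inSequence>
-            <!-- Build the binary stream received through the Binary Relay as text/xml -->
-            <builder>
-                <messageBuilder contentType="text/xml" class="org.apache.axis2.builder.SOAPBuilder"/>
-            </builder>
-            <!-- Now that the message is built, the full payload can be logged -->
-            <log level="full"/>
-            <send>
-                <endpoint>
-                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-                </endpoint>
-            </send>
-        </inSequence>
-    </target>
-</proxy>
-```
-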
diff --git a/en/docs/reference/mediators/cache-mediator.md b/en/docs/reference/mediators/cache-mediator.md
deleted file mode 100644
index 1e4cbc99c4..0000000000
--- a/en/docs/reference/mediators/cache-mediator.md
+++ /dev/null
@@ -1,293 +0,0 @@
-# Cache Mediator
-
-When a message enters a message flow, the Cache mediator checks whether the incoming message is similar to a previous message that was received
-within a specified period of time. This is done by evaluating the hash value of incoming messages. If a similar message was identified before, the Cache mediator executes the `onCacheHit` sequence (if specified), fetches the cached response, and prepares the Micro Integrator to send the response. The `onCacheHit` sequence can send back the response message using the [Respond Mediator]({{base_path}}/reference/mediators/respond-mediator). If the `onCacheHit` sequence is not specified, the cached response is sent back to the requester and the message is not passed on. If a similar message has not been seen before, then the message is passed on.
-
-!!! Info
-    - The Cache mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.
-    - The Cache mediator supports only local caching. It does not support distributed caching.
-
-## Syntax
-
-``` xml
-<cache [timeout="seconds"] [collector=(true | false)] [maxMessageSize="in-bytes"] >
-   <onCacheHit [sequence="key"]>
-     (mediator)+
-   </onCacheHit>?
-   <protocol type="http" >?
- <methods>comma separated list</methods> - <headersToExcludeInHash>comma separated list</headersToExcludeInHash> - <responseCodes>regular expression</responseCodes> - <enableCacheControl>(true | false)</enableCacheControl> - <includeAgeHeader>(true | false)</includeAgeHeader> - <hashGenerator>class</hashGenerator> - </protocol> - <implementation [maxSize="int"]/> -</cache> -``` - -!!! Info - In a message flow, you can use the cache mediator as a **finder** (in the incoming path to check the request) or as a **collector** (in the outgoing path to cache the response). It is not possible to have more than one cache mediator in the same message flow because mediation is terminated after the finder on a cache hit, and the response is not passed on to the next finder after a cache hit. See the [Example 1](#example-1) given below. - -!!! Note - The message needs to be explicitly marked as *RESPONSE* using the following property when collecting the cached - response in the same sequence after using the call mediator. This will not be required if the back end is - called via send mediator. See the [Example 1](#example-1) given below. - ```xml - <property name="RESPONSE" value="true" scope="default" type="STRING"/> - ``` - -## Configuration - -### Cache Mediator as a Finder - -The parameters available to configure the Cache mediator as a **Finder** are as follows. - -<table> -<thead> -<tr class="header"> -<th>Parameter Name</th> -<th>Description</th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td><strong>Cache Type</strong></td> -<td><p>This parameter specifies whether the Cache mediator should be in the incoming path (to check the request) or in the outgoing path (to cache the response). Possible values are as follows.</p> -<ul> -<li><strong>Finder</strong> : If this is selected, the Cache mediator is used to search for the request hash of incoming messages.</li> -<li><strong>Collector</strong> : If this is selected, the Cache mediator is used to collect response messages in the cache.</li> -</ul></td> -</tr> -<tr class="even"> -<td><strong>Cache Timeout (Seconds)</strong></td> -<td>The time duration that the cache should be retained specified in seconds. The cache expires once this time duration elapses. The default value is 5000 seconds.</td> -</tr> -<tr class="odd"> -<td><strong>Maximum Message Size</strong></td> -<td>The maximum size of the message to be cached. This should be specified in bytes.</td> -</tr> -<tr class="even"> -<td><strong>Protocol Type</strong></td> -<td>The protocol type to be cached in the message flow. In the current implementation, HTTP is the only value that you can select. Although the only configuration supported for other protocols is the <code> HashGenerator </code> , you can specify the protocol type to be anything and specify a <code> HashGenerator </code> that you prefer.</td> -</tr> -<tr class="odd"> -<td><strong>HTTP Methods</strong></td> -<td>A comma separated list of HTTP methods that should be cached for the HTTP protocol. The default value is <code>*</code>, and it caches all HTTP methods.</td> -</tr> -<tr class="even"> -<td><strong>Headers to Exclude in Hash</strong></td> -<td>A comma separated list of headers to ignore when hashing an incoming messages. If you want to exclude all headers when hashing an incoming message, specify *.</td> -</tr> -<tr class="odd"> -<td><strong>Response Codes</strong></td> -<td>Specify the response codes to be cached as a regular expression. 
If the HTTP status code of a response matches the regular expression, the response is cached. The default setting is to cache any response code.</td>
-</tr>
-<tr class="even">
-<td><strong>Hash Generator</strong></td>
-<td><div class="content-wrapper">
-<p>This parameter is used to define the logic used by the Cache mediator to evaluate the hash values of incoming messages. The value specified here should be a class that implements the <code>org.wso2.carbon.mediator.cache.digest.DigestGenerator</code> interface. The default hash generator is <code>org.wso2.carbon.mediator.cache.digest.HttpRequestHashGenerator</code>. If the generated hash value is found in the cache, then the Cache mediator executes the <code>onCacheHit</code> sequence, which can be specified inline or referenced.</p>
-<b>Note</b>:
-<p>The hash generator is specific to the HTTP protocol.</p>
-<p>If you are using any other protocol, you need to write a custom hash generator or use one of the following deprecated hash generator classes:</p>
-<ul>
-<li><code>org.wso2.carbon.mediator.cache.digest.DOMHASHGenerator</code></li>
-<li><code>org.wso2.carbon.mediator.cache.digest.REQUESTHASHGenerator</code></li>
-</ul>
-</div></td>
-</tr>
-<tr class="odd">
-<td><strong>Enable Cache Control Headers</strong></td>
-<td><p>Whether the Cache mediator should honor the Cache-Control headers (no-cache, no-store, and max-age). If you set this to the default value (i.e., <code>false</code>), the Cache mediator will not consider the Cache-Control headers when caching the response or when returning the cached response.</p></td>
-</tr>
-<tr class="even">
-<td><strong>Include Age Header</strong></td>
-<td>Whether an Age header needs to be included when returning the cached response.</td>
-</tr>
-<tr class="odd">
-<td><strong>Maximum Size</strong></td>
-<td>The maximum number of elements to be cached. The default size is 1000.</td>
-</tr>
-<tr class="even">
-<td><strong>Anonymous</strong></td>
-<td>If this option is selected, an anonymous sequence is executed when an incoming message is identified as equivalent to a previously received message, based on the value defined in the <strong>Hash Generator</strong> field.</td>
-</tr>
-<tr class="odd">
-<td><strong>Sequence Reference</strong></td>
-<td>The reference to the <code>onCacheHit</code> sequence to be executed when an incoming message is identified as equivalent to a previously received message, based on the value defined in the <strong>Hash Generator</strong> field. The sequence should be created in the registry in order to be specified in this field. You can click either <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> as applicable to select the required sequence from the resource tree.</td>
-</tr>
-</tbody>
-</table>
-
-### Cache Mediator as a Collector
-
-The parameters available to configure the Cache mediator as a **Collector** are as follows.
-
-<table>
-<thead>
-<tr class="header">
-<th>Parameter Name</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><strong>Cache Type</strong></td>
-<td><p>This parameter specifies whether the mediator should be in the incoming path (to check the request) or in the outgoing path (to cache the response).
Possible values are as follows.</p> -<ul> -<li><strong>Finder</strong> : If this is selected, the mediator is used to search for the request hash of incoming messages.</li> -<li><strong>Collector</strong> : If this is selected, the mediator is used to collect response messages in the cache.</li> -</ul></td> -</tr> -</tbody> -</table> - -## Examples - -Following are examples of how you can use the Cache mediator. - -### Example 1 - -Following is an example where the expected response from the last cache hit is not received because the response is sent once the request comes -to the first finder: - -``` java -<?xml version="1.0" encoding="UTF-8"?> -<proxy xmlns="http://ws.apache.org/ns/synapse" name="cache115" transports="http https" startOnLoad="true"> - <description /> - <target> - <inSequence> - <cache collector="false" timeout="60"> - <protocol type="HTTP"> - <methods>POST</methods> - <headersToExcludeInHash /> - <responseCodes>.*</responseCodes> - <enableCacheControl>false</enableCacheControl> - <includeAgeHeader>false</includeAgeHeader> - <hashGenerator>org.wso2.carbon.mediator.cache.digest.HttpRequestHashGenerator</hashGenerator> - </protocol> - </cache> - <call> - <endpoint> - <address uri="http://demo0585968.mockable.io/some" /> - </endpoint> - </call> - <property name="RESPONSE" value="true" scope="default" type="STRING" /> - <log level="full" /> - <cache collector="true" /> - <property name="RESPONSE" value="false" scope="default" type="STRING" /> - <cache collector="false" timeout="60"> - <protocol type="HTTP"> - <methods>POST</methods> - <headersToExcludeInHash /> - <responseCodes>.*</responseCodes> - <hashGenerator>org.wso2.carbon.mediator.cache.digest.HttpRequestHashGenerator</hashGenerator> - </protocol> - </cache> - <call> - <endpoint> - <address uri="http://demo0585968.mockable.io/hello" /> - </endpoint> - </call> - <property name="RESPONSE" value="true" scope="default" type="STRING" /> - <log level="full" /> - <cache collector="true" /> - <respond /> - </inSequence> - </target> -</proxy> -``` - -### Example 2 - -According to this example configuration, when the first message is sent -to the endpoint, the cache is not hit. The Cache mediator configured in -the ` Out ` sequence caches the response to this message. -When a similar message is sent to the endpoint for the second time, the -previous response is directly fetched from the cache and sent to the -requester. This happens because the ` onCacheHit ` -sequence is not defined in this configuration. - -``` java -<?xml version="1.0" encoding="UTF-8"?> -<sequence name="main"> - <in> - <cache collector="false" maxMessageSize="10000" timeout="20"> - <protocol type="HTTP"> - <methods>POST</methods> - <headersToExcludeInHash/> - <responseCodes>2[0-9][0-9]</responseCodes> - <enableCacheControl>false</enableCacheControl> - <includeAgeHeader>false</includeAgeHeader> - <hashGenerator>org.wso2.carbon.mediator.cache.digest.HttpRequestHashGenerator</hashGenerator> - </protocol> - <implementation maxSize="100"/> - </cache> - <send> - <endpoint name="inlined"> - <address uri="http://localhost:9000/services/SimpleStockQuoteService"/> - </endpoint> - </send> - </in> - <out> - <cache collector="true"/> - <send/> - </out> - </sequence> -``` - -### Example 3 - -According to this example configuration, if you define a cache collector -using the cache mediator in the in sequence, you need to add the -` RESPONSE ` property to consider the message as a -response message. 
- -``` xml -<?xml version="1.0" encoding="UTF-8"?> -<api xmlns="http://ws.apache.org/ns/synapse" name="cacheAPI" context="/cache"> -<resource methods="POST GET" uri-template="/headerapi/*"> - <inSequence> - <cache collector="false" timeout="5000"> - <protocol type="HTTP"> - <methods>GET, POST</methods> - <headersToExcludeInHash>*</headersToExcludeInHash> - <responseCodes>.*</responseCodes> - <enableCacheControl>false</enableCacheControl> - <includeAgeHeader>false</includeAgeHeader> - <hashGenerator>org.wso2.carbon.mediator.cache.digest.HttpRequestHashGenerator</hashGenerator> - </protocol> - </cache> - <call> - <endpoint> - <address uri="http://localhost:9000/services/SimpleStockQuoteService"/> - </endpoint> - </call> - <property name="RESPONSE" value="true" scope="default" type="STRING"/> - <enrich> - <source type="inline" clone="true"> - <ax21:newvalue - xmlns:ax21="http://services.samples/xsd">testsamplevalue - </ax21:newvalue> - </source> - <target - xmlns:ax21="http://services.samples/xsd" - xmlns:ns="http://services.samples" action="sibling" xpath="//ns:getQuoteResponse/ns:return/ax21:volume"/> - </enrich> - <cache collector="true"/> - <respond/> - </inSequence> -</resource> -</api> -``` - -### Invalidating cached responses remotely - -You can invalidate all cached response remotely by using any [JMX monitoring tool such as Jconsole]({{base_path}}/observe/micro-integrator/classic-observability-metrics/jmx-monitoring) via the exposed MBeans. You can use the ` invalidateTheWholeCache() ` operation of the ` org.wso2.carbon.mediation ` MBean for this as shown below. - -![]({{base_path}}/assets/img/integrate/jmx/jmx_monitoring_cache_mediator.png) diff --git a/en/docs/reference/mediators/call-mediator.md b/en/docs/reference/mediators/call-mediator.md deleted file mode 100644 index f68d1e973e..0000000000 --- a/en/docs/reference/mediators/call-mediator.md +++ /dev/null @@ -1,417 +0,0 @@ -# Call Mediator - -The **Call mediator** is used to send messages out of the Micro Integrator to an **endpoint**. You can invoke services either in blocking or non-blocking manner. - -When you invoke a service in non-blocking mode, the underlying worker -thread returns without waiting for the response. In blocking mode, the -underlying worker thread gets blocked and waits for the response after -sending the request to the endpoint. Call mediator in blocking mode is -very much similar to the [Callout mediator]({{base_path}}/reference/mediators/callout-Mediator). - -In both blocking and non-blocking modes, Call mediator behaves in a synchronous manner. Hence, mediation pauses after the service invocation, and resumes from the next mediator in the sequence when the response is received. Call mediator allows you to create your configuration independent from the underlying architecture. - -Non-blocking mode of the Call mediator leverages the non-blocking transports for better performance. Therefore, it is recommended to use it in non-blocking mode as much as possible. However, there are scenarios where you need to use the blocking mode. For example, when you implement a scenario related to JMS transactions, it is vital to use the underlying threads in blocking mode. - -You can obtain the service endpoint for the Call mediator as follows: - -- Pick from message-level information -- Pick from a predefined endpoint - -If you do not specify an endpoint, the Call mediator tries to send the -message using the ` WSA:TO ` address of the message. 
If you specify an endpoint, the Call mediator sends the message based on the specified endpoint.
-
-The endpoint type can be a Leaf Endpoint (i.e. Address/WSDL/Default/HTTP)
-or a Group Endpoint (i.e. Failover/Load balance/Recipient list). Group
-Endpoints are only supported in non-blocking mode.
-
-By default, when you use the Call mediator, the current message body in the mediation is sent out
-as the request payload. The response you receive replaces the current message body.
-
-!!! Info
-    The Call mediator is a [conditionally content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.
-
-## Enabling mutual SSL in the blocking mode
-
-When using the Call mediator in **blocking mode** (blocking=true), enable the mediator to handle mutual SSL by adding the following JVM settings to the `MI_HOME/bin/micro-integrator.sh` file:
-
-``` bash
--Djavax.net.ssl.keyStore="$CARBON_HOME/repository/resources/security/wso2carbon.jks" \
--Djavax.net.ssl.keyStorePassword="wso2carbon" \
--Djavax.net.ssl.keyPassword="wso2carbon" \
--Drampart.axiom.parser.pool=false \
-```
-
-## Syntax
-
-``` xml
-<call [blocking="true|false"]>
-   <source contentType=" " type="custom|inline|property">{xpath|inline|property}</source>?
-   <target type="property">{property_name}</target>?
-   (endpointref | endpoint)
-</call>
-```
-
-!!! Note
-    The Call mediator in **blocking mode** (blocking=true) builds the message from the response payload. If no response message is expected by the client, you can set the OUT_ONLY property before the Call mediator to avoid building the response payload. This is not required if the back end is called via the Send mediator.
-    ``` xml
-    <property name="OUT_ONLY" value="true"/>
-    ```
-
-If the message is to be sent to one or more endpoints, use the following syntax:
-
-``` xml
-<call [blocking="true"]>
-   (endpointref | endpoint)+
-</call>
-```
-
-- The `endpointref` token refers to the following:
-    ``` xml
-    <endpoint key="name"/>
-    ```
-
-- The `endpoint` token refers to an anonymous endpoint definition.
-
-## Configuration
-
-### Endpoint configuration
-
-Select one of the following options to define the endpoint to which the message should be delivered.
-
-<table>
-<thead>
-<tr class="header">
-<th>Parameter Name</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><strong>None</strong></td>
-<td>Select this option if you do not want to provide an endpoint. The Call mediator will send the message using its <code>wsa:to</code> address.</td>
-</tr>
-<tr class="even">
-<td><strong>Define Inline</strong></td>
-<td>If this is selected, the endpoint to which the message should be sent can be included within the Call mediator configuration. Click <strong>Add</strong> to add the required endpoint. For more information on adding an endpoint, see <a href="{{base_path}}/integrate/develop/creating-artifacts/creating-endpoints">Adding an Endpoint</a>.</td>
-</tr>
-<tr class="odd">
-<td><strong>Pick From Registry</strong></td>
-<td>If this is selected, the message can be sent to a predefined endpoint, which is currently saved as a resource in the registry. Click either <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> as relevant to select the required endpoint from the resource tree.</td>
-</tr>
-<tr class="even">
-<td><strong>XPath</strong></td>
-<td><div class="content-wrapper">
-<p>If this is selected, the endpoint to which the message should be sent will be derived via an XPath expression.
You are required to enter the relevant XPath expression in the text field that appears when this option is selected.</p>
-<b>Note</b>: <p>You can click <strong>NameSpaces</strong> to add namespaces if you are providing an expression. Then the <strong>Namespace Editor</strong> panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression.</p>
-</div></td>
-</tr>
-<tr class="odd">
-<td><strong>Blocking</strong></td>
-<td>If set to <code>true</code>, you can use the Call mediator in blocking mode.</td>
-</tr>
-</tbody>
-</table>
-
-### Source configuration
-
-The following properties are available when you want to configure the source of the request payload.
-
-<table>
-  <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>
-      Type
-    </td>
-    <td>
-      You can use one of the following source types:
-      <ul>
-        <li>
-          <b>Custom</b>: Provide a valid XPath/json-eval expression as the source element. The result that is derived from this expression will be the payload.
-        </li>
-        <li>
-          <b>Inline</b>: Provide a static payload inline as the payload source. Be sure to use proper encoding and escaping.
-        </li>
-        <li>
-          <b>Property</b>: Provide a property as the payload source. You can only refer to properties with the <code>synapse</code> scope. For other properties, use an XPath expression with the <b>Custom</b> source type.
-        </li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>
-      contentType
-    </td>
-    <td>
-      Use this parameter to define the content type that is used when sending the message to the endpoint specified in the Call mediator. When the response from the endpoint is received, the original content type is restored.
-    </td>
-  </tr>
-</table>
-
-### Target configuration
-
-The following properties are available when you want to configure a target property to store the response (received from the endpoint).
-
-<table>
-  <tr>
-    <th>Parameter Name</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>
-      Type
-    </td>
-    <td>
-      Use <b>property</b> as the target type to store the response (received from the endpoint) in a property. The property name has to be provided as the value of this element. When you use this target type, a new property is generated for the mediation sequence with the correct data type.
-    </td>
-  </tr>
-</table>
-
-## Examples - Using Endpoint configurations
-
-### Example 1 - Service orchestration
-
-In this example, the Call mediator invokes a backend service. An [Enrich mediator]({{base_path}}/reference/mediators/enrich-mediator) stores the response received for
-that service invocation.
-
-The [Filter mediator]({{base_path}}/reference/mediators/filter-mediator) added after the Call mediator
-applies a filter to determine whether the first call was
-successful. If it was successful, the second backend service is invoked. The
-payload of the request to the second backend is the response of the
-first service invocation.
-
-After a successful second backend service invocation, the response of the
-first service is retrieved by the [Enrich mediator]({{base_path}}/reference/mediators/enrich-mediator)
-from the property where it was formerly stored. This response is sent to
-the client by the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator).
-
-If the second call is not successful, a custom JSON error message is sent with HTTP
-500. If the first call itself is not successful, the output is just sent
-back with the relevant error code.
-
-``` xml
-<target>
-    <inSequence>
-        <log/>
-        <call>
-            <endpoint>
-                <http method="get" uri-template="http://192.168.1.10:8088/mockaxis2service"/>
-            </endpoint>
-        </call>
-        <enrich>
-            <source type="body" clone="true"/>
-            <target type="property" action="child" property="body_of_first_call"/>
-        </enrich>
-        <filter source="get-property('axis2', 'HTTP_SC')" regex="200">
-            <then>
-                <log level="custom">
-                    <property name="switchlog" value="Case: first call successful"/>
-                </log>
-                <call>
-                    <endpoint>
-                        <http method="get" uri-template="http://localhost:8080/MockService1"/>
-                    </endpoint>
-                </call>
-                <filter source="get-property('axis2', 'HTTP_SC')" regex="200">
-                    <then>
-                        <log level="custom">
-                            <property name="switchlog" value="Case: second call successful"/>
-                        </log>
-                        <enrich>
-                            <source type="property" clone="true" property="body_of_first_call"/>
-                            <target type="body"/>
-                        </enrich>
-                        <respond/>
-                    </then>
-                    <else>
-                        <log level="custom">
-                            <property name="switchlog" value="Case: second call unsuccessful"/>
-                        </log>
-                        <property name="HTTP_SC" value="500" scope="axis2"/>
-                        <payloadFactory media-type="json">
-                            <format>{ "status": "ERROR!"}</format>
-                            <args/>
-                        </payloadFactory>
-                        <respond/>
-                    </else>
-                </filter>
-            </then>
-            <else>
-                <log level="custom">
-                    <property name="switchlog" value="Case: first call unsuccessful"/>
-                </log>
-                <respond/>
-            </else>
-        </filter>
-    </inSequence>
-</target>
-```
-
-### Example 2 - Continuing mediation without waiting for responses
-
-In this example, the message is cloned by the [Clone mediator]({{base_path}}/reference/mediators/clone-mediator) and sent via the Call mediator. The Drop mediator drops the response so that no further mediation is carried out for the cloned message. However, since the `continueParent` attribute of the [Clone mediator]({{base_path}}/reference/mediators/clone-mediator) is set to `true`, the original message is mediated in parallel. Therefore, the [Log mediator]({{base_path}}/reference/mediators/log-mediator) at the end of the configuration will log the `After call mediator` message without waiting for
-the Call mediator response.
-
-``` xml
-...
-<log level="full"/>
-<clone continueParent="true">
-    <target>
-        <sequence>
-            <call>
-                <endpoint>
-                    <address uri="http://localhost:8080/echoString"/>
-                </endpoint>
-            </call>
-            <drop/>
-        </sequence>
-    </target>
-</clone>
-<log level="custom">
-    <property name="MESSAGE" value="After call mediator"/>
-</log>
-...
-```
-
-### Example 3 - Call mediator in blocking mode
-
-In the following sample configuration, the [Header mediator]({{base_path}}/reference/mediators/header-mediator) is used to add the action, the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) is used to construct the request message, and the Call mediator is used to invoke a backend service. The payload of the request and the header action are sent to the backend. After a successful backend service invocation, the response of the service is retrieved by the Micro Integrator and sent to the client as the response using the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator).
- -``` -<target> - <inSequence> - <header name="Action" value="urn:getQuote" /> - <payloadFactory media-type="xml"> - <format> - <m0:getQuote xmlns:m0="http://services.samples"> - <m0:request> - <m0:symbol>WSO2</m0:symbol> - </m0:request> - </m0:getQuote> - </format> - <args /> - </payloadFactory> - <call blocking="true"> - <endpoint> - <address uri="http://localhost:9000/services/SimpleStockQuoteService" /> - </endpoint> - </call> - <respond /> - </inSequence> -</target> -``` - -### Example 4 - Receiving response headers in blocking mode - -If you want to receive the response message headers, when you use the Call mediator in blocking mode, add the `BLOCKING_SENDER_PRESERVE_REQ_HEADERS` property within the proxy service, or in a sequence as shown in the sample proxy configuration below. - -!!! Info - Set the value of the `BLOCKING_SENDER_PRESERVE_REQ_HEADERS` property to `false` to receive the response message headers. If you set it to `true`, you cannot get the response headers, but the request headers will be preserved. - -``` -<proxy xmlns="http://ws.apache.org/ns/synapse" - name="sample" - transports="https" - statistics="enable" - trace="enable" - startOnLoad="true"> - <target> - <inSequence> - <property name="FORCE_ERROR_ON_SOAP_FAULT" - value="true" - scope="default" - type="STRING"/> - <property name="HTTP_METHOD" value="POST" scope="axis2" type="STRING"/> - <property name="messageType" value="text/xml" scope="axis2" type="STRING"/> - <property name="BLOCKING_SENDER_PRESERVE_REQ_HEADERS" value="false"/> - <call blocking="true"> - <endpoint> - <address uri="https://localhost:8243/services/sampleBE" - trace="enable" - statistics="enable"/> - </endpoint> - </call> - - </inSequence> - <outSequence/> - </target> - <description/> -</proxy> -``` - -## Examples - Using Source and Target configurations - -Consider the following payload that is sent to the example sequences listed below. -The content type used for this request is `application/json`. - -```json -{"INCOMING" : {"INCOMING2":"INCOMING2"}} -``` - -In all of the following example sequences, the `contentType` property of the Call mediator's **source configuration** is set to `application/xml`. Therefore, the sequence receives `application/json` as the content type and converts it to `application/xml` before sending the request to the endpoint. The Call mediator's **target configuration** will store the response (received from the endpoint) to a property. Thereafter, the mediation continues with the original payload that was received by the sequence. 
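-
-As a rough sketch of the conversion described above (illustrative only — the exact element layout depends on the JSON builder and formatter settings of your Micro Integrator), the sample JSON payload would typically reach the backend in an XML form along these lines once `contentType="application/xml"` is applied:
-
-``` xml
-<jsonObject>
-    <INCOMING>
-        <INCOMING2>INCOMING2</INCOMING2>
-    </INCOMING>
-</jsonObject>
-```
-
-This is also why the expressions in the examples below, such as `$body//INCOMING`, address the payload in its XML form.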
-
-### Example 1 - Using a property as the payload source
-
-```xml
-<inSequence>
-    <property name="SOURCE" expression="$body//INCOMING" type="OM"/>
-    <log level="custom">
-        <property name="log" expression="$ctx:SOURCE"/>
-    </log>
-    <property name="REST_URL_POSTFIX" scope="axis2" action="remove"/>
-    <call>
-        <endpoint name="Sample">
-            <address uri="BACKEND_URL"></address>
-        </endpoint>
-        <source contentType="application/xml" type="property">SOURCE</source>
-        <target type="property">TARGET</target>
-    </call>
-    <log level="custom">
-        <property name="TARGET PAYLOAD" expression="$ctx:TARGET"/>
-    </log>
-    <respond/>
-</inSequence>
-```
-
-### Example 2 - Using an XPath as the payload source
-
-```xml
-<inSequence>
-    <property name="REST_URL_POSTFIX" scope="axis2" action="remove"/>
-    <call>
-        <endpoint name="Sample">
-            <address uri="BACKEND_URL"></address>
-        </endpoint>
-        <source contentType="application/xml" type="custom">$body//INCOMING2</source>
-        <target type="property">TARGET</target>
-    </call>
-    <log level="custom">
-        <property name="TARGET PAYLOAD" expression="$ctx:TARGET"/>
-    </log>
-    <respond/>
-</inSequence>
-```
-
-### Example 3 - Using an inline payload as the source
-
-```xml
-<inSequence>
-    <property name="REST_URL_POSTFIX" scope="axis2" action="remove"/>
-    <call>
-        <endpoint name="Sample">
-            <address uri="BACKEND_URL"></address>
-        </endpoint>
-        <source contentType="application/xml" type="inline"><Intermediate><Intermediate1>Intermediate</Intermediate1></Intermediate></source>
-        <target type="property">TARGET</target>
-    </call>
-    <log level="custom">
-        <property name="TARGET PAYLOAD" expression="$ctx:TARGET"/>
-    </log>
-    <respond/>
-</inSequence>
-```
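-
-Because the target configuration stores the endpoint response in the `TARGET` property with the correct data type, the stored response can be placed back into the message body later in the flow. The following fragment is an illustrative sketch (the property name matches the examples above; using it with the Enrich mediator in this way is an assumption, not part of the original examples):
-
-```xml
-<!-- Replace the current message body with the response stored in TARGET -->
-<enrich>
-    <source type="property" clone="true" property="TARGET"/>
-    <target type="body"/>
-</enrich>
-<respond/>
-```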
diff --git a/en/docs/reference/mediators/call-template-mediator.md b/en/docs/reference/mediators/call-template-mediator.md
deleted file mode 100644
index 7eac300c91..0000000000
--- a/en/docs/reference/mediators/call-template-mediator.md
+++ /dev/null
@@ -1,192 +0,0 @@
-# Call Template Mediator
-
-The Call Template mediator allows you to construct a sequence by passing values into a **sequence template**.
-
-!!! Info
-    This is currently only supported for special types of mediators such as the [Iterate]({{base_path}}/reference/mediators/iterate-mediator) and [Aggregate]({{base_path}}/reference/mediators/aggregate-mediator) mediators, where actual XPath operations are performed on a different SOAP message, and not on the message coming into the mediator.
-
-## Syntax
-
-``` xml
-<call-template target="string">
-   <!-- parameter values will be passed on to a sequence template -->
-   (
-    <!-- passing plain static values -->
-   <with-param name="string" value="string" /> |
-    <!-- passing xpath expressions -->
-   <with-param name="string" value="{string}" /> |
-    <!-- passing dynamic xpath expressions where values will be compiled dynamically -->
-   <with-param name="string" value="{{string}}" /> |
-   ) *
-   <!-- this is the in-line sequence of the template -->
-</call-template>
-```
-
-You use the `target` attribute to specify the sequence template you want to use. The `<with-param>` element is used to pass parameter values to the target sequence template. The parameter names should be the same as the names specified in the target template. The parameter value can contain a string, an XPath expression (passed in with curly braces { }), or a dynamic XPath expression (passed in with double curly braces) whose value is compiled dynamically.
-
-## Configuration
-
-The parameters available to configure the Call Template mediator are as follows.
-
-| Parameter Name      | Description                                                                                                                |
-|---------------------|----------------------------------------------------------------------------------------------------------------------------|
-| **Target Template** | The sequence template to which values should be passed. You can select a template from the **Available Templates** list.  |
-
-When a target template is selected, the parameter section is displayed as follows if the selected sequence template has any parameters. This enables parameter values to be passed into the selected sequence template.
-
-<table>
-<thead>
-<tr class="header">
-<th>Parameter Name</th>
-<th>Description</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><strong>Parameter Name</strong></td>
-<td>The name of the parameter.</td>
-</tr>
-<tr class="even">
-<td><strong>Parameter Type</strong></td>
-<td><p>The type of the parameter. Possible values are as follows.</p>
-<ul>
-<li><strong>Value</strong>: Select this to define the parameter value as a static value. This value should be entered in the <strong>Value/Expression</strong> parameter.</li>
-<li><strong>Expression</strong>: Select this to define the parameter value as a dynamic value. The XPath expression to calculate the parameter value should be entered in the <strong>Value/Expression</strong> parameter.</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td><strong>Value / Expression</strong></td>
-<td>The parameter value. This can be a static value, or an XPath expression to calculate a dynamic value, depending on the value you selected for the <strong>Parameter Type</strong> parameter.</td>
-</tr>
-<tr class="even">
-<td><strong>Action</strong></td>
-<td>Click <strong>Delete</strong> to delete a parameter.</td>
-</tr>
-<tr>
-  <td>
-    <b>onError</b>
-  </td>
-  <td>
-    Use this parameter to specify the error handling sequence that should be called if there is an error when the Call Template logic is executed.
-  </td>
-</tr>
-</tbody>
-</table>
-
-## Examples
-
-The following examples demonstrate different use cases of the Call Template mediator.
-
-### Example 1
-
-The following four Call Template mediator configurations populate a
-sequence template named HelloWorld_Logger with the "hello world" text
-in four different languages.
-
-``` xml
-<call-template target="HelloWorld_Logger">
-   <with-param name="message" value="HELLO WORLD!!!!!!" />
-</call-template>
-```
-
-``` xml
-<call-template target="HelloWorld_Logger">
-   <with-param name="message" value="Bonjour tout le monde!!!!!!" />
-</call-template>
-```
-
-``` xml
-<call-template target="HelloWorld_Logger">
-   <with-param name="message" value="Ciao a tutti!!!!!!!" />
-</call-template>
-```
-
-``` xml
-<call-template target="HelloWorld_Logger">
-   <with-param name="message" value="???????!!!!!!!" />
-</call-template>
-```
-
-The sequence template can be configured as follows to log any greeting
-message passed to it by the Call Template mediator. Because the Call Template
-mediator supplies the message, you do not need to include the message in all
-four languages in the sequence template configuration itself.
-
-``` xml
-<template name="HelloWorld_Logger">
-   <parameter name="message"/>
-   <sequence>
-      <log level="custom">
-         <property expression="$func:message" name="GREETING_MESSAGE"/>
-      </log>
-   </sequence>
-</template>
-```
-
-### Example 2
-
-The following Call Template mediator configuration populates a sequence template named `Testtemp` with a dynamic XPath expression.
- -``` xml -<call-template target="Testtemp"> - <with-param name="message_store" value="<MESSAGE_STORE_NAME>" /> -</call-template> -``` - -The following `Testtemp` template includes a dynamic XPath expression to save messages in a message store, which is -dynamically set via the message context. - -``` java -<template name="Testtemp"> - <parameter name="message_store"/> - <sequence> - <log level="custom"> - <property expression="$func:message_store" - name="STORENAME" - xmlns:ns="http://org.apache.synapse/xsd" - xmlns:ns2="http://org.apache.synapse/xsd" xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"/> - </log> - <store messageStore="{$func:message_store}" - xmlns:ns="http://org.apache.synapse/xsd" - xmlns:ns2="http://org.apache.synapse/xsd" xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"/> - </sequence> - </template> -``` - -### Example 3 - -Consider an example where the sequence template is configured to log the greeting message that is passed from the mediation sequence in the REST API. According to the sequence template, a value for the greeting message is mandatory. - -```xml -<?xml version="1.0" encoding="UTF-8"?> -<template name="sequence-temp" xmlns="http://ws.apache.org/ns/synapse"> - <parameter isMandatory="true" defaultValue="Welcome" name="greeting_message"/> - <sequence> - <log level="custom"> - <property expression="$func:greeting_message" name="greeting"/> - </log> - </sequence> -</template> -``` - -However, in the following example, the Call template mediator in the REST API is not passing a greeting message to the template. Also, a <b>default</b> greeting message is not defined in the template. In this scenario, an error will get triggered when the REST API calls the template. If you need to handle this error, or in general any error that may occur at execution of the mediation logic inside the template, you can add the 'onError' parameter to the Call Template mediator and call an error-handling sequence. - -=== "Call Template" - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <api context="/test" name="test" xmlns="http://ws.apache.org/ns/synapse"> - ...... - <call-template target="sequence-temp" onError="error-handling-sequence" /> - ........ - </api> - ``` - -=== "error-handling-sequence" - ```xml - <?xml version="1.0" encoding="UTF-8"?> - <sequence name="error-handling-sequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse"> - <log level="custom"> - <property name="faultMessage" value="Call Template Error"/> - </log> - </sequence> - ``` \ No newline at end of file diff --git a/en/docs/reference/mediators/callout-mediator.md b/en/docs/reference/mediators/callout-mediator.md deleted file mode 100644 index a124b085c9..0000000000 --- a/en/docs/reference/mediators/callout-mediator.md +++ /dev/null @@ -1,210 +0,0 @@ -# Callout Mediator - -The **Callout** mediator performs a blocking external service invocation during mediation. As the Callout mediator performs a blocking call, it cannot use the default non-blocking HTTP/S transports based on Java NIO. - -!!! Tip - The [Call mediator]({{base_path}}/reference/mediators/call-Mediator) leverages the non-blocking transports for much greater performance than the Callout mediator. Therefore, you should use the Call mediator in most cases. However, the Callout mediator is recommended in situations where you need to execute the mediation flow in a single thread. - -## Enabling mutual SSL - -The Callout mediators default https transport sender is `org.apache.axis2.transport.http.CommonsHTTPTransportSender`. 
Therefore, the Callout mediator does not have access to the required key store to handle mutual SSL. To enable the Callout mediator to handle mutual SSL, the following JVM settings should be added to the `MI_HOME/bin/micro-integrator.sh` file. - -``` --Djavax.net.ssl.keyStore="$CARBON_HOME/repository/resources/security/wso2carbon.jks" \ --Djavax.net.ssl.keyStorePassword="wso2carbon" \ --Djavax.net.ssl.keyPassword="wso2carbon" \ -``` - -## Disabling chunking - -The Callout mediator is not affected by the [DISABLE_CHUNKING property]({{base_path}}/reference/mediators/property-reference/http-transport-properties). Instead, you can disable chunking for the Callout mediator by setting the following parameters in the `MI_HOME/conf/deployment.toml` file: - -```toml -[transport.blocking.http] -sender.transfer_encoding = "chunked" -``` - -This will disable chunking for all Callout mediators present in the Micro Integrator. - -If you want to disable chunking for only a single Callout mediator instance, create a new `axis2.xml` file by copying the ` MI_HOME/conf/axis2/axis2_blocking_client.xml ` file, set the ` Transfer-Encoding ` parameter as shown, and then configure that Callout mediator to use this new ` axis2.xml ` file as described below. - -## Syntax - -``` java -<callout [serviceURL="string"] [action="string"] [initAxis2ClientOptions="true|false"] [endpointKey="string"]> - <configuration [axis2xml="string"] [repository="string"]/>? - <source xpath="expression" | key="string" | type="envelope"/> - <target xpath="expression" | key="string"/> - <enableSec policy="string" | outboundPolicy="String" | inboundPolicy="String" />? -</callout> -``` - -## Configuration - -The parameters available for configuring the Callout mediator are as follows. - -<table> -<thead> -<tr class="header"> -<th>Parameter Name</th> -<th>Description</th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td><strong>Specify As</strong></td> -<td><div class="content-wrapper"> -<p>This parameter determines whether the target external service should be configured by using either a <code>serviceURL</code> attribute or an <code>endpointKey</code> attribute.</p> -<p>Callout mediator does not support endpoint configurations such as <code> timeout </code> , <code> suspendOnFailure </code>, and <code> markForSuspension </code> when the <code> endpointKey </code> attribute is used to specify an existing endpoint.</p> -<ul> -<li><strong>URL</strong> : Select <strong>URL</strong> if you want to call the external service by specifying its URL in the Call mediator configuration.</li> -<li><strong>Address Endpoint</strong>: Select <strong>Address Endpoint</strong> if you want to call the external service via an <b>Endpoint</b>, which is already saved in the <b>Registry</b>. This option should be selected if you want to make use of the WSO2 functionality related to endpoints such as format conversions, security etc. Note that only Leaf endpoint types (i.e. <code> Address </code>, <code>WSDL</code>, <code>Default</code> and <code>Http</code>) are supported for the Callout mediator.</li> -</ul> -<p>If neither a URL or an address endpoint is specified, the <code>To</code> header on the request is used as the target endpoint.</p> -</div></td> -</tr> -<tr class="even"> -<td><strong>URL</strong></td> -<td>If you selected <strong>URL</strong> for the <strong>Specify As</strong> parameter, use this parameter to enter the URL of the external service that you want to call. 
This URL will be used as the End Point Reference (EPR) of the external service.</td>
-</tr>
-<tr class="odd">
-<td><strong>Address Endpoint</strong></td>
-<td>If you selected <strong>Address Endpoint</strong> for the <strong>Specify As</strong> parameter, use this parameter to enter a key to access the endpoint that should be used to call the external service. Click <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> as relevant to select the required endpoint from the resource tree.</td>
-</tr>
-<tr class="even">
-<td><strong>Action</strong></td>
-<td>The SOAP action which should be appended to the service call.</td>
-</tr>
-<tr class="odd">
-<td><strong>Axis2 Repository</strong></td>
-<td>The path to the Axis2 client repository where the services and modules are located. The purpose of this parameter is to make the Callout mediator initialize with the required client repository.</td>
-</tr>
-<tr class="even">
-<td><strong>Axis2 XML</strong></td>
-<td>The path to the location of the axis2.xml configuration file. The purpose of this parameter is to make the Callout mediator initialize with the relevant Axis2 configurations.</td>
-</tr>
-<tr class="odd">
-<td><strong>initAxis2ClientOptions</strong></td>
-<td>If this parameter is set to <code>false</code>, the existing Axis2 client options available in the Synapse message context will be reused when the Callout mediator is invoked. This is useful when you want to use NTLM authentication. The default value for this parameter is <code>true</code>.</td>
-</tr>
-<tr class="even">
-<td><strong>Source</strong></td>
-<td><div class="content-wrapper">
-<p>This parameter defines the payload for the request. It can be defined using one of the following options.</p>
-<ul>
-<li><p><strong>XPath</strong>: This option allows you to specify an expression that defines the location in the message.</p>
-<p><b>Tip</b>: You can click <strong>NameSpaces</strong> to add namespaces if you are providing an expression. Then the <strong>Namespace Editor</strong> panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression.</p></li>
-<li><strong>Property</strong>: This option allows you to specify the payload for a request via a property included in the mediation flow.</li>
-<li><strong>Envelope</strong>: This option allows you to select the entire envelope which is available in the message flow as the source.</li>
-</ul>
-</div></td>
-</tr>
-<tr class="odd">
-<td><strong>Target</strong></td>
-<td><div class="content-wrapper">
-<p>The node or the property of the request message to which the payload (resulting from the value specified for the <strong>Source</strong> parameter) would be attached. The target can be specified using one of the following options.</p>
-<ul>
-<li><p><strong>XPath</strong>: This option allows you to specify an expression that defines the location in the message.</p>
-<p><b>Tip</b>: You can click <strong>NameSpaces</strong> to add namespaces if you are providing an expression. Then the <strong>Namespace Editor</strong> panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression.</p></li>
-<li><strong>Property</strong>: This option allows you to specify a property included in the mediation flow.</li>
-</ul>
-</div></td>
-</tr>
-<tr class="even">
-<td><strong>WS-Security</strong></td>
-<td><div class="content-wrapper">
-<p>If you select the check box, WS-Security is enabled for the Callout mediator.
Additional security fields appear when you select this check box.</p>
-</div></td>
-</tr>
-<tr class="odd">
-<td><strong>Specify as Inbound and Outbound Policies</strong></td>
-<td><div class="content-wrapper">
-<p>If this check box is selected, you can define separate security policies for the inbound and outbound messages (flows). This is done by entering the required policy keys in the <strong>Outbound Policy Key</strong> and <strong>Inbound Policy Key</strong> parameters, which are displayed when this check box is selected. You can click <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> to select a security policy saved in the <b>Registry</b> from the resource tree.</p>
-</div></td>
-</tr>
-<tr class="even">
-<td><strong>Policy Key</strong></td>
-<td>If the <strong>Specify as Inbound and Outbound Policies</strong> check box is not selected, this parameter is used to enter a key to access a security policy which will be applied to both inbound and outbound messages. You can click <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> to select a security policy saved in the <b>Registry</b> from the resource tree.</td>
-</tr>
-</tbody>
-</table>
-
-## Examples
-
-The following examples demonstrate the usage of the Callout mediator.
-
-### Example 1 - Performing a direct service invocation
-
-In this example, the Callout mediator directly invokes the `SimpleStockQuoteService` using the client request, gets the response, and sets the response as the first child of the SOAP message body. You can then use the [Send mediator]({{base_path}}/reference/mediators/send-mediator) to send the message back to the client.
-
-``` xml
-<callout serviceURL="http://localhost:9000/services/SimpleStockQuoteService"
-         action="urn:getQuote">
-    <source xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/"
-            xmlns:s12="http://www.w3.org/2003/05/soap-envelope"
-            xpath="s11:Body/child::*[fn:position()=1] | s12:Body/child::*[fn:position()=1]"/>
-    <target xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/"
-            xmlns:s12="http://www.w3.org/2003/05/soap-envelope"
-            xpath="s11:Body/child::*[fn:position()=1] | s12:Body/child::*[fn:position()=1]"/>
-</callout>
-```
-
-### Example 2 - Setting an HTTP method when invoking a REST service
-
-The example below uses a Callout mediator to set an HTTP method when invoking a REST service.
-
-!!! Info
-    For this, you need to define the following property: `<property name="HTTP_METHOD" expression="$axis2:HTTP_METHOD" scope="axis2-client"/>`
-
-``` xml
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="CalloutProxy"
-       startOnLoad="true"
-       statistics="disable"
-       trace="disable"
-       transports="http,https">
-   <target>
-      <inSequence>
-         <property name="enableREST"
-                   scope="axis2-client"
-                   type="BOOLEAN"
-                   value="true"/>
-         <property expression="$axis2:HTTP_METHOD"
-                   name="HTTP_METHOD"
-                   scope="axis2-client"/>
-         <callout initAxis2ClientOptions="false"
-                  serviceURL="http://localhost:8280/callout/CalloutRESTApi">
-            <source type="envelope"/>
-            <target key="response"/>
-         </callout>
-         <log level="custom">
-            <property expression="$ctx:response" name="MESSAGE###########################3"/>
-         </log>
-         <property expression="$ctx:response" name="res" type="OM"/>
-         <property action="remove" name="NO_ENTITY_BODY" scope="axis2"/>
-         <property name="RESPONSE" value="true"/>
-         <property name="messageType" scope="axis2" value="application/xml"/>
-         <header action="remove" name="To"/>
-         <payloadFactory media-type="xml">
-            <format>
-               <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
-                  <soapenv:Header/>
-                  <soapenv:Body>$1
-                  </soapenv:Body>
-               </soapenv:Envelope>
-            </format>
-            <args>
-               <arg evaluator="xml" expression="$ctx:res"/>
-            </args>
-         </payloadFactory>
-         <send/>
-      </inSequence>
-   </target>
-   <description/>
-</proxy>
-```
diff --git a/en/docs/reference/mediators/class-mediator.md b/en/docs/reference/mediators/class-mediator.md
deleted file mode 100644
index beebb779a2..0000000000
--- a/en/docs/reference/mediators/class-mediator.md
+++ /dev/null
@@ -1,138 +0,0 @@
-# Class Mediator
-
-The **Class Mediator** creates an instance of a custom-specified class
-and sets it as a mediator. The class must implement the
-`org.apache.synapse.api.Mediator` interface. If any
-properties are specified, the corresponding setter methods are invoked
-once on the class during initialization.
-
-The Class mediator is a custom Java class, which you need to maintain
-yourself. Therefore, it is recommended to use the Class mediator only
-for custom developments that are not frequently reused and for very
-user-specific scenarios for which no built-in mediator already
-provides the required functionality.
-
-Your class mediator might not be picked up and updated if you use an existing package when creating it. For best results, use [WSO2 Integration Studio]({{base_path}}/develop/WSO2-Integration-Studio) to debug Class mediators.
-
-## Syntax
-
-``` xml
-<class name="class-name">
-   <property name="string" (value="literal" | expression="[XPath|json-eval(JSON Path)]")/>*
-</class>
-```
-
-## Configuration
-
-**Class Name**: The name of the class. To load a class, enter the qualified name of the relevant class in this parameter and click **Load Class**.
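-
-To compile a custom class such as the `SimpleClassMediator` shown in the example below, the Synapse core library must be on the build classpath. The following Maven dependency is an illustrative sketch only — the coordinates are those of Apache Synapse, and the version placeholder must be replaced with the Synapse version that matches your Micro Integrator distribution:
-
-``` xml
-<!-- Illustrative sketch: replace the version with the Synapse version bundled with your Micro Integrator -->
-<dependency>
-    <groupId>org.apache.synapse</groupId>
-    <artifactId>synapse-core</artifactId>
-    <version>REPLACE_WITH_YOUR_SYNAPSE_VERSION</version>
-    <scope>provided</scope>
-</dependency>
-```
-
-The `provided` scope keeps the library out of the final artifact, since the Micro Integrator runtime already supplies it.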
-
-## Example
-
-In this configuration, the Micro Integrator sends the request message to the endpoint specified via the [Send mediator]({{base_path}}/reference/mediators/send-mediator). This endpoint is the Axis2 server running on port 9000. The response message is passed through a Class mediator before it is sent back to the client. Two parameters named `variable1` and `variable2` are passed to the instance mediator implementation class (`SimpleClassMediator`).
-
-!!! Info
-    If you want, you can pass the same variables as a value or an expression:
-
-    - Example for passing the variable as a value: `<property name="variable1" value="10"/>`
-
-    - Example for passing the variable as an expression: `<property name="variable2" expression="get-property('variable1')"/>`
-      For more information on using the get-property method, see the [Property mediator]({{base_path}}/reference/mediators/property-mediator).
-
-!!! Warning
-    Using class variables with expressions can cause the evaluated values to get mixed up when there are concurrent requests, leading to erroneous behavior.
-
-``` xml
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="errorHandler">
-   <makefault>
-      <code value="tns:Receiver" xmlns:tns="http://www.w3.org/2003/05/soap-envelope"/>
-      <reason value="Mediation failed."/>
-   </makefault>
-   <send/>
-</sequence>
-
-<proxy name="SimpleProxy" transports="http https" startOnLoad="true" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
-   <target>
-      <inSequence>
-         <send>
-            <endpoint name="stockquote">
-               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-            </endpoint>
-         </send>
-      </inSequence>
-      <outSequence>
-         <class name="samples.mediators.SimpleClassMediator">
-            <property name="variable1" value="10"/>
-            <property name="variable2" value="5"/>
-         </class>
-         <send/>
-      </outSequence>
-      <faultSequence>
-         <sequence key="errorHandler"/>
-      </faultSequence>
-   </target>
-</proxy>
-```
-
-See the following sample Class mediator, and note the use of the Synapse `MessageContext` and the Synapse API in it.
-
-``` java
-package samples.mediators;
-
-import org.apache.synapse.MessageContext;
-import org.apache.synapse.mediators.AbstractMediator;
-import org.apache.axiom.om.OMElement;
-import org.apache.axiom.om.OMAbstractFactory;
-import org.apache.axiom.om.OMFactory;
-import org.apache.axiom.soap.SOAPFactory;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import javax.xml.namespace.QName;
-
-public class SimpleClassMediator extends AbstractMediator {
-
-    private static final Log log = LogFactory.getLog(SimpleClassMediator.class);
-
-    private String variable1 = "10";
-
-    private String variable2 = "10";
-
-    private int variable3 = 0;
-
-    public SimpleClassMediator() {}
-
-    public boolean mediate(MessageContext mc) {
-        // Do something useful here.
-        // Note the access to the Synapse message context.
-        return true;
-    }
-
-    public String getType() {
-        return null;
-    }
-
-    public void setTraceState(int traceState) {
-        traceState = 0;
-    }
-
-    public int getTraceState() {
-        return 0;
-    }
-
-    public void setVariable1(String newValue) {
-        variable1 = newValue;
-    }
-
-    public String getVariable1() {
-        return variable1;
-    }
-
-    public void setVariable2(String newValue) {
-        variable2 = newValue;
-    }
-
-    public String getVariable2() {
-        return variable2;
-    }
-}
-```
diff --git a/en/docs/reference/mediators/clone-mediator.md b/en/docs/reference/mediators/clone-mediator.md
deleted file mode 100644
index 10ce82aaf1..0000000000
--- a/en/docs/reference/mediators/clone-mediator.md
+++ /dev/null
@@ -1,124 +0,0 @@
-# Clone Mediator
-
-The **Clone Mediator** can be used to clone a message into several messages. It resembles the [Scatter-Gather enterprise integration pattern](http://docs.wso2.org/display/IntegrationPatterns/Scatter-Gather). The Clone mediator is similar to the [Iterate mediator]({{base_path}}/reference/mediators/iterate-mediator).
The difference between the two mediators is that the Iterate mediator splits a message into different parts, whereas the Clone mediator makes multiple identical copies of the message.

!!! Info
    The Clone mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. Also, note that to get asynchronous behavior, the target must specify a sequence so that the message context can be injected into that sequence asynchronously. This cannot be achieved by adding only an endpoint to the target without a sequence.

## Syntax

``` java
<clone [continueParent=(true | false)]>
   <target [to="uri"] [soapAction="qname"] [sequence="sequence_ref"] [endpoint="endpoint_ref"]>
     <sequence>
       (mediator)+
     </sequence>?
     <endpoint>
       endpoint
     </endpoint>?
   </target>+
</clone>
```

## Configuration

The parameters available to configure the Clone mediator are as follows.

<table>
<thead>
<tr class="header">
<th>Parameter Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Clone ID</strong></td>
<td>Identification of messages created by the Clone mediator. This is particularly useful when aggregating responses of messages that are created using nested Clone mediators.</td>
</tr>
<tr class="even">
<td><strong>Sequential Mediation</strong></td>
<td><p>This parameter is used to specify whether the cloned messages should be processed sequentially or not. The processing is carried out based on the information relating to the sequence and endpoint specified in the <a href="#target-configuration">target configuration</a>. The possible values are as follows.</p>
<ul>
<li><strong>Yes</strong>: If this is selected, the cloned messages will be processed sequentially. Note that selecting <strong>Yes</strong> might cause delays due to high resource consumption.</li>
<li><strong>No</strong>: If this is selected, the cloned messages will not be processed sequentially. This is the default value and it results in better performance.</li>
</ul></td>
</tr>
<tr class="odd">
<td><strong>Continue Parent</strong></td>
<td><p>This parameter is used to specify whether the original message should be preserved or not. Possible values are as follows.</p>
<ul>
<li><strong>Yes</strong>: If this is selected, the original message will be preserved.</li>
<li><strong>No</strong>: If this is selected, the original message will be discarded. This is the default value.</li>
</ul></td>
</tr>
<tr class="even">
<td><div class="content-wrapper">
<strong>Number of Clones</strong>
</div></td>
<td><div class="content-wrapper">
<p>The parameter indicates the number of targets that currently exist for the Clone mediator. Click <strong>Add Clone Target</strong> to add a new target. Each time you add a target, it will be added as a child of the Clone mediator in the mediator tree as shown below.</p>
<p>Click <strong>Target</strong> to add the target configuration as described below.</p>
</div></td>
</tr>
</tbody>
</table>

### Target configuration

The parameters available to configure the target are as follows.

<table>
<thead>
<tr class="header">
<th>Parameter Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>SOAP Action</strong></td>
<td>The SOAP action of the message.</td>
</tr>
<tr class="even">
<td><strong>To Address</strong></td>
<td>The target endpoint address.</td>
</tr>
<tr class="odd">
<td><strong>Sequence</strong></td>
<td><p>This parameter is used to specify whether cloned messages should be mediated via a <b>sequence</b> or not, and to specify the sequence if they are to be further mediated. Possible options are as follows.</p>
<ul>
<li><strong>None</strong>: If this is selected, no further mediation will be performed for the cloned messages.</li>
<li><strong>Anonymous</strong>: If this is selected, you can define an anonymous <b>sequence</b> for the cloned messages by adding the required mediators as children to <strong>Target</strong> in the mediator tree.</li>
<li><strong>Pick From Registry</strong>: If this is selected, you can refer to a predefined <b>sequence</b> that is currently saved as a resource in the registry. Click either <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> as relevant to select the required <b>sequence</b> from the resource tree.</li>
</ul></td>
</tr>
<tr class="even">
<td><strong>Endpoint</strong></td>
<td><p>The <b>endpoint</b> to which the cloned messages should be sent. Possible options are as follows.</p>
<ul>
<li><strong>None</strong>: If this is selected, the cloned messages are not sent to an <b>endpoint</b>.</li>
<li><strong>Anonymous</strong>: If this is selected, you can define an anonymous endpoint within the clone target configuration to which the cloned messages should be sent. Click the <strong>Add</strong> link which appears after selecting this option to add the anonymous endpoint.</li>
<li><strong>Pick from Registry</strong>: If this is selected, you can refer to a predefined endpoint that is currently saved as a resource in the registry. Click either <strong>Configuration Registry</strong> or <strong>Governance Registry</strong> as relevant to select the required endpoint from the resource tree.</li>
</ul></td>
</tr>
</tbody>
</table>

## Example

In this example, the Clone mediator clones messages and redirects them to a **Default** endpoint and an existing sequence.

``` java
<clone xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <endpoint name="endpoint_urn_uuid_73A47733EB1E6F30812921609540392-849227072">
            <default />
        </endpoint>
    </target>
    <target sequence="test1" />
</clone>
```

diff --git a/en/docs/reference/mediators/data-mapper-json-schema-specification.md b/en/docs/reference/mediators/data-mapper-json-schema-specification.md
deleted file mode 100644
index 4739bf62b1..0000000000
--- a/en/docs/reference/mediators/data-mapper-json-schema-specification.md
+++ /dev/null
@@ -1,427 +0,0 @@
# Data Mapper JSON Schema Specification

The following specification defines the Data Mapper JSON schema of the ESB profile. It is intended to be the authoritative specification. Implementations of schemas for the Data Mapper mediator must adhere to this.

## Schema declaration

A schema is represented in JSON by one of:

- A JSON string, naming a defined type.
- A JSON object, of the form `{"type": "typeName" ...attributes...}`, where `typeName` is either a primitive or a derived type name, as defined below.
- A JSON array, representing a union of embedded types.
A Data Mapper schema should start with the `$schema` attribute with the Data Mapper schema version. For example: `{ "$schema": "http://wso2-data-mapper-json-schema/1.0v" }`

Also, it can contain the following optional attributes that define more information about the schema.

- **"id"**: a JSON string declaring a unique identifier for the schema.
- **"title"**: a JSON string defining the root element name.
- **"description"**: a JSON string providing a detailed description of the schema.
- **"type"**: a JSON string providing the element type.
- **"namespaces"**: a JSON array of JSON objects defining namespaces and prefix values used in the schema, as shown in the following example.

``` js
{ "$schema":"http://wso2-data-mapper-json-schema/1.0v",
"id":"http://wso2-data-mapper-json-schema-sample-o1",
"title":"RootElement",
"type":"object",
"description":"This schema represent any form of object without any restriction",
"namespaces":[
{ "prefix":"ns1", "url":"http://ns1.com"},
{"prefix":"ns2", "url":"http://ns2.com"}]
}
```

## Primitive types

Primitive types have no specified attributes. The set of primitive type names is as follows.

- **null**: no value
- **boolean**: a binary value
- **integer**: integer value
- **number**: rational numbers
- **string**: Unicode character sequence

Primitive type names are also defined type names. Thus, for example, the schema "string" is equivalent to: `{"type": "string"}`

## Complex types

The Data Mapper schema supports the following complex types: object and array.

### Object

Object uses the type name `"object"` and supports the following attributes.

- **"id"**: a JSON string declaring a unique identifier for the object (required).
- **"type"**: a JSON string providing the element type.
- **"description"**: a JSON string providing documentation to the user of this schema.
- **"properties"**: a JSON object listing fields (required). Each field is a JSON object.
- **"attributes"**: a JSON object listing XML attribute fields. Each field is a JSON object.

### Arrays

Arrays use the type name `"array"` and support the following attributes.

- **"items"**: the schema representing the items of the array.
- **"id"**: a JSON string declaring a unique identifier for the object (required).
- **"attributes"**: a JSON object listing XML attribute fields. Each field is a JSON object.
- **"description"**: a JSON string providing documentation to the user of this schema.

For example, an array of an object containing a field named `firstname` is declared as shown below.

``` java
{
"type": "array",
"items": [
{
  "id":"http://jsonschema.net/employee/0",
  "type":"object",
  "properties":{
    "firstname":{
      "id":"http://jsonschema.net/employee/0/firstname",
      "type":"string"
    }
  }
}]
}
```

## Defining WSO2 schemas to represent an XML payload

There are differences between the XML and JSON message specifications. Therefore, to represent XML message formats in JSON schemas, you need to introduce a few more configurations, as explained below.

### Representing XML attributes and namespaces in WSO2 JSON schemas

For example, you can build a JSON schema that follows the WSO2 specification from the following XML code, as described below.

``` xml
<?xml version="1.0" encoding="UTF-8"?>
<ns:employees xmlns:ns="http://wso2.employee.info" xmlns:sn="http://wso2.employee.address">
    <ns:employee>
        <ns:firstname>Mark</ns:firstname>
        <ns:lastname>Taylor</ns:lastname>
        <sn:addresses>
            <sn:address location="home">
                <sn:city postalcode="30000">LA</sn:city>
                <sn:road>baker street</sn:road>
            </sn:address>
            <sn:address location="office">
                <sn:city postalcode="10003">Colombo 03</sn:city>
                <sn:road>duplication road</sn:road>
            </sn:address>
        </sn:addresses>
    </ns:employee>
    <ns:employee>
        <ns:firstname>Mathew</ns:firstname>
        <ns:lastname>Hayden</ns:lastname>
        <sn:addresses>
            <sn:address location="home">
                <sn:city postalcode="60000">Sydney</sn:city>
                <sn:road>101 street</sn:road>
            </sn:address>
            <sn:address location="office">
                <sn:city postalcode="10003">Colombo 03</sn:city>
                <sn:road>duplication road</sn:road>
            </sn:address>
        </sn:addresses>
    </ns:employee>
</ns:employees>
```

!!! Info
    WSO2 Data Mapper supports only single-rooted XML messages. In the above example, `employees` is the root element of the payload, and it should be the value of the `title` element.

Also, there are two namespace values used. Those should be listed under the `namespaces` field with any prefix value.

!!! Info
    The prefix value can be any valid string that contains only [a-z, A-Z, 0-9] characters. You need not match them with the prefix values of the sample.

When you include the above information, the schema will be as follows.

!!! Info
    The `"required"` field specifies the fields that are mandatory at that level of the schema.

``` js
{ "$schema":"http://wso2-data-mapper-json-schema/1.0v",
"id":"http://wso2-data-mapper-json-schema-sample-o1",
"title":"employees",
"type":"object",
"description":"This schema represent wso2 employee xml message format",
"required":[
    "employees"
],
"namespaces":[
{ "prefix":"ns1", "url":"http://wso2.employee.info"},
{"prefix":"ns2", "url":"http://wso2.employee.address"}]
}
```

#### Including the child elements and attribute values

Define child elements under the `"properties"` field as a JSON object with fields that describe the child element. In the above employee example, the `employees` element contains an array of similar employee elements. Hence, this can be represented as the following schema.

``` js
{ "$schema":"http://wso2-data-mapper-json-schema/1.0v",
"id":"http://wso2-data-mapper-json-schema-sample-employees",
"title":"employees",
"type":"object",
"description":"This schema represent wso2 employee xml message format",
"properties": {
    "employee":{
        "id":"http://wso2-data-mapper-json-schema-sample-employees/employee",
        "type":"array",
        "items":[ ],
        "required":[ "arrayRequired" ]
    }
},
"required":[
    "employees"
],
"namespaces":[
{ "prefix":"ns1", "url":"http://wso2.employee.info"},
{"prefix":"ns2", "url":"http://wso2.employee.address"}]
}
```

Since the `employee` element is an array type element, it contains a field named `"items"`, which defines the element format of the array of employee elements. It contains three child fields, `firstname`, `lastname`, and `addresses`, with string, string, and object types respectively. Hence, when you include these elements in the schema, it will look as follows.

``` js
{ "$schema":"http://wso2-data-mapper-json-schema/1.0v",
"id":"http://wso2-data-mapper-json-schema-sample-employees",
"title":"employees",
"type":"object",
"description":"This schema represent wso2 employee xml message format",
"properties": {
    "employee":{
        "id":"http:/….employees/employee",
        "type":"array",
        "items":[{
            "id":"http://jsonschema.net/employee/0",
            "type":"object",
            "properties":{
                "firstname":{
                    "id":"http://.../employee/firstname",
                    "type":"string"
                },
                "lastname":{
                    "id":"http://.../employee/lastname",
                    "type":"string"
                },
                "addresses":{
                    "id":"http://.../employee/addresses",
                    "type":"object",
                    "properties":{
                        "address":{
                            "id":"http://.../employee/addresses/address",
                            "type":"array",
                            "items":[ … ]
                        }
                    }
                }
            },
            "required":[
                "firstname",
                "lastname",
                "address"
            ]
        }],
        "required":["arrayRequired"]
    }
},
"required":["employees"],
"namespaces":[
{ "prefix":"ns1", "url":"http://wso2.employee.info"},
{"prefix":"ns2", "url":"http://wso2.employee.address"}]
}
```

Define the XML attributes under the `"attributes"` field, similar to the `"properties"` field in the element definition. In the above employees example, the address array element and the city element contain attributes, and those can be represented as follows.

``` js
"addresses":{
    "id":"http://.../addresses",
    "type":"object",
    "properties":{
        "address":{
            "id":"http://.../addresses/address",
            "type":"array",
            "items":[
            {
                "id":"http://.../addresses/address/element",
                "type":"object",
                "properties":{
                    "city":{
                        "id":"http://.../addresses/address/element/city",
                        "type":"string",
                        "attributes":{
                            "postalcode":{
                                "id":".../element/city/postalcode",
                                "type":"string"
                            }
                        }
                    },
                    "road":{
                        "id":".../addresses/address/element/road",
                        "type":"string"
                    }
                }
            }],
            "attributes":{
                "location":{
                    "id":".../addresses/address/element/location",
                    "type":"string"
                }
            }
        }
    }
```

Now the format of the XML payload is complete. However, you still need to assign the namespaces. You already listed the namespaces used in the payload, with prefix values, under the `"namespaces"` tag in the root element. To assign a namespace to each element, add the prefix before the element name with a colon, as in `"ns1:employees"`, `"ns1:employee"`, and so on.

The complete schema to represent the employee payload is as follows.

``` java
{
  "$schema":"http://wso2-data-mapper-json-schema/1.0v",
  "id":"http://wso2-data-mapper-json-schema-sample-employees",
  "title":"ns2:employees",
  "type":"object",
  "description":"This schema represent wso2 employee xml message format",
  "properties": {
    "ns2:employee":{
      "id":"http://.../employee",
      "type":"array",
      "items":[
        {
          "id":"http://.../employee/element",
          "type":"object",
          "properties":{
            "ns2:firstname":{
              "id":"http://.../employee/element/firstname",
              "type":"string"
            },
            "ns2:lastname":{
              "id":"http://.../employee/element/lastname",
              "type":"string"
            },
            "ns1:addresses":{
              "id":"http://.../employees/employee/element/addresses",
              "type":"object",
              "properties":{
                "ns1:address":{
                  "id":"http://.../addresses/address",
                  "type":"array",
                  "items":[
                    {
                      "id":"http://.../addresses/address/0",
                      "type":"object",
                      "properties":{
                        "ns1:city":{
                          "id":"http://.../addresses/address/element/city",
                          "type":"string",
                          "attributes":{
                            "postalcode":{
                              "id":"http://.../city/-postalcode",
                              "type":"string"
                            }
                          }
                        },
                        "ns1:road":{
                          "id":"http://.../addresses/address/element/road",
                          "type":"string"
                        }
                      },
                      "attributes": {
                        "location":{
                          "id":"http://jsonschema.net/employees/employee/0/addresses/address/0/-location",
                          "type":"string"
                        }
                      }
                    }
                  ]
                }
              }
            }
          },
          "required":[
            "firstname",
            "lastname",
            "address"
          ]
        }
      ],
      "required":[
        "arrayRequired"
      ]
    }
  },
  "required":[
    "employees"
  ],
  "namespaces":[{ "prefix":"ns1", "url":"http://wso2.employee.address"},{"prefix":"ns2", "url":"http://wso2.employee.info"}]
}
```

diff --git a/en/docs/reference/mediators/data-mapper-mediator.md b/en/docs/reference/mediators/data-mapper-mediator.md
deleted file mode 100644
index 926ca7e915..0000000000
--- a/en/docs/reference/mediators/data-mapper-mediator.md
+++ /dev/null
@@ -1,652 +0,0 @@
# Data Mapper Mediator

The Data Mapper mediator is a data mapping solution that can be integrated into a mediation sequence. It converts and transforms one data format to another, or changes the structure of the data in a message. It provides WSO2 Integration Studio with a graphical mapping configuration and generates the files required for executing this graphical mapping configuration through the WSO2 Data Mapper engine.

WSO2 Data Mapper is an independent component that does not depend on any other WSO2 product. However, other products can use the Data Mapper to achieve/offer data mapping capabilities. The Data Mapper mediator is the intermediate component that brings the data mapping capability into WSO2 Micro Integrator.

The Data Mapper mediator finds the configuration files in the Registry and configures the Data Mapper Engine with the input message type (XML/JSON/CSV) and output message type (XML/JSON/CSV). It then takes the request message from the Micro Integrator message flow, uses the configured Data Mapper Engine to execute the transformation, and adds the output message back to the Micro Integrator message flow.

!!! Info
    - The Data Mapper mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.
    - If you are running on JDK 17 or later, Nashorn JS is not provided by the JDK, and you need to manually add the `org.openjdk.nashorn:nashorn-core` version 15.3 (https://mvnrepository.com/artifact/org.openjdk.nashorn/nashorn-core/15.3) and `org.ow2.asm:asm-util` version 9.2 (https://mvnrepository.com/artifact/org.ow2.asm/asm-util/9.2) JARs to the `<MI_HOME>/wso2/lib` directory.

## Syntax

```xml
<datamapper config="gov:datamapper/FoodMapping.dmc" inputSchema="gov:datamapper/FoodMapping_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/FoodMapping_outputSchema.json" outputType="XML"/>
```
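For instance, the following sketch shows how the mediator could sit inside a REST API mediation flow. It assumes the `FoodMapping` Data Mapper resources shown in the syntax above have already been created and deployed to the Governance Registry; the API name and context are illustrative.

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="FoodAPI" context="/food">
    <resource methods="POST">
        <inSequence>
            <!-- Transform the incoming JSON payload to the back-end XML format -->
            <datamapper config="gov:datamapper/FoodMapping.dmc"
                        inputSchema="gov:datamapper/FoodMapping_inputSchema.json"
                        inputType="JSON"
                        outputSchema="gov:datamapper/FoodMapping_outputSchema.json"
                        outputType="XML"/>
            <respond/>
        </inSequence>
    </resource>
</api>
```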
## Configuration

The parameters available for configuring the Data Mapper mediator are as follows.

<table>
<thead>
<tr class="header">
<th>Parameter name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Mapping Configuration</strong></td>
<td>The file which contains the script that is used to execute the mapping. You need to create a mapping configuration file using WSO2 Integration Studio and store it either in the <b>Configuration Registry</b> or <b>Governance Registry</b> to select and upload it from here.</td>
</tr>
<tr class="even">
<td><strong>Input Schema</strong></td>
<td>The JSON schema which represents the input message format. You need to create an input schema file using WSO2 Integration Studio and store it either in the <b>Configuration Registry</b> or <b>Governance Registry</b> to select and upload it from here.</td>
</tr>
<tr class="odd">
<td><strong>Output Schema</strong></td>
<td>The JSON schema which represents the output message format. You need to create an output schema file using the WSO2 Integration Studio plugin and store it either in the <b>Configuration Registry</b> or <b>Governance Registry</b> to select and upload it from here.</td>
</tr>
<tr class="even">
<td><strong>Input Type</strong></td>
<td>Expected input message type (XML/JSON/CSV)</br>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>By default, the Input Type for the Data Mapper is XML regardless of the Input Schema type. Therefore, based on your requirement, you may need to change the Input Type manually.</p>
</div>
</td>
</tr>
<tr class="odd">
<td><strong>Output Type</strong></td>
<td>Target output message type (XML/JSON/CSV)</br>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>By default, the Output Type for the Data Mapper is XML regardless of the Output Schema type. Therefore, based on your requirement, you may need to change the Output Type manually.</p>
</div></td>
</tr>
</tbody>
</table>

## Components of Data Mapper

WSO2 Data Mapper consists of two components: the <b>Data Mapper Tooling</b> and the <b>Data Mapper Engine</b>.

### Data Mapper Tooling

The Data Mapper Tooling component is the interface used to create the configuration files that are required by the Data Mapper Engine to execute the mapping. The following configuration files are needed by the Data Mapper Engine.

- Input schema file (`<data_mapper_name>_inputSchema.json`)
- Output schema file (`<data_mapper_name>_outputSchema.json`)
- Mapping configuration file (`<data_mapper_name>.dmc`)
- XSLT stylesheet (`<data_mapper_name>_xsltStyleSheet.xml`) - This is **applicable only when using XML to XML transformations**.

These files are generated by the Data Mapper Tool and saved in a Registry Resource project, which you deploy in WSO2 Micro Integrator as shown in the example below.

[![Generated configuration files]({{base_path}}/assets/img/integrate/mediators/119131284/autogen_config_files.png){: style="width:50%"}]({{base_path}}/assets/img/integrate/mediators/119131284/autogen_config_files.png)

<div class="admonition info">
<p class="admonition-title">Info</p>
<p>
<ul><li>The <code>.datamapper</code> and <code>.datamapper_diagram</code> files shown in the example above contain metadata related to the Data Mapper diagram.</br> They are ignored when you deploy the project to a server to be used by the Data Mapper Engine.</li>
<li>Only the two schema files and the <code>.dmc</code> (Data Mapper Configuration) file get deployed.</li>
<li>The <code>XSLT stylesheet</code> is an auto-generated file that optimizes XML to XML transformation. Optimization of XML to XML transformation is enabled by default.</li>
</ul>
</p>
</div>

<div class="admonition note">
<p class="admonition-title">Troubleshooting</p>
<p>
<b>XML to XML transformation is incorrect</b></br>
The XML to XML transformation optimization is enabled by default to increase performance. However, if your XML to XML transformation is not happening as anticipated (e.g., the data mapping related to the XML to XML transformation is incorrect for certain scenarios), disable the XML to XML transformation optimization as follows:
<ol>
<li>Click on the <code>&lt;data_mapper_name&gt;_xsltStyleSheet.xml</code> file.</li>
<li>Change the <code>xmlns:notXSLTCompatible</code> property in the <code>XSLT stylesheet</code> to <code>true</code>.</br>This will disable the XML to XML transformation optimization.</li>
</ol>
</p>
</div>

#### Input and output schema files

Input and output schema files are custom-defined JSON schemas that define the input/output format of input/output messages. The Data Mapper tool generates them when loading the input and output files, as shown below.

!!! Info
    You can also create the input and output JSON schemas manually using the Data Mapper Diagram Editor. For instructions, see [Creating a JSON Schema Manually]({{base_path}}/reference/mediators/data-mapper-json-schema-specification).

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134796.png)
[![]({{base_path}}/assets/img/integrate/mediators/119131284/119131291.png){: style="width:70%"}]({{base_path}}/assets/img/integrate/mediators/119131284/119131291.png)

You can load the following input/output message formats:

!!! Info
    When loading a sample input XML file, you cannot have the default namespace (i.e., a namespace element without a prefix). Also, you need to use the same prefix in all occurrences that refer to the same namespace within one XML file. For example, see the use of the prefix `axis2ns11` in the example below.
Sample input XML file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header>
    <axis2ns11:LimitInfoHeader xmlns:axis2ns11="urn:partner.soap.sforce.com">
        <axis2ns11:limitInfo>
            <axis2ns11:current>42336</axis2ns11:current>
            <axis2ns11:limit>83000</axis2ns11:limit>
            <axis2ns11:type>API REQUESTS</axis2ns11:type>
        </axis2ns11:limitInfo>
    </axis2ns11:LimitInfoHeader>
</soapenv:Header>
<soapenv:Body>
    <axis2ns11:records xmlns:axis2ns11="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="sf:sObject">
        <sf:type xmlns:sf="urn:sobject.partner.soap.sforce.com">Account</sf:type>
        <sf:Id xmlns:sf="urn:sobject.partner.soap.sforce.com">001E0000002SFO2IAO</sf:Id>
        <sf:CreatedDate xmlns:sf="urn:sobject.partner.soap.sforce.com">2011-03-15T00:15:00.000Z</sf:CreatedDate>
        <sf:Id xmlns:sf="urn:sobject.partner.soap.sforce.com">001E0000002SFO2IAO</sf:Id>
        <sf:Name xmlns:sf="urn:sobject.partner.soap.sforce.com">WSO2</sf:Name>
    </axis2ns11:records>
</soapenv:Body>
</soapenv:Envelope>
```

- **XML:** to load a sample XML file
- **JSON:** to load a sample JSON file
- **CSV:** to load a sample CSV file with column names as the first record
- **JSONSCHEMA:** to load a WSO2 Data Mapper JSON schema
- **CONNECTOR:** to use the Data Mapper with Connectors. Connectors contain JSON schemas for each operation that define the message formats the operation responds with and expects. Therefore, when you integrate connectors in a project, this Connector option searches through the workspace and finds the available Connectors. Then, you can select the respective Connector in the operation so that the related JSON schema is loaded for the Data Mapper by the tool.

#### Mapping configuration file

This is a JavaScript file generated from the diagram you draw in the Data Mapper Diagram Editor by connecting input elements to output elements. Every operation you define in the diagram gets converted to a JavaScript operation.

### Data Mapper Engine

You need the following information to configure the Data Mapper Engine:

- Input message type
- Output message type
- Input schema
- Output schema
- Mapping configuration

At runtime, the Data Mapper Engine gets the input message and the runtime variable map object, and outputs the transformed message. The Data Mapper Engine uses the Java Scripting API to execute the mapping configuration. Therefore, if your runtime is Java 7, it uses the Rhino JS engine, and if your runtime is Java 8, it uses the Nashorn JS engine.

When you use Java 7, there are several limitations in the Rhino engine that directly affect the Data Mapper Engine. There are several functions that Rhino does not support, for example, String object functions such as `startsWith()` and `endsWith()`. Therefore, the Rhino engine may have limitations in executing those when using custom functions and operators.

#### Using product-specific runtime variables

The Data Mapper Engine also allows you to use runtime product-specific variables in the mapping. The intermediate component should construct a map object containing the runtime product-specific variables and send it to the Data Mapper Engine, so that these variables become available when the mapping happens in the Data Mapper Engine.

For example, the Data Mapper mediator provides properties in the `axis2`, `transport`, `synapse`, `axis2client`, and `operation` scopes. In the Data Mapper diagram, you can use the **Property operator**, define the scope and the property name, and use it in the mapping. Then, the Data Mapper mediator identifies the properties required to execute the mapping, populates a map with them, and sends it to the Data Mapper Engine.
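As a sketch of this behavior, a property set earlier in the flow can be read inside the mapping through a Property operator that uses the same scope and name. The property name and Data Mapper resources below are illustrative.

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="orderMappingSequence">
    <!-- ORDER_ID is populated into the runtime variable map and becomes readable
         from a Property operator configured with the 'default' scope -->
    <property name="ORDER_ID" scope="default" value="A1001"/>
    <datamapper config="gov:datamapper/OrderMapping.dmc"
                inputSchema="gov:datamapper/OrderMapping_inputSchema.json"
                inputType="JSON"
                outputSchema="gov:datamapper/OrderMapping_outputSchema.json"
                outputType="JSON"/>
</sequence>
```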
### Data Mapper element and attribute types

Following are the element and attribute types that are supported by the Data Mapper.

- {} - represents object elements
- \[\] - represents array elements
- \<\> - represents primitive field values
- A - represents XML attribute values

### Data Mapper operations

The operations palette on the left-hand side of the WSO2 Data Mapping Diagram Editor displays the operations that the Data Mapper supports, as shown below.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119131286.png)

You can drag and drop these operations to the Editor area. There are seven categories of operations, as follows:

- Links
- Common
- Arithmetic
- Conditional
- Boolean
- Type Conversion
- String

#### Links

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134807.png)

**Data Mapping Link:** maps elements with other operators and elements.

#### Common

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134806.png) **Constant:** defines String, number, or boolean constant values.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134805.png) **Custom Function:** defines custom functions to use in the mapping.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134804.png) **Properties:** uses product-specific runtime variables.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134802.png) **Global Variable:** instantiates global variables that you can access from anywhere.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134803.png) **Compare:** compares two inputs in the mapping.

#### Arithmetic

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134936.png) **Add:** adds two numbers.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134935.png) **Subtract:** subtracts two or more numbers.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134934.png) **Multiply:** multiplies two or more numbers.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134933.png) **Divide:** divides two numbers.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134932.png) **Ceiling:** derives the ceiling value of a number (closest larger integer value).

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134931.png) **Floor:** derives the floor value of a number (closest lower integer value).

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134930.png) **Round:** derives the nearest integer value.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134929.png) **Set Precision:** formats a number into a specified length.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134928.png) **Absolute Value:** derives the absolute value of a rational number.
![]({{base_path}}/assets/img/integrate/mediators/119131284/119134927.png) **Min:** derives the minimum number from the given inputs.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134926.png) **Max:** derives the maximum number from the given inputs.

#### Conditional

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134949.png) **IfElse:** uses a condition to select one input from the given two.

#### Boolean

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134954.png) **AND:** performs the boolean AND operation on inputs.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134953.png) **OR:** performs the boolean OR operation on inputs.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134952.png) **NOT:** performs the boolean NOT operation on inputs.

#### Type conversion

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134964.png) **StringToNumber:** converts a String value to a number ("0" -> 0).

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134963.png) **StringToBoolean:** converts a String value to a boolean ("true" -> true).

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134962.png) **ToString:** converts a number or a boolean value to a String.

#### String

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134978.png) **Concat:** concatenates two or more Strings.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134977.png) **Split:** splits a String by a matching String value.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134975.png) **Uppercase:** converts a String to uppercase letters.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134976.png) **Lowercase:** converts a String to lowercase letters.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134974.png) **String Length:** gets the length of the String.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134973.png) **StartsWith:** checks whether a String starts with a specific value. (This is not supported in Java 7.)

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134972.png) **EndsWith:** checks whether a String ends with a specific value. (This is not supported in Java 7.)

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134971.png) **Substring:** extracts a part of the String value.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134970.png) **Trim:** removes white spaces from the beginning and end of a String.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134969.png) **Replace:** replaces the first occurrence of a target String with another.

![]({{base_path}}/assets/img/integrate/mediators/119131284/119134968.png) **Match:** checks whether the input matches a (JS) regular expression.

## AI Data Mapper

WSO2 Integration Studio allows you to seamlessly generate the input-output mapping using its sophisticated AI Data Mapping generator. You simply have to load the input and output to the relevant sections as shown below and click **Apply**. Alternatively, you can manually create the mapping using the graphical drag-and-drop tool.

!!! Note "Important"
    The AI Data Mapping generator uploads the data to an internal WSO2 server for processing. However, your data is <b>not</b> stored by WSO2.
- -![example one Data mapper diagram]({{base_path}}/assets/img/integrate/mediators/119131284/ai_datamapper.png) - -## Examples - -### Example 1 - Creating a SOAP payload with namespaces - -This example creates a Salesforce login SOAP payload using a JSON -payload. The login payload consists of XML namespaces. Even though the -JSON payload does not contain any namespace information, the output JSON -schema will be generated with XML namespace information using the -provided SOAP payload. - -![example one Data mapper diagram]({{base_path}}/assets/img/integrate/mediators/119131284/119131296.png) - -The sample input JSON payload is as follows. - -``` js -{ - "name":"Watson", - "password":"watson@123" -} -``` - -The sample output XML is as follows. - -``` xml -<soapenv:Envelope xmlns:urn="urn:enterprise.soap.sforce.com" xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope/"> - <soapenv:Body> - <urn:login> - <urn:username><b>user@domain.com</b></urn:username> - <urn:password><b>secret</b></urn:password> - </urn:login> - </soapenv:Body> -</soapenv:Envelope> -``` - -### Example 2 - Mapping SOAP header elements - -This example demonstrates how to map SOAP header elements along with -SOAP body elements to create a certain SOAP payload, by creating a -Salesforce convertLead SOAP payload using a JSON payload. The Convert -Lead SOAP payload needs mapping SOAP header information. -E.g. ` <urn:sessionId>QwWsHJyTPW.1pd0_jXlNKOSU</urn:sessionId> ` - -![]({{base_path}}/assets/img/integrate/mediators/119131284/119131295.png) - -The sample input JSON payload is as follows. - -``` js -{ - "owner":{ - "ID":"005D0000000nVYVIA2", - "name":"Smith", - "city":"CA", - "code":"94041", - "country":"US" - }, - "lead":{ - "ID":"00QD000000FP14JMAT", - "name":"Carl", - "city":"NC", - "code":"97788", - "country":"US" - }, - "sendNotificationEmail":"true", - "convertedStatus":"Qualified", - "doNotCreateOpportunity":"true", - "opportunityName":"Partner Opportunity", - "overwriteLeadSource":"true", - "sessionId":"QwWsHJyTPW.1pd0_jXlNKOSU" -} -``` - -The sample output XML is as follows. - -``` xml -<?xml version="1.0" encoding="utf-8"?> -<soapenv:Envelope xmlns:urn="urn:enterprise.soap.sforce.com" xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope/"> - <soapenv:Header> - <urn:SessionHeader> - <urn:sessionId>QwWsHJyTPW.1pd0_jXlNKOSU</urn:sessionId> - </urn:SessionHeader> - </soapenv:Header> - <soapenv:Body> - <urn:convertLead > - <urn:leadConverts> <!-- Zero or more repetitions --> - <urn:convertedStatus>Qualified</urn:convertedStatus> - <urn:doNotCreateOpportunity>false</urn:doNotCreateOpportunity> - <urn:leadId>00QD000000FP14JMAT</urn:leadId> - <urn:opportunityName>Partner Opportunity</urn:opportunityName> - <urn:overwriteLeadSource>true</urn:overwriteLeadSource> - <urn:ownerId>005D0000000nVYVIA2</urn:ownerId> - <urn:sendNotificationEmail>true</urn:sendNotificationEmail> - </urn:leadConverts> - </urn:convertLead> -</soapenv:Body> -</soapenv:Envelope> -``` - -### Example 3 - Mapping primitive types - -This example demonstrates how you can map an XML payload with integer, -boolean etc. values into a JSON payload with required primitive types, -by specifying the required primitive type in the JSON schema. - -![]({{base_path}}/assets/img/integrate/mediators/119131284/119131294.png) - -The sample input XML payload is as follows. 
``` xml
<?xml version="1.0" encoding="UTF-8" ?>
<name>app_name</name>
<version>version</version>
<manifest_version>2</manifest_version>
<description>description_text</description>
<container>GOOGLE_DRIVE</container>
<api_console_project_id>YOUR_APP_ID</api_console_project_id>
<gdrive_mime_types>
    <http://drive.google.com/intents/opendrivedoc>
        <type>image/png</type>
        <type>image/jpeg</type>
        <type>image/gif</type>
        <type>application/vnd.google.drive.ext-type.png</type>
        <type>application/vnd.google.drive.ext-type.jpg</type>
        <type>application/vnd.google.drive.ext-type.gif</type>
        <href>http://your_web_url/</href>
        <title>Open</title>
        <disposition>window</disposition>
    </http://drive.google.com/intents/opendrivedoc>
</gdrive_mime_types>
<icons>
    <128>icon_128.png</128>
</icons>
<app>
    <launch>
        <web_url>http://yoursite.com</web_url>
    </launch>
</app>
```

The sample output JSON is as follows.

``` js
{
"name" : "app_name",
"version" : "version",
"manifest_version" : 2,
"description" : "description_text",
"container" : "GOOGLE_DRIVE",
"api_console_project_id" : "YOUR_APP_ID",
"gdrive_mime_types": {
    "http://drive.google.com/intents/opendrivedoc": [
        {
            "type": ["image/png", "image/jpeg", "image/gif", "application/vnd.google.drive.ext-type.png",
            "application/vnd.google.drive.ext-type.jpg","application/vnd.google.drive.ext-type.gif"],
            "href": "http://your_web_url/",
            "title" : "Open",
            "disposition" : "window"
        }
    ]
},
"icons": {
    "128": "icon_128.png"
},
"app" : {
    "launch" : {
        "web_url" : "http://yoursite.com"
    }
}
}
```

### Example 4 - Mapping XML to CSV

This example demonstrates how you can map an XML payload to CSV format.

!!! Info
    If you specify special characters (e.g., `&`, `&amp;`) within the `<text></text>` tag when converting from CSV to CSV, they will be escaped as follows by default.

    - `&` -> `&amp;`
    - `&amp;` -> `&amp;amp;`
    - `<` -> `&lt;`
    - `&lt;` -> `&amp;lt;`

To avoid this and to display the exact special characters as text in the returned output, add the following properties in the Synapse configuration.

``` xml
```

![]({{base_path}}/assets/img/integrate/mediators/119131284/119131293.png)

The sample input XML payload is as follows.

``` xml
<?xml version="1.0" encoding="UTF-8"?>
<PurchaseOrder>
    <Address>
        <Name>James Yee</Name>
        <Street>Downtown Bartow</Street>
        <City>Old Town</City>
        <State>PA</State>
        <Zip>95819</Zip>
        <Country>USA</Country>
    </Address>
    <Address>
        <Name>Ellen Smith</Name>
        <Street>123 Maple Street</Street>
        <City>Mill Valley</City>
        <State>CA</State>
        <Zip>10999</Zip>
        <Country>USA</Country>
    </Address>
    <DeliveryNotes>Please leave packages in shed by driveway.</DeliveryNotes>
</PurchaseOrder>
```
The sample output CSV is as follows.

``` text
Name,Street,City,State,Zip,Country
James Yee,Downtown Bartow,Old Town,PA,95819,USA
Ellen Smith,123 Maple Street,Mill Valley,CA,10999,USA
```

### Example 5 - Mapping XSD to JSON

This example demonstrates how you can map an XSD payload to JSON format.

![example 5 mapping]({{base_path}}/assets/img/integrate/mediators/119131284/119131293.png)

The sample input XSD payload is as follows.

``` xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="books">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="book">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="id" type="xs:string"/>
                            <xs:element name="author" type="xs:string"/>
                            <xs:element name="title" type="xs:string"/>
                            <xs:element name="price" type="xs:string"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>
```

The sample output JSON is as follows.

``` java
{
    "books": {
        "book": {
            "id": "001",
            "author": "Writer",
            "title": "Great book on nature",
            "price": "44.95"
        }
    }
}
```

diff --git a/en/docs/reference/mediators/db-report-mediator.md b/en/docs/reference/mediators/db-report-mediator.md
deleted file mode 100644
index e674035a91..0000000000
--- a/en/docs/reference/mediators/db-report-mediator.md
+++ /dev/null
@@ -1,449 +0,0 @@
# DB Report Mediator

The **DB Report Mediator** is similar to the [DBLookup Mediator]({{base_path}}/reference/mediators/dblookup-mediator). The difference between the two mediators is that the DB Report mediator writes information to a database using the specified insert SQL statement.

!!! Info
    The DB Report mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.

!!! Note
    Currently, the DB Report mediator does not support the `json-eval` expression used to extract the parameters.

## Syntax

The syntax of the DB Report mediator changes depending on whether you connect to the database using a connection pool or using a data source.

- **Connection Pool**
    ``` java
    <dbreport>
       <connection>
          <pool>
            (
             <driver/>
             <url/>
             <user/>
             <password/>
            )
             <property name="name" value="value"/>*
          </pool>
       </connection>
       <statement>
          <sql>insert into something values(?, ?, ?, ?)</sql>
          <parameter [value="literal" | expression="xpath"] type="CHAR|VARCHAR|LONGVARCHAR|NUMERIC|DECIMAL|BIT|TINYINT|SMALLINT|INTEGER|BIGINT|REAL|DOUBLE|DATE|TIME|TIMESTAMP"/>*
       </statement>+
    </dbreport>
    ```

- **Data source**
    The syntax of the DB Report mediator further differs based on whether the connection to the database is made using an external datasource or a Carbon datasource. Click on the relevant tab to view the required syntax.

    === "External Datasource"
        ``` java
        <dbreport>
           <connection>
              <pool>
                 <dsName/>
                 <icClass/>
                 <url/>
                 <user/>
                 <password/>
                 <property name="name" value="value"/>*
              </pool>
           </connection>
           <statement>
              <sql>insert into something values(?, ?, ?, ?)</sql>
              <parameter [value="literal" | expression="xpath"] type="CHAR|VARCHAR|LONGVARCHAR|NUMERIC|DECIMAL|BIT|TINYINT|SMALLINT|INTEGER|BIGINT|REAL|DOUBLE|DATE|TIME|TIMESTAMP"/>*
           </statement>+
        </dbreport>
        ```

    === "Carbon Datasource"
        ``` java
        <dbreport>
           <connection>
              <pool>
                 <dsName/>
              </pool>
           </connection>
           <statement>
              <sql>insert into something values(?, ?, ?, ?)</sql>
              <parameter [value="literal" | expression="xpath"] type="CHAR|VARCHAR|LONGVARCHAR|NUMERIC|DECIMAL|BIT|TINYINT|SMALLINT|INTEGER|BIGINT|REAL|DOUBLE|DATE|TIME|TIMESTAMP"/>*
           </statement>+
        </dbreport>
        ```

## Configurations

The configuration of the DB Report mediator changes depending on whether you connect to the database using a connection pool or using a data source.

### Connection Pool configurations

The parameters available to configure the DB Report mediator are as follows.

!!! Info
    When specifying the DB connection using a connection pool, other than specifying parameter values inline, you can also specify the following parameter values of the connection information (i.e., Driver, URL, User, and Password) as registry entries. The advantage of specifying a parameter value as a registry entry is that the same connection information configuration can be used in different environments simply by changing the registry entry value. To do this, give the registry path within the `key` attribute as shown in the example below.
    ```
    <dbreport>
       <connection>
          <pool>
             <password key="conf:/repository/esb/database/password"/>
             <user key="conf:/repository/esb/database/username"/>
             <url key="conf:/repository/esb/database/url"/>
             <driver key="conf:/repository/esb/database/driver"/>
          </pool>
       </connection>
    </dbreport>
    ```
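For comparison, a minimal sketch of the same connection declared fully inline, with a statement attached; the driver, URL, credentials, and SQL below are placeholders.

``` xml
<dbreport xmlns="http://ws.apache.org/ns/synapse">
   <connection>
      <pool>
         <!-- Inline connection details instead of registry keys -->
         <driver>com.mysql.jdbc.Driver</driver>
         <url>jdbc:mysql://localhost:3306/testdb</url>
         <user>dbuser</user>
         <password>dbpassword</password>
      </pool>
   </connection>
   <statement>
      <sql>insert into company values(?, ?)</sql>
      <parameter expression="//name/text()" type="VARCHAR"/>
      <parameter expression="//price/text()" type="DOUBLE"/>
   </statement>
</dbreport>
```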
<table>
<thead>
<tr class="header">
<th>Parameter Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Use Transaction</strong></td>
<td><p>This parameter specifies whether the database operation should be performed within a transaction or not. Click <strong>Yes</strong> or <strong>No</strong> as relevant.</p>
<p>To include multiple database reports within the same database transaction inside a particular message flow, set the value of this <strong>Use Transaction</strong> property to <strong>Yes</strong>. However, when you have more reports, it takes more time to complete a transaction, and when multiple messages flow in, multiple transactions can become active at the same time.</p>
<p>By default, the maximum number of active transactions is 50, as imposed by the Atomikos JTA implementation. To override this, create a file named <code>transaction.properties</code> that includes the following property, and add the file to the <code>&lt;MI_HOME&gt;/lib</code> directory:</p>
<p><code>com.atomikos.icatch.max_actives=1000</code></p>
<p>Specifying the value as -1 allows unlimited transactions. Change the value accordingly to limit the number of active transactions based on your environment and the concurrency level of the service.</p>
<p>If you click <strong>Yes</strong> to perform the database operation within a transaction, you need to ensure the following:</p>
<ul>
<li>The DB Report mediator configuration must be preceded by a Transaction mediator configuration with <code>new</code> as the transaction action.</li>
<li>The DB Report mediator configuration must be followed by a Transaction mediator configuration with <code>commit</code> as the transaction action.</li>
</ul>
<p>For detailed information about configuring Transaction mediators, see the Transaction Mediator documentation.</p></td>
</tr>
<tr class="even">
<td><strong>Driver</strong></td>
<td>The class name of the database driver.</td>
</tr>
<tr class="odd">
<td><strong>Url</strong></td>
<td><p>The JDBC URL of the database that data will be written to.</p>
<p>Set the <code>autoReconnect</code> parameter to <code>true</code> to help reconnect to the database when the connection between the client and the database is dropped. For example, <code>&lt;url&gt;jdbc:mysql://&lt;ip&gt;:&lt;port&gt;/test?autoReconnect=true&lt;/url&gt;</code>.</p></td>
</tr>
<tr class="even">
<td><strong>User</strong></td>
<td>The user name for connecting to the database.</td>
</tr>
<tr class="odd">
<td><strong>Password</strong></td>
<td>The password used to connect to the database.</td>
</tr>
</tbody>
</table>

To add properties to the DBReport mediator, start with the following parameters:

| Parameter Name | Description |
|----------------|--------------------------------------------------|
| **Name**       | The name of the property.                        |
| **Value**      | The value of the property.                       |
| **Action**     | This parameter enables a property to be deleted. |
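For instance, a sketch of how such properties attach to the `<pool>` element of the configuration; the property names come from the table that follows, and the connection details and values are illustrative.

``` xml
<pool>
   <driver>com.mysql.jdbc.Driver</driver>
   <url>jdbc:mysql://localhost:3306/testdb</url>
   <user>dbuser</user>
   <password>dbpassword</password>
   <!-- Pool tuning properties; see the table below for the full list -->
   <property name="autocommit" value="false"/>
   <property name="maxactive" value="10"/>
   <property name="validationquery" value="select 1"/>
</pool>
```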

Once you have defined the above parameters, enter the following properties:

<table>
<thead>
<tr class="header">
<th>Name</th>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>autocommit</td>
<td>true / false</td>
<td>The auto-commit state of the connections created by the pool.</td>
</tr>
<tr class="even">
<td>isolation</td>
<td>Connection.TRANSACTION_NONE / Connection.TRANSACTION_READ_COMMITTED / Connection.TRANSACTION_READ_UNCOMMITTED / Connection.TRANSACTION_REPEATABLE_READ / Connection.TRANSACTION_SERIALIZABLE</td>
<td>The isolation state of the connections created by the pool.</td>
</tr>
<tr class="odd">
<td>initialsize</td>
<td>int</td>
<td>The initial number of connections created when the pool is started.</td>
</tr>
<tr class="even">
<td>maxactive</td>
<td>int</td>
<td>The maximum number of active connections that can be allocated from this pool at a given time. When this maximum limit is reached, no more active connections will be created by the connection pool. Specify 0 or a negative value if you do not want to set a limit.</td>
</tr>
<tr class="odd">
<td>maxidle</td>
<td>int</td>
<td>The maximum number of idle connections to be allowed in the connection pool at a given time. Specify 0 or a negative value if you want the pool to wait indefinitely.</td>
</tr>
<tr class="even">
<td>maxopenstatements</td>
<td>int</td>
<td>The maximum number of open statements that can be allocated from the statement pool at a given time. When this maximum limit is reached, no more new statements will be created by the statement pool. Specify 0 or a negative value if you do not want to set a limit.</td>
</tr>
<tr class="odd">
<td>maxwait</td>
<td>long</td>
<td>The maximum number of milliseconds that the connection pool will wait for a connection to return before throwing an exception when there are no connections available in the pool. Specify 0 or a negative value if you want the pool to wait indefinitely.</td>
</tr>
<tr class="even">
<td>minidle</td>
<td>int</td>
<td>The minimum number of idle connections to be allowed in the connection pool at a given time. Specify 0 or a negative value if you want the pool to wait indefinitely.</td>
</tr>
<tr class="odd">
<td>poolstatements</td>
<td>true / false</td>
<td>If the value is true, statement pooling is enabled for the pool.</td>
</tr>
<tr class="even">
<td>testonborrow</td>
<td>true / false</td>
<td>If the value is true, objects are validated before they are borrowed from the pool. An object which fails the validation test will be dropped from the pool and another object in the pool will be picked instead.</td>
</tr>
<tr class="odd">
<td>testwhileidle</td>
<td>true / false</td>
<td>If the value is true, the objects in the pool will be validated using an idle object evictor (if any exists). Any object which fails this validation test would be dropped from the pool.</td>
</tr>
<tr class="even">
<td>validationquery</td>
<td>String</td>
<td><p>The SQL query that will be used to validate connections from this pool before returning them to the caller.</p>
<p>This property helps to reconnect to the database when the database connection between the client and the database is dropped. For example, <code>&lt;property name="validationquery" value="select 1"/&gt;</code>.</p></td>
</tr>
</tbody>
</table>

### Datasource configurations

The configuration of the DB Report mediator further differs based on whether the connection to the database is made using an external datasource or a Carbon datasource.

#### External Datasource
The parameters available to configure the DB Report mediator as an external datasource are as follows.

| Parameter Name      | Description |
|---------------------|-------------|
| **Use Transaction** | This parameter specifies whether the database operation should be performed within a transaction or not. Click **Yes** or **No** as relevant. |
| **Initial Context** | The initial context factory class. The corresponding `Java` environment property is `java.naming.factory.initial`. |
| **Datasource Name** | The naming service provider URL. The corresponding `Java` environment property is `java.naming.provider.url`. |
| **URL**             | The JDBC URL of the database that data will be written to. |
| **User**            | The user name used to connect to the database. |
| **Password**        | The password used to connect to the database. |

To add properties to the DBReport mediator, start with the following parameters:

| Parameter Name | Description |
|----------------|--------------------------------------------------|
| **Name**       | The name of the property.                        |
| **Value**      | The value of the property.                       |
| **Action**     | This parameter enables a property to be deleted. |

Once you have defined the above parameters, enter the following properties:

| Name              | Value | Description |
|-------------------|-------|-------------|
| autocommit        | true / false | The auto-commit state of the connections created by the pool. |
| isolation         | Connection.TRANSACTION\_NONE / Connection.TRANSACTION\_READ\_COMMITTED / Connection.TRANSACTION\_READ\_UNCOMMITTED / Connection.TRANSACTION\_REPEATABLE\_READ / Connection.TRANSACTION\_SERIALIZABLE | The isolation state of the connections created by the pool. |
| initialsize       | int | The initial number of connections created when the pool is started. |
| maxactive         | int | The maximum number of active connections that can be allocated from this pool at a given time. When this maximum limit is reached, no more active connections will be created by the connection pool. Specify 0 or a negative value if you do not want to set a limit. |
| maxidle           | int | The maximum number of idle connections to be allowed in the connection pool at a given time. Specify 0 or a negative value if you want the pool to wait indefinitely. |
| maxopenstatements | int | The maximum number of open statements that can be allocated from the statement pool at a given time. When this maximum limit is reached, no more new statements will be created by the statement pool. Specify 0 or a negative value if you do not want to set a limit. |
| maxwait           | long | The maximum number of milliseconds that the connection pool will wait for a connection to return before throwing an exception when there are no connections available in the pool. Specify 0 or a negative value if you want the pool to wait indefinitely. |
| minidle           | int | The minimum number of idle connections to be allowed in the connection pool at a given time. Specify 0 or a negative value if you want the pool to wait indefinitely. |
| poolstatements    | true / false | If the value is `true`, statement pooling is enabled for the pool. |
| testonborrow      | true / false | If the value is `true`, objects are validated before they are borrowed from the pool. An object which fails the validation test will be dropped from the pool and another object in the pool will be picked instead. |
| testwhileidle     | true / false | If the value is `true`, the objects in the pool will be validated using an idle object evictor (if any exists). Any object which fails this validation test would be dropped from the pool. |
| validationquery   | String | The SQL query that will be used to validate connections from this pool before returning them to the caller. |

#### Carbon Datasource

| Parameter Name      | Description |
|---------------------|-------------|
| **Use Transaction** | This parameter specifies whether the database operation should be performed within a transaction or not. Click **Yes** or **No** as relevant. |
| **Datasource**      | This parameter is used to select the specific Carbon datasource you want to use to make the connection. All the Carbon datasources that are currently available are included in the list. |
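A sketch of a DB Report configuration that refers to a Carbon datasource by name; the datasource name `ReportingDS` and the SQL are illustrative.

``` xml
<dbreport xmlns="http://ws.apache.org/ns/synapse">
   <connection>
      <pool>
         <!-- Reference a Carbon datasource defined in the runtime -->
         <dsName>ReportingDS</dsName>
      </pool>
   </connection>
   <statement>
      <sql>insert into company values(?, ?)</sql>
      <parameter expression="//name/text()" type="VARCHAR"/>
      <parameter expression="//price/text()" type="DOUBLE"/>
   </statement>
</dbreport>
```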
### SQL statements

<table>
<thead>
<tr class="header">
<th>Parameter Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>SQL</strong></td>
<td>This parameter is used to enter one or more SQL statements.</td>
</tr>
<tr class="even">
<td><strong>Parameters</strong></td>
<td>This section is used to specify how the values of the parameters in the SQL will be determined. A parameter value can be static or calculated at runtime based on a given expression.</td>
</tr>
<tr class="odd">
<td><strong>Parameter Type</strong></td>
<td><p>The data type of the parameter. Possible values are as follows.</p>
<ul>
<li>CHAR</li>
<li>VARCHAR</li>
<li>LONGVARCHAR</li>
<li>NUMERIC</li>
<li>DECIMAL</li>
<li>BIT</li>
<li>TINYINT</li>
<li>SMALLINT</li>
<li>INTEGER</li>
<li>BIGINT</li>
<li>REAL</li>
<li>DOUBLE</li>
<li>DATE</li>
<li>TIME</li>
<li>TIMESTAMP</li>
</ul></td>
</tr>
<tr class="even">
<td><strong>Property Type</strong></td>
<td><p>This determines whether the parameter value should be a static value or calculated at run time via an expression.</p>
<ul>
<li><strong>Value</strong>: If this is selected, a static value would be considered as the property value and this value should be entered in the <strong>Value/Expression</strong> parameter.</li>
<li><strong>Expression</strong>: If this is selected, the property value will be determined during mediation by evaluating an expression. This expression should be entered in the <strong>Value/Expression</strong> parameter.</li>
</ul></td>
</tr>
<tr class="odd">
<td><strong>Value/Expression</strong></td>
<td><p>This parameter is used to enter the static value or the XPath expression used to determine the property value, based on the option you selected for the <strong>Property Type</strong> parameter.</p>
<p>You can click <strong>NameSpaces</strong> to add namespaces if you are providing an expression. The <strong>Namespace Editor</strong> panel will then appear, where you can provide any number of namespace prefixes and URLs used in the XPath expression.</p></td>
</tr>
<tr class="even">
<td><strong>Action</strong></td>
<td>This allows you to delete a parameter.</td>
</tr>
</tbody>
</table>
    ActionThis allows you to delete a parameter.
## Examples

### Simple database write operation

This example demonstrates simple database write operations. The DB Report mediator writes to a table using the details of the message. It updates the stock price of the company using the last quote value, which is calculated by evaluating an XPath expression against the response message.

``` java
<dbreport xmlns="http://ws.apache.org/ns/synapse">
    <connection>
        <pool>
            <driver>org.apache.derby.jdbc.ClientDriver</driver>
            <url>jdbc:derby://localhost:1527/esbdb;create=false</url>
            <user>esb</user>
            <password>esb</password>
        </pool>
    </connection>
    <statement>
        <sql>update company set price=? where name =?</sql>
        <parameter xmlns:m0="http://services.samples/xsd" expression="//m0:return/m0:last/child::text()" type="DOUBLE"/>
        <parameter xmlns:m0="http://services.samples/xsd" expression="//m0:return/m0:symbol/child::text()" type="VARCHAR"/>
    </statement>
</dbreport>
```

### Database write operation within a transaction

In this example, `<transaction action="new"/>` is a Transaction Mediator configuration that starts a new transaction. The DBReport mediator configuration performs a few write operations, including deleting records when the name matches a specific value derived via an expression, as well as a few insertions. Once the database operations are complete, they are committed via `<transaction action="commit"/>`, which is another Transaction Mediator configuration.

``` java
<transaction action="new"/>
    - - - - - - - - - - - - java:jdbc/XADerbyDS - org.jnp.interfaces.NamingContextFactory - localhost:1099 - EI - EI - - - - delete from company where name =? - - - - - - - - - - java:jdbc/XADerbyDS1 - org.jnp.interfaces.NamingContextFactory - localhost:1099 - EI - EI - - - - INSERT into company values (?,'c4',?) - - - - - - - - - - - - -``` \ No newline at end of file diff --git a/en/docs/reference/mediators/dblookup-mediator.md b/en/docs/reference/mediators/dblookup-mediator.md deleted file mode 100644 index 0ce072ad05..0000000000 --- a/en/docs/reference/mediators/dblookup-mediator.md +++ /dev/null @@ -1,305 +0,0 @@ -# DBLookup Mediator - -The **DBLookup Mediator** can execute an arbitrary SQL select statement -and then set a resulting values as a local message property in the -message context. The DB connection used may be looked up from an -external data source or specified inline. - -!!! Info - - The DBLookup mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - - The DBLookup mediator can set a property from one row in a result set. It cannot return multiple rows. If you need to get multiple records, or if you have a table with multiple parameters (such as URLs), you can create a data service and invoke that service from the Micro Integrator using the [Callout mediator]({{base_path}}/reference/mediators/callout-mediator) instead. - -## Syntax - -The syntax of the DBLookup mediator changes depending on whether you connect to the database using a connection pool, or using a data source. Click on the relevant tab to view the required syntax. - -- **Connection Pool** - ``` java - - - - - - - - * - - - - select something from table where something_else = ? - * - * - + - - ``` - -- **Data source** - The syntax of the DBLookup mediator further differs based on whether the connection to the database is made using an external datasource or a Carbon datasource. - - === "External Datasource" - ``` java - - - - - - - - - * - - - - select something from table where something_else = ? - * - * - + - - ``` - - === "Carbon Datasource" - ``` java - - - - - - - - select something from table where something_else = ? - * - * - + - - ``` - -## Configurations - -The configuration of the DBLookup mediator changes depending on whether you connect to the database using a connection pool, or using a data source. - -### Connection Pool configurations - -The parameters available to configure the DBLookup mediator are as -follows: - -!!! Info - When specifying the DB connection using a connection pool, other than specifying parameter values inline, you can also specify following parameter values of the connection information (i.e. Driver, URL, User and password) as registry entries. The advantage of specifying a parameter value as a registry entry is that the same connection information configurations can be used in different environments simply by changing the registry entry value. To do this, give the registry path within the `key` attribute as shown in the example below. - - ``` - - - - - - - - - - - ``` - -| Parameter Name | Description | -|----------------------------|------------------------------------------------------------------------------------------------------------------| -| **Connection Information** | This parameter is used to specify whether the connection should be taken from a connection pool or a datasource. | -| **Driver** | The class name of the database driver. 
| **URL** | JDBC URL of the database where the data will be looked up. |
| **User** | Username used to connect to the database. |
| **Password** | Password used to connect to the database. |


To add properties to the DBLookup mediator, start with the following parameters:

| Parameter Name | Description |
|----------------|--------------------------------------------------|
| **Name** | The name of the property. |
| **Value** | The value of the property. |
| **Action** | This parameter enables a property to be deleted. |

Once you have defined the above parameters, enter the following properties:

| Name | Value | Description |
|------|-------|-------------|
| autocommit | true / false | The auto-commit state of the connections created by the pool. |
| isolation | Connection.TRANSACTION_NONE / Connection.TRANSACTION_READ_COMMITTED / Connection.TRANSACTION_READ_UNCOMMITTED / Connection.TRANSACTION_REPEATABLE_READ / Connection.TRANSACTION_SERIALIZABLE | The isolation state of the connections created by the pool. |
| initialsize | int | The initial number of connections created when the pool is started. |
| maxactive | int | The maximum number of active connections that can be allocated from this pool at a given time. When this maximum limit is reached, no more active connections will be created by the connection pool. Specify 0 or a negative value if you do not want to set a limit. |
| maxidle | int | The maximum number of idle connections allowed in the connection pool at a given time. The value should be less than the maxActive value. For high performance, tune maxIdle to match the number of average, concurrent requests to the pool. If this value is set to a large value, the pool will contain unnecessary idle connections. The idle connections are checked periodically whenever a new connection is requested, and connections that have been idle for longer than minEvictableIdleTimeMillis are released, since it takes time to create a new connection. |
| maxopenstatements | int | The maximum number of open statements that can be allocated from the statement pool at a given time. When this maximum limit is reached, no more new statements will be created by the statement pool. Specify 0 or a negative value if you do not want to set a limit. |
| maxwait | long | The maximum number of milliseconds that the connection pool will wait for a connection to return before throwing an exception when there are no connections available in the pool. Specify 0 or a negative value if you want the pool to wait indefinitely. |
| minidle | int | The minimum number of idle connections allowed in the connection pool at a given time, without extra ones being created. The default value is 0, and it is derived from initialSize. The connection pool can shrink below this number if validation queries fail. This value should be similar or close to the average number of requests that will be received by the server at the same time. With this setting, you can avoid having to open and close new connections every time a request is received by the server. |
| poolstatements | true / false | If the value is true, statement pooling is enabled for the pool. |
| testonborrow | true / false | If the value is true, objects are validated before they are borrowed from the pool. An object which fails the validation test will be dropped from the pool and another object in the pool will be picked instead. |
| testwhileidle | true / false | If the value is true, the objects in the pool will be validated using an idle object evictor (if any exists). Any object which fails this validation test would be dropped from the pool. |
| validationquery | String | The SQL query that will be used to validate connections from this pool before returning them to the caller. |
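For example, the following sketch shows how these pool properties are set on a connection pool; the driver, URL, credentials, and SQL are placeholders, and `values 1` is the standard validation query for Derby:

```xml
<dblookup xmlns="http://ws.apache.org/ns/synapse">
   <connection>
      <pool>
         <driver>org.apache.derby.jdbc.ClientDriver</driver>
         <url>jdbc:derby://localhost:1527/esbdb;create=false</url>
         <user>esb</user>
         <password>esb</password>
         <!-- pool tuning properties from the table above -->
         <property name="autocommit" value="true"/>
         <property name="maxactive" value="20"/>
         <property name="testonborrow" value="true"/>
         <property name="validationquery" value="values 1"/>
      </pool>
   </connection>
   <statement>
      <sql>select id from company where name = ?</sql>
      <parameter expression="//name/text()" type="VARCHAR"/>
      <result name="company_id" column="id"/>
   </statement>
</dblookup>
```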
### Datasource configurations

The configuration of the DBLookup mediator further differs based on whether the connection to the database is made using an external datasource or a Carbon datasource.

The parameters available to configure the DBLookup mediator are as follows.

| Parameter Name | Description |
|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
| **Connection Information** | This parameter is used to specify whether the connection should be taken from a connection pool or a datasource. |
| **Datasource Type** | This parameter is used to specify whether the connection to the database should be made using an external datasource or a Carbon datasource. |
| **JNDI Name** | The JNDI used to look up data. |

### SQL statements
| Parameter Name | Description |
|----------------|-------------|
| SQL | This parameter is used to enter one or more SQL statements. |
| Parameters | This section is used to specify how the values of parameters in the SQL will be determined. A parameter value can be static or calculated at runtime based on a given expression. |
| Parameter Type | The data type of the parameter. Possible values are as follows: `CHAR`, `VARCHAR`, `LONGVARCHAR`, `NUMERIC`, `DECIMAL`, `BIT`, `TINYINT`, `SMALLINT`, `INTEGER`, `BIGINT`, `REAL`, `DOUBLE`, `DATE`, `TIME`, and `TIMESTAMP`. |
| Property Type | This determines whether the parameter value should be a static value or calculated at run time via an expression.<br/><br/>**Value**: If this is selected, a static value would be considered as the property value and this value should be entered in the **Value/Expression** parameter.<br/>**Expression**: If this is selected, the property value will be determined during mediation by evaluating an expression. This expression should be entered in the **Value/Expression** parameter. |
| Value/Expression | This parameter is used to enter the static value or the XPath expression used to determine the property value based on the option you selected for the **Property Type** parameter.<br/><br/>You can click **NameSpaces** to add namespaces if you are providing an expression. Then the **Namespace Editor** panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression. |
| Action | This allows you to delete a parameter. |
| Results | This section is used to specify how to deal with the returned result from a database query execution.<br/><br/>**Result Name**: The name of the message context property to which the result value is set.<br/>**Column**: The column of the result set from which the value is taken.<br/>**Action**: Deletes the result. |
## Example

``` java
<dblookup xmlns="http://ws.apache.org/ns/synapse">
   <connection>
      <pool>
         <driver>org.apache.derby.jdbc.ClientDriver</driver>
         <url>jdbc:derby://localhost:1527/esbdb;create=false</url>
         <user>esb</user>
         <password>esb</password>
      </pool>
   </connection>
   <statement>
      <sql>select * from company where name =?</sql>
      <parameter xmlns:m0="http://services.samples/xsd" expression="//m0:getQuote/m0:request/m0:symbol" type="VARCHAR"/>
      <result name="company_id" column="id"/>
   </statement>
</dblookup>
```

In this example, when a message is received by a proxy service with a DBLookup mediator configuration, it opens a connection to the database and executes the SQL query. The SQL query uses the **?** character for attributes that will be filled at runtime. The parameters define how to calculate the value of those attributes at runtime. In this sample, the DBLookup mediator has been used to extract the `id` of the company from the company database using the symbol, which is evaluated using an XPath against the SOAP envelope.
diff --git a/en/docs/reference/mediators/drop-mediator.md b/en/docs/reference/mediators/drop-mediator.md
deleted file mode 100644
index 24885f5dff..0000000000
--- a/en/docs/reference/mediators/drop-mediator.md
+++ /dev/null
@@ -1,48 +0,0 @@
# Drop Mediator

The **Drop Mediator** stops the processing of the current message. This mediator is useful for ensuring that the message is sent only once and then dropped by the Micro Integrator. If you have any mediators defined after the `<drop/>` element, they will not be executed, because `<drop/>` is considered to be the end of the message flow.

When the Drop mediator is within the `In` sequence, it sends an HTTP 202 Accepted response to the client when it stops the message flow. When the Drop mediator is within the `Out` sequence before the Send mediator, no response is sent to the client.

!!! Info
    The Drop mediator is a [content-unaware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.

## Syntax

The drop token refers to a `<drop/>` element, which is used to stop further processing of a message:

``` java
<drop/>
```

## Configuration

As with other mediators, after adding the drop mediator to a sequence, you can click its up and down arrows to move its location in the sequence.

## Example

You can use the drop mediator for messages that do not meet the filter criteria in case the client is waiting for a response to ensure the message was received by the Micro Integrator. For example:

```
<definitions xmlns="http://ws.apache.org/ns/synapse">
   <sequence name="main">
      <in>
         <filter source="get-property('To')" regex=".*/StockQuote.*">
            <then>
               <send/>
            </then>
            <else>
               <drop/>
            </else>
         </filter>
      </in>
      <out>
         <send/>
      </out>
   </sequence>
...
```

In this scenario, if the message doesn't meet the filter condition, it is dropped, and the HTTP 202 Accepted response is sent to the client. If you did not include the drop mediator, the client would not receive any response.
diff --git a/en/docs/reference/mediators/dss-mediator.md b/en/docs/reference/mediators/dss-mediator.md
deleted file mode 100644
index 60ee0771fd..0000000000
--- a/en/docs/reference/mediators/dss-mediator.md
+++ /dev/null
@@ -1,537 +0,0 @@
# Data Service Call Mediator

The **Data Service Call Mediator** is used to invoke data service operations. It automatically creates a payload and sets up the necessary headers to invoke the data service. It also improves performance by calling the data service directly (without going through the HTTP transport).

!!! Info
    -   You need to first have a [Data Service Project]({{base_path}}/integrate/develop/creating-artifacts/data-services/creating-data-services) to use the Data Service Call mediator.
    -   The Data Service Call mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.

## Syntax

``` java
<dataServiceCall serviceName="data-service-name">
   <source [type="inline" | type="body"]/>
   <operations [type="single" | type="batch" | type="request-box"]>
      <operation name="operation-name">
         <param name="param-name" [value="param-value" | expression="expression"]/>
      </operation>
   </operations>
   <target [type="body" | type="property" name="property-name"]/>
</dataServiceCall>
```

## Configuration

The Source Configuration properties of the Data Service Call Mediator are as follows:
| Parameter Name | Description |
|----------------|-------------|
| Type | The type defines the source for the payload that is required for the data service call. By default, the source type is set to `body`. The available values are as follows:<br/><br/>**INLINE**: The payload should be configured within the mediator configuration.<br/>**BODY**: The body of the original message is passed as the payload to the data service. |
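For instance, the following sketch invokes a data service with an inline source, where the payload is built from the configured operation parameters; the operation and parameter names are illustrative:

```xml
<dataServiceCall serviceName="DSSCallMediatorTest">
   <!-- the payload is built from the mediator configuration, not the message body -->
   <source type="inline"/>
   <operations type="single">
      <operation name="addEmployeeOp">
         <param name="EmployeeNumber" value="101"/>
      </operation>
   </operations>
   <target type="body"/>
</dataServiceCall>
```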
The Operation Configurations for the Data Service Call mediator are as follows:
| Parameter Name | Description |
|----------------|-------------|
| name | Defines the name of the operation that is to be invoked. |
| Params Configuration | The possible values for this parameter are as follows:<br/><br/>**Name**: Defines the name of the parameter.<br/>**Evaluator**: Only required for JSON param expressions (json).<br/>**Value/Expression**: Value of the parameter. If the expression is configured, the parameter value is determined during message mediation by evaluating an expression. The expression should be specified for the Expression parameter. |
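As an illustration, an operation can mix static parameter values with values evaluated from the message at runtime; this is a sketch, and the operation name, parameter names, and JSONPath are assumptions:

```xml
<dataServiceCall serviceName="DSSCallMediatorTest">
   <operations type="single">
      <operation name="addEmployeeOp">
         <!-- static parameter value -->
         <param name="EmployeeNumber" value="102"/>
         <!-- parameter value evaluated from a JSON payload at runtime -->
         <param name="FirstName" evaluator="json" expression="$.firstName"/>
      </operation>
   </operations>
   <target type="body"/>
</dataServiceCall>
```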
The Target Configuration properties of the Data Service Call mediator are as follows:
| Parameter Name | Description |
|----------------|-------------|
| Type | By setting the target type, the response payload of the data service call can be stored in the body or a property. By default, the target type is set to `body`. The available values are as follows:<br/><br/>**BODY**: The response payload is stored in the message body.<br/>**PROPERTY**: The response payload is stored in the defined property. |
| Name | Specifies the property name. |

You can define dynamic property names when the target type is defined as a property, by giving the name as an expression within curly braces. For example (a sketch; the property name and expression are illustrative):

```java
<dataServiceCall serviceName="DSSCallMediatorTest">
   <operations type="single">
      <operation name="getEmployeeOp">
         <param name="EmployeeNumber" value="101"/>
      </operation>
   </operations>
   <target type="property" name="{get-property('targetPropertyName')}"/>
</dataServiceCall>
```
    - -## Examples - -Use the following datasource to try out the Data Service Call mediator. Create a new data service configuration and then copy the following content to define the `DSSCallMediatorTest` data service: - -**Sample data service to invoke using the Data Service Call mediator** - -```xml - - - com.mysql.jdbc.Driver - jdbc:mysql://localhost:3306/employeeDB - root - root - - - select EmployeeNumber, FirstName, LastName, Email, Salary from Employees where EmployeeNumber=:EmployeeNumber - - - - - - - - - - - insert into Employees (EmployeeNumber, FirstName, LastName, Email, Salary) values(:EmployeeNumber,:FirstName,:LastName,:Email,:Salary) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -### Example 1: Inline single request operation - -In this example, an inline single request is configured and sent to the `DSSCallMediatorTest` service. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorInlineSingleRequestProxy` proxy service: - -```bash -http://localhost:8290/services/dssCallMediatorInlineSingleRequestProxy -``` - -**Response** - -``` -SUCCESSFUL -``` - -### Example 2: Inline batch request operation - -In this example, an inline batch request is configured and sent to the `DSSCallMediatorTest` service. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorInlineBatchRequestProxy` proxy service: - -```bash -http://localhost:8290/services/dssCallMediatorInlineBatchRequestProxy -``` - -**Response** - -``` -SUCCESSFUL -``` - -### Example 3: Inline request box operation - -In this example, an inline batch request is configured and sent to the `DSSCallMediatorTest` service. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorInlineRequestBoxProxy` proxy service: - -``` -http://localhost:8290/services/dssCallMediatorInlineRequestBoxProxy -``` - -**Response** - -``` -444EllieDinadina@wso2.com4000 -``` - -### Example 4: Single request operation when the source type is set to body - -In this example, an inline single request is configured and sent to the `DSSCallMediatorTest` service. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorSourceTypeBodyProxy` proxy service with the given payload: - -```bash -http://localhost:8290/services/dssCallMediatorSourceTypeBodyProxy -``` - -```xml - - 555 - Peter - Parker - peter@wso2.com - 5000 - -``` - -**Response** - -``` -SUCCESSFUL -``` - -### Example 5: Batch request operation when source type is set to body - -In this example, an inline batch request is configured and sent to the `DSSCallMediatorTest` service. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorSourceTypeBodyProxy` proxy service with the given payload. - -```bash -http://localhost:8290/services/dssCallMediatorSourceTypeBodyProxy -``` - -```xml - - - 666 - Miles - Jimmy - jimmy@wso2.com - 2000 - - - 777 - Dia - Jesse - jesse@wso2.com - 1500 - - -``` - -**Response** -``` -SUCCESSFUL -``` - -### Example 6: Request box operation when source type is set to body - -In this example, an inline request box request is configured and sent to the `DSSCallMediatorTest` service. 
- -**Synapse Configuration** - -```xml - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `dssCallMediatorSourceTypeBodyProxy` proxy service with the given payload. - -```bash -http://localhost:8290/services/dssCallMediatorSourceTypeBodyProxy -``` - -```xml -< - - 888 - William - Sakai - sakai@wso2.com - 3000 - - - 888 - - -``` - -**Response** - -``` - - - - 888 - William - Sakai - sakai@wso2.com - 3000 - - - -``` - -### Example 7: Inline single request operation when the target type is set to the property - -In this example, an inline single request is configured and sent to the `DSSCallMediatorTest` service and gets the response to a property. - -**Synapse Configuration** - -```xml - - - - - - - - - - - - - - - - - - - - - - - - -``` - -**Sample Request** - -Invoke the `testDSSResposeTarget` proxy service with the given payload. - -```bash -http://localhost:8290/services/testDSSResposeTarget -``` - -**Response** - -The following log will appear in the server console: - -```bash -INFO {LogMediator} - {proxy:test} reponseValue = SUCCESSFUL -``` diff --git a/en/docs/reference/mediators/ejb-mediator.md b/en/docs/reference/mediators/ejb-mediator.md deleted file mode 100644 index fe60a3e8f6..0000000000 --- a/en/docs/reference/mediators/ejb-mediator.md +++ /dev/null @@ -1,77 +0,0 @@ -# EJB Mediator - -The **EJB mediator** calls an external Enterprise JavaBean(EJB) and stores the result in the message payload or in a message context property. Currently, this mediator supports EJB3 Stateless Session Beans and Stateful Session Beans. - -!!! Info - The EJB mediator is a [content-aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - -## Syntax - -``` java - - - * - - -``` - -## Configuration - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| Beanstalk ID | Reference to the application server specific connection source information, which is defined in `synapse.properties`. |
| Class | This requires the remote interface definition provided in EJB 3.0 (the EJB service invocation remote/home interface). |
| Session ID | When the EJB context is invoked as a stateful bean, the related EJB session ID is specified here. Possible values are as follows:<br/><br/>**Value**: If this is selected, the session ID can be entered as a static value.<br/>**Expression**: If this is selected, an XPath expression can be entered to evaluate the session ID. |
| Remove | This parameter specifies whether the Enterprise Entity Manager should remove the EJB context related parameters once the stateful/stateless session is invoked. |
| Target | If a particular EJB method returns, then the return object can be saved against the name provided in the target at the Synapse property context. |
| JNDI Name | The Java Naming and Directory Interface (JNDI) is an application programming interface (API) for accessing different kinds of naming and directory services. JNDI is not specific to a particular naming or directory service. It can be used to access many different kinds of systems including file systems; distributed objects systems such as CORBA, Java RMI, and EJB; and directory services such as LDAP, Novell NetWare, and NIS+. |
| Add Argument | Can be used to define the arguments that are required for the particular EJB method to be invoked (Expression/Value). |
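As a rough sketch, these parameters come together in a configuration of the following shape; the beanstalk name, bean class, JNDI name, target property, and argument value here are all illustrative:

```xml
<ejb beanstalk="jack" class="org.ejb.wso2.test.StoreRegister"
     jndiName="ejb:/EJBDemo/StoreRegisterBean!org.ejb.wso2.test.StoreRegister"
     method="getStoreById" target="store_info">
   <args>
      <!-- argument passed to the invoked EJB method -->
      <arg value="1"/>
   </args>
</ejb>
```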
    - -!!! Info - You can click the "Namespaces" link to add namespaces if you are providing an expression. You will be provided another panel named "Namespace Editor" where you can provide any number of namespace prefixes and the URL used in the XPath expression. - -## Example - -``` java - - - - - -``` - -In this example, the EJB Mediator does the EJB service invocation by calling **getStoreById** published at the application server and exposed via ` ejb:/EJBDemo/StoreRegsiterBean!org.ejb.wso2.test.StoreRegister`. The response will then be assigned to the **target** specified (variable/expression). diff --git a/en/docs/reference/mediators/enrich-mediator.md b/en/docs/reference/mediators/enrich-mediator.md deleted file mode 100644 index 86df0a5290..0000000000 --- a/en/docs/reference/mediators/enrich-mediator.md +++ /dev/null @@ -1,495 +0,0 @@ -# Enrich Mediator - -The **Enrich Mediator** can process a message based on a given source configuration and then perform the specified action on the message by using the target configuration. It gets an ` OMElement ` using the configuration specified in the source and then modifies the message by putting it on the current message using the configuration in the target. - -!!! Info - The Enrich mediator is a [content-aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - -## Syntax - -``` java - - - - -``` - -## Configuration - -The main properties of the Enrich Mediator are as follows: - -### Source configuration - -The following properties are available: - -- **Clone** - By setting the clone configuration, the message can be cloned or used as a reference during enriching. The default value is true. - - **True** - - **False** -- **Type** - The type that the mediator uses from the original message to enrich the modified message that passes through the mediator. - - **Custom** - Custom XPath value. - - **Envelope** - Envelope of the original message used for enriching. - - **Body** - Body of the original message used for enriching. - - **Property** - Specifies a property. For information on how you can use the Property mediator to specify properties, see [Property Mediator]({{base_path}}/reference/mediators/property-Mediator). - - **Key** - Specifies that the target type is key. Specifically used to rename an existing key name in JSON payloads. *(Supported for JSON only)*. -- **XPath Expression** - This field is used to specify the custom XPath value if you selected **custom** for the **Type** field. - -!!! Tip - You can click the Namespaces link to add namespaces if you are providing an expression. You will be provided another panel named "Namespace Editor" where you can provide any number of namespace prefixes and URL that you have used in the XPath expression. - -### Target Configuration - -The following properties are available: - -- **Action** - By specifying the action type, the relevant action can be applied to outgoing messages. - - **Replace** - Replace is the default value of *Action* . It will - be used if a specific value for *Action* is not given. This - replaces the XML message based on the target type specified on - the target configuration. - - **Child** - Adding as a child of the specified target type. - - **Sibling** - Adding as a sibling of the specified target type. - - **Remove** - Removing a selected part. *(Supported for JSON only)*. - - !!! Info - For the target type ' ` envelope ` ', the action - type should be ` 'replace ` '. 
Herein, action - type ' ` child ` ' is not acceptable because it - adds an envelope within an envelope, and action type ' - ` sibling ` ' is also not acceptable because - there will be two envelopes in a message if you use it. - -- **Type** and **XPath Expression** - Refer the [Source configuration](#source-configuration) above. - - !!! Info - The target type depends on the source type. For the valid and - invalid combinations of source and target types, see below table. - -## Examples - -### Example 1: Setting the property symbol - -In this example, you are setting the property symbol. Later, you can log it using the [Log Mediator]({{base_path}}/reference/mediators/log-Mediator) . - -``` java - - - - -``` - -### Example 2: Adding a child object to a property - -In this example, you add a child property named Lamborghini to a property named Cars. The configuration for this is as follows: - -``` - - - - - - - - - - - - - - - Lamborghini - - - - - - - - - - - -``` - -### Example 3: Adding a SOAPEnvelope type object as a property to a message - -In this example, you add the SOAP envelope in a SOAP request as a property to a message. The Enrich mediator is useful in this scenario since adding the property directly using the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) results in the ` SOAPEnvelope ` object being created as an ` OM ` type object. The ` OM ` type object created cannot be converted back to a ` SOAPEnvelope ` object. - -``` - - - - -``` - -### Example 4: Preserving the original payload - -In this example, you copy the original payload to a property using the Enrich mediator. - -``` - - - - -``` - -Then whenever you need the original payload, you replace the message body with this property value using the Enrich mediator as follows: - -``` - - - - -``` - -## Enriching in JSON format - Examples - -!!! Info - In JSON enriching scenarios if the enrich mediator source defined as a property it should contain a json object or json array. - -Below is the JSON payload that is sent in the request for the following examples. - -**Payload** - -```json -{ - "data": { - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": ["CS001", "CS002", "CS003"] - }, - { - "id":"02", - "name": "Nick", - "lastname": "Thameson", - "modules": ["CS011", "CS012"] - } - ] - } -} -``` - -### Example 1: Extract content from message payload and set to message body - -In this example, we will extract the content in the `data` object and set it as the message body. - -```xml - - - - -``` - -#### Response - -```json -{ - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": [ "CS001", "CS002", "CS003"] - }, - { - "id": "02", - "name": "Nick", - "lastname": "Thameson", - "modules": ["CS011", "CS012"] - } - ] -} -``` - -### Example 2: Setting a property as a child in the target - -In this example, we will enroll the first student in the payload for a new module. The new module is set -in the `NewModule` property. - -```xml - - - - - -``` - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": ["CS001", "CS002", "CS003", "CS004"] - }, - { - "id": "02", - "name": "Nick", - "lastname": "Thameson", - "modules": ["CS011", "CS012"] - } - ] - } -} -``` - -### Example 3: Setting an inline content as a child in the target - -In this example, we will define a new student inline and add it to the `students` array in the payload. 
- -```xml - - - { - "id": "03", - "name": "Mary", - "lastName": "Jane", - "modules": ["CS001", "CS002", "CS004"] - } - - - -``` - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": ["CS001", "CS002", "CS003"] - }, - { - "id": "02", - "name": "Nick", - "lastname": "Thameson", - "modules": ["CS011", "CS012"] - }, - { - "id": "03", - "name": "Mary", - "lastName": "Jane", - "modules": ["CS001","CS002","CS004"] - } - ] - } -} -``` - -### Example 4: Setting a custom path expressions to a property - -In this example, we will assign the first student's name to a property called `Name`. - -```xml - - - - - - - -``` - -The following line can be observed in the log. - -```text -INFO {LogMediator} - {proxy:TestEnrich} Student name is : = "Tom" -``` - -### Example 5: Removing selected parts from a payload - -!!! Info - - This feature is currently supported only for JSON. - - You can provide multiple JSONPath expressions as a comma-separated list for the `remove` operation (as given in the following example). - -In this example, we will remove the `modules` from every student and also remove the first student in the array. - -```xml - - - - -``` - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "02", - "name": "Nick", - "lastname": "Thameson" - } - ] - } -} -``` - -### Example 6: Removing selected parts from a property - -As you removed selected parts from a payload, you can also remove selected parts synapse properties. - -```xml - - - - - - - - - - - -``` - -Here, in the first Enrich mediator, you are creating a property called `students` with the incoming message payload. -In the second Enrich mediator, you are removing selected parts from the property, and finally logging the property. - -After invoking we can see the following log appearing in the terminal. - -``` -result = {"data":{"students":[{"id":"02","name":"Nick","lastname":"Thameson"}]} -``` - -### Example 7: Updating a value of an existing object - -In this example, we will replace the `modules` array of every student with `[]`. - -```xml - - [] - - -``` - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": [] - }, - { - "id": "02", - "name": "Nick", - "lastname": "Thameson", - "modules": [] - } - ] - } -} -``` - -### Example 8: Updating the key name of an existing object - -!!! Info - This feature is supported only for JSON. - -In this example, we will replace the key name `name` of every student with `firstName`. - -```xml - - firstName - - -``` - -!!! Info - When specifying the json path of the target, it should comply to the below syntax. - - ```text - . - ``` - - E.g.: - In the above configuration, we are trying to replace the `name` key of the student objects and - json path to locate the student objects would be `$.data.students[*]`. Therefore json path would look like below. - - ```text - $.data.students[*].name - ``` - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "01", - "firstName": "Tom", - "lastName": "Price", - "modules": ["CS001","CS002","CS003"] - }, - { - "id": "02", - "firstName": "Nick", - "lastname": "Thameson", - "modules": ["CS011","CS012"] - } - ] - } -} -``` - -### Example 9: Enriching JSON primitive values - -You can use Property mediators with `JSON` data type to enrich any JSON primitive, object, or an array to a given target. - -!!! 
Note - When we use a Property with `STRING` data type in the Enrich mediator, it supports native JSON capabilities - only if the property contains a JSON object or a JSON array. The rest of the values are considered to be XML. - -```xml - - - - - -``` - -!!! Note - When the JSON primitive string contains white spaces, you should enclose them with quotes as shown in the example below. This is due to restrictions enforced by the JSON schema. - -#### Response - -```json -{ - "data": { - "students": [ - { - "id": "01", - "name": "Tom", - "lastName": "Price", - "modules": ["CS001", "CS002", "CS003"] - }, - { - "id": "02", - "name": "Nick", - "lastname": "Thameson", - "modules": ["CS011", "CS012", "CS013 II" - ] - } - ] - } -} -``` - diff --git a/en/docs/reference/mediators/entitlement-mediator.md b/en/docs/reference/mediators/entitlement-mediator.md deleted file mode 100644 index 1bf3f70bec..0000000000 --- a/en/docs/reference/mediators/entitlement-mediator.md +++ /dev/null @@ -1,167 +0,0 @@ -# Entitlement Mediator - -The **Entitlement Mediator** intercepts requests and evaluates the actions performed by a user against an [eXtensible Access Control Markup Language (XACML)](http://en.wikipedia.org/wiki/XACML) policy. This supports XACML 2.0 and 3.0. WSO2 Identity Server can be used as the XACML Policy Decision Point (PDP) where the policy is set, and the Micro Integrator serves as the XACML Policy Enforcement Point (PEP) where the policy is enforced. - -## Syntax - -``` java - - - - - - -``` - -## Configurations - -When you add the Entitlement mediator to a sequence, the Entitlement mediator node appears as follows with four sub elements. These sub elements are used to define a mediation sequence to be applied based on the entitlement result. - -The following are descriptions for the four sub elements of the Entitlement mediator. - -| Parameter Name | Description | -|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **OnAccept** | The sequence to execute when the result returned by the Entitlement mediator is ` Permit ` . For example, you can configure the sequence to direct the request to the back end server as requested. | -| **OnReject** | The sequence to execute when the result returned by the Entitlement mediator is ` Deny ` , ` Not Applicable ` or ` Indeterminate ` . For example, you can configure the sequence to respond to the client with the message ` Unauthorized Request. ` | -| **Obligations** | The sequence to execute when the XACML response contains an obligation statement. When this response is received, the Entitlement mediator clones the current message context, creates a new message context, adds the obligation statement to the SOAP body and then executes the sequence. Since the **Obligations** sequence is executed synchronously, the Entitlement mediator waits for a response. 
If the sequence returns a `true` value, the sequence defined for the **OnAccept** sub element is applied. If the sequence returns a `false` value, the sequence defined for the **OnReject** sub element is applied. |
| **Advice** | The sequence to execute when the XACML response contains an advice statement. When this response is received, the Entitlement mediator clones the current message context, creates a new message context, adds the advice statement to the SOAP body and then executes the sequence. Since the **Advice** sequence is executed asynchronously, the Entitlement mediator does not wait for a response. |

The parameters available for configuring the Entitlement mediator are as follows.
| Parameter Name | Description |
|----------------|-------------|
| **Entitlement Server** | Server URL of the WSO2 Identity Server that acts as the PDP (e.g., `https://localhost:9443/services`). |
| **User Name** | This user should have permissions to log in and manage configurations in the WSO2 Identity Server. |
| **Password** | The password of the username entered in the **User Name** parameter. |
| **Entitlement Callback Handler** | The handler that should be used to get the subject (user name) for the XACML request.<br/><br/>You need to secure the proxy service which uses the Entitlement mediator using one of the following methods, and select the Entitlement Callback Handler based on the method you used.<br/><br/>**UT**: This class looks for the subject name in the Axis2 message context under the username property. This is useful when UsernameToken security is enabled for a proxy service, because when the user is authenticated for such a proxy service, the username would be set in the Axis2 message context. As a result, the Entitlement mediator would automatically get the subject value for the XACML request from there. This is the default callback class.<br/>**X509**: Specify this class if the proxy is secured with X509 certificates.<br/>**SAML**: Specify this class if the proxy is secured with WS-Trust.<br/>**Kerberos**: Specify this class if the proxy is secured with Kerberos.<br/>**Custom**: This allows you to specify a custom entitlement callback handler class.<br/><br/>You can also set properties that control how the subject is retrieved; see Advanced Callback Properties. |
| **Entitlement Service Client** | The method of communication to use between the PEP and the PDP. For SOAP, choose whether to use Basic Authentication (available with WSO2 Identity Server 4.0.0 and later) OR the AuthenticationAdmin service, which authenticates with the Entitlement service in Identity Server 3.2.3 and earlier. Thrift uses its own authentication service over TCP. WS-XACML uses Basic Authentication.<br/><br/>The XACML standard refrains from specifying which method should be used to communicate from the PEP to the PDP, and many vendors have implemented a proprietary approach. There is a standard called "Web Services Profile of XACML (WS-XACML) Version 1.0", but it has not been widely adopted because of its bias toward SOAP and the performance implications of XML signatures. However, the benefit of adopting a standard is the elimination of vendor locking, because it will allow your current PEP to work even if you move to a PDP from another vendor (as long as the new PDP also supports this standard). Otherwise you may need to modify your existing PEP to adapt to the new PDP. WSO2 Identity Server has its proprietary SOAP API, Thrift API, and basic support for WS-XACML. |
| **Thrift Host** | The host used to establish a Thrift connection with the Entitlement service when the Entitlement Service Client is set to Thrift. |
| **Thrift Port** | The port used to establish a Thrift connection with the Entitlement service when the Entitlement Service Client is set to Thrift. The default port is 10500. |
You will now define the sequences you want to run for the entitlement results.

1. If you want to specify an existing sequence for a result, click **Referring Sequence** for that result and select the sequence from the registry.
2. If you want to define the sequence in the tree, leave **In-Lined Sequence** selected.
3. Click **Update**.
4. In the tree, click the first result node for which you want to define the sequence, and then add the appropriate mediators to create the sequence. Repeat for each result node.

### Advanced Callback Properties

The abstract EntitlementCallbackHandler class supports the following properties for getting the XACML subject (user name), specifying the action, and setting the service name. The various implementations of this class (UTEntitlementCallbackHandler, X509EntitlementCallbackHandler, etc.) can use some or all of these properties. You implement these properties by adding [Property mediators]({{base_path}}/reference/mediators/property-Mediator) before the Entitlement mediator in the sequence.

The default UTEntitlementCallbackHandler looks for a property called `username` in the Axis2 message context, which it uses as the XACML request `subject-id` value. Likewise, the other handlers look at various properties for values for the attributes and construct the XACML request. The following attribute IDs are used by the default handlers.

- `urn:oasis:names:tc:xacml:1.0:subject:subject-id` of category `urn:oasis:names:tc:xacml:1.0:subject-category:access-subject`
- `urn:oasis:names:tc:xacml:1.0:action:action-id` of category `urn:oasis:names:tc:xacml:3.0:attribute-category:action`
- `urn:oasis:names:tc:xacml:1.0:resource:resource-id` of category `urn:oasis:names:tc:xacml:3.0:attribute-category:resource`
- `IssuerDN` of category `urn:oasis:names:tc:xacml:3.0:attribute-category:environment` (used only by X509 handler)
- `SignatureAlgorithm` of category `urn:oasis:names:tc:xacml:3.0:attribute-category:environment` (used only by X509 handler)

!!! Info
    In most scenarios, you do not need to configure any of these properties.

| Property name | Acceptable values | Scope | Description |
|-------------------------------|-------------------|-------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| xacml\_subject\_identifier | string | axis2 | By default, the Entitlement mediator expects to find the XACML subject (user name) in a property called `username` in the message's Axis2 context. If your authentication mechanism specifies the user name by adding a property of a different name, create a property called `xacml_subject_identifier` and set it to the name of the property in the message context that contains the subject. |
| xacml\_action | string | axis2 | If you are using REST and want to specify a different HTTP verb to use with the service, specify it with the xacml\_action property and set the xacml\_use\_rest property to true. |
| xacml\_use\_rest | true/false | axis2 | If you are using REST and you want to override the HTTP verb to send with the request, you can set this property to true. |
| xacml\_resource\_prefix | string | axis2 | If you want to change the service name, use this property to specify the new service name or the text you want to prepend to the service name. |
| xacml\_resource\_prefix\_only | true/false | axis2 | If set to true, the xacml\_resource\_prefix value is used as the whole service name. If set to false (default), the xacml\_resource\_prefix is prepended to the service name. |

## Example

In the following example, the WSO2 Identity Server (with the login URL `https://localhost:9443/services`) is used to authenticate the user invoking the secured backend service.

If the authorization test performed on a request sent to this URL fails, the [Fault mediator]({{base_path}}/reference/mediators/fault-Mediator) converts the request into a fault message giving `Unauthorized` as the reason for the request to be rejected and `XACML Authorization Failed` as the detail. Then the [Respond mediator]({{base_path}}/reference/mediators/respond-Mediator) sends the converted message back to the client.

If the user is successfully authenticated, the request is sent using the [Send Mediator]({{base_path}}/reference/mediators/send-Mediator) to the endpoint with the `http://localhost:8281/services/echo` URL.

```
<definitions xmlns="http://ws.apache.org/ns/synapse">
   <sequence name="fault">
      <makefault version="soap11">
         <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:Server"/>
         <reason value="Unauthorized"/>
         <detail>XACML Authorization Failed</detail>
      </makefault>
      <respond/>
   </sequence>
   <sequence name="main">
      <in>
         <entitlementService remoteServiceUrl="https://localhost:9443/services"
                             remoteServiceUserName="admin" remoteServicePassword="admin"
                             callbackClass="org.wso2.carbon.identity.entitlement.mediator.callback.UTEntitlementCallbackHandler"
                             client="basicAuth">
            <onReject>
               <sequence key="fault"/>
            </onReject>
            <onAccept>
               <send>
                  <endpoint>
                     <address uri="http://localhost:8281/services/echo"/>
                  </endpoint>
               </send>
            </onAccept>
            <obligations/>
            <advice/>
         </entitlementService>
      </in>
      <out>
         <send/>
      </out>
   </sequence>
</definitions>
```
diff --git a/en/docs/reference/mediators/fastxslt-mediator.md b/en/docs/reference/mediators/fastxslt-mediator.md
deleted file mode 100644
index 5cdfa1735d..0000000000
--- a/en/docs/reference/mediators/fastxslt-mediator.md
+++ /dev/null
@@ -1,167 +0,0 @@
# FastXSLT Mediator

The **FastXSLT Mediator** is similar to the [XSLT mediator]({{base_path}}/reference/mediators/xslt-mediator), but it uses the [Streaming XPath Parser](https://wso2.com/library/articles/2013/01/streaming-xpath-parser-wso2-esb/) and applies the XSLT transformation to the message stream instead of to the XML message payload. The result is a faster transformation, but you cannot specify the source, properties, features, or resources as you can with the XSLT mediator. Therefore, the FastXSLT mediator is intended to be used to gain performance in cases where the original message remains unmodified. Any pre-processing performed on the message payload will not be visible to the FastXSLT mediator, because the transformation logic is applied on the original message stream instead of the message payload. In cases where the message payload needs to be pre-processed, use the XSLT mediator instead of the FastXSLT mediator.

!!! Note
    The streaming XPath parser used in the FastXSLT mediator does not support XPath functions specified with the prefix "fn:", such as "`fn:contains`", "`fn:count`", and "`fn:concat`".

For example, if you are using the VFS transport to handle files, you might want to read the content of the file as a stream and directly send the content for XSLT transformation. If you need to pre-process the message payload, such as adding or removing properties, use the XSLT mediator instead.

In summary, following are the key differences between the XSLT and FastXSLT mediators:

| XSLT Mediator | FastXSLT Mediator |
|----------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| Performs XSLT transformations on the message **payload**. | Performs XSLT transformations on the message **stream**. |
| The message is built before processing. Therefore, you can pre-process the message payload before the XSLT transformation. | The message is not built before processing. Therefore, any pre-processing on the message will not be reflected in the XSLT transformation. |
| The performance is slower than the FastXSLT mediator. | The performance is faster than the XSLT mediator. |

!!! Note
    To enable the FastXSLT mediator, your XSLT script must include the following parameter in the XSL output.
    `omit-xml-declaration="yes"`
    For example:
    ``` xml
    <xsl:output method="xml" omit-xml-declaration="yes" encoding="UTF-8" indent="yes"/>
    ```
    If you do not include this parameter in your XSLT when using the FastXSLT mediator, you will get the following error.
    ``` java
    ERROR XSLTMediator Error creating XSLT transformer
    ```

!!! Info
    The FastXSLT mediator is a [conditionally content-aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator.

## Syntax

``` java
<fastXSLT key="string"/>
```

For example, specify the XSLT by the key `transform/example.xslt`, which is used to transform the message stream as shown below.

``` java
<fastXSLT key="transform/example.xslt"/>
```

## Configuration

The parameters available to configure the FastXSLT mediator are as follows.
| Parameter Name | Description |
|----------------|-------------|
| **Key Type** | You can select one of the following options:<br/><br/>**Static Key**: If this is selected, an existing key can be selected from the registry for the **Key** parameter.<br/>**Dynamic Key**: If this is selected, the key can be entered dynamically in the **Key** parameter. |
| **Key** | This specifies the registry key to refer the XSLT to. This supports static and dynamic keys.<br/><br/>You can click **NameSpaces** to add namespaces if you are providing an expression. Then the **Namespace Editor** panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression. |
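For instance, a static key points at a fixed registry path, while a dynamic key resolves the path at runtime; in this sketch, the property name used for the dynamic key is illustrative:

```xml
<!-- static key: the stylesheet is always read from this registry path -->
<fastXSLT key="transform/example.xslt"/>

<!-- dynamic key: the registry path is resolved from a property at runtime -->
<fastXSLT key="{get-property('xsltFileName')}"/>
```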
    - -## Example - -The following example applies a simple XSLT stylesheet to a message payload via the FastXSLT mediator. The FastXSLT mediator reads values -from the current XML payload using XPath and populates them into the -stylesheet to create a new or different payload as the response. The API configuration of this example is as follows: - -``` xml - - - - - - - - - - - -``` - -Follow the steps below to specify the stylesheet as a Registry entry in the above API. - -1. Double click on the API and click the following link in the - **Properties** tab. - ![]({{base_path}}/assets/img/integrate/mediators/fastxslt-props.png) -2. Click **Create & point to a new resource...** link. - ![]({{base_path}}/assets/img/integrate/mediators/new-reg-resource.png) -3. Enter the following details to create the empty XSL file in which - you enter the stylesheet, in the Registry. - ![]({{base_path}}/assets/img/integrate/mediators/create-xsl.png) -4. Double-click the stylesheet file in the **Project Explorer**, and add the following stylesheet as the content of the XSL file. - - **discountPayment.xsl** - - ``` xml - - - - - - - - - - - - - ``` - -Pass the following XML payload using SOAP UI. - -!!! Info - You pass this payload into the XSLT mediator specifying a certain - ` drinkName ` as a parameter to the style sheet. For - example, the following payload passes the ` drinkName ` - as 'Coffee'. The style sheet traverses through the incoming payload and - finds the ` ` elements, which contains 'Coffee' - as `drinkName` . When it finds matching entries, it - adds the prices of those elements under a new - `` element. Therefore, when the message flow - comes out of XSLT mediator, the payload changes the - ` ` entry, where it contains the - ` drinkPrice ` values of matching elements. - -``` xml - - - Rice and Curry - USD 10 - Dark Coffee - USD 1.8 - - - Sandwiches - USD 4 - Milk Shake - USD 2.6 - - - Chicken Burger - USD 5 - Iced Coffee - USD 1.5 - - - Noodles - USD 8 - Bottled Water - USD 2.5 - - -``` \ No newline at end of file diff --git a/en/docs/reference/mediators/fault-mediator.md b/en/docs/reference/mediators/fault-mediator.md deleted file mode 100644 index 48d56cb4a6..0000000000 --- a/en/docs/reference/mediators/fault-mediator.md +++ /dev/null @@ -1,221 +0,0 @@ -# Fault Mediator - -The **Fault Mediator** (also called the **Makefault Mediator**) transforms the current message into a fault message. However, this -mediator does not send the converted message. The [Send Mediator]({{base_path}}/reference/mediators/send-mediator) needs to be invoked to send a fault message -created via the Fault mediator. The fault message's ` To ` header is set to the ` Fault-To ` of the original message (if such a header exists in the original message). You can create the fault message as a SOAP 1.1, SOAP 1.2, or plain-old XML (POX) fault. - -For more information on faults and errors, see [Error Handling]({{base_path}}/reference/error_handling). - -## Syntax - -``` java - - - -? -? -? - -``` - -## Configuration - -The parameters available to configure the Fault mediator to create a SOAP 1.1 fault are as follows. - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Fault Code** | This parameter is used to select the fault code for which the fault string should be defined. Possible values are as follows:<br/><br/>**versionMismatch**: Select this to specify the fault string for a SOAP version mismatch.<br/>**mustUnderstand**: Select this to specify the fault string for the mustUnderstand error in SOAP.<br/>**Client**: Select this to specify the fault string for client side errors.<br/>**Server**: Select this to specify the fault string for server side errors. |
| **Fault String** | The detailed fault string of the fault code. The following options are available:<br/><br/>**value**: If this option is selected, the fault string is specified as a string value.<br/>**expression**: If this option is selected, the fault string is specified as an expression.<br/><br/>You can click **NameSpaces** to add namespaces if you are providing an expression. Then the **Namespace Editor** panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression. |
| **Fault Actor** | The element of the SOAP fault message which is used to capture the party which caused the fault. |
| **Detail** | This parameter is used to enter a custom description of the error. |
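These parameters map onto the configuration as follows; this is a sketch, and the reason and detail texts are illustrative:

```xml
<makefault version="soap11">
   <!-- fault code: a client-side error in this sketch -->
   <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:Client"/>
   <!-- fault string given as a static value -->
   <reason value="Invalid request payload"/>
   <detail>The request did not match the expected schema.</detail>
</makefault>
```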
The parameters available to configure the Fault mediator to create a SOAP 1.2 fault are as follows.
| Parameter Name | Description |
|----------------|-------------|
| **Code** | This parameter is used to select the fault code for which the reason should be defined. Possible values are as follows:<br/><br/>**versionMismatch**: Select this to specify the reason for a SOAP version mismatch.<br/>**mustUnderstand**: Select this to specify the reason for the mustUnderstand error in SOAP.<br/>**dataEncodingUnknown**: Select this to specify the reason for a SOAP encoding error.<br/>**Sender**: Select this to specify the reason for a sender-side error.<br/>**Receiver**: Select this to specify the reason for a receiver-side error. |
| **Reason** | This parameter is used to specify the reason for the error code selected in the **Code** parameter. The following options are available:<br/><br/>**value**: If this option is selected, the reason is specified as a string value.<br/>**expression**: If this option is selected, the reason is specified as an expression.<br/><br/>You can click **NameSpaces** to add namespaces if you are providing an expression. Then the **Namespace Editor** panel would appear where you can provide any number of namespace prefixes and URLs used in the XPath expression. |
| **Role** | The SOAP 1.2 role name. |
| **Node** | The SOAP 1.2 node name. |
| **Detail** | This parameter is used to enter a custom description of the error. |
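A SOAP 1.2 fault can then be constructed along these lines; this is a sketch, and the reason, node, and detail values are illustrative:

```xml
<makefault version="soap12">
   <code xmlns:soap12Env="http://www.w3.org/2003/05/soap-envelope" value="soap12Env:Receiver"/>
   <reason value="Backend service is unavailable"/>
   <!-- optional SOAP 1.2 node and role -->
   <node>http://backend.example.com/node</node>
   <role>http://www.w3.org/2003/05/soap-envelope/role/ultimateReceiver</role>
   <detail>The endpoint did not respond within the configured timeout.</detail>
</makefault>
```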
The parameters available to configure the Fault mediator to create a plain-old XML (POX) fault are as follows.
| Parameter Name | Description |
|----------------|-------------|
| **Reason** | A custom fault message. The following options are available.<br/>• **value**: The fault message is specified as a static string value.<br/>• **expression**: The fault message is specified as an expression.<br/>If you provide an expression, you can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
| **Detail** | Details for the fault message. The following options are available.<br/>• **value**: The detail is specified as a static string value.<br/>• **expression**: The detail is specified as an expression.<br/>If you provide an expression, you can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
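Putting these parameters together, the mediator is configured with the standard Synapse `makefault` element. The following is a minimal sketch of a SOAP 1.1 fault for client-side errors; the fault string shown is illustrative.

```xml
<makefault version="soap11">
    <!-- Fault Code: a SOAP 1.1 fault code, qualified with the SOAP envelope namespace -->
    <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:Client"/>
    <!-- Fault String: a static value; an expression can be used instead -->
    <reason value="Invalid request payload"/>
</makefault>
```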
    - -## Examples - -Following are examples of different usages of the Fault Mediator. - -### Example 1 - -In the following example, the ` testmessage ` -string value is given as the reason for the SOAP error -` versionMismatch ` . - -``` java - - - - - -``` - -### Example 2 - -The following sample proxy validates the content type using the Filter -Mediator based on the ` Content-Type ` header property. -If the result is true, it sends an exception back to the client using -the Fault Mediator. Else, if the result is false, it continues the flow. - -``` xml - - - - - - - - - - - - - - - - - - Content-Type: application/xhtml+xml is not a valid content type. - -
    - - - - - - - - - -
    - - - - - - - - - - - -``` - - diff --git a/en/docs/reference/mediators/filter-mediator.md b/en/docs/reference/mediators/filter-mediator.md deleted file mode 100644 index 00707bf49b..0000000000 --- a/en/docs/reference/mediators/filter-mediator.md +++ /dev/null @@ -1,120 +0,0 @@ -# Filter Mediator - -The **Filter Mediator** can be used for filtering messages based on an -XPath, JSONPath or a regular expression. If the test succeeds, the -Filter mediator executes the other mediators enclosed in the sequence. - -The Filter Mediator closely resembles the "If-else" control structure. - -!!! Info - The Filter mediator is a [conditionally]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) [content aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - -## Syntax - -``` java - - mediator+ - -``` - -This mediator could also be used to handle a scenario where two -different sequences are applied to messages that meet the filter -criteria and messages that do not meet the filter criteria. - -``` java - - - mediator+ - - - mediator+ - - -``` - -In this case, the Filter condition remains the same. The messages that -match the filter criteria will be mediated using the set of mediators -enclosed in the ` then ` element. The messages that do -not match the filter criteria will be mediated using the set of -mediators enclosed in the ` else ` element. - -## Configuration - -The parameters available for configuring the Filter mediator are as -follows: - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Specify As** | Specifies whether the filter criteria are given as an XPath/JSONPath expression or as a regular expression.<br/>• **XPath**: The Filter mediator tests the given XPath/JSONPath expression as a boolean expression. When specifying a JSONPath, use the format `json-eval(<JSON_PATH>)`, such as `json-eval(getQuote.request.symbol)`.<br/>• **Source and Regular Expression**: The Filter mediator matches the evaluation result of a source XPath/JSONPath expression, as a string, against the given regular expression. |
| **Source** | The expression to locate the value that is matched against the regular expression defined in the **Regex** parameter.<br/>If you provide an expression, you can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
| **Regex** | The regular expression to match the source value. |
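Based on these parameters, a regular-expression filter takes the following general form (a minimal sketch; the source expression and regex are illustrative):

```xml
<filter source="get-property('To')" regex=".*/StockQuote.*">
    <then>
        <!-- Mediators applied to messages that match the filter criteria -->
        <log level="simple"/>
    </then>
    <else>
        <!-- Mediators applied to messages that do not match -->
        <drop/>
    </else>
</filter>
```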
    - -## Examples - -### Sending only messages matching the filter criteria - -In this example, the Filter will get the ` To ` header -value and match it against the given regular expression. If this -evaluation returns ` true ` , it will send the message. -If the evaluation returns ` false ` , it will drop the -message. - -``` java - - - - - - - - -``` - -### Applying separate sequences - -In this example, the [Log mediator]({{base_path}}/reference/mediators/log-mediator) is used to log -information from a service named Bus Services via a property when the -request matches the filter criteria. When the request does not match the -filter criteria, another log mediator configuration is used log -information from a service named Train Service in a similar way. - -``` - - - - - - - - - - - - -``` diff --git a/en/docs/reference/mediators/foreach-mediator.md b/en/docs/reference/mediators/foreach-mediator.md deleted file mode 100644 index 6e633cf6e2..0000000000 --- a/en/docs/reference/mediators/foreach-mediator.md +++ /dev/null @@ -1,121 +0,0 @@ -# ForEach Mediator - -The ForEach mediator requires an XPath/JSONPath expression and a sequence (inline or referred). It splits the message into a number of different messages -derived from the original message by finding matching elements for the -XPath/JSONPath expression specified. Based on the matching elements, new messages -are created for each iteration and processed sequentially. The -processing is carried out based on a specified sequence. The behaviour -of ForEach mediator is similar to a generic loop. After mediation, the -sub-messages are merged back to their original parent element in the -original message sequentially. - -The ForEach mediator creates the following properties during mediation. - -| Property | Description | -|----------------------------|-------------------------------------------------------------------------------------------------------| -| FOREACH_ORIGINAL_MESSAGE | This contains the original envelop of the messages split by the ForEach mediator. | -| FOREACH_COUNTER | This contains the count of the messages processed. The message count increases during each iteration. | - -!!! Note - [Iterate Mediator]({{base_path}}/reference/mediators/iterate-Mediator) is quite similar to the ForEach - mediator. You can use complex XPath expressions to conditionally select - elements to iterate over in both mediators. Following are the main - difference between ForEach and Iterate mediators: - - - Use the ForEach mediator only for message transformations. If you - need to make back-end calls from each iteration, then use the - iterate mediator. - - ForEach supports modifying the original payload. You can use Iterate - for situations where you send the split messages to a target and - collect them by an Aggregate in a different flow - - You need to always accompany an Iterate with an Aggregate mediator. - ForEach loops over the sub-messages and merges them back to the same - parent element of the message. - - In Iterate you need to send the split messages to an endpoint to - continue the message flow. However, ForEach does not allow using - [Call]({{base_path}}/reference/mediators/call-Mediator), [Send]({{base_path}}/reference/mediators/send-Mediator) and - [Callout]({{base_path}}/reference/mediators/callout-Mediator) mediators in the sequence. - - ForEach does not split the message flow, unlike Iterate Mediator. It - guarantees to execute in the same thread until all iterations are - complete. 
- -When you use ForEach mediator, you can only loop through segments of the -message and do changes to a particular segment. For example, you can -change the payload using payload factory mediator. But you cannot send -the split message out to a service. Once you exit from the ForEach loop, -it automatically aggregates the split segments. This replaces the -ForEach function of the complex XSLT mediators using a ForEach mediator -and a Payload Factory mediator. However, to implement the -split-aggregate pattern, you still need to use Iterate mediator. - -## Syntax - -``` - - - (mediator)+ - ? - -``` - -## Configuration - -The parameters available to configure the ForEach mediator are as follows. - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **ForEach ID** | If a value is entered for this parameter, it is used as the prefix for the `FOREACH_ORIGINAL_MESSAGE` and `FOREACH_COUNTER` properties created during mediation. This is an optional parameter; however, it is recommended to define a ForEach ID in nested ForEach scenarios to avoid the above properties being overwritten. |
| **Expression** | The XPath/JSONPath expression with which different messages are derived by splitting the parent message. This expression should have matching elements based on which the splitting is carried out.<br/>You can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
| **Sequence** | The mediation sequence that should be applied to the messages derived from the parent message. The ForEach mediator is used only for transformations; therefore, you should not include the Call, Send, or Callout mediators (which are used to invoke endpoints) in this sequence.<br/>You can select one of the following options.<br/>• **Anonymous**: Define an anonymous sequence to be applied to the split messages by adding the required mediators as children of the ForEach mediator in the mediator tree.<br/>• **Pick from Registry**: Pick an existing mediation sequence that is saved in the Registry. Click either **Configuration Registry** or **Governance Registry** as relevant to select the required mediation sequence from the resource tree. |
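Based on these parameters, a ForEach configuration with an inline (Anonymous) sequence takes the following general form (a minimal sketch; the expression and namespace are illustrative):

```xml
<foreach xmlns:m0="http://services.samples" expression="//m0:getQuote/m0:request">
    <sequence>
        <!-- Mediators applied to each split message; Call, Send, and Callout are not allowed here -->
        <log level="full"/>
    </sequence>
</foreach>
```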
    - -## Examples - -In this configuration, the ` "//m0:getQuote/m0:request" ` -XPath and ` "json-eval($.getQuote.request)" ` JSONPath expression evaluates the split messages to be derived from the -parent message. Then the split messages pass through a sequence which -includes a [Log mediator]({{base_path}}/reference/mediators/log-Mediator) with the log level set to -` full ` . - -=== "Using a XPath expression" - ``` java - - - - - - ``` - -=== "Using a JSONPath expression" - ``` java - - - - - - ``` - diff --git a/en/docs/reference/mediators/header-mediator.md b/en/docs/reference/mediators/header-mediator.md deleted file mode 100644 index 9d8d84a288..0000000000 --- a/en/docs/reference/mediators/header-mediator.md +++ /dev/null @@ -1,176 +0,0 @@ -# Header Mediator - -The **Header Mediator** allows you to manipulate SOAP and HTTP headers. - -!!! Info - The Header mediator is a [conditionally]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) [content aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - -## Syntax - -``` java -
    -``` - -The optional ` action ` attribute specifies whether the -mediator should set or remove the header. If no value is specified, the -header is set by default. - -## Configuration - -The parameters available to configure the Header mediator are as follows. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Name** | The name of the header element. You can specify the namespace used in the header element by clicking the **Namespaces** link next to the text field. |
| **Action** | Select **Set** if you want to set the header as a new header. Select **Remove** if you want to remove the header from the incoming message. |
| **Value/Expression** | A static value or an XPath/JSONPath expression that is executed on the message to set the header value. |
| **Inline XML Header** | Allows you to directly input any XML syntax related to the Header mediator (specifically for SOAP headers). For example, to set a `lastTradeTimestamp` SOAP header, enter `<urn:lastTradeTimestamp xmlns:urn="http://synapse.apache.org/">Mon May 13 13:52:17 IST 2013</urn:lastTradeTimestamp>` in this parameter. |
| **Scope** | Select **Synapse** if you want to manipulate SOAP headers. Select **Transport** if you want to manipulate HTTP headers. |
| **Namespaces** | Click this link to add namespaces if you are providing an expression. The **Namespace Editor** panel appears, where you can enter any number of namespace prefixes and URLs that you have used in the XPath expression. |
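Based on these parameters, typical header configurations take the following general form (a minimal sketch; the header names and values are illustrative):

```xml
<!-- Set an HTTP header (Transport scope) -->
<header name="Accept" value="image/jpeg" scope="transport"/>
<!-- Remove a SOAP header (Synapse scope, the default) -->
<header xmlns:wsa="http://www.w3.org/2005/08/addressing" name="wsa:To" action="remove"/>
```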
    - -## Examples - -This section covers the following scenarios in which the Header mediator can be used. - -### Using SOAP headers - -In the following example, the value for ` P1 code ` -should be included in the SOAP header of the message sent from the -client to the Micro Integrator. To do this, the header mediator is added to -the in sequence of the proxy configuration as shown below. - -To get a response with ` Hello World ` in the SOAP -header, the header mediator is also added to the out sequence. - -``` java - -
    - XYZ -
    - - -
    - - - - -
    - - World - -
    - -
    -``` - -### Using HTTP headers - -The following example makes the ESB profile add the HTTP header -` Accept ` with the value ` image/jpeg ` -to the HTTP request made to the endpoint. - -``` - -
    - - -
    - - - - - - -``` - -If you have [enabled wire logs]({{base_path}}/integrate/develop/using-wire-logs), you will view the following output. - -``` text -<< GET /people/eric+cooke HTTP/1.1 -<< Accept: image/jpeg -<< Host: localhost:9763 -<< Connection: Keep-Alive -``` - -### Handling headers with complex XML - -A header can contain XML structured values by embedding XML content -within the `
    ` element as shown below. - -``` -
    - - - - -
    -``` - -### Adding a dynamic SOAP header - -The following configuration takes the value of an element named -` symbol ` in the message body (the namespace -` http://services.samples/xsd `), and adds it as a SOAP -header named ` header1 ` . - -``` -
    -``` - -### Setting the endpoint URL dynamically - -In this example, the Header mediator allows the endpoint URL to which -the message is sent to be set dynamically. It specifies the default -address to which the message is sent dynamically by deriving the To -header of the message via an XPath expression. Then the [Send mediator]({{base_path}}/reference/mediators/send-mediator) sends the message to a **Default Endpoint**. A Default Endpoint sends the message to the default address of the message (i.e. address specified in the To header). Therefore, in this scenario, selecting the Default Endpoint results in the message being sent to relevant URL calculated via the ` fn:concat('http://localhost:9764/services/Axis2SampleService_',get-property('epr')) ` -expression. - -``` -
    - - - - - -``` - -### Setting the header with a value in the JSON body - -``` -
    -``` diff --git a/en/docs/reference/mediators/iterate-mediator.md b/en/docs/reference/mediators/iterate-mediator.md deleted file mode 100644 index 03fe28cce8..0000000000 --- a/en/docs/reference/mediators/iterate-mediator.md +++ /dev/null @@ -1,199 +0,0 @@ -# Iterate Mediator - -The **Iterate Mediator** implements the [Splitter enterprise integration -pattern](http://docs.wso2.org/wiki/display/IntegrationPatterns/Splitter) -and splits the message into a number of different messages derived from -the parent message. The Iterate mediator is similar to the [Clone mediator]({{base_path}}/reference/mediators/clone-Mediator). The difference between the two mediators -is, the Iterate mediator splits a message into different parts, whereas the Clone mediator makes multiple identical copies of the message. - -!!! Info - - The Iterate mediator is a [content aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - - Iterate Mediator is quite similar to the [ForEach mediator]({{base_path}}/reference/mediators/foreach-mediator). You can use complex XPath expressions or JSON expressions to conditionally select elements to iterate over in both mediators. Following are the main difference between ForEach and Iterate mediators: - - Use the ForEach mediator only for message transformations. If you - need to make back-end calls from each iteration, then use the - iterate mediator. - - ForEach supports modifying the original payload. You can use Iterate - for situations where you send the split messages to a target and - collect them by an Aggregate in a different flow - - You need to always accompany an Iterate with an Aggregate mediator. - ForEach loops over the sub-messages and merges them back to the same - parent element of the message. - - In Iterate you need to send the split messages to an endpoint to continue the message flow. However, ForEach does not allow using [Call]({{base_path}}/reference/mediators/call-mediator), [Send]({{base_path}}/reference/mediators/send-mediator) and - [Callout]({{base_path}}/reference/mediators/callout-mediator) mediators in the sequence. - - ForEach does not split the message flow, unlike Iterate Mediator. It - guarantees to execute in the same thread until all iterations are - complete. - -When you use ForEach mediator, you can only loop through segments of the -message and do changes to a particular segment. For example, you can -change the payload using payload factory mediator. But you cannot send -the split message out to a service. Once you exit from the for-each -loop, it automatically aggregates the split segments. This replaces the -for-each function of the complex XSLT mediators using a ForEach mediator -and a Payload Factory mediator. However, to implement the -split-aggregate pattern, you still need to use Iterate mediator. - -## Syntax - -``` java - - - - (mediator)+ - ? - - endpoint - ? - + - -``` - -## Configuration - -The parameters available to configure the Iterate mediator are as -follows. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Iterate ID** | The iterate ID can be used to identify messages created by the Iterate mediator. This is particularly useful when aggregating responses of messages that are created using nested Iterate mediators. |
| **Sequential Mediation** | Specifies whether the split messages should be processed sequentially. The processing is carried out based on the information relating to the sequence and endpoint specified in the target configuration. The possible values are as follows.<br/>• **True**: The split messages are processed sequentially. Note that selecting **True** might cause delays due to high resource consumption.<br/>• **False**: The split messages are not processed sequentially. This is the default value and it results in better performance.<br/>Note that the responses are not necessarily aggregated in the same order that the requests were sent, even if **Sequential Mediation** is set to **True**. |
| **Continue Parent** | Specifies whether the original message should be preserved. Possible values are as follows.<br/>• **True**: The original message is preserved.<br/>• **False**: The original message is discarded. This is the default value. |
| **Preserve Payload** | Specifies whether the original message payload should be used as a template when creating split messages. Possible values are as follows.<br/>• **True**: The original message payload is used as a template.<br/>• **False**: The original message payload is not used as a template. This is the default value. |
| **Iterate Expression** | The XPath expression used to split the message. This expression selects the set of XML elements from the request payload that are applied to the mediation defined within the iterate target. Each iteration of the Iterate mediator gets one element from that set. New messages are created for each matching element and are processed in parallel or in sequence based on the value specified for the **Sequential Mediation** parameter.<br/>You can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
| **Attach Path** | To form new messages, you can specify an XPath or JSONPath expression to identify the parent element to which the split elements (selected by the **Iterate Expression**) are attached.<br/>You can click **NameSpaces** to open the **Namespace Editor** panel and add any number of namespace prefixes and URLs used in the XPath expression. |
### Target configuration

Each Iterate mediator has its own target by default. It appears in the mediation tree once you configure the above parameters and save them.

The parameters available to configure the target configuration are as follows:
| Parameter Name | Description |
|----------------|-------------|
| **SOAP Action** | The SOAP action of the message. |
| **To Address** | The target endpoint address. |
| **Sequence** | Specifies whether the split messages should be mediated via a sequence, and if so, which sequence. Possible options are as follows.<br/>• **None**: No further mediation is performed for the split messages.<br/>• **Anonymous**: Define an anonymous sequence for the split messages by adding the required mediators as children of **Target** in the mediator tree.<br/>• **Pick From Registry**: Refer to a predefined sequence that is currently saved as a resource in the registry. Click either **Configuration Registry** or **Governance Registry** as relevant to select the required sequence from the resource tree. |
| **Endpoint** | The endpoint to which the split messages should be sent. Possible options are as follows.<br/>• **None**: The split messages are not sent to an endpoint.<br/>• **Anonymous**: Define an anonymous endpoint within the iterate target configuration to which the split messages should be sent.<br/>• **Pick from Registry**: Refer to a predefined endpoint that is currently saved as a resource in the registry. Click either **Configuration Registry** or **Governance Registry** as relevant to select the required endpoint from the resource tree. |
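Combining the mediator parameters with a target configuration, an Iterate configuration takes the following general form (a minimal sketch; the expression and endpoint address are illustrative):

```xml
<iterate xmlns:m0="http://services.samples" expression="//m0:getQuote/m0:request">
    <target>
        <sequence>
            <!-- Each split message is sent to an endpoint to continue the flow -->
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </sequence>
    </target>
</iterate>
```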
    - -## Examples - -In these examples, the **Iterate** mediator splits the messages into parts and processes them asynchronously. Also see [Splitting Messages into Parts and Processing in Parallel (Iterate/Aggregate)](https://docs.wso2.com/pages/viewpage.action?pageId=119129658). - -=== "Using an XPath expression" - ``` java - - - - - -
    - - - - - - ``` - -=== "Using a JSONpath expression" - ``` java - - - - - - - - - - - - ``` - diff --git a/en/docs/reference/mediators/json-transform-mediator.md b/en/docs/reference/mediators/json-transform-mediator.md deleted file mode 100644 index 329173bf00..0000000000 --- a/en/docs/reference/mediators/json-transform-mediator.md +++ /dev/null @@ -1,302 +0,0 @@ -# JSON Transform Mediator - -The **JSON Transform mediator** is used for controlling XML to JSON transformations (possibly with a JSON Schema) inside a mediation. Normally XML to JSON transformations are controlled by the properties defined in `synapse.properties`. - -Those configurations are applied globally and you cannot have independent configurations for each mediation scenario. -With JSON Transform mediator you can define the properties inside the mediation and control the transformation independently. -Also you can have a JSON schema to correct the payload if there are inconsistencies in the transformation. - - - -!!! Info - The JSON Transform mediator is a [content aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - -## Syntax - -``` java - - * - -``` - -## Configuration - -The general parameters available for configuring the JSON Transform mediator are as follows. - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Schema** | The registry location of the JSON schema file. You can also specify a Local Entry. |

Apart from defining a schema, you can also add properties to control the XML to JSON transformation. The parameters available for configuring a property are as follows:
| Parameter Name | Description |
|----------------|-------------|
| **Property Name** | The name of the property that needs to be overridden in the sequence. The JSON Transform mediator supports only the properties related to XML to JSON conversion (that is, the XML-to-JSON properties normally defined in `synapse.properties`). |
| **Property Value** | The value that should be overridden. |
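Based on these parameters, a JSON Transform configuration that points to a schema and overrides a transformation property can be sketched as follows. The schema path matches the example below; the property name is an assumption based on the standard Synapse XML-to-JSON properties.

```xml
<jsontransform schema="conf:/Schema.json">
    <!-- Overrides the global auto-primitive behavior for this mediation only -->
    <property name="synapse.commons.json.output.autoPrimitive" value="false"/>
</jsontransform>
```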
    - -## Challenges when converting XML to JSON - -### Converting an array of one from XML to JSON - -Let's say we do a search and get the results in XML. We want the results to be converted to JSON array when returned to the client. A blind XML to JSON transformation would look like this. - -```xml - - Harry Potter - Lord of the Rings - -``` - -```json - {"books" : { "book" : ["Harry Potter", "Lord of the Rings"]}} -``` - -Let's say we get only one result. The converted JSON would come out like below. - -```xml - - Harry Potter - -``` - -```json - {"books" : { "book" : "Harry Potter"}} -``` - -Theoretically the above conversion is correct. However, a client might expect an array and not a string. - -We can tackle the above issue with a JSON Schema and correct the output to be an JSON Array. - -```json - { - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "properties": { - "books": { - "type": "object", - "properties": { - "book": { - "type": "array" - } - } - } - } - } -``` - -### Losing data type information - -Since we cannot differentiate between String, Numeric and Boolean in XML, when we do a conversion from XML to JSON, the data type of value might be not what we expected. -Look at the following the example. - -```xml - - 56783 - Alice - true - -``` -By default, the JSON output would look like below after the mediation. - -```json -{ - "person": { - "id": 56783, - "name": "Alice", - "isAdmin": true - } -} -``` - -The field `id` has been converted to number, `name` to String and `isAdmin` to boolean. - -The runtime has automatically detected and parsed the values to native data-types. But there might be a scenario where the client expects a String type for `id`. - -We want the native conversion rules applied to `name` and `isAdmin` fields and not `id`. - -With JSON Transform mediator, we can use a JSON schema to tackle this issue. - -```json -{ - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "properties": { - "person": { - "type": "object", - "properties": { - "id": { - "type": "string" - } - } - } - } -} -``` - -With this schema correction, the JSON payload would come out as below. This gives granular level control over individual field data types rather than using a global property. - -```json -{ - "person": { - "id": "56783", - "name": "Alice", - "isAdmin": true - } -} -``` - -## Example - -Given below is a sample schema file (Schema.json) file that you can use for running the examples given below. -Add this sample schema file (i.e. Schema.json) to the following registry path: conf:/Schema.json. -For instructions on adding the schema file to the Registry Resources Project, see [Creating Registry Resource]({{base_path}}/integrate/develop/creating-artifacts/creating-registry-resources). - -```json -{ - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "properties": { - "fruit": { - "type": "string", - "minLength": 4, - "maxLength": 6, - "pattern": "^[0-9]{1,45}$" - }, - "price": { - "type": "number", - "minimum": 2, - "maximum": 20, - "exclusiveMaximum": 20, - "multipleOf": 2.5 - } - }, - "required": [ - "price" - ] -} -``` - -Use the following payload to test the examples: - -```xml - - 12345 - 7.5 - 10 - -``` - -### Example 1 - Overriding global synapse properties - -This example will override the XML to JSON transformation properties defined in the **synapse.properies** configuration with the properties given by the JSON transform mediator. 
- -``` xml - - - - - - - - - - - - -``` - -Output: All the numeric values have been converted to string values since the auto primitive property is defined as false. - -```json -{ - "fruit": "12345", - "price": "7.5", - "quantity": "10" -} -``` - -### Example 2 - Using a JSON schema - -This will perform the XML to JSON transformation using the global synapse settings and then apply the JSON schema that is added from the JSON transform mediator. - -``` xml - - - - - - - - - - -``` - -Output: The 'fruit' value has been converted to string and the 'price' value has been converted to a number according to the schema definition. Please note that 'quantity' has been converted to a number because it is the default property according to the **synapse.properties** file. - -``` json - { - "fruit": "12345", - "price": 7.5, - "quantity": 10 - } -``` - -### Example 3 - Overriding global synapse properties and applying a JSON schema - -This will first override the XML to JSON transformation properties defined in the synapse.properies configuration with the properties given by the JSON transform mediator and also apply the JSON Schema given in the mediator. - -``` java - - - - - - - - - - - - -``` - -Output: The 'fruit' value has been converted to a string and the 'price' value has been converted to a number according to the schema definition. -Please note that 'quantity' has been converted to a string because we have overridden the global synapse.properties file. - -``` json - { - "fruit": "12345", - "price": 7.5, - "quantity": "10" - } -``` - diff --git a/en/docs/reference/mediators/log-mediator.md b/en/docs/reference/mediators/log-mediator.md deleted file mode 100644 index ea53be24d7..0000000000 --- a/en/docs/reference/mediators/log-mediator.md +++ /dev/null @@ -1,127 +0,0 @@ -# Log Mediator - -The **Log mediator** is used to log mediated messages. For more information on logging, see [Monitoring Logs]({{base_path}}/observe/micro-integrator/classic-observability-logs/monitoring-logs/). - -!!! Info - The Log mediator is a [conditionally]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) [content aware]({{base_path}}/concepts/message-processing-units/#classification-of-mediators) mediator. - -## Syntax - -The log token refers to a ` ` element, which may be -used to log messages being mediated. - -``` java - - * - -``` - -## Configuration - -The general parameters available to configure the Log mediator are as -follows. - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Log Category** | The log category. The following log categories correspond to the service-level logs of the ESB profile.<br/>• **TRACE**: Designates finer-grained informational events than **DEBUG**.<br/>• **DEBUG**: Designates fine-grained informational events that are most useful to debug an application.<br/>• **INFO**: Designates informational messages that highlight the progress of the application at a coarse-grained level.<br/>• **WARN**: Designates potentially harmful situations.<br/>• **ERROR**: Designates error events that might still allow the application to continue running.<br/>• **FATAL**: Designates very severe error events that will presumably lead the application to abort. |
| **Log Level** | The log level. The possible values are as follows.<br/>• **Full**: All the standard headers logged at the **Simple** level, as well as the full payload of the message, are logged. This log level causes the message content to be parsed and hence incurs a performance overhead.<br/>• **Simple**: The standard headers (i.e., `To`, `From`, `WSAction`, `SOAPAction`, `ReplyTo`, and `MessageID`) are logged.<br/>• **Headers**: All the SOAP header blocks are logged.<br/>• **Custom**: Only the properties added to the Log mediator configuration are logged.<br/>The properties included in the Log mediator configuration are logged regardless of the log level selected. |
| **Log Separator** | The value used in the log to separate attributes. The comma (`,`) is the default.<br/>Use only the Source View to add a tab (i.e., by defining the `separator="&#x9;"` parameter in the syntax) or a new line (i.e., by defining the `separator="&#xA;"` parameter in the syntax) as the Log Separator, since the Design View does not support this. |

The parameters available to configure a property are as follows:
| Parameter Name | Description |
|----------------|-------------|
| **Property Name** | The name of the property to be logged. |
| **Property Value** | The possible values for this parameter are as follows:<br/>• **Value**: A static value is used as the property value; enter this value in the **Value/Expression** parameter.<br/>• **Expression**: The property value is determined during mediation by evaluating an expression; enter this expression in the **Value/Expression** parameter. |
| **Value/Expression** | A static value, or an expression that is evaluated to obtain the property value, based on what you entered for the **Property Value** parameter. When specifying a JSONPath, use the format `json-eval(<JSON_PATH>)`, such as `json-eval(getQuote.request.symbol)`. |
| **Action** | This parameter allows the property to be deleted. |
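For instance, a custom log that combines a static property with an expression-based property can be sketched as follows (the property names and expression are illustrative):

```xml
<log level="custom">
    <!-- A static property value -->
    <property name="service" value="StockQuoteService"/>
    <!-- A property value evaluated from the message with an XPath expression -->
    <property xmlns:m0="http://services.samples/xsd" name="symbol" expression="//m0:symbol"/>
</log>
```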
    - -## Examples - -### Using Full log - -In this example, everything is logged including the complete SOAP -message. - -``` java - -``` - -### Using Custom logs - -In this example, the log level is ` custom ` . A property -with an XPath expression which is used to get a stock price from a -message is included. This results in logging the stock, price which is a -dynamic value. - -``` - - - -``` diff --git a/en/docs/reference/mediators/loopback-mediator.md b/en/docs/reference/mediators/loopback-mediator.md deleted file mode 100644 index f2a84e1c40..0000000000 --- a/en/docs/reference/mediators/loopback-mediator.md +++ /dev/null @@ -1,69 +0,0 @@ -# Loopback Mediator - -The **Loopback Mediator** moves messages from the in flow (request path) to the out flow (response path). All the configuration included in the in sequence that appears after the Loopback mediator is skipped. - -!!! Info - - The Loopback mediator is a [content-unaware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - - The messages that have already been passed from the In sequence to the Out sequence cannot be moved to the Out sequence again via the Loopback mediator. - -## Syntax - -The loopback token refers to a `` element, which is used to skip the rest of the in flow and move the message to the out flow. - -``` java - -``` - -## Configuration - -As with other mediators, after adding the Loopback mediator to a sequence, you can click its up and down arrows to move its location in the sequence. - -## Example - -This example is a main sequence configuration with two [PayloadFactory mediators]({{base_path}}/reference/mediators/payloadfactory-mediator). Assume you only want to use the -first factory but need to keep the second factory in the configuration for future reference. The Loopback mediator is added after the first -PayloadFactory mediator configuration to skip the second PayloadFactory mediator configuration. This configuration will cause the message to be processed -with the first payload factory and then immediately move to the out flow, skipping the second payload factory in the in flow. - -``` java - - - - - - - - $1 - - - - - - - - - - - - - - - $1 - - - - - - - - - - - - - -``` diff --git a/en/docs/reference/mediators/ntlm-mediator.md b/en/docs/reference/mediators/ntlm-mediator.md deleted file mode 100644 index 99d4bb680e..0000000000 --- a/en/docs/reference/mediators/ntlm-mediator.md +++ /dev/null @@ -1,88 +0,0 @@ -# NTLM Mediator - -NTLM (Windows NT LAN Manager) is an authentication protocol provided in Windows server. NTLM authentication is based on a challenge response-based protocol and WSO2 API Manager gives support to access NTLM protected services by using the NTLM mediator. You need to configure the NTLM backend and use that credentials to access NTLM protected services by using the WSO2 API Manager. First you need to initialize the NTLM mediator and then you can use call mediator or callout mediator to send requests to the backend service. - -!!! Info - - The NTLM mediator is a [content-unaware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - -## Syntax - -```xml - - -``` - -## Configuration - -The parameters available for configuring the NTLM mediator are as follows. - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Domain** | The domain of the NTLM-configured host. Set the domain name of your NTLM-configured computer here. |
| **Host** | The host name of the NTLM-configured backend. |
| **ntlmVersion** | The NTLM version to connect with. Currently, two NTLM versions are available: v1 and v2. |
| **Username** | The NTLM backend username. This is the username of the NTLM-enabled backend Windows server. |
| **Password** | The NTLM backend password. This is the password of the NTLM-enabled backend Windows server. |
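Based on these parameters, initializing the mediator can be sketched as follows (all attribute values are placeholders for your NTLM-enabled Windows server):

```xml
<NTLM domain="DOMAIN" host="winhost" username="user" password="password" ntlmVersion="v2"/>
```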
    - -## Example - -An example NTLM mediator config is as follows: - -```xml - -``` - -For MI versions, make sure to include jcifs dependency as it is not included in the product by default. - -Use call or callout mediator with initAxis2ClientOptions option set to "false". - -Once you have initialized the NTLM mediator, you can call the NTLM enabled endpoint with a call with blocking mode or with a callout mediator. Check the following two example scenarios: - -**Example 1 - With Callout Mediator calling a SOAP endpoint** - -```xml - - - - - - - - - -``` - -**Example 2 - With Call Mediator calling a REST endpoint** - -```xml - - - - -
    - - - - - -``` \ No newline at end of file diff --git a/en/docs/reference/mediators/oauth-mediator.md b/en/docs/reference/mediators/oauth-mediator.md deleted file mode 100644 index d8104c9bf5..0000000000 --- a/en/docs/reference/mediators/oauth-mediator.md +++ /dev/null @@ -1,48 +0,0 @@ -# OAuth Mediator - -The **OAuth Mediator** supports 2 forms of OAuth. It bypasses the RESTful requests and authenticates users against WSO2 Identity Server. - -When a client tries to invoke a RESTful service, it may be required to verify the credentials of the client. This can be achieved by registering an OAuth application in the WSO2 Identity Server. When the client sends a REST call with the Authorization header to the Micro Integrator, the OAuth mediator validates it with the Identity server and proceeds. - -See [2-legged OAuth for Securing a RESTful Service](https://docs.wso2.com/display/IS570/2-legged+OAuth+for+Securing+a+RESTful+Service) for detailed instructions to carry out this process. - -!!! Info - If you are using OAuth 1 a, you will get the `org.apache.synapse.SynapseException: Unable to find SCOPE value in Synapse Message Context` error when the ` SCOPE ` property is not set in the synapse message context. To avoid this error, add a property with the name `scope` and a value in the synapse message context as shown in the [Example](#example) section. - -## Syntax - -``` java - -``` - -## Configuration - -The parameters available to configure the OAuth mediator are as follows. - -| Parameter Name | Description | -|------------------|----------------------------------------------------------------| -| **OAuth Server** | The server URL of the WSO2 Identity Server. | -| **Username** | The user name to be used to log into the WSO2 Identity Server. | -| **Password** | The password used to log into the WSO2 Identity Server. | - -## Example - -In the following OAuth mediator configuration accesses a remote service -via the ` https://localhost:9443/service ` URL. The user -accessing this service is authenticated via the OAuth application -registered in the WSO2 Identity Server and accessed via the -` http://ws.apache.org/ns/synapse ` URL. The username -used to log into the WSO2 Identity Server is ` foo ` and -the password is ` bar ` . Both the user name and the -password should be registered in the Identity Server. The [Property mediator]({{base_path}}/reference/mediators/property-mediator) adds a property named -` scope ` to the synapse message context. The value of -this property will be used by the OAuth mediator to send the OAuth -request. - -!!! Info - The following example is applicable for OAuth 2.0 as well. - -``` xml - - -``` diff --git a/en/docs/reference/mediators/payloadfactory-mediator.md b/en/docs/reference/mediators/payloadfactory-mediator.md deleted file mode 100644 index 97ce5899d6..0000000000 --- a/en/docs/reference/mediators/payloadfactory-mediator.md +++ /dev/null @@ -1,1280 +0,0 @@ -# PayloadFactory Mediator - -The **PayloadFactory Mediator** transforms or replaces the contents of a -message. That is, you can configure the format of the request or response -and also map it with arguments provided in the payloadfactory configuration. - -You can use two methods to format the payload using this mediator. - -- Use the **default** template to write the payload in the required format (JSON, XML, or text). -- Use the **FreeMarker** template to write the payload. This is particularly useful when - defining complex JSON payloads. 
- -You can provide arguments in the mediator configuration to pass values to your payload during runtime. -You can specify a static value or use an XPath/JSON expression to pass values dynamically. -The values passed by the arguments are evaluated against the existing -message. - -!!! Info - The PayloadFactory mediator is a [content aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - -## Syntax - -``` java - - - - * - - -``` - -If you want to change the payload type of -the outgoing message, such as to change it to JSON, add the -` messageType ` property after the -` ` tag. For example: - -``` -... - - -``` - -## Configuration - -Parameters available to configure the PayloadFactory mediator are as follows: - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **media-type** | Specifies whether the message payload should be formatted in JSON, XML, or text. If no media type is specified, the message is formatted in XML. |
| **template-type** | The template type determines how you define a new payload. Select one of the following template types:<br/>• **default**: Define the payload using the normal syntax of the specified media type.<br/>• **FreeMarker**: The mediator accepts FreeMarker templates to define the payload.<br/>See the examples given below for details. |
| **format** | Select one of the following values:<br/>• **Define Inline**: The payload format can be defined within the PayloadFactory mediator in the **Payload** field.<br/>• **Pick from Registry**: An existing payload format that is saved in the registry can be selected. Click either **Governance Registry** or **Configuration Registry** as relevant to select the payload format from the resource tree. |
| **Payload** | Define the payload according to the template type you selected for the **template-type** parameter. |
| **Arguments** | This section is used to add an argument that defines the actual value of each variable in the format definition. |
    - -You need to specify the payload format and arguments depending on the template-type you specified in the mediator configuration. - -### Default Template - -If you select **default** as the **template-type**, you can define the payload and arguments as shown below. This example defines an XML payload. - -```xml - - - - $1 - $2 - - - - - - - -``` - -- Payload Format - - As shown above, you can add content to the payload by specifying variables for each value that you want to add. Use the $n format. Start with n=1 and then increment the value for each additional variable as follows: `$1`, `$2`, etc. - -- Arguments - - The arguments must be entered in the same order as the variables in the payload. That is, the first argument defines the value for variable $1, the second argument defines the value for variable $2, etc. An argument can specify a literal string (e.g., "John") or an XPath/JSON expression that extracts the value from the content in the incoming payload as shown above. - -### FreeMarker Template - -The payloadFactory mediator of WSO2 APIM 4.0.0 supports [FreeMarker Templates](https://freemarker.apache.org/docs/). If you select **freemarker** as the **template-type**, you can define the payload as a FreeMarker template. The following example defines a JSON payload. - -!!! Note - - FreeMarker version 2.3.30 is tested with WSO2 APIM 4.0.0. - - You are not required to specify the CDATA tag manually when defining the payload. WSO2 Integration Studio will apply the tag automatically. - -```xml - - - - - - -``` - -When you use the FreeMarker template type as shown above, note that the script is wrapped inside a CDATA tag. This is applicable for all media types when the payload is defined **inline**. If you get the payload from the registry, the CDATA tag does not apply. - -The following root variables are available when you format a FreeMarker payload: - - - - - - - - - - - - - - - - - - - - - - -
| Variable | Description |
|----------|-------------|
| **payload** | Represents the current payload in the message context. It can be JSON, XML, or text. Regardless of the payload type, the `payload` variable is a FreeMarker hash type container. |
| **ctx** | Use the `ctx` variable to access properties with the 'default' scope. For example, if you have a property named `customer_id` in the default scope, you can get it in the FreeMarker template using `ctx.customer_id`. |
| **axis2** | Represents all the axis2 properties. |
| **trp** | Represents transport headers. You can access transport header values in the same way as accessing properties. |
| **arg** | Represents the arguments created in the PayloadFactory mediator. You can use `args.arg#` to get any argument, replacing `#` with the argument index. For example, to access the first argument in the FreeMarker template, use `args.arg1`. |
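For instance, the following FreeMarker snippet (with illustrative payload fields, property, and header names) combines these root variables in a single JSON template:

```json
{
  "orderId": "${payload.order.id}",
  "customer": "${ctx.customer_id}",
  "contentType": "${trp['Content-Type']}",
  "firstArgument": "${args.arg1}"
}
```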
    - -See the [Freemarker examples](#examples-using-the-freemarker-template) for details. - -## Examples: Using the default template - -### Using XML - -```xml - - - - - - - - - $1 - - - - - - - - - - - - - - $1 - $2 - - - - - - - - - - - -``` - -### Using JSON - -``` - - - { - "coordinates": null, - "created_at": "Fri Jun 24 17:43:26 +0000 2011", - "truncated": false, - "favorited": false, - "id_str": "$1", - "entities": { - "urls": [ - - ], - "hashtags": [ - { - "text": "$2", - "indices": [ - 35, - 45 - ] - } - ], - "user_mentions": [ - - ] - }, - "in_reply_to_user_id_str": null, - "contributors": null, - "text": "$3", - "retweet_count": 0, - "id": "##", - "in_reply_to_status_id_str": null, - "geo": null, - "retweeted": false, - "in_reply_to_user_id": null, - - "source": "<a -href=\"http://sites.google.com/site/yorufukurou/\" -rel=\"nofollow\">YoruFukurou</a>", - "in_reply_to_screen_name": null, - "user": { - "id_str": "##", - "id": "##" - }, - "place": null, - "in_reply_to_status_id": null -} - - - - - - - - - - - -``` - -If you specify a JSON expression in the PayloadFactory mediator, you -must use the ` evaluator ` attribute to specify that it -is JSON. You can also use the evaluator to specify that an XPath -expression is XML, or if you omit the evaluator attribute, XML is -assumed by default. For example: - - ---- - - - - - - - - - - -
| Evaluator | Usage |
|-----------|-------|
| **XML** | `<arg xmlns:m0="http://sample" expression="//m0:symbol" evaluator="xml"/>` or simply `<arg xmlns:m0="http://sample" expression="//m0:symbol"/>` |
| **JSON** | `<arg expression="$.user.id" evaluator="json"/>` |
    - -!!! Note - To evaluate the json-path against a property, use the following syntax: - ```xml - - ``` - Learn more about the [json-path syntax]({{base_path}}/integrate/examples/json_examples/json-examples). - -### Adding arguments - -In the following configuration, the values for format parameters -` code ` and ` price ` will be assigned -with values that are evaluated from arguments given in the specified -order. - -``` - - - - $1 - $2 - - - - - - - -``` - -### Suppressing the namespace - -To prevent the ESB profile from adding the default Synapse namespace in -an element in the payload format, use ` xmlns="" ` as -shown in the following example. - -``` java - - sagara - -``` - -### Including a complete SOAP envelope as the format - -In the following configuration, an entire SOAP envelope is added as the -format defined inline. This is useful when you want to generate the -result of the PayloadFactory mediator as a complete SOAP message with -SOAP headers. - -``` - - - - - -$1 - - - - - - - - -``` - -### Uploading a file to an HTTP endpoint via a multipart request - -The below example configuration uses VFS to upload the file in the -specified location to the given HTTP endpoint via a HTTP multipart -request. - -``` - - - - - - - - - - - - $1 - $2 - $4 - - - - - - - - - -
    - - - - -
    - - - - - 5 - file:/// - application/octet-stream - DELETE - .*\..* - - -``` - -In the above example, the following property mediator configuration sets -the message type as ` multipart/form-data ` . - -``` - -``` - -The below ` file ` parameter of the payload factory -mediator defines the HTTP multipart request. - -!!! Tip - Do not change the ` http://org.apache.axis2/xsd/form-data ` namesapce. - -``` xml -$4 -``` - -Also, the below property mediator configuration sets the content of the -uploaded file. - -``` -
    - -``` - -### Adding a literal argument - -The following example adds a literal argument to the Payload Factory -mediator, and sets it to true. This allows you to consider the type of -the argument value as String and to stop processing it. - -``` - - - - - - {"newValue" : "$1"} - - - - - - - - -``` - -Following is a sample payload (i.e., ` a.json ` file), -which you can process using the above configuration. - -**a.json** - -``` js -{"hello" : "abc"} -``` - -You can use the below sample cURL command to send the request to the -above configuration. - -``` js -curl -d @a.json http://localhost:8280/payload -H "Content-Type: application/json" -v -``` - -You view the below output: - -``` js -{"newValue" : "{"pqr":"abc"}"} -``` - -!!! Info - If you do not add the ` literal="true" ` within the -argument in the Payload Factory mediator of the above configuration, you -view the output as follows: - - {"newValue" : "abc"} - -If you want to evaluate a valid JSON object as a string, you need to use `literal="true"` in the PayloadFactoryMediator as indicated below, - -``` - - { "message":{ "payload": "$1" } } - - - - - -``` - -### Adding a custom SOAP header - -You can add custom SOAP headers to a request by using the PayloadFactory -Mediator in a proxy service as shown in the example below. - -``` xml - - - - -
    - - - - - - - - -$1 -$2 - - - - - -$3 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -``` - -## Examples: Using the FreeMarker template - -### XML to JSON Transformation - -This example shows how an XML payload can be converted to a JSON payload using a freemarker template. - -- Input Payload - ```xml - - John - Deo - 35 - - New York - Manhattan - - - ``` - -- Output Payload - ```json - { - "Name": "John Doe", - "Age": 35, - "Address": "Manhattan, NY" - } - ``` - -- FreeMarker Tamplate - ```json - { - "Name": "${payload.user.first_name} ${payload.user.last_name}", - "Age": "${payload.user.age}", - "Address": "${payload.user.location.city},${payload.user.location.state.@code}" - } - ``` - -- Synapse Code - ```xml - - - - - - - ``` - -You can get more info on how to use XML payloads from [FreeMarker's official documentation](https://freemarker.apache.org/docs/xgui.html). - -### JSON to XML Transformation - -This example shows how a JSON payload can be converted to an XML payload using a freemarker template. - -- Input Payload - ```json - { - "first_name": "John", - "last_name": "Deo", - "age": 35, - "location": { - "state": { - "code": "NY", - "name": "New York" - }, - "city": "Manhattan" - } - } - ``` - -- Output Payload - ```xml - - John Deo - 35 -
    Manhattan, NY
    -
    - ``` - -- FreeMarker Tamplate - ```xml - - ${payload.first_name} ${payload.last_name} - ${payload.age} -
    ${payload.location.city}, ${payload.location.state.code}
    -
    - ``` - -- Synapse Code - ```xml - - - ${payload.first_name} ${payload.last_name} - ${payload.age} -
    ${payload.location.city}, ${payload.location.state.code}
    - ]]> -
    - -
    - ``` - -### JSON to JSON Transformation - -This example shows how a JSON payload is transformed into another JSON format using a freemarker template. - -- Input Payload - ```json - { - "first_name": "John", - "last_name": "Deo", - "age": 35, - "location": { - "state": { - "code": "NY", - "name": "New York" - }, - "city": "Manhattan" - } - } - ``` - -- Output Payload - ```json - { - "Name": "John Doe", - "Age": 35, - "Address": "Manhattan, NY" - } - ``` - -- FreeMarker Tamplate - ```json - { - "Name": "${payload.first_name} ${payload.last_name}", - "Age": "${payload.age}", - "Address": "${payload.location.city}, ${payload.location.state.code}" - } - ``` - -- Synapse Code - ```xml - - - - - - - ``` - -### Handling Arrays - -#### XML Arrays - -This example shows how to loop through an XML array in the input payload and then transform the data using a freemarker template. - -- Input Payload - ```xml - - - 1 - Veronika - Lacroux - - - 2 - Trescha - Campaigne - - - 3 - Mayor - Moscrop - - - ``` - -- Output Payload - ```xml - - - 1 - Veronika Lacroux - - - 2 - Trescha Campaigne - - - 3 - Mayor Moscrop - - - ``` - - Note that, we have looped through the person list in the input XML, and received a person list in the output. However, the name attribute in the output is a combination of the first_name and last_name attributes from the input. - -- FreeMarker Tamplate - ```xml - - <#list payload.people.person as person> - - ${person.id} - ${person.first_name} ${person.last_name} - - - - ``` - - In this FreeMarker template, we are using the list directive. This is used to loop through a list in the input and transform it into another structure in the output. You can get more information about the list directive from [FreeMarker documentation](https://freemarker.apache.org/docs/ref_directive_list.html). - -- Synapse Code - ```xml - - - - <#list payload.people.person as person> - - ${person.id} - ${person.first_name} ${person.last_name} - - - ]]> - - - - ``` - -#### JSON Arrays - -This example shows how to loop through a JSON array in the input payload and then transform the data using a freemarker template. - -- Input Payload - ```json - [{ - "id": 1, - "first_name": "Veronika", - "last_name": "Lacroux" - }, { - "id": 2, - "first_name": "Trescha", - "last_name": "Campaigne" - }, { - "id": 3, - "first_name": "Mayor", - "last_name": "Moscrop" - }] - ``` - -- FreeMarker Tamplate - ```xml - - <#list payload as person> - - ${person.id} - ${person.first_name} ${person.last_name} - - - - ``` - - As you can see here, it is almost the same as the XML list. You have to use an identical syntax to loop through a JSON array. - -- Synapse Code - ```xml - - - - <#list payload as person> - - ${person.id} - ${person.first_name} ${person.last_name} - - - ]]> - - - - ``` - -### Generating CSV Payloads - -Using FreeMarker templates, it is straightforward to generate text payloads. The payload you generate could be plain text, a CSV, or EDI, and any other text related format. In this example, we are showing how to transform an XML payload into a CSV payload. - -- Input Payload - ```xml - - - 1 - Veronika - Lacroux - - - 2 - Trescha - Campaigne - - - 3 - Mayor - Moscrop - - - ``` - -- Output Payload - ``` - ID,First Name, Last Name - 1,Veronika,Lacroux - 2,Trescha,Campaigne - 3,Mayor,Moscrop - ``` - - In this output, we have converted the person list in the XML payload into a CSV payload. 
- -- FreeMarker Tamplate - ``` - ID,First Name, Last Name - <#list payload.people.person as person> - ${person.id},${person.first_name},${person.last_name} - - ``` - - In this template, we define the CSV structure and fill it by looping through the payload list. If the input payload is JSON, there will not be a significant difference in this template. See the example on [Handling Arrays](#handling-arrays) to understand the difference between JSON and XML array traversing. - -- Synapse Code - ```xml - - - ${person.id},${person.first_name},${person.last_name} - ]]> - - - - ``` - - If you don’t know the CSV column names and the number of columns, you can use a FreeMarker template like the following to generate a CSV for the given XML. - - ```xml - <#list payload.people.person[0]?children?filter(c -> c?node_type == 'element') as c>${c?node_name}<#sep>, - <#list payload.people.person as person> - <#list person?children?filter(c -> c?node_type == 'element') as c>${c}<#sep>, - - ``` -### XML to EDI Transformation - -This example shows how an XML payload can be converted to an EDI format using a freemarker template. In this example, we have referenced the freemarker template as a registry resource. -See the instructions on how to [build and run](#build-and-run) this example. - -=== "XMLtoEDI - Proxy" - ```xml - - - - - - - - - - - - - - - ``` - -=== "template.ftl - Registry Resource" - ```injectedfreemarker - <#-- Assign * as element separator --> - <#assign element_separator="*"> - <#-- Assign ! as segment terminator --> - <#assign segment_terminator="!"> - <#-- Interchange Control Header --> - ISA${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Authorization_Information_Qualifier}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Authorization_Information}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Security_Information_Qualifier}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Security_Information}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_ID_Qualifier[0]}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Sender_ID}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_ID_Qualifier[1]}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Receiver_ID}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Date}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Time}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Control_Standards_ID}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Control_Version_Nbr}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Interchange_Control_Number}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Acknowledgment_Request}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Test_Indicator}${element_separator}${payload.UniversalTransaction.Interchange_Control_Header.Subelement_Separator}${segment_terminator} - <#-- Functional_Group_Header --> - 
GS${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Functional_Identifier_Code}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Application_Senders_Code}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Application_Receivers_Code}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Date}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Time}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Group_Control_Number}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Responsible_Agency_Code}${element_separator}${payload.UniversalTransaction.Functional_Group_Header.Industry_ID}${segment_terminator} - <#-- Transaction_Set_Header --> - ST${element_separator}${payload.UniversalTransaction.Transaction_Set_Header.Transaction_Set_Identifier_Code}${element_separator}${payload.UniversalTransaction.Transaction_Set_Header.Transaction_Set_Control_Number}${segment_terminator} - <#-- Begin_Invoice --> - BIG${element_separator}${payload.UniversalTransaction.Begin_Invoice.Invoice_Date}${element_separator}${payload.UniversalTransaction.Begin_Invoice.Invoice_Number}${element_separator}${payload.UniversalTransaction.Begin_Invoice.PO_Date[0]!''}${element_separator}${payload.UniversalTransaction.Begin_Invoice.PO_Number}${element_separator}${payload.UniversalTransaction.Begin_Invoice.Release_Number[0]!''}${element_separator}${payload.UniversalTransaction.Begin_Invoice.Changed_Order_Sequence[0]!''}${element_separator}${payload.UniversalTransaction.Begin_Invoice.Transaction_Type_Code[0]!''}${segment_terminator} - <#-- Currency --> - CUR${element_separator}${payload.UniversalTransaction.Currency.Entity_Identifier_Code}${element_separator}${payload.UniversalTransaction.Currency.Currency_Code}${segment_terminator} - <#-- Reference_Identification --> - <#list payload.UniversalTransaction.Reference_Identification as ref> - <#assign REF="REF${element_separator}${ref.Reference_Identification_Qualifier[0]!''}${element_separator}${ref.Reference_Identification[0]!''}${segment_terminator}"> - ${REF} - - <#-- Name --> - <#list payload.UniversalTransaction.Name as name> - <#assign N1="N1${element_separator}${name.Entity_Identifier_Code[0]!''}${element_separator}${name.Name[0]!''}${segment_terminator}"> - ${N1} - - <#-- Total --> - TDS${element_separator}${payload.UniversalTransaction.Total_invoice_amount}${segment_terminator} - <#-- Service, Promotion, Allowance, or Charge Information --> - <#list payload.UniversalTransaction.SAC_Information as sac> - <#assign SAC="SAC${element_separator}${sac.Allowance_or_Charge_Indicator[0]!''}${element_separator}${sac.Service_or_Charge_Code[0]!''}${element_separator}${sac.SAC_03[0]!''}${element_separator}${sac.SAC_04[0]!''}${element_separator}${sac.Amount[0]!''}${element_separator}${sac.Description[0]!''}${segment_terminator}"> - ${SAC} - - <#-- Transaction_Set_Trailer --> - SE${element_separator}${payload.UniversalTransaction.Transaction_Set_Trailer.Number_of_Included_Segments}${element_separator}${payload.UniversalTransaction.Transaction_Set_Trailer.Transaction_Set_Control_Number}${segment_terminator} - <#-- Functional_Group_Trailer --> - GE${element_separator}${payload.UniversalTransaction.Functional_Group_Trailer.Number_of_Transaction_Sets_Incl}${element_separator}${payload.UniversalTransaction.Functional_Group_Trailer.Group_Control_Number}${segment_terminator} - <#-- Interchange_Control_Trailer --> - 
IEA${element_separator}${payload.UniversalTransaction.Interchange_Control_Trailer.Nbr_of_Included_Functional_Groups}${element_separator}${payload.UniversalTransaction.Interchange_Control_Trailer.Interchange_Control_Number}${segment_terminator} - ``` - -=== "Request Payload" - ```xml - - - 00 - - 00 - - ZZ - XXXXXXXXX - 01 - 834469876 - 200221 - 1946 - U - 00401 - 100015519 - 1 - P - > - - - IN - XXXXXXXXX - 834469876 - 20200221 - - 100014444 - X - 004010 - - - 810 - 100014444 - - - 20200221 - E064784444 - - X1055555 - - - - - - BY - USD - - - BM - 999749873334 - - - CN - G0205016 - - - CN2 - G0305017 - - 8550 - - C - D500 - ZZ - HDLG - 800 - HANDLING - - - 15 - 100015519 - - - 1 - 100015511 - - - 1 - 100015511 - - - ``` - -#### Build and run - -1. [Set up WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -2. [Create an integration project]({{base_path}}/integrate/develop/create-integration-project) with an ESB Configs module and an Composite Exporter. -3. Create the artifacts (proxy service, registry resource) with the configurations given above. -4. [Deploy the artifacts]({{base_path}}/integrate/develop/deploy-artifacts) in your Micro Integrator. -5. Send a POST request to the `xml-to-edi-proxy` with the above given payload. - -- Output Payload - ```text - ISA*00**00**ZZ*XXXXXXXXX*01*834469876*200221*1946*U*00401*100015519*1*P*>! - GS*IN*XXXXXXXXX*834469876*20200221*1946*100014444*X*004010! - ST*810*100014444! - BIG*20200221*E064784444**X1055555***! - CUR*BY*USD! - REF*BM*999749873334! - N1*CN*G0205016! - N1*CN2*G0305017! - TDS*8550! - SAC*C*D500*ZZ*HDLG*800*HANDLING! - SE*15*100015519! - GE*1*100015511! - IEA*1*100015511! - ``` - -### Accessing Properties - -This example shows how to access properties using the following variables: `ctx`, `axis2`, and `trp`. - -- FreeMarker Tamplate - ```json - { - "ctx property" : "${ctx.user_name}", - "axis2 property": "${axis2.REST_URL_POSTFIX}", - "trp property": "${trp.Host}" - } - ``` - - In this freemarker template, we have referenced the default scoped property named `user_name`, the axis2 scoped property named `REST_URL_POSTFIX`, and the transport header `Host`. The output is returned as a JSON object. - -- Output Payload - ```json - { - "ctx property": "john", - "axis2 property": "/demo", - "trp property": "localhost:8290" - } - ``` - -- Synapse Code - ```xml - - - - - - - ``` - -### Accessing Arguments - -This example shows how to use arguments in a freemarker template to pass values to the variables in the payload. - -- FreeMarker Tamplate - ```json - { - "argument one": "${args.arg1}", - "argument two": "${args.arg2}" - } - ``` - -- Output Payload - ```json - { - "argument one": "Value One", - "argument two": 500 - } - ``` - -- Synapse Code - ```xml - - - - - - - - - ``` - -In this example, the value for the “argument one” key is replaced by the first argument value. The argument for the "argument two" key is replaced by the second argument value. - -### Handling optional values - -Some of the input parameters you specify in the FreeMarker template (payload, properties, and arguments) may be optional. This -means that the value can be null or empty during runtime. It is important to handle optional parameters in the FreeMarker template to avoid runtime issues due to null or empty values. FreeMarker -[documentation](https://freemarker.apache.org/docs/dgui_template_exp.html#dgui_template_exp_missing) -describes methods for handling optional parameters properly. 
The following example shows how to handle optional values in a -FreeMarker template by using the **Default value operator** described in the FreeMarker documentation. - -- Input Payload - ```json - { - "first_name": "John", - "age": 35 - } - ``` -- FreeMarker Tamplate - ``` - { - "Name": "${payload.first_name} ${payload.last_name ! "" }", - "Age": ${payload.age} - } - ``` - -- Output Payload - ```json - { - "Name": "John ", - "Age": 35 - } - ``` - -- Synapse Code - ```xml - - - - - ``` - -In this example, The FreeMarker template is expecting a property named `last_name` from the input payload. However, the -payload does not contain that property. To handle that, the -`${payload.last_name ! "" }` syntax is used in the template. This syntax replaces the `last_name` value with an empty -string if it is not present in the input payload. diff --git a/en/docs/reference/mediators/property-group-mediator.md b/en/docs/reference/mediators/property-group-mediator.md deleted file mode 100644 index 9725466631..0000000000 --- a/en/docs/reference/mediators/property-group-mediator.md +++ /dev/null @@ -1,67 +0,0 @@ -# Property Group Mediator - -The Property Group Mediator is similar to the [Property Mediator]({{base_path}}/reference/mediators/property-mediator). It sets or removes properties on the message context flowing through synapse. However, unlike the Property mediator, the Property Group mediator handles multiple properties as a -group. You can select the property action (i.e., whether the property -must be added to or removed from the message context) for each -individual property. Therefore, in a scenario where you need to -set/remove multiple properties, you can add a single Property Group -Mediator configuration instead of multiple Property Mediator -configurations. - -!!! Info - The Property Group mediator is a [conditionally content aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - -## Syntax - -``` - - - - - ........ - -``` - -## Configuration - -The Property Group Mediator configuration includes a description and a set of properties grouped together. - -In the source view, multiple Property Mediator configurations are -enclosed within the ` ` element. You can -also add a description in the opening element. (e.g., ``). - -In the design view, you can configure the Property Group Mediator as -follows: - -- Enter a meaningful description for the property group in the - **Description** field. -- To add a new property, click the **Add a new element** icon. - ![]({{base_path}}/assets/img/integrate/mediators/119134127/119134143.png) - As a result, the **Property Mediator** dialog box opens. Here, you - can select a predefined property from the list or configure a custom - property. -- To remove a property, click the **Delete selected element(s)** - icon. - ![]({{base_path}}/assets/img/integrate/mediators/119134127/119134161.png) -- To arrange the properties in the required order within the property - group configuration, you can select any property and then click the - following icons to move it up/down the list. - ![]({{base_path}}/assets/img/integrate/mediators/119134127/119134166.png) - ![]({{base_path}}/assets/img/integrate/mediators/119134127/119134167.png) - -## Example - -The following Property Group Mediator configuration adds the -` From ` , ` Message ` , and -` To ` properties to the message context. It also removes -the ` MessageID ` property from the context. All four -properties are handled together as a group. 
- -``` xml - - - - - - -``` diff --git a/en/docs/reference/mediators/property-mediator.md b/en/docs/reference/mediators/property-mediator.md deleted file mode 100644 index 047819ffc3..0000000000 --- a/en/docs/reference/mediators/property-mediator.md +++ /dev/null @@ -1,323 +0,0 @@ -# Property Mediator - -The **Property Mediator** has no direct impact on the message, but rather on the message context flowing through Synapse. You can retrieve -the properties set on a message later through the Synapse XPath Variables or the `get-property()` extension function. A property can have a defined scope for which it is valid. If a property has no defined scope, it defaults to the Synapse message context scope. Using the property element with the **action** specified as `remove`, you can remove any existing message context properties. - -!!! Info - The Property mediator is a [conditionally content aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator. - -## Syntax - -``` - - ? - -``` - -## Configuration - -The parameters available for configuring the Property mediator are as follows: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter Name | Description |
|----------------|-------------|
| **Name** | A name for the property. You can provide a static value or a dynamic value for the property name. A dynamic property name can be retrieved by using an XPath function. You can use any of the XPath functions that you use for the property value or property expression. Note that the XPath function should be contained within curly brackets ({}) as well as double quotations (""), e.g., `property name="{get-property('propertyName')}"`, `property name="{$ctx:propertyName}"`, or `property name="{json-eval({$ctx:propertyName})}"`. For the names of the generic properties that come by default, see Generic Properties. You can also select them from the drop-down list when adding the Property Mediator in the design view. |
| **Action** | The action to be performed for the property. **Set**: the property will be set in the message context. **Remove**: the property will be removed from the message context. |
| **Set Action As** | The possible values for this parameter are as follows. **Value**: a static value is considered as the property value, and this value should be entered in the **Value** parameter. **Expression**: the property value is determined during mediation by evaluating an expression, which should be entered in the **Expression** parameter. |
| **Type** | The data type for the property. The Property mediator will handle the property as a property of the selected type. Available values are STRING (the default), INTEGER, BOOLEAN, DOUBLE, FLOAT, LONG, SHORT, OM, and JSON. The OM type is used to set XML property values on the message context. This is useful when the expression associated with the Property mediator evaluates to an XML node during mediation; when the OM type is used, the XML is converted to an AXIOM OMElement before it is assigned to the property. The JSON type is used to set JSON values on the message context. It is recommended to use the JSON data type (rather than the STRING data type) for JSON payloads. Note that when the JSON is just a string, you need to add quotes around it, due to restrictions in the RFC. For example, you can create a property with a JSON string by giving the value, or create a property with a JSON object via expression evaluation. |
| **Value** | If the **Value** option is selected for the **Set Action As** parameter, the property value should be entered as a constant in this parameter. |
| **Expression** | If the **Expression** option is selected for the **Set Action As** parameter, the expression that determines the property value should be entered in this parameter. This expression can be an XPath expression or a JSONPath expression. When specifying a JSONPath, use the format `json-eval(<JSON_PATH>)`, such as `json-eval(getQuote.request.symbol)`. In both XPath and JSONPath expressions, you can return the value of another property by calling `get-property(property-name)`. For example, you might create a property called `JSON_PATH` of which the value is `json-eval(pizza.toppings)`, and then you could create another property called `JSON_PRINT` of which the value is `get-property('JSON_PATH')`, allowing you to use the value of the `JSON_PATH` property in the `JSON_PRINT` property. |
| **Pattern** | This parameter is used to enter a regular expression that will be evaluated against the value of the property or the result of the XPath/JSONPath expression. |
| **Group** | The number (index) of the matching item evaluated using the regular expression entered in the **Pattern** parameter. |
| **Scope** | The scope at which the property will be set or removed. Possible values are as follows. **Synapse**: the default scope; properties set in this scope last as long as the transaction (request-response) exists. **Transport**: properties set in this scope are considered transport headers, e.g., if an HTTP header named 'CustomHeader' should be sent with an outgoing request, you can use the Property mediator configuration with this scope. **Axis2**: properties set in this scope have a shorter life span than those set in the Synapse scope; they are mainly used for passing parameters to the underlying Axis2 engine. **axis2-client**: similar to the Synapse scope, except that it can be accessed inside the mediate() method of a custom mediator created using the Class mediator. **Operation**: retrieves a property at the operation context level. **Registry**: retrieves properties within the registry. **System**: retrieves Java system properties. **Environment**: retrieves environment variables ('env'). **File**: retrieves properties defined in the `file.properties` configuration file ('file'). For a detailed explanation of each scope, see Accessing Properties with XPath. |

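The **Pattern** and **Group** parameters are easiest to understand with a small example. The following sketch is illustrative only: the `email` property and the regular expression are assumptions made for this example, not part of the original documentation.

```xml
<!-- Illustrative sketch: 'email' is a hypothetical property set earlier in the sequence. -->
<!-- The regular expression in 'pattern' is matched against the expression result, and    -->
<!-- 'group' selects the first capturing group (the text after '@').                      -->
<property name="emailDomain"
          expression="get-property('email')"
          pattern="@(.*)"
          group="1"/>
```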
    - -!!! Note - There are predefined XPath variables (such as `$ctx` ) that you can directly use in the Synapse configuration, instead of using the synapse:get-property() function. These XPath variables get properties of various scopes and have better performance than the `get-property()` function, which can have much lower performance because it does a registry lookup. These XPath variables get properties of various scopes. For more information on these XPath variables, see [Accessing Properties with XPath]({{base_path}}/reference/mediators/property-reference/accessing-properties-with-xpath). - -## Examples - -### Setting and logging and property - -In this example, we are setting the property symbol and later we can log it using the [Log Mediator]({{base_path}}/reference/mediators/log-Mediator). - -```xml - - - - - -``` - -### Sending a fault message based on the Accept http header - -In this configuration, a response is sent to the client based on the ` Accept ` header. The [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) transforms the message contents. Then a [Property mediator]({{base_path}}/reference/mediators/property-mediator) sets the message type -based on the `Accept` header using the `$ctx:accept` expression. The message is then sent back to the client via the [Respond mediator]({{base_path}}/reference/mediators/respond-mediator). - -``` xml - - - - - Error - - - - - - -``` - -### Reading a property stored in the Registry - -You can read a property that is stored in the Registry by using the -` get-property() ` method in your Synapse configuration. -For example, the following Synapse configuration retrieves the -` abc ` property of the collection -` gov:/data/xml/collectionx ` , and stores it in the -` regProperty ` property. - -``` xml - -``` - -!!! Info - You can use the following syntax to read properties or resources stored in the `gov` or `conf` Registries. When specifying the path to the resource, do not give the absolute path. Instead, use the `gov` or `conf` prefixes. - -#### Reading a property stored under a collection - -- ` get-property('registry','gov:@') ` -- ` get-property('registry','conf:@') ` - -#### Reading a property stored under a resource - -- ` get-property('registry','gov:/@') ` -- ` get-property('registry','conf:/@') ` - -#### Reading an XML resource - -- ` get-property('registry','gov:') ` -- ` get-property('registry','conf:') ` - - -### Reading a file stored in the Registry - -Following is an example, in which you read an XML file that is stored in the registry using XPath, to retrieve a value from it. Assume you have -the following XML file stored in the Registry (i.e., ` gov:/test.xml ` ). - -**test.xml** - -```xml - - A Song of Ice and Fire - George R. R. Martin - -``` - -Your Synapse configuration should be as follows. This uses XPath to read XML. - -**reg_xpath.xml** - -``` xml - - - - -``` - -Your output log will look like this. - -``` text -[2015-09-21 16:01:28,750] INFO - LogMediator Book_Name = A Song of Ice and Fire -``` - -### Reading SOAP headers - -SOAP headers provide information about the message, such as the To and From values. You can use the ` get-property() ` function of the Property mediator to retrieve these headers. You can also add Custom SOAP Headers using the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) and the [Script Mediator]({{base_path}}/reference/mediators/script-mediator). 
- -#### To - -| Property | Description | -|---------------------|-------------------------------| -| **Header Name** | To | -| **Possible Values** | Any URI | -| **Description** | The To header of the message. | -| **Example** | get-property("To") | - -#### From - -| Property | Description | -|---------------------|---------------------------------| -| **Header Name** | From | -| **Possible Values** | Any URI | -| **Description** | The From header of the message. | -| **Example** | get-property("From") | - -#### Action - -| Property | Description | -|---------------------|---------------------------------------| -| **Header Name** | Action | -| **Possible Values** | Any URI | -| **Description** | The SOAPAction header of the message. | -| **Example** | get-property("Action") | - -#### ReplyTo - -| Property | Description | -|---------------------|--------------------------------------------| -| **Header Name** | ReplyTo | -| **Possible Values** | Any URI | -| **Description** | The ReplyTo header of the message. | -| **Example** |
 get-property("ReplyTo") |

#### MessageID

| Property | Description |
|---------------------|-----------------------------------------------------------------------------------------------------------------|
| **Header Name** | MessageID |
| **Possible Values** | UUID |
| **Description** | The unique message ID of the message. It is not recommended to make alterations to this property of a message. |
| **Example** | get-property("MessageID") |

#### RelatesTo

| Property | Description |
|---------------------|----------------------------------------------------------------------------------------------------------------|
| **Header Name** | RelatesTo |
| **Possible Values** | UUID |
| **Description** | The unique ID of the request to which the current message is related. It is not recommended to make changes. |
| **Example** | get-property("RelatesTo") |

#### FaultTo
| Property | Description |
|---------------------|------------------------------------|
| **Header Name** | FaultTo |
| **Possible Values** | Any URI |
| **Description** | The FaultTo header of the message. |
| **Example** | `<header name="FaultTo" value="http://localhost:9000"/>` |
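As a quick usage sketch, the headers listed above can be read with `get-property()` and logged, for example from an in sequence. The log property names below are arbitrary choices for illustration:

```xml
<log level="custom">
    <!-- Read the addressing headers of the current message -->
    <property name="soapTo" expression="get-property('To')"/>
    <property name="soapAction" expression="get-property('Action')"/>
    <property name="soapMessageId" expression="get-property('MessageID')"/>
</log>
```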
    diff --git a/en/docs/reference/mediators/property-reference/accessing-properties-with-xpath.md b/en/docs/reference/mediators/property-reference/accessing-properties-with-xpath.md deleted file mode 100644 index 5b3faaa80d..0000000000 --- a/en/docs/reference/mediators/property-reference/accessing-properties-with-xpath.md +++ /dev/null @@ -1,634 +0,0 @@ -# Accessing Properties with XPath - -The WSO2 Micro Integrator supports standard XPath functions and variables through its underlying XPath engine. It supports XPath 1.0 by default where as the support for XPath 2.0 can be introduced by adding the following property in /conf/deployment.toml. - -```toml -[mediation] -synapse.enable_xpath_dom_failover=true -``` - -The Micro Integrator also provides custom XPath functions and variables for accessing message properties. - -## XPath Extension Functions - -In addition to standard XPath functions, the Micro Integrator supports the following custom functions for -working with XPath expressions: - -### base64Encode() function - -The base64Encode function returns the base64-encoded value of the -specified string. - -Syntax: - -- ` base64Encode(string value) ` -- ` base64Encode(string value, string charset) ` - ` ` - -### base64Decode() function - -The base64Decode function returns the original value of the specified -base64-encoded value. - -Syntax: - -- ` base64Decode(string encodedValue) ` -- ` base64Decode(string encodedValue , string charset) ` - ` ` - -### get-property() function - -The `get-property()` function allows any XPath expression used in a configuration to look up information from the current message context. Using the [Property mediator]({{base_path}}/reference/mediators/property-Mediator), you can retrieve properties from the message context and header. - -The syntax of the function takes the following format. - -- ` get-property(String propertyName) ` -- ` get-property(String scope, String propertyName) ` - -The function accepts scope as an optional parameter. It retrieves a -message property at the given scope, which can be one of the following. - -If you provide only the property name without the scope, the default s -` ynapse ` scope will be used. - -!!! Info - When the result of an XPath evaluation results in a single XML node, the - evaluator will return the text content of this node by default - (equivalent of doing /root/body/node/text()). If you want to retrieve - the node itself, you can configure the [Enrich mediator]({{base_path}}/reference/mediators/enrich-Mediator) as shown - in the following example. - ``` xml - - - - - - - - - - - - - - - - - - - - - ``` - -#### Synapse scope - -When the scope of a property mediator is ` synapse ` , -its value is available throughout both the in sequence and the out -sequence. In addition to the user-defined properties, you can retrieve -the following special properties from the ` synapse ` -scope. - -| | | -|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Name | Return Value | -| To | Incoming URL as a String, or empty string («») if a To address is not defined. | -| From | From address as a String, or empty string («») if a From address is not defined. | -| Action | SOAP Addressing Action header value as a String, or empty string («») if an Action is not defined. | -| FaultTo | SOAP FaultTo header value as a String, or empty string («») if a FaultTo address is not defined. 
| -| ReplyTo | ReplyTo header value as a String, or empty string («») if a ReplyTo address is not defined. | -| MessageID | A unique identifier (UUID) for the message as a String, or empty string («») if a MessageID is not defined. This ID is guaranteed to be unique. | -| FAULT | TRUE if the message has a fault, or empty string if the message does not have a fault. | -| MESSAGE_FORMAT | Returns pox, get, soap11, or soap12 depending on the message. If a message type is unknown this returns soap12 | -| OperationName | Operation name corresponding to the message. A proxy service with a WSDL can have different operations. If the WSDL is not defined, ESB defines fixed operations. | - - -To access a property with the ` synapses ` cope inside the ` mediate() ` method of a mediator, you can include the following configuration in a custom mediator created using the [Class mediator]({{base_path}}/reference/mediators/class-Mediator): - -``` java -public boolean mediate(org.apache.synapse.MessageContext mc) { -// Available in both in-sequence and out-sequenc -String propValue = (String) mc.getProperty("PropName"); -System.out.println("SCOPE_SYNAPSE : " + propValue); -return true; -} -``` - -#### axis2 scope - -When the scope of a property mediator is ` axis2 ` , its -value is available only throughout the sequence for which the property -is defined (e.g., if you add the property to an in sequence, its value -will be available only throughout the in sequence). You can retrieve -message context properties within the ` axis2 ` scope -using the following syntax. - -Syntax: -` get-property('axis2', String propertyName) ` - -To access a property with the `axis2` scope inside the `mediate()` method of a mediator, you can include the following configuration in a custom mediator created using the [Class mediator]({{base_path}}/reference/mediators/class-Mediator): - -``` java -public boolean mediate(org.apache.synapse.MessageContext mc) { -org.apache.axis2.context.MessageContext axis2MsgContext; -axis2MsgContext = ((Axis2MessageContext) mc).getAxis2MessageContext(); - -// Available only in the sequence the property is defined. -String propValue = (String) axis2MsgContext.getProperty("PropName"); -System.out.println("SCOPE_AXIS2 : " + propValue); -return true; -} -``` - -#### axis2-client - -This is similar to the ` synapse ` -scope. The difference is that it can be accessed inside the -` mediate() ` method of a mediator by including one of -the following configurations in a custom mediator, created using the -[Class mediator]({{base_path}}/reference/mediators/class-Mediator) : - -``` java -public boolean mediate(org.apache.synapse.MessageContext mc) { -org.apache.axis2.context.MessageContext axis2MsgContext; -axis2MsgContext = ((Axis2MessageContext) mc).getAxis2MessageContext(); -String propValue = (String) axis2MsgContext.getProperty("PropName"); -System.out.println("SCOPE_AXIS2_CLIENT - 1 : " + propValue); -``` - -or - -``` java -propValue = (String) axis2MsgContext.getOptions().getProperty("PropName"); -System.out.println("SCOPE_AXIS2_CLIENT - 2: " + propValue); -return true; -} -``` - -#### transport scope - -When the scope of a property mediator is ` transport ` , -it will be added to the transport header of the outgoing message from -the ESB profile. You can retrieve message context properties within the -` transport ` scope using the following syntax. 
- -Syntax: -`get-property('transport', String propertyName) ` - -#### registry scope - -You can retrieve properties within the registry using the following syntax. - -Syntax: -`get-property('registry', String registryPath@propertyName)` -`get-property('registry', String registryPath)` - -#### system scope - -You can retrieve Java System properties using the following syntax. - -Syntax: -`get-property('system', String propertyName)` - -#### environment scope - -You can retrieve environment variables using the following syntax. - -Syntax: -`get-property('env', String propertyName)` - -#### file scope - -You can retrieve properties defined in the `file.properties` configuration file using the following syntax. - -Syntax: -`get-property('file', String propertyName)` - -#### operation scope - -You can retrieve a property in the operation context level from the -` operation ` scope. The properties within -iterated/cloned message with the ` operation ` scope are -preserved in the in sequence even if you have configured your API -resources to be sent through the fault sequence when faults exist. A -given property with the ` operation ` scope only exists -in a single request and can be accessed by a single resource. The -properties in this scope are passed to the error handler when the -` FORCE_ERROR_ON_SOAP_FAULT ` property is set to -` true ` . See `FORCE_ERROR_ON_SOAP_FAULT` section in [Generic Properties]({{base_path}}/reference/mediators/property-reference/generic-Properties) for more information. - -Syntax: -` get-property('operation', String propertyName) ` - -### url-encode() function - -The url-encode function returns the URL-encoded value of the specified -string. - -Syntax: - -- url-encode(string value) -- url-encode(string value, string charset) - -## Synapse XPath Variables - -There is a set of predefined XPath variables that you can directly use -to write XPaths in the Synapse configuration, instead of using the -synapse:get-property() function . These XPath variables get properties -of various scopes as follows: - -### $body - -The SOAP 1.1 or 1.2 body element. For example, the expression **$body//getQuote** refers to the first **getQuote** element in a SOAP body, regardless of whether the message is SOAP-11 or SOAP-12. We have discussed an example below. - -**Example of $body usage**: - -1. Deploy the following proxy service using instructions in [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - Note the property, ` ` in the configuration. It is used to log the first ` ` element of the request SOAP body. - - ``` - - - - - - - - -
    - - - - - - - - - - ``` - -2. Send the following StockQuote request: - - ``` xml - ant stockquote -Daddurl=http://localhost:8280/services/StockQuoteProxy - ``` - -3. Note the following message in the log. - - ``` java - [2013-03-18 14:04:41,019] INFO - LogMediator To: /services/StockQuoteProxy, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:930f68f5-199a-4eff-90d2-ea679c2362ab, Direction: request, stockprop = IBM - ``` - -### $header - -The SOAP 1.1 or 1.2 header element. For example, the expression -**$header/wsa:To** refers to the addressing **To** header regardless of -whether this message is SOAP-11 or SOAP-12. We have discussed an example -below. - -**Example of $header usage** : - -1. Deploy the following proxy service using instructions in [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - Note the property, ` ` in the configuration. It is used to log the value of **wsa:To** - header of the SOAP request. - - ``` - - - - - - - - -
    - - - - - - - - - - ``` - -2. Send the following StockQuote request: - - ``` xml - ant stockquote -Daddurl=http://localhost:8280/services/StockQuoteProxy - ``` - -3. Note the following message in the log. - - ``` java - [2013-03-18 14:14:16,356] INFO - LogMediator To: http://localhost:9000/services/SimpleStockQuoteService, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:8a64c9cb-b82f-4d6f-a45d-bef37f8b664a, Direction: request, - stockprop = http://localhost:9000/services/SimpleStockQuoteService - ``` - -### $axis2 - -Prefix for Axis2 MessageContext properties. This is used to get the -property value at the axis2 scope. For example, to get the value of -Axis2 message context property with name REST_URL_POSTFIX, use the -XPath expression **$axis2:REST_URL_POSTFIX**. We have discussed an -example below. - -**Example of $axis2 usage** : - -1. Deploy the following proxy service. For instructions, see [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - Note the property, ` ` in the configuration which is used to log the REST_URL_POSTFIX - value of the request message. - - ``` - - - - - - - - -
    - - - - - - - - - - ``` - -2. Send the following StockQuote request: - - ``` xml - ant stockquote -Daddurl=http://localhost:8280/services/StockQuoteProxy/test/prefix - ``` - -3. Note the following message in the log. - - ``` java - INFO - LogMediator To: http://localhost:8280/services/StockQuoteProxy/test/prefix, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:ecd228c5-106a-4448-9c83-3b1e957e2fe5, Direction: request, stockprop = /test/prefix - ``` - -In this example, the property definition, -` ` -is equivalent to -` ` - -Similarly, you can use $axis2 prefix with [HTTP Transport Properties](http-transport-properties.md). - -### $ctx - -Prefix for Synapse MessageContext properties and gets a property at the default scope. For example, to get the value of Synapse message context property with name ERROR_MESSAGE, use the XPath expression **$ctx:ERROR_MESSAGE**. We have discussed an example below. - -**Example of $ctx usage**: - -This example sends a request to a sample proxy service, and sets the -target endpoint to a non-existent endpoint reference key. It causes a -mediation fault, which triggers the fault sequence. - -1. Deploy the following proxy service. For instructions, see [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - Note the property, `` in the fault sequence configuration. It is used to log the error message that occurs due to a  mediation fault. - - ``` - - - - - - - - - - - - - - - - - - - - ``` - -2. Send the following StockQuote request: - - ``` - ant stockquote -Dtrpurl=http://localhost:8280/services/StockQuoteProxy - ``` - -3. Note the following message in the log. - - ``` java - INFO - LogMediator To: /services/StockQuoteProxy, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:54205f7d-359b-4e82-9099-0f8e3bf9d014, Direction: request, stockerrorprop = Couldn't find the endpoint with the key : ep2 - ``` - -In this example, the property definition, \ is equivalent -to \. - -Similarly, you can use $ctx prefix with [Generic Properties]({{base_path}}/reference/property-reference/generic-Properties) . - -### $trp - -Prefix used to get the transport headers. For example, to get the -transport header named Content-Type of the current message, use the -XPath expression **$trp:Content-Type** . HTTP transport headers are not -case sensitive. Therefore, $trp:Content-Type and $trp:CONTENT-TYPE are -regarded as the same. We have discussed an example below. - -**Example of $trp usage:** - -1. Deploy the following proxy service. For instructions, see [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - Note the property, \ in the configuration, which is - used to log the Content-Type HTTP header of the request message. - - ``` - - - - - - - - -
    - - - - - - - - - - ``` - -2. Send the following StockQuote request: - - ``` - ant stockquote -Daddurl=http://localhost:8280/services/StockQuoteProxy - ``` - -3. Note the following message in the log. - - ``` java - [2013-03-18 12:23:14,101] INFO - LogMediator To: http://localhost:8280/services/StockQuoteProxy, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:25a3143a-5b18-4cbb-b8e4-27d4dd1895d2, Direction: request, stockprop = text/xml; charset=UTF-8 - ``` - -In this example, the property definition, \ is equivalent to \. Similarly, you -can use $trp prefix with [HTTP Transport -Properties](_HTTP_Transport_Properties_) . - -### $url - -The prefix used to get the URI element of a request URL. - -**Example of $url usage:** - -1. Create a REST API with the following configuration using instructions given in page [Working with APIs]({{base_path}}/develop/creating-artifacts/creating-an-api). - - ``` xml - - - - - - - - - - - - ``` - -2. Send a request to the REST API you created using a browser as - follows: - - ``` xml - http://10.100.5.73:8280/editing/edit?a=wso2&b=2.4 - ``` - - You will see the following in the log: - - ``` xml - LogMediator To: /editing/edit?a=wso2&b=2.4, MessageID: urn:uuid:36cb5ad7-f150-490d-897a-ee7b86a9307d, Direction: request, SYMBOL = wso2, VALUE = 2.4, Envelope: - ``` - -### $func - -The prefix used to refer to a particular parameter value passed -externally by an invoker such as the [Call Template -Mediator](_Call_Template_Mediator_) . - -**Example of $func usage:** - -1. Add a sequence template with the following configuration. See [Adding a New Sequence Template]({{base_path}}/develop/creating-artifacts/creating-reusable-sequences) for detailed instructions. - - ``` xml - - ``` - -2. Deploy the following proxy service. For instructions, see [Creating a Proxy Service]({{base_path}}/develop/creating-artifacts/creating-a-proxy-service). - - ``` xml - - - - - - - - - - - - -
    - - - - - ``` - -3. Send the following StockQuote request: - - ``` xml - ant stockquote -Daddurl=http://localhost:8280/services/StockQuoteProxy - ``` - -4. Note the following message in the log. - - ``` xml - LogMediator To: http://localhost:8280/services/StockQuoteProxy, WSAction: urn:getQuote, SOAPAction: urn:getQuote, ReplyTo: http://www.w3.org/2005/08/addressing/anonymous, MessageID: urn:uuid:8d90e21b-b5cc-4a02-98e2-24b324fa704c, Direction: request, message = HelloWorld - ``` - -### $env - -Prefix used to get a SOAP 1.1 or 1.2 envelope level element. For example, to get the body element from the SOAP envelope, use the expression **$env/\*\[local-name()='Body'\]** . - -**Example of $env usage:** - -1. Create an API with the following configuration. For information on how to create an API, see [Working with APIs]({{base_path}}/develop/creating-artifacts/creating-an-api). - - ``` xml - - - - - - - - - - - $1 - - - - - - - - - - - ``` - -2. Send a post request to the API you created (i.e., , with the following json payload using a rest client. - - ``` xml - {"content":{ "paramA": "ValueA", "paramB": "valueB" }} - ``` - - You will receive the following response: - - ``` xml - {"theData":{"item":{"content":{"paramA":"ValueA","paramB":"valueB"}}}} - ``` \ No newline at end of file diff --git a/en/docs/reference/mediators/property-reference/axis2-properties.md b/en/docs/reference/mediators/property-reference/axis2-properties.md deleted file mode 100644 index 2db22ffffd..0000000000 --- a/en/docs/reference/mediators/property-reference/axis2-properties.md +++ /dev/null @@ -1,558 +0,0 @@ -# Axis2 Properties - -!!! Info - The following are Axis2 properties that can be used with the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-Group-Mediator). - -Axis2 properties allow you to configure the web services engine in WSO2 Micro Integrator, such as specifying how to cache JMS objects, setting the minimum and maximum threads for consuming messages, and forcing outgoing HTTP/S messages to use HTTP 1.0. You can access some of these properties by using the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) with the scope set to `axis2` or `axis2-client` as shown below. - -## CacheLevel - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter | Description |
|---------------------|-------------|
| **Name** | CacheLevel |
| **Possible Values** | none, connection, session, consumer, producer, auto |
| **Description** | This property determines which JMS objects should be cached. JMS objects are cached so that they can be reused in subsequent invocations. Each caching level can be described as follows. **none**: no JMS object will be cached. **connection**: JMS connection objects will be cached. **session**: JMS connection and session objects will be cached. **consumer**: JMS connection, session, and consumer objects will be cached. **producer**: JMS connection, session, and producer objects will be cached. **auto**: an appropriate caching level will be used depending on the transaction strategy. |
| **Example** | `<parameter name="transport.jms.CacheLevel">consumer</parameter>` |
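As a usage sketch, `transport.jms.CacheLevel` is typically declared as a service-level parameter of a JMS proxy service. The proxy name and destination below are assumptions made for illustration:

```xml
<proxy name="JMSListenerProxy" transports="jms" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Consume and log the incoming JMS message -->
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
    <!-- Reuse JMS connection, session, and consumer objects across invocations -->
    <parameter name="transport.jms.CacheLevel">consumer</parameter>
    <parameter name="transport.jms.Destination">MyJMSQueue</parameter>
</proxy>
```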
## ConcurrentConsumers
| Parameter | Description |
|---------------------|-------------|
| **Name** | ConcurrentConsumers |
| **Possible Values** | integer |
| **Description** | The minimum number of threads for message consuming. The value specified for this property is the initial number of threads started. As the number of messages to be consumed increases, the number of threads is also increased to match the load until the total number of threads equals the value specified for the `transport.jms.MaxConcurrentConsumers` property. |
| **Example** | `<parameter name="transport.jms.ConcurrentConsumers" locked="false">50</parameter>` |
## HTTP_ETAG
| Parameter | Description |
|---------------------|-------------|
| **Name** | HTTP_ETAG |
| **Possible Values** | true/false |
| **Scope** | axis2 |
| **Description** | This property determines whether an HTTP ETag should be enabled for the request. Note: the HTTP ETag is a mechanism provided by HTTP for web cache validation. |
| **Example** | `<property name="HTTP_ETAG" scope="axis2" type="BOOLEAN" value="true"/>` |
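For context, a minimal in-sequence sketch (an assumption made for illustration, not taken from the original page) that enables ETag generation for the response sent back to the client:

```xml
<inSequence>
    <!-- Generate an ETag header for the response of this request -->
    <property name="HTTP_ETAG" scope="axis2" type="BOOLEAN" value="true"/>
    <respond/>
</inSequence>
```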
## JMS_COORELATION_ID
| Parameter | Description |
|---------------------|-------------|
| **Name** | JMS_COORELATION_ID |
| **Possible Values** | String |
| **Scope** | axis2 |
| **Description** | The JMS correlation ID is used to match responses with specific requests. This property can be used to set the JMS correlation ID as a dynamic or a hard-coded value in a request. As a result, responses with matching JMS correlation IDs will be matched with the request. |
| **Example** | `<property name="JMS_COORELATION_ID" action="set" scope="axis2" expression="$header/wsa:MessageID" xmlns:sam="http://sample.esb.org"/>` |
## MaxConcurrentConsumers
| Parameter | Description |
|---------------------|-------------|
| **Name** | MaxConcurrentConsumers |
| **Possible Values** | integer |
| **Description** | The maximum number of threads that can be added for message consuming. See ConcurrentConsumers. |
| **Example** | `<parameter name="transport.jms.MaxConcurrentConsumers" locked="false">50</parameter>` |
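Because `ConcurrentConsumers` and `MaxConcurrentConsumers` work together, a combined sketch may help. The proxy name, destination, and thread counts below are assumptions made for illustration:

```xml
<proxy name="OrderQueueProxy" transports="jms" startOnLoad="true">
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
    <!-- Start with 5 consumer threads and scale up to 50 as the load grows -->
    <parameter name="transport.jms.ConcurrentConsumers" locked="false">5</parameter>
    <parameter name="transport.jms.MaxConcurrentConsumers" locked="false">50</parameter>
</proxy>
```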
## MercurySequenceKey

| Parameter | Value |
|---------------------|------------------------------------------------------------------|
| **Name** | MercurySequenceKey |
| **Possible Values** | integer |
| **Description** | Can be an identifier specifying a Mercury internal sequence key. |

## MercuryLastMessage

| Parameter | Value |
|---------------------|------------------------------------------------------------------------------------|
| **Name** | MercuryLastMessage |
| **Possible Values** | true/false |
| **Description** | When set to "true", it will make this the last message and terminate the sequence. |

## FORCE_HTTP_1.0

| Parameter | Value |
|---------------------|-------------------------------------------------------------------------------|
| **Name** | FORCE_HTTP_1.0 |
| **Possible Values** | true/false |
| **Scope** | axis2-client |
| **Description** | Forces outgoing HTTP/S messages to use HTTP 1.0 (instead of the default 1.1). |

## setCharacterEncoding

| Parameter | Value |
|----------------------|-------------------------------------------------------------------------------|
| **Name** | setCharacterEncoding |
| **Possible Values** | false |
| **Default Behavior** | By default, character encoding is enabled in the Micro Integrator. |
| **Scope** | axis2 |
| **Description** | This property can be used to remove character encoding. Note that if this property is set to 'false', the 'CHARACTER_SET_ENCODING' property cannot be used. |

## CHARACTER_SET_ENCODING

| Parameter | Value |
|----------------------|-------------------------------------------------------------------------------|
| **Name** | CHARACTER_SET_ENCODING |
| **Possible Values** | Any valid encoding standard (e.g., UTF-8, UTF-16) |
| **Default Behavior** | N/A |
| **Scope** | axis2 |
| **Description** | Specifies the encoding type used for the content of the files processed by the transport. Note that this property cannot be used if the 'setCharacterEncoding' property is set to 'false'. |

## DECODE_MULTIPART_DATA

| Parameter | Value |
|----------------------|-------------------------------------------------------------------------------|
| **Name** | DECODE_MULTIPART_DATA |
| **Possible Values** | true/false |
| **Default Behavior** | false |
| **Scope** | axis2 |
| **Description** | Specifies whether to decode multipart messages when the message is built in a content-aware mediation scenario. Otherwise, the outgoing message will be in encoded form. |

## HL7 Properties

### HL7_GENERATE_ACK
| Parameter | Description |
|---------------------|-------------|
| **Name** | HL7_GENERATE_ACK |
| **Possible Values** | true/false |
| **Scope** | axis2 |
| **Description** | Use this property to disable auto acknowledgement of HL7 messages that are received by the Micro Integrator. By default, auto acknowledgement is enabled in the Micro Integrator. You can disable it by setting this property to 'false'. |
| **Example** | `<property name="HL7_GENERATE_ACK" scope="axis2" value="true"/>` |
### HL7_RESULT_MODE
| Parameter | Description |
|---------------------|-------------|
| **Name** | HL7_RESULT_MODE |
| **Possible Values** | ACK or NACK |
| **Scope** | axis2 |
| **Description** | Use this property to specify whether an ACK or NACK should be returned to the messaging client as an acknowledgement. If you select a NACK response, you have the option to specify a custom NACK message that should be sent to the client along with the NACK. |
| **Example** | `<property name="HL7_RESULT_MODE" scope="axis2" value="ACK|NACK"/>` |
### HL7_NACK_MESSAGE
| Parameter | Description |
|---------------------|-------------|
| **Name** | HL7_NACK_MESSAGE |
| **Possible Values** | User-defined string value. |
| **Scope** | axis2 |
| **Description** | Use this property to set a custom NACK message that should be sent to the HL7 client as an acknowledgement. This property can be used only if the HL7 result mode is set to NACK. |
| **Example** | `<property name="HL7_NACK_MESSAGE" scope="axis2" value="error message"/>` |
### HL7_APPLICATION_ACK
| Parameter | Description |
|---------------------|-------------|
| **Name** | HL7_APPLICATION_ACK |
| **Possible Values** | true/false |
| **Scope** | axis2 |
| **Description** | Use this property to specify whether the Micro Integrator should wait for the backend to process the message before sending an acknowledgement (ACK or NACK message) back to the messaging client. |
| **Example** | `<property name="HL7_APPLICATION_ACK" scope="axis2" value="true|false"/>` |
### HL7_RAW_MESSAGE
| Parameter | Description |
|---------------------|-------------|
| **Name** | HL7_RAW_MESSAGE |
| **Possible Values** | $axis2:HL7_RAW_MESSAGE |
| **Scope** | axis2 |
| **Description** | Use this property to retrieve the original raw EDI-format HL7 message in an InSequence. |
| **Example** | `<property name="HL7_RAW_MESSAGE" scope="axis2" value="$axis2:HL7_RAW_MESSAGE"/>` |
    - -## enableREST - -| Parameter | Value | -|---------------------|---------------------------------------------------------------------------------------------------| -| **Name** | enableREST | -| **Possible Values** | true/false | -| **Default Behavior** | false | -| **Scope** | axis2 | -| **Description** | This property enables the check whether the original request to the endpoint was a REST request, which needs converting the response's `text/xml` content type into `application/xml` if the request was not a SOAP request.| -| **Example** | `` | \ No newline at end of file diff --git a/en/docs/reference/mediators/property-reference/generic-properties.md b/en/docs/reference/mediators/property-reference/generic-properties.md deleted file mode 100644 index 09ea9f6a2b..0000000000 --- a/en/docs/reference/mediators/property-reference/generic-properties.md +++ /dev/null @@ -1,959 +0,0 @@ -# Generic Properties - -!!! Info - The following are generic properties that can be used with the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-Group-Mediator). - -Generic properties allow you to configure messages as they're processed by the Micro Integrator, such as marking a message as out-only (no response message will be expected), adding a custom error message or code to the message, and disabling WS-Addressing headers. - -## PRESERVE_WS_ADDRESSING - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Parameter | Description |
|----------------------|-------------|
| **Name** | PRESERVE_WS_ADDRESSING |
| **Possible Values** | "true", "false" |
| **Default Behavior** | none |
| **Scope** | synapse |
| **Description** | By default, the Micro Integrator adds a new set of WS-Addressing headers to the messages forwarded from the Micro Integrator. If this property is set to "true" on a message, the Micro Integrator will forward it without altering its existing WS-Addressing headers. |
| **Example** | `<property name="PRESERVE_WS_ADDRESSING" value="true"/>` |
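A minimal in-sequence sketch; the endpoint URL below is an assumption reusing the sample backend that appears elsewhere in these docs:

```xml
<inSequence>
    <!-- Keep the client's WS-Addressing headers instead of generating new ones -->
    <property name="PRESERVE_WS_ADDRESSING" value="true"/>
    <send>
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
    </send>
</inSequence>
```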
## RESPONSE
| Parameter | Description |
|----------------------|-------------|
| **Name** | RESPONSE |
| **Possible Values** | "true", "false" |
| **Default Behavior** | none |
| **Scope** | synapse |
| **Description** | Once this property is set to 'true' on a message, the Micro Integrator will start treating it as a response message. It is generally used to route a request message back to its source as the response. |
| **Example** | `<property name="RESPONSE" value="true"/>` |
-## OUT_ONLY
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | OUT_ONLY |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description for value="true"** | Set this property to "true" on a message to indicate that no response message is expected once it is forwarded from the Micro Integrator. In other words, the Micro Integrator performs an out-only invocation with such messages. It is very important to set this property on messages involved in out-only invocations, to prevent the Micro Integrator from registering unnecessary callbacks for response handling and eventually running out of memory. |
-| **Description for value="false"** | Set this property to "false" to call the endpoint and get a response once the message is forwarded from the Micro Integrator. |
-
-**Example**
-
-```xml
-<property name="OUT_ONLY" value="true"/>
-```
-
-```xml
-<property name="OUT_ONLY" value="false"/>
-```
-
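-As an illustration, the following minimal sequence forwards a message one-way and registers no callback; the sequence name and endpoint address are assumptions made for the example.
-
-```xml
-<sequence name="ForwardOnlySequence">
-    <!-- no response is expected for this message -->
-    <property name="OUT_ONLY" value="true"/>
-    <send>
-        <endpoint>
-            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-        </endpoint>
-    </send>
-</sequence>
-```
-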
-## ERROR_CODE
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ERROR_CODE |
-| **Possible Values** | string |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description** | Use this property to set a custom error code on a message, which can later be processed by a Synapse fault handler. If Synapse encounters an error during mediation or routing, this property is populated automatically. |
-
-**Example**
-
-```xml
-<property name="ERROR_CODE" value="100100"/>
-```
-
-## ERROR_MESSAGE
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ERROR_MESSAGE |
-| **Possible Values** | string |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description** | Use this property to set a custom error message on a message, which can later be processed by a Synapse fault handler. If Synapse encounters an error during mediation or routing, this property is populated automatically. |
-
-**Example**
-
-```xml
-<log level="custom">
-    <property name="Cause" expression="get-property('ERROR_MESSAGE')"/>
-</log>
-```
-
-## ERROR_DETAIL
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ERROR_DETAIL |
-| **Possible Values** | string |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description** | Use this property to set the exception stack trace in case of an error. If the Micro Integrator encounters an error during mediation or routing, this property is populated automatically. |
-
-**Example**
-
-```xml
-<log level="custom">
-    <property name="Trace" expression="get-property('ERROR_DETAIL')"/>
-</log>
-```
-
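-As a combined illustration, the following minimal fault sequence logs the three error properties together and then drops the message; the sequence name is illustrative.
-
-```xml
-<sequence name="CommonFaultSequence">
-    <!-- these properties are populated automatically when mediation fails -->
-    <log level="custom">
-        <property name="Code" expression="get-property('ERROR_CODE')"/>
-        <property name="Cause" expression="get-property('ERROR_MESSAGE')"/>
-        <property name="Trace" expression="get-property('ERROR_DETAIL')"/>
-    </log>
-    <drop/>
-</sequence>
-```
-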
-## ERROR_EXCEPTION
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ERROR_EXCEPTION |
-| **Possible Values** | java.lang.Exception |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description** | Contains the actual exception thrown in case of a runtime error. |
-
-## TRANSPORT_HEADERS
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | TRANSPORT_HEADERS |
-| **Possible Values** | java.util.Map |
-| **Default Behavior** | Populated with the transport headers of the incoming request. |
-| **Scope** | axis2 |
-| **Description** | Contains the map of transport headers. It is populated automatically. Individual values of this map can be accessed using the Property mediator in the transport scope. |
-
-**Example**
-
-```xml
-<property name="TRANSPORT_HEADERS" action="remove" scope="axis2"/>
-```
-
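-To read a single entry of this map, use the transport scope, as in the following minimal sketch; the header name and log label are illustrative.
-
-```xml
-<log level="custom">
-    <!-- reads the Content-Type transport header of the incoming message -->
-    <property name="IncomingContentType" expression="$trp:Content-Type"/>
-</log>
-```
-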
-## messageType
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | messageType |
-| **Possible Values** | string |
-| **Default Behavior** | Content type of the incoming request. |
-| **Scope** | axis2 |
-| **Description** | The message formatter is selected based on this property. This property should hold the content type of the message, such as text/xml, application/xml, or application/json. |
-
-**Example**
-
-```xml
-<property name="messageType" value="text/xml" scope="axis2"/>
-```
-
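-A common use is switching the outgoing message format. The following minimal sketch converts the current payload to JSON before responding; the sequence name is illustrative.
-
-```xml
-<sequence name="RespondAsJson">
-    <!-- the JSON message formatter is selected based on this property -->
-    <property name="messageType" value="application/json" scope="axis2"/>
-    <respond/>
-</sequence>
-```
-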
-## ContentType
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ContentType |
-| **Possible Values** | string |
-| **Default Behavior** | Value of the Content-Type header of the incoming request. |
-| **Scope** | axis2 |
-| **Description** | This property takes effect only if the messageType property is set. If messageType is set, the value of the Content-Type HTTP header of the outgoing request is chosen based on this property. Note that this property needs to be set only if the message formatter implementation seeks it. |
-
-**Example**
-
-```xml
-<property name="ContentType" value="text/xml" scope="axis2"/>
-```
-
-## disableAddressingForOutMessages
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | disableAddressingForOutMessages |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false |
-| **Scope** | axis2 |
-| **Description** | Set this property to "true" if you do not want the Micro Integrator to add WS-Addressing headers to outgoing messages. This property can affect messages sent to back-end services as well as the responses routed back to clients. |
-
-**Example**
-
-```xml
-<property name="disableAddressingForOutMessages" value="true" scope="axis2"/>
-```
-
-## DISABLE_SMOOKS_RESULT_PAYLOAD
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | DISABLE_SMOOKS_RESULT_PAYLOAD |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false |
-| **Scope** | synapse |
-| **Description** | If this property is set to true, the result of the file content processing carried out by the Smooks mediator is not loaded into the message context. This is useful when you want to avoid the large memory growth/out-of-heap-space issues that may occur when large files processed by the Smooks mediator are reprocessed. See VFS Transport for a proxy service configuration where this property is used. |
-
-**Example**
-
-```xml
-<property name="DISABLE_SMOOKS_RESULT_PAYLOAD" value="true" scope="default" type="STRING"/>
-```
-
-## ClientApiNonBlocking
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | ClientApiNonBlocking |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | true |
-| **Scope** | axis2 |
-| **Description** | By default, Axis2 spawns a new thread to handle each outgoing message. For example, this property holds the primary thread until a VFS proxy writes to a VFS endpoint. You need to remove this property from the message to change this behavior when queuing transports such as JMS are involved. |
-
-**Example**
-
-```xml
-<property name="ClientApiNonBlocking" action="remove" scope="axis2"/>
-```
-
-## transportNonBlocking
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | transportNonBlocking |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | true |
-| **Scope** | axis2 |
-| **Description** | This property works the same way as ClientApiNonBlocking. It is recommended to use ClientApiNonBlocking for this purpose instead of transportNonBlocking, since the former uses the latest Axis2 translations. |
-
-**Example**
-
-```xml
-<property name="transportNonBlocking" action="remove" scope="axis2" value="true"/>
-```
-
-## TRANSPORT_IN_NAME
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | TRANSPORT_IN_NAME |
-| **Scope** | synapse |
-| **Description** | Mediation logic can read the name of the incoming transport using this property (available since WSO2 ESB 4.7.0). |
-
-**Example**
-
-```xml
-<log level="custom">
-    <property name="INCOMING_TRANSPORT" expression="get-property('TRANSPORT_IN_NAME')"/>
-</log>
-```
-
-## preserveProcessedHeaders
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | preserveProcessedHeaders |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false (processed SOAP headers are removed) |
-| **Scope** | synapse (default) |
-| **Description** | By default, Synapse removes the SOAP headers of incoming requests that have been processed. If you set this property to 'true', Synapse preserves the SOAP headers. |
-
-**Example**
-
-```xml
-<property name="preserveProcessedHeaders" value="true" scope="default"/>
-```
-
-## SERVER_IP
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | SERVER_IP |
-| **Possible Values** | The IP address or hostname of the Micro Integrator host |
-| **Default Behavior** | Set automatically by the mediation engine upon startup |
-| **Scope** | synapse |
-
-## FORCE_ERROR_ON_SOAP_FAULT
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | FORCE_ERROR_ON_SOAP_FAULT |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false |
-| **Scope** | synapse (default) |
-| **Description** | When a SOAP error occurs in a response, the SOAPFault sent from the back end is received by the out sequence as a usual response by default. If this property is set to true, the SOAPFault is redirected to a fault sequence. Note that when this property is true, only properties in the 'operation' scope are passed to the error handler; properties in the axis2 or default scopes are not. |
-
-**Example**
-
-```xml
-<property name="FORCE_ERROR_ON_SOAP_FAULT" value="true" scope="default" type="STRING"/>
-```
-
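-The following sketch shows the property in context: the sequence sets it before calling the back end, so that a SOAPFault response triggers the error handler instead of the out sequence. The sequence names and endpoint address are assumptions made for the example.
-
-```xml
-<sequence name="CallBackendWithFaultCheck" onError="CommonFaultSequence">
-    <!-- redirect SOAPFault responses to the fault handler -->
-    <property name="FORCE_ERROR_ON_SOAP_FAULT" value="true" scope="default" type="STRING"/>
-    <send>
-        <endpoint>
-            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-        </endpoint>
-    </send>
-</sequence>
-```
-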
-## QUOTE_STRING_IN_PAYLOAD_FACTORY_JSON
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | QUOTE_STRING_IN_PAYLOAD_FACTORY_JSON |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | none |
-| **Scope** | synapse |
-| **Description** | When you create a JSON payload using the PayloadFactory mediator, a string value evaluated from an argument is replaced as it is. If you want to force double quotes to be added to a string value evaluated from an argument, set this property to true. Note: double quotes are added only if the value evaluated from an argument is a string. If the value is a valid JSON number, boolean value, or null, double quotes are not added. |
-
-**Example**
-
-```xml
-<property name="QUOTE_STRING_IN_PAYLOAD_FACTORY_JSON" value="true"/>
-```
-
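-For instance, in the following hypothetical PayloadFactory configuration, setting the property beforehand ensures that a string evaluated for `$1` is emitted with surrounding double quotes; the payload shape and XPath expressions are assumptions.
-
-```xml
-<property name="QUOTE_STRING_IN_PAYLOAD_FACTORY_JSON" value="true"/>
-<payloadFactory media-type="json">
-    <format>{"name": $1, "age": $2}</format>
-    <args>
-        <arg evaluator="xml" expression="//customer/name"/>
-        <arg evaluator="xml" expression="//customer/age"/>
-    </args>
-</payloadFactory>
-```
-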
-## RabbitMQ Properties
-
-The following generic properties can be used in the [Property mediator]({{base_path}}/reference/mediators/property-mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-group-mediator/) for RabbitMQ use cases.
-
-### SET_ROLLBACK_ONLY
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | SET_ROLLBACK_ONLY |
-| **Possible Values** | true/false |
-| **Scope** | default |
-| **Description** | When a message is read from a RabbitMQ message queue, it is sent to a service running in the back end. If a failure occurs, the Micro Integrator performs a basicReject with the requeue flag set to 'false'. In that case, you must configure a Dead Letter Exchange to avoid losing messages. The same concept can be used to control the number of retries and to delay messages. Note that you need to set the SET_ROLLBACK_ONLY property in the fault handler (e.g., the fault sequence). |
-
-**Example**
-
-```xml
-<property name="SET_ROLLBACK_ONLY" value="true" scope="default" type="STRING"/>
-```
-
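-A minimal sketch of a fault sequence for a RabbitMQ consumer might look as follows; with a Dead Letter Exchange configured on the broker, the rejected message is routed there rather than lost. The sequence name and log message are illustrative.
-
-```xml
-<sequence name="RabbitMqFaultSequence">
-    <!-- reject the message without requeueing; the broker's DLX takes over -->
-    <property name="SET_ROLLBACK_ONLY" value="true" scope="default" type="STRING"/>
-    <log level="custom">
-        <property name="Status" value="Message rejected; routed to the Dead Letter Exchange"/>
-    </log>
-</sequence>
-```
-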
-### SET_REQUEUE_ON_ROLLBACK
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | SET_REQUEUE_ON_ROLLBACK |
-| **Possible Values** | true/false |
-| **Scope** | default |
-| **Description** | If this property is set to true in the fault sequence, when a message is read from a RabbitMQ message queue, the Micro Integrator performs a basicReject with the requeue flag set to 'true'. This allows RabbitMQ to immediately redeliver the rejected messages to the consumer. Note that you need to set the SET_REQUEUE_ON_ROLLBACK property in the fault handler (e.g., the fault sequence). |
-
-**Example**
-
-```xml
-<property name="SET_REQUEUE_ON_ROLLBACK" value="true" scope="default" type="STRING"/>
-```
-
-### FORCE_COLLECT_PAYLOAD
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | FORCE_COLLECT_PAYLOAD |
-| **Possible Values** | true/false |
-| **Scope** | default |
-| **Description** | When tracing data, the payload is collected only when the respective mediator is content-altering. Use this property when you want to collect the payload forcefully, regardless of the nature of the mediator. Note that there is a performance impact, because the payload is collected for every mediator. |
-
-**Example**
-
-```xml
-<property name="FORCE_COLLECT_PAYLOAD" value="true" scope="default" type="STRING"/>
-```
-
diff --git a/en/docs/reference/mediators/property-reference/http-transport-properties.md b/en/docs/reference/mediators/property-reference/http-transport-properties.md
deleted file mode 100644
index f0fa11fa40..0000000000
--- a/en/docs/reference/mediators/property-reference/http-transport-properties.md
+++ /dev/null
@@ -1,609 +0,0 @@
-# HTTP Transport Properties
-
-!!! Info
-    The following are HTTP transport properties that can be used with the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-Group-Mediator).
-
-HTTP transport properties allow you to configure how the HTTP transport processes messages, such as forcing a 202 HTTP response to the client so that it stops waiting for a response, setting the HTTP status code, and appending a context to the target URL in RESTful invocations.
-
-## POST_TO_URI
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | POST_TO_URI |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false |
-| **Scope** | axis2 |
-| **Description** | When set to true, the request URL sent from the Micro Integrator is a complete URL. When set to false, only the context path is included in the request URL that is sent. It is important to set this property to true when the Micro Integrator needs to communicate with the back-end service through a proxy server. |
-
-**Example**
-
-```xml
-<property name="POST_TO_URI" scope="axis2" value="true"/>
-```
-
-## FORCE_SC_ACCEPTED
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | FORCE_SC_ACCEPTED |
-| **Possible Values** | "true", "false" |
-| **Default Behavior** | false |
-| **Scope** | axis2 |
-| **Description** | When set to true, this property forces a 202 HTTP response to the client immediately after the Micro Integrator receives the message, so that the client stops waiting for a response. |
-
-**Example**
-
-```xml
-<property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
-```
-
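-FORCE_SC_ACCEPTED is typically combined with OUT_ONLY for fire-and-forget messaging. The following minimal `inSequence` sketch replies 202 to the client and forwards the message one-way; the endpoint address is an assumption made for the example.
-
-```xml
-<inSequence>
-    <!-- reply 202 Accepted to the client immediately -->
-    <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
-    <!-- do not wait for a back-end response -->
-    <property name="OUT_ONLY" value="true"/>
-    <send>
-        <endpoint>
-            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-        </endpoint>
-    </send>
-</inSequence>
-```
-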
    ParameterDescription

    Name

    DISABLE_CHUNKING

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    -

    If you set this to true, it disables HTTP chunking for outgoing messages. Instead, the Micro Integrator builds the message to calculate the content length and then sends the particular message to the backend with the content length (e.g., Content-Length: 25 ).

    -

    You can use this parameter if the client sends the request with HTTP chunking (i.e., with Transfer Encoding:chunked ) although you need to send the message without HTTP chunking to the backend, or if you need to modify the message payload, which the client receives before sending it to the backend.


    -Note: -

    This property might decrease performance since the messages get built per each invocation. Also, this property does not affect Callout mediators, whose chunking must be disabled separately.

    -

    Example

    -
    -
    -
    <property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
    -
    -
    -
    - -## NO_ENTITY_BODY - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    NO_ENTITY_BODY

    Possible Values

    "true", "false"

    Default Behavior

    In case of GET requests this property is set to true.

    Scope

    Axis2

    Description

    -

    Set this property if you want to do the following:

    -
      -
    • check if an incoming request to the mediation flow has an entity body or not
    • -
    • check if an outgoing request/response generated from the mediation flow has an entity body or not
    • -
    -Note: -

    If using the PayloadFactory mediator, this property does not need to be manually set since it is done automatically by the mediator.

    -

    Example

    -
    -
    -
    <property name="NO_ENTITY_BODY" value="true" scope="axis2" type="BOOLEAN"/>
    -
    -
    -
    - -## FORCE_HTTP_1.0 - - - - - - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - -
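-For example, the following minimal sketch branches on whether the incoming request carries a body; the log messages are illustrative.
-
-```xml
-<filter xpath="get-property('axis2', 'NO_ENTITY_BODY') = 'true'">
-    <then>
-        <log level="custom">
-            <property name="Info" value="Request has no entity body"/>
-        </log>
-    </then>
-    <else>
-        <log level="custom">
-            <property name="Info" value="Request has an entity body"/>
-        </log>
-    </else>
-</filter>
-```
-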
    ParameterDescription

    Name

    FORCE_HTTP_1.0

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    Force HTTP 1.0 for outgoing HTTP messages.

    Example

    -
    -
    -
    <property name="FORCE_HTTP_1.0" value="true" scope="axis2"/>
    -
    -
    -
    - -## HTTP_SC - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    HTTP_SC

    Possible Values

    HTTP status code number

    Default Behavior

    none

    Scope

    axis2

    Description

    Set the HTTP status code.

    Example

    -
    -
    -
    <property name="HTTP_SC" value="500" scope="axis2"/>
    -
    -
    -
    - -## HTTP_SC_DESC - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
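-For instance, a fault sequence might set a 500 status and return a custom payload to the client. The following is a minimal sketch; the sequence name and error body are assumptions.
-
-```xml
-<sequence name="ReturnServerError">
-    <payloadFactory media-type="json">
-        <format>{"error": "backend unavailable"}</format>
-        <args/>
-    </payloadFactory>
-    <property name="messageType" value="application/json" scope="axis2"/>
-    <!-- set the response status code to 500 -->
-    <property name="HTTP_SC" value="500" scope="axis2"/>
-    <respond/>
-</sequence>
-```
-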
    ParameterDescription

    Name

    HTTP_SC_DESC

    Possible Values

    -HTTP response's Reason- Phrase that is sent by the backend. For example, if the backend sends the response's status as HTTP/1.1 200 OK, then the value of HTTP_SC_DESC is OK. -

    Default Behavior

    none

    Scope

    axis2

    Description

    Set the HTTP status message (Reason-Phrase).

    Example

    -
    -
    -
    <property name="HTTP_SC_DESC" value="Your description here" scope="axis2"/>
    -
    -
    -
    - -## FAULTS_AS_HTTP_200 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    FAULTS_AS_HTTP_200

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    When the Micro Integrator receives a soap fault as a HTTP 500 message, the Micro Integrator will forward this fault to client with status code 200.

    Example

    -
    -
    -
    <property name="FAULTS_AS_HTTP_200" value="true" scope="axis2"/>
    -
    -
    -
    - -## NO_KEEPALIVE - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    NO_KEEPALIVE

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    Disables HTTP keep alive for outgoing requests.

    Example

    -
    -
    -
    <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
    -
    -
    -
    - -## REST_URL_POSTFIX - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    REST_URL_POSTFIX

    Possible Values

    A URL fragment starting with "/"

    Default Behavior

    In the case of GET requests through an address endpoint, this contains the query string.

    Scope

    axis2

    Description

    The value of this property will be appended to the target URL when sending messages out in a RESTful manner through an address endpoint. This is useful when you need to append a context to the target URL in case of RESTful invocations. If you are using an HTTP endpoint instead of an address endpoint, specify variables in the format of "uri.var.*" instead of using this property.

    Example

    -
    -
    -
    <property name="REST_URL_POSTFIX" value="/context" scope="axis2"/>
    -
    -
    -
    - -## REQUEST_HOST_HEADER - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
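-The postfix can also be computed at runtime. The following minimal sketch appends a path segment taken from the incoming payload before sending through an address endpoint; the XPath expression, payload shape, and endpoint address are assumptions.
-
-```xml
-<!-- e.g., turns http://localhost:8290/backend into .../backend/orders/<orderId> -->
-<property name="REST_URL_POSTFIX" expression="concat('/orders/', //orderId)" scope="axis2"/>
-<send>
-    <endpoint>
-        <address uri="http://localhost:8290/backend" format="rest"/>
-    </endpoint>
-</send>
-```
-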
    ParameterDescription

    Name

    REQUEST_HOST_HEADER

    Possible Values

    string

    Default Behavior

    The Micro Integrator will set hostname of target endpoint and port as the HTTP host header

    Scope

    axis2

    Description

    The value of this property will be set as the HTTP host header of outgoing request

    Example

    -
    -
    -
    <property name="REQUEST_HOST_HEADER" value="www.wso2.org" scope="axis2"/>
    -
    -
    -
    - -## FORCE_POST_PUT_NOBODY - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    FORCE_POST_PUT_NOBODY

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    This property allows to send a request without a body for POST and PUT HTTP methods.

    -

    Applicable only for HTTP PassThrough transport.

    Example

    -
    -
    -
    <property name="FORCE_POST_PUT_NOBODY" value="true" scope="axis2" type="BOOLEAN"/>
    -
    -
    -
    - -## FORCE_HTTP_CONTENT_LENGTH - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    FORCE_HTTP_CONTENT_LENGTH

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    -

    If the request sent by the client contains the ‘Content-Length’ header, this property allows the Micro Integrator to send the request with the content length (without HTTP chunking) to the back end server.

    -

    You should set this to true in scenarios where the backend server is not able to accept chunked content. For example, in a scenario where a pass-through proxy is defined and the backend does not accept chunked content, this property should be used together with the COPY_CONTENT_LENGTH_FROM_INCOMING property, to simply add the content length without chunking.

    -

    When HTTP 1.1 is used, this property disables chunking and sends the content length. When HTTP 1.0 is used, the property only sends the content length.

    -Note: -

    This property can cause performance degradation, and thereby, you should only use it with message relay. If you set this to true, the Micro Integrator forwards the content length coming from the client request to the backend without building the message and calculating the content length. Since the message doesn’t get build, using these properties will perform better than using DISABLE_CHUNKING . However, if you change the receiving payload before sending it to the backend, then having this property will result in an error due to a content length mismatch.

    -

    Example

    -
    -
    -
    <property name="FORCE_HTTP_CONTENT_LENGTH" scope="axis2" value="true"></property>
    -
    -
    -
    - -## COPY_CONTENT_LENGTH_FROM_INCOMING - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDescription

    Name

    COPY_CONTENT_LENGTH_FROM_INCOMING

    Possible Values

    "true", "false"

    Default Behavior

    false

    Scope

    axis2

    Description

    This property allows the HTTP content length to be copied from an incoming message. It is only valid when the FORCE_HTTP_CONTENT_LENGTH property is used. The COPY_CONTENT_LENGTH_FROM_INCOMING avoids buffering the message in memory for calculating the content length, thus reducing the risk of performance degradation.

    Example

    -
    -
    -
    <property name="COPY_CONTENT_LENGTH_FROM_INCOMING" value="true" scope="axis2"/>
    -
    -
    -
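-These two properties are designed to be used together. The following minimal sketch relays a message to a chunking-intolerant back end while copying the client's Content-Length as-is; the endpoint address is an assumption made for the example.
-
-```xml
-<inSequence>
-    <!-- send with Content-Length instead of chunked transfer encoding -->
-    <property name="FORCE_HTTP_CONTENT_LENGTH" scope="axis2" value="true"/>
-    <!-- reuse the client's Content-Length without building the message -->
-    <property name="COPY_CONTENT_LENGTH_FROM_INCOMING" value="true" scope="axis2"/>
-    <send>
-        <endpoint>
-            <address uri="http://localhost:9000/services/LegacyService"/>
-        </endpoint>
-    </send>
-</inSequence>
-```
-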
diff --git a/en/docs/reference/mediators/property-reference/message-context-properties.md b/en/docs/reference/mediators/property-reference/message-context-properties.md
deleted file mode 100644
index 37cfe8a380..0000000000
--- a/en/docs/reference/mediators/property-reference/message-context-properties.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Synapse Message Context Properties
-
-!!! Info
-    The following are message context properties that can be used with the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-Group-Mediator).
-
-The Synapse message context properties allow you to get information about the message, such as the date/time it was sent, the message format, and the message operation. You can use the `get-property()` function in the [Property mediator]({{base_path}}/reference/mediators/property-Mediator) with the scope set to `Synapse` to retrieve these properties.
-
-## SYSTEM_DATE
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | SYSTEM_DATE |
-| **Possible Values** | - |
-| **Default Behavior** | - |
-| **Scope** | - |
-| **Description** | Returns the current date as a String. Optionally, a date format as per the standard date format may be supplied by entering the following in the Namespaced Property Editor: `get-property("SYSTEM_DATE", "yyyy-MM-dd'T'HH:mm:ss.SSSXXX")` |
-
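-For example, a log mediator can stamp each message with a formatted date using the two-argument form documented above; this is a minimal sketch with an illustrative log label.
-
-```xml
-<log level="custom">
-    <property name="Timestamp" expression="get-property('SYSTEM_DATE', 'yyyy-MM-dd HH:mm:ss')"/>
-</log>
-```
-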
-## SYSTEM_TIME
-
-| Parameter | Description |
-|-----------|-------------|
-| **Name** | SYSTEM_TIME |
-| **Possible Values** | - |
-| **Default Behavior** | - |
-| **Scope** | - |
-| **Description** | Returns the current time in milliseconds, i.e., the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC. |
-
-## To, From, Action, FaultTo, ReplyTo, MessageID
-
-| Parameter | Description |
-|-----------|-------------|
-| **Names** | To, From, Action, FaultTo, ReplyTo, MessageID |
-| **Possible Values** | - |
-| **Default Behavior** | - |
-| **Scope** | - |
-| **Description** | The message To, Action, and WS-Addressing properties. |
-
-## MESSAGE_FORMAT
-
-| Parameter | Description |
-|-----------|-------------|
-| **Names** | MESSAGE_FORMAT |
-| **Possible Values** | - |
-| **Default Behavior** | - |
-| **Scope** | - |
-| **Description** | Returns the message format, i.e., returns pox, get, soap11, or soap12. |
-
-## OperationName
-
-| Parameter | Description |
-|-----------|-------------|
-| **Names** | OperationName |
-| **Possible Values** | - |
-| **Default Behavior** | - |
-| **Scope** | - |
-| **Description** | Returns the operation name for the message. |
diff --git a/en/docs/reference/mediators/property-reference/soap-headers.md b/en/docs/reference/mediators/property-reference/soap-headers.md
deleted file mode 100644
index cf3613da91..0000000000
--- a/en/docs/reference/mediators/property-reference/soap-headers.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# SOAP Headers
-
-!!! Info
-    The following properties are SOAP headers that can be used with the [Property mediator]({{base_path}}/reference/mediators/property-mediator) and the [Property Group mediator]({{base_path}}/reference/mediators/property-group-mediator).
-
-SOAP headers provide information about the message, such as the To and From values. You can use the `get-property()` function in the [Property mediator]({{base_path}}/reference/mediators/property-mediator) to retrieve these headers. You can also add custom SOAP headers using either the [PayloadFactory mediator]({{base_path}}/reference/mediators/payloadfactory-mediator) or the [Script mediator]({{base_path}}/reference/mediators/script-mediator).
-
-## To
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | To |
-| **Possible Values** | Any URI |
-| **Description** | The To header of the message. |
-| **Example** | get-property("To") |
-
-## From
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | From |
-| **Possible Values** | Any URI |
-| **Description** | The From header of the message. |
-| **Example** | get-property("From") |
-
-## Action
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | Action |
-| **Possible Values** | Any URI |
-| **Description** | The SOAPAction header of the message. |
-| **Example** | get-property("Action") |
-
-## ReplyTo
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | ReplyTo |
-| **Possible Values** | Any URI |
-| **Description** | The ReplyTo header of the message. |
-| **Example** | - |
-
-## MessageID
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | MessageID |
-| **Possible Values** | UUID |
-| **Description** | The unique message ID of the message. It is not recommended to make alterations to this property of a message. |
-| **Example** | get-property("MessageID") |
-
-## RelatesTo
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | RelatesTo |
-| **Possible Values** | UUID |
-| **Description** | The unique ID of the request to which the current message is related. It is not recommended to make changes to this property. |
-| **Example** | get-property("RelatesTo") |
-
-## FaultTo
-
-| Parameter | Value |
-|-----------|-------|
-| **Header Name** | FaultTo |
-| **Possible Values** | Any URI |
-| **Description** | The FaultTo header of the message. |
-
-**Example**
-
-```xml
-<header name="FaultTo" value="http://localhost:9000"/>
-```
-
diff --git a/en/docs/reference/mediators/respond-mediator.md b/en/docs/reference/mediators/respond-mediator.md
deleted file mode 100644
index a5e0918c61..0000000000
--- a/en/docs/reference/mediators/respond-mediator.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Respond Mediator
-
-The **Respond Mediator** stops the processing on the current message and sends the message back to the client as a response.
-
-## Syntax
-
-The respond token refers to a `<respond>` element, which is used to stop further processing of a message and send the message back to the client.
-
-``` java
-<respond/>
-```
-
-## Configuration
-
-As with other mediators, after adding the Respond mediator to a sequence, you can click its up and down arrows to move its location in the sequence.
-
-## Example
-
-Assume that you have a configuration that sends the request to the Stock Quote service and changes the response value when the symbol is WSO2 or CRF. Also assume that you want to temporarily change the configuration so that if the symbol is CRF, the ESB profile just sends the message back to the client without sending it to the Stock Quote service or performing any additional processing. To achieve this, you can add the Respond mediator at the beginning of the CRF case, as shown below in outline. All the configuration after the Respond mediator is ignored. As a result, the rest of the CRF case configuration is left intact, allowing you to revert to the original behavior in the future by removing the Respond mediator if required.
-
-```xml
-<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy">
-   <target>
-      <inSequence>
-         <switch xmlns:m0="http://services.samples" source="//m0:getQuote/m0:request/m0:symbol">
-            <case regex="WSO2">
-               <!-- mediation logic for the WSO2 symbol -->
-            </case>
-            <case regex="CRF">
-               <respond/>
-               <!-- the rest of the CRF case is ignored but left intact -->
-            </case>
-         </switch>
-         <send>
-            <endpoint>
-               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-            </endpoint>
-         </send>
-      </inSequence>
-   </target>
-</proxy>
-```
diff --git a/en/docs/reference/mediators/script-mediator.md b/en/docs/reference/mediators/script-mediator.md
deleted file mode 100644
index 6f04538096..0000000000
--- a/en/docs/reference/mediators/script-mediator.md
+++ /dev/null
@@ -1,503 +0,0 @@
-# Script Mediator
-
-The **Script Mediator** is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.
-
-!!! Note
-    The Micro Integrator uses the Rhino engine to execute JavaScript. The Rhino engine converts the script to a method inside a Java class. Therefore, when processing large JSON data volumes, the code length must be less than 65536 characters, since the Script mediator converts the payload into a Java object. However, you can use the following alternative options to process large JSON data volumes.
-
-    - Achieve the same functionality via a [Class mediator]({{base_path}}/reference/mediators/class-mediator).
-    - If the original message consists of repetitive sections, you can use the [Iterate mediator]({{base_path}}/reference/mediators/iterate-mediator/) to generate a relatively small payload using those repetitive sections. This will then allow you to use the Script mediator.
-
-    The Script Mediator supports using Nashorn to execute JavaScript, in addition to its default Rhino engine.
-
-A Script mediator can be created in one of the following methods.
-
-- With the script program statements stored in a separate file, referenced via the **Local or Remote Registry entry**.
-- With the script program statements embedded inline within the Synapse configuration.
-
-Synapse uses the Apache [Bean Scripting Framework](http://jakarta.apache.org/bsf/) for scripting language support. Any script language supported by BSF may be used to implement a Synapse Mediator. With the Script Mediator, you can invoke a function in the corresponding script. With these functions, it is possible to access the Synapse message context via a predefined script variable named `mc`. The `mc` variable represents an implementation of the `MessageContext`, named `ScriptMessageContext.java`, which contains the following methods that can be accessed within the script as `mc.methodName`.
-
-| Return? | Method Name | Description |
-|---------|------------------------------------|-------------|
-| Yes | getPayloadXML() | This gets an XML representation of the SOAP Body payload. |
-| No | setPayloadXML(payload) | This sets the SOAP body payload from XML. |
-| No | addHeader(mustUnderstand, content) | This adds a new SOAP header to the message. |
-| Yes | getEnvelopeXML() | This gets the XML representation of the complete SOAP envelope. |
-| No | setTo(reference) | This is used to set the value which specifies the receiver of the message. |
-| Yes | setFaultTo(reference) | This is used to set the value which specifies the receiver of the faults relating to the message. |
-| No | setFrom(reference) | This is used to set the value which specifies the sender of the message. |
-| No | setReplyTo(reference) | This is used to set the value which specifies the receiver of the replies to the message. |
-| Yes | getPayloadJSON() | This gets the JSON representation of a SOAP Body payload. |
-| No | setPayloadJSON(payload) | This sets the JSON representation of a payload obtained via the `getPayloadJSON()` method and sets it in the current message context. |
-| Yes | getProperty(name) | This gets a property from the current message context. |
-| No | setProperty(key, value) | This is used to set a property in the current message context. The previously set property values are replaced by this method. |
-
-Implementing a Mediator with a script language has advantages over using the built-in Synapse Mediator types or implementing a custom Java class Mediator. Script Mediators have the flexibility of a class Mediator with access to the Synapse `MessageContext` and `SynapseEnvironment` APIs. Also, the ease of use and dynamic nature of scripting languages allows the rapid development and prototyping of custom mediators. An additional benefit of some scripting languages is that they have very simple and elegant XML manipulation capabilities, which makes them very usable in a Synapse mediation environment, e.g., JavaScript E4X or Ruby REXML.
-
-For both types of script mediator definitions, the `MessageContext` passed into the script has additional methods over the standard Synapse `MessageContext` to enable working with XML natural to the scripting language. Examples are `getPayloadXML` and `setPayloadXML` with `E4X` XML objects when using JavaScript, and REXML documents when using Ruby.
-
-!!! Info
-    The Script mediator is a [content-aware]({{base_path}}/reference/mediators/about-mediators/#classification-of-mediators) mediator.
-
-## Prerequisites
-
-- If you are using **nashornJS** as the JavaScript language, and also if you have JSON operations defined in the Script mediator, you need to have JDK version `8u112` or a later version in your environment. If your environment has an older JDK version, the Script mediator (that uses nashornJS and JSON operations) will not function properly because of this [JDK bug](https://bugs.openjdk.java.net/browse/JDK-8157160). That is, you will encounter server exceptions in the Micro Integrator.
-
-    !!! Note
-        If you are using JDK 15 or above, you need to manually copy the [nashorn-core](https://mvnrepository.com/artifact/org.openjdk.nashorn/nashorn-core/15.4) and [asm-util](https://mvnrepository.com/artifact/org.ow2.asm/asm-util/9.5) jars to the <MI_HOME>/lib directory, since Nashorn was [removed](https://openjdk.org/jeps/372) from the JDK in Java 15.
-
-- Listed below are the prerequisites for writing a Script mediator using JavaScript, Groovy, or Ruby.
-
-    | Scripting Language | Prerequisite |
-    |--------------------|--------------|
-    | Groovy | Download the groovy-all-2.4.4.jar file and copy it to the `<MI_HOME>/dropins` directory. Note that when you define the script, you need to start by importing Groovy. |
-    | Ruby | Install the JRuby engine for mediation. This is available in the WSO2 P2 repository as a feature (WSO2 Carbon - JRuby Engine for Mediation). Alternatively, you can download and install the JRuby engine manually: download the jruby-complete-1.3.0.wso2v1.jar file from the WSO2 P2 repository and copy it to the `<MI_HOME>/dropins` directory. |
-    | JavaScript | The JavaScript/E4X support is enabled by default in the Micro Integrator and ready for use. |
-
-## Syntax
-
-Click on the relevant tab to view the syntax for a Script mediator using an Inline script, or a Script mediator using a script of the registry. The original syntax snippets were lost in extraction; the forms below are minimal sketches of the standard Synapse script mediator syntax.
-
-- **Using an Inline script**:
-  The following syntax applies when you create a Script mediator with the script program statements embedded inline within the Synapse configuration.
-
-    ```
-    <script language="js"><![CDATA[...script statements here...]]></script>
-    ```
-
-- **Using a script of the registry**:
-  The following syntax applies when you create a Script mediator with the script program statements stored in a separate file, referenced via the Local or Remote Registry entry.
-
-    !!! Info
-        If you are creating the Registry Resource via Tooling, you need not specify the content/media type, because it gets automatically applied when you select the **JavaScript File Template** as shown below.
-
-        ![select the JavaScript File Template]({{base_path}}/assets/img/integrate/mediators/119131139/119131140.png)
-
-    ```
-    <script language="js" key="string" function="script-function-name"/>
-    ```
-
-## Configuration
-
-- **Inline**: If this script type is selected, the script is specified inline. The parameters available to configure a Script mediator using an inline script are as follows.
-
-    | Parameter Name | Description |
-    |----------------|-------------|
-    | **Language** | The scripting language for the Script mediator. You can select from the following available languages: JavaScript (represented as `js` in the source view), Groovy (represented as `groovy` in the source view), and Ruby (represented as `rb` in the source view). |
-    | **Source** | Enter the source in this parameter. Note: If you are using Groovy as the scripting language, you need to first import Groovy in your script by adding the following: `import groovy.json.*;` |
-
-- **Registry**: If this script type is selected, a script which is already saved in the registry is referred to using a key. The parameters available to configure a Script mediator using a script saved in the registry are as follows.
-
-    | Parameter Name | Description |
-    |----------------|-------------|
-    | **Language** | The scripting language for the Script mediator. You can select from the following available languages: JavaScript (represented as `js` in the source view), Groovy (represented as `groovy` in the source view; be sure that your script starts with `import groovy.json.*;`, which indicates that Groovy is imported), and Ruby (represented as `rb` in the source view). |
-    | **Function** | The function of the selected script language to be invoked. This is an optional parameter. If no value is specified, a default function named `mediate` is applied. This function takes the Synapse MessageContext as its single parameter. The function may return a boolean; if it does not, the value `true` is assumed, and the Script mediator returns this value. |
-    | **Key Type** | You can select one of the following options: **Static Key** (an existing key can be selected from the registry for the Key parameter) or **Dynamic Key** (the key can be entered dynamically in the Key parameter). |
-    | **Key** | The Registry location of the source. You can click either **Configuration Registry** or **Governance Registry** to select the source from the resource tree. |
-    | **Include keys** | This parameter allows you to include functions defined in two or more scripts in your Script mediator configuration. After pointing to one script in the Key parameter, you can click **Add Include Key** to add the function in another script. When you click **Add Include Key**, the following parameters are displayed: enter the script to be included in the Key parameter by clicking either **Configuration Registry** or **Governance Registry**, and then selecting the relevant script from the resource tree. |
-
-## Examples
-
-### Using an inline script
-
-The following configuration is an example of an inline mediator using `JavaScript/E4X`, which returns false if the SOAP message body contains an element named `symbol` with a value of `IBM`. (The original snippet was lost in extraction; this is a minimal sketch consistent with that description.)
-
-``` java
-<script language="js">mc.getPayloadXML()..*::symbol != "IBM";</script>
-```
-
-### Using a script saved in the registry
-
-In the following example, the script is loaded from the registry by using the key `repository/conf/sample/resources/script/test.js`.
-
-``` java
-<script language="js" key="repository/conf/sample/resources/script/test.js" function="transformRequest"/>
-```
-
-The script is written in JavaScript. The function to be executed is `transformRequest`. This function may be as follows in a script saved in the **Registry**.
-
-``` js
-// stockquoteTransform.js
-function transformRequest(mc) {
-    transformRequestFunction(mc);
-}
-
-function transformResponse(mc) {
-    transformResponseFunction(mc);
-}
-```
-
-In addition, the function in the script named `sampleScript`, which is included in the mediation configuration via the `include key` sub element, is also executed in the mediation. Note that in order to do this, the `sampleScript` script should also be saved as a resource in the Registry. This script can be as follows. (The E4X XML literals below are reconstructed from the variables visible in the original; the element names are assumptions.)
-
-``` js
-// sample.js
-function transformRequestFunction(mc) {
-    var symbol = mc.getPayloadXML()..*::Code.toString();
-    mc.setPayloadXML(
-        <m:getQuote xmlns:m="http://services.samples">
-            <m:request>
-                <m:symbol>{symbol}</m:symbol>
-            </m:request>
-        </m:getQuote>);
-}
-
-function transformResponseFunction(mc) {
-    var symbol = mc.getPayloadXML()..*::symbol.toString();
-    var price = mc.getPayloadXML()..*::last.toString();
-    mc.setPayloadXML(
-        <m:CheckPriceResponse xmlns:m="http://services.samples">
-            <m:Code>{symbol}</m:Code>
-            <m:Price>{price}</m:Price>
-        </m:CheckPriceResponse>);
-}
-```
-
-### Adding a custom SOAP header
-
-You can add custom SOAP headers to a request by using the `addHeader(mustUnderstand, content)` method of the Script mediator in a proxy service, as shown below in outline. (The original configuration was lost in extraction; the proxy name, header element, and endpoint address here are illustrative.)
-
-```
-<proxy xmlns="http://ws.apache.org/ns/synapse" name="ScriptHeaderProxy" transports="http https" startOnLoad="true">
-   <target>
-      <inSequence>
-         <script language="js"><![CDATA[
-            mc.addHeader(false, <urn:AuthToken xmlns:urn="http://example.com">ABC123</urn:AuthToken>);
-         ]]></script>
-         <send>
-            <endpoint>
-               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-            </endpoint>
-         </send>
-      </inSequence>
-   </target>
-</proxy>
-```
-
-### Example per method
-
-The following table contains examples of how some of the commonly used methods can be included in the script invoked by the following sample Script mediator configuration.
-
-```
-<script language="js"