diff --git a/en/docs/administer/logging-and-monitoring/logging/configuring-logging.md b/en/docs/administer/logging-and-monitoring/logging/configuring-logging.md index 40a2e6a968..bb9f9e9341 100644 --- a/en/docs/administer/logging-and-monitoring/logging/configuring-logging.md +++ b/en/docs/administer/logging-and-monitoring/logging/configuring-logging.md @@ -37,7 +37,7 @@ appenders = CARBON_LOGFILE, CARBON_CONSOLE, AUDIT_LOGFILE, ATOMIKOS_LOGFILE, CAR DELETE_EVENT_LOGFILE, TRANSACTION_LOGFILE ``` -For information on managing the log growth of the Carbon Logs, see the [Managing log growth]({{base_path}}/administer/logging-and-monitoring/logging/managing-log-growth) guide. +For information on managing the log growth of the Carbon Logs, see the [Managing log growth](https://mi.docs.wso2.com/en/latest/administer/logging-and-monitoring/logging/managing-log-growth/) guide. ### Enabling logs for a tenant diff --git a/en/docs/administer/logging-and-monitoring/logging/managing-log-growth.md b/en/docs/administer/logging-and-monitoring/logging/managing-log-growth.md deleted file mode 100644 index af7f510f6d..0000000000 --- a/en/docs/administer/logging-and-monitoring/logging/managing-log-growth.md +++ /dev/null @@ -1,116 +0,0 @@ -# Managing Log Growth - -See the following content on managing the growth of [Carbon Logs](#managing-the-growth-of-carbon-logs) and [Audit Logs](#managing-the-growth-of-audit-log-files): - -Log4j2 supports two main log rotation options. - -- Rollover based on log file size. -- Rollover based on a time period. - -By default, WSO2 supports rollover based on a time period. This interval is, by default, one day. The log4j-based logging mechanism uses appenders to append all the log messages into a file, then at the end of the log rotation period, a new file will be created with the appended logs and archived. The name of the archived log file will always contain the date on which the file is archived. 
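As a quick illustration of the date-stamped archive naming described above, the following sketch (illustrative only, not WSO2 code; the base name and index are hypothetical) derives an archive file name the way a log4j2 file pattern such as `wso2carbon-%d{MM-dd-yyyy}-%i.log` would:

```python
from datetime import date

# Illustrative sketch: build a date-stamped archive name from a base name,
# the archive date, and a rollover index, mirroring a log4j2 filePattern
# such as wso2carbon-%d{MM-dd-yyyy}-%i.log.
def archived_log_name(base: str, archived_on: date, index: int = 1) -> str:
    return f"{base}-{archived_on.strftime('%m-%d-%Y')}-{index}.log"

print(archived_log_name("wso2carbon", date(2019, 12, 16)))
```

The index (`%i`) is what keeps multiple size-based rollovers on the same day from overwriting each other.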
- 
-## Managing the growth of Carbon Logs
-
-Log growth in [Carbon Logs]({{base_path}}/administer/logging-and-monitoring/logging/configuring-logging/#configuring-carbon-logs) can be managed using the following configurations in the `/repository/conf/log4j2.properties` file.
-
-- Rollover based on a time period can be configured by changing the `appender.CARBON_LOGFILE.policies.time.interval` value, specified in days (the default is 1 day).
-
-    ```
-    appender.CARBON_LOGFILE.policies.time.interval = 1
-    ```
-
-- Rollover based on log file size can be configured using the following steps.
-
-    1. Disable the time-based triggering policy for the CARBON_LOGFILE appender in the log4j2.properties file.
-
-        ```toml
-        appender.CARBON_LOGFILE.policies.time.modulate = false
-        ```
-
-    2. By default, the size limit of the log file is 10MB. You can change the default value using the following configuration.
-
-        ```toml
-        appender.CARBON_LOGFILE.policies.size.size=
-        ```
-
-        If the size of the log file exceeds the value defined in the `appender.CARBON_LOGFILE.policies.size.size` property, the content is copied to a backup file and logs continue to be added to a new, empty log file.
-
-    3. Append a timestamp (MM-dd-yyyy) to the file pattern `appender.CARBON_LOGFILE.filePattern`.
-
-        !!! Note
-            When size-based log rollover is enabled, the timestamp must be appended to the file pattern to differentiate the backup file names. Otherwise, the current backup file will be replaced by the next backup created on the same day, since both files will have the same name (e.g., wso2carbon-12-16-2019.log).
-
-        ```toml
-        appender.CARBON_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/wso2carbon-%d{MM-dd-yyyy}-%i.log
-        ```
-
-- The following property under the `CARBON_LOGFILE` appender limits the number of backup files. Change its value as required.
-
-    ```toml
-    appender.CARBON_LOGFILE.strategy.max
-    ```
-
-    !!! Note
-        This works only with the size-based rolling approach. For the time-based rolling approach, you need to add the following configuration to delete the older files.
-
-        ```
-        appender.CARBON_LOGFILE.strategy.action.type = Delete
-        appender.CARBON_LOGFILE.strategy.action.basepath = ${sys:carbon.home}/repository/logs/
-        appender.CARBON_LOGFILE.strategy.action.maxdepth = 1
-        appender.CARBON_LOGFILE.strategy.action.condition.type = IfLastModified
-        appender.CARBON_LOGFILE.strategy.action.condition.age = 3D
-        appender.CARBON_LOGFILE.strategy.action.PathConditions.type = IfFileName
-        appender.CARBON_LOGFILE.strategy.action.PathConditions.glob = wso2carbon-*
-        ```
-
-    You can change the `appender.CARBON_LOGFILE.strategy.action.condition.age` parameter to match files that are as old as or older than the specified duration.
-
-## Managing the growth of audit log files
-
-- Rollover based on a time period can be configured by changing the `appender.AUDIT_LOGFILE.policies.time.interval` value, specified in days (the default is 1 day).
-
-    ```
-    appender.AUDIT_LOGFILE.policies.time.interval = 1
-    ```
-
-- Rollover based on log file size can be configured using the following steps.
-
-    1. Disable the time-based triggering policy for the AUDIT_LOGFILE appender in the log4j2.properties file.
-
-        ```toml
-        appender.AUDIT_LOGFILE.policies.time.modulate = false
-        ```
-
-    2. Add the following configuration to the `log4j2.properties` file to enable the size-based triggering policy.
-
-        ```toml
-        appender.AUDIT_LOGFILE.policies.size.modulate = true
-        ```
-
-    3. By default, the size limit of the log file is 10MB. You can change the default value using the following configuration.
-
-        ```toml
-        appender.AUDIT_LOGFILE.policies.size.size=
-        ```
-
-        If the size of the log file exceeds the value defined in the `appender.AUDIT_LOGFILE.policies.size.size` property, the content is copied to a backup file, and logs continue to be added to a new, empty log file.
-
-    4. 
Append a timestamp (MM-dd-yyyy) to the file pattern `appender.AUDIT_LOGFILE.filePattern`.
-
-        !!! Note
-            When size-based log rollover is enabled, the timestamp must be appended to the file pattern to differentiate the backup file names. Otherwise, the current backup file will be replaced by the next backup created on the same day, since both files will have the same name (e.g., audit-12-16-2019.log).
-
-        ```toml
-        appender.AUDIT_LOGFILE.filePattern = ${sys:carbon.home}/repository/logs/audit-%d{MM-dd-yyyy}-%i.log
-        ```
-
-- The following property under the `AUDIT_LOGFILE` appender limits the number of Audit Log backup files. Change its value as required.
-
-    ```toml
-    appender.AUDIT_LOGFILE.strategy.max
-    ```
\ No newline at end of file
diff --git a/en/docs/administer/logging-and-monitoring/logging/masking-sensitive-information-in-logs.md b/en/docs/administer/logging-and-monitoring/logging/masking-sensitive-information-in-logs.md
deleted file mode 100644
index ddbf3495bf..0000000000
--- a/en/docs/administer/logging-and-monitoring/logging/masking-sensitive-information-in-logs.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Masking Sensitive Information in Logs
-
-Business-sensitive information can end up in the WSO2 product console and/or the WSO2 Carbon log files. When these logs are analyzed, that information is exposed to anyone who reads them.
-
-To avoid this potential security pitfall, users can mask sensitive information in the log file at the time of logging. With this feature, you can define patterns that should be masked in the logs. This is particularly useful for values such as credit card numbers and access tokens.
-
-To configure this feature, follow the instructions below.
-
-### Enabling log masking
-
-1. Log masking is not enabled by default in API Manager. Therefore, you need to manually enable it and configure the required masking patterns.
-
-2. To enable log masking, navigate to `/repository/conf/log4j2.properties` and make the necessary changes. The masking feature is enabled by adding an additional `m` after the `%m` in the `layout.pattern`. Add the additional `m` to each appender in which you want values to be masked, as shown below.
-
-    ```java
-    appender.CARBON_CONSOLE.layout.pattern = [%d{DEFAULT}] %5p - %c{1} %mm%n
-    ```
-
-3. The masking patterns are configured in `/repository/conf/wso2-log-masking.properties`. You can change its default configurations in `/repository/conf/deployment.toml`.
-
-### The masking pattern file
-
-The masking pattern file is a property file that can contain one or more masking patterns. The following is a sample configuration that shows how to mask credit card numbers in the logs.
-
-Navigate to `/repository/conf/deployment.toml` and add the following configuration.
-
-```properties
-[masking_pattern.properties]
-"CREDIT_CARD_VISA" = "4[0-9]{6,}$"
-"CREDIT_CARD_MASTER" = "(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}"
-"CREDIT_CARD_AMEX" = "[34|37][0-9]{14}$"
-```
-
-With this configuration, each log line is checked against all the configured patterns. If any match is found, it is masked with ‘\*\*\*\*\*’.
-
-!!! danger "Using single quotes in TOML configs to avoid parsing escape characters"
-    If a string defined in the `deployment.toml` file is within double quotes, it is parsed along with the escape characters. To avoid this, use single quotes when you need to add escape characters, as shown in the example below.
-    ```
-    [masking_pattern.properties]
-    "ACCT_ID" = '(?<=accountId\':)(.*)(?=\')'
-    "ACCT_ID.replace_pattern"='(.?).(?=.*)'
-    "ACCT_ID.replacer"="*"
-    ```
-
-!!! warning
-    There can be a performance impact when using this feature with many masking patterns, since each log line is matched against each pattern. It is therefore advisable to use only the patterns you really need.
diff --git a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-secondary-user-stores-mi.md b/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-secondary-user-stores-mi.md
deleted file mode 100644
index efe11cf399..0000000000
--- a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-secondary-user-stores-mi.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Configuring Secondary User Stores
-
-For user management purposes, the WSO2 Micro Integrator can connect to several secondary user stores.
-
-Once configured, users from the primary and secondary user store(s) can be authenticated and authorized for integration use cases.
-
-!!! info
-    **Note**: It is mandatory to have a primary user store configured before adding secondary user stores. See
-    [configuring an LDAP user store]({{base_path}}/install-and-setup/setup/mi-setup/user_stores/setting_up_a_userstore/#configuring-an-ldap-user-store).
-
-## Enabling the user-core feature
-
-To deploy secondary user stores, you need to enable the user-core feature, which is disabled by default.
-
-To enable the user-core feature, change the following entry to `false` in the `micro-integrator.sh`/`micro-integrator.bat` file, as applicable.
-```bash
--DNonUserCoreMode=true \
-```
-
-## Adding a new secondary user store
-
-1. [Download]({{base_path}}/assets/attachments/migration/micro-integrator/secondary-userstore-templates.zip) the template files provided and change the URLs and credentials accordingly.
-2. Create a directory named **userstores** in the `/repository/deployment/server` directory.
-3. Add the modified template files to the above directory.
-4. Rename each file to the domain name you choose for the secondary user store.
-
-!!! note
-    The secondary user store configuration file must have the same name as the domain, with an underscore (_) in place of the period. For example, if the domain is 'wso2.com', name the file `wso2_com.xml`. 
One file contains the definition for one user store domain.
-
-    Users from secondary user stores have non-admin (read-only) permissions in the management dashboard.
-
-
-
diff --git a/en/docs/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api.md b/en/docs/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api.md
index 0a8ce64627..b000496a69 100644
--- a/en/docs/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api.md
+++ b/en/docs/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api.md
@@ -2,7 +2,7 @@
 Web services are designed to provide rich functionality to end users by supporting interoperable interactions over a network. They are mainly categorized into two types: SOAP and RESTful services. However, for reasons such as flexibility, scalability, complexity, and performance, RESTful services became a better fit for modern clients. For this reason, exposing a SOAP endpoint as a RESTful service is helpful, as it provides more flexibility when integrating web services with various end-user applications.
 
-This guide will explain you on how to perform the SOAP to REST transformation using [WSO2 Micro Integrator]({{base_path}}/integrate/integration-overview/) and how to deploy the converted API in Choreo Connect gateway to provision the [key features]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/getting-started/supported-features/) that the Choreo Connect is supporting by using an example. 
+This guide explains how to perform SOAP to REST transformation using [WSO2 Micro Integrator](https://mi.docs.wso2.com/en/latest/get-started/introduction/) and how to deploy the converted API in the Choreo Connect Gateway, using an example, to provision the [key features]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/getting-started/supported-features/) that Choreo Connect supports.
 
 The following diagram illustrates the request flow from the client to the backend and the response flow from the backend to the client through WSO2 Micro Integrator for this example. WSO2 Micro Integrator acts as the backend for the Choreo Connect Gateway, handling the `JSON` to `SOAP` message transformation as well as the `GET` to `POST` method transformation.
 
@@ -27,7 +27,7 @@ Following steps will guide you through creation of SOAP to REST transformation u
 !!! info "Before you begin"
-    This guide assumes that you already have installed WSO2 Integration Studio, if not you can follow up instructions on [Installing WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio/).
+    This guide assumes that you have already installed WSO2 Integration Studio. If not, follow the instructions in [Installing WSO2 Integration Studio](https://mi.docs.wso2.com/en/latest/develop/installing-wso2-integration-studio/).
 
 !!! Tip
     The project that will be created in **Step 1** and **Step 2** below is available in the `/samples/rest-to-soap-conversion` directory of the [Choreo Connect GitHub repository](https://github.com/wso2/product-microgateway).
 
     If you wish to use it, go through the following steps to import it directly into Integration Studio.
 
     1. Clone the [Choreo Connect repository](https://github.com/wso2/product-microgateway).
-    2. 
Import the `PhoneVerify` sample project to the Integration studio. Refer [Importing Projects]({{base_path}}/integrate/develop/importing-projects/) for more information.
+    2. Import the `PhoneVerify` sample project into Integration Studio. See [Importing Projects](https://mi.docs.wso2.com/en/latest/develop/importing-projects/) for more information.
     3. [Configure Micro Integrator to Update APIM Service Catalog]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api/#step-3-configure-micro-integrator-to-update-apim-service-catalog) if required.
     4. [Deploy the Artifacts]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-rest-to-soap-api/#step-5-deploy-the-artifacts-in-micro-integrator).
 
@@ -45,7 +45,7 @@ Select **File->New->Integration Project** from Integration studio and enter the
 
 Make sure to tick **Create ESB Configs** and **Create Composite Exporter** when creating the Integration project.
 
-For more information refer to [Creating an Integration Project]({{base_path}}/integrate/develop/create-integration-project/).
+For more information, refer to [Creating an Integration Project](https://mi.docs.wso2.com/en/latest/develop/create-integration-project/).
 
 ### Step 2 - Create the REST API
 
@@ -54,10 +54,10 @@
 3. Select the **Import API Artifact** option and provide the file having [this synapse configuration](https://raw.githubusercontent.com/wso2/product-microgateway/rest-to-soap-cc/samples/rest-to-soap-conversion/PhoneVerification/PhoneVerificationConfigs/src/main/synapse-config/api/PhoneVerify.xml).
 
 !!! Note
-    If you want to design your API from scratch, select **Create New API Artifact** option in the above step and create it using Integration Studio. For more information on this refer documentation on [Creating a REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api/). 
+    If you want to design your API from scratch, select the **Create New API Artifact** option in the above step and create the API using Integration Studio. For more information, refer to the documentation on [Creating a REST API](https://mi.docs.wso2.com/en/latest/develop/creating-artifacts/creating-an-api/).
 
 !!! Tip
-    This example is using [PayloadFactory Mediator]({{base_path}}/reference/mediators/payloadfactory-mediator/) to transform `JSON` content to `SOAP`. Also a similar example can be find in [Converting JSON to SOAP]({{base_path}}/integrate/examples/message_transformation_examples/json-to-soap-conversion/).
+    This example uses the [PayloadFactory Mediator](https://mi.docs.wso2.com/en/latest/reference/mediators/payloadfactory-mediator/) to transform `JSON` content to `SOAP`. A similar example can be found in [Converting JSON to SOAP](https://mi.docs.wso2.com/en/latest/learn/examples/message-transformation-examples/json-to-soap-conversion/).
 
 ### Step 3 - Configure Micro Integrator to Update APIM Service Catalog
 
@@ -120,9 +120,9 @@ This step will show you how to Update the swagger definition to change some impo
 
 ### Step 5 - Deploy the Artifacts in Micro Integrator
 
-You have multiple options to deploy your REST API. See the [Deploying Artifacts]({{base_path}}/integrate/develop/deploy-artifacts/#deploy-artifacts-in-the-embedded-micro-integrator) for more information.
+You have multiple options to deploy your REST API. See [Deploying Artifacts](https://mi.docs.wso2.com/en/latest/develop/deploy-artifacts/#deploy-artifacts-in-the-embedded-micro-integrator) for more information.
 
-In this example we are using the [Embedded Micro Integrator]({{base_path}}/integrate/develop/deploy-artifacts/#deploy-artifacts-in-the-embedded-micro-integrator) in your WSO2 Integration Studio. 
+In this example, we use the [Embedded Micro Integrator](https://mi.docs.wso2.com/en/latest/develop/deploy-artifacts/#deploy-artifacts-in-the-embedded-micro-integrator) in WSO2 Integration Studio.
 
 The below steps will show you how to deploy your REST API.
 
@@ -220,7 +220,7 @@ apictl init PhoneVerify --oas http://localhost:8290/PhoneVerify?swagger.json
 ```
 
 !!! Note
-    The Swagger URL `http://localhost:8290/PhoneVerify?swagger.json` is used to get the Swagger definition of the deployed integration service in previous steps. You may read more details on this from [Using Swagger Documents]({{base_path}}/integrate/develop/advanced-development/using-swagger-for-apis/).
+    The Swagger URL `http://localhost:8290/PhoneVerify?swagger.json` is used to get the Swagger definition of the integration service deployed in the previous steps. For more details, see [Using Swagger Documents](https://mi.docs.wso2.com/en/latest/develop/advanced-development/using-swagger-for-apis/).
 
 #### Step 2 - Deploy the API
 
@@ -266,6 +266,6 @@ curl -X 'GET' 'https://localhost:9095/phoneverify/checkphonenumber?PhoneNumber=8
 
 ## See Also
 
-- [Converting JSON to SOAP]({{base_path}}/integrate/examples/message_transformation_examples/json-to-soap-conversion/)
-- [Exposing a SOAP Endpoint as a RESTful API]({{base_path}}/integrate/examples/rest_api_examples/enabling-rest-to-soap/)
-- [Exposing an Integration Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial/)
+- [Converting JSON to SOAP](https://mi.docs.wso2.com/en/latest/learn/examples/message-transformation-examples/json-to-soap-conversion/)
+- [Exposing a SOAP Endpoint as a RESTful API](https://mi.docs.wso2.com/en/latest/learn/examples/rest-api-examples/enabling-rest-to-soap/)
+- [Exposing an Integration Service as a Managed API](https://mi.docs.wso2.com/en/latest/learn/integration-tutorials/service-catalog-tutorial/)
diff --git 
a/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md b/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md index 2ca2747394..43c06581b7 100644 --- a/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md +++ b/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md @@ -16,7 +16,7 @@ This custom policy adds a full trace log that gets printed when you invoke a par [![Integration Studio]({{base_path}}/assets/img/learn/api-gateway/message-mediation/integration-studio.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/integration-studio.png) !!! tip - To learn more about using WSO2 Integration Studio, see the [WSO2 Integration Studio]({{base_path}}/integrate/develop/wso2-integration-studio/) documentation. + To learn more about using WSO2 Integration Studio, see the [WSO2 Integration Studio](https://mi.docs.wso2.com/en/latest/develop/wso2-integration-studio/) documentation. 4. Click **Window -> Perspective -> Open Perspective -> Other** to get the Perspective options. diff --git a/en/docs/design/api-policies/regular-gateway-policies/transforming-api-message-payload.md b/en/docs/design/api-policies/regular-gateway-policies/transforming-api-message-payload.md index 99155c8b9b..eed6ad5d45 100644 --- a/en/docs/design/api-policies/regular-gateway-policies/transforming-api-message-payload.md +++ b/en/docs/design/api-policies/regular-gateway-policies/transforming-api-message-payload.md @@ -20,10 +20,10 @@ from the message. As with message builders, the message formatter is selected ba !!! info Also see the following sections in the documentation. 
The integration runtime is used to implement the API Gateway through which API messages are transformed: - - [Accessing content from JSON payloads]({{base_path}}/integrate/examples/json_examples/json-examples/#accessing-content-from-json-payloads) - - [Logging JSON payloads]({{base_path}}/integrate/examples/json_examples/json-examples/#logging-json-payloads) - - [Constructing and transforming JSON payloads]({{base_path}}/integrate/examples/json_examples/json-examples/#constructing-and-transforming-json-payloads) - - [Troubleshooting, debugging, and logging]({{base_path}}/integrate/examples/json_examples/json-examples/#troubleshooting-debugging-and-logging) + - [Accessing content from JSON payloads](https://mi.docs.wso2.com/en/latest/learn/examples/json-examples/json-examples/#accessing-content-from-json-payloads) + - [Logging JSON payloads](https://mi.docs.wso2.com/en/latest/learn/examples/json-examples/json-examples/#logging-json-payloads) + - [Constructing and transforming JSON payloads](https://mi.docs.wso2.com/en/latest/learn/examples/json-examples/json-examples/#constructing-and-transforming-json-payloads) + - [Troubleshooting, debugging, and logging](https://mi.docs.wso2.com/en/latest/learn/examples/json-examples/json-examples/#troubleshooting-debugging-and-logging) ### JSON message builders and formatters @@ -452,7 +452,7 @@ The response payload will look like this: Note that we have used the Property mediator to mark the outgoing payload to be formatted as JSON. For more information about the Property Mediator, see the -[Property Mediator]({{base_path}}/reference/mediators/property-mediator) page. +[Property Mediator](https://mi.docs.wso2.com/en/latest/reference/mediators/property-mediator/) page. 
``` xml diff --git a/en/docs/design/create-api/create-an-api-using-a-service.md b/en/docs/design/create-api/create-an-api-using-a-service.md index 34db194160..d3373c533f 100644 --- a/en/docs/design/create-api/create-an-api-using-a-service.md +++ b/en/docs/design/create-api/create-an-api-using-a-service.md @@ -10,7 +10,7 @@ Create and publish your integration service or streaming integration service to More information: -- For information on creating and publishing a REST API based on an integration service, see [Exposing an Integration Service as a Managed API]({{base_path}}/integrate/develop/working-with-service-catalog). +- For information on creating and publishing a REST API based on an integration service, see [Exposing an Integration Service as a Managed API](https://mi.docs.wso2.com/en/latest/develop/working-with-service-catalog/). - For information on creating and publishing a Streaming API based on a streaming integration service, see [Exposing a Stream as a Managed API]({{base_path}}/use-cases/streaming-usecase/exposing-stream-as-managed-api-in-service-catalog/). diff --git a/en/docs/design/endpoints/resiliency/endpoint-timeouts.md b/en/docs/design/endpoints/resiliency/endpoint-timeouts.md index e0118beb13..11522fa876 100644 --- a/en/docs/design/endpoints/resiliency/endpoint-timeouts.md +++ b/en/docs/design/endpoints/resiliency/endpoint-timeouts.md @@ -102,4 +102,4 @@ The following are Advanced Endpoint Configurations that you can configure for bo -For more information on endpoints and how to add, edit, or delete them, see [Endpoint Properties]({{base_path}}/reference/synapse-properties/endpoint-properties). +For more information on endpoints and how to add, edit, or delete them, see [Endpoint Properties](https://mi.docs.wso2.com/en/latest/reference/synapse-properties/endpoint-properties/). 
diff --git a/en/docs/get-started/apim-architecture.md b/en/docs/get-started/apim-architecture.md
index d26e9d81b4..96ca4af33f 100644
--- a/en/docs/get-started/apim-architecture.md
+++ b/en/docs/get-started/apim-architecture.md
@@ -108,7 +108,7 @@ There are multiple developer-friendly tools that can be used to help you work wi
 
 #### Integration Studio
 
-The WSO2 API Manager and the Micro Integrator are coupled with [WSO2 Integration Studio]({{base_path}}/integrate/develop/wso2-integration-studio); a comprehensive graphical integration flow designer for building integrations using a simple drag-and-drop functionality.
+The WSO2 API Manager and the Micro Integrator are coupled with [WSO2 Integration Studio](https://mi.docs.wso2.com/en/latest/develop/wso2-integration-studio/), a comprehensive graphical integration flow designer for building integrations using simple drag-and-drop functionality.
 
 Integration Studio
 
diff --git a/en/docs/get-started/integration-quick-start-guide.md b/en/docs/get-started/integration-quick-start-guide.md
deleted file mode 100644
index af8bad91f8..0000000000
--- a/en/docs/get-started/integration-quick-start-guide.md
+++ /dev/null
@@ -1,389 +0,0 @@
-# Quick Start Guide - Integration
-
-Let's get started with WSO2 Micro Integrator by running a simple integration use case in your local environment.
-
-## Before you begin...
-
-1. Install Java SE Development Kit (JDK) version 11 and set the `JAVA_HOME` environment variable.
-
-    !!! Info
        For information on the compatible JDK types and setting the `JAVA_HOME` environment variable for different operating systems, see [Setup and Install]({{base_path}}/install-and-setup/install/installing-the-product/installing-api-m-runtime/).
-
-2. Go to the [WSO2 Micro Integrator web page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Zip Archive** to download the Micro Integrator distribution as a ZIP file.
-3. 
Optionally, navigate to the [API Manager Tooling web page](https://wso2.com/api-management/tooling/) and download WSO2 Integration Studio.
-
-    !!! Info
-        For more information, see the [installation instructions]({{base_path}}/install-and-setup/install-and-setup-overview/#installing_1).
-
-4. Download the [sample files]({{base_path}}/assets/attachments/quick-start-guide/mi-qsg-home.zip). From this point onwards, let's refer to this directory as ``.
-5. Download [curl](https://curl.haxx.se/) or a similar tool that can call an HTTP endpoint.
-6. Optionally, go to the [WSO2 API Manager website](https://wso2.com/api-management/), click **TRY IT NOW**, and then click **Zip Archive** to download the API Manager distribution as a ZIP file.
-
-## What you'll build
-
-This is a simple service orchestration scenario. The scenario is about a basic healthcare system where the Micro Integrator is used to integrate two back-end hospital services to provide information to the client.
-
-Most healthcare centers have a system that is used to make doctor appointments. To check the availability of the doctors for a particular time, users typically need to visit the hospitals or use the separate online system of each healthcare center. Here, we make it easier for patients by orchestrating those isolated systems for each healthcare provider and exposing a single interface to the users.
-
-
-
-!!! Tip
-    You may export `/HealthcareIntegrationProject` to Integration Studio to view the project structure.
-
-In the above scenario, the following takes place:
-
-1. The client makes a call to the Healthcare API created using Micro Integrator.
-
-2. The Healthcare API calls the Pine Valley Hospital back-end service and gets the queried information.
-
-3. The Healthcare API calls the Grand Oak Hospital back-end service and gets the queried information.
-
-4. The response is returned to the client with the required information. 
- -Both Grand Oak Hospital and Pine Valley Hospital have services exposed over the HTTP protocol. - -The Pine Valley Hospital service accepts a POST request in the following service endpoint URL. - -```bash -http://:/pineValley/doctors -``` - -The Grand Oak Hospital service accepts a GET request in the following service endpoint URL. - -```bash -http://:/grandOak/doctors/ -``` - -The expected payload should be in the following JSON format: - -```bash -{ - "doctorType": "" -} -``` - -Let’s implement a simple integration solution that can be used to query the availability of doctors for a particular category from all the available healthcare centers. - - -### Step 1 - Set up the workspace - -To set up the integration workspace for this quick start guide, we will use an integration project that was built using WSO2 Integration Studio: - -1. Extract the downloaded WSO2 Micro Integrator and sample files into the same directory location. - -2. Navigate to the `` directory. -The following project files and executable back-end services are available in the ``. - -- **HealthcareIntegrationProject/HealthcareIntegrationProjectConfigs**: This is the ESB Config module with the integration artifacts for the healthcare service. This service consists of the following REST API: - - - -
- HealthcareAPI.xml - ```xml - - - - - - - - - - - - - - - - { - "doctorType": "$1" - } - - - - - - - - - - - - - - - - - - - - - - - - ``` -
- 
-    It also contains the following two files in the metadata folder.
-
-    !!! Tip
-        This data is used later in this guide by the API management runtime to generate the managed API proxy.
-
-    - **HealthcareAPI_metadata.yaml**: This file contains the metadata of the integration service. The default **serviceUrl** is configured as `http://localhost:8290/healthcare`. If you are running Micro Integrator on a different host and port, you may have to change these values.
-    - **HealthcareAPI_swagger.yaml**: This Swagger file contains the OpenAPI definition of the integration service.
-
- -- **HealthcareIntegrationProject/HealthcareIntegrationProjectCompositeExporter**: This is the Composite Application Project folder, which contains the packaged CAR file of the healthcare service. - -- **Backend**: This contains an executable .jar file that contains mock back-end service implementations for the Pine Valley Hospital and Grand Oak Hospital. - -- **bin**: This contains a script to copy artifacts and run the backend service. - -### Step 2 - Running the integration artifacts - -Follow the steps given below to run the integration artifacts we developed on a Micro Integrator instance that is installed on a VM. - -1. Run `run.sh/run.bat` script in `/bin` based on your operating system to start up the workspace. - 1. Open a terminal and navigate to the `/bin` folder. - 2. Execute the relevant OS specific command: - - === "On MacOS/Linux/CentOS" - ```bash - sh run.sh - ``` - - === "On Windows" - ```bash - run.bat - ``` - - !!! Tip - The script assumes `MI_HOME` and `` are located in the same directory. It carries out the following steps. - - - Start the back-end services. - - Two mock hospital information services are available in the `DoctorInfo.jar` file located in the `/Backend/` directory. - - To manually start the service, open a terminal window, navigate to the `/Backend/` folder, and use the following command to start the services: - - ``` bash - java -jar DoctorInfo.jar - ``` - - - Deploy the Healthcare service. - - Copy the CAR file of the Healthcare service (HealthcareIntegrationProjectCompositeExporter_1.0.0-SNAPSHOT.car) from the `/HealthcareIntegrationProject/HealthcareIntegrationProjectCompositeExporter/target/` directory to the `/repository/deployment/server/carbonapps` directory. - -2. Start the Micro Integrator. - - 1. Execute the relevant command in a terminal based on the OS: - - === "On MacOS/Linux/CentOS" - ```bash - sh micro-integrator.sh - ``` - === "On Windows" - ```bash - micro-integrator.bat - ``` - -4. 
(Optional) Start the Dashboard. - - If you want to view the integration artifacts deployed in the Micro Integrator, you can start the dashboard. The instructions on running the MI dashboard are given in the installation guide: - - 1. [Install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi-dashboard) the MI dashboard. - 2. [Start]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi-dashboard) the MI dashboard. - - You can now test the **HealthcareIntegrationService** that you just generated. - -### Step 3 - Testing the integration service - -1. Invoke the healthcare service. - - Open a terminal and execute the following curl command to invoke the service: - - ```bash - curl -v http://localhost:8290/healthcare/doctor/Ophthalmologist - ``` - - Upon invocation, you should be able to observe the following response: - - ```json - [ - [ - { - "name":"John Mathew", - "time":"03:30 PM", - "hospital":"Grand Oak" - }, - { - "name":"Allan Silvester", - "time":"04:30 PM", - "hospital":"Grand Oak" - } - ], - [ - { - "name":"John Mathew", - "time":"07:30 AM", - "hospital":"pineValley" - }, - { - "name":"Roma Katherine", - "time":"04:30 PM", - "hospital":"pineValley" - } - ] - ] - ``` - **Congratulations!** - Now you have created your first integration service. Optionally, you can follow the steps given below to expose the service as a Managed API in API Manager. - -## Exposing an Integration Service as a Managed API - -The REST API you deployed in the Micro Integrator is an **integration service** for the API Manager. Now, let's look at how you can expose the integration service to the API Management layer and generate a managed API by using the service. - -### Step 1 - Expose your integration as a service - -1. Start the API Manager runtime: - - 1. Extract the API Manager ZIP file. - 2. Start WSO2 API Manager: - - Open a terminal, navigate to the `/bin` directory, and execute the relevant command. 
- - - === "On MacOS/Linux" - ```bash - ./api-manager.sh - ``` - - === "On Windows" - ```bash - api-manager.bat --run - ``` - -2. Update and start the Micro Integrator runtime: - - 1. Stop the Micro Integrator. - - 2. Uncomment the following configuration from the `/conf/deployment.toml` file of the Micro Integrator. - - !!! Tip - The default username and password for connecting to the API gateway is `admin:admin`. - - - ```toml - [[service_catalog]] - apim_host = "https://localhost:9443" - enable = true - username = "admin" - password = "admin" - ``` - - 3. Start the Micro Integrator again. - - You will see the following in the server start-up log. - - ```bash - Successfully updated the service catalog - ``` - -3. Access the integration service from the **API Publisher**: - - 1. Sign in to the **API Publisher**: `https://localhost:9443/publisher` - - !!! Tip - Use `admin` as the user name and password. - - 2. Select the **Services** from the menu. - - - - 3. See that the `HealthcareAPI` is listed as a service. -` ` -### Step 2 - Create a managed API using the Integration Service - -1. Click on the `HealthcareAPI` that is in the service catalog. - -2. Click **Create API**. - - This opens the **Create API** dialog box with the API details that are generated based on the service. - - create api dialog box - -3. Update the API name, context, and version if required, and click **Create API**. - - The overview page of the API that you just created appears. - - apis list - -4. Navigate to **Develop -> API Configurations -> Endpoints** from the left menu. You will see that **Service Endpoint** is already selected and the production endpoint is already provided. - - Select the `Sandbox Endpoint`, add the endpoint `http://localhost:8290/healthcare`, and **Save**. - -5. Update the portal configurations and API configurations as required. - - Now, you have successfully created an API using the service. - -### Step 3 - Publish the managed API - -1. 
Navigate to **Deployments** and click **Deploy** to create a revision to deploy in the default Gateway environment. - -2. Navigate to **Lifecycle** and click **Publish** to publish the API in the Gateway environment. - - - - If the API is published successfully, the lifecycle state will shift to **PUBLISHED**. - -### Step 4 - Invoke the Managed `HealthcareAPI` via Developer Portal - -1. Navigate to the **Developer Portal** by clicking on the `View In Dev Portal` at the top menu. - - - -2. Sign in using the default username/password `admin/admin`. You will be redirected to the **APIs**. - -3. Under **APIs**, you will see the published `HealthcareAPI`. Click on it to navigate to the Overview of the API. - -4. Click `Try Out`. This will create a subscription to the API using `Default Application`. - - - -5. Click `GET TEST KEY` to get a test token to invoke the API. - - - -6. Click **GET** resource `/doctor​/{doctorType}`. Click on **Try It Out**. Enter `Ophthalmologist` in the doctorType field and click **Execute**. - - - - -## What's next? - -- [Develop your first integration solution]({{base_path}}/integrate/develop/integration-development-kickstart). -- Try out the **examples** available in the [Integrate section of our documentation]({{base_path}}/integrate/integration-overview/). -- Try out the entire developer guide on [Exposing an Integration Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial/). -- Try out the entire developer guide on [Exposing an Integration SOAP Service as a Managed API]({{base_path}}/tutorials/integration-tutorials/service-catalog-tutorial-for-proxy-services/). \ No newline at end of file diff --git a/en/docs/get-started/overview.md b/en/docs/get-started/overview.md index f4abd3b4c5..0cba9c0cfb 100644 --- a/en/docs/get-started/overview.md +++ b/en/docs/get-started/overview.md @@ -29,7 +29,7 @@ The following are some of the main capabilities of the product.
You can implement an API-led integration strategy by easily combining the API management layer and the integration layer of the product's platform.
@@ -137,7 +137,7 @@ The following are some of the main capabilities of the product. @@ -155,7 +155,7 @@ The following are some of the main capabilities of the product. diff --git a/en/docs/includes/integration/pull-content-migration-esb-mi.md b/en/docs/includes/integration/pull-content-migration-esb-mi.md deleted file mode 100644 index d6341cb27b..0000000000 --- a/en/docs/includes/integration/pull-content-migration-esb-mi.md +++ /dev/null @@ -1,35 +0,0 @@ -## Why upgrade to WSO2 API-M 4.2.0? - -Listed below are some of the advantages of moving to API-M 4.2.0 from the ESB. - -- The Micro Integrator of API-M 4.2.0 is now the most improved version of the battle-tested WSO2 ESB runtime. - - !!! Tip - WSO2 ESB 5.0, the ESB profile of WSO2 EI 6.x, the Micro Integrator of WSO2 EI 7.x, as well as the Micro Integrator of WSO2 API-M 4.0.0 and WSO2 API-M 4.2.0 all contain the same version of the WSO2 ESB runtime. - -- All the ESB runtimes of WSO2 can use the same developer tool ([WSO2 Integration Studio](../../../integrate/develop/wso2-integration-studio)) for developing integrations. - -- All the integration capabilities that you used in the ESB can be used in the Micro Integrator with minimal changes. - -- The Micro Integrator contains improvements that ease your product experience. - - !!! Note - The [Toml-based configuration strategy](../../../reference/config-catalog-mi) in API-M 4.2.0 replaces the XML configurations in previous versions of the ESB runtime. Some of the features are [removed from WSO2 Micro Integrator](../../../get-started/about-this-release/#compare-this-release-with-previous-esbs) as they are not frequently used. - -Upgrading to WSO2 API-M 4.2.0 is recommended for the following requirements: - -- You need to expose integrations as managed APIs so that integration solutions can be managed and monetized in an API marketplace. -- You need to switch to a microservices architecture from the conventional centralized architecture. 
-- You need a more lightweight, user-friendly version of the battle-tested WSO2 ESB. -- You need a more lightweight, container-friendly runtime in a centralized architecture. -- You need native support for Kubernetes. - - -## Before you begin - -Note the following: - -- Ports are different in the Micro Integrator of API-M 4.2.0. Find out about [ports in the Micro Integrator](../../../install-and-setup/setup/reference/default-product-ports/#micro-integrator-ports). - -- The Micro Integrator of API-M 4.2.0 contains changes that impact your migration process. Be sure to read the [Comparison: ESB vs the Micro Integrator](../../../get-started/about-this-release/#compare-this-release-with-previous-esbs) before you start the migration. - -- Note that API-M 4.2.0 does not allow manual patches. You can use [WSO2 Updates](https://updates.docs.wso2.com/en/latest/updates/overview) to get the latest fixes or updates for this release. \ No newline at end of file diff --git a/en/docs/includes/integration/pull-content-registry-migration.md b/en/docs/includes/integration/pull-content-registry-migration.md deleted file mode 100644 index 6257289daa..0000000000 --- a/en/docs/includes/integration/pull-content-registry-migration.md +++ /dev/null @@ -1,231 +0,0 @@ -!!! Info "Before you begin" - Note the following: - - - The Micro Integrator uses a [file-based registry](../../../install-and-setup/setup/mi-setup/deployment/file_based_registry) instead of a database (which is used in your WSO2 EI version). - - Your WSO2 EI registry may have the following partitions: Local, Config, and Gov. However, you only need to migrate the Config and Gov registry partitions. See the instructions on configuring [registry partitions in the Micro Integrator](../../../install-and-setup/setup/mi-setup/deployment/file_based_registry). - - Message processor tasks are stored in the registry with a new naming convention in the Micro Integrator. 
Therefore, all entries in the registry with the `MSMP` prefix (which correspond to message processor tasks) should not be migrated to the Micro Integrator. New entries will be automatically created when you start the Micro Integrator server. - - If you have shared the registry of your WSO2 EI among multiple nodes, you can do the same for the file-based registry of the Micro Integrator. However, note that registry mounting/sharing is only required for [**persisting message processor states** among nodes of the Micro Integrator](../../../install-and-setup/setup/mi-setup/deployment/deploying_wso2_ei/#registry-synchronization-sharing). - - The recommended approach to deploying datasources in the Micro Integrator is to create a Datasources project in Integration Studio and deploy it as a CAR file. Therefore, if you have any datasources deployed in the EI registry, manually download the content from the EI Management Console, create the datasource definitions in Integration Studio, and export them as a CAR application. See the instructions on [Create Datasource Configs](../../../integrate/develop/create-datasources). - -You can migrate the registry resources by using the **registry migration tool** as follows: - -1. Download the [tool](https://github.com/wso2-docs/WSO2_EI/blob/master/RegistryMigration-EI6.x.xtoMI/registry-migration-service-1.0.0.jar) and save it to a location on your computer. - -2. Execute one of the commands given below to start the tool. - - - To start the tool without a log file: - - ```bash - java -jar /registry-migration-service-1.0.0.jar - ``` - - - To start the tool with a log file: - - !!! Tip - Replace `` with the location where you want the log file to be created. - - ```bash - java -Dlog.file.location= -jar /registry-migration-service-1.0.0.jar - ``` - -3. Specify the following input values to log in to your WSO2 EI server from the migration tool: - -
- - - - - - 
- Input Value - - Description -
- EI Server URL - - Specify the EI server URL with the servlet port. The default is https://localhost:9443. -
- Internal Truststore Location of the EI Server - - Specify the location of the internal truststore used by the EI server. -
- Internal Truststore Type of EI Server - - Specify the type of the internal Truststore used by the EI server. The default is JKS. -
- Internal Truststore Password of EI Server - - Specify the password of the internal Truststore used by the EI server. The default is wso2carbon. -
- Username of the EI Server - - Specify the username used to log in to the EI server. The default is admin. -
- Password of the EI Server - - Specify the password used to log in to the EI server. The default is admin. -
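For reference, in a default WSO2 EI installation the internal truststore prompted for above is the standard Carbon client truststore. The sketch below prints the conventional location; the `EI_HOME` path is an assumption — substitute your own installation directory and verify the path if you have customized the keystore configuration:

```shell
# Assumed default truststore location in a WSO2 EI installation.
EI_HOME="${EI_HOME:-/opt/wso2ei-6.6.0}"
TRUSTSTORE="$EI_HOME/repository/resources/security/client-truststore.jks"

# Print the values you would enter at the tool's prompts.
echo "Truststore location: $TRUSTSTORE"
echo "Truststore type: JKS"
```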
- -4. Select one of the following options and proceed. - - - - - - - - - - - - - - -
- Option - - Description -
- Export as a Registry Resource Module - - Recommended. If you select this option, the registry resources are exported as a Registry Resources module, which you import to WSO2 Integration Studio. You can then create a CAR file by selecting resources from the registry resources module. -
- Export as a Carbon Application - - If you select this option, the registry resources in your WSO2 EI instance are exported as a single CAR file, which you directly copy to your Micro Integrator distribution. -
- -5. Specify input values depending on which export option you selected. - - - If you selected **Export as a Registry Resource Module**, follow the steps given below. - - 1. Enter the following input values: - - - - - - - - - - - - - - - - - - - - - - - - - - -
- Input Value - - Description -
- Integration Project Name - - Specify the name of the Integration project. -
- Project’s Group ID - - Specify the group ID of the integration project. The default value is com.example. -
- Project’s Artifact ID - - Specify the artifact ID of the integration project. The default value is the integration project name. -
- Project Version - - Specify the version of the integration project. The default value is 1.0.0. -
- Export Location - - Specify the location where the integration project should be created. -
- - 2. Verify the following: - - - If the process is successful, the **Registry Resource Project** is created in the location you specified. - - A summary report is created at the export location with the file name `registry_export_summary_.txt`. This report states whether each registry resource was exported successfully and also provides reasons in case the export fails. - - 3. [Import the Registry Resource Project](../../../integrate/develop/creating-artifacts/creating-registry-resources/#import-from-file-system) to the Registry Resources module in WSO2 Integration Studio. - - 4. Open the resource editor and make sure that the media type of the resource is set properly. - - ![Registry Resource Editor](../../assets/img/integrate/migration/registry-resource-editor.png) - - 5. Select the required resources from your registry resources project and export a CAR file. - - - If you selected **Export as a Carbon Application**, enter the following input values: - -
- - - - - - 
- Input Value - - Description -
- CAR File Name - - Specify the name of the Carbon application. -
- CAR File Version - - Specify the version of the Carbon application. The default value is 1.0.0. -
- Export Location - - Specify the location where the CAR file should be created. -
- - You should now have a CAR file with the required registry resources. - -6. Copy the CAR file to the `/repository/deployment/server/carbonapps` folder. \ No newline at end of file diff --git a/en/docs/includes/integration/pull-content-user-store-db-id.md b/en/docs/includes/integration/pull-content-user-store-db-id.md deleted file mode 100644 index f96618f9de..0000000000 --- a/en/docs/includes/integration/pull-content-user-store-db-id.md +++ /dev/null @@ -1,11 +0,0 @@ - -!!! Note "About User DB" - If you replace 'WSO2CarbonDB' with a different id in the user DB configuration, you also need to list the id as a datasource under the [realm_manager] section in the deployment.toml file as shown below. - - ```toml - [realm_manager] - data_source = "new_id" - ``` - - Otherwise, the user store database id defaults to 'WSO2CarbonDB' in the realm manager configurations. - \ No newline at end of file diff --git a/en/docs/includes/reference/connectors/deploy-capp.md b/en/docs/includes/reference/connectors/deploy-capp.md deleted file mode 100644 index 6cbb669d37..0000000000 --- a/en/docs/includes/reference/connectors/deploy-capp.md +++ /dev/null @@ -1,12 +0,0 @@ -**Deploying on Micro Integrator** - -You can copy the composite application to the `/repository/deployment/server/carbonapps` folder and start the server. The Micro Integrator starts, and the composite application is deployed. - -You can also view the deployed application through the CLI tool. See the instructions on [managing integrations from the CLI](../../../../install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl/). - -??? note "Click here for instructions on deploying on WSO2 Enterprise Integrator 6" - 1. You can copy the composite application to the `/repository/deployment/server/carbonapps` folder and start the server. - - 2. The WSO2 EI server starts, and you can log in to the Management Console at the https://localhost:9443/carbon/ URL. Provide login credentials. 
The default credentials will be admin/admin. - - 3. You can see that the API is deployed under the API section. \ No newline at end of file diff --git a/en/docs/install-and-setup/install-and-setup-overview.md b/en/docs/install-and-setup/install-and-setup-overview.md index bb4170f5dd..0aa0793320 100644 --- a/en/docs/install-and-setup/install-and-setup-overview.md +++ b/en/docs/install-and-setup/install-and-setup-overview.md @@ -430,461 +430,6 @@ To upgrade to the current API Manager component from a previous version, see the -## Micro Integrator - -This component develops complex integration services that can be exposed as managed APIs. - -### Installing - -To install and run the Micro Integrator on a virtual machine, see the topics given below. - - - - - - - - - - - - - - -
- Installing the Micro Integrator Runtime - - Explains how to download the Micro Integrator runtime as a binary and install it on a virtual machine. -
- Running the Micro Integrator Runtime - - Explains how you can execute the Micro Integrator runtime and start using its features. -
- Running the Micro Integrator as a Windows Service - - Explains how to install and run the Micro Integrator as a Windows service. -
- -To install and run the Micro Integrator Dashboard on a virtual machine, see the topics given below. - - - - - - - - - - -
- Installing the Micro Integrator Dashboard - - Explains how to download the Micro Integrator Dashboard as a binary and install it on a virtual machine. -
- Running the Micro Integrator Dashboard - - Explains how you can execute the Micro Integrator Dashboard and start using its features. -
- -### Setting up - -To set up and configure the Micro Integrator runtime, see the topics given below. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- Applying WSO2 Updates - - Explains how to get the latest updates that are available for a particular release of the Micro Integrator. -
- Data Stores - - Explains how to set up a user store, databases (multiple types), and a file-based registry. The topics covered are as follows: - -
- Securing the Micro Integrator - - Covers the different ways in which you can secure the Micro Integrator and the data it handles. The topics covered are as follows: - 
- Performance - - Explains how to configure the Micro Integrator at different levels to optimize performance. -
  • - Tuning JVM Performance -
  • -
  • - Tuning Network and OS Performance -
  • -
  • - Tuning JDBC Configurations -
  • -
  • - Tuning the HTTP Transport -
  • -
  • - Tuning the JMS Transport -
  • -
  • - Tuning the VFS Transport -
  • -
  • - Tuning the RabbitMQ Transport -
  • -
  • - Tuning the Inbound Endpoints -
  • -
    - Message Brokers - - Explains how to set up the Micro Integrator component to integrate with message brokers such as RabbitMQ, Kafka, and JMS. The topics covered are as follows: -
  • - AMQP (RabbitMQ) -
  • -
  • - Deploying RabbitMQ -
  • -
  • - Connecting to RabbitMQ -
  • -
  • - JMS -
  • -
  • - Connecting to ActiveMQ -
  • -
  • - Connecting to Apache Artemis -
  • -
  • - Connecting to HornetQ -
  • -
  • - Connecting to IBM Websphere App Server -
  • -
  • - Connecting to IBM WebSphere MQ -
  • -
  • - Connecting to JBoss MQ -
  • -
  • - Connecting to MSMQ -
  • -
  • - Connecting to Swift MQ -
  • -
  • - Connecting to TIBCO EMS -
  • -
  • - Connecting to Weblogic -
  • -
  • - Connecting to WSO2 MB -
  • -
  • - Connecting to Multiple Brokers -
  • -
  • - Kafka -
  • -
  • - Azure Service Bus -
  • -
    - Transports - - Explains how to configure the Micro Integrator component to work with a range of transports. These include widely used transports such as HTTP/S, JMS, and VFS, as well as domain-specific transports such as FIX. -
    - Multi-HTTPS Transport - - Explains how to enable dynamic SSL profiles for the Micro Integrator component and how to dynamically load the SSL profiles at runtime using a periodic schedule or JMX invocation. -
    - Message Builders and Formatters - - When the Micro Integrator receives a request over a transport, the transport uses a **message builder** to process the payload and convert it to SOAP. Similarly, when the Micro Integrator sends a message over a transport, the publishing transport uses a **message formatter** to present the payload in the required format. This section explains how to configure these message builders and message formatters. -
    - Message Relay - - Enabling message relay allows the Micro Integrator component to pass messages along without building or processing them unless specifically requested to do so. This way, the Micro Integrator can handle a higher throughput. -
    - Observability - - There are two possible observability solutions that you can enable for the Micro Integrator component. This section explains how to set them up as well as how to configure logging. The topics covered are as follows: -
  • - Setting up Cloud-Native Observability on a VM -
  • -
  • - Setting up Cloud-Native Observability on Kubernetes -
  • -
  • - Setting up Classic Observability Deployment -
  • -
  • - Configuring Logs -
  • -
  • - Enabling Logs for a Component -
  • -
  • - Configuring Logs -
  • -
  • - Managing Log Growth -
  • -
  • - Masking Sensitive Information in Logs -
  • -
    - Timestamp Conversion for RDBMS - - Explains how to enable/disable timestamp conversions for the RDBMS databases configured for the Micro Integrator component. -
    - -### Deploying - -To deploy the Micro Integrator runtime, see the topics given below. - - - - - - - - - - - - - - -
    - Deployment Patterns - - This explains all the deployment patterns you can follow when you deploy WSO2 API Manager. These patterns involve deploying the API Manager component together with Micro Integrator and Streaming Integrator components in clustered setups. -
    - Configuring a Micro Integrator Cluster - - Explains how to set up a two-node Micro Integrator cluster. -
    - Deployment Synchronization - - Explains how to set up deployment synchronization for the Micro Integrator. -
    - -### CI/CD - -To implement continuous integration and continuous deployment pipelines for integrations, see the topics given below. - - - - - - - - - - - - - - -
    - CI/CD for Integrations - Overview - - Find out about the methods of implementing CI/CD for integrations in the Micro Integrator. -
    - Building a CI/CD Pipeline for Integrations (VM deployment) - - See the instructions on how to implement a CI/CD pipeline for integrations in a VM deployment of the Micro Integrator. -
    - Building a CI/CD Pipeline for Integrations (K8s deployment) - - See the instructions on how to implement a CI/CD pipeline for integrations in a Kubernetes deployment of the Micro Integrator. -
    - -To manage integration artifacts and logs in the Micro Integrator by using the API Controller (apictl), see the topics given below. - - - - - - - - - - -
    - Getting Started with WSO2 API Controller - - Explains how to set up the API Controller. -
    - Managing Integrations - - Explains how to manage integrations with the API Controller. -
    - -### Upgrading - -The Micro integrator of WSO2 Enterprise Integrator is the predecessor of the Micro Integrator component of WSO2 API Manager. To upgrade from a WSO2 Enterprise Integrator version, see the following topics. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    - Upgrading from WSO2 EI 7.1.x to WSO2 API-M 4.0.0 -
    - Upgrading from WSO2 EI 7.0.x to WSO2 API-M 4.0.0 -
    - Upgrading from WSO2 EI 6.6.x to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.5.x to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.4.x to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.3.x to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.2.x to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.1.1 to WSO2 API-M 4.0.0 -
    - Migrating from WSO2 EI 6.1.0 to WSO2 API-M 4.0.0 -
    - ## Streaming Integrator This component develops streaming solutions that can be exposed as managed APIs asynchronously. diff --git a/en/docs/install-and-setup/install/installing-the-product/installing-mi-as-a-windows-service.md b/en/docs/install-and-setup/install/installing-the-product/installing-mi-as-a-windows-service.md deleted file mode 100644 index 2c40345210..0000000000 --- a/en/docs/install-and-setup/install/installing-the-product/installing-mi-as-a-windows-service.md +++ /dev/null @@ -1,68 +0,0 @@ -# Running the Micro Integrator as a Windows Service - -Follow the instructions given below to run the Micro Integrator as a Windows service. - -## Prerequisites - -- Go to the [product page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Zip Archive** to download the product distribution as a ZIP file. - -- Extract the downloaded ZIP file to a location on your computer. The micro-integrator folder inside the extracted ZIP file will be your MI_HOME directory. - -- Set up a [JDK that is compatible with the Micro Integrator]({{base_path}}/install-and-setup/install/installation-prerequisites/#environment-compatibility) and point the `java_home` variable to your JDK instance. - -- Point the `wso2mi_home` environment variable to the `MI_HOME` directory. - -!!! Note - Be sure to use **lower case** letters when setting the `wso2mi_home` in the Windows OS. That is, you must not use `WSO2MI_HOME`. - -## Setting up the YAJSW wrapper - -YASJW uses the configurations defined in the `/conf/wrapper.conf` file to wrap Java applications. Replace the contents of this file with the configurations that are relevant to the Micro Integrator instance that you want to run as a service. Use the **wrapper.conf** file available in `/bin/yajsw` folder to get the relevant configurations. - -!!! Info - WSO2 recommends Yet Another Java Service Wrapper (YAJSW) version 13.05. 
If you are running on JDK 11 or JDK 17, earlier versions of YAJSW will not be compatible. - -!!! tip - You may encounter the following error when starting the Windows service if YAJSW cannot find the "java" executable or a "dll" that Java uses. - - ```bash - "Error 2: The system cannot find the file specified" - ``` - - This can be resolved by providing the complete Java path for the `wrapper.java.command` property as follows. - - ```bash - wrapper.java.command = ${JAVA_HOME}/bin/java - ``` - -## Installing the service - -Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command: - -```bash -installService.bat -``` - -## Starting the service - -Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command: - -```bash -startService.bat -``` - -## Stopping the service - -Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command: - -```bash -stopService.bat -``` - -## Uninstalling the service - -To uninstall the service, navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command: - -```bash -uninstallService.bat -``` diff --git a/en/docs/install-and-setup/install/installing-the-product/installing-mi-dashboard.md b/en/docs/install-and-setup/install/installing-the-product/installing-mi-dashboard.md deleted file mode 100644 index 422d5a28fe..0000000000 --- a/en/docs/install-and-setup/install/installing-the-product/installing-mi-dashboard.md +++ /dev/null @@ -1,133 +0,0 @@ -# Installing the Micro Integrator Dashboard - -Follow the steps given below to install the Micro Integrator (MI) Dashboard. - -## Installing the MI Dashboard - -1. 
Go to the [WSO2 Micro Integrator web page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Other Resources/MI Dashboard** to download the MI Dashboard as a ZIP file. -2. Extract the archive file to a dedicated directory for the Micro Integrator Dashboard, which will hereafter be referred to as ``. - -!!! info - To connect the MI servers with the dashboard, add the following configuration to the `deployment.toml` file (stored in the `/conf/` folder of each server instance). - ``` - [dashboard_config] - dashboard_url = "https://{hostname/ip}:{port}/dashboard/api/" - heartbeat_interval = 5 - group_id = "mi_dev" - node_id = "dev_node_2" - ``` - For more information, see [Micro Integrator Dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard/#step-2-configure-the-mi-servers). - - - -## Setting up JAVA_HOME - -You must set your `JAVA_HOME` environment variable to point to the directory where the Java Development Kit (JDK) is installed on the computer. - -!!! info - Environment variables are global system variables accessible by all the processes running under the operating system. - -??? note "On Linux/OS X" - - 1. In your home directory, open the BASHRC file (.bash\_profile file on Mac) using editors such as vi, emacs, pico, or mcedit. - 2. Assuming you have JDK 11.0.x in your system, add the following two lines at the bottom of the file, replacing `/usr/java/jdk-11.0.x` with the actual directory where the JDK is installed. - - ``` java - On Linux: - export JAVA_HOME=/usr/java/jdk-11.0.x - export PATH=${JAVA_HOME}/bin:${PATH} - - On OS X: - export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/11.0.x.jdk/Contents/Home - ``` - - 3. Save the file. - - !!! info - - If you do not know how to work with text editors in a Linux SSH session, run the following command: `cat >> .bashrc`. Paste the string from the clipboard and press "Ctrl+D." - - - 4. 
To verify that the `JAVA_HOME` variable is set correctly, execute the following command. - - ``` java - On Linux: - echo $JAVA_HOME - - On OS X: - which java - If the above command gives you a path like /usr/bin/java, then it is a symbolic link to the real location. To get the real location, run the following: - ls -l `which java` - ``` - The system returns the JDK installation path. - -??? note "On Solaris" - - 1. In your home directory, open the BASHRC file in your favorite text editor such as vi, emacs, pico, or mcedit. - 2. Assuming you have JDK 11.0.x in your system, add the following two lines at the bottom of the file, replacing `/usr/java/jdk-11.0.x` with the actual directory where the JDK is installed. - - ``` java - export JAVA_HOME=/usr/java/jdk-11.0.x - export PATH=${JAVA_HOME}/bin:${PATH} - ``` - - 3. Save the file. - - !!! info - - If you do not know how to work with text editors in an SSH session, run the following command: `cat >> .bashrc ` - - Paste the string from the clipboard and press "Ctrl+D." - - - 4. To verify that the `JAVA_HOME` variable is set correctly, execute the following command: - - `echo $JAVA_HOME` - - 5. The system returns the JDK installation path. - -??? note "On Windows" - - Typically, the JDK is installed in a directory under `C:/Program Files/Java` , such as `C:/Program Files/Java/jdk-11.0.x`. If you have multiple versions installed, choose the latest one, which you can find by sorting by date. - - You set up `JAVA_HOME` using the System Properties, as described below. Alternatively, if you just want to set `JAVA_HOME` temporarily for the current command prompt window, set it at the command prompt. - - **Setting up JAVA\_HOME using the system properties** - - 1. Right-click the **My Computer** icon on the desktop and click **Properties.** - - ![]({{base_path}}/assets/attachments/thumbnails/26838941/27042151) - - 2. In the System Properties window, click the **Advanced** tab, and then click **Environment Variables**. 
- - ![]({{base_path}}/assets/attachments/26838941/27042150.png) - - 3. Click **New** under **System variables** (for all users) or under **User variables** (just for the user who is currently logged in). - - ![]({{base_path}}/assets/attachments/thumbnails/26838941/27042154) - - 4. Enter the following information: - - In the **Variable name** field, enter: `JAVA_HOME` - - In the **Variable value** field, enter the installation path of the Java Development Kit, such as: `c:/Program Files/Java/jdk-11.0.x ` - - The `JAVA_HOME` variable is now set and will apply to any subsequent command prompt windows you open. If you have existing command prompt windows running, you must close and reopen them for the `JAVA_HOME` variable to take effect, or manually set the `JAVA_HOME` variable in those command prompt windows as described in the next section. To verify that the `JAVA_HOME` variable is set correctly, open a command window (from the **Start** menu, click **Run**, and then type `CMD` and click **Enter**) and execute the following command: - - `set JAVA_HOME` - - The system returns the JDK installation path. - -## Setting system properties - -If you need to set additional system properties when the server starts, you can take the following approaches: - -- **Set the properties from a script** : Setting your system properties in the startup script is ideal because it ensures that you set the properties every time you start the server. To avoid having to modify the script each time you upgrade, the best approach is to create your startup script that wraps the WSO2 startup script and adds the properties you want to set, rather than editing the WSO2 startup script directly. -- **Set the properties from an external registry** : If you want to access properties from an external registry, you could create Java code that reads the properties at runtime from that registry. 
Be sure to store sensitive data, such as the username and password used to connect to the registry, in a property file instead of in the Java code, and secure the properties file with the [secure vault](https://docs.wso2.com/display/ADMIN44x/Carbon+Secure+Vault+Implementation).
-
-!!! info
-
-    SUSE Linux ignores `/etc/resolv.conf` and only looks at the `/etc/hosts` file. This means that the server will throw an exception on startup if you have not specified anything besides localhost. To avoid this error, add the following line above `127.0.0.1 localhost` in the `/etc/hosts` file: ` localhost`.
-
-## What's Next?
-
-- [Running the MI Dashboard]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi-dashboard).
-
diff --git a/en/docs/install-and-setup/install/installing-the-product/installing-mi.md b/en/docs/install-and-setup/install/installing-the-product/installing-mi.md
deleted file mode 100644
index 2a2a987041..0000000000
--- a/en/docs/install-and-setup/install/installing-the-product/installing-mi.md
+++ /dev/null
@@ -1,125 +0,0 @@
-# Installing the Micro Integrator Runtime
-
-Follow the steps given below to install the Micro Integrator (MI) runtime of WSO2 API Manager.
-
-## Before you begin
-
-See the [Installation Prerequisites]({{base_path}}/install-and-setup/install/installation-prerequisites).
-A Java Development Kit (JDK) is essential to run the product.
-
-## Installing the Micro Integrator
-
-1. Go to the [WSO2 Micro Integrator web page](https://wso2.com/integration/micro-integrator/#), click **Download**, and then click **Zip Archive** to download the Micro Integrator distribution as a ZIP file.
-2. Extract the archive file to a dedicated directory for the Micro Integrator, which will hereafter be referred to as ``.
-
-## Setting up JAVA_HOME
-
-You must set your `JAVA_HOME` environment variable to point to the directory where the Java Development Kit (JDK) is installed on the computer.
-
-!!! 
info - Environment variables are global system variables accessible by all the processes running under the operating system. - -??? note "On Linux/OS X" - - 1. In your home directory, open the BASHRC file (.bash\_profile file on Mac) using editors such as vi, emacs, pico, or mcedit. - 2. Assuming you have JDK 11 in your system, add the following two lines at the bottom of the file, replacing `/usr/java/jdk-11.0.x` with the actual directory where the JDK is installed. - - ``` java - On Linux: - export JAVA_HOME=/usr/java/jdk-11.0.x - export PATH=${JAVA_HOME}/bin:${PATH} - - On OS X: - export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/11.0.x.jdk/Contents/Home - ``` - - 3. Save the file. - - !!! info - - If you do not know how to work with text editors in a Linux SSH session, run the following command: `cat >> .bashrc.` Paste the string from the clipboard and press "Ctrl+D." - - - 4. To verify that the `JAVA_HOME` variable is set correctly, execute the following command. - - ``` java - On Linux: - echo $JAVA_HOME - - On OS X: - which java - If the above command gives you a path like /usr/bin/java, then it is a symbolic link to the real location. To get the real location, run the following: - ls -l `which java` - ``` - The system returns the JDK installation path. - -??? note "On Solaris" - - 1. In your home directory, open the BASHRC file in your favorite text editor such as vi, emacs, pico, or mcedit. - 2. Assuming you have JDK 11 in your system, add the following two lines at the bottom of the file, replacing `/usr/java/jdk-11.0.x` with the actual directory where the JDK is installed. - - ``` java - export JAVA_HOME=/usr/java/jdk-11.0.x - export PATH=${JAVA_HOME}/bin:${PATH} - ``` - - 3. Save the file. - - !!! info - - If you do not know how to work with text editors in an SSH session, run the following command: `cat >> .bashrc ` - - Paste the string from the clipboard and press "Ctrl+D." - - - 4. 
To verify that the `JAVA_HOME` variable is set correctly, execute the following command: - - `echo $JAVA_HOME` - - 5. The system returns the JDK installation path. - -??? note "On Windows" - - Typically, the JDK is installed in a directory under `C:/Program Files/Java` , such as `C:/Program Files/Java/jdk-11.0.x`. If you have multiple versions installed, choose the latest one, which you can find by sorting by date. - - You set up `JAVA_HOME` using the System Properties, as described below. Alternatively, if you just want to set `JAVA_HOME` temporarily for the current command prompt window, set it at the command prompt. - - **Setting up JAVA\_HOME using the system properties** - - 1. Right-click the **My Computer** icon on the desktop and click **Properties.** - - ![]({{base_path}}/assets/attachments/thumbnails/26838941/27042151) - - 2. In the System Properties window, click the **Advanced** tab, and then click **Environment Variables**. - - ![]({{base_path}}/assets/attachments/26838941/27042150.png) - - 3. Click **New** under **System variables** (for all users) or under **User variables** (just for the user who is currently logged in). - - ![]({{base_path}}/assets/attachments/thumbnails/26838941/27042154) - - 4. Enter the following information: - - In the **Variable name** field, enter: `JAVA_HOME` - - In the **Variable value** field, enter the installation path of the Java Development Kit, such as: `c:/Program Files/Java/jdk-11.0.x ` - - The `JAVA_HOME` variable is now set and will apply to any subsequent command prompt windows you open. If you have existing command prompt windows running, you must close and reopen them for the `JAVA_HOME` variable to take effect, or manually set the `JAVA_HOME` variable in those command prompt windows as described in the next section. 
To verify that the `JAVA_HOME` variable is set correctly, open a command window (from the **Start** menu, click **Run**, then type `CMD` and press **Enter**) and execute the following command:
-
-    `set JAVA_HOME`
-
-    The system returns the JDK installation path.
-
-## Setting system properties
-
-If you need to set additional system properties when the server starts, you can take the following approaches:
-
-- **Set the properties from a script**: Setting your system properties in the startup script is ideal because it ensures that you set the properties every time you start the server. To avoid having to modify the script each time you upgrade, the best approach is to create your own startup script that wraps the WSO2 startup script and adds the properties you want to set, rather than editing the WSO2 startup script directly.
-- **Set the properties from an external registry**: If you want to access properties from an external registry, you could create Java code that reads the properties at runtime from that registry. Be sure to store sensitive data, such as the username and password used to connect to the registry, in a property file instead of in the Java code, and secure the properties file with the [secure vault](https://docs.wso2.com/display/ADMIN44x/Carbon+Secure+Vault+Implementation).
-
-!!! info
-
-    SUSE Linux ignores `/etc/resolv.conf` and only looks at the `/etc/hosts` file. This means that the server will throw an exception on startup if you have not specified anything besides localhost. To avoid this error, add the following line above `127.0.0.1 localhost` in the `/etc/hosts` file: ` localhost`.
-
-## What's Next?
-
-- [Running the MI Runtime]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi).
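The wrapper-script approach described above can be sketched as follows. This is a hypothetical example, not part of the product distribution: the `MI_HOME` default path and the `-Dcustom.property=value` flag are placeholders, and it assumes your distribution's `micro-integrator.sh` passes `JAVA_OPTS` through to the JVM (check the startup script in your installation before relying on this).

```shell
#!/bin/sh
# wrapper-mi.sh — a sketch of a wrapper around the standard WSO2 startup
# script. Setting properties here means micro-integrator.sh is never edited,
# so a product upgrade cannot overwrite your changes.

# Placeholder install path (assumption) — point this at your MI directory.
MI_HOME="${MI_HOME:-/opt/wso2mi}"

# Append the extra system properties to JAVA_OPTS (assumes the WSO2 startup
# script forwards JAVA_OPTS to the JVM).
JAVA_OPTS="${JAVA_OPTS:-} -Dcustom.property=value"
export JAVA_OPTS

if [ -x "${MI_HOME}/bin/micro-integrator.sh" ]; then
    # Hand over to the unmodified WSO2 startup script with any arguments given.
    exec sh "${MI_HOME}/bin/micro-integrator.sh" "$@"
else
    echo "micro-integrator.sh not found under ${MI_HOME}/bin" >&2
fi
```

Because the wrapper only exports environment and then `exec`s the original script, it works unchanged across upgrades and can be duplicated per environment (dev, test, prod) with different property sets.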
- 
diff --git a/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard-as-windows-service.md b/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard-as-windows-service.md
deleted file mode 100644
index 25751d810c..0000000000
--- a/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard-as-windows-service.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Running the Micro Integrator Dashboard as a Windows Service
-
-Follow the instructions given below to run the Micro Integrator Dashboard as a Windows service.
-
-## Prerequisites
-
-- Set up the Micro Integrator runtime and Dashboard according to the instructions given [here]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi-dashboard/#before-you-begin).
-
-- Point the `wso2mi_dashboard_home` environment variable to the `MI_DASHBOARD_HOME` directory.
-
-!!! Note
-    Be sure to use **lower case** letters when setting `java_home` and `wso2mi_dashboard_home` on Windows. That is, you must not use `JAVA_HOME` or `WSO2MI_DASHBOARD_HOME`.
-
-## Setting up the YAJSW wrapper
-
-YAJSW uses the configurations defined in the `/conf/wrapper.conf` file to wrap Java applications. Replace the contents of this file with the configurations that are relevant to the Micro Integrator Dashboard instance that you want to run as a service. Use the **wrapper.conf** file available in the `/bin/yajsw` folder to get the relevant configurations.
-
-!!! Info
-    WSO2 recommends Yet Another Java Service Wrapper (YAJSW) version [13.05](https://sourceforge.net/projects/yajsw/files/yajsw/yajsw-stable-13.05/yajsw-stable-13.05.zip/download). If you are running on JDK 11 or JDK 17, previous versions of YAJSW will not be compatible.
-
-!!! tip
-    You may encounter the following error when starting the Windows service if YAJSW cannot find the `java` executable or a DLL used by Java. 
- 
-    ```bash
-    "Error 2: The system cannot find the file specified"
-    ```
-
-    This can be resolved by providing the complete Java path for the `wrapper.java.command` property as follows.
-
-    ```bash
-    wrapper.java.command = ${java_home}/bin/java
-    ```
-
-## Installing the service
-
-Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command:
-
-```bash
-installService.bat
-```
-
-## Starting the service
-
-Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command:
-
-```bash
-startService.bat
-```
-
-## Stopping the service
-
-Navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command:
-
-```bash
-stopService.bat
-```
-
-## Uninstalling the service
-
-To uninstall the service, navigate to the `/bat/` directory in the Windows command prompt with administrative privileges, and execute the following command:
-
-```bash
-uninstallService.bat
-```
diff --git a/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard.md b/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard.md
deleted file mode 100644
index a6ce4fc907..0000000000
--- a/en/docs/install-and-setup/install/installing-the-product/running-the-mi-dashboard.md
+++ /dev/null
@@ -1,188 +0,0 @@
-# Running the Micro Integrator Dashboard
-
-Follow the steps given below to run the WSO2 Micro Integrator runtime and its monitoring Dashboard.
-
-## Before you begin
-
-Follow the steps given below before you start.
-
-1. Download and install the servers:
-
-    - [Download and install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi-dashboard) the Micro Integrator dashboard.
-    - [Download and install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi) the Micro Integrator.
-
-2. 
Set up the Micro Integrator: - - 1. Open the `deployment.toml` file (stored in the `/conf/` folder) of the Micro Integrator, and add the following configuration. - - ```toml - [dashboard_config] - dashboard_url = "https://localhost:9743/dashboard/api/" - heartbeat_interval = 5 - group_id = "mi_dev" - node_id = "dev_node_2" - ``` - - 2. Be sure to change the host and port number of the `dashboard_url` in the above configuration if you have changed the default host and port for the dashboard. - - !!! Info - See the section on [configuring the MI servers for the dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard/#step-2-configure-the-mi-servers) for more information. - -3. [Start the Micro Integrator]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi). - -## Configuring Single Sign-on with OpenID Connect - -!!! note "Before you begin" - - Upgrade Micro Integrator Dashboard to version 4.0.1 or above to enable this feature. - - Upgrade Micro Integrator to version 4.0.0.5 or above to use File based User Store for Authorization. - - By default, the Micro Inetgrator user store is used to authenticate users. The following instructions are applicable only if you want to enable Single Sign-On. - - See the documentation of your preferred Identity provider for instructions on setting up OpenID Connect. - - This feature was tested with WSO2 IS 5.10.0 and Shibboleth 4.1.2. There may be compatibility issues when using other vendors. - -Follow the steps given below to connect the Micro Integrator Dashboard to your Identity provider. - -1. Open the `deployment.toml` file stored in the `/conf/` directory. -2. Add the following configurations and update the required values. 
-
-    ```toml
-    [sso]
-    enable = true
-    client_id = "8e4uDF4ewc2aEa"
-    base_url = "https://localhost:9443"
-    jwt_issuer = "https://localhost:9443/oauth2/token"
-    resource_server_URLs = ["https://localhost:9743"]
-    sign_in_redirect_URL = "https://localhost:9743/sso"
-    ```
-
-    The parameters used above are explained below.
-
-    | Parameter | Description |
-    |-----------|-------------|
-    | `enable` | Use this parameter to enable Single Sign-On. |
-    | `client_id` | The client ID generated from the Identity Provider. |
-    | `base_url` | The URL of the Identity Provider. |
-    | `jwt_issuer` | The Identity Provider's issuer identifier. |
-    | `resource_server_URLs` | The URL of the Micro Integrator Dashboard. |
-    | `sign_in_redirect_URL` | The Sign In redirect URL of the Micro Integrator Dashboard. |
-
-See the [complete list of parameters]({{base_path}}/reference/config-catalog-mi-dashboard/#single-sign-on) you can configure for single sign-on.
-
-## Starting the dashboard server
-
-Follow the steps given below.
-
-1. Open a command prompt as explained below.
-
-    | Operating system | Instructions |
-    |------------------|--------------|
-    | On Linux/macOS | Establish an SSH connection to the server, log on to the text Linux console, or open a terminal window. |
-    | On Windows | Click **Start** > **Run**, type `cmd` at the prompt, and then press **Enter**. |
-
-2. Navigate to the `/bin` folder from your command line.
-3. Execute one of the commands given below.
-
-    === "On macOS/Linux"
-        ```bash
-        sh dashboard.sh
-        ```
-
-    === "On Windows"
-        ```bash
-        dashboard.bat
-        ```
-
-## Accessing the dashboard
-
-Once you have [started the dashboard server](#starting-the-dashboard-server):
-
-1. Access the dashboard using the following URL:
-
-    ```bash
-    https://localhost:9743/dashboard
-    ```
-
-    ![login form for monitoring dashboard]({{base_path}}/assets/img/integrate/monitoring-dashboard/login.png)
-
-2. Enter the following details to sign in:
-    | Field | Description |
-    |-------|-------------|
-    | Username | The user name to sign in. Note: This should be a valid username that is saved in the Micro Integrator server's user store. By default, the 'admin' user name is configured in the default user store. See configuring user stores for information. |
-    | Password | The password of the user name. By default, 'admin' is the user name and password. |
-
-3. Be sure that the Micro Integrator servers are [already configured and started](#before-you-begin) before you sign in.
-
-See the [Micro Integrator Dashboard]({{base_path}}/observe/mi-observe/working-with-monitoring-dashboard) documentation for information on the dashboard's capabilities and how to use them.
-
-## Stopping the dashboard server
-
-To stop the dashboard standalone application, go to the terminal and press Ctrl+C.
diff --git a/en/docs/install-and-setup/install/installing-the-product/running-the-mi.md b/en/docs/install-and-setup/install/installing-the-product/running-the-mi.md
deleted file mode 100644
index 89d242d041..0000000000
--- a/en/docs/install-and-setup/install/installing-the-product/running-the-mi.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Running the Micro Integrator Runtime
-
-Follow the steps given below to run the WSO2 Micro Integrator (MI) runtime.
-
-## Before you begin
-
-[Download and install]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi) the Micro Integrator.
-
-## Starting the MI server
-
-Follow the steps given below to start the server.
-
-1. Open a command prompt as explained below.
-
-    | Operating system | Instructions |
-    |------------------|--------------|
-    | On Linux/macOS | Establish an SSH connection to the server, log on to the text Linux console, or open a terminal window. |
-    | On Windows | Click **Start** > **Run**, type `cmd` at the prompt, and then press **Enter**. |
    - -2. Navigate to the `/bin` folder from your command line. -3. Execute one of the commands given below. - - - To start the server: - - === "On macOS/Linux" - ```bash - sh micro-integrator.sh - ``` - - === "On Windows" - ```bash - micro-integrator.bat - ``` - - - To start the server in background mode: - - === "On macOS/Linux" - ```bash - sh micro-integrator.sh start - ``` - - === "On Windows" - ```bash - micro-integrator.bat --start - ``` - -## Stopping the MI server - -- To stop the Micro Integrator standalone application, go to the terminal and press Ctrl+C. -- To stop the Micro Integrator in background mode: - - === "On macOS/Linux" - ```bash - sh micro-integrator.sh stop - ``` - - === "On Windows" - ```bash - micro-integrator.bat --stop - ``` - -## See Also - -- [Running the MI as a Windows Service]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi-as-a-windows-service) \ No newline at end of file diff --git a/en/docs/install-and-setup/setup/api-controller/encrypting-secrets-with-ctl.md b/en/docs/install-and-setup/setup/api-controller/encrypting-secrets-with-ctl.md index cbb5d803e5..64d103b595 100644 --- a/en/docs/install-and-setup/setup/api-controller/encrypting-secrets-with-ctl.md +++ b/en/docs/install-and-setup/setup/api-controller/encrypting-secrets-with-ctl.md @@ -1,6 +1,6 @@ # Encrypting Secrets with apictl -**WSO2 API Controller (apictl)** allows you to encrypt a plain-text secret. You can use this feature to export secrets as environment variables, system properties, Docker secrets, or Kubernetes secrets. For more information on using dynamic secrets refer [Dynamic secrets]({{base_path}}/install-and-setup/setup/mi-setup/security/encrypting_plain_text/#dynamic-secrets). +**WSO2 API Controller (apictl)** allows you to encrypt a plain-text secret. You can use this feature to export secrets as environment variables, system properties, Docker secrets, or Kubernetes secrets. 
For more information on using dynamic secrets, see [Dynamic secrets](https://mi.docs.wso2.com/en/latest/install-and-setup/setup/security/encrypting-plain-text/#dynamic-secrets).
 
 ## Initialize apictl with a key store
 
diff --git a/en/docs/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl.md b/en/docs/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl.md
deleted file mode 100644
index 7cbb54347e..0000000000
--- a/en/docs/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl.md
+++ /dev/null
@@ -1,1604 +0,0 @@
-# Managing Integrations with apictl
-
-WSO2 API Controller (**apictl**) allows you to monitor the Synapse artifacts deployed in a specified Micro Integrator server and perform various management and administration tasks from the command line.
-
-!!! info
-    **Before you begin**
-
-    - Ensure that WSO2 Micro Integrator is started. See the instructions on [installing the Micro Integrator]({{base_path}}/install-and-setup/install/installing-the-product/installing-mi).
-
-    - Make sure apictl is downloaded and initialized. If not, follow the steps in [Download and Initialize the apictl]({{base_path}}/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller/#download-and-initialize-the-apictl).
-
-    - Ensure that the Micro Integrator management endpoint is added to the environment configurations of CTL before you start working with the following CTL commands. For more information, see [Add an Environment]({{base_path}}/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller/#add-an-environment).
-
-## Login to a Micro Integrator
-
-After adding an environment, you can log in to the Micro Integrator instance of that environment using credentials.
-
-1. Run any of the following CTL commands to log in to a Micro Integrator. 
- - - **Command** - - ```go - apictl mi login -k - ``` - - ```go - apictl mi login -u -k - ``` - - ```go - apictl mi login -u -p -k - ``` - - !!! tip - If you run `apictl mi login ` you are prompted to provide both the username and the password. - If you run `apictl mi login --username `, you are prompted to provide the password. - - !!! info - **Flags:** - - - Optional : - `--username` or `-u` : Username for login - `--password` or `-p` : Password for login - `--password-stdin` : Get password from stdin - - !!! example - ```bash - apictl mi login dev -k - ``` - ```bash - apictl mi login dev -u admin -p admin -k - ``` - - ```bash - apictl mi login dev --username admin --password admin -k - ``` - - - **Response** - - === "Response Format" - ``` bash - Logged into MI in '' environment - ``` - - === "Example Response" - ```bash - Logged into MI in dev environment - ``` - - !!! warning - Using `--password` in CTL is not secure. You can use `--password-stdin` instead. For example, - ```bash - cat ~/.mypassword | ./apictl mi login dev --username admin --password-stdin -k - ``` - -## Logout from a Micro Integrator - -1. Run the following command to logout from the current session of the Micro Integrator. - - - **Command** - - ```go - apictl mi logout - ``` - - !!! example - ```go - apictl mi logout dev - ``` - - - **Response** - - === "Response Format" - ``` bash - Logged out from MI in '' environment - ``` - - === "Example Response" - ```bash - Logged out from MI in dev environment - ``` - -## Manage Users - -You can view details of users stored in the [external user store]({{base_path}}/install-and-setup/setup/mi-setup/user_stores/managing_users). If you are logged in to the apictl with administrator credentials, you can also add new users, and remove users from the user store. - -### Get information about users - -1. List users of the Micro Integrator. - - - **Command** - ``` bash - apictl mi get users -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - `--pattern` or `-p` : Filter users by regex - `--role` or `-r` : Filter users by role - - !!! example - ```bash - apictl mi get users -e dev - ``` - ```bash - apictl mi get users -r admin -e dev - ``` - ```bash - apictl mi get users -p *tester* -e dev - ``` - - - **Response** - - ```go - USER ID - admin - capp-tester - ``` - -2. Get information on a specific user. - - - **Command** - ``` bash - apictl mi get users [user-name] -d [domain] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--domain` or `-d` : Domain name of the secondary user store to be searched - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get users capp-tester -d testing.com -e dev - ``` - - - **Response** - - ```go - Name - TESTING.COM/capp-tester - Is Admin - false - Roles - TESTING.COM/tester, Application/applicationRole1 - ``` - -### Add a new user - -You can use the command below to add a new user to a Micro Integrator. - -- **Command** - ``` bash - apictl mi add user [user-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi add user capp-tester -e dev - ``` - -- **Response** - - ```go - Adding new user [ capp-tester ] status: Added - ``` - -!!! note - To add a new user to a secondary user store, provide the corresponding user store domain when prompted. - -### Delete a user - -You can use the command below to remove a user from the Micro Integrator. - -- **Command** - ``` bash - apictl mi delete user [user-name] -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator - - Optional : - `--domain` or `-d` : The domain of the secondary user store from which the user should be deleted - - !!! example - ```bash - apictl mi delete user capp-tester -d testing.com -e dev - ``` - -- **Response** - - ```go - Deleting user [ capp-tester ] status: Deleted - ``` - -## Manage Roles - -The Micro Integrator has limited role support without fine-grained permission tree support as in the Enterprise Integrator. - -In Micro Integrator, we have one admin role and all the other roles from primary and secondary user stores are considered non-admin roles. - -### Get information about roles - -1. List roles of the Micro Integrator. - - **Command** - ``` bash - apictl mi get roles -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi get roles -e dev - ``` - - - **Response** - - ```go - ROLE - admin - primaryRole1 - Application/applicationRole1 - Internal/everyone - Internal/internalRole1 - TEST.COM/testRole1 - ``` - -2. Get information on a specific role. - - **Command** - ``` bash - apictl mi get roles [role-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--domain` or `-d` : Domain of the secondary user store to be searched - - !!! example - ```bash - apictl mi get roles tester -d testing.com -e dev - ``` - - - **Response** - - ```go - Role Name - TESTING.COM/tester - Users - TESTING.COM/capp-tester - ``` - -!!! note - To get hybrid roles (application/internal) specify the role type as the domain. - - ```go - apictl mi get roles tester -d application -e dev - ``` - -### Add a new role - -- **Command** - ``` bash - apictl mi add role [role-name] -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator - - !!! example - ```bash - apictl mi add role tester -e dev - ``` -- **Response** - - ```go - Adding new role [ tester ] status: Added - ``` - -!!! note - To add a new role to a secondary user store, provide the corresponding user store domain when prompted. - -!!! note - To add hybrid roles (application/internal) specify the type in the role name. - - ```go - apictl mi add role internal/InternalRole -e dev - ``` - -### Delete a role - -- **Command** - - ``` bash - apictl mi delete role [role-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator - - Optional : - `--domain` or `-d` : The domain of the secondary user store from which the role should be deleted - - !!! example - ```bash - apictl mi delete role tester -d testing.com -e dev - ``` -- **Response** - - ```go - Deleting new role [ tester ] status: Deleted - ``` - -!!! note - To delete hybrid roles (application/internal) specify the role type as domain. - ```go - apictl mi delete role InternalRole -d internal -e dev - ``` - -### Assign/revoke roles to/from users - -- **Command** - - ``` bash - apictl mi update user [user-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator - - !!! example - ```bash - apictl mi update user capp-tester -e dev - ``` - -- **Response** - - ```go - Added/removed the roles - ``` - -!!! note - Use a space-separated list of role names when entering the added/removed roles - -## Monitor Integration Artifacts - -Follow the instructions below to display a list of artifacts or get information about a specific artifact in an environment using CTL: - -### Composite Applications (CApps) - -1. List composite applications (CApps) in an environment. - - - **Command** - ``` bash - apictl mi get composite-apps -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get composite-apps -e dev - ``` - - - **Response** - - ```go - NAME VERSION - HealthCareCompositeExporter 1.0.0 - FoodServiceCApp 2.0.0 - ``` - -2. Get information on a specific composite application in an environment. - - - **Command** - ``` bash - apictl mi get composite-apps [capp-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get composite-apps HealthCareCompositeExporter -e dev - ``` - - - **Response** - - ```go - Name - HealthCareCompositeExporter - Version - 1.0.0 - Artifacts : - NAME TYPE - sample-local-entry local-entry - email-connector lib - in-memory-message-store message-store - GrandOakEndpoint endpoint - sample_seq_template template - scheduled-msg-processor message-processors - sample_template template - HealthcareAPI api - sample-sequence sequence - PineValleyEndpoint endpoint - StockQuoteProxy proxy-service - sample-cron-task task - httpInboundEP inbound-endpoint - ``` - -### Integration APIs - -1. List integration APIs in an environment. - - - **Command** - ``` bash - apictl mi get apis -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get apis -e dev - ``` - - - **Response** - - ```go - NAME URL - HealthcareAPI http://localhost:8290/healthcare - FoodService http://localhost:8480/foodservice - ``` - -2. Get information on a specific integration API in an environment. - - - **Command** - ``` bash - apictl mi get apis [api-name] -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get apis HealthcareAPI -e dev - ``` - - - **Response** - - ```go - Name - HealthcareAPI - Version - N/A - Url - http://localhost:8290/healthcare - Stats - disabled - Tracing - disabled - Resources : - URL METHOD - /doctor/{doctorType} [GET] - /report [GET] - ``` - -### Connectors - -1. List connectors in an environment. - - - **Command** - ``` bash - apictl mi get connectors -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get connectors -e dev - ``` - - - **Response** - - ```go - NAME STATS PACKAGE DESCRIPTION - email enabled org.wso2.carbon.connector WSO2 email connector library - ``` - -### Data Services - -1. List data services in an environment. - - - **Command** - ``` bash - apictl mi get data-services -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get data-services -e dev - ``` - - - **Response** - - ```go - NAME WSDL 1.1 WSDL 2.0 - RESTDataService http://localhost:8290/services/RESTDataService?wsdl http://localhost:8290/services/RESTDataService?wsdl2 - ``` - -2. Get information on a specific data service in an environment. - - - **Command** - ``` bash - apictl mi get data-services [data-service-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! 
example - ```bash - apictl mi get data-services RESTDataService -e dev - ``` - - - **Response** - - ```go - Name - RESTDataService - Group Name - RESTDataService - Description - Exposing the data service as a REST service. - WSDL 1.1 - http://localhost:8290/services/RESTDataService?wsdl - WSDL 2.0 - http://localhost:8290/services/RESTDataService?wsdl2 - Queries : - ID NAMESPACE - ReadStudents http://ws.wso2.org/dataservice/ReadStudents - DeleteStudent http://ws.wso2.org/dataservice - ``` - -### Endpoints - -1. List endpoints in an environment. - - - **Command** - ``` bash - apictl mi get endpoints -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get endpoints -e dev - ``` - - - **Response** - - ```go - NAME TYPE ACTIVE - GrandOakEndpoint http true - PineValleyEndpoint http true - ``` - -2. Get information on a specific endpoint in an environment. - - - **Command** - ``` bash - apictl mi get endpoints [endpoint-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get endpoints GrandOakEndpoint -e dev - ``` - - - **Response** - - ```go - Name - GrandOakEndpoint - Type - HTTP Endpoint - Active - true - Method - GET - URI Template - http://localhost:9091/grand/doctors - ``` - -### Inbound Endpoints - -1. List inbound endpoints in an environment. - - - **Command** - ``` bash - apictl mi get inbound-endpoints -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! 
example - ```bash - apictl mi get inbound-endpoints -e dev - ``` - - - **Response** - - ```go - NAME TYPE - httpInboundEP http - ``` - -2. Get information on a specific inbound endpoint in an environment. - - - **Command** - ``` bash - apictl mi get inbound-endpoints [inbound-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get inbound-endpoints httpInboundEP -e dev - ``` - - - **Response** - - ```go - Name - httpInboundEP - Type - http - Stats - enabled - Tracing - enabled - Parameters : - NAME VALUE - inbound.http.port 8697 - inbound.worker.pool.size.core 400 - inbound.worker.pool.size.max 500 - inbound.worker.thread.keep.alive.sec 60 - inbound.worker.pool.queue.length -1 - inbound.thread.id PassThroughInboundWorkerPool - ``` - -### Local Entries - -1. List local entries in an environment. - - - **Command** - ``` bash - apictl mi get local-entries -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get local-entries -e dev - ``` - - - **Response** - - ```go - NAME TYPE - sample-local-entry Inline Text - ``` - -2. Get information on a specific local entry in an environment. - - - **Command** - ``` bash - apictl mi get local-entries [local-entry-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get local-entries sample-local-entry -e dev - ``` - - - **Response** - - ```go - Name - sample-local-entry - Type - Inline Text - Value - 0, 1 - ``` - -### Message Processors - -1. List message processors in an environment. 
- - - **Command** - ``` bash - apictl mi get message-processors -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get message-processors -e dev - ``` - - - **Response** - - ```go - NAME TYPE STATUS - scheduled-msg-processor Scheduled-message-forwarding-processor active - ``` - -2. Get information on a specific message processor in an environment. - - - **Command** - ``` bash - apictl mi get message-processors [message-processor-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get message-processors scheduled-msg-processor -e dev - ``` - - - **Response** - - ```go - Name - scheduled-msg-processor - Type - Scheduled-message-forwarding-processor - File Name - scheduled-msg-processor-1.0.0.xml - Message Store - in-memory-message-store - Artifact Container - [ Deployed From Artifact Container: HealthCareCompositeExporter ] - Status - active - Parameters : - client.retry.interval = 1000 - interval = 1000 - is.active = true - max.delivery.attempts = 4 - max.delivery.drop = Disabled - max.store.connection.attempts = -1 - member.count = 1 - store.connection.retry.interval = 1000 - target.endpoint = PineValleyEndpoint - throttle = false - ``` - -### Message Stores - -1. List message stores in an environment. - - - **Command** - ``` bash - apictl mi get message-stores -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! 
example - ```bash - apictl mi get message-stores -e dev - ``` - - - **Response** - - ```go - NAME TYPE SIZE - in-memory-message-store in-memory-message-store 0 - ``` - -2. Get information on a specific message store in an environment. - - - **Command** - ``` bash - apictl mi get message-stores [message-store-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get message-stores in-memory-message-store -e dev - ``` - - - **Response** - - ```go - Name - in-memory-message-store - File Name - in-memory-message-store-1.0.0.xml - Container - [ Deployed From Artifact Container: HealthCareCompositeExporter ] - Producer - org.apache.synapse.message.store.impl.memory.InMemoryProducer@3d288f9e - Consumer - org.apache.synapse.message.store.impl.memory.InMemoryConsumer@5e6443d6 - Size - 0 - Properties : - No Properties found - ``` - -### Proxy Services - -1. List proxy services in an environment. - - - **Command** - ``` bash - apictl mi get proxy-services -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get proxy-services -e dev - ``` - - - **Response** - - ```go - NAME WSDL 1.1 WSDL 2.0 - StockQuoteProxy http://localhost:8290/services/StockQuoteProxy?wsdl http://localhost:8290/services/StockQuoteProxy?wsdl2 - ``` - -2. Get information on a specific proxy service in an environment. - - - **Command** - ``` bash - apictl mi get proxy-services [proxy-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! 
example - ```bash - apictl mi get proxy-services StockQuoteProxy -e dev - ``` - - - **Response** - - ```go - Name - StockQuoteProxy - WSDL 1.1 - http://localhost:8290/services/StockQuoteProxy?wsdl - WSDL 2.0 - http://localhost:8290/services/StockQuoteProxy?wsdl2 - Stats - disabled - Tracing - disabled - ``` - -### Sequences - -1. List sequences in an environment. - - - **Command** - ``` bash - apictl mi get sequences -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get sequences -e dev - ``` - - - **Response** - - ```go - NAME STATS TRACING - fault disabled disabled - main disabled disabled - sample-sequence disabled disabled - ``` - -2. Get information on a specific sequence in an environment. - - - **Command** - ``` bash - apictl mi get sequences [sequence-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get sequences sample-sequence -e dev - ``` - - - **Response** - - ```go - Name - sample-sequence - Container - [ Deployed From Artifact Container: HealthCareCompositeExporter ] - Stats - disabled - Tracing - disabled - Mediators - LogMediator, STRING - ``` - -### Scheduled Tasks - -1. List scheduled tasks in an environment. - - - **Command** - ``` bash - apictl mi get tasks -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get tasks -e dev - ``` - - - **Response** - - ```go - NAME - sample-cron-task - CheckPriceTask - ``` - -2. Get information on a specific scheduled task in an environment. 
- - - **Command** - ``` bash - apictl mi get tasks [task-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get tasks sample-cron-task -e dev - ``` - - - **Response** - - ```go - Name - sample-cron-task - Trigger Type - cron - Cron Expression - 0 30 1 * * ? - ``` - -### Templates - -1. List all templates in an environment. - - - **Command** - ``` bash - apictl mi get templates -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get templates -e dev - ``` - - - **Response** - - ```go - NAME TYPE - sample_seq_template Sequence - sample_template Endpoint - ``` - -2. List a specific type of template in an environment. - - - **Command** - ``` bash - apictl mi get templates [template-type] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get templates endpoint -e dev - ``` - ```bash - apictl mi get templates sequence -e dev - ``` - - - **Response** - - ```go - NAME - sample_seq_template - ``` - -3. Get information on a specific template in an environment. - - - **Command** - ``` bash - apictl mi get templates [template-type] [template-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! 
example - ```bash - apictl mi get templates endpoint sample_template -e dev - ``` - - - **Response** - - ```go - Name - sample_template - Parameters : name, uri - ``` - -## Change status of an Artifact - -You can use the commands below to activate or deactivate endpoints, message processors or proxy services deployed in a Micro Integrator. - -### Endpoint - -1. Activate an endpoint deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi activate endpoint [endpoint-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi activate endpoint GrandOakEndpoint -e dev - ``` - - - **Response** - - ```go - GrandOakEndpoint is switched On - ``` - -2. Deactivate an endpoint deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi deactivate endpoint [endpoint-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi deactivate endpoint GrandOakEndpoint -e dev - ``` - - - **Response** - - ```go - GrandOakEndpoint is switched Off - ``` - -### Message Processor - -1. Activate a message processor deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi activate message-processor [message-processor-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi activate message-processor scheduled-msg-processor -e dev - ``` - - - **Response** - - ```go - scheduled-msg-processor : is activated - ``` - -2. Deactivate a message processor deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi deactivate message-processor [message-processor-name] -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi deactivate message-processor scheduled-msg-processor -e dev - ``` - - - **Response** - - ```go - scheduled-msg-processor : is deactivated - ``` - -### Proxy Service - -1. Activate a proxy service deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi activate proxy-service [proxy-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi activate proxy-service StockQuoteProxy -e dev - ``` - - - **Response** - - ```go - Proxy service StockQuoteProxy started successfully - ``` - -2. Deactivate a proxy service deployed in a Micro Integrator. - - - **Command** - ``` bash - apictl mi deactivate proxy-service [proxy-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi deactivate proxy-service StockQuoteProxy -e dev - ``` - - - **Response** - - ```go - Proxy service StockQuoteProxy stopped successfully - ``` - -## Manage Loggers used in Micro Integrator - -### Get information on a specific logger - -- **Command** - ``` bash - apictl mi get log-levels [logger-name] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get log-levels org-apache-coyote -e dev - ``` - -- **Response** - - ```go - NAME LOG LEVEL COMPONENT - org-apache-coyote WARN org.apache.coyote - ``` - -### Add a new logger - -You can use the command below to add a new logger to a Micro Integrator. - -- **Command** - ``` bash - apictl mi add log-level [logger-name] [class-name] [log-level] -e - ``` - - !!! 
info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi add log-level synapse-api org.apache.synapse.rest.API DEBUG -e dev - ``` - -- **Response** - - ```go - Successfully added logger for ('synapse-api') with level DEBUG for class org.apache.synapse.rest.API - ``` - -### Update a logger - -You can use the command below to update the log level of an existing logger. - -- **Command** - ``` bash - apictl mi update log-level [logger-name] [log-level] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - !!! example - ```bash - apictl mi update log-level org-apache-coyote DEBUG -e dev - ``` - -- **Response** - - ```go - Successfully added logger for ('org-apache-coyote') with level DEBUG - ``` - -## Download log files - -### List available log files - -- **Command** - ``` bash - apictl mi get logs -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get logs -e dev - ``` - -- **Response** - - ```go - NAME SIZE - wso2carbon.log 429.5 KB - correlation.log 0 B - wso2carbon-trace-messages.log 0 B - wso2-mi-api.log 11.9 KB - patches.log 15.7 KB - audit.log 0 B - wso2-mi-service.log 10.3 KB - http_access_.log 35.8 KB - wso2error.log 156.2 KB - ``` - -### Download a specific log file - -- **Command** - ``` bash - apictl mi get logs [file-name] -p [download-location] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - - Optional : - `--path` or `-p` : Path the file should be downloaded (default is current executable directory) - - !!! 
example - ```bash - apictl mi get logs wso2carbon.log -p log-files -e dev - ``` - -- **Response** - - ```go - Log file downloaded to log-files/wso2carbon.log - ``` - -## Monitor transactions - -### Transaction Counts - -You can use the command below to get information about the inbound transactions received by the Micro Integrator. - -- **Command** - ``` bash - apictl mi get transaction-counts -e - ``` - ``` bash - apictl mi get transaction-counts [year] [month] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--format` : pretty-print using templates - - !!! example - ```bash - apictl mi get transaction-counts -e dev - ``` - ```bash - apictl mi get transaction-counts 2021 01 -e dev - ``` - -- **Response** - - ```go - YEAR MONTH TRANSACTION COUNT - 2021 1 126 - ``` - -### Transaction Reports - -You can use the command below to generate the transaction count summary report based on the inbound transactions received by the Micro Integrator. - -- **Command** - ``` bash - apictl mi get transaction-reports [start] [end] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator to be searched - - Optional : - `--path` or `-p` : Path the file should be downloaded (default is current executable directory) - - !!! example - ```bash - apictl mi get transaction-reports 2020-05 2020-06 -e dev - ``` - ```bash - apictl mi get transaction-reports 2020-05 -e dev -p reports/mi - ``` - -- **Response** - - ```go - Transaction Count Report created in reports/mi/transaction-count-summary-1610597725520763836.csv - ``` - -## Update HashiCorp AppRole Pull secret ID - -You can use the command below to update the HashiCorp AppRole Pull secret ID that is used by the Micro Integrator to connect with HashiCorp. - -!!! 
note - - The HashiCorp secret ID is only applicable when **AppRole Pull** authentication is used between the Micro Integrator and HashiCorp. - - This command only updates the SecretId for the current session of the Micro Integrator. To persist the Secret Id, you need to update the `deployment.toml` file and restart the Micro Integrator. - - See [Using HashiCorp Secrets]({{base_path}}/install-and-setup/setup/mi-setup/security/using-hashicorp-secrets) for details. - -- **Command** - ``` bash - apictl mi update hashicorp-secret [secret_id] -e - ``` - - !!! info - **Flags:** - - - Required : - `--environment` or `-e` : Environment of the Micro Integrator for which the HashiCorp secret ID should be updated. - - !!! example - ```bash - apictl mi update hashicorp-secret 47c39b09-c0a9-6ebf-196e-038eb7aad336 -e dev - ``` - -- **Response** - - ```go - SecretId value is updated in HashiCorp vault runtime configurations. To persist the new SecretId in the next server startup, please update the deployment.toml file - ``` diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/basic-health-checks.md b/en/docs/install-and-setup/setup/deployment-best-practices/basic-health-checks.md index b883efc0ad..1c6690a46a 100644 --- a/en/docs/install-and-setup/setup/deployment-best-practices/basic-health-checks.md +++ b/en/docs/install-and-setup/setup/deployment-best-practices/basic-health-checks.md @@ -56,61 +56,3 @@ Sample usages of this are shown below port: 9443 scheme: HTTPS ``` - -## Micro Integrator health checks - -WSO2 Micro Integrator provides a dedicated API for checking the health of the server. This can be used by a load -balancer prior to routing traffic to a particular server node. - -### Health Check API - -The health check API gives a **ready** status only if all the CApps are deployed successfully during server startup. If there are faulty CApps, the probe returns the list of faulty CApps. 
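A load balancer or deployment script can poll this API before routing traffic to a node. The following minimal Python sketch is illustrative only (it is not part of WSO2 tooling); it assumes the defaults described on this page, i.e. the health API serves on port `9191 + port offset` with a default offset of 10, giving `http://localhost:9201/healthz`:

```python
import time
import urllib.request

# Assumptions based on the defaults described on this page:
# the health API listens on 9191 + the server's port offset
# (default offset 10 -> port 9201).
BASE_INTERNAL_PORT = 9191
DEFAULT_PORT_OFFSET = 10

def healthz_url(port_offset: int = DEFAULT_PORT_OFFSET, host: str = "localhost") -> str:
    """Build the health check URL for a given port offset."""
    return f"http://{host}:{BASE_INTERNAL_PORT + port_offset}/healthz"

def wait_until_ready(url: str, timeout_s: float = 60.0, interval_s: float = 2.0) -> bool:
    """Poll the health API until it reports ready (HTTP 200) or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # server not reachable yet; keep polling
        time.sleep(interval_s)
    return False
```

For example, `wait_until_ready(healthz_url())` blocks until the default instance reports ready or one minute has elapsed.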
The health check API serves at:

`http://localhost:9201/healthz`

### Liveness Check API

The liveness check API gives a **ready** status when the server starts successfully.
The liveness check API serves at:

`http://localhost:9201/liveness`

!!! Note
    If you are running the server instance with a port offset other than the default (which is 10), the health
    check API serves at 9191 + offset.

### Readiness Probe

The readiness probe is a vital configuration for Kubernetes deployments because it governs the routing logic: requests
are not routed to a pod that is not ready.

Add the following configuration to your `deployment.yaml` file to configure the readiness probe for
the server. The **initial delay** and the **period** have to be fine-tuned according to your deployment.

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 9201
  initialDelaySeconds: 3
  periodSeconds: 1
```

### Liveness Probe

The liveness probe is a primary configuration in Kubernetes since it determines when to restart a container. For
example, if the server stops serving requests on the HTTP port, even though the server process is alive, the container needs to
be restarted so that the Micro Integrator instance can continue serving requests. The default HTTP socket of WSO2 Micro
Integrator can be used for the liveness health check.

Add the following configuration to your `deployment.yaml` file to configure the liveness probe for
the server. The **initial delay** and the **period** have to be fine-tuned according to your deployment.
```yaml
livenessProbe:
  tcpSocket:
    port: 8290
  initialDelaySeconds: 15
  periodSeconds: 5
```

diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset.md b/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset.md
index 1450f1b39e..fd89527667 100644
--- a/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset.md
+++ b/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset.md
@@ -69,98 +69,7 @@ The default port offset in the WSO2 API-M runtime is `0`. Use one of the followi
        api-manager.bat -DportOffset=3
        ```

When you offset the server's port, it automatically changes all ports.

## Changing the default MI ports

The default port offset in the WSO2 Micro Integrator runtime is `10`. Use one of the following two methods to apply an offset to the Micro Integrator runtime.

!!! Tip
    The internal offset of 10 is overridden by this manual offset. That is, if the manual offset is 3, the default ports change as follows:

    - `8290` -> `8283` (8290 - 10 + 3)
    - `8253` -> `8246` (8253 - 10 + 3)
    - `9164` -> `9157` (9164 - 10 + 3)

    Note that if you manually set an offset of 10 using the following method, you will get the same default ports.

#### Update the server configurations

1. Stop the MI server if it is already running.

2. Open the `/conf/deployment.toml` file.

3. Uncomment the `offset` parameter under `[server]` and set the offset value.

    === "Format"
        ```toml
        [server]
        offset=
        ```

    === "Example"
        ```toml
        [server]
        offset = 3
        ```

4. [Restart the server]({{base_path}}/install-and-setup/install/installing-the-product/running-the-mi).

#### Pass the port offset during server startup

1. Stop the MI server if it is already running.

2. Restart the server with the `-DportOffset` system property.
- - - Linux/Mac OS - - === "Format" - ```toml - ./micro-integrator.sh -DportOffset= - ``` - - === "Example" - ```toml - ./micro-integrator.sh -DportOffset=3 - ``` - - - Windows - - === "Format" - ```toml - micro-integrator.bat -DportOffset= - ``` - - === "Example" - ```toml - micro-integrator.bat -DportOffset=3 - ``` - -#### Changing the default EI Analytics ports - -If required, you can manually change the HTTP/HTTPS ports in the `deployment.yaml` file (stored in `EI_ANALYTICS_HOME/conf/server` folder) as shown below. - -!!! Note - With the default internal port offset, the effective port is https_port + 1. - -=== "HTTPS Port" - ```yaml - wso2.transport.http: - listenerConfigurations: - - - id: "msf4j-https" - host: "0.0.0.0" - port: https_port - scheme: https - ``` - -=== "HTTP Port" - ```yaml - wso2.transport.http: - listenerConfigurations: - - - id: "default" - host: "0.0.0.0" - port: http_port - ``` +When you offset the server's port, it automatically changes all ports. ## Changing the default SI ports diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-hostname.md b/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-hostname.md index 9a9a7b1e65..9f382dc851 100644 --- a/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-hostname.md +++ b/en/docs/install-and-setup/setup/deployment-best-practices/changing-the-hostname.md @@ -54,30 +54,3 @@ Follow the steps given below. !!! Warning After you change the hostname, if you encounter login failures when trying to access the API Publisher and API Developer Portal with the error `Registered callback does not match with the provided url`, see ['Registered callback does not match with the provided url' error]({{base_path}}/troubleshooting/troubleshooting-invalid-callback-error) in the Troubleshooting guide. - -## Changing the Micro Integrator hostname - -Follow the steps given below. - -1. Open the `/conf/deployment.toml` file -2. 
Define the `hostname` attribute under the `[server]` configuration as shown below.

    === "Format"
        ``` toml
        [server]
        hostname = "{hostname}"
        ```

    === "Example"
        ``` toml
        [server]
        hostname = "localhost"
        ```

To configure hostnames for WSDLs and endpoints, it is recommended to add the following parameter for the transport listener in the `deployment.toml` file.

```toml
[transport.http]
listener.wsdl_epr_prefix="$ref{server.hostname}"
```

diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/monitoring-transaction-counts.md b/en/docs/install-and-setup/setup/deployment-best-practices/monitoring-transaction-counts.md
deleted file mode 100644
index 75d58c66a1..0000000000
--- a/en/docs/install-and-setup/setup/deployment-best-practices/monitoring-transaction-counts.md
+++ /dev/null
@@ -1,138 +0,0 @@
# Monitoring Integration Transaction Counts

A **Transaction** in WSO2 Micro Integrator is typically defined as an inbound request (a request coming to the server). That is, any inbound request to a [REST API]({{base_path}}/integrate/develop/creating-artifacts/creating-an-api), [Proxy service]({{base_path}}/integrate/develop/creating-artifacts/creating-a-proxy-service), or [Inbound Endpoint]({{base_path}}/integrate/develop/creating-artifacts/creating-an-inbound-endpoint) is considered one transaction.

However, when the Micro Integrator is configured as both the message producer and consumer to handle **asynchronous** messaging scenarios, the two requests (the listening request and the sending request) are counted as a single transaction.

If you need to track the number of transactions in your Micro Integrator deployment, you can enable the transaction counter component in each Micro Integrator instance of your deployment.
Currently, the transaction counter is responsible for counting all requests received via the [HTTP Passthru]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-httphttps-transport) and [JMS]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-jms-transport) transports and for persisting the summary of the transaction count in a database for future use. - -Follow the instructions given below. - -## Step 1 - Enabling the transaction counter - -Configure a relational database to persist transaction count information and then enable the **Transaction Counter** component from the `deployment.toml` file (stored in the `/conf` folder). - -1. Select the preferred database type from the list given below and follow the relevant link to set up a database. - - - [Setting up a MySQL database]({{base_path}}/install-and-setup/setup/mi-setup/databases/setting-up-MySQL) - - [Setting up an MSSQL database]({{base_path}}/install-and-setup/setup/mi-setup/databases/setting-up-MSSQL) - - [Setting up an Oracle database]({{base_path}}/install-and-setup/setup/mi-setup/databases/setting-up-Oracle) - - [Setting up a Postgre database]({{base_path}}/install-and-setup/setup/mi-setup/databases/setting-up-PostgreSQL) - - [Setting up an IBM database]({{base_path}}/install-and-setup/setup/mi-setup/databases/setting-up-IBM-DB2) - -2. 
Once you have set up the database, verify that the `deployment.toml` file of your Micro Integrator contains the relevant datasource configurations: - - === "MySQL" - ```toml - [[datasource]] - id = "WSO2_TRANSACTION_DB" - url= "jdbc:mysql://localhost:3306/transactiondb" - username="root" - password="root" - driver="com.mysql.jdbc.Driver" - pool_options.maxActive=50 - pool_options.maxWait = 60000 - pool_options.testOnBorrow = true - ``` - - === "MSSQL" - ```toml - [[datasource]] - id = "WSO2_TRANSACTION_DB" - url= "jdbc:sqlserver://:1433;databaseName=transactiondb;SendStringParametersAsUnicode=false" - username="root" - password="root" - driver="com.microsoft.sqlserver.jdbc.SQLServerDriver" - pool_options.maxActive=50 - pool_options.maxWait = 60000 - pool_options.testOnBorrow = true - ``` - - === "Oracle" - ```toml - [[datasource]] - id = "WSO2_TRANSACTION_DB" - url= "jdbc:oracle:thin:@SERVER_NAME:PORT/SID" - username="root" - password="root" - driver="oracle.jdbc.OracleDriver" - pool_options.maxActive=50 - pool_options.maxWait = 60000 - pool_options.testOnBorrow = true - ``` - - === "PostgreSQL" - ```toml - [[datasource]] - id = "WSO2_TRANSACTION_DB" - url= "jdbc:postgresql://localhost:5432/transactiondb" - username="root" - password="root" - driver="org.postgresql.Driver" - pool_options.maxActive=50 - pool_options.maxWait = 60000 - pool_options.testOnBorrow = true - ``` - - === "IBM DB" - ```toml - [[datasource]] - id = "WSO2_TRANSACTION_DB" - url="jdbc:db2://SERVER_NAME:PORT/transactiondb" - username="root" - password="root" - driver="com.ibm.db2.jcc.DB2Driver" - pool_options.maxActive=50 - pool_options.maxWait = 60000 - pool_options.testOnBorrow = true - ``` - -3. Add the parameters given below to the `deployment.toml` file and update the values. - - ```toml - [transaction_counter] - enable = true - data_source = "WSO2_TRANSACTION_DB" - update_interval = 2 - ``` - - Parameters used above are explained below. - - - - - - - - - - - - - - - - - - -
    ParameterDescription
    - enable - - This parameter is used for enabling the Transaction Counter. The default value is 'false'. -
    - data_source - - The ID of the datasource. This refers to the datasource ID configured under the datasource configuration. -
    - update_interval - - The interval (in minutes, specified by this parameter) between the insert queries that store the transaction count in the database. The default update interval is one minute. -
    - -## Step 2 - Getting the transaction count - -You can get the transaction count for a particular month or period. This data can be viewed or saved to a report. There are two ways to get transaction count data: - -- Start the [APICTL]({{base_path}}/install-and-setup/setup/api-controller/getting-started-with-wso2-api-controller) and use the [mi transaction]({{base_path}}/install-and-setup/setup/api-controller/managing-integrations/managing-integrations-with-ctl/#monitor-transactions) option. - -- Directly access the [Management API resources]({{base_path}}/observe/mi-observe/working-with-management-api) and invoke the [/transaction/count]({{base_path}}/observe/mi-observe/working-with-management-api/#get-transaction-count) and [/transaction/report]({{base_path}}/observe/mi-observe/working-with-management-api/#get-transaction-report-data) resources. \ No newline at end of file diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/production-deployment-guidelines.md b/en/docs/install-and-setup/setup/deployment-best-practices/production-deployment-guidelines.md index 3d65298ff1..f11134882c 100644 --- a/en/docs/install-and-setup/setup/deployment-best-practices/production-deployment-guidelines.md +++ b/en/docs/install-and-setup/setup/deployment-best-practices/production-deployment-guidelines.md @@ -67,12 +67,6 @@ Given below is a checklist that will guide you to set up your production environ
    Database registry for the API-M runtime. -

    The Micro Integrator runtime uses a file-based registry instead of a database.

    - @@ -85,9 +79,6 @@ Given below is a checklist that will guide you to set up your production environ
  • Performance Tuning - WSO2 API-M runtime
  • -
  • - Performance tuning - WSO2 Micro Integrator -
  • @@ -105,22 +96,15 @@ Given below is a checklist that will guide you to set up your production environ
  • 8280 - Default HTTP port used by ESB for proxy services.
  • 8243 - Default HTTPS port used by ESB for proxy services.
  • - Micro Integrator Ports -
      -
    • 8290 - Default HTTP port used by the Micro Integrator for proxy services and APIs.
    • -
    • 8253 - Default HTTPS port used by the Micro Integrator for proxy services and APIs.
    • -
    • 9164 - Default HTTPS port used by the Micro Integrator Management APIs.
    • -
    Proxy servers - If the runtime is hosted behind a proxy such as ApacheHTTPD, you can configure the runtime to use the proxy server. See the following topics for instructions: + If the runtime is hosted behind a proxy such as ApacheHTTPD, you can configure the runtime to use the proxy server. See the following topic for instructions: diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/security-guidelines-for-production-deployment.md b/en/docs/install-and-setup/setup/deployment-best-practices/security-guidelines-for-production-deployment.md index 3a8f27f7ca..bc07fd8e27 100644 --- a/en/docs/install-and-setup/setup/deployment-best-practices/security-guidelines-for-production-deployment.md +++ b/en/docs/install-and-setup/setup/deployment-best-practices/security-guidelines-for-production-deployment.md @@ -211,7 +211,7 @@ instead of granting all permission to one administrator, you can distribute the configured in the <PRODUCT_HOME>/repository/conf/log4j2.properties file. Rollover based on a time period can be configured by changing the below configuration (Default value is 1 day).

    appender.CARBON_LOGFILE.policies.time.interval = 1

    You can also configure rollover based on log file size, and also it is possible to limit the number of backup -files. For details on how to configure log rotation and manage log growth details in the API-M runtime, see Managing log growth.

    +files. For details on how to configure log rotation and manage log growth details in the API-M runtime, see Managing log growth.

    Prevent log forging

    @@ -266,236 +266,6 @@ been removed from Hotspot JVM.

    -### Micro Integrator runtime security - -Given below are the security guidelines for the Micro Integrator runtime. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    GuidelineDetails

    Apply security updates

    Apply all the security patches relevant to your product version. Use WSO2 Updates to get the latest security patches.

    - -

    Note the following:

    -
      -
    • WSO2 releases security patch notifications monthly via the Support Portal. However, WSO2 issues patches immediately to customers if there are highly critical issues.
    • -
    • WSO2 does not issue patches publicly for older product versions. Community users are encouraged to use the latest product version to receive fixes for all the security issues resolved up to that particular product release.
    • -
    • WSO2 Docker repository releases docker images with security fixes. Users with a subscription can fetch these docker images.
    • -
    -
    -

    Change default keystores

    -
    -

    Change the default key stores and create new keys for all the cryptographic operations. WSO2 products, by default, come with a self-signed SSL key. Since these keys are public, it is recommended to configure your own keys for security purposes. Consider the following guidelines when creating the keystores:

    -
      -
    • -

      Select a key size of at least 2048 bits.

      -
    • -
    • -

      Use an SHA256 certificate.

      -
    • -
    • -

      Make sure that WSO2 default certificates do not exist in any of the keystores in your production environment. For example, be sure to delete the default public certificate in the default trust store that is shipped with the product.

      -
    • -
    - See Creating New Keystores for information on how to create and configure your own keys. -

    -
    Encrypt passwords in configuration files -

    WSO2 products use a tool called Secure Vault to encrypt the plain-text passwords in configuration files.

    -

    See Securing Passwords in Configuration Files for instructions.

    -
    -

    Change default ports

    -


    -
    -

    For information on all the default ports used by WSO2 API Manager, see Default Product Ports.

    -

    For information on changing a default port, see Changing the Default Ports with Offset.

    -
    -

    Enable read-only access to external user stores (LDAPs etc.)

    -
    -

    If your product runtimes are connecting to an external user store for the purpose of reading and retrieving user information, be sure to enable read-only access to that user store.

    -

    See Configuring a User Store for the Micro Integrator runtime.

    -
    -

    Always communicate (with external user stores) over TLS

    -
    -

    All connections from your server to external databases, userstores (LDAP), or other services, should be over TLS, to ensure adequate network-level protection. Therefore, be sure to use external systems (user stores, databases) that are TLS-enabled.

    -
    -

    Connect to data stores using a less privileged user

    -
    -

    When connecting the server to external databases or user stores (LDAP), be sure to go through a user who does not have permission to change the data store's schema. Be sure not to use the root user of the data store because all permissions are generally granted to the root user.

    -
    -

    Configure strong HTTP(S) security

    -
    -

    To have strong transport-level security, use TLS 1.2 and disable SSL, TLS 1.0 and 1.1. The TLS protocol and strong ciphers are configured for the passthrough transport in the deployment.toml file. See the following links for instructions:

    -

    Configuring Transport-Level Security

    -

    Note the following:

    -
      -
    • When deciding on the TLS protocol and the ciphers, consider the compatibility with existing client applications. Imposing maximum security might cause functional problems with client applications.
    • -
    • Apply ciphers with 256 bit key length if you have applied unlimited strength policy. Note that unlimited strength policy is recommended.
    • -
    • - Also, consider the following factors when deciding on the ciphers: -
        -
      • DES/3DES are deprecated and should not be used.
      • -
      • MD5 should not be used due to known collision attacks.
      • -
      • RC4 should not be used due to crypto-analytical attacks.
      • -
      • DSS is limited to a small 1024 bit key size.
      • -
      • Cipher-suites that do not provide Perfect Forward Secrecy/ Forward Secrecy (PFS/FS).
      • -
      • GCM based ciphers are recommended over CBC ciphers.
      • -
      -
    • -
    -
    -

    Remove weak ciphers for PassThrough transport

    -
    -

    Remove any weak ciphers from the PassThrough transport and ensure that the server does not accept connections using those weak ciphers. The PassThrough transport is configured using the deployment.toml file. -

    See Disabling Weak Ciphers for instructions.

    -
    -

    Increase Ephemeral Diffie-Hellman Key size

    -
    -

    Before starting the server, open the product startup script ( micro-integrator.sh in Linux and micro-integrator.bat in Windows) and enter the following with the other Java properties:

    -
    -
    -
    -
    -Djdk.tls.ephemeralDHKeySize=2048 \
    -
    -
    -
    -
    -

    Disable client-initiated renegotiation

    -


    -
    -

    Before starting the server, open the product startup script ( micro-integrator.sh in Linux and micro-integrator.bat in Windows) and enter the following with the other Java properties:

    -
    -
    -
    -
    -Djdk.tls.rejectClientInitiatedRenegotiation=true \
    -
    -
    -
    -
    -

    Enable HostName Verification

    -


    -
    -

    Make sure that hostname verification is enabled in the product startup script ( micro-integrator.sh in Linux and micro-integrator.bat in Windows) with the Strict mode. That is, you need to enable the following parameter:

    -
    -
    -
    -
    -Dhttpclient.hostnameVerifier=Strict \
    -
    -
    -
    -
    -

    Verify super admin credentials

    -


    -
    -

    The user name and password of the super administrator of your Micro Integrator (the first administrator) are created by adding the following configuration to the deployment.toml file. When you go into production, be sure to manually check your user store and ensure that unwanted super admin records are removed.

    -
    -
    -
    -
    [super_admin]
    -username = "admin"
    -password = "admin"
    -admin_role = "admin"
    -create_admin_account = true
    -
    -
    -
    -

    Note that you can easily use the management API to add, update, and delete admins and regular users in the user store. However, the super admin users created from the deployment.toml file should be managed manually.

    -

    See the following topics for instructions to correctly create your administrators and other users in the Micro Integrator.

    - -
    -

    Enable log rotation and monitoring

    -


    -
    -

    Ensure that you have a relevant log rotation scheme to manage logs. Log4J properties for Micro Integrator can be configured in the <MI_HOME>/conf/log4j2.properties file. To roll the wso2carbon.log based on size, this guide can be used.

    -

    See Monitoring Logs for details on how to configure logging details in WSO2 products.

    -
    -

    Prevent Log Forging

    -
    -

    Log forging can be prevented by appending a UUID to the log message.

    -

    Read about configuring logs in the Micro Integrator.

    -
    -

    Set appropriate JVM parameters

    -


    -
    -

    The recommended JDK version is JDK 11. See the installation pre-requisites for more information.

    -

    Tip: To run the JVM with 2 GB (-Xmx2048m), you should ideally have about 4GB of memory on the physical machine.

    -
    - ### Streaming Integrator runtime security Given below are the security guidelines for the Streaming Integrator runtime. @@ -541,7 +311,7 @@ Given below are the security guidelines for the Streaming Integrator runtime.

    Make sure that WSO2 default certificates do not exist in any of the keystores in your production environment. For example, be sure to delete the default public certificate in the default trust store that is shipped with the product.

    - See Creating New Keystores for information on how to create and configure your own keys. + See Creating New Keystores for information on how to create and configure your own keys.

    @@ -549,7 +319,7 @@ Given below are the security guidelines for the Streaming Integrator runtime. Encrypt passwords in configuration files

    WSO2 products use a tool called Secure Vault to encrypt the plain-text passwords in configuration files.

    -

    See Securing Passwords in Configuration Files for instructions.

    +

    See Securing Passwords in Configuration Files for instructions.

    @@ -664,7 +434,7 @@ Given below are the security guidelines for the Streaming Integrator runtime.


    -

    Ensure that you have a relevant log rotation scheme to manage logs. Log4J properties for Streaming Integrator can be configured in the <SI_HOME>/conf/server/log4j2.xml file. To roll the wso2carbon.log based on size, this guide can be used.

    +

    Ensure that you have a relevant log rotation scheme to manage logs. Log4J properties for Streaming Integrator can be configured in the <SI_HOME>/conf/server/log4j2.xml file. To roll the wso2carbon.log based on size, this guide can be used.

    diff --git a/en/docs/install-and-setup/setup/deployment-best-practices/tuning-performance.md b/en/docs/install-and-setup/setup/deployment-best-practices/tuning-performance.md index 4f5779d064..839f9e83e8 100644 --- a/en/docs/install-and-setup/setup/deployment-best-practices/tuning-performance.md +++ b/en/docs/install-and-setup/setup/deployment-best-practices/tuning-performance.md @@ -96,7 +96,7 @@ The following diagram shows the communication/network paths that occur when an A - **Client call API Gateway + API Gateway call Backend** - For backend communication, the API Manager uses PassThrough transport. This is configured in the `/repository/conf/deployment.toml` file. For more information, see [Configuring PassThrough properties]({{base_path}}/install-and-setup/setup/mi-setup/transport_configurations/configuring-transports/#configuring-the-httphttps-transport). Add the following section to the `deployment.toml` file to configure the Socket timeout value. + For backend communication, the API Manager uses PassThrough transport. This is configured in the `/repository/conf/deployment.toml` file. For more information, see [Configuring PassThrough properties](https://mi.docs.wso2.com/en/latest/install-and-setup/setup/transport-configurations/configuring-transports/#configuring-the-httphttps-transport). Add the following section to the `deployment.toml` file to configure the Socket timeout value. ``` java [passthru_http] http.socket.timeout=180000 diff --git a/en/docs/install-and-setup/setup/deployment-overview.md b/en/docs/install-and-setup/setup/deployment-overview.md index 96efdee5e5..92d3bec5a7 100644 --- a/en/docs/install-and-setup/setup/deployment-overview.md +++ b/en/docs/install-and-setup/setup/deployment-overview.md @@ -29,7 +29,7 @@ The integration cluster may be a Micro Integrator cluster or a Streaming Integra
    • - Micro Integrator Cluster with Minimum High Availability + Micro Integrator Cluster with Minimum High Availability
    • Streaming Integrator Cluster with Minimum High Availability @@ -63,7 +63,7 @@ The integration cluster consists of two nodes of the integration runtime for eac
      • - Micro Integrator Cluster with Minimum High Availability + Micro Integrator Cluster with Minimum High Availability
      • Streaming Integrator Cluster with Minimum High Availability @@ -112,7 +112,7 @@ The integration cluster consist of a minimum of two nodes of the integration run
        • - Micro Integrator Cluster with Minimum High Availability + Micro Integrator Cluster with Minimum High Availability
        • Streaming Integrator Cluster with Minimum High Availability @@ -170,7 +170,7 @@ The integration cluster consist of a minimum of two nodes of the integration run
          • - Micro Integrator Cluster with Minimum High Availability + Micro Integrator Cluster with Minimum High Availability
          • Streaming Integrator Cluster with Minimum High Availability diff --git a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install.md b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install.md index d5d40b196e..53d556e680 100644 --- a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install.md +++ b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install.md @@ -184,5 +184,5 @@ By default, the K8s API operator is configured to watch the deployed namespace. ## What's Next -- [Deploying Integrations using the Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments) +- [Deploying Integrations using the Operator](https://mi.docs.wso2.com/en/latest/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments/) - [Deploying APIs using the Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-apis/api-deployments) diff --git a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments.md b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments.md deleted file mode 100644 index 50e1dc125c..0000000000 --- a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments.md +++ /dev/null @@ -1,655 +0,0 @@ -# Deploying Integrations using the Operator - -The Kubernetes API operator (**k8s-api-operator**) provides first-class support for Micro Integrator deployments in the Kubernetes ecosystem. It uses the **Integration custom resource** (`integration_cr.yaml` file) that is available in the Kubernetes exporter module (exported from WSO2 Integration Studio) and deploys the integration in your Kubernetes environment. 
- -The operator is configured with an NGINX Ingress controller by default, which exposes the deployed integration through HTTP/HTTPS protocols. If required, you can use the operator's configuration mapper (`integration_controller_conf.yaml` file) to update ingress controller configurations. Also, you can use the same file to disable ingress controller creation when applying the operator if you plan to use a custom ingress controller. -## Prerequisites (system requirements) - -Listed below are the system requirements for deploying integration solutions in Kubernetes using the K8s API Operator. - -!!! Info - The K8s API Operator (k8s-api-operator) is built with **operator-sdk v0.16.0** and supported in the below environments. - -- [Kubernetes](https://kubernetes.io/docs/setup/) cluster and **v1.14+** client. -- [Docker](https://docs.docker.com/) -- [Install K8s API Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install) - -## Deploy integration solutions in K8s - -!!! Tip - To try the end-to-end process of deploying integration solutions on Kubernetes, see the integration examples: - - - [Hello World example]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/hello-world) - - [Message Routing example]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/content-based-routing) - - [JMS Sender/Receiver example]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/jms-sender-receiver) - -Given below are the main steps you will follow when you deploy integration solutions in a Kubernetes cluster. - -1. 
Be sure that the [system requirements](#prerequisites-system-requirements) are fulfilled, and that the [K8s API operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install) is installed in your Kubernetes environment. -2. Your integration solution should be prepared using **WSO2 Integration Studio** as follows: - - 1. Create the integration solution. - 2. Generate a Kubernetes project for the solution. - 3. Build the Docker image of your integration and push it to your Docker registry. - -3. Open the `integration_cr.yaml` file from the Kubernetes project in WSO2 Integration Studio. -4. See that the details of your **integration** are correctly included in the file. See the example given below. - - ```yaml - apiVersion: "wso2.com/v1alpha2" - kind: "Integration" - metadata: - name: "sample-integration" - spec: - image: "" - deploySpec: - minReplicas: 1 - requestCPU: "500m" - reqMemory: "512Mi" - cpuLimit: "2000m" - memoryLimit: "2048Mi" - livenessProbe: - tcpSocket: - port: 8290 - initialDelaySeconds: 30 - periodSeconds: 10 - readinessProbe: - httpGet: - path: /healthz - port: 9201 - initialDelaySeconds: 30 - periodSeconds: 10 - autoScale: - enabled: "true" - maxReplicas: 3 - expose: - passthroPort: 8290 - inboundPorts: - - 8000 - - 9000 - ..... - env: - - name: DEV_ENV_USER_EP - value: "https://reqres.in/api" - - name: SECRET_USERNAME - valueFrom: - secretRef: - name: backend-user - key: backend-username - - name: CONFIGMAP_USERNAME - valueFrom: - configMapRef: - name: backend-map - key: backend-username - envFrom: - - configMapRef: - name: CONFIG_MAP_NAME - - secretRef: - name: SECRET_NAME - ``` - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
            - Property - - Description -
            - kind - - The Integration kind represents the integration solution that will be deployed in the Kubernetes environment. -
            - metadata name - - The name of the integration solution. -
            - minReplicas - - The minimum number of pods that should be created in the Kubernetes cluster. If not defined, the operator picks up the default defined in the integration_controller_conf.yaml file. -
            - image - - Specifies the Docker image of your integration solution. If you are using a Docker image from a private registry, you need to pass the registry credentials to the deployment as a Kubernetes secret. Follow the instructions in using a private registry image. -
            - cpuLimit - - Describes the maximum amount of compute resources allowed. -
            - requestCPU - - Describes the minimum amount of compute resources required. -
            - memoryLimit - - Describes the maximum amount of memory resources allowed. -
            - reqMemory - - Describes the minimum amount of memory resources required. -
            - livenessProbe - - Describes the liveness probe that lets Kubernetes know when to restart the containers. -
            - readinessProbe - - Describes the readiness probe that lets Kubernetes know when the container is ready to start accepting traffic. -
            - autoScale.enabled - - Enables autoscaling with the Horizontal Pod Autoscaler (HPA). -
            - maxReplicas - - The maximum number of pods that should be created in the Kubernetes cluster. If not defined, the operator picks up the default defined in the integration_controller_conf.yaml file. -
            - passthroPort - - Passthrough port of the runtime server to be exposed. -
            - inboundPorts - - Inbound endpoint ports of the runtime server to be exposed. -
            - env - - Key-value pairs or valueFrom references set as environment variables. -
            - envFrom - - Environment variables from ConfigMap or Secret references. -
            - -5. Open a terminal, navigate to the location of your `integration_cr.yaml` file, and execute the following command to deploy the integration solution into the Kubernetes cluster: - ```bash - kubectl apply -f integration_cr.yaml - ``` - -When the integration is successfully deployed, it should create the `hello-world` integration, `hello-world-deployment`, `hello-world-service`, and `ei-operator-ingress` as follows: - -!!! Tip - The `ei-operator-ingress` will not be created if you have [disabled the ingress controller](#Disable-ingress-controller-creation). - -```bash -kubectl get integration - -NAME STATUS SERVICE-NAME AGE -hello-world Running hello-service 2m - -kubectl get deployment - -NAME READY UP-TO-DATE AVAILABLE AGE -hello-world-deployment 1/1 1 1 2m - -kubectl get services -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hello-world-service ClusterIP 10.101.107.154 8290/TCP 2m -kubernetes ClusterIP 10.96.0.1 443/TCP 2d -k8s-api-operator ClusterIP 10.98.78.238 443/TCP 1d - -kubectl get ingress -NAME HOSTS ADDRESS PORTS AGE -api-operator-ingress wso2ei.ingress.wso2.com 10.0.2.15 80, 443 2m -``` - -## Use Docker images from private registries - -The API operator allows the use of any custom Docker image in a private registry as an integration solution. To achieve this, users have to pass the credentials to pull the image to the Kubernetes containers as a Kubernetes secret. The API operator uses `imagePullSecret` to read the Kubernetes secret of the registry credentials. - -1. You can use the following command to create a Kubernetes secret with the credentials of the private Docker image: - - ```bash - kubectl create secret docker-registry --docker-server= --docker-username= --docker-password= --docker-email= - ``` - - - - - - - - - - - - - - - - - - - - - - - - - - -
            - Parameter - - Description -
            - secret-alias - - The name of the secret. -
            - your-registry-server - - The private Docker registry FQDN for DockerHub. -
            - your-name - - Your Docker user name. -
            - your-password - - Your Docker password. -
            - your-email - - Your Docker email. -
            - -2. Add the `imagePullSecret` property to the `integration_cr.yaml` custom resource file as follows: - - ```yaml - apiVersion: "wso2.com/v1alpha2" - kind: "Integration" - metadata: - name: "hello-world" - spec: - image: "" - imagePullSecret: - deploySpec: - minReplicas: 1 - ``` - -3. You can now deploy the integration: Open a terminal and execute the following command from the location of your `integration_cr.yaml` file. - - ```bash - kubectl apply -f integration_cr.yaml - ``` - -## View integration process logs - -Once you have [deployed your integrations](#deploy-integration-solutions-in-k8s) in the Kubernetes cluster, see the output of the running integration solutions using the pod's logs. - -1. First, you need to get the associated **pod id**. Use the `kubectl get pods` command to list down all the deployed pods. - - ```bash - kubectl get pods - - NAME READY STATUS RESTARTS AGE - hello-deployment-c68cbd55d-j4vcr 1/1 Running 0 3m - k8s-api-operator-6698d8f69d-6rfb6 1/1 Running 0 2d - ``` - -2. To view the logs of the associated pod, run the `kubectl logs ` command. This will print the output of the given pod ID. - - ```bash - kubectl logs hello-deployment-c68cbd55d-j4vcr - - ... 
- [2019-10-28 05:29:24,225] INFO {org.wso2.micro.integrator.initializer.deployment.application.deployer.CAppDeploymentManager} - Successfully Deployed Carbon Application : HelloWorldCompositeApplication_1.0.0{super-tenant} - [2019-10-28 05:29:24,242] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through HTTP Listener started on 0.0.0.0:8290 - [2019-10-28 05:29:24,242] INFO {org.apache.axis2.transport.mail.MailTransportListener} - MAILTO listener started - [2019-10-28 05:29:24,250] INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through HTTPS Listener started on 0.0.0.0:8253 - [2019-10-28 05:29:24,251] INFO {org.wso2.micro.integrator.initializer.StartupFinalizer} - WSO2 Micro Integrator started in 4 seconds - ``` - -## Invoke the integration solution - -You can invoke the integration solution you deployed in Kubernetes using two methods. - -### Invoke using Ingress controller - -Once you have [deployed your integrations](#deploy-integration-solutions-in-k8s) in the Kubernetes cluster, you can use the default Ingress controller in the deployment to invoke the solution: - -1. Obtain the **External IP** of the ingress load balancer using the `kubectl get ingress` command as follows: - - ```bash - kubectl get ingress - NAME HOSTS ADDRESS PORTS AGE - api-operator-ingress wso2ei.ingress.wso2.com 10.0.2.15 80, 443 2m - ``` - For **Minikube**, you have to use the Minikube IP as the external IP. Hence, run `minikube ip` command to get the IP of the Minikube cluster. - -2. Add the **HOST** (`wso2ei.ingress.wso2.com`) and related **ADDRESS** (external IP) to the `/etc/hosts` file in your machine. - - !!! Tip - Note that the HOST of the Ingress controller is configured in the [integration_controller_conf.yaml](https://github.com/wso2/k8s-api-operator/blob/master/api-operator/deploy/controller-configs/integration_controller_conf.yaml) file. The default host is `wso2ei.ingress.wso2.com`. 
3. Execute the following cURL command to run the `hello-world` service in Kubernetes:

    ```bash
    curl http://wso2ei.ingress.wso2.com/hello-world-service/services/HelloWorld
    ```

    You will receive the following response:

    ```bash
    {"Hello":"World"}%
    ```

### Invoke without Ingress controller

Once you have [deployed your integrations](#deploy-integration-solutions-in-k8s) in the Kubernetes cluster, you can also invoke the integration solutions without going through the Ingress controller by using the [port-forward](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod) method for services.

Follow the steps given below:

1. Apply port forwarding:

    ```bash
    kubectl port-forward service/hello-world-service 8290:8290
    ```

2. Invoke the proxy service:

    ```bash
    curl http://localhost:8290/services/HelloWorld
    ```

    You will receive the following response:

    ```bash
    {"Hello":"World"}%
    ```

!!! Tip
    The `ei-operator-ingress` will not be created if you have [disabled the ingress controller](#disable-ingress-controller-creation).


## Update existing integration deployments

The K8s API operator allows you to update the Docker image used by the Kubernetes pods (replicas) with the latest update of a tag. To pull the latest tag, delete the associated pod by its pod ID:

```bash
kubectl delete pod <pod-name>
```

When you run the above command, Kubernetes spawns a temporary pod with the same behavior as the pod you deleted. The deleted pod then restarts by pulling the latest tag from the Docker image path.

!!! Note
    We recommend using a different image path for the updated integration solution. Otherwise, some external traffic may be lost while the Docker image is re-pulled into the existing deployment.
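The recommendation above can be sketched as follows. Assuming the updated solution was pushed as the hypothetical image `docker_user/hello-world:1.0.1`, you would point the custom resource at the new image path and reapply it:

```yaml
apiVersion: "wso2.com/v1alpha2"
kind: "Integration"
metadata:
  name: "hello-world"
spec:
  # Hypothetical updated image path; pointing the CR at a new tag avoids
  # re-pulling the same tag into a live deployment.
  image: "docker_user/hello-world:1.0.1"
  deploySpec:
    minReplicas: 1
```

Reapplying the file with `kubectl apply -f integration_cr.yaml` then rolls the deployment to the new image without manually deleting pods.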
## Run inbound endpoints

[Inbound Endpoints]({{base_path}}/reference/synapse-properties/inbound-endpoints/about-inbound-endpoints/) in the Micro Integrator are used for separating endpoint listeners: messages for each HTTP inbound endpoint are handled separately. You can also create any number of inbound endpoints on any port.

Therefore, you can expose the inbound endpoint ports from the Kubernetes cluster by passing the `inboundPorts` property inside the `integration_cr.yaml` custom resource file as follows:

```yaml
apiVersion: "integration.wso2.com/v1alpha2"
kind: "Integration"
metadata:
  name: "inbound-samples"
spec:
  image: "<Docker image path>"
  deploySpec:
    minReplicas: 1
  expose:
    passthroPort: 8290
    inboundPorts:
      - 8000
      - 9000
  ...
```

Use the following methods to invoke the inbound endpoints over the HTTP and HTTPS transports. Note that `<integration-name>` is the value used as the metadata name in the `integration_cr.yaml` file.

- HTTP request

    ```bash
    curl http://<HOST>/<integration-name>-inbound/<port>/<context>
    ```

- HTTPS request

    ```bash
    curl --cacert <certificate> https://<HOST>/<integration-name>-inbound/<port>/<context>
    ```


## Manage resources of pods in integration deployment

The K8s operator allows you to define the resources that are required for running the pods in a deployment. You can also define resource limits when you define these resources. The following example shows how CPU and memory resources are configured in the `integration_cr.yaml` file.

```yaml
apiVersion: wso2.com/v1alpha2
kind: Integration
metadata:
  name: test-integration
spec:
  image: <Docker image path>
  deploySpec:
    minReplicas: 1
    requestCPU: "500m"
    reqMemory: "512Mi"
    cpuLimit: "2000m"
    memoryLimit: "2048Mi"
```

If you don't have any resources defined in the `integration_cr.yaml` file, the K8s operator refers to the default resource configurations in the `integration_controller_conf.yaml` file.
Therefore, be sure to update the `integration-config` section in the `integration_controller_conf.yaml` file with the default resource configurations. See the example given below.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: integration-config
data:
  requestCPU: "500m"
  reqMemory: "512Mi"
  cpuLimit: "2000m"
  memoryLimit: "2048Mi"
```

See [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation for more details.


## Enable Auto Scaling for deployment

When the traffic to your pods increases, the deployment may need to scale horizontally.
Kubernetes allows you to define the resource limits and policy in a way that the deployment can auto scale based on resource usage.
See [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) in the Kubernetes documentation for details.

You can enable auto scaling and define the maximum number of replicas to scale to by updating the `autoScale` section in the `integration_cr.yaml` file as shown below.

```yaml
apiVersion: wso2.com/v1alpha2
kind: Integration
metadata:
  name: test-integration
spec:
  autoScale:
    enabled: "true"
    maxReplicas: 3
```

If you don't have auto scaling defined in the `integration_cr.yaml` file, the K8s operator refers to the default configurations in the `integration_controller_conf.yaml` file. Therefore, be sure to update the `integration_controller_conf.yaml` file with auto scaling configurations as shown below.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: integration-config
data:
  enableAutoScale: "true"
  maxReplicas: "5"
```

You can also set the required resources and resource limits for the pods in the deployment as described above in [Manage resources of pods in integration deployment](#manage-resources-of-pods-in-integration-deployment). HPA configurations are injected through the integration configmap.
See how this is defined in the [integration_controller_conf.yaml](https://github.com/wso2/k8s-api-operator/blob/master/api-operator/deploy/controller-configs/integration_controller_conf.yaml) file:

```yaml
  hpaMetrics: |
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

## Liveness and readiness probes

The operator gives you the flexibility of defining liveness and readiness probes. These probes are used by Kubernetes to know whether the pods are live and whether they are ready to accept traffic.

You can configure the default probe definitions in the `integration_controller_conf.yaml` file as shown below.

The Micro Integrator is considered **live** when it is accepting HTTP/HTTPS traffic. Usually, it listens for HTTP traffic on the PassThrough port (default 8290). The liveness of the Micro Integrator pod is checked by a ping to that port.

The Micro Integrator is **ready** to accept traffic only when all the CApps are successfully deployed. The API with the `/healthz` path, which is exposed through the HTTP inbound port (default 9201), is used to check the readiness.

!!! Note
    These ports can change as per the configurations used in the Micro Integrator based image.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: integration-config
data:
  livenessProbe: |
    tcpSocket:
      port: 8290
    initialDelaySeconds: 10
    periodSeconds: 5
  readinessProbe: |
    httpGet:
      path: /healthz
      port: 9201
    initialDelaySeconds: 10
    periodSeconds: 5
```

Use the `integration_cr.yaml` file to define probes specific to a particular deployment. See the example given below.
```yaml
apiVersion: wso2.com/v1alpha2
kind: Integration
metadata:
  name: test-integration
spec:
  image: <Docker image path>
  deploySpec:
    livenessProbe:
      tcpSocket:
        port: 8290
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz
        port: 9201
      initialDelaySeconds: 30
      periodSeconds: 10
```

Note that you can use any configuration supported under these probes as defined in the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).


## Update the existing deployment

When you use the K8s operator, you can update the configurations in the `integration_cr.yaml` file and reapply it to your existing deployment. The Docker image, exposed ports, resource definitions, environment variables, etc. can be updated.

## Additional operator configurations

### Disable ingress controller creation

By default, an ingress controller named `ei-operator-ingress` is created to expose all the deployments created by the operator. Per deployment, new rules are added to the same ingress controller to route the traffic. Sometimes you may use an external ingress or define an ingress yourself. In such cases, you can disable ingress controller creation from the `integration_controller_conf.yaml` file as shown below.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: integration-config
data:
  reconcileInterval: "10"
  autoIngressCreation: "false"
```

### Change reconcile interval

The K8s operator continuously runs a task that listens for changes applied to the deployment. When you change the `integration_cr.yaml` file and reapply it, this is the task that updates the deployment. You can configure how often (in seconds) this task runs at the operator level by using the `integration_controller_conf.yaml` file as shown below.
- -defined at `integration_controller_conf.yaml` file - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: integration-config -data: - reconcileInterval: "10" -``` diff --git a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/content-based-routing.md b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/content-based-routing.md deleted file mode 100644 index b5dba6bd02..0000000000 --- a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/content-based-routing.md +++ /dev/null @@ -1,238 +0,0 @@ -# K8s Deployment Sample 2: Content Based Routing - -Let's define a content-based routing scenario using WSO2 Micro Integrator and deploy it on your Kubernetes environment. - -## Prerequisites - -- Install and set up [WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -- Install a [Kubernetes](https://kubernetes.io/docs/setup/) cluster and **v1.11+** client. Alternatively, you can [run Kubernetes locally via Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/). -- Install [Docker](https://docs.docker.com/). -- Install the [Kubernetes API Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install). - -## Step 1 - Create the integration solution - -Let's use the Content Routing integration template in WSO2 Integration Studio: - -1. Open WSO2 Integration Studio. -2. In the Getting Started view, select the Content Based Routing template. - - - -3. Give a project name and click Finish. - - k8s project structure - -5. Create a **Kubernetes Project** inside the integration project. - - 1. Right-click the content-routing-sample project, go to **New -> Kubernetes Exporter**: - - Create Kubernetes Project - - 2. 
In the **Kubernetes Exporter Information for K8s EI Operator** dialog box that opens, enter the following details:

    <table>
        <tr>
            <th>Parameter</th>
            <th>Description</th>
        </tr>
        <tr>
            <td>Kubernetes Exporter Name</td>
            <td>Give a unique name for the project.</td>
        </tr>
        <tr>
            <td>Integration Name</td>
            <td>This name will be used to identify the integration solution in the Kubernetes custom resource. Let's use <code>content-routing</code> as the integration name for this example.</td>
        </tr>
        <tr>
            <td>Number of Replicas</td>
            <td>Specify the number of pods that should be created in the Kubernetes cluster.</td>
        </tr>
        <tr>
            <td>Base Image Repository</td>
            <td>Specify the base Micro Integrator Docker image for your solution. For this example, let's use the Micro Integrator Docker image from the WSO2 public Docker registry: <code>wso2/wso2mi</code>.
            Note that the image value format should be 'docker_user_name/repository_name'.</td>
        </tr>
        <tr>
            <td>Base Image Tag</td>
            <td>Give a tag name for the base Docker image.</td>
        </tr>
        <tr>
            <td>Target Image Repository</td>
            <td>The Docker repository to which the Docker image will be pushed: 'docker_user_name/repository_name'.</td>
        </tr>
        <tr>
            <td>Target Image Tag</td>
            <td>Give a tag name for the Docker image.</td>
        </tr>
    </table>
            - - 3. Click Finish. - -Your integration project with the content routing sample is now ready to be deployed in Kubernetes. - -k8s project structure - -## Step 2 - Build and Push the Docker image - -!!! Note - Be sure to start your Docker instance before building the image. If Docker is not started, the build process will fail. - -There are two ways to build a Docker image of the integration solution and push it to your Docker registry: - -- Using Maven: - - !!! Note "Before you begin" - You need **Maven 3.5.2** or a later version when you build the Docker image manually (without using WSO2 Integration Studio). - - 1. Open a terminal and navigate to the integration project. - 2. Execute the following command. - - Be sure to specify the user name and password of the correct Docker registry. - - ```bash - mvn clean install -Dmaven.test.skip=true -Ddockerfile.username={username} -Ddockerfile.password={password} - ``` - - This will build the Docker image and then push it to the specified Docker registry. - -- Using WSO2 Integration Studio: - - 1. Open the **pom.xml** file in the Kubernetes exporter. - 2. Ensure that the composite exporter is selected under **Dependencies** and click Build & Push. - - - - 3. In the dialog box that opens, enter the credentials of your Docker registry to which the image should be pushed. - - docker registry credentials - - 4. Click Push Image. - -Run the `docker image ls` command to verify that the Docker image is created. - -## Step 3 - Deploy the solution in K8s - -!!! Info - **Before you begin**, the [API Kubernetes Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install) should be installed in your Kubernetes environment. - -Follow the steps given below: - -1. Open the `integration_cr.yaml` file from the Kubernetes project in WSO2 Integration Studio. -2. See that the **integration** details of the `content-routing` solution are updated. 
Be sure to add the image name in the following format: `docker_user/repository:tag` - - ```yaml - apiVersion: "wso2.com/v1alpha2" - kind: "Integration" - metadata: - name: "content-routing" - spec: - image: "" - deploySpec: - minReplicas: 1 - ``` - -3. Open a terminal and start the Kubernetes cluster. -4. Navigate to the location of your `integration_cr.yaml` file, and execute the following command to deploy the integration solution in the Kubernetes cluster: - - ```bash - kubectl apply -f integration_cr.yaml - ``` - -When the integration is successfully deployed, it should create the `content-routing` integration, `content-routing-deployment`, `content-routing-service`, and `ei-operator-ingress` as follows: - -!!! Tip - The `api-operator-ingress` is not created if you have [disabled the ingress controller]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments/#disable-ingress-controller-creation). - -```bash -kubectl get integration - -NAME STATUS SERVICE-NAME AGE -content-routing 40s - -kubectl get deployment - -NAME READY UP-TO-DATE AVAILABLE AGE -content-routing-deployment 1/1 1 1 2m - -kubectl get services -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -content-routing-service ClusterIP 10.101.107.154 8290/TCP 2m -kubernetes ClusterIP 10.96.0.1 443/TCP 2d -k8s-api-operator ClusterIP 10.98.78.238 443/TCP 1d - -kubectl get ingress -NAME HOSTS ADDRESS PORTS AGE -api-operator-ingress wso2ei.ingress.wso2.com 10.0.2.15 80, 443 2m -``` - -## Step 4 - Test the deployment - -Let's invoke the service without going through the ingress controller. - -1. Create a `request.xml` file as follows: - ```xml - - Add - 10 - 25 - - ``` - or - ```xml - - Divide - 25 - 5 - - ``` - -2. Apply port forwarding as shown below. This will allow you to invoke the service without going through the Ingress controller: - ```bash - kubectl port-forward service/content-routing-service 8290:8290 - ``` - -2. 
Execute the following command to invoke the service: - ```bash - curl -X POST -d @request.xml http://localhost:8290/ArithmaticOperationService -H "Content-Type: text/xml" - ``` - -You will receive the following SOAP response: - -```xml - - - 35 - -``` \ No newline at end of file diff --git a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/hello-world.md b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/hello-world.md deleted file mode 100644 index 312dd3abd1..0000000000 --- a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/hello-world.md +++ /dev/null @@ -1,139 +0,0 @@ -# K8s Deployment Sample 1: Hello World Scenario -Let's define a basic Hello World scenario using WSO2 Micro Integrator and deploy it on your Kubernetes environment. - -## Prerequisites - -- Install and set up [WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -- Install a [Kubernetes](https://kubernetes.io/docs/setup/) cluster and **v1.11+** client. Alternatively, you can [run Kubernetes locally via Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/). -- Install [Docker](https://docs.docker.com/). -- Install the [Kubernetes API Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install). - -## Step 1 - Create the integration solution - -Let's use an integration template in WSO2 Integration Studio to generate a sample integration solution that returns a 'Hello World' response when invoked. - -1. Open WSO2 Integration Studio. -2. In the Getting Started view, select the Hello Kubernetes template. - - getting started view - -3. Give a project name and click Finish. - -This generates the complete integration project with the 'Hello World' solution, which is ready to be deployed in Kubernetes. 
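For reference, the template generates a simple REST API that builds a static JSON payload and responds to the caller. A sketch of the generated Synapse configuration is shown below (the exact artifact produced by your Integration Studio version may differ slightly):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse" name="HelloWorld" context="/HelloWorld">
    <resource methods="GET">
        <inSequence>
            <!-- Build the static JSON payload returned to the caller -->
            <payloadFactory media-type="json">
                <format>{"Hello":"World"}</format>
                <args/>
            </payloadFactory>
            <!-- Send the payload back as the response -->
            <respond/>
        </inSequence>
    </resource>
</api>
```

This is the artifact that is packaged into the composite application and baked into the Docker image in the next step.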
- -k8s project structure - -## Step 2 - Build and Push the Docker image - -!!! Note - Be sure to start your Docker instance before building the image. If Docker is not started, the build process will fail. - -There are two ways to build a Docker image of the integration solution and push it to your Docker registry: - -- Using Maven: - - !!! Note "Before you begin" - You need **Maven 3.5.2** or a later version when you build the Docker image manually (without using WSO2 Integration Studio). - - 1. Open a terminal and navigate to the integration project. - 2. Execute the following command. - - Be sure to specify the user name and password of the correct Docker registry. - - ```bash - mvn clean install -Dmaven.test.skip=true -Ddockerfile.username={username} -Ddockerfile.password={password} - ``` - - This will build the Docker image and then push it to the specified Docker registry. - -- Using WSO2 Integration Studio: - - 1. Open the **pom.xml** file in the Kubernetes project as shown below. - - - - 2. Ensure that the composite exporter is selected under **Dependencies**. - 3. In the Target Repository field, enter the name of the Docker registry to which you will push a Docker image. - 4. Click Build & Push to build the image and push to the Docker registry. - 5. In the dialog box that opens, enter the credentials of your Docker registry to which the image should be pushed. - - docker registry credentials - - 6. Click Push Image. - -Run the `docker image ls` command to verify that the Docker image is created. - -## Step 3 - Deploy the solution in K8s - -!!! Info - **Before you begin**, the [API Kubernetes Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install) should be installed in your Kubernetes environment. - -Follow the steps given below. - -1. Open the `integration_cr.yaml` file from the Kubernetes exporter in WSO2 Integration Studio. -2. See that the **integration** details of the `hello-world` solution are updated. 
Be sure to add the image name in the following format: `docker_user/repository:tag` - - ```yaml - apiVersion: "wso2.com/v1alpha2" - kind: "Integration" - metadata: - name: "hello-world" - spec: - image: "" - deploySpec: - minReplicas: 1 - ``` - -3. Open a terminal and start the Kubernetes cluster. -4. Navigate to the location of your `integration_cr.yaml` file and execute the following command to deploy the integration solution in the Kubernetes cluster: - - ```bash - kubectl apply -f integration_cr.yaml - ``` - -When the integration is successfully deployed, it should create the `hello-world` integration, `hello-world-deployment`, `hello-world-service`, and `ei-operator-ingress` as follows: - -!!! Tip - The `ei-operator-ingress` is not created if you have [disabled the ingress controller]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments/#disable-ingress-controller-creation). - -```bash -kubectl get integration - -NAME STATUS SERVICE-NAME AGE -hello-world Running hello-service 2m - -kubectl get deployment - -NAME READY UP-TO-DATE AVAILABLE AGE -hello-world-deployment 1/1 1 1 2m - -kubectl get services -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hello-world-service ClusterIP 10.101.107.154 8290/TCP 2m -kubernetes ClusterIP 10.96.0.1 443/TCP 2d -k8s-api-operator ClusterIP 10.98.78.238 443/TCP 1d - -kubectl get ingress -NAME HOSTS ADDRESS PORTS AGE -api-operator-ingress wso2ei.ingress.wso2.com 10.0.2.15 80, 443 2m -``` - -## Step 4 - Test the deployment - -Let's invoke the service without going through the ingress controller. - -1. Apply port forwarding as shown below. This will allow you to invoke the service without going through the Ingress controller: - ```bash - kubectl port-forward service/hello-world-service 8290:8290 - ``` - -2. 
Invoke the service as follows: - ```bash - curl http://localhost:8290/HelloWorld - ``` - -You will receive the following response: - -```bash -{"Hello":"World"}% -``` \ No newline at end of file diff --git a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/jms-sender-receiver.md b/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/jms-sender-receiver.md deleted file mode 100644 index c27b1b1e7a..0000000000 --- a/en/docs/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-samples/jms-sender-receiver.md +++ /dev/null @@ -1,324 +0,0 @@ -# K8s Deployment Sample 3: JMS Sender/Receiver - -Let's define a JMS (sender and receiver) scenario using WSO2 Micro Integrator and deploy it on your Kubernetes environment. - -## Prerequisites - -- Install and set up [WSO2 Integration Studio]({{base_path}}/integrate/develop/installing-wso2-integration-studio). -- Install a [Kubernetes](https://kubernetes.io/docs/setup/) cluster and **v1.11+** client. Alternatively, you can [run Kubernetes locally via Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/). -- Install [Docker](https://docs.docker.com/). -- Install the [Kubernetes API Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install). - -- Deploy an ActiveMQ pod inside your Kubernetes cluster. - -## Step 1 - Create the integration solution - -Follow the steps given below. - -1. Open WSO2 Integration Studio. -2. In the Getting Started view, click New Integration Project - - - -3. In the New Integration Project dialog box, give a name for the integration project and select the following check boxes: Create ESB Configs, Create Composite Exporter, and Create Kubernetes Exporter. - - Create ESB Config Project - -4. Click Next and enter the following details for your Kubernetes Exporter. 
Create Kubernetes Project

    <table>
        <tr>
            <th>Parameter</th>
            <th>Description</th>
        </tr>
        <tr>
            <td>Kubernetes Exporter Name</td>
            <td>Give a unique name for the project.</td>
        </tr>
        <tr>
            <td>Integration Name</td>
            <td>This name will be used to identify the integration solution in the Kubernetes custom resource. Let's use <code>jms-example</code> as the integration name for this example.</td>
        </tr>
        <tr>
            <td>Number of Replicas</td>
            <td>Specify the number of pods that should be created in the Kubernetes cluster.</td>
        </tr>
        <tr>
            <td>Base Image Repository</td>
            <td>Specify the base Micro Integrator Docker image for your solution. For this example, let's use the Micro Integrator Docker image from the WSO2 public Docker registry: <code>wso2/wso2mi</code>.
            Note that the image value format should be 'docker_user_name/repository_name'.</td>
        </tr>
        <tr>
            <td>Base Image Tag</td>
            <td>Give a tag name for the base Docker image.</td>
        </tr>
        <tr>
            <td>Target Image Repository</td>
            <td>The Docker repository to which the Docker image will be pushed: 'docker_user_name/repository_name'.</td>
        </tr>
        <tr>
            <td>Target Image Tag</td>
            <td>Give a tag name for the Docker image.</td>
        </tr>
    </table>
            - -3. Add the following proxy service configuration to your project. This service listens to messages from ActiveMQ and publishes to another queue in ActiveMQ. - - 1. Right-click ESB Config in the project explorer, go to **New -> Proxy Service** and create a custom proxy service named `JmsSenderListener`. - - Create ESB Config Project - - 2. You can then use the **Source View** to copy the following configuration. - - !!! Tip - Be sure to update the **tcp://localhost:61616** URL given below with the actual/connecting URL that will be reachable from the Kubernetes pod. - - ```xml - - - - - - -
        <?xml version="1.0" encoding="UTF-8"?>
        <proxy xmlns="http://ws.apache.org/ns/synapse" name="JmsSenderListener" startOnLoad="true" transports="jms">
            <target>
                <inSequence>
                    <property name="OUT_ONLY" value="true"/>
                    <send>
                        <endpoint>
                            <address uri="jms:/secondQueue?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;transport.jms.DestinationType=queue">
                                <suspendOnFailure>
                                    <initialDuration>-1</initialDuration>
                                    <progressionFactor>-1</progressionFactor>
                                    <maximumDuration>0</maximumDuration>
                                </suspendOnFailure>
                                <markForSuspension>
                                    <retriesBeforeSuspension>0</retriesBeforeSuspension>
                                </markForSuspension>
                            </address>
                        </endpoint>
                    </send>
                </inSequence>
            </target>
            <parameter name="transport.jms.SessionAcknowledgement">AUTO_ACKNOWLEDGE</parameter>
            <parameter name="transport.jms.Destination">$SYSTEM:destination</parameter>
            <parameter name="transport.jms.ReplyDestination">firstQueue</parameter>
            <parameter name="transport.jms.ContentType">$SYSTEM:contenttype</parameter>
            <parameter name="java.naming.provider.url">$SYSTEM:jmsurl</parameter>
            <parameter name="transport.jms.SessionTransacted">false</parameter>
            <parameter name="transport.jms.ConnectionFactoryJNDIName">$SYSTEM:jmsconfac</parameter>
            <parameter name="java.naming.security.principal">$SYSTEM:jmsuname</parameter>
            <parameter name="java.naming.security.credentials">$SYSTEM:jmspass</parameter>
        </proxy>
            - ``` - -4. Open the **integration_cr.yaml** file inside the Kubernetes exporter and add the environment variables as shown below. These values will be injected to the parameters defined in the proxy service. - - !!! Tip - Be sure to update the **tcp://localhost:61616** URL in the above configuration with the actual/connecting URL that will be reachable from the Kubernetes pod. - - ```yaml - --- - apiVersion: "wso2.com/v1alpha2" - kind: "Integration" - metadata: - name: "jms" - spec: - image: "Docker/image/path/to/the/JMSSenderListner" - deploySpec: - minReplicas: 1 - expose: - passthroPort: 8290 - env: - - name: "jmsconfac" - value: "TopicConnectionFactory" - - name: "jmsuname" - value: "admin" - - name: "destination" - value: "queue" - - name: "jmsurl" - value: "tcp://localhost:61616" - - name: "jmspass" - value: "admin" - - name: "contenttype" - value: "application/xml" - ``` - -Finally, the created Maven Multi Module project should look as follows: - -Hello World Project - -## Step 2 - Update JMS configurations - -1. Uncomment the following two commands in the Dockerfile inside the Kubernetes project. - ```bash - COPY Libs/*.jar $WSO2_SERVER_HOME/lib/ - COPY Conf/* $WSO2_SERVER_HOME/conf/ - ``` -2. Download [Apache ActiveMQ](http://activemq.apache.org/). -3. Copy the following client libraries from the `/lib` directory to the `//Lib` directory. - - **ActiveMQ 5.8.0 and above** - - - activemq-broker-5.8.0.jar - - activemq-client-5.8.0.jar - - activemq-kahadb-store-5.8.0.jar - - geronimo-jms_1.1_spec-1.1.1.jar - - geronimo-j2ee-management_1.1_spec-1.0.1.jar - - geronimo-jta_1.0.1B_spec-1.0.1.jar - - hawtbuf-1.9.jar - - Slf4j-api-1.6.6.jar - - activeio-core-3.1.4.jar (available in the /lib/optional directory) - - **Earlier version of ActiveMQ** - - - activemq-core-5.5.1.jar - - geronimo-j2ee-management_1.0_spec-1.0.jar - - geronimo-jms_1.1_spec-1.1.1.jar - -4. 
Open the `deployment.toml` file in your Kubernetes project and add the following content to enable the JMS sender and listener: - - !!! Tip - Be sure to update the **tcp://localhost:61616** URL in the above configuration with the actual/connecting URL that will be reachable from the Kubernetes pod. - - ```toml - [server] - hostname = "localhost" - - [keystore.primary] - file_name = "wso2carbon.jks" - password = "wso2carbon" - alias = "wso2carbon" - key_password = "wso2carbon" - - [truststore] - file_name = "client-truststore.jks" - password = "wso2carbon" - alias = "symmetric.key.value" - algorithm = "AES" - - [[transport.jms.listener]] - name = "default" - parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" - - [[custom_transport.sender]] - protocol = "jms" - class="org.apache.axis2.transport.jms.JMSSender" - ``` - -## Step 3 - Build and Push the Docker image - -!!! Note - Be sure to start your Docker instance before building the image. If Docker is not started, the build process will fail. - -There are two ways to build a Docker image of the integration solution and push it to your Docker registry: - -- Using Maven: - - !!! Note "Before you begin" - You need **Maven 3.5.2** or a later version when you build the Docker image manually (without using WSO2 Integration Studio). - - 1. Open a terminal and navigate to the integration project. - 2. Execute the following command. - - Be sure to specify the user name and password of the correct Docker registry. - - ```bash - mvn clean install -Dmaven.test.skip=true -Ddockerfile.username={username} -Ddockerfile.password={password} - ``` - - This will build the Docker image and then push it to the specified Docker registry. - -- Using WSO2 Integration Studio: - - 1. Open the **pom.xml** file in the Kubernetes exporter. - 2. 
Ensure that the composite exporter is selected under **Dependencies** and click Build & Push. - - - - 3. In the dialog box that opens, enter the credentials of your Docker registry to which the image should be pushed. - - docker registry credentials - - 4. Click Push Image. - -Run the `docker image ls` command to verify that the Docker image is created. - -## Step 4 - Deploy the solution in K8s - -!!! Info - **Before you begin**, the [API Kubernetes Operator]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install) should be installed in your Kubernetes environment. - -Follow the steps given below: - -1. Open the `integration_cr.yaml` file from the Kubernetes project in WSO2 Integration Studio. -2. See that the **integration** details of the `jms-example` solution is updated. -3. Open a terminal, navigate to the location of your `integration_cr.yaml` file, and execute the following command to deploy the integration solution into the Kubernetes cluster: - ```bash - kubectl apply -f integration_cr.yaml - ``` - -When the integration is successfully deployed, it should create the `jms-example` integration, `jms-example-deployment`, `jms-example-service`, and `ei-operator-ingress` as follows: - -!!! Tip - The `ei-operator-ingress` is not created if you have [disabled the ingress controller]({{base_path}}/install-and-setup/setup/kubernetes-operators/k8s-api-operator/manage-integrations/integration-deployments/#disable-ingress-controller-creation). 
- -```bash -kubectl get integration - -NAME STATUS SERVICE-NAME AGE -jms-example Running 2m - -kubectl get deployment - -NAME READY UP-TO-DATE AVAILABLE AGE -jms-example-deployment 1/1 1 1 2m - -kubectl get services -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -jms-example-service ClusterIP 10.101.107.154 8290/TCP 2m -kubernetes ClusterIP 10.96.0.1 443/TCP 2d -k8s-api-operator ClusterIP 10.98.78.238 443/TCP 1d - -kubectl get ingress -NAME HOSTS ADDRESS PORTS AGE -api-operator-ingress wso2ei.ingress.wso2.com 10.0.2.15 80, 443 2m -``` - -This will create a new queue called **queue** in ActiveMQ. - -## Step 5 - Test the deployment - -Send a message to this queue. The proxy service you added in **step 3** above will listen to this message and send that message to a new queue called **secondQueue**. diff --git a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-activemq.md b/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-activemq.md deleted file mode 100644 index 495ab95276..0000000000 --- a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-activemq.md +++ /dev/null @@ -1,298 +0,0 @@ -# Connecting to ActiveMQ - -This section describes how to configure WSO2 Micro Integrator to connect with ActiveMQ. - -## Setting up the Micro Integrator with ActiveMQ - -Follow the instructions below to set up and configure. - -1. Download [Apache ActiveMQ](http://activemq.apache.org/). -2. Download and install WSO2 Micro Integrator. -3. Copy the following client libraries from the `ACTIVEMQ_HOME/lib` directory to the `MI_HOME/lib` directory. 
-
-    **ActiveMQ 5.8.0 and above**
-
-    - activemq-broker-5.8.0.jar
-    - activemq-client-5.8.0.jar
-    - activemq-kahadb-store-5.8.0.jar
-    - geronimo-jms_1.1_spec-1.1.1.jar
-    - geronimo-j2ee-management_1.1_spec-1.0.1.jar
-    - geronimo-jta_1.0.1B_spec-1.0.1.jar
-    - hawtbuf-1.9.jar
-    - slf4j-api-1.6.6.jar
-    - activeio-core-3.1.4.jar (available in the `ACTIVEMQ_HOME/lib/optional` directory)
-
-    **Earlier versions of ActiveMQ**
-
-    - activemq-core-5.5.1.jar
-    - geronimo-j2ee-management_1.0_spec-1.0.jar
-    - geronimo-jms_1.1_spec-1.1.1.jar
-
-4. If you want the Micro Integrator to receive messages from an ActiveMQ instance, or to send messages to an ActiveMQ instance, you need to update the deployment.toml file with the relevant connection parameters.
-
-    - Add the following configurations to enable the JMS listener with ActiveMQ connection parameters.
-      ```toml
-      [[transport.jms.listener]]
-      name = "myTopicListener"
-      parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
-      parameter.provider_url = "tcp://localhost:61616"
-      parameter.connection_factory_name = "TopicConnectionFactory"
-      parameter.connection_factory_type = "topic"
-      parameter.cache_level = "consumer"
-      ```
-      ```toml
-      [[transport.jms.listener]]
-      name = "myQueueListener"
-      parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
-      parameter.provider_url = "tcp://localhost:61616"
-      parameter.connection_factory_name = "QueueConnectionFactory"
-      parameter.connection_factory_type = "queue"
-      parameter.cache_level = "consumer"
-      ```
-      !!! Note
-          When configuring the JMS listener, be sure to add the connection factory [service-level JMS parameter]({{base_path}}/reference/synapse-properties/transport-parameters/jms-transport-parameters) to the synapse configuration with the name of the already defined connection factory.
-          ```xml
-          <parameter name="transport.jms.ConnectionFactory">myQueueListener</parameter>
-          ```
-
-    - Add the following configurations to enable the JMS sender with ActiveMQ connection parameters.
-      ```toml
-      [[transport.jms.sender]]
-      name = "myTopicSender"
-      parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
-      parameter.provider_url = "tcp://localhost:61616"
-      parameter.connection_factory_name = "TopicConnectionFactory"
-      parameter.connection_factory_type = "topic"
-      parameter.cache_level = "producer"
-      ```
-      ```toml
-      [[transport.jms.sender]]
-      name = "myQueueSender"
-      parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
-      parameter.provider_url = "tcp://localhost:61616"
-      parameter.connection_factory_name = "QueueConnectionFactory"
-      parameter.connection_factory_type = "queue"
-      parameter.cache_level = "producer"
-      ```
-
-    !!! Note
-        - When configuring the JMS transport with ActiveMQ, you can append [ActiveMQ-specific properties](http://activemq.apache.org/connection-configuration-uri.html) to the value of the `parameter.provider_url` property. For example, you can set the `redeliveryDelay` and `initialRedeliveryDelay` properties when configuring a JMS inbound endpoint as follows:
-          ```toml
-          parameter.provider_url = "tcp://localhost:61616?jms.redeliveryPolicy.redeliveryDelay=10000&jms.redeliveryPolicy.initialRedeliveryDelay=10000"
-          ```
-        - The above configurations do not address transient failures of the ActiveMQ message broker. For example, if ActiveMQ goes down and becomes active again after a while, the Micro Integrator will not reconnect to it. Instead, an error will be thrown until the Micro Integrator is restarted.
-          To avoid this problem, set the `parameter.provider_url` to `failover:tcp://localhost:61616`. This makes sure that reconnection takes place. The `failover` prefix is associated with the [Failover transport of ActiveMQ](http://activemq.apache.org/failover-transport-reference.html).
-
-5. Start ActiveMQ by navigating to the `ACTIVEMQ_HOME/bin` directory and executing `./activemq console` (on Linux/OSX) or `activemq start` (on Windows).
-
-## Configuring redelivery in ActiveMQ queues
-
-When the Micro Integrator is configured to consume messages from an ActiveMQ queue, you have the option to configure message redelivery. This is useful when the Micro Integrator is unable to process messages due to failures.
-
-- **JMS parameters**
-
-    Add the following JMS parameters to the proxy service configuration.
-
-    ```xml
-    <parameter name="redeliveryPolicy.maximumRedeliveries">1</parameter>
-    <parameter name="transport.jms.DestinationType">queue</parameter>
-    <parameter name="transport.jms.SessionTransacted">true</parameter>
-    <parameter name="transport.jms.Destination">JMStoHTTPStockQuoteProxy</parameter>
-    <parameter name="redeliveryPolicy.redeliveryDelay">2000</parameter>
-    <parameter name="transport.jms.CacheLevel">consumer</parameter>
-    ```
-
-    - `redeliveryPolicy.maximumRedeliveries`: Maximum number of retries for delivering the message. If set to `-1`, ActiveMQ will retry infinitely.
-    - `transport.jms.SessionTransacted`: When set to `true`, this enables the JMS session transaction for the proxy service.
-    - `redeliveryPolicy.redeliveryDelay`: Delay time in milliseconds between retries.
-    - `transport.jms.CacheLevel`: This needs to be set to `consumer` for the ActiveMQ redelivery mechanism to work.
-
-- **Fault sequence**
-
-    Add the following line in your fault sequence: `<property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>`
-
-    !!! Info
-        When the Micro Integrator is unable to deliver a message to the back-end service due to an error, it will be routed to the fault sequence in the configuration. When the `SET_ROLLBACK_ONLY` property is set in the fault sequence, the Micro Integrator informs ActiveMQ to redeliver the message.
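Taken together, `redeliveryPolicy.maximumRedeliveries` and `redeliveryPolicy.redeliveryDelay` bound how long a failing message stays in retry before ActiveMQ gives up on it. The following back-of-the-envelope sketch assumes the fixed (non-exponential) delay policy used above; the variable names are illustrative, not configuration keys:

```shell
#!/bin/sh
# Sketch: estimate the worst-case time a message spends in redelivery
# before ActiveMQ stops retrying, for a fixed (non-exponential) policy.
# The values mirror the sample configuration above; substitute your own.
max_redeliveries=1        # redeliveryPolicy.maximumRedeliveries
redelivery_delay_ms=2000  # redeliveryPolicy.redeliveryDelay (milliseconds)

total_ms=$((max_redeliveries * redelivery_delay_ms))
echo "worst-case redelivery window: ${total_ms} ms"
# prints: worst-case redelivery window: 2000 ms
```

With the sample values (1 retry, 2000 ms delay), a message spends at most about 2 seconds in retry before delivery is abandoned; raising either value lengthens that window proportionally.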
-
-Following is a sample proxy service configuration:
-
-```xml
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="JMStoHTTPStockQuoteProxy"
-       transports="jms"
-       startOnLoad="true">
-   <target>
-      <inSequence>
-         <property name="OUT_ONLY" value="true"/>
-         <send>
-            <endpoint>
-               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
-            </endpoint>
-         </send>
-      </inSequence>
-      <faultSequence>
-         <property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>
-      </faultSequence>
-   </target>
-   <parameter name="redeliveryPolicy.maximumRedeliveries">1</parameter>
-   <parameter name="transport.jms.DestinationType">queue</parameter>
-   <parameter name="transport.jms.SessionTransacted">true</parameter>
-   <parameter name="transport.jms.Destination">JMStoHTTPStockQuoteProxy</parameter>
-   <parameter name="redeliveryPolicy.redeliveryDelay">2000</parameter>
-   <parameter name="transport.jms.CacheLevel">consumer</parameter>
-</proxy>
-```
-
-## Securing the ActiveMQ server
-
-JMS is an integral part of enterprise integration solutions that are highly reliable, loosely coupled, and asynchronous. As a result, implementing proper security for your JMS deployments is vital. The following sections discuss some best practices of an effective JMS security implementation when used in combination with WSO2 Micro Integrator.
-
-Let's see how some of the key concepts of system security such as authentication, authorization, and availability are implemented in different types of brokers. Given below is an overview of how some common security concepts are implemented in Apache ActiveMQ.
-
-| Security Concept                  | How it is Implemented                                          |
-|-----------------------------------|----------------------------------------------------------------|
-| [Authentication](#authentication) | Simple authentication and JAAS plugins.                        |
-| [Authorization](#authorization)   | Built-in authorization mechanism using XML configuration.      |
-| [Availability](#availability)     | Primary/secondary configurations using fail-over transport in ActiveMQ (not to be confused with the Micro Integrator's transports). |
-| [Integrity](#integrity)           | WS-Security                                                    |
-
-### Authentication
-
-Simple Authentication: ActiveMQ comes with an authentication plugin that provides basic authentication between ActiveMQ and the Micro Integrator. The steps below describe how to configure it.
-
-1. Add the following configuration in the `ACTIVEMQ_HOME/conf/activemq-security.xml` file.
-    ```xml
-    <plugins>
-        <simpleAuthenticationPlugin anonymousAccessAllowed="true">
-            <users>
-                <authenticationUser username="system" password="${activemq.password}" groups="users,admins"/>
-                <authenticationUser username="user" password="${guest.password}" groups="users"/>
-                <authenticationUser username="guest" password="${guest.password}" groups="guests"/>
-            </users>
-        </simpleAuthenticationPlugin>
-    </plugins>
-    ```
-
-2. Update the `ACTIVEMQ_HOME/conf/credentials.properties` file (for credentials in plain text) or the `ACTIVEMQ_HOME/conf/credentials-enc.properties` file (for the encrypted version) to define the usernames and passwords referenced in the configuration above.
-
-    - The **anonymousAccessAllowed** attribute defines whether or not to allow anonymous access.
-    - The groups and users defined in step 1 are used to provide authorization schemes. See the [Authorization](#authorization) section for more information.
-
-3. When you configure the JMS listener in the deployment.toml file of your Micro Integrator, use the ActiveMQ user name and password you configured above.
-    ```toml
-    [[transport.jms.listener]]
-    name = "myTopicListener"
-    parameter.initial_naming_factory = "org.apache.activemq.jndi.ActiveMQInitialContextFactory"
-    parameter.provider_url = "tcp://localhost:61616"
-    parameter.connection_factory_name = "TopicConnectionFactory"
-    parameter.connection_factory_type = "topic"
-    parameter.cache_level = "consumer"
-    parameter.username = "system"
-    parameter.password = "manager"
-    ```
-
-!!! Info
-    For more advanced authentication schemes that use JAAS, which are supported in ActiveMQ, refer to the official [ActiveMQ documentation](http://activemq.apache.org/security.html).
-
-### Authorization
-
-ActiveMQ provides authorization schemes using simple XML configurations, which you can apply to the users defined in the [authentication plugin](#authentication). To set up authorization, ensure you have the following configuration in the `ACTIVEMQ_HOME/conf/activemq-security.xml` file.
-
-```xml
-<plugins>
-    <authorizationPlugin>
-        <map>
-            <authorizationMap>
-                <authorizationEntries>
-                    <authorizationEntry queue=">" read="admins" write="admins" admin="admins"/>
-                    <authorizationEntry queue="USERS.>" read="users" write="users" admin="users"/>
-                    <authorizationEntry topic=">" read="admins" write="admins" admin="admins"/>
-                    <authorizationEntry topic="USERS.>" read="users" write="users" admin="users"/>
-                    <authorizationEntry topic="ActiveMQ.Advisory.>" read="guests,users" write="guests,users" admin="guests,users"/>
-                </authorizationEntries>
-            </authorizationMap>
-        </map>
-    </authorizationPlugin>
-</plugins>
-```
-
-!!! Info
-    This configuration defines role-based authorization on queues and topics, and uses ActiveMQ wildcards. For information on wildcards, refer to the official [ActiveMQ documentation](http://activemq.apache.org/security.html).
-
-### Availability
-
-ActiveMQ supports the use of primary and secondary configurations and the failover transport to provide high availability. ActiveMQ supports two types of primary and secondary configurations:
-
-- Primary/secondary configurations using shared file systems
-- Primary/secondary configurations using JDBC
-
-!!!
Info
-    For more information on either model, see the [ActiveMQ documentation](http://activemq.apache.org/masterslave.html).
-
-**Primary/secondary configurations using JDBC**
-
-ActiveMQ uses a special URI similar to the following to facilitate fail-over functionality: `failover://(tcp://127.0.0.1:61616,tcp://127.0.0.1:61617,tcp://127.0.0.1:61618)?initialReconnectDelay=100`. Use this URI inside the Micro Integrator for a highly-available JMS solution. See the example proxy service given below.
-
-```xml
-<proxy xmlns="http://ws.apache.org/ns/synapse" name="JMSProxy" transports="http">
-   <target>
-      <inSequence>
-         <send>
-            <endpoint>
-               <address uri="jms:/SimpleStockQuoteService?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=failover:(tcp://localhost:61616,tcp://localhost:61617)?randomize=false&amp;transport.jms.DestinationType=queue"/>
-            </endpoint>
-         </send>
-      </inSequence>
-   </target>
-</proxy>
-```
-
-Note `java.naming.provider.url=failover:(tcp://localhost:61616,tcp://localhost:61617)?randomize=false` inside the address endpoint URI attribute. The `randomize=false` parameter allows the fail-over configuration to be prioritized, ensuring that when the first instance fails, the connection moves to the next. For more information on ActiveMQ fail-over transport and its parameters, refer to the [official documentation of ActiveMQ](http://activemq.apache.org/failover-transport-reference.html).
-
-### Integrity
-
-Integrity is part of message-level security and can be implemented using a standard like WS-Security. The following sample shows the application of WS-Security for message-level encryption where messages are stored in a message store in WSO2 Micro Integrator.
-
-```xml
-<messageStore name="JMSMS"/>
-```
-
-```xml
-<proxy xmlns="http://ws.apache.org/ns/synapse"
-       name="StockQuoteProxy"
-       transports="http https"
-       startOnLoad="true">
-   <target>
-      <inSequence>
-         <property name="OUT_ONLY" value="true"/>
-         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
-         <store messageStore="JMSMS"/>
-      </inSequence>
-   </target>
-   <enableSec policy="sec_policy"/>
-</proxy>
-```
diff --git a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-apache-artemis.md b/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-apache-artemis.md
deleted file mode 100644
index c872e7d599..0000000000
--- a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-apache-artemis.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Connecting to Apache Artemis
-
-This section describes how to configure WSO2 Micro Integrator to connect with Apache Artemis (version 2.6.1).
-
-Follow the instructions below to set up and configure.
-
-1. Download and set up [Apache Artemis](https://activemq.apache.org/artemis/).
-2. Download and install WSO2 Micro Integrator.
-3. If you want the Micro Integrator to receive messages from an Artemis instance, or to send messages to an Artemis instance, you need to update the deployment.toml file with the relevant connection parameters.
-
-    - Add the following configurations to enable the JMS listener with Artemis connection parameters.
- ```toml - [[transport.jms.listener]] - name = "myTopicConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "TopicConnectionFactory" - parameter.connection_factory_type = "topic" - - [[transport.jms.listener]] - name = "myQueueConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" - - [[transport.jms.listener]] - name = "default" - parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" - ``` - - - Add the following configurations to enable the ActiveMQ JMS sender with ActiveMQ connection parameters. - ```toml - [[transport.jms.sender]] - name = "commonJmsSenderConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "QueueConnectionFactory" - parameter.connection_factory_type = "queue" - - [[transport.jms.sender]] - name = "commonTopicPublisherConnectionFactory" - parameter.initial_naming_factory = "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" - parameter.provider_url = "tcp://localhost:61616" - parameter.connection_factory_name = "TopicConnectionFactory" - parameter.connection_factory_type = "topic" - ``` -4. Remove any existing Apache ActiveMQ client JAR files from the `MI_HOME/dropins/` and `MI_HOME/lib/` directories. -5. 
Download the [artemis-jms-client-all-2.6.1.jar](https://docs.wso2.com/download/attachments/119130330/artemis-jms-client-all-2.6.1.jar?version=1&modificationDate=1558091414000&api=v2) file and copy it to the `MI_HOME/lib/` directory.
-6. Remove the following line from the `MI_HOME/conf/etc/launch.ini` file.
-
-    ```text
-    javax.jms,\
-    ```
-7. Start Apache Artemis. For instructions, see the [Apache Artemis Documentation](https://activemq.apache.org/artemis/docs.html).
-8. Start the Micro Integrator.
-
-Now you have configured instances of Apache Artemis and WSO2 Micro Integrator.
\ No newline at end of file
diff --git a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-azureservicebus.md b/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-azureservicebus.md
deleted file mode 100644
index d293c26dd7..0000000000
--- a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-azureservicebus.md
+++ /dev/null
@@ -1,187 +0,0 @@
-# Connecting to Azure Service Bus
-
-This section describes how to configure WSO2 Micro Integrator to connect with [Azure Service Bus](https://azure.microsoft.com/en-us/services/service-bus/). Azure Service Bus is a fully managed messaging service on Azure Cloud, so there is nothing to install; it only needs to be configured in order to work.
-
-Azure Service Bus [complies with both AMQP 1.0 and JMS 2.0](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview#compliance-with-standards-and-protocols). The Micro Integrator uses its inbuilt JMS transport to send and receive messages from Azure Service Bus. The configurations are similar to connecting with other JMS brokers.
-
-## Setting up the Micro Integrator with Azure Service Bus
-
-Follow the instructions below to set up and configure the Micro Integrator to work with Azure Service Bus.
-
-* To get started, download and install WSO2 Micro Integrator.
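Behind the `conf/jndi.properties` indirection used later in this guide, the Qpid JMS client connects to Service Bus over an `amqps://` URI that carries the namespace host plus the shared access policy credentials. The sketch below only assembles such a URI from placeholder values so you can see its shape; the namespace (`my-mq`) and policy (`my-mq-policy`) names are assumptions matching the sample created in the next section, and a real primary key must be URL-encoded if it contains special characters:

```shell
#!/bin/sh
# Sketch: assemble an AMQP 1.0 connection URI for Azure Service Bus in the
# general form the Qpid JMS client accepts. All values are placeholders.
namespace="my-mq"               # Service Bus namespace (created in the next section)
policy="my-mq-policy"           # shared access policy with Send and Listen
key="REPLACE_WITH_PRIMARY_KEY"  # URL-encode this if it contains special characters

uri="amqps://${namespace}.servicebus.windows.net?jms.username=${policy}&jms.password=${key}"
echo "${uri}"
```

The resulting URI is what would typically appear as the connection factory entry inside `jndi.properties`; keep the key out of version control and inject it from a secret store in production.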
-
-
-### Setting up Azure Service Bus
-
-To set this up, configure a queue in Azure Service Bus to work with the synapse configuration.
-
-Service Bus queues can be used to communicate between various on-premise and cloud applications and components. Using queues enables you to scale your applications more easily and adds resiliency to your architecture.
-
-For more information on creating a queue, see [the Azure Service Bus documentation](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-portal).
-
-Follow the Azure Service Bus documentation above to do the following.
-
-1. Create an Azure Service Bus namespace named `my-mq` by selecting a pricing tier of your choice.
-
-2. Create a queue named `integration-queue`. Navigate to it by going to **Home** > **my-mq** | **Queues**.
-
-3. Add a shared access policy named `my-mq-policy` with the **Send** and **Listen** permissions assigned. Go to **Home** > **my-mq** | **Queues** > **integration-queue (my-mq/integration-queue)** | **Shared access policies**.
-
-    !!! Note
-        The primary key is generated by the system. If the primary key contains a + sign, delete and re-create the policy with the same name until you get a primary key without the + sign. This avoids issues in the library used to connect with Azure Service Bus, which does not work correctly when the primary key has the + sign.
-
-4. Set the `Maximum Delivery Count` property to 5 (the default value is 10) to reduce the testing time. Navigate to **Home** > **my-mq** | **Queues** > **integration-queue (my-mq/integration-queue)** | **Properties** to configure this.
-
-Next, WSO2 Micro Integrator should be configured to work with Azure Service Bus.
-
-### Setting up the Micro Integrator to work with Azure Service Bus
-
-The messaging flow is as shown below.
- -![]({{base_path}}/assets/img/integrate/broker-configs/azure-service-bus.jpg) - -To set up the messaging flow in WSO2 Integration Studio, create synapse artifacts to create a consumer and a producer. - -**Azure Service Bus Producer** - -```xml - - - - - - - - - - - -
            - - - - - {"status":"successful"} - - - - - - - - - - - - {"status":"failed"} - - - - - - - - - - -``` - -**Azure Service Bus Consumer** - -```xml - - - - - - - - - - - - - - - - - - - - - - - queue - integration-queue - - - contentType - text/plain - - - azureQueueConsumerConnectionFactory - - -``` - -**Sample API** - -```xml - - - - - - - - - - {"status":"successful"} - - - - - - - - - - - -``` - -Do the following to set this up. - -1. Copy the following external .jar files to the `MI_HOME/lib` directory. - - [geronimo-jms_2.0_spec-1.0-alpha-2.jar](https://mvnrepository.com/artifact/org.apache.geronimo.specs/geronimo-jms_2.0_spec/1.0-alpha-2) - - [netty-transport-native-epoll-4.0.40.Final.jar](https://mvnrepository.com/artifact/io.netty/netty-transport-native-epoll/4.0.40.Final) - - [proton-j-0.27.1.jar](https://mvnrepositor`y.com/artifact/org.apache.qpid/proton-j/0.27.1) - - [Qpid-jms-client-0.32.0.jar](https://mvnrepository.com/artifact/org.apache.qpid/qpid-jms-client/0.32.0) - -2. If you want the Micro Integrator to receive messages from an Azure Service Bus instance, or to send messages to an Azure Service Bus instance, you need to update the deployment.toml file with the relevant connection parameters. - - - Add the following configurations to enable the JMS listener with Azure Service Bus connection parameters. - - ```toml - [[transport.jms.listener]] - name = "azureQueueConsumerConnectionFactory" - parameter.initial_naming_factory = "org.apache.qpid.jms.jndi.JmsInitialContextFactory" - parameter.provider_url = "conf/jndi.properties" - parameter.connection_factory_name = "SBCF" - parameter.connection_factory_type = "queue" - ``` - - - Add the following configurations to enable the JMS sender with Azure Service Bus connection parameters. 
-
-    ```toml
-    [[transport.jms.sender]]
-    name = "azureQueueProducerConnectionFactory"
-    parameter.initial_naming_factory = "org.apache.qpid.jms.jndi.JmsInitialContextFactory"
-    parameter.provider_url = "conf/jndi.properties"
-    parameter.connection_factory_name = "SBCF"
-    parameter.connection_factory_type = "queue"
-    ```
-3. Start the Micro Integrator.
-
-Now you have configured instances of Azure Service Bus and WSO2 Micro Integrator.
-
diff --git a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-hornetq.md b/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-hornetq.md
deleted file mode 100644
index f908e21731..0000000000
--- a/en/docs/install-and-setup/setup/mi-setup/brokers/configure-with-hornetq.md
+++ /dev/null
@@ -1,374 +0,0 @@
-# Connecting to HornetQ
-
-This section describes how to configure WSO2 Micro Integrator to connect with HornetQ, which is an open source project to build a multi-protocol, asynchronous messaging system.
-
-You can either use a standalone HornetQ server or the HornetQ embedded in a JBoss Enterprise Application Platform (JBoss EAP) server.
-
-## Configure with a standalone HornetQ server
-
-Follow the instructions below to configure the WSO2 Micro Integrator JMS transport with a standalone HornetQ server.
-
-1. Download HornetQ from the [HornetQ Downloads](http://hornetq.jboss.org/downloads.html) site.
-2. Download and install WSO2 Micro Integrator.
-3. Create a sample queue by editing the `HORNET_HOME/config/stand-alone/non-clustered/hornetq-jms.xml` file as follows:
-    ```xml
-    <queue name="mySampleQueue">
-        <entry name="/queue/mySampleQueue"/>
-    </queue>
-    ```
-
-4. Add the following two connection entries to the same file. These entries are required to enable WSO2 Micro Integrator to act as a JMS consumer.
-    ```xml
-    <connection-factory name="QueueConnectionFactory">
-        <xa>false</xa>
-        <connectors>
-            <connector-ref connector-name="netty"/>
-        </connectors>
-        <entries>
-            <entry name="/QueueConnectionFactory"/>
-        </entries>
-    </connection-factory>
-
-    <connection-factory name="TopicConnectionFactory">
-        <xa>false</xa>
-        <connectors>
-            <connector-ref connector-name="netty"/>
-        </connectors>
-        <entries>
-            <entry name="/TopicConnectionFactory"/>
-        </entries>
-    </connection-factory>
-    ```
-
-5. If you have not already done so, download and install WSO2 Micro Integrator.
-6.
Download the [hornetq-all-new.jar](https://github.com/wso2-docs/WSO2_EI/raw/master/Broker-Setup-Artifacts/HornetQ/hornetq-all-new.jar) file and copy it into the `MI_HOME/lib/` directory.
-
-    !!! Info
-        If you are packing the JARs yourself, make sure you remove the javax.jms package from the assembled JAR to prevent the Carbon runtime from picking this implementation of JMS over the bundled-in distribution.
-
-7. If you want the Micro Integrator to receive messages from a HornetQ instance, or to send messages to a HornetQ instance, you need to update the deployment.toml file with the relevant connection parameters.
-
-    Add the following configurations to the `MI_HOME/conf/deployment.toml` file to enable the JMS sender and listener with HornetQ connection parameters.
-    ```toml
-    [transport.jms]
-    sender_enable = true
-
-    [[transport.jms.listener]]
-    name = "myTopicConnectionFactory"
-    parameter.initial_naming_factory = "org.jnp.interfaces.NamingContextFactory"
-    parameter.provider_url = "jnp://localhost:1099"
-    parameter.connection_factory_name = "TopicConnectionFactory"
-    parameter.connection_factory_type = "topic"
-    parameter.'java.naming.factory.url.pkgs' = "org.jboss.naming:org.jnp.interfaces"
-
-    [[transport.jms.listener]]
-    name = "myQueueConnectionFactory"
-    parameter.initial_naming_factory = "org.jnp.interfaces.NamingContextFactory"
-    parameter.provider_url = "jnp://localhost:1099"
-    parameter.connection_factory_name = "QueueConnectionFactory"
-    parameter.connection_factory_type = "queue"
-    parameter.'java.naming.factory.url.pkgs' = "org.jboss.naming:org.jnp.interfaces"
-
-    [[transport.jms.listener]]
-    name = "default"
-    parameter.initial_naming_factory = "org.jnp.interfaces.NamingContextFactory"
-    parameter.provider_url = "jnp://localhost:1099"
-    parameter.connection_factory_name = "QueueConnectionFactory"
-    parameter.connection_factory_type = "queue"
-    parameter.'java.naming.factory.url.pkgs' = "org.jboss.naming:org.jnp.interfaces"
-    ```
-    !!!
Info
-        For details on the JMS configuration parameters used in the code segments above, see [JMS connection factory parameters]({{base_path}}/reference/config-catalog-mi/#jms-transport-listener-non-blocking-mode).
-
-8. Start HornetQ with the following command.
-    - On Windows: `HORNETQ_HOME\bin\run.bat --run`
-    - On Linux/Solaris: `sh HORNETQ_HOME/bin/run.sh`
-
-Now you have configured WSO2 Micro Integrator with a standalone HornetQ server.
-
-### Testing the configuration
-
-To test the configuration, create a proxy service named `JMSPublisher` that will publish messages from the Micro Integrator to a sample queue in HornetQ, and create the `JMSListener` proxy service to read messages from the HornetQ sample queue.
-
-1. Create the `JMSPublisher` proxy service with the following configuration:
-
-    ```xml
-    <proxy xmlns="http://ws.apache.org/ns/synapse"
-           name="JMSPublisher"
-           transports="https http"
-           startOnLoad="true">
-       <target>
-          <inSequence>
-             <property name="OUT_ONLY" value="true"/>
-             <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
-             <send>
-                <endpoint>
-                   <address uri="jms:/queue/mySampleQueue?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory&amp;java.naming.provider.url=jnp://localhost:1099&amp;java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces&amp;transport.jms.DestinationType=queue"/>
-                </endpoint>
-             </send>
-          </inSequence>
-       </target>
-       <description>HornetQ-WSO2 ESB sample</description>
-    </proxy>
-    ```
-
-    !!! Info
-        - The `OUT_ONLY` parameter is set to `true` since this proxy service is created only for the purpose of publishing the messages from WSO2 Micro Integrator to the `mySampleQueue` queue specified in the address URI.
-        - You may have to change the host name, port, etc. of the JMS string based on your environment.
-
-2. Create the `JMSListener` proxy service with the following configuration:
-
-    ```xml
-    <proxy xmlns="http://ws.apache.org/ns/synapse"
-           name="JMSListener"
-           transports="jms"
-           startOnLoad="true">
-       <target>
-          <inSequence>
-             <log level="full"/>
-             <drop/>
-          </inSequence>
-       </target>
-       <parameter name="transport.jms.ContentType">
-          <rules>
-             <jmsProperty>contentType</jmsProperty>
-             <default>application/xml</default>
-          </rules>
-       </parameter>
-       <parameter name="transport.jms.Destination">queue/mySampleQueue</parameter>
-    </proxy>
-    ```
-
-3. Send the following request:
-
-    ```xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
-                      xmlns:ser="http://services.samples"
-                      xmlns:xsd="http://services.samples/xsd">
-       <soapenv:Header/>
-       <soapenv:Body>
-          <ser:placeOrder>
-             <ser:order>
-                <xsd:price>20</xsd:price>
-                <xsd:quantity>20</xsd:quantity>
-                <xsd:symbol>IBM</xsd:symbol>
-             </ser:order>
-          </ser:placeOrder>
-       </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-
-4. Check the log on your WSO2 Micro Integrator terminal. You will see the following log, which indicates that the request published in the queue is picked up by the `JMSListener` proxy.
-
-    ```xml
-    [