diff --git a/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md b/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md
deleted file mode 100644
index 02be394d15..0000000000
--- a/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Configuring the Log Provider
-
-Logs of a system can be stored in many ways. For example, they can be stored in a file system, in a relational database such as MySQL, or in a NoSQL store such as Cassandra. According to the default configuration of a Carbon product, the logs are stored in the `/repository/logs/` directory as `.log` files.
-
-To [view and download the logs](https://docs.wso2.com/display/ADMIN44x/View+and+Download+Logs) using the management console, the following configurations are required: the [Logging Management](https://docs.wso2.com/display/ADMIN44x/Monitoring+Logs+using+Management+Console) feature should be installed, [the log4j properties should be configured](https://docs.wso2.com/display/ADMIN44x/Configuring+Log4j+Properties), and the LogProvider and LogFileProvider interfaces should be implemented and configured for the server as described below.
-
-- [Implementing the LogProvider interface](#ConfiguringtheLogProvider-ImplementingtheLogProviderinterface)
-- [Implementing the LogFileProvider interface](#ConfiguringtheLogProvider-ImplementingtheLogFileProviderinterface)
-- [Configuring Carbon to plug the log provider](#ConfiguringtheLogProvider-ConfiguringCarbontoplugthelogprovider)
-
-### Implementing the LogProvider interface
-
-The `org.wso2.carbon.logging.service.provider.api.LogProvider` interface is used for viewing logs in the management console. It is introduced as an extension point to provide logs to the "Log Viewer" (in the management console). 
Any log provider can implement this interface to fetch logs from any mechanism, and the Log Viewer will use this interface to retrieve and show logs in the management console.
-
-The `LogProvider` interface has the following methods:
-
--`init(LoggingConfig loggingConfig)` - Initializes the log provider by reading the properties defined in the [logging configuration](#ConfiguringtheLogProvider-ConfigureLogProvidersinCarbonProducts) file. This is called immediately after creating a new instance of LogProvider.
-- getApplicationNames(String tenantDomain, String serverKey) - Returns a list of all application names deployed under the given tenant domain and server key.
-- getSystemLogs() - Returns a list of system LogEvents.
-- getAllLogs(String tenantDomain, String serverKey) - Returns a list of all the logs available under the given tenant domain and server key.
-- getLogsByAppName(String appName, String tenantDomain, String serverKey) - Returns a list of all the LogEvents belonging to the application, which is deployed under the given tenant domain and server key.
-- getLogs(String type, String keyword, String appName, String tenantDomain, String serverKey) - Returns a list of all LogEvents related to the given application that match the given type and whose LogEvent message contains the given keyword. This API can be used for search operations.
-- logsCount(String tenantDomain, String serverKey) - Returns the LogEvent count.
-- clearLogs() - Clear operation. For example, if it is an "in memory" log provider, this method can be used to clear the memory.
-
-### Implementing the LogFileProvider interface
-
-The `org.wso2.carbon.logging.service.provider.api.LogFileProvider` interface is used to list and download the archived log files using the management console. It is introduced as an extension point that provides the list of log file names and the ability to download these logs to the "Log Viewer". 
-
-The `LogFileProvider` interface has the following methods:
-
-- `init(LoggingConfig loggingConfig)` - Initializes the log file provider by reading the properties defined in the [logging configuration](#ConfiguringtheLogProvider-ConfigureLogProvidersinCarbonproducts) file. This is called immediately after creating a new instance of LogFileProvider.
-- `getLogFileInfoList(String tenantDomain, String serviceName)` - Returns information about the log files available under the given tenant domain and service name, for example, the log name, log date, and log size.
-- `downloadLogFile(String logFile, String tenantDomain, String serviceName)` - Downloads the file.
-
-!!! info
-Default log provider in Carbon products
-
-A default "in memory" log provider, which implements the `LogProvider` interface, has been created both as a sample and as the default log provider option in Carbon. The main task of this class is to read the Carbon logs available in the `/repository/logs/` directory into a buffer stored in memory, and to enable the LogViewer to fetch and view these logs in the management console.
-
-A default log file provider that implements the `LogFileProvider` interface has also been implemented as a sample and as the default log file provider option in Carbon. The main task of this class is to read the log file names (including the size and date of these files) from the `/repository/logs/` directory and to enable the download of these logs.
-
-
-### Configuring Carbon to plug the log provider
-
-After implementing the above interfaces, update the `logging-config.xml` file stored in the `/repository/conf/etc/` directory.
-
-- Shown below is the configuration for the default log provider and the default log file provider of a Carbon product:
-
-    ``` java
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    ```
-    !!! note
-    The default "InMemoryLogProvider" uses the CarbonMemoryAppender. 
Therefore, the log4j.properties file stored in the <PRODUCT\_HOME>/repository/conf/ directory should be updated with the following log4j.appender.CARBON\_MEMORY property:
-
-    ``` java
-    log4j.appender.CARBON_MEMORY=org.wso2.carbon.logging.service.appender.CarbonMemoryAppender
-    ```
-
-- If the implemented class requires additional properties to initialise it, these can be defined in the `logging-config.xml` file. For example, a Cassandra-based log provider may need information on the keyspace, column family, etc. You can configure these details in the `logging-config.xml` file and access them at runtime through the `LoggingConfig` class, which contains all the configuration parameters and is assigned when the class is initialized.
-
-- The following properties can be configured in the `logging-config.xml` file for a Cassandra-based log provider:
-
-    ``` java
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    
-    ```
-
-
diff --git a/en/docs/administer/logging-and-monitoring/monitoring/jmx-monitoring.md b/en/docs/administer/logging-and-monitoring/monitoring/jmx-monitoring.md
deleted file mode 100644
index 59d6ecf1a0..0000000000
--- a/en/docs/administer/logging-and-monitoring/monitoring/jmx-monitoring.md
+++ /dev/null
@@ -1,206 +0,0 @@
-# JMX Monitoring
-
-Java Management Extensions (JMX) is a technology that lets you implement management interfaces for Java applications. **JConsole** is a JMX-compliant monitoring tool, which comes with the Java Development Kit (JDK). Therefore, when you use a WSO2 product, JMX is enabled by default, which allows you to monitor the product using JConsole.
-
-!!! info
-Go to the [WSO2 Administration Guide](https://docs.wso2.com/display/ADMIN44x/JMX-Based+Monitoring) for detailed instructions on how to configure JMX for a WSO2 product and how to use **JConsole** for monitoring a product.
-
-
-### MBeans for WSO2 API Manager
-
-When JMX is enabled, WSO2 API Manager exposes a number of management resources as JMX MBeans that can be used for managing and monitoring the running server. When you start JConsole, you can monitor these MBeans from the **MBeans** tab. While some of these MBeans (**ServerAdmin** and **DataSource**) are common to all WSO2 products, some MBeans are specific to WSO2 API Manager.
-
-!!! tip
-The common MBeans are explained in detail in the [WSO2 Administration Guide](https://docs.wso2.com/display/ADMIN44x/JMX-Based+Monitoring). Listed below are the MBeans that are specific to WSO2 API Manager.
-
-This section summarizes the attributes and operations available for the following API Manager-specific MBeans:
-
-- Connection MBeans
-- Latency MBeans
-- Threading MBeans
-- Transport MBeans
-
-#### Connection MBeans
-
-These MBeans provide connection statistics for the HTTP and HTTPS transports.
-
-You can view the following Connection MBeans:
-
--`org.apache.synapse/PassThroughConnections/http-listener`
--`org.apache.synapse/PassThroughConnections/http-sender`
--`org.apache.synapse/PassThroughConnections/https-listener`
--`org.apache.synapse/PassThroughConnections/https-sender`
-
-**Attributes**
-
-| Attribute Name     | Description                                                    |
-|--------------------|----------------------------------------------------------------|
-| ActiveConnections  | Number of currently active connections.                        |
-| LastXxxConnections | Number of connections created during the last Xxx time period. |
-| RequestSizesMap    | A map of the number of requests against their sizes.           |
-| ResponseSizesMap   | A map of the number of responses against their sizes.          |
-| LastResetTime      | Last time the connection statistics recording was reset.
| - -**Operations** - -| Operation Name | Description | -|----------------|-------------------------------------------------------------| -| reset() | Clear recorded connection statistics and restart recording. | - -#### Latency MBeans - -This view provides statistics of the latencies from all backend services connected through the HTTP  and HTTPS transports. These statistics are provided as an aggregate value. - -You can view the following Latency MBeans: - --`org.apache.synapse/PassthroughLatencyView/nio-http-http` --`org.apache.synapse/PassthroughLatencyView/nio-https-https` - -**Attributes** - -| Attribute Name | Description | -|----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Avg\_Latency | Average latency since latency recording was last reset. | -| txxx\_AvgLatency | Average latency for last xxx time period. For example, LastHourAvgLatency returns the average latency for the last hour. | -| LastResetTime | Last time latency statistic recording was reset. | -| Avg\_Client\_To\_Esb\_RequestReadTime | Average Time taken to read request by API Manager which is sent by the client | -| xxx\_Avg\_Client\_To\_Esb\_RequestReadTime | Average Time taken to read request by gateway which is sent by the client for last xxx time period. For example 15m\_Avg\_Client\_To\_Esb\_RequestReadTime means average Time taken to read request by API Manager which is sent by the client for last 15 minutes. | -| Avg\_Esb\_To\_Backend\_RequestWriteTime | Average Time taken to write the request from gateway to the backend. | -| xxx\_Avg\_Esb\_To\_Backend\_RequestWriteTime | Average Time taken to write the request from gateway to the backend in last xxx time period. 
For example, 15m\_Avg\_Esb\_To\_Backend\_RequestWriteTime is the average time taken to write the request from the gateway to the backend in the last 15 minutes. |
-| Avg\_Backend\_To\_Esb\_ResponseReadTime | Average time taken by the gateway to read the response sent by the backend. |
-| xxx\_Avg\_Backend\_To\_Esb\_ResponseReadTime | Average time taken by the gateway to read the response sent by the backend in the last xxx time period. |
-| Avg\_Esb\_To\_Client\_ResponseWriteTime | Average time taken to write the response from the gateway to the client application. |
-| xxx\_Avg\_Esb\_To\_Client\_ResponseWriteTime | Average time taken to write the response from the gateway to the client application in the last xxx time period. |
-| Avg\_ClientWorker\_Queued\_Time | Average time for which the ClientWorker remains queued. |
-| xxx\_Avg\_ClientWorker\_Queued\_Time | Average time for which the ClientWorker remains queued in the last xxx time period. |
-| Avg\_ServeWorker\_Queued\_Time | Average time for which the ServerWorker remains queued. |
-| xxx\_Avg\_ServeWorker\_Queued\_Time | Average time for which the ServerWorker remains queued in the last xxx time period. |
-| Avg\_Latency\_Backend | Average backend latency. |
-| xxx\_Avg\_Latency\_Backend | Average backend latency in the last xxx time period. |
-| Avg\_Request\_Mediation\_Latency | Average latency of mediating the requests. |
-| Avg\_Response\_Mediation\_Latency | Average latency of mediating the responses. |
-
-**Operations**
-
-| Operation Name | Description                                              |
-|----------------|----------------------------------------------------------|
-| reset()        | Clear recorded latency statistics and restart recording. |
-
-#### Threading MBeans
-
-These MBeans are only available in the NHTTP transport and not in the default Pass Through transport. 
-
-You can view the following Threading MBeans:
-
--`org.apache.synapse/Threading/PassThroughHttpServerWorker`
-
-**Attributes**
-
-| Attribute Name                 | Description                                                             |
-|--------------------------------|-------------------------------------------------------------------------|
-| TotalWorkerCount               | Total worker threads related to this server/client.                     |
-| AvgUnblockedWorkerPercentage   | Time-averaged unblocked worker thread percentage.                       |
-| AvgBlockedWorkerPercentage     | Time-averaged blocked worker thread percentage.                         |
-| LastXxxBlockedWorkerPercentage | Blocked worker thread percentage averaged for the last Xxx time period. |
-| DeadLockedWorkers              | Number of deadlocked worker threads since the last statistics reset.    |
-| LastResetTime                  | Last time the thread statistics recording was reset.                    |
-
-**Operations**
-
-| Operation Name | Description                                             |
-|----------------|---------------------------------------------------------|
-| reset()        | Clear recorded thread statistics and restart recording. |
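While JConsole is the usual way to browse these MBeans, the same attributes and operations can also be read programmatically through the standard JMX API. The sketch below registers a small illustrative MBean of its own (the `demo.synapse` object name and the `DemoStats` class are invented for this example and are not part of WSO2 API Manager), then reads an attribute and invokes `reset()` the same way a JMX client would against the MBeans listed above.

``` java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class MBeanReadDemo {

    // Management interface in the style of the MBeans above:
    // one attribute (LastResetTime) plus a reset() operation.
    public interface DemoStatsMBean {
        long getLastResetTime();
        void reset();
    }

    public static class DemoStats implements DemoStatsMBean {
        private volatile long lastResetTime = System.currentTimeMillis();
        @Override public long getLastResetTime() { return lastResetTime; }
        @Override public void reset() { lastResetTime = System.currentTimeMillis(); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Illustrative object name; the real MBeans use names such as
        // org.apache.synapse/Threading/PassThroughHttpServerWorker.
        ObjectName name = new ObjectName("demo.synapse:Type=Threading,Name=DemoStats");
        server.registerMBean(new StandardMBean(new DemoStats(), DemoStatsMBean.class), name);

        // Read an attribute and invoke an operation, exactly as JConsole would.
        long before = (Long) server.getAttribute(name, "LastResetTime");
        server.invoke(name, "reset", null, null);
        long after = (Long) server.getAttribute(name, "LastResetTime");
        System.out.println(after >= before);
    }
}
```

The same `getAttribute`/`invoke` calls work against a remote server when you obtain the `MBeanServerConnection` through a `JMXConnector` instead of the platform MBean server.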
-
-#### Transport MBeans
-
-For each transport listener and sender enabled in the ESB, there will be an MBean under the `org.apache.axis2/Transport` domain. For example, when the JMS transport is enabled, the following MBean will be exposed:
-
--`org.apache.axis2/Transport/jms-sender-n`
-
-You can also view the following Transport MBeans:
-
--`org.apache.synapse/Transport/passthru-http-receiver`
--`org.apache.synapse/Transport/passthru-http-sender`
--`org.apache.synapse/Transport/passthru-https-receiver`
--`org.apache.synapse/Transport/passthru-https-sender`
-
-**Attributes**
-
-| Attribute Name    | Description                                                                                                                |
-|-------------------|----------------------------------------------------------------------------------------------------------------------------|
-| ActiveThreadCount | Threads active in this transport listener/sender.                                                                          |
-| AvgSizeReceived   | Average size of received messages.                                                                                         |
-| AvgSizeSent       | Average size of sent messages.                                                                                             |
-| BytesReceived     | Number of bytes received through this transport.                                                                           |
-| BytesSent         | Number of bytes sent through this transport.                                                                               |
-| FaultsReceiving   | Number of faults encountered while receiving.                                                                              |
-| FaultsSending     | Number of faults encountered while sending.                                                                                |
-| LastResetTime     | Last time the transport listener/sender statistics recording was reset.                                                    |
-| MaxSizeReceived   | Maximum message size of received messages.                                                                                 |
-| MaxSizeSent       | Maximum message size of sent messages.                                                                                     |
-| MetricsWindow     | Time difference between the current time and the last reset time, in milliseconds.                                         |
-| MinSizeReceived   | Minimum message size of received messages.                                                                                 |
-| MinSizeSent       | Minimum message size of sent messages.                                                                                     |
-| MessagesReceived  | Total number of messages received through this transport.                                                                  |
-| MessagesSent      | Total number of messages sent through this transport.                                                                      |
-| QueueSize         | Number of messages currently queued. Messages get queued if all the worker threads in this transport thread pool are busy. |
-| ResponseCodeTable | Number of messages sent against their response codes.                                                                      |
-| TimeoutsReceiving | Message receiving timeout.                                                                                                 |
-| TimeoutsSending   | Message sending timeout.                                                                                                   |
-
-**Operations**
-
-| Operation Name                        | Description                                                                                                                                        |
-|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
-| start()                               | Start this transport listener/sender.                                                                                                              |
-| stop()                                | Stop this transport listener/sender.                                                                                                               |
-| resume()                              | Resume this transport listener/sender, which is currently paused.                                                                                  |
-| resetStatistics()                     | Clear recorded transport listener/sender statistics and restart recording.                                                                         |
-| pause()                               | Pause this transport listener/sender, which has been started.                                                                                      |
-| maintenenceShutdown(long gracePeriod) | Stop processing new messages, and wait the specified maximum time for in-flight requests to complete before a controlled shutdown for maintenance. |
-
-
diff --git a/en/docs/administer/logging-and-monitoring/monitoring/monitoring-with-opentracing.md b/en/docs/administer/logging-and-monitoring/monitoring/monitoring-with-opentracing.md
deleted file mode 100644
index 148a23598e..0000000000
--- a/en/docs/administer/logging-and-monitoring/monitoring/monitoring-with-opentracing.md
+++ /dev/null
@@ -1,122 +0,0 @@
-# Enabling Tracing with OpenTracing
-
-In a distributed API Manager architecture, tracing a message is important for debugging and observing a message path. This is known as distributed tracing. OpenTracing allows you to enable distributed tracing for WSO2 API Manager.
-OpenTracing aims to be an open, vendor-neutral standard for distributed systems instrumentation. It offers developers a way to trace requests from beginning to end across touch points and to understand distributed systems at scale. OpenTracing also helps you trace a message and identify the latency incurred in each process or method, which lets you carry out a time-related analysis.
-
- WSO2 API Manager supports the following ways to retrieve instrumented data. 
- - - Jaeger - - Zipkin - - Log - -For more information, see [Open Tracer Configurations]({{base_path}}/reference/config-catalog/#api-m-open-tracer-configurations). - -## Enabling Jaeger Tracing - -1. Copy the following configuration into the `deployment.toml` file. - - === "Format" - ```toml - [apim.open_tracer] - remote_tracer.enable = true - remote_tracer.name = "jaeger" - remote_tracer.properties.hostname = "" - remote_tracer.properties.port = "" - ``` - - === "Example" - ```toml - [apim.open_tracer] - remote_tracer.enable = true - remote_tracer.name = "jaeger" - remote_tracer.properties.hostname = "localhost" - remote_tracer.properties.port = "6831" - #6832 can also be used as the port - ``` - -2. Start the server. - - After you invoke the APIs you will see the tracing data in Jaeger as follow: - - [![Distributed tracing jaeger]({{base_path}}/assets/img/administer/opentracing-jaeger.png)]({{base_path}}/assets/img/administer/opentracing-jaeger.png) - -## Enabling Zipkin Tracing - -1. Copy the following configuration into the `deployment.toml` file. - - === "Format" - ```toml - [apim.open_tracer] - remote_tracer.enable = true - remote_tracer.name = "zipkin" - remote_tracer.properties.hostname = "" - remote_tracer.properties.port = "" - ``` - - === "Example" - ```toml - [apim.open_tracer] - remote_tracer.enable = true - remote_tracer.name = "zipkin" - remote_tracer.properties.hostname = "localhost" - remote_tracer.properties.port = "9411" - ``` - -2. Start the server. - - After you invoke the APIs you will see the tracing data in Zipkin as follow: - -[![Distributed tracing zipkin]({{base_path}}/assets/img/administer/opentracing-zipkin.png)]({{base_path}}/assets/img/administer/opentracing-zipkin.png) - - -## Enabling Log Tracing - -1. Navigate to the `/conf/log4j2.properties` file and locate the following configuration. - - ``` - logger.trace.name = trace - ``` -2. Change the above configuration as follows. - - ``` - logger.trace.name = tracer - ``` - -3. 
Copy the following configuration into the `deployment.toml` file. - - ```toml - [apim.open_tracer] - remote_tracer.enable = false - log_tracer.enable = true - ``` - -4. Start the server. - - After you invoke the APIs you will be able to see tracing data in the `wso2-apimgt-open-tracing.log` in the `/repository/logs` folder. - - ```log - 20:19:46,937 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:CORS_Request_Handler","Tags":{}} - n20:19:46,938 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:Get_Client_Domain()","Tags":{}} - n20:19:46,939 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:Find_matching_verb()","Tags":{}} - n20:19:46,939 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":1,"Operation":"API:Get_Resource_Authentication_Scheme()","Tags":{}} - n20:19:46,956 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Get_Access_Token_Cache_key()","Tags":{}} - n20:19:46,958 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Fetching_API_iNFO_DTO_FROM_CACHE()","Tags":{}} - n20:19:46,959 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Validate_Token()","Tags":{}} - n20:19:46,972 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":12,"Operation":"API:Validate_Subscription()","Tags":{}} - n20:19:46,973 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Validate_Scopes()","Tags":{}} - n20:19:46,974 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Write_To_Key_Manager_Cache()","Tags":{}} - n20:19:46,975 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":0,"Operation":"API:Publishing_Key_Validation_Response","Tags":{}} - n20:19:46,976 [-] [https-jsse-nio-9443-exec-23] TRACE {"Latency":20,"Operation":"API:Validate_Main","Tags":{}} - n20:19:46,991 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":51,"Operation":"API:Key_Validation_From_Gateway_Node","Tags":{}} - n20:19:46,992 [-] 
[PassThroughMessageProcessor-14] TRACE {"Latency":52,"Operation":"API:Get_Key_Validation_Info()","Tags":{}} - n20:19:46,993 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":56,"Operation":"API:Key_Validation_Latency","Tags":{}} - n20:19:46,994 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:Throttle_Latency","Tags":{}} - n20:19:46,995 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:API_Mgt_Usage_Handler","Tags":{}} - n20:19:46,996 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":1,"Operation":"API:Google_Analytics_Handler","Tags":{}} - n20:19:46,996 [-] [PassThroughMessageProcessor-14] TRACE {"Latency":0,"Operation":"API:Request_Mediation_Latency","Tags":{}} - n20:19:47,016 [-] [PassThroughMessageProcessor-15] TRACE {"Latency":19,"Operation":"API:Backend_Latency","Tags":{"span.endpoint":"https://localhost:9443/am/sample/pizzashack/v1/api/"}} - n20:19:47,017 [-] [PassThroughMessageProcessor-15] TRACE {"Latency":0,"Operation":"API:Response_Mediation_Latency","Tags":{}} - n20:19:47,018 [-] [PassThroughMessageProcessor-15] TRACE {"Latency":0,"Operation":"API:API_MGT_Response_Handler","Tags":{}} - n20:19:47,019 [-] [PassThroughMessageProcessor-15] TRACE {"Latency":83,"Operation":"API:Response_Latency","Tags":{"span.resource":"/menu","span.kind":"server","span.api.name":"PizzaShackAPI","span.consumerkey":"Fn9RGuFeefEe7W07jOq_mvQvLJwa","span.request.method":"GET","span.request.path":"pizzashack/1.0.0/menu","span.api.version":"1.0.0","span.activity.id":"urn:uuid:339f337a-8848-41ec-adba-73da367fa66e"}} - n - ``` diff --git a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md b/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md deleted file mode 100644 index 8cb6e2dc39..0000000000 --- a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md +++ /dev/null @@ 
-1,89 +0,0 @@ -# Configuring the System Administrator - -The **admin** user is the super tenant that will be able to manage all other users, roles and permissions in the system by using the management console of the product. Therefore, the user that should have admin permissions is required to be stored in the primary user store when you start the system for the first time . The documentation on setting up primary user stores will explain how to configure the administrator while configuring the user store. The information under this topic will explain the main configurations that are relevant to setting up the system administrator. - -!!! note - If the primary user store is read-only, you will be using a user ID and role that already exists in the user store, for the administrator. If the user store is read/write, you have the option of creating the administrator user in the user store as explained below. By default, the embedded H2 database (with read/write enabled) is used for both these purposes in WSO2 products. - - -Note the following key facts about the system administrator in your system: - -- The admin user and role is always stored in the primary user store in your system. -- An administrator is configured for your system by default. This **admin** user is assigned to the **admin** role, which has all permissions enabled. -- The permissions assigned to the default **admin** role cannot be modified. - -### Before you begin **:** - -Ensure that you have a primary user store (for storing users and roles) and an RDBMS (for storing information related to permissions). See the following documentation for instructions on how to set up these repositories. - -- [Configuring the Primary User Stores](https://docs.wso2.com/display/ADMIN44x/Configuring+the+Primary+User+Store) : This topic explains how the primary user store is set up and configured for your product. 
-- [Configuring the Authorization Manager](https://docs.wso2.com/display/ADMIN44x/Configuring+the+Authorization+Manager) : This topic explains how the repository (RDBMS) for storing authorization information (role-based permissions) is configured for your product. -- [Changing a Password](https://docs.wso2.com/display/ADMIN44x/Changing+a+Password) : This topic explains how you can change the admin password using the management console of the product. - -### Updating the administrator - -The `` section at the top of the `/repository/conf/user-mgt.xml` file allows you to configure the administrator user in your system as well as the RDBMS that will be used for storing information related to user authentication (i.e. role-based permissions). - -``` java - - - true -  admin - - admin - admin - - everyone - - ............... - - ... - -``` - -Note the following regarding the configuration above. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Element | Description |
-|---------|-------------|
-| `<AddAdmin>` | When `true`, this element creates the admin user based on the `AdminUser` element. It also indicates whether to create the specified admin user if it doesn't already exist. When connecting to an external read-only LDAP or Active Directory user store, this property needs to be `false` if an admin user and admin role exist within the user store. If the admin user and admin role do not exist in the user store, this value should be `true`, so that the role is added to the user management database. However, if the admin user is not in the user store, you must add that user to the user store manually. If the `AddAdmin` value is set to `true` in this case, it will generate an exception. |
-| `<AdminRole>wso2admin</AdminRole>` | This is the role that has all the administrative privileges of the WSO2 product, so all users having this role are admins of the product. You can provide any meaningful name for this role. This role is created in the internal H2 database when the product starts. This role has permission to carry out any action related to the Management Console. If the user store is read-only, this role is added to the system as a special internal role whose users are from an external user store. |
-| `<AdminUser>` | Configures the default administrator for the WSO2 product. If the user store is read-only, the admin user must exist in the user store or the system will not start. If the external user store is read-only, you must select a user already existing in the external user store and add it as the admin user that is defined in the `<AdminUser>` element. If the external user store is in read/write mode, and you set `<AddAdmin>` to `true`, the user you specify will be automatically created. |
-| `<UserName>` | This is the username of the default administrator or super tenant of the user store. If the user store is read-only, the admin user MUST exist in the user store for the process to work. |
-| `<Password>` | Do NOT put the password here; leave the default value. If the user store is read-only, this element and its value are ignored. This password is used only if the user store is read/write and the `AddAdmin` value is set to `true`. Note that the password in the user-mgt.xml file is written to the primary user store when the server starts for the first time. Thereafter, the password is validated against the primary user store and not against the user-mgt.xml file. Therefore, if you need to change the admin password stored in the user store, you cannot simply change the value in the user-mgt.xml file. To change the admin password, you must use the **Change Password** option in the management console, as explained in [Changing a Password](https://docs.wso2.com/display/ADMIN44x/Changing+a+Password). |
-| `<EveryOneRoleName>` | The name of the "everyone" role. All users in the system belong to this role. |
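Putting the elements described above together, for a read-only external user store the relevant fragment of the user-mgt.xml file would look roughly as follows. This is a sketch only: the username and password values are placeholders for illustration, not product defaults.

```xml
<AddAdmin>false</AddAdmin>
<AdminRole>admin</AdminRole>
<AdminUser>
    <!-- Must be a user that already exists in the read-only user store -->
    <UserName>existing-admin-user</UserName>
    <!-- Ignored when the user store is read-only -->
    <Password>ignored</Password>
</AdminUser>
<EveryOneRoleName>everyone</EveryOneRoleName>
```

With a read/write user store, you would instead set `<AddAdmin>` to `true` so that the specified user is created at first startup.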
- - diff --git a/en/docs/administer/managing-users-and-roles/managing-user-stores/understanding-the-user-realm.md b/en/docs/administer/managing-users-and-roles/managing-user-stores/understanding-the-user-realm.md deleted file mode 100644 index 90ff74d55f..0000000000 --- a/en/docs/administer/managing-users-and-roles/managing-user-stores/understanding-the-user-realm.md +++ /dev/null @@ -1,14 +0,0 @@ -# Understanding the User Realm - -User management functionality is provided by default in all WSO2 Carbon-based products and is configured in the -`deployment.toml` file found in the `/repository/conf/` directory. The following documentation explains -the configurations that should be done in WSO2 products in order to set up the User Management module. - -The complete functionality and contents of the User Management module is called a **user realm** . The realm includes the user management classes, configurations and repositories that store information. Therefore, configuring the User Management functionality in a WSO2 product involves setting up the relevant repositories and updating the relevant configuration files. - -The following diagram illustrates the required configurations and repositories: -![]({{base_path}}/assets/attachments/126562314/126562315.png) - -The following sections include instructions on the above required configurations and repositories: - - diff --git a/en/docs/administer/multitenancy/adding-new-tenants.md b/en/docs/administer/multitenancy/adding-new-tenants.md deleted file mode 100644 index 2766defe96..0000000000 --- a/en/docs/administer/multitenancy/adding-new-tenants.md +++ /dev/null @@ -1,110 +0,0 @@ -# Adding New Tenants - -See the topics given below for instructions. 
- -- [Adding tenants using the management console](#AddingNewTenants-Addingtenantsusingthemanagementconsole) -- [Managing tenants using Admin Services](#AddingNewTenants-ManagingtenantsusingAdminServices) - -### Adding tenants using the management console - -You can add a new tenant in the management console and then view it by following the procedure below. In order to add a new tenant, you should be logged in as a super user. - -1. Click **Add New Tenant** in the **Configure** tab of your product's management console. - - ![]({{base_path}}/assets/attachments/126562777/126562778.png) - -2. Enter the tenant information in **Register A New Organization** screen as follows, and click **Save**. - - | Parameter Name | Description | - |----------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - | **Domain** | The domain name for the organization, which should be unique (e.g., abc.com). This is used as a unique identifier for your domain. You can use it to log into the admin console to be redirected to your specific tenant. The domain is also used in URLs to distinguish one tenant from another. | - | **Select Usage Plan for Tenant** | The usage plan defines limitations (such as the number of users, bandwidth etc.) for the tenant. | - | **First Name** / **Last Name** | The name of the tenant admin. | - | **Admin Username** | The login username of the tenant admin. The username always ends with the domain name (e.g., admin@abc.com ) | - | **Admin Password** | The password used to log in using the admin username specified. | - | **Admin Password (Repeat)** | Repeat the password to confirm. | - | **Email** | The email address of the admin. | - -3. 
After saving, the newly added tenant appears on the **Tenants List** page as shown below. Click **View Tenants** in the **Configure** tab of the management console to see information about all the tenants that currently exist in the system. If you want to view only tenants of a specific domain, enter the domain name in the **Enter the Tenant Domain** parameter and click **Find**.
-    ![]({{base_path}}/assets/attachments/126562777/126562781.png)
-### Managing tenants using Admin Services
-
-Other tenant management operations such as activating, deactivating, and updating, which are not available in the management console UI, can be done through one of the following admin services:
-
-- `TenantMgtAdminService`
-- `RemoteTenantManagerService`
-
-You can invoke these operations using a SOAP client like SOAP UI as follows:
-
-1. Open the `/repository/conf/carbon.xml` file and set the `HideAdminServiceWSDLs` parameter to `false`.
-2. Start the product server by executing the product startup script from the `/bin` directory:
-
-    **In Linux**
-
-    ``` bash
-    sh api-manager.sh
-    ```
-
-    **In Windows**
-
-    ``` bash
-    api-manager.bat
-    ```
-
-    !!! tip
-        Get the list of available admin services
-
-        If you want to discover the admin services that are exposed by your product:
-
-        1. Execute the following command:
-
-            **In Linux**
-
-            ``` bash
-            sh api-manager.sh -DosgiConsole
-            ```
-
-            **In Windows**
-
-            ``` bash
-            api-manager.bat -DosgiConsole
-            ```
-
-        2. When the server has started, hit the Enter/Return key several times to get the OSGi shell in the console.
-        3. In the OSGi shell, enter the following: `listAdminServices`
-
-            This will give the list of admin services for your product.
-
-
-3. 
Start the SOAP UI client, and import the WSDL of the admin service that you are using:
-
-    - For `TenantMgtAdminService`: `https://localhost:9443/services/TenantMgtAdminService?wsdl`
-    - For `RemoteTenantManagerService`: `https://localhost:9443/services/RemoteTenantManagerService?wsdl`
-
-    This assumes that you are running the SOAP UI client on the same machine as the product instance. Note that several operations are shown in the SOAP UI after importing the WSDL file:
-
-    ![]({{base_path}}/assets/attachments/126562777/126562782.png)
-    !!! warning
-        Before invoking an operation:
-
-        - Be sure to set the admin user's credentials for authorization in the SOAP UI.
-        - Note that it is **not recommended** to delete tenants.
-
-
-4. Click the operation to open the request view. For example, to activate a tenant, use the `activateTenant` operation.
-
-5. If your tenant domain is abc.com, invoke the `activateTenant` operation with the following request:
-
-    ``` xml
-    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.mgt.tenant.carbon.wso2.org">
-       <soapenv:Header/>
-       <soapenv:Body>
-          <ser:activateTenant>
-             <ser:tenantDomain>abc.com</ser:tenantDomain>
-          </ser:activateTenant>
-       </soapenv:Body>
-    </soapenv:Envelope>
-    ```
-
diff --git a/en/docs/api-analytics/analytics-api-guide.md b/en/docs/api-analytics/analytics-api-guide.md
deleted file mode 100644
index 74e3899792..0000000000
--- a/en/docs/api-analytics/analytics-api-guide.md
+++ /dev/null
@@ -1,2 +0,0 @@
-!!! Note
-    Content to be added. WIP.
\ No newline at end of file
diff --git a/en/docs/api-analytics/choreo-analytics/gateways/configure-synapse-gateway.md b/en/docs/api-analytics/choreo-analytics/gateways/configure-synapse-gateway.md
deleted file mode 100644
index be3dbec99d..0000000000
--- a/en/docs/api-analytics/choreo-analytics/gateways/configure-synapse-gateway.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Configure the API Gateway
-
-API Analytics is delivered via the API Analytics Cloud. Therefore, the API Manager Gateway needs to be configured to publish analytics data into the cloud.
-
-## Basic configurations
-
-{!includes/analytics/configure-synapse-gateway.md!}
-
-## Advanced configurations
-
-This section explains the additional configurations that you can carry out to fine-tune the analytics data publishing process. These configurations are set to default values that were derived through testing. However, based on other factors, you may need to fine-tune these parameters.
-
-### Worker Thread Count
-
-This property defines the number of threads that publish analytics data into the Analytics Cloud. The default value is one thread. One thread can serve up to 3200 requests per second with unrestricted internet bandwidth. If a single thread is not enough to meet the load handled by your Gateway, you will encounter the following error message in the Gateway logs:
-
-`Event queue is full. Starting to drop analytics events.`
-
-If you get the above error only during API invocation spikes, increase the queue size as explained in the next section. However, if you are getting this error repeatedly, you should increase the worker thread count as shown below.
-
-```toml
-[apim.analytics]
-enable = true
-config_endpoint = "https://analytics-event-auth.choreo.dev/auth/v1"
-auth_token = ""
-properties.'worker.thread.count' = 2
-```
-
-### Queue Size
-
-This property denotes the number of analytics events that the Gateway keeps in memory and uses to handle request bursts. The default value is 20000. As explained in the previous section, if you get the following error message in the Gateway logs during API invocation spikes, you should consider increasing the queue size.
-
-`Event queue is full. Starting to drop analytics events.`
-
-However, another factor to consider when increasing the queue size is the memory footprint. A single analytics publishing event is around 1 KB, so plan the capacity so that a full queue does not strain the JVM heap.
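The sizing guidance above (one publisher thread per roughly 3200 requests per second, and about 1 KB per queued event) can be turned into a quick back-of-the-envelope capacity check. The sketch below is illustrative only, using just the figures stated on this page; the helper functions are not part of the Gateway:

```python
import math

EVENTS_PER_THREAD_PER_SEC = 3200   # stated capacity of one publisher thread
EVENT_SIZE_BYTES = 1024            # a single analytics event is around 1 KB

def worker_threads_needed(peak_requests_per_sec: int) -> int:
    """Minimum 'worker.thread.count' to keep up with the peak request rate."""
    return max(1, math.ceil(peak_requests_per_sec / EVENTS_PER_THREAD_PER_SEC))

def queue_memory_mb(queue_size: int) -> float:
    """Approximate heap consumed by a completely full event queue, in MB."""
    return queue_size * EVENT_SIZE_BYTES / (1024 * 1024)

# A Gateway peaking at 10,000 req/s needs 4 publisher threads, and the
# default queue of 20,000 events holds roughly 20 MB when completely full.
print(worker_threads_needed(10_000))   # 4
print(queue_memory_mb(20_000))         # 19.53125
```

Note that the queue only absorbs short bursts: sustained load beyond `worker.thread.count` times 3200 requests per second will eventually fill any queue, which is why repeated drop messages call for more worker threads rather than a larger queue.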
To tweak the property, add the configuration as shown below.
-
-```toml
-[apim.analytics]
-enable = true
-config_endpoint = "https://analytics-event-auth.choreo.dev/auth/v1"
-auth_token = ""
-properties.'queue.size' = 10000
-```
-
-### Client Flushing Delay
-
-This property denotes the guaranteed interval (in milliseconds) at which analytics events are published to the cloud. Analytics events are batched before being sent, and once a given batch is full, it is published into the Analytics Cloud. However, under low throughput, it can take some time for a batch to fill. In such cases, the Client Flushing Delay takes effect: a separate publisher publishes the analytics events after every Client Flushing Delay interval if the Event Queue mentioned above is empty and none of the worker threads are currently publishing. By default, this is set to 10 seconds. To tweak the property, add the configuration as shown below.
-
-```toml
-[apim.analytics]
-enable = true
-config_endpoint = "https://analytics-event-auth.choreo.dev/auth/v1"
-auth_token = ""
-properties.'client.flushing.delay' = 15000
-```
diff --git a/en/docs/api-analytics/qos.md b/en/docs/api-analytics/qos.md
deleted file mode 100644
index 74e3899792..0000000000
--- a/en/docs/api-analytics/qos.md
+++ /dev/null
@@ -1,2 +0,0 @@
-!!! Note
-    Content to be added. WIP.
\ No newline at end of file
diff --git a/en/docs/api-analytics/viewing/analytics-pages-overview.md b/en/docs/api-analytics/viewing/analytics-pages-overview.md
deleted file mode 100644
index df5c6d4be1..0000000000
--- a/en/docs/api-analytics/viewing/analytics-pages-overview.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Viewing the API Analytics Overview
-
-[![overview page full]({{base_path}}/assets/img/analytics/overview/overview-page-full.png)]({{base_path}}/assets/img/analytics/overview/overview-page-full.png)
-
-This is the welcome page of the analytics portal.
This page gives you a quick overview of the whole
-management system. The targeted audience for this page is managers and business users who need quick insights. This
-page can also be used as a dashboard to view the current system status.
-
-### Total Traffic Widget
-
-
-The Total Traffic widget displays the total traffic that your selected environment received during the selected time range.
-This includes both successful requests and error requests. If you want to investigate the traffic further, see [Viewing API Analytics on Traffic]({{base_path}}/api-analytics/viewing/analytics-pages-traffic).
-
-### Latency Widget
-
-
-The Latency widget displays the 95th percentile of all API latencies in your selected environment for the selected time
-period. You can use this widget to know whether the whole system operates within the given SLAs. This metric
-gives the first indication of slow APIs. To investigate further, see [Viewing API Analytics on Latency]({{base_path}}/api-analytics/viewing/analytics-pages-latency).
-
-### Error Rate Widget
-
-
-This widget displays the average error rate (error count/total request count) in your selected environment for
-the selected time period. You can use this widget as an indicator of the health of the system. If the error
-rate is high, investigate further using [Viewing API Analytics on Errors]({{base_path}}/api-analytics/viewing/analytics-pages-errors).
-
-## API Request Summary Timeline
-
-![overview page timeline]({{base_path}}/assets/img/analytics/overview/overview-page-timeline.png)
-
-This chart combines the above three widgets and plots them in a timeline. The y-axis on the left shows the request count and error
-count, the x-axis displays time, and the y-axis on the right shows latency in milliseconds. The granularity of the data points is
-decided based on the time range you have selected. The tooltip provides the exact value of all three metrics.
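The 95th-percentile latency shown in the Latency widget can be reproduced from raw per-request latencies using a nearest-rank percentile. The sketch below is a generic illustration of that statistic, not the analytics portal's actual aggregation logic:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical per-request latencies in milliseconds for one API.
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 900, 17]
p95 = percentile(latencies_ms, 95)
print(p95)  # 900
```

A single slow request dominates the tail here: the 95th percentile is 900 ms even though the median is around 15 ms, which is why this metric surfaces slow APIs long before the average latency moves.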
diff --git a/en/docs/api-analytics/viewing/introduction-to-api-analytics-dashboard.md b/en/docs/api-analytics/viewing/introduction-to-api-analytics-dashboard.md
deleted file mode 100644
index 0d98582118..0000000000
--- a/en/docs/api-analytics/viewing/introduction-to-api-analytics-dashboard.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Introduction to the API Analytics Dashboard
-
-Choreo Insights consists of several pages, which are organized by functional aspect (e.g., traffic, latency).
-You can use these pages to get complete business analytics for your API management system.
-
-For more information, see [Derive Insights](https://wso2.com/choreo/docs/insights/view-api-insights/).
-
diff --git a/en/docs/api-analytics/viewing/usecases/finding-faulty-apis.md b/en/docs/api-analytics/viewing/usecases/finding-faulty-apis.md
deleted file mode 100644
index 74e3899792..0000000000
--- a/en/docs/api-analytics/viewing/usecases/finding-faulty-apis.md
+++ /dev/null
@@ -1,2 +0,0 @@
-!!! Note
-    Content to be added. WIP.
\ No newline at end of file diff --git a/en/docs/assets/attachments/quick-start-guide/MI_QSG_HOME-JDK11.zip b/en/docs/assets/attachments/quick-start-guide/MI_QSG_HOME-JDK11.zip deleted file mode 100644 index 1aac68eb79..0000000000 Binary files a/en/docs/assets/attachments/quick-start-guide/MI_QSG_HOME-JDK11.zip and /dev/null differ diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/dependency-reduced-pom.xml b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/dependency-reduced-pom.xml deleted file mode 100644 index 6911f7d731..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/dependency-reduced-pom.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - msf4j-service - org.wso2.msf4j - 2.0.0 - ../pom.xml/pom.xml - - 4.0.0 - org.wso2 - Hospital-Service - WSO2 MSF4J Microservice - 2.0.0 - - org.wso2.service.hospital.Application - - - diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/pom.xml b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/pom.xml deleted file mode 100644 index 980b8d7b88..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/pom.xml +++ /dev/null @@ -1,70 +0,0 @@ - - - - - - org.wso2.msf4j - msf4j-service - 2.0.0 - - 4.0.0 - - org.wso2 - Hospital-Service - 2.0.0 - WSO2 MSF4J Microservice - - - - - javax.annotation - javax.annotation-api - 1.3.2 - - - - - jakarta.xml.bind - jakarta.xml.bind-api - 2.3.2 - - - - - org.glassfish.jaxb - jaxb-runtime - 2.3.2 - - - - com.googlecode.json-simple - json-simple - 1.1.1 - - - - - - org.wso2.service.hospital.Application - - - diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/Application.java 
b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/Application.java deleted file mode 100644 index 9a921dc11f..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/Application.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. - * - * WSO2 Inc. licenses this file to you under the Apache License, - * Version 2.0 (the "License"); you may not use this file except - * in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - * - */ - -package org.wso2.service.hospital; - -import org.wso2.msf4j.MicroservicesRunner; - -/** - * Application entry point. - */ -public class Application { - public static void main(String[] args) { - new MicroservicesRunner(9090, 9091) - .deploy(new GrandOakDoctorInfoService(), new PineValleyDoctorInfoService()).start(); - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/DoctorTypeBean.java b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/DoctorTypeBean.java deleted file mode 100644 index 1ea5f9ee09..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/DoctorTypeBean.java +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright (c) 2020, WSO2 Inc. 
(http://www.wso2.org) All Rights Reserved. - * - * WSO2 Inc. licenses this file to you under the Apache License, - * Version 2.0 (the "License"); you may not use this file except - * in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - * - */ - -package org.wso2.service.hospital; - -import javax.xml.bind.annotation.XmlRootElement; - -@XmlRootElement -public class DoctorTypeBean { - private String doctorType; - - public DoctorTypeBean(String doctorType) { - this.doctorType = doctorType; - } - - public String getDoctorType() { - return doctorType; - } - - public void setDoctorType(String doctorType) { - this.doctorType = doctorType; - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/GrandOakDoctorInfoService.java b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/GrandOakDoctorInfoService.java deleted file mode 100644 index f8c196d115..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/GrandOakDoctorInfoService.java +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. - * - * WSO2 Inc. licenses this file to you under the Apache License, - * Version 2.0 (the "License"); you may not use this file except - * in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - * - */ - -package org.wso2.service.hospital; - -import org.json.simple.JSONObject; - -import javax.ws.rs.GET; -import javax.ws.rs.Path; -import javax.ws.rs.PathParam; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.Response; - -@Path("/grandOak") -public class GrandOakDoctorInfoService extends HospitalService { - - @GET - @Path("/doctors/{doctorType}") - public Response getDoctorRecord(@PathParam("doctorType") String doctorType) { - JSONObject doctorByType = super.getGrandOakDoctor(doctorType); - if (doctorByType.isEmpty()) { - String msg = "No matching service found for path : /grandOak/doctors/" + doctorType; - return Response.status(Response.Status.OK).entity(msg).type(MediaType.APPLICATION_JSON).build(); - } - return Response.status(Response.Status.OK).entity(doctorByType).type(MediaType.APPLICATION_JSON).build(); - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/HospitalService.java b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/HospitalService.java deleted file mode 100644 index 688f30b14e..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/HospitalService.java +++ /dev/null @@ -1,171 +0,0 @@ -/* - * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. - * - * WSO2 Inc. 
licenses this file to you under the Apache License, - * Version 2.0 (the "License"); you may not use this file except - * in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - * - */ - -package org.wso2.service.hospital; - -import org.json.simple.JSONObject; -import org.json.simple.parser.ParseException; -import org.json.simple.parser.JSONParser; -import org.wso2.msf4j.Microservice; - -/** - * This is the Microservice resource class. - * See https://github.com/wso2/msf4j#getting-started - * for the usage of annotations. - */ -public class HospitalService implements Microservice { - - public JSONObject getGrandOakDoctor(String doctorType) { - - JSONObject jsonPayload = null; - String responsePayload = null; - JSONParser parser = new JSONParser(); - - if (doctorType.equalsIgnoreCase("Ophthalmologist")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"John Mathew\",\n" + - " \"time\": \"03:30 PM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Allan Silvester\",\n" + - " \"time\": \"04:30 PM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " }\n" + - " ]\n" + - " }\n" + - " }"; - } else if (doctorType.equalsIgnoreCase("Physician")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"Shane Martin\",\n" + - " \"time\": \"07:30 AM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Geln Ivan\",\n" + - " \"time\": \"08:30 AM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " }\n" + - " ]\n" + - " }\n" + 
- " }"; - } else if (doctorType.equalsIgnoreCase("Pediatrician")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"Bob Watson\",\n" + - " \"time\": \"05:30 PM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Paul Johnson\",\n" + - " \"time\": \"07:30 AM\",\n" + - " \"hospital\": \"Grand Oak\"\n" + - " }\n" + - " ]\n" + - " }\n" + - " }"; - - } - try { - if (responsePayload != null) { - jsonPayload = (JSONObject) parser.parse(responsePayload); - } - } catch (ParseException e) { - throw new RuntimeException("Error parsing response.", e); - } - return jsonPayload; - } - - public JSONObject getPineValleyDoctor(String doctorType) { - - JSONObject jsonPayload = null; - String responsePayload = null; - JSONParser parser = new JSONParser(); - - if (doctorType.equalsIgnoreCase("Ophthalmologist")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"John Mathew\",\n" + - " \"time\": \"07:30 AM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Roma Katherine\",\n" + - " \"time\": \"04:30 PM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " }\n" + - " ]\n" + - " }\n" + - " }"; - } else if (doctorType.equalsIgnoreCase("Physician")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"Geln Ivan\",\n" + - " \"time\": \"05:30 PM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Daniel Lewis\",\n" + - " \"time\": \"05:30 PM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " }\n" + - " ]\n" + - " }\n" + - " }"; - } else if (doctorType.equalsIgnoreCase("Pediatrician")) { - responsePayload = "{\n" + - " \"doctors\": {\n" + - " \"doctor\": [\n" + - " {\n" + - " \"name\": \"Bob Watson\",\n" + - " \"time\": \"07:30 AM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " },\n" + - " {\n" + - " \"name\": \"Wilson Mcdonald\",\n" + - " 
\"time\": \"07:30 AM\",\n" + - " \"hospital\": \"pineValley\"\n" + - " }\n" + - " ]\n" + - " }\n" + - " }"; - } - - try { - if (responsePayload != null) { - jsonPayload = (JSONObject) parser.parse(responsePayload); - } - } catch (ParseException e) { - throw new RuntimeException("Error parsing response.", e); - } - return jsonPayload; - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/PineValleyDoctorInfoService.java b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/PineValleyDoctorInfoService.java deleted file mode 100644 index 9233e70654..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServices-JDK11Proj/src/main/java/org/wso2/service/hospital/PineValleyDoctorInfoService.java +++ /dev/null @@ -1,46 +0,0 @@ -/* - * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. - * - * WSO2 Inc. licenses this file to you under the Apache License, - * Version 2.0 (the "License"); you may not use this file except - * in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- * - */ - -package org.wso2.service.hospital; - - -import org.json.simple.JSONObject; - -import javax.ws.rs.Consumes; -import javax.ws.rs.POST; -import javax.ws.rs.Path; -import javax.ws.rs.core.MediaType; -import javax.ws.rs.core.Response; - -@Path("/pineValley") -public class PineValleyDoctorInfoService extends HospitalService { - - @POST - @Consumes("application/json") - @Path("/doctors") - public Response getDoctorRecord(DoctorTypeBean doctorTypeBean) { - - JSONObject doctorByType = super.getPineValleyDoctor(doctorTypeBean.getDoctorType()); - if (doctorByType.isEmpty()) { - String msg = "No matching service found for path : /pineValley/doctors/" + doctorTypeBean.getDoctorType(); - return Response.status(Response.Status.OK).entity(msg).type(MediaType.APPLICATION_JSON).build(); - } - return Response.status(Response.Status.OK).entity(doctorByType).type(MediaType.APPLICATION_JSON).build(); - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.lock b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.lock deleted file mode 100644 index f57ec3ed29..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.lock +++ /dev/null @@ -1,4 +0,0 @@ -org_name = "isuru" -version = "0.1.0" -lockfile_version = "1.0.0" -ballerina_version = "1.0.1" diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.toml b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.toml deleted file mode 100644 index 794c321cb8..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/Ballerina.toml +++ /dev/null @@ -1,5 +0,0 @@ -[project] -org-name= "isuru" -version= "0.1.0" - -[dependencies] diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/Module.md 
b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/Module.md deleted file mode 100644 index 613e7db224..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/Module.md +++ /dev/null @@ -1,16 +0,0 @@ -# Grand Oak and Pine Valley Hospital Services - -Gives out available doctors for a given doctor type for Grand Oak hospital and Pine Valley hospital - -Available doctor types: -Pediatrician -Physician -Ophthalmologist - -## Example requests - -``curl -v http://localhost:9090/grandOak/doctors/Physician`` - -``curl -v http://localhost:9091/pineValley/doctors/Physician`` - - diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/grandOak.bal b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/grandOak.bal deleted file mode 100644 index eae4e3060c..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/grandOak.bal +++ /dev/null @@ -1,95 +0,0 @@ -// Copyright (c) 2019 WSO2 Inc. (http://www.wso2.org) All Rights Reserved. -// -// WSO2 Inc. licenses this file to you under the Apache License, -// Version 2.0 (the "License"); you may not use this file except -// in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, -// software distributed under the License is distributed on an -// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -// KIND, either express or implied. See the License for the -// specific language governing permissions and limitations -// under the License. 
- -import ballerina/http; -import ballerina/log; - -@http:ServiceConfig { - basePath: "/grandOak" -} -service grandOakService on new http:Listener(9090) { - - @http:ResourceConfig { - path: "/doctors/{doctorType}", - methods: ["GET"] - } - resource function doctors(http:Caller caller, http:Request request, string doctorType) returns error? { - - json responsePayload = {}; - if (doctorType == "Ophthalmologist" || doctorType == "ophthalmologist") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "John Mathew", - "time": "03:30 PM", - "hospital": "Grand Oak" - }, - { - "name": "Allan Silvester", - "time": "04:30 PM", - "hospital": "Grand Oak" - } - ] - } - }; - } else if (doctorType == "Physician" || doctorType == "physician") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "Shane Martin", - "time": "07:30 AM", - "hospital": "Grand Oak" - }, - { - "name": "Geln Ivan", - "time": "08:30 AM", - "hospital": "Grand Oak" - } - ] - } - }; - } else if (doctorType == "Pediatrician" || doctorType == "pediatrician") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "Bob Watson", - "time": "05:30 PM", - "hospital": "Grand Oak" - }, - { - "name": "Paul Johnson", - "time": "07:30 AM", - "hospital": "Grand Oak" - } - ] - } - }; - } else { - handleError(caller, "Invalid doctor category"); - return; - } - http:Response response = new; - response.setJsonPayload(responsePayload, "application/json"); - var result = caller->respond(response); - // Logs the `error` in case of a failure. 
- if (result is error) { - log:printError("Error sending response", err = result); - } - } -} \ No newline at end of file diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/pineValley.bal b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/pineValley.bal deleted file mode 100644 index a95fbb5c30..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/pineValley.bal +++ /dev/null @@ -1,101 +0,0 @@ -// Copyright (c) 2019 WSO2 Inc. (http://www.wso2.org) All Rights Reserved. -// -// WSO2 Inc. licenses this file to you under the Apache License, -// Version 2.0 (the "License"); you may not use this file except -// in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, -// software distributed under the License is distributed on an -// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -// KIND, either express or implied. See the License for the -// specific language governing permissions and limitations -// under the License. - -import ballerina/http; -import ballerina/log; - -@http:ServiceConfig { - basePath: "/pineValley" -} -service pineValleyService on new http:Listener(9091) { - - @http:ResourceConfig { - path: "/doctors", - methods: ["POST"] - } - resource function doctors(http:Caller caller, http:Request request) returns error? 
{ - - var requestPayload = request.getJsonPayload(); - if (requestPayload is json) { - string doctorType = requestPayload.doctorType.toString(); - json responsePayload = {}; - if (doctorType == "Ophthalmologist" || doctorType == "ophthalmologist") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "John Mathew", - "time": "07:30 AM", - "hospital": "pineValley" - }, - { - "name": "Roma Katherine", - "time": "04:30 PM", - "hospital": "pineValley" - } - ] - } - }; - } else if (doctorType == "Physician" || doctorType == "physician") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "Geln Ivan", - "time": "05:30 PM", - "hospital": "pineValley" - }, - { - "name": "Daniel Lewis", - "time": "05:30 PM", - "hospital": "pineValley" - } - ] - } - }; - } else if (doctorType == "Pediatrician" || doctorType == "pediatrician") { - responsePayload = { - "doctors": { - "doctor": [ - { - "name": "Bob Watson", - "time": "07:30 AM", - "hospital": "pineValley" - }, - { - "name": "Wilson Mcdonald", - "time": "07:30 AM", - "hospital": "pineValley" - } - ] - } - }; - } else { - handleError(caller, "Invalid doctor category"); - return; - } - http:Response response = new; - response.setJsonPayload(responsePayload, "application/json"); - var result = caller->respond(response); - // Logs the `error` in case of a failure. - if (result is error) { - log:printError("Error sending response", err = result); - } - } else { - handleError(caller, "Invalid request"); - } - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/utils.bal b/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/utils.bal deleted file mode 100644 index 5c935837df..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/backend-service/HospitalServicesProj/src/DoctorInfo/utils.bal +++ /dev/null @@ -1,34 +0,0 @@ -// Copyright (c) 2019 WSO2 Inc. 
(http://www.wso2.org) All Rights Reserved. -// -// WSO2 Inc. licenses this file to you under the Apache License, -// Version 2.0 (the "License"); you may not use this file except -// in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, -// software distributed under the License is distributed on an -// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -// KIND, either express or implied. See the License for the -// specific language governing permissions and limitations -// under the License. - -import ballerina/http; -import ballerina/log; - -function handleError(http:Caller caller, string errorMsg) { - http:Response response = new; - - json responsePayload = { - "error": { - "message": errorMsg - } - }; - - response.setJsonPayload(responsePayload, "application/json"); - var result = caller->respond(response); - if (result is error) { - log:printError("Error sending response", err = result); - } -} diff --git a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/artifact.xml b/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/artifact.xml deleted file mode 100644 index 0fc62936db..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/artifact.xml +++ /dev/null @@ -1,5 +0,0 @@ - - - src/main/synapse-config/api/HealthcareAPI.xml - - diff --git a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/pom.xml b/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/pom.xml deleted file mode 100644 index 773687d31a..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/pom.xml +++ /dev/null @@ -1,152 +0,0 @@ - - - 4.0.0 - com.mi.example.HealthcareConfigProject - HealthcareConfigProject - 1.0.0 - pom - 
HealthcareConfigProject - HealthcareConfigProject - - bpel/workflow=zip,lib/registry/filter=jar,webapp/jaxws=war,lib/library/bundle=jar,service/dataservice=dbs,synapse/local-entry=xml,synapse/proxy-service=xml,carbon/application=car,registry/resource=zip,lib/dataservice/validator=jar,synapse/endpoint=xml,web/application=war,lib/carbon/ui=jar,service/axis2=aar,synapse/sequence=xml,synapse/configuration=xml,wso2/gadget=dar,lib/registry/handlers=jar,lib/synapse/mediator=jar,synapse/task=xml,synapse/api=xml,synapse/template=xml,synapse/message-store=xml,synapse/message-processors=xml,synapse/inbound-endpoint=xml - false - - - - - true - daily - ignore - - wso2-nexus - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - - - - true - daily - ignore - - wso2-nexus - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - - target/capp - - - org.codehaus.mojo - exec-maven-plugin - 1.4.0 - true - - - package - package - - exec - - - mvn - ${project.build.directory} - - clean - package - -Dmaven.test.skip=${maven.test.skip} - - - - - install - install - - exec - - - mvn - ${project.build.directory} - - clean - install - -Dmaven.test.skip=${maven.test.skip} - - - - - deploy - deploy - - exec - - - mvn - ${project.build.directory} - - deploy - -Dmaven.test.skip=${maven.test.skip} - - - - - - - - maven-eclipse-plugin - 2.9 - - - - org.wso2.developerstudio.eclipse.esb.project.nature - - - - - org.wso2.maven - synapse-unit-test-maven-plugin - 5.2.10 - - - synapse-unit-test - test - - synapse-unit-test - - - - - - ${testServerType} - ${testServerHost} - ${testServerPort} - ${testServerPath} - - ${project.basedir}/test/${testFile} - ${maven.test.skip} - - - - org.wso2.maven - wso2-esb-api-plugin - 2.1.0 - true - - - api - process-resources - - pom-gen - - - . 
- ${artifact.types} - - - - - - - - diff --git a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/src/main/synapse-config/api/HealthcareAPI.xml b/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/src/main/synapse-config/api/HealthcareAPI.xml deleted file mode 100644 index 1970072b5a..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProject/src/main/synapse-config/api/HealthcareAPI.xml +++ /dev/null @@ -1,65 +0,0 @@ - - - - - - - - - - - - - - -1 - 1 - - - 0 - - - - - - - - - - { - "doctorType": "$1" - } - - - - - - - - - - - -1 - 1 - - - 0 - - - - - - - - - - - - - - - - - - - - diff --git a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProjectCompositeApplication/pom.xml b/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProjectCompositeApplication/pom.xml deleted file mode 100644 index 836e159f33..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/config-projects/HealthcareConfigProjectCompositeApplication/pom.xml +++ /dev/null @@ -1,101 +0,0 @@ - - - 4.0.0 - com.mi.example.HealthcareConfigProject - HealthcareConfigProjectCompositeApplication - 1.0.0 - carbon/application - HealthcareConfigProjectCompositeApplication - HealthcareConfigProjectCompositeApplication - - 
jaggery/app=zip,synapse/priority-executor=xml,synapse/inbound-endpoint=xml,service/rule=aar,synapse/message-store=xml,event/stream=json,service/meta=xml,datasource/datasource=xml,synapse/proxy-service=xml,bpel/workflow=zip,synapse/sequence=xml,synapse/endpointTemplate=xml,carbon/application=car,wso2/gadget=dar,synapse/api=xml,synapse/event-source=xml,synapse/message-processors=xml,event/receiver=xml,lib/dataservice/validator=jar,synapse/template=xml,synapse/endpoint=xml,lib/carbon/ui=jar,lib/synapse/mediator=jar,event/publisher=xml,synapse/local-entry=xml,synapse/task=xml,webapp/jaxws=war,registry/resource=zip,synapse/configuration=xml,service/axis2=aar,synapse/lib=zip,synapse/sequenceTemplate=xml,event/execution-plan=siddhiql,service/dataservice=dbs,web/application=war,lib/library/bundle=jar - - - - - true - daily - ignore - - wso2-nexus - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - wso2-maven2-repository-1 - http://dist.wso2.org/maven2 - - - wso2-nexus-repository-1 - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - - - - true - daily - ignore - - wso2-nexus - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - wso2-maven2-repository-1 - http://dist.wso2.org/maven2 - - - wso2-nexus-repository-1 - http://maven.wso2.org/nexus/content/groups/wso2-public/ - - - - - - maven-eclipse-plugin - 2.9 - - - - org.wso2.developerstudio.eclipse.distribution.project.nature - - - - - org.wso2.maven - maven-car-plugin - 2.1.1 - true - - - car - package - - car - - - - - - - org.wso2.maven - maven-car-deploy-plugin - 1.1.1 - true - - - - ${basedir}/src/main/resources/security/wso2carbon.jks - wso2carbon - JKS - https://localhost:9443 - admin - admin - deploy - - - - - - - diff --git a/en/docs/assets/attachments/quick-start-guide/mi-qsg-home.zip b/en/docs/assets/attachments/quick-start-guide/mi-qsg-home.zip deleted file mode 100644 index 503f261b20..0000000000 Binary files a/en/docs/assets/attachments/quick-start-guide/mi-qsg-home.zip and 
/dev/null differ diff --git a/en/docs/assets/attachments/quick-start-guide/sampledata.xml b/en/docs/assets/attachments/quick-start-guide/sampledata.xml deleted file mode 100644 index 2e7659c990..0000000000 --- a/en/docs/assets/attachments/quick-start-guide/sampledata.xml +++ /dev/null @@ -1,34 +0,0 @@ - - - Almond cookie - 100 - - - Baked alaska - 20 - - - Toffee - 40 - - - Baked alaska - 20 - - - Almond cookie - 70 - - - Baked alaska - 30 - - - Toffee - 60 - - - Baked alaska - 30 - - diff --git a/en/docs/consume/customizations/adding-internationalization.md b/en/docs/consume/customizations/adding-internationalization.md deleted file mode 100644 index 43cbba13a1..0000000000 --- a/en/docs/consume/customizations/adding-internationalization.md +++ /dev/null @@ -1,375 +0,0 @@ -# Adding Internationalization and Localization - -The API Manager includes two Web interfaces, namely the API Publisher and Developer Portal. The steps below explain how you can localize the API Publisher and the Developer Portal. - -## Changing the browser language - -!!! note - - The web applications are shipped with the following default and additional languages for demonstration and testing purposes. - - - - - - - - - - - - - - - - - - - - - -
| Web Application  | Default language | Additional languages     |
|------------------|------------------|--------------------------|
| Developer Portal | English          | Spanish, Arabic, Sinhala |
| API Publisher    | English          | Sinhala                  |
- - - If the language that you set in the browser settings is not a supported language of the API Publisher and/or the Developer Portal web application, "English" is set as the language by default in the web applications. - - - Therefore, if you need to change the language and the language is not supported, make sure to [add the language](#adding-a-new-language) first before changing the browser language. - - -Set your browser language to a preferred language by following the user guide that corresponds to your browser. - -For example, let's assume that you are using Google Chrome, and let's change the browser language to "Spanish". - -1. Navigate to the `chrome://settings/languages` URL in your browser. - - ![Chrome browser settings]({{base_path}}/assets/img/administer/chrome-set-language.png) - -2. Add the highest preference to "Spanish", so that "Spanish" moves to the top of the language list. - -3. Refresh the API Publisher and Developer Portal web apps. - - The text in the browser will be translated into Spanish. - - -## Adding a new language - -
!!! info
    All the text in the Developer Portal and the API Publisher is loaded from external JSON files. These JSON files are fetched asynchronously by the browser based on the browser locale. The locale files are available in the following locations:

    | Web Application  | Locale file location |
    |------------------|----------------------|
    | Publisher        | `<APIM_HOME>/repository/deployment/server/webapps/publisher/site/public/locales` |
    | Developer Portal | `<APIM_HOME>/repository/deployment/server/webapps/devportal/site/public/locales` |
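Because each translated locale file mirrors `en.json` key-for-key, a quick sanity check when preparing a new language is to diff the key sets. The sketch below assumes nothing about WSO2 internals; the inline objects stand in for the real (much larger) `en.json` and `fr.json` files:

```javascript
// Sketch: verify a translated locale file covers every key in the base
// en.json. The inline objects are stand-ins for the real locale files.
const en = {
  "Apis.Details.ApiConsole.ApiConsole.title": "Try Out",
  "Apis.Details.ApiConsole.SelectAppPanel.applications": "Applications"
};
const fr = {
  "Apis.Details.ApiConsole.ApiConsole.title": "Essayer"
};

function missingKeys(base, translated) {
  // Keys present in the base locale but absent from the translation.
  return Object.keys(base).filter((k) => !(k in translated));
}

console.log(missingKeys(en, fr));
// missing: ["Apis.Details.ApiConsole.SelectAppPanel.applications"]
```

Running such a check after step 3 confirms that no key was dropped while translating the values.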
- -Follow the instructions below to add a new language to the Developer Portal or the API Publisher. - -Let's add support for the French language to the Developer Portal. - -1. Identify the two-letter locale code for the language that you want to add to the Developer Portal. - - The locale code for the French language is `fr`. - -2. Make a copy of the `en.json` file and rename it based on the locale code. - - Rename the copy of the `/repository/deployment/server/webapps/devportal/src/main/webapp/site/public/locales/en.json` file to `fr.json`. - - !!! info - If you are setting the browser locale to a specific regional language, for example, French (Switzerland), the language with the regional code is `fr-ch`. In this scenario too the two letter locale code is `fr`, because WSO2 API Manager does not support regional language switching. - -3. Change all the values that correspond to the key-value pairs to the language that you want to add to the Developer Portal. - - The JSON file (`.json`) has key-value pairs as follows: - - ```js - "Apis.Details.ApiConsole.ApiConsole.title": "Try Out", - "Apis.Details.ApiConsole.SelectAppPanel.applications": "Appplications", - ``` - - 1. [Find the keys to modify](#finding-the-keys-to-modify). - - 2. Convert each of the values into French. - -### Finding the keys to modify - -Sometimes going through the list of keys and modifying each of the values that correspond to a specific language is not going to be enough. You may need to find a key that is responsible for a particular text in the UI. Let’s consider the following scenario. - -Let's find the key for the main title named **APIs** in the following screen. - -![Main title highlighted in the Developer Portal]({{base_path}}/assets/img/administer/find-key-01.png) - -
!!! note "Prerequisites"
    - Google Chrome web browser.
    - The React Developer Tools extension for Google Chrome.
    - - -1. Right-click over the title named **APIs** and select **Inspect Element**. - - ![Right click menu]({{base_path}}/assets/img/administer/find-key-02.png) - - The Chrome Developer Tools will open. - -2. Click on **Components** and copy the ID of the text component. - - ![Inspect element window]({{base_path}}/assets/img/administer/find-key-03.png) - - -## Changing the layout direction - -WSO2 API Manager has the capability of direction change for the **Developer Portal** web application. This feature enables you to entirely change the default direction of the UI from LTR (Left To Right) to RTL (Right to Left). This is required if you are trying to add support for languages such as Arabic. - -Follow the instructions below to change the direction of the UI: - -1. Add the specific configuration in the `defaultTheme.js` file. - - Add the following configuration to change the page direction to RTL (Right To Left). - - !!! note - If you have already done customizations to the default theme, make sure to merge the following with the existing changes carefully. - - ```js - const Configurations = { - direction: 'rtl', - }; - ``` - -2. Reload the Developer Portal to view the changes. - -!!! info - If you have done the theme changes for the instance via the `/repository/deployment/server/webapps/devportal/src/main/webapp/site/public/theme/defaultTheme.js` file the above configuration is valid. However, if it is the tenant theme file (`defaultTheme.js`) the variable assignment is not required and the `defaultTheme.js` file has to be a valid JSON file. For example, the valid configuration that should go into the `defaultTheme.js` file to change the page direction to RTL (Right To Left) is as follows: - - ```js - { - "direction": "rtl", - } - ``` - - !!! tip - Learn more about [Tenant theming]({{base_path}}/consume/customizations/customizing-the-developer-portal/overriding-developer-portal-theme/#uploading-via-the-admin-portal-tenants-only). 
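The locale resolution behavior described earlier (a regional code such as `fr-ch` collapses to its two-letter code `fr`, and any unsupported locale falls back to English) can be sketched as a small lookup. This is an illustrative model, not WSO2 code:

```javascript
// Sketch of the browser-locale fallback described above: a regional
// locale such as "fr-CH" is reduced to its two-letter code, and anything
// outside the supported set falls back to "en".
function resolveLocale(browserLocale, supported = ["en", "es", "ar", "si"]) {
  const twoLetter = browserLocale.toLowerCase().split("-")[0];
  return supported.includes(twoLetter) ? twoLetter : "en";
}

console.log(resolveLocale("fr-CH")); // "en" unless "fr" has been added
console.log(resolveLocale("es"));    // "es"
```

Adding a new language (e.g., `fr`) amounts to extending the supported set, after which `fr-CH` would resolve to `fr`.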
- - -## Enabling the language switch - -WSO2 API Manager has the capability of language switching for the **Developer Portal** web application. - -!!! note - - When you switch between languages via the language switch, it will take precedence over the browser locale. - - If you do not select a language and - - - If the **browser locale exists** in the list of languages given in the language switch, the browser locale will be automatically selected from the list of available languages. - - If the **browser locale does not exist** in the list of languages, then "English" will get automatically set - as the language switch. - - When you enable the language switch, the direction of each language will take precedence over the root level direction. - -Follow the instructions below to enable the language switch: - -1. Open the `/repository/deployment/server/webapps/devportal/src/main/webapp/site/public/theme/defaultTheme.js` file. - -2. Add the following configuration to the file to enable the language switch. - - !!! note - If you have already done customizations to the default theme, make sure to merge the following with the existing changes carefully. - - ```js - const Configurations = { - custom: { - languageSwitch: { - active: true, - } - } - }; - ``` - -3. Optionally, add a language. - - !!! note - This is only applicable if you have added a [new language to the respective web application](#adding-a-new-language), which is not a [language that is available by default](#languages-available-by-default). - - ```js - const DefaultConfigurations = { - direction: 'ltr', - custom: { - languageSwitch: { - active: true, - languages: [ - { - key: 'fn', - image: '/site/public/images/flags/fn.png', - imageWidth: 24, // in pixles - text: 'French', - direction: 'ltr', - } - ] - } - } - } - ``` - -4. Customize the language switch if required. - - The following are the additional parameters that are available to customize the language switch. 
- - | Key | value | - | --- | ----- | - | `showFlag` | default set to `true`. Setting the value to `false` will hide the flag and display only the text. | - | `showText` | default set to `true`. Setting the value to `false` will hide the text and display only the flag. | - | `minWidth` | Sets the width of the whole element. The default is set to 60 pixels. | - - ```js - const DefaultConfigurations = { - direction: 'ltr', - custom: { - languageSwitch: { - active: false, - languages: [ - { - key: 'fn', - image: '/site/public/images/flags/fn.png', - imageWidth: 24, // in pixles - text: 'French', - direction: 'ltr', - } - ], - showFlag: true, - showText: true, - minWidth: 60, // Width of the language switch in pixles - } - } - } - ``` - -3. Reload the Developer Portal to view the changes. - - Now, a switch will be displayed in the top menu to change the language. - - ![Switch language option]({{base_path}}/assets/img/administer/find-key-04.png) - - -### Languages available by default - -The following are the languages available by default. - -```js -languages: [ - { - key: 'en', - image: '/site/public/images/flags/en.png', - imageWidth: 24, // in pixles - text: 'English', - direction: 'ltr', - }, - { - key: 'es', - image: '/site/public/images/flags/sp.png', - imageWidth: 24, // in pixles - text: 'Spanish', - direction: 'ltr', - }, - { - key: 'ar', - image: '/site/public/images/flags/ar.png', - imageWidth: 24, // in pixles - text: 'Arabic', - direction: 'rtl', - }, - { - key: 'si', - image: '/site/public/images/flags/si.png', - imageWidth: 24, // in pixles - text: 'Sinhala', - direction: 'ltr', - } -] -``` - -## Complete configuration related to localization - -The following is the complete configuration related to localization. 
- -```js -const DefaultConfigurations = { - direction: 'ltr', - custom: { - languageSwitch: { - active: false, - languages: [ - { - key: 'en', - image: '/site/public/images/flags/en.png', - imageWidth: 24, // in pixles - text: 'English', - direction: 'ltr', - }, - { - key: 'es', - image: '/site/public/images/flags/sp.png', - imageWidth: 24, // in pixles - text: 'Spanish', - direction: 'ltr', - }, - { - key: 'ar', - image: '/site/public/images/flags/ar.png', - imageWidth: 24, // in pixles - text: 'Arabic', - direction: 'rtl', - }, - { - key: 'si', - image: '/site/public/images/flags/si.png', - imageWidth: 24, // in pixles - text: 'Sinhala', - direction: 'ltr', - } - ], - showFlag: true, - showText: true, - minWidth: 60, // Width of the language switch in pixles - } - } -} -``` - -## Advanced concepts - -The following document describes how i18n is implemented in the API Publisher and the Developer Portal web applications, how you can auto-generate the language file, and how to programmatically convert the locale file from one language to any other language. - -[How internationalization (i18n) works in WSO2 API Manager React Apps](https://github.com/wso2/carbon-apimgt/wiki/How-internationalization-(i18n)-works-in-API-Manager-React-Apps) diff --git a/en/docs/consume/manage-application/advanced-topics/adding-an-application-creation-workflow-using-bps.md b/en/docs/consume/manage-application/advanced-topics/adding-an-application-creation-workflow-using-bps.md deleted file mode 100644 index a665b33c4e..0000000000 --- a/en/docs/consume/manage-application/advanced-topics/adding-an-application-creation-workflow-using-bps.md +++ /dev/null @@ -1,212 +0,0 @@ -# Adding an Application Creation Workflow - -This section explains as to how you can attach a custom workflow to the application creation operation in WSO2 API Manager (WSO2 API-M). - -Attaching a custom workflow to application creation allows you to control the creation of applications within the Developer Portal. 
An application is the entity that holds a set of subscribed APIs that would be accessed by an authorization key specified for that particular application. Therefore, controlling the creation of these applications would be a decision based on the organization's requirements. - -Example use cases: - -- Review the information that corresponds to an application by a specific reviewer before the application is created. -- The application creation would be offered as a paid service. -- The application creation should be allowed only to users who are in a specific role. - -!!! tip - **Before you begin**, if you have changed the API Manager's default user and role, make sure you do the following changes: - - - Change the credentials of the workflow configurations in the `_system/governance/apimgt/applicationdata/workflow-extensions.xml` registry resource. - - Point the database that has the API Manager user permissions to BPS. - - Share any LDAPs, if any exist. - - Unzip the `/business-processes/application-creation/HumanTask/ApplicationsApprovalTask-1.0.0.zip` file, update the role as follows in the `ApplicationsApprovalTask.ht` file, and ZIP the `ApplicationsApprovalTask-1.0.0` folder. - - **Format** - - ``` java -   - [new-role-name] -   - ``` - -## Step 1 - Configure the Business Process Server - -1. Download [WSO2 Enterprise Integrator](https://wso2.com/integration/previous-releases/) version 6.5.0 by selecting the version from the dropdown. -2. Set an offset of 2 to the default BPS port in the `/wso2/business-process/conf/carbon.xml` file. - - This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset/). - - ``` java - 2 - ``` - - !!! 
tip - - If you run WSO2 API-M and WSO2 EI on different machines, set the `hostname` to a different value than `localhost`. - - If you change the BPS port **offset to a value other than 2 or run WSO2 API-M and WSO2 EI on different machines**, you need to search and replace the value 9765 in all the files (`.epr`) inside the `/business-processes` directory with the new port (i.e., the value of 9763 + ``). - - -3. Open the `/wso2/business-process/conf/humantask.xml` file and `/wso2/business-process/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. - - ``` java - true - ``` - -4. Copy the following from the `/business-processes/epr` directory to the `/wso2/business-process/repository/conf/epr` directory. - - !!! note - - If the `/wso2/business-process/repository/conf/epr` directory does not exist, create it. - - Make sure to give the correct credentials in the `/wso2/business-process/repository/conf/epr` files. - - - - Update the `/business-processes/epr/ApplicationCallbackService.epr` file based on WSO2 API Manager. - ``` java - https://localhost:8243/services/WorkflowCallbackService - ``` - - - Update the `/business-processes/epr/ApplicationService.epr` file according to EI. - ``` java - http://localhost:9765/services/ApplicationService - ``` - -5. Start the EI server and sign in to the Management Console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when logging in to the Management Console due to a compression issue that exists in the High Sierra SDK. 
- - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/wso2/business-process/conf/tomcat/catalina-server.xml` file and change the `compression="on"` to `compression="off"` in the Connector configuration, and restart the EI server. - - -6. Add a workflow. - 1. Click **BPEL** under **Processes**. - 2. Upload the `/business-processes/application-creation/BPEL/ApplicationApprovalWorkFlowProcess_1.0.0.zip` file to EI.  - - This is the business process archive file. - [![Upload BPEL archive file]({{base_path}}/assets/img/learn/add-application-wf-bpel.png)]({{base_path}}/assets/img/learn/add-application-wf-bpel.png) - -7. Click **Main** --> **Human Tasks** --> **Add** and upload the `/business-processes/application-creation/HumanTask/ApplicationsApprovalTask-1.0.0.zip` file to EI.  - - This is the human task archived file. - - [![Add human task package]({{base_path}}/assets/img/learn/add-application-wf-humantask.png)]({{base_path}}/assets/img/learn/add-application-wf-humantask.png) - -!!! tip - **Before you begin**, if you have changed the API Manager's default user and role, make sure you do the following changes: - - - Change the credentials of the workflow configurations in the following registry resource: `_system/governance/apimgt/applicationdata/workflow-extensions.xml`. - - Point the database that has the API Manager user permissions to BPS. - - Share any LDAPs, if any exist. - - Unzip the `/business-processes/application-creation/HumanTask/ApplicationsApprovalTask-1.0.0.zip` file, update the role as follows in the `ApplicationsApprovalTask.ht` file, and ZIP the ApplicationsApprovalTask-1.0.0 folder. 
- - **Format** - - ``` java -   - [new-role-name] -   - ``` - -## Step 2 - Configure WSO2 API Manager - -Open the `/repository/deployment/server/webapps/admin/src/main/webapp/site/conf/site.json` file and configure `workFlowServerURL` under `workflows` to point to the BPS server. - -**Example** -``` java -"workFlowServerURL": "https://localhost:9445/services/" -``` - -!!! note - When enabling the workflow, make sure to **import the certificate** of WSO2 API Manager into the client-truststore of the EI server and also import the certificate of EI into the client-truststore of API Manager. - - Paths to the directory containing the client-truststore of each product are as follows: - - 1. API-M - '/repository/resources/security' - 2. EI - '/wso2/business-process/repository/resources/security' - -## Step 3 - Engage the WS Workflow Executor in the API Manager - -First, enable the application creation workflow. - -1. Sign in to WSO2 API-M Management Console (`https://:9443/carbon`). - -2. Click **Main** --> **Resources** --> **Browse**. - - - -2. Go to the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource, disable the Simple Workflow Executor, and enable **WS Workflow Executor**. In addition, specify the service endpoint where the workflow engine is hosted and the credentials required to access the said service via basic authentication (i.e., username/password based authentication). - - ``` xml - - - http://localhost:9765/services/ApplicationApprovalWorkFlowProcess/ - admin - admin - https://localhost:8243/services/WorkflowCallbackService - - - ``` - - !!! tip - All the workflow process services of the EI run on port 9765 because you changed its default port (9763) with an offset of 2. - - - The application creation WS Workflow Executor is now engaged. - - -3. Create an application via the Developer Portal. - - 1. Sign in to the Developer Portal. - - (`https://localhost:9443/devportal`) - - 2. 
Click **Applications** and create a new application. - - This invokes the application creation process and creates a Human Task instance that holds the execution of the BPEL process until some action is performed on it. - - Note that the **Status** field of the application states **INACTIVE (Waiting for approval)** if the BPEL is invoked correctly, indicating that the request is successfully submitted. - - [![Application status is INACTIVE - Waiting for approval]({{base_path}}/assets/img/learn/add-application-wf-inactive.png) ]({{base_path}}/assets/img/learn/add-application-wf-inactive.png) - -4. Sign in to the Admin Portal (`https://localhost:9443/admin`), list all the tasks for application creation and approve the task. - - It resumes the BPEL process and completes the application creation. - - [![Approve tasks]({{base_path}}/assets/img/learn/add-application-wf-approve.png)]({{base_path}}/assets/img/learn/add-application-wf-approve.png) - - -5. Go back to the **Applications** page in the WSO2 Developer Portal and see the created application. - - Whenever a user tries to create an application in the Developer Portal, a request is sent to the workflow endpoint. A sample is shown below: - - ``` xml - - - - - application1 - Gold - http://webapp/url - Application 1 - wso2.com - user1 - c0aad878-278c-4439-8d7e-712ee71d3f1c - https://localhost:8243/services/WorkflowCallbackService - - - - ``` - - Elements of the above configuration are described below: - - | Element | Description | - |-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - | applicationName | Name of the application the user creates. | - | applicationTier | Throttling tier of the application. 
| - | applicationCallbackUrl | When the OAuth2 Authorization Code grant type is applied, this is the endpoint on which the callback needs to happen after the user is authenticated. This is an attribute of the actual application registered on the Developer Portal. | - | applicationDescription | Description of the application | - | tenantDomain | Tenant domain associated with the application (domain of the user creating the application). | - | userName | Username of the user creating the application. | - | `workflowExternalRef` | The unique reference against which a workflow is tracked. This needs to be sent back from the workflow engine to the API Manager at the time of workflow completion. | - | callBackURL | This property is configured in the `` element in the `workflow-extensions.xml` registry file. | - - diff --git a/en/docs/consume/manage-application/advanced-topics/adding-an-application-deletion-workflow.md b/en/docs/consume/manage-application/advanced-topics/adding-an-application-deletion-workflow.md deleted file mode 100644 index f41e244156..0000000000 --- a/en/docs/consume/manage-application/advanced-topics/adding-an-application-deletion-workflow.md +++ /dev/null @@ -1,47 +0,0 @@ -# Adding an Application Deletion Workflow - -Attaching a custom workflow to application deletion, enables an admin to approve/reject application deletion requests made for existing applications. Note that only an admin is able to approve/reject an application deletion request. - -After application deletion workflow is enabled, when an application deletion request is made, the application workflow status is changed to the `DELETE PENDING` state. In this state, a consumer can still use the application to subscribe to APIs, generate production and sandbox keys until the application deletion is approved. Once the application deletion request is approved the application will be deleted. - -### Engaging the Approval Workflow Executor in the API Manager - -1. 
Sign in to the API Manager Management Console (`https://:9443/carbon`) and go to **Browse** under **Registry**. - - [![Workflow Extensions Browse]({{base_path}}/assets/img/learn/navigate-main-resources.png)]({{base_path}}/assets/img/learn/navigate-main-resources.png) - - -2. Open the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource and click **Edit as text**. Disable the `ApplicationDeletionSimpleWorkflowExecutor` and enable `ApplicationDeletionApprovalWorkflowExecutor`. - ``` - - ... - - - ... - - ``` - - The application deletion approval workflow executor is now engaged. - - -3. Sign in to the WSO2 API Developer Portal (`https://:/devportal`) and click **Applications**. - - [![Applications Listing Tab]({{base_path}}/assets/img/learn/application-listing.png)]({{base_path}}/assets/img/learn/application-listing.png) - - -4. Click the **Delete** icon under **Actions** column to open the **Delete Application** popup to delete the desired application. Confirm the delete request by clicking the **Delete** button. - - [![Application Delete Tab]({{base_path}}/assets/img/learn/application-delete.png)]({{base_path}}/assets/img/learn/application-delete.png) - - -5. You will see the workflow status as **DELETE PENDING**. - - [![Application Delete Before Approval]({{base_path}}/assets/img/learn/application-delete-before-approval.png)]({{base_path}}/assets/img/learn/application-delete-before-approval.png) - -6. Sign in to the Admin Portal (`https://:9443/admin`), list all the tasks for Application delete from **Tasks** --> **Application Deletion** and click on approve (or reject) to approve (or reject) the workflow pending request. - - [![Application Delete Admin]({{base_path}}/assets/img/learn/application-delete-admin-entry.png)]({{base_path}}/assets/img/learn/application-delete-admin-entry.png) - -7. After approving go back to the API Developer Portal Application listing page. The application will be removed. 
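The approval flow above amounts to a small state machine: a delete request moves the application to `DELETE PENDING`, and the admin's decision either removes it or returns it to service. The following is an illustrative model of those transitions only; the function names are hypothetical and not part of any WSO2 API:

```javascript
// Illustrative model of the application deletion workflow states described
// above; the function names are invented for the sketch.
function requestDeletion(app) {
  // A delete request parks the application in the pending state.
  return { ...app, status: "DELETE PENDING" };
}

function adminDecision(app, approved) {
  if (app.status !== "DELETE PENDING") {
    throw new Error("No pending deletion request for this application");
  }
  // Approval deletes the application; rejection restores it to active use.
  return approved ? null : { ...app, status: "ACTIVE" };
}

const app = requestDeletion({ name: "DefaultApplication", status: "ACTIVE" });
console.log(app.status); // "DELETE PENDING"
```

Note that while in `DELETE PENDING`, the real Developer Portal still lets the consumer subscribe to APIs and generate keys; the model above only tracks the workflow status itself.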
- - diff --git a/en/docs/consume/manage-application/advanced-topics/adding-an-application-key-generation-workflow-using-bps.md b/en/docs/consume/manage-application/advanced-topics/adding-an-application-key-generation-workflow-using-bps.md deleted file mode 100644 index c0c418c313..0000000000 --- a/en/docs/consume/manage-application/advanced-topics/adding-an-application-key-generation-workflow-using-bps.md +++ /dev/null @@ -1,279 +0,0 @@ -# Adding an Application Key Generation Workflow - -This section explains how to attach a custom workflow to the **application registration** operation in the API Manager. - -[Application creation]({{base_path}}/consume/manage-application/advanced-topics/adding-an-application-creation-workflow) and **Application registration** are different workflows. After an application is created, you can subscribe to available APIs, but you get the consumer key/secret and access tokens only after registering the application. There are two types of registration for an application: production and sandbox. The following are the situations in which you need to change the default application registration workflow: - -- To only issue sandbox keys when creating production keys is deferred until testing is complete. - To restrict untrusted applications from creating production keys. You allow only the creation of sandbox keys. - To make API subscribers go through an approval process before creating any type of access token. - -!!! tip - **Before you begin**, if you have changed the API Manager's default user and role, make sure you do the following changes: - - 1. Change the credentials of the workflow configurations in the registry resource `_system/governance/apimgt/applicationdata/workflow-extensions.xml`. - - a. Sign in to the Management Console of WSO2 API Manager in . - - b. Click **Main** --> **Resources** --> **Browse**. - - c. 
Go to the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` location in the registry browser. - - d. Click **Edit as text** to open the `workflow-extensions.xml` file. - - [![Edit view of workflow-extensions file]({{base_path}}/assets/img/learn/application-registration-wf-config.png)]({{base_path}}/assets/img/learn/application-registration-wf-config.png) - - e. Uncomment the following two sections and update the credentials to match the API Manager user credentials that you changed. - - !!! warning - It is assumed in the following configuration that WSO2 EI is running with offset 2. If you are running WSO2 EI with a different offset, change the ports that correspond to the **serviceEndpoint** properties in the following configuration according to the changed port offset. - - ``` java - - http://localhost:9765/services/ApplicationRegistrationWorkFlowProcess/ - admin - admin - https://localhost:8248/services/WorkflowCallbackService - - - http://localhost:9765/services/ApplicationRegistrationWorkFlowProcess/ - admin - admin - https://localhost:8248/services/WorkflowCallbackService - - ``` - - !!! note - Make sure to comment out the existing `ProductionApplicationRegistration` and `SandboxApplicationRegistration` executors as shown below. - - ``` java - - - ``` - - - 2. Point the database that has the API Manager user permissions to EI. - - In this step you need to share the user store database in WSO2 API Manager with WSO2 EI. - - a. Copy the following datasource configuration into the `/repository/conf/datasources/master-datasources.xml` file: - ``` java - WSO2UM_DB - The datasource used by user manager - - jdbc/WSO2UM_DB - - - - jdbc:mysql://userdb.mysql-wso2.com:3306/userdb?autoReconnect=true - user - password - com.mysql.jdbc.Driver - 50 - 60000 - true - SELECT 1 - 30000 - - - - ``` - - - !!! note - MySQL is used to configure the datasources in this documentation. You can configure this based on the database that you are using.
For more information, see the [Working with Database]({{base_path}}/install-and-setup/setting-up-databases/overview/). - - - b. Change the datasource to point to the WSO2UM\_DB by changing the realm configuration in the `/repository/conf/user-mgt.xml` file as shown below. - ``` java - - - - .... - jdbc/WSO2UM_DB - - .... - - - ``` - - c. Do the configuration described in (a) and (b) in the `/wso2/business-process/conf/datasources/master-datasources.xml` and in the `/wso2/business-process/conf/user-mgt.xml` file respectively. - - 3. Share any LDAPs, if they exist. - 4. Unzip the `/business-processes/application-registration/HumanTask/ApplicationRegistrationTask-1.0.0.zip` file, update the role as follows in the `ApplicationRegistrationTask.ht` file, and ZIP the `ApplicationRegistrationTask-1.0.0` folder. - - **Format** - - ``` java - - [new-role-name] - - ``` - - 5. Restart the WSO2 API Manager server. - - -## Step 1 - Configure the Business Process server - -1. Download [WSO2 Enterprise Integrator](https://wso2.com/enterprise-integrator/6.5.0). -2. Set an offset of 2 to the default EI port in the `/wso2/business-process/conf/carbon.xml` file. This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset/). - - ``` java - 2 - ``` - - !!! tip - - If you run the API Manager and EI on different machines, set the `hostname` to a different value than `localhost`. - - If you change the EI **port offset to a value other than 2 or run the API Manager and EI on different machines**, do the following: - - Search and replace the value 9765 in all the files (.epr) inside the `/business-processes` folder with the new port (9763 + port offset). - - -3. 
Open the `/wso2/business-process/conf/humantask.xml` file and the `/wso2/business-process/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. - - ``` xml - true - ``` - -4. Copy the following from the `/business-processes/epr` folder to the `/wso2/business-process/conf/epr` folder. - - !!! note - - If the `/wso2/business-process/conf/epr` folder does not exist, create it. - - - Make sure you give the correct credentials in the `/wso2/business-process/conf/epr` files. - - - - Update the `/business-processes/epr/RegistrationCallbackService.epr` file according to API Manager. - - ``` java - https://localhost:8243/services/WorkflowCallbackService - ``` - - - Update the `/business-processes/epr/RegistrationService.epr` file according to EI. - - ``` java - http://localhost:9765/services/ApplicationRegistration - ``` - -5. Start the EI server and sign in to its Management Console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when you sign in to the Management console due to a compression issue that exists in High Sierra SDK. - - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/repository/conf/tomcat/catalina-server.xml` file, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the EI. - -6. Sign in to the Management console of WSO2 EI. - -7. Click **Main** --> **Processes** --> **Add** --> **BPEL** and upload the `/business-processes/application-registration/BPEL/ApplicationRegistrationWorkflowProcess_1.0.0.zip` file to EI. This is the business process archive file.
- - [![Upload BPEL package]({{base_path}}/assets/img/learn/add-registration-wf-bpel.png)]({{base_path}}/assets/img/learn/add-registration-wf-bpel.png) - -8. Click **Main** --> **Processes** --> **Human Tasks** --> **Add** and upload the `/business-processes/application-registration/HumanTask/ApplicationRegistrationTask-1.0.0.zip` file to EI. - - This is the human task archive file. - - [![Add the human task archive file]({{base_path}}/assets/img/learn/add-registration-wf-humantask.png)]({{base_path}}/assets/img/learn/add-application-wf-humantask.png) - -## Step 2 - Configure WSO2 API Manager - -Open the `/repository/deployment/server/webapps/admin/src/main/webapp/site/conf/site.json` file and configure the value for `workFlowServerURL` under the `workflows` section to point to the EI/BPS server (e.g., `"workFlowServerURL": "https://localhost:9445/services/"`). - -``` java -{ - ..... - "context": "/admin", - "request_url": "READ_FROM_REQUEST", - "tasksPerPage": 10, - "allowedPermission": "/permission/admin/manage/apim_admin", - "workflows": { - "workFlowServerURL": "https://localhost:9445/services/", - } - ..... -} -``` - -## Step 3 - Engage the WS Workflow executor in the API Manager - -First, enable the application registration workflow. - -1. Start WSO2 API Manager and sign in to the APIM management console (`https://:9443/carbon`). - -2. Click **Main** --> **Resources** --> **Browse**. - - - -3. Go to the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource, disable the Simple Workflow Executor and enable the WS Workflow Executor as described in the tip provided at the start of this documentation if you haven't done so already. - - ``` xml - - ... - - http://localhost:9765/services/ApplicationRegistrationWorkFlowProcess/ - admin - admin - https://localhost:8248/services/WorkflowCallbackService - - ...   
- - http://localhost:9765/services/ApplicationRegistrationWorkFlowProcess/ - admin - admin - https://localhost:8248/services/WorkflowCallbackService - - ... - - ``` - - !!! tip - **Note** that all workflow process services of the EI/BPS run on port 9765 because you changed its default port (9763) with an offset of 2. - - -3. Sign in to the API Developer Portal () as a Developer Portal user and open the application with which you subscribed to the API. - - !!! note - If you do not have an API already created and an Application subscribed to it, follow [Create a REST API]({{base_path}}/design/create-api/create-rest-api/create-a-rest-api/), [Publish an API]({{base_path}}/deploy-and-publish/publish-on-dev-portal/publish-an-api/), and [Subscribe to an API]({{base_path}}/consume/manage-subscription/subscribe-to-an-api) to create an API and subscribe to it. - - -4. Click **Applications**, **Production Keys**, and **Generate Keys**. - - It invokes the `ApplicationRegistrationWorkFlowProcess.bpel` that is bundled with the `ApplicationRegistrationWorkflowProcess_1.0.0.zip` file and creates a HumanTask instance that holds the execution of the BPEL process until some action is performed on it. - - [ ![Generate keys for an application]({{base_path}}/assets/img/learn/add-registration-wf-generate-keys.png) ]({{base_path}}/assets/img/learn/add-registration-wf-generate-keys.png) - - - Note that a message appears saying that the request was successfully submitted when the BPEL is invoked correctly. - -5. Sign in to the Admin Portal (`https://:9443/admin`) with admin credentials and list all the tasks for application registration. - -6. Click **Start** to start the Human Task and then change its state.  - -7. Click **Approve** and **Complete** to complete the task. - - This resumes the BPEL process and completes the registration. - - [![]({{base_path}}/assets/img/learn/add-registration-wf-approval.png)]({{base_path}}/assets/img/learn/add-registration-wf-approval.png) - -8. 
Navigate back to the API Developer Portal and view your application. - - It shows the application access token, consumer key, and consumer secret. - - - -After the registration request is approved, the keys are generated by invoking the `APIKeyMgtSubscriber` service hosted in Key Manager nodes. Even when the request is approved, the key generation can fail if this service becomes unavailable. To address such failures, you can configure the system to trigger key generation again once the Key Manager nodes become available. Given below is the message used to invoke the BPEL process: - ``` xml - - NewApp5 - Unlimited - - - carbon.super - admin - 4a20749b-a10d-4fa5-819b-4fae5f57ffaf - https://localhost:8243/services/WorkflowCallbackService - PRODUCTION - - ``` \ No newline at end of file diff --git a/en/docs/consume/manage-application/grant-type.md b/en/docs/consume/manage-application/grant-type.md deleted file mode 100644 index b9899ae51a..0000000000 --- a/en/docs/consume/manage-application/grant-type.md +++ /dev/null @@ -1,5 +0,0 @@ -# Grant Type - -Include details of the business use case. - -E.g., CC vs. Password, and when they should be used. Talk about the revoke token use case when describing the Revoke Token grant type. diff --git a/en/docs/consume/manage-subscription/advanced-topics/adding-an-api-subscription-workflow-using-bps.md b/en/docs/consume/manage-subscription/advanced-topics/adding-an-api-subscription-workflow-using-bps.md deleted file mode 100644 index ad878917a4..0000000000 --- a/en/docs/consume/manage-subscription/advanced-topics/adding-an-api-subscription-workflow-using-bps.md +++ /dev/null @@ -1,228 +0,0 @@ -# Adding an API Subscription Workflow - -This section explains how to attach a custom workflow to the API subscription operation in the API Manager. 
First, see [Workflow Extensions]({{base_path}}/reference/customize-product/extending-api-manager/extending-workflows/customizing-a-workflow-extension/) for information on the different types of workflow executors. - -Attaching a custom workflow to API subscription enables you to add throttling tiers to an API that consumers cannot choose at the time of subscribing. Only admins can assign these tiers to APIs. To get access to an API, a consumer has to subscribe to it through an application. However, when the API subscription workflow is enabled, a new subscription is initially in the `On Hold` state, and the consumer cannot use the API with its production or sandbox keys until the subscription is approved. - -!!! Note - You will only need to configure either **WSO2 EI** or **WSO2 BPS**. The WSO2 API Manager configuration is common to both. - -## Configuring WSO2 EI - -!!! tip - **Before you begin**, if you have changed the API Manager's default user and role, make sure you do the following changes: - - Point the database that has the API Manager user permissions to EI. - - Share any LDAPs, if any exist. - - Unzip the `/business-processes/subscription-creation/HumanTask/SubscriptionsApprovalTask-1.0.0.zip` file, update the role as follows in the `SubscriptionsApprovalTask.ht` file, and ZIP the `SubscriptionsApprovalTask-1.0.0` folder. - - ``` xml - - [new-role-name] - - ``` - -1. Download [WSO2 Enterprise Integrator](https://wso2.com/integration). -2. Set an offset of 2 to the default EI port in the `/conf/carbon.xml` file. This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset](https://docs.wso2.com/display/AM260/Changing+the+Default+Ports+with+Offset). - - ``` xml - 2 - ``` - - !!! 
tip - **Tip**: If you change the EI **port offset to a value other than 2 or run the API Manager and EI on different machines** (and therefore want to set the `hostname` to a different value than `localhost`), do the following: - - - Search and replace the value 9765 in all the files (.epr) inside the `/business-processes` folder with the new port (9763 + port offset). - - -3. Open the `/wso2/business-process/conf/humantask.xml` file and the `/wso2/business-process/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. - - ``` xml - true - ``` - -4. Copy the following from the `/business-processes/epr` folder to the `/wso2/business-process/repository/conf/epr` folder. If the `/wso2/business-process/repository/conf/epr` folder does not exist, create it. - - !!! note - Make sure to give the correct credentials in the `/wso2/business-process/repository/conf/epr` files. - - - - Update the `/business-processes/epr/SubscriptionCallbackService.epr` file according to API Manager. - - ``` java - https://localhost:8243/services/WorkflowCallbackService - ``` - - - Update the `/business-processes/epr/SubscriptionService.epr` file according to EI. - - ``` java - http://localhost:9765/services/SubscriptionService/ - ``` - -5. Start the EI server and sign in to its management console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when you sign in to the Management console due to a compression issue that exists in High Sierra SDK. - - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/conf/tomcat/catalina-server.xml` file, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the EI. - - -6. 
Select **Add** under the **Processes** menu and upload the `/business-processes/subscription-creation/BPEL/SubscriptionApprovalWorkFlowProcess_1.0.0.zip` file to EI. This is the business process archive file. - ![]({{base_path}}/assets/img/learn/learn-subscription-workflow-upload.png) - -7. Select **Add** under the **Human Tasks** menu and upload the `/business-processes/subscription-creation/HumanTask/SubscriptionsApprovalTask-1.0.0.zip` file to EI. This is the human task archive file. - - -## Configuring WSO2 BPS - -!!! tip - **Before you begin**, if you have changed the API Manager's default user and role, make sure you do the following changes: - - Point the database that has the API Manager user permissions to BPS. - - Share any LDAPs, if any exist. - - Unzip the `/business-processes/subscription-creation/HumanTask/SubscriptionsApprovalTask-1.0.0.zip` file, update the role as follows in the `SubscriptionsApprovalTask.ht` file, and ZIP the `SubscriptionsApprovalTask-1.0.0` folder. - - ``` xml - - [new-role-name] - - ``` - -1. Download [WSO2 Business Process Server](https://wso2.com/api-manager/). -2. Set an offset of 2 to the default BPS port in the `/repository/conf/carbon.xml` file. This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset](https://docs.wso2.com/display/AM260/Changing+the+Default+Ports+with+Offset). - - ``` xml - 2 - ``` - - !!! tip - **Tip**: If you change the BPS **port offset to a value other than 2 or run the API Manager and BPS on different machines** (and therefore want to set the `hostname` to a different value than `localhost`), do the following: - - - Search and replace the value 9765 in all the files (.epr) inside the `/business-processes` folder with the new port (9763 + port offset). - - -3. 
Open the `/repository/conf/humantask.xml` file and the `/repository/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. - - ``` xml - true - ``` - -4. Copy the following from the `/business-processes/epr` folder to the `/repository/conf/epr` folder. If the `/repository/conf/epr` folder does not exist, create it. - - !!! note - Make sure to give the correct credentials in the `/repository/conf/epr` files. - - - Update the `/business-processes/epr/SubscriptionCallbackService.epr` file according to API Manager. - ``` - https://localhost:8243/services/WorkflowCallbackService - ``` - - - Update the `/business-processes/epr/SubscriptionService.epr` file according to BPS. - ``` - http://localhost:9765/services/SubscriptionService/ - ``` -5. Start the BPS server and sign in to its management console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when you sign in to the Management console due to a compression issue that exists in High Sierra SDK. - - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/repository/conf/tomcat/catalina-server.xml` file, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the BPS. - - -6. Select **Add** under the **Processes** menu and upload the -`/business-processes/subscription-creation/BPEL/SubscriptionApprovalWorkFlowProcess_1.0.0.zip` -file to BPS. This is the business process archive file. - ![]({{base_path}}/assets/img/learn/learn-subscription-workflow-upload.png) - -7. Select **Add** under the **Human Tasks** menu and upload the `/business-processes/subscription-creation/HumanTask/SubscriptionsApprovalTask-1.0.0.zip` file to BPS. This is the human task archive file.
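The port-offset tips above boil down to one mechanical task: rewrite the default workflow-service port (9765, i.e. 9763 + the default offset of 2) in every `.epr` file with 9763 + your offset. A small script can automate that search-and-replace; this is a hedged sketch, assuming plain-text `.epr` files in a default encoding and a `business-processes` directory layout as shipped:

```python
import pathlib


def update_epr_ports(business_processes_dir, port_offset):
    """Replace the default workflow service port (9765) in every .epr file
    under business_processes_dir with 9763 + port_offset.
    Returns the number of files rewritten."""
    new_port = str(9763 + port_offset)
    changed = 0
    for epr in pathlib.Path(business_processes_dir).rglob("*.epr"):
        text = epr.read_text()
        if "9765" in text:
            epr.write_text(text.replace("9765", new_port))
            changed += 1
    return changed


# Example: an offset of 5 rewrites endpoints such as
# http://localhost:9765/services/SubscriptionService/ to use port 9768.
```

Run it once against the extracted `business-processes` folder before uploading the BPEL and HumanTask archives, so every endpoint reference matches the offset you configured in `carbon.xml`.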
- -## Configuring the API Manager - -Open the `/repository/deployment/server/webapps/admin/src/main/webapp/site/conf/site.json` file and configure `workFlowServerURL` under `workflows` to point to the EI/BPS server (e.g., `"workFlowServerURL": "https://localhost:9445/services/"`). - -#### Engaging the WS Workflow Executor in the API Manager - -First, enable the API subscription workflow. - -1. Sign in to the API Manager Management Console (`https://:9443/carbon`) and select **Browse** under **Resources**. - - ![]({{base_path}}/assets/img/learn/learn-subscription-workflow-browse.png) - -2. Go to the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource, disable the Simple Workflow Executor, and enable the WS Workflow Executor. Also specify the service endpoint where the workflow engine is hosted and the credentials required to access that service via basic authentication (i.e., username/password-based authentication). - - ``` - - ... - - http://localhost:9765/services/SubscriptionApprovalWorkFlowProcess/ - admin - admin - https://localhost:8243/services/WorkflowCallbackService - - ... - - ``` - - !!! tip - **Note** that all workflow process services of the EI/BPS run on port 9765 because you changed its default port (9763) with an offset of 2. - - - The API subscription WS Workflow Executor is now engaged. - - -3. Go to the API Developer Portal credentials page and subscribe to an API. This triggers the API subscription process and creates a Human Task instance that pauses the execution of the BPEL until some action is performed on it. After subscribing, you will see the subscription status as `ON_HOLD`. - - ![]({{base_path}}/assets/img/learn/workflow-subscription-onhold.png) - -4. Sign in to the Admin Portal (`https://:9443/admin`), list all the tasks for API subscription, and click **Start** to approve the task. This resumes the BPEL process and completes the subscription process.
- - ![]({{base_path}}/assets/img/learn/workflow-subscription-admin-entry.png) - - After approving, go back to the API Developer Portal credentials page; the application status will be **UNBLOCKED**. - - ![]({{base_path}}/assets/img/learn/workflow-subscription-complete.png) - -5. Go back to the API Developer Portal and see that the user is now subscribed to the API. - - Whenever a user tries to subscribe to an API, a request of the following format is sent to the workflow endpoint: - - ``` - - - - - sampleAPI - 1.0.0 - /sample - admin - subscriber1 - application1 - gold - - ? - - - - ``` - - Elements of the above configuration are described below: - - | Element | Description | |----------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | apiName | Name of the API to which subscription is requested. | | apiVersion | Version of the API the user subscribes to. | | apiContext | Context in which the requested API is to be accessed. | | apiProvider | Provider of the API. | | subscriber | Name of the user requesting subscription. | | applicationName | Name of the application through which the user subscribes to the API. | | tierName | Throttling tiers specified for the application. | | `workflowExternalRef` | The unique reference against which a workflow is tracked. This needs to be sent back from the workflow engine to the API Manager at the time of workflow completion. | | callBackURL | The URL to which the workflow completion request is sent by the workflow engine at the time of workflow completion. 
This property is configured in the `` element in the `workflow-extensions.xml` registry file. | - - diff --git a/en/docs/design/advanced-topics/adding-an-api-state-change-workflow-using-bps.md b/en/docs/design/advanced-topics/adding-an-api-state-change-workflow-using-bps.md deleted file mode 100644 index e6e15d890c..0000000000 --- a/en/docs/design/advanced-topics/adding-an-api-state-change-workflow-using-bps.md +++ /dev/null @@ -1,248 +0,0 @@ -# Adding an API State Change Workflow - -This section explains how to add a custom workflow to control the API state changes in the API Manager. Before you begin, see [Workflow Extensions]({{base_path}}/reference/extending-the-api-manager/extendingh-workflows/invoking-the-api-manager-from-the-bpel-engine) for more information on the different types of workflow executors, and see [API Lifecycle]({{base_path}}/getting-started/key-concepts#api-lifecycle) to get a better understanding of the API states. - -!!! Note - - You will only need to configure either **WSO2 EI** or **WSO2 BPS**. The WSO2 API Manager configuration is common to both. - -## Configuring WSO2 EI - -1. Download [WSO2 Enterprise Integrator (WSO2 EI)](https://wso2.com/integration). -2. Set an offset of 2 to the default EI port in the `/conf/carbon.xml` file. - - This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset). - - ``` xml - 2 - ``` - - !!! 
tip - - If you **run the API Manager and EI on different machines**, set the `hostname` to a different value than `localhost`. - - If you change the EI port **offset to a value other than 2 or run the API Manager and EI on different machines**, you need to do the following: - - Search and replace the value 9765 in all the files (`.epr`) inside the `/business-processes` directory with the new port (9763 + port offset). - - -3. Start the EI server and sign in to its management console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when you sign in to the management console due to a compression issue that exists in High Sierra SDK. - - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/conf/tomcat/catalina-server.xml` file, change `compression="on"` to `compression="off"` in the Connector configuration, and restart WSO2 EI. - - -4. Select **Processes > Add > BPMN** and upload the `/business-processes/api-state-change/APIStateChangeApprovalProcess.bar` file to EI. -![]({{base_path}}/assets/img/learn/learn-state-change-workflow-add-bpmn.png) - -## Configuring WSO2 BPS - -1. Download [WSO2 Business Process Server](https://wso2.com/api-manager/). -2. Set an offset of 2 to the default BPS port in the `/repository/conf/carbon.xml` file. - - This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/install-and-setup/setup/deployment-best-practices/changing-the-default-ports-with-offset). - - ``` xml - 2 - ``` - - !!! 
tip - - If you **run the API Manager and BPS on different machines**, set the `hostname` to a different value than `localhost`. - - If you change the BPS port **offset to a value other than 2 or run the API Manager and BPS on different machines**, you need to do the following: - - Search and replace the value 9765 in all the files (`.epr`) inside the `/business-processes` directory with the new port (9763 + port offset). - - -3. Start the BPS server and sign in to its management console (`https://:9443+/carbon`). - - !!! warning - If you are using Mac OS with High Sierra, you may encounter the following warning when you sign in to the management console due to a compression issue that exists in High Sierra SDK. - - ``` java - WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request) - ``` - - To avoid this issue, open the `/repository/conf/tomcat/catalina-server.xml` file, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the BPS. - - -4. Select **Processes > Add > BPMN** and upload the `/business-processes/api-state-change/APIStateChangeApprovalProcess.bar` file to BPS. -![]({{base_path}}/assets/img/learn/learn-state-change-workflow-add-bpmn.png) - -## Configuring the API Manager - -1. Open the `/repository/conf/deployment.toml` file and uncomment all the configuration that is set in the `[apim.workflow]` section and set `enable` to `true`.
- ``` - [apim.workflow] - enable = true - service_url = "https://localhost:9445/bpmn" - username = "$ref{super_admin.username}" - password = "$ref{super_admin.password}" - callback_endpoint = "https://localhost:${mgt.transport.https.port}/api/am/admin/v4/workflows/update-workflow-status" - token_endpoint = "https://localhost:${https.nio.port}/token" - client_registration_endpoint = "https://localhost:${mgt.transport.https.port}/client-registration/v0.15/register" - client_registration_username = "$ref{super_admin.username}" - client_registration_password = "$ref{super_admin.password}" - ``` -2. Change the `service_url` if you have configured the BPS/EI to run on a different port offset. - - ### Engaging the WS Workflow Executor in the API Manager - - First, enable the API state change workflow. - -1. Sign in to the APIM management console (`https://:9443/carbon`). - -2. Click **Resources** > **Browse**. - - [![Resources Browse Menu]({{base_path}}/assets/img/learn/learn-state-change-workflow-browse.png)]({{base_path}}/assets/img/learn/learn-state-change-workflow-browse.png) - -3. Go to the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource, disable the Simple Workflow Executor and enable the WS Workflow Executor. - - ``` - - .... - - - - APIStateChangeApprovalProcess - Created:Publish,Published:Block - - .... - - ``` - - You have now engaged the API WS Workflow. The default configuration is set for the **Created to Publish** and **Published to Block** state changes. See [Advanced Configurations](#advanced-configurations) for information on configuring more state changes. - -4. Sign in to the API Publisher (`https://:9443/publisher`) and publish an API. - - For more information, see [Create a REST API]({{base_path}}/design/create-api/create-rest-api/create-a-rest-api/) and [Publish an API]({{base_path}}/deploy-and-publish/publish-on-dev-portal/publish-an-api/). - -5. Click **Lifecycle**.
- - A message related to the publish workflow will be displayed because the workflow is enabled for the **Created to Publish** state change. - - ![Lifecycle]({{base_path}}/assets/img/learn/learn-state-change-workflow-pending.png) - - !!! info - Note that the **publish** button will be disabled in the overview page until the workflow task is completed or deleted. - ![Publish button]({{base_path}}/assets/img/learn/learn-state-change-workflow-publish-pending.png) - -6. You can revoke the state change by clicking **Delete Task**. - - ![Delete task button]({{base_path}}/assets/img/learn/learn-state-change-workflow-delete-task.png) - -7. Sign in to the Admin Portal (`https://:9443/admin`). - -8. Click **API State Change** to see the list of tasks awaiting approval. - - ![]({{base_path}}/assets/img/learn/learn-state-change-workflow-admin-assign.png) - -9. Click **Assign to Me** to approve the task. - -10. Select **Approve** and click **Complete** to resume and complete the API state change. - - ![]({{base_path}}/assets/img/learn/learn-state-change-workflow-admin-approve.png) - - -## Configuring the BPS for tenants - -### Using the EI - -1. Sign in to the EI with the credentials of the tenant. - -2. Select **Processes > Add > BPMN** and upload the `/business-processes/api-state-change/APIStateChangeApprovalProcess.bar` file to EI. - -3. Copy the `/wso2/business-process/repository/deployment/server/webapps/bpmn.war` web app into the `/wso2/business-process/repository/tenants//webapps` directory. - -4. To engage the WS Workflow Executor, sign in to the admin console using the credentials of the tenant and repeat step 2 from the [Engaging the WS Workflow Executor in the API Manager](#engaging-the-ws-workflow-executor-in-the-api-manager) section. - -### Using the BPS - -1. Sign in to the BPS with the credentials of the tenant. Select **Processes > Add > BPMN** and upload the `/business-processes/api-state-change/APIStateChangeApprovalProcess.bar` file to BPS. - -2. 
Copy the `<BPS_HOME>/repository/deployment/server/webapps/bpmn.war` web app into the `<BPS_HOME>/repository/tenants/<tenant-domain>/webapps` directory. - -3. To engage the WS Workflow Executor, sign in to the admin console using the credentials of the tenant and repeat step 2 from the [Engaging the WS Workflow Executor in the API Manager](#engaging-the-ws-workflow-executor-in-the-api-manager) section. - -### Advanced configurations - -Given below are the configurations that can be changed by editing the `<API-M_HOME>/repository/conf/deployment.toml` file. - -``` -[apim.workflow] -enable = true -service_url = "https://localhost:9445/bpmn" -username = "$ref{super_admin.username}" -password = "$ref{super_admin.password}" -callback_endpoint = "https://localhost:${mgt.transport.https.port}/api/am/admin/v4/workflows/update-workflow-status" -token_endpoint = "https://localhost:${https.nio.port}/token" -client_registration_endpoint = "https://localhost:${mgt.transport.https.port}/client-registration/v0.15/register" -client_registration_username = "$ref{super_admin.username}" -client_registration_password = "$ref{super_admin.password}" -``` - - -The elements of the above configuration are explained below. -
-| Element name | Description |
-|--------------|-------------|
-| `enable` | Enables the Admin Portal to approve state change tasks. |
-| `service_url` | The URL of the BPMN server. |
-| `username` | The user accessing the BPMN REST API. |
-| `password` | The password of the user accessing the BPMN REST API. |
-| `callback_endpoint` | The REST API invoked by the BPMN process to complete the workflow. |
-| `token_endpoint` | The endpoint used to generate the access token that is passed to the BPMN process. Once the access token is received, it is used to call the workflow callback API. |
-| `client_registration_endpoint` | The endpoint used to create the OAuth application that the BPMN process uses to generate the token. |
-| `client_registration_username` | The username for the client registration endpoint. |
-| `client_registration_password` | The password for the client registration endpoint. |
- -!!! note - Setting the client registration user - Create a user with only the **apim:api_workflow** scope permissions when setting the `client_registration_username`. Avoid using super admin credentials; if super admin credentials are used, the created OAuth application will have all the permissions related to the scopes in the other REST APIs. Follow the instructions below to create a user with the **apim:api_workflow** scope permissions: - - 1. Sign in to the APIM management console (`https://<hostname>:9443/carbon`) and create a role named `workflowCallbackRole`. Assign the create and publish (or subscribe) permissions to this role. - 2. Sign in to the APIM Admin Portal (`https://<hostname>:9443/admin`) and go to **Settings** > **Advanced**. - 3. Update the role associated with the `apim:api_workflow` scope with the newly created role. - - ``` json - ... - { - "Name": "apim:api_workflow", - "Roles": "workflowCallbackRole" - } - ... - ``` - - 4. Assign this role to a user. - 5. Update `client_registration_username` and `client_registration_password` with this user's credentials. - - For more details on how to create users and roles, see [Managing users and roles]({{base_path}}/administer/product-administration/managing-users-and-roles/admin-managing-users-roles-and-permissions). - - -The configurations that can be changed by editing the `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` are given below. - -**Simple workflow** - -``` xml -<APIStateChange executor="org.wso2.carbon.apimgt.impl.workflow.APIStateChangeSimpleWorkflowExecutor"/> -``` - -**WS workflow** - -``` xml -<APIStateChange executor="org.wso2.carbon.apimgt.impl.workflow.APIStateChangeWSWorkflowExecutor">
-    <Property name="processDefinitionKey">APIStateChangeApprovalProcess</Property>
-    <Property name="stateList">Created:Publish,Published:Block</Property>
-</APIStateChange>
-``` - -The elements of the above configuration are explained below.
- -| Element Name | Mandatory/Optional | Description |
-|---------------|--------------------|-------------|
-| `processDefinitionKey` | Mandatory | The BPMN process definition ID. The default BPMN process shipped with API Manager has `APIStateChangeApprovalProcess` as the ID. |
-| `stateList` | Mandatory | A comma-separated list of the current state and the intended action, for example, `Created:Publish,Published:Block`. |
-| `serviceEndpoint` | Optional | The URL of the BPMN process engine. This overrides the global `service_url` value from the `deployment.toml` file and can be used to connect a separate workflow engine for a tenant. |
-| `username` | Optional | The username for the external BPMN process engine. This overrides the `username` defined in the `deployment.toml` file for the tenant. |
-| `password` | Optional | The password for the external BPMN process engine. This overrides the `password` defined in the `deployment.toml` file for the tenant. | diff --git a/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md b/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md deleted file mode 100644 index 2ca2747394..0000000000 --- a/en/docs/design/api-policies/regular-gateway-policies/creating-and-uploading-using-integration-studio.md +++ /dev/null @@ -1,97 +0,0 @@ -# Creating and Uploading Custom Mediation Policies using WSO2 Integration Studio - -You can design all custom mediation policies using a tool such as WSO2 Integration Studio and then store the policy in the registry, which can later be deployed to the Gateway. - -Let's see how to create a custom mediation policy using WSO2 Integration Studio and then deploy and use it in your APIs.
-This custom policy adds a full trace log that gets printed when you invoke a particular API deployed in the Gateway. - -1. Navigate to the Integration Studio page - - -2. Click **Download** to download WSO2 Integration Studio for your preferred platform (i.e., Mac, Windows, Linux). - - *For example, if you are using an Ubuntu 64-bit computer, you need to download WSO2-Integration-Studio-8.1.0-linux-gtk-x86_64.tar.gz.* - -3. Extract the downloaded Integration Studio archive to the desired location and run the **IntegrationStudio** application to start the tool. - - [![Integration Studio]({{base_path}}/assets/img/learn/api-gateway/message-mediation/integration-studio.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/integration-studio.png) - - !!! tip - To learn more about using WSO2 Integration Studio, see the [WSO2 Integration Studio]({{base_path}}/integrate/develop/wso2-integration-studio/) documentation. - -4. Click **Window -> Perspective -> Open Perspective -> Other** to get the Perspective options. - - [![Perspective Path]({{base_path}}/assets/img/learn/api-gateway/message-mediation/open-perspective.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/open-perspective.png) - -5. Select **WSO2 APIManager** from the perspective list and click **Open**. - - [![APIM Perspective]({{base_path}}/assets/img/learn/api-gateway/message-mediation/apim-perspective.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/apim-perspective.png) - - You will be redirected to the following page. - - [![APIM Perspective View]({{base_path}}/assets/img/learn/api-gateway/message-mediation/apim-perspective-view.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/apim-perspective-view.png) - -6. Click the **Login** icon. - - The Add Registry dialog box appears.
- - [![Login]({{base_path}}/assets/img/learn/api-gateway/message-mediation/login.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/login.png) - -7. Enter the URL of the Publisher, the username, and the password, and click **OK**. - - [![Checkin to register]({{base_path}}/assets/img/learn/api-gateway/message-mediation/checkin.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/checkin.png) - -8. Locate the path where the sequence needs to be added `(IN/OUT/FAULT)` from the **Registry Tree Browser**. - - [![Locate Path]({{base_path}}/assets/img/learn/api-gateway/message-mediation/registry-path.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/registry-path.png) - -9. Click **Create** to create a new sequence, provide the sequence name as `newSequence`, and click **Finish**. - - [![Create a new sequence]({{base_path}}/assets/img/learn/api-gateway/message-mediation/create-sequence.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/create-sequence.png) - - Your sequence now appears in the Integration Studio Editor. - -10. Drag and drop a **Log Mediator** from the **Mediators** section to your sequence and **Save** the `newSequence.xml` file. - - [![New sequence XML]({{base_path}}/assets/img/learn/api-gateway/message-mediation/newsequence-log-xml.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/newsequence-log-xml.png) - -11. Right-click on the created mediator, click **Show Properties View**, and enter the following values in the **Log Mediator**. - - `Log Level: Full` - -12. Right-click on the sequence file (`newSequence.xml`), and click **Commit file**. - - [![push to register]({{base_path}}/assets/img/learn/api-gateway/message-mediation/commit-to-reg.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/commit-to-reg.png) - -13. Click **Yes** to push the changes into the remote registry. - -14. Click **OK** to acknowledge the success message that appears.
- - [![Success]({{base_path}}/assets/img/learn/api-gateway/message-mediation/success-message.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/success-message.png) - -15. Navigate to the API Manager Management Console and click **Resources** > **Browse**, under the **Main** section, to access the Registry Browser and verify whether the sequence was added successfully. - - [![API Manager Management Console]({{base_path}}/assets/img/learn/api-gateway/message-mediation/mgt-console-reg-browser.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/mgt-console-reg-browser.png) - -16. Sign in to the **API Publisher**. - -17. Click **Create API** and then create a new REST type API. - - For more information, see [Create a REST API]({{base_path}}/design/create-api/create-rest-api/create-a-rest-api/). - -18. Click on the created API and click **Runtime Configurations**. - -19. Click the Edit icon [![Edit]({{base_path}}/assets/img/learn/api-gateway/message-mediation/edit-button.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/edit-button.png) in the **Message Mediation** section under the **Request** sub-menu. - -20. In the **Select a Mediation Policy** pop-up, select **Common Policies**, and then select the newly added sequence (`newSequence`) from the sequence list. Finally, click **Select**. - - [![Select the mediation policy]({{base_path}}/assets/img/learn/api-gateway/message-mediation/select-mediation-policy.png)]({{base_path}}/assets/img/learn/api-gateway/message-mediation/select-mediation-policy.png) - -21. If the API is not in the `PUBLISHED` state, go to the **Lifecycle** tab, and click `REDEPLOY` to re-publish the API. - -22. Invoke the API using a valid subscription. - - You will see the following trace log in the server logs. - - ``` bash - [2021-09-28 15:27:30,770] INFO - LogMediator To: /test/1.0, MessageID: urn:uuid:042a64ab-590a-4128-bd99-ef6974893610, Direction: request, Envelope: - ``` - -3.
Now let's create a new policy named "Sample Log". Click the **Add New Policy** button at the top, and you will see the following screen. Notice that the first step is to upload the Policy Definition file. - - - -4. The Policy Definition is a .j2 file that includes the Synapse Gateway related logic. For this sample policy, you can use a simple Synapse log mediator such as the one below. Copy this content and save the file as addLogMessage.j2. Then click or drag the definition file into the dropzone shown on the screen from step 3. - - ```xml
    - <log level="full">
    -     <property name="message" value="Sample Log :: custom mediation policy"/>
    - </log>
    - ``` - -5. Click **Continue** to move to the second step. - -6. You need to provide the Policy Specification, which is a JSON document that describes the policy that you are about to add. - - - - You can use this sample JSON and simply paste it in the editor. - - ```json - { - "category": "Mediation", - "name": "sampleLog", - "displayName": "Sample Log", - "description": "This is just a dummy policy we are creating for demo purposes", - "multipleAllowed": false, - "applicableFlows": [ - "request", - "response", - "fault" - ], - "supportedGateways": [ - "Synapse" - ], - "supportedApiTypes": [ - "REST" - ], - "policyAttributes": [] - } - ``` - -7. Once that is done, click **Save**. - -8. Now the newly created policy will appear in the table. You can search for this policy using the search function. - -9. Try viewing this policy by clicking **View**. Notice that you can download the policy using the **Download Policy** button. This download gives you a .zip file containing the Policy Definition and Policy Specification files. - - - -10. You can delete this newly created policy using the **Delete** action. - - -### Creating an API Specific Policy - -If you would rather create a policy that is local to the API, follow the steps below. - -1. Navigate to the **Policies** tab under any API that you want. You will see a screen as shown below.
Click the **Add New Policy** button to create an API specific policy. - - - -2. You will then be redirected to a screen where you can set up the policy for your API. - - - -3. Now you can upload the Policy Definition file and add the Policy Specification JSON, just as you did when creating a common policy. Then, click **Save** to create the policy. - -4. The screenshot below shows such a local policy. Notice the policy named **Local Test**, where the delete operation is enabled, as opposed to the disabled delete operations of Common Policies. - - - -5. You can perform view and delete actions on API specific policies by clicking the respective icons next to the policy. - -## Attaching Policies to an API resource - -When attaching these policies to an API resource, pick the desired operation and flow. Once that is decided, expand the relevant section from the left column of the UI shown below. Let's assume that we want to attach a policy to /menu get. - - - -Now let's drag the **Add Header Policy** from the **Request** tab of the **Policy List** and drop it into the dropzone highlighted in the above image. You will then notice a side panel appearing from the right-hand side. - - - -Let's set the header name to `Foo` and the header value to `Bar`, and click **Save**. - -!!! Note - You can optionally use the **Apply to all resources** option to attach the same policy to all the resources when you click **Save**. This will attach the same policy with the same values to all the resources of the corresponding flow that the policy was initially dropped on. - -Now that we have saved the dropped policy, you should be able to see a new Add Header policy (depicted by the initials AH) under the /menu get like so: - - - -If you click on this newly appended AH policy, you can still edit the initially added values and update those saved values. - -!!!
Note - You can rearrange the dropped policies that are attached to the Request flow of the /menu get. You can also download the policy zip (which includes the policy definition and policy specification files). If you click delete, the dropped policy is removed. - -Finally, when you're happy with the dragged and dropped policies, click the **Save** button at the bottom of the page. Note that if you don't click **Save**, none of the dropped policies will be saved to the API. \ No newline at end of file diff --git a/en/docs/design/api-security/api-authentication/advanced-topics/changing-the-default-token-expiration-time.md b/en/docs/design/api-security/api-authentication/advanced-topics/changing-the-default-token-expiration-time.md deleted file mode 100644 index 12c280e23a..0000000000 --- a/en/docs/design/api-security/api-authentication/advanced-topics/changing-the-default-token-expiration-time.md +++ /dev/null @@ -1,47 +0,0 @@ -# Changing the default token expiration time - -Access tokens have an expiration time, which is set to 60 minutes by default. - -- To change the default expiration time of application access tokens, - - Change the value of the `<AccessTokenDefaultValidityPeriod>` element in the `<API-M_HOME>/repository/conf/identity/identity.xml` file. Set this to a negative value to ensure that the token never expires. **Changes to this value are applied only to the new applications that you create.** - - **Example**
- ``` xml
- <AccessTokenDefaultValidityPeriod>-3600</AccessTokenDefaultValidityPeriod>
- ```
- Alternatively, you can set a default expiration time through the UI when generating/regenerating the application access token. - This is explained in [previous sections](https://docs.wso2.com/display/SHAN/Am300Working+with+Access+Tokens#Am300WorkingwithAccessTokens-valid). - -- Similarly, to change the default expiration time of user access tokens, edit the value of the `<UserAccessTokenDefaultValidityPeriod>` element in the `<API-M_HOME>/repository/conf/identity/identity.xml` file. - - **Example**
- ``` xml
- <UserAccessTokenDefaultValidityPeriod>3800</UserAccessTokenDefaultValidityPeriod>
- ```
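The validity periods above are handed to clients as the `expires_in` value (in seconds) of the token response. As an illustrative sketch — not part of the product — a client could decide whether a cached token is still usable like this, mirroring the never-expires convention of a negative validity period described above:

```python
import time

def is_token_usable(issued_at, expires_in, skew=30):
    """Return True if a cached token is still safe to use.

    issued_at  -- epoch seconds when the token was obtained
    expires_in -- validity period in seconds; a negative value means
                  the token never expires (as with a negative
                  validity period in identity.xml)
    skew       -- safety margin so a token about to expire is not
                  used mid-request
    """
    if expires_in < 0:
        return True
    return time.time() < issued_at + expires_in - skew
```

With the default 60-minute validity, a token obtained just over an hour ago fails the check, while a token issued under a negative validity period always passes.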
    - -Also see [Configuring Caching](https://docs.wso2.com/display/AM300/Configuring+Caching) for several caching options available to optimize key validation. diff --git a/en/docs/design/api-security/oauth2/access-tokens-per-device.md b/en/docs/design/api-security/oauth2/access-tokens-per-device.md deleted file mode 100644 index 425623362c..0000000000 --- a/en/docs/design/api-security/oauth2/access-tokens-per-device.md +++ /dev/null @@ -1,13 +0,0 @@ -# duplicate\_Access Tokens Per Device - -### Generating access tokens per device - -WSO2 API Manager returns the same token repeatedly if a valid token exists for the requesting Application, on behalf of the user. However, the latter mentioned scenario becomes an issue if the same user is using the same Application in two devices (e.g., If you have two instances of the same Application running on your iPhone and iPad, and your iPhone already has a token on behalf of you, your iPad will get the same token if you requested for it within the same validity period. Therefore, if one of your devices revoke this token (e.g., revoke on logout), the token that you obtained for your other device becomes invalid as the devices use the identical tokens. - -To overcome this problem, WSO2 API Manager provides a mechanism, with the use of [OAuth2.0 Scopes](https://docs.wso2.com/display/AM300/Key+Concepts#KeyConcepts-OAuthscopes) , for obtaining a unique Access Token for each device that uses the same Application. Thereby, allowing users to request tokens for different scopes. You need to prefix the [scope](https://docs.wso2.com/display/AM300/Key+Concepts#KeyConcepts-OAuthscopes) names with the string " `device_` ". WSO2 API Manager uses special treatment for the scopes that are prefixed with the latter mentioned string by ignoring the usual validations it does when issuing tokens that are associated to scopes. The following is a sample cURL command that you can use to request a token with a " `device_` " scope. 
- -``` bash -curl -k -d "grant_type=password&username=<username>&password=<password>&scope=device_ipad" -H "Authorization: Basic base64encode(consumer-key:consumer-secret)" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:9443/oauth2/token -``` - -Each token request that is made with a different scope results in a different access token being issued. For example, if you received a token named `abc` as a result of the scope `device_ipad`, you will not receive `abc` when you request a token with the scope `device_iphone`. Note that you can use `device_` scopes in conjunction with other scopes as usual. diff --git a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/json-threat-protection-for-api-gateway.md b/en/docs/design/api-security/threat-protection/gateway-threat-protectors/json-threat-protection-for-api-gateway.md deleted file mode 100644 index 830b9360ef..0000000000 --- a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/json-threat-protection-for-api-gateway.md +++ /dev/null @@ -1,84 +0,0 @@ -# JSON Threat Protection for API Gateway - -The JSON threat protector in WSO2 API Manager validates the request body of the JSON message against preconfigured limits to thwart payload attacks.
- -- [Editing the sequence through registry artifacts](#Am300JSONThreatProtectionforAPIGateway-Editingthesequencethroughregistryartifacts) -- [Applying the JSON validator policy](#Am300JSONThreatProtectionforAPIGateway-ApplyingtheJSONvalidatorpolicy) -- [Testing the JSON threat protector](#Am300JSONThreatProtectionforAPIGateway-TestingtheJSONthreatprotector) - -#### Detecting vulnerabilities before parsing the message - -The json\_validator sequence specifies the properties to be limited in the payload. A sample json\_validator sequence is given below. - -``` xml
- <sequence xmlns="http://ws.apache.org/ns/synapse" name="json_validator">
-     <property name="maxPropertyCount" value="100"/>
-     <property name="maxStringLength" value="100"/>
-     <property name="maxArrayElementCount" value="100"/>
-     <property name="maxKeyLength" value="100"/>
-     <property name="maxJsonDepth" value="100"/>
-     <!-- The product's default sequence engages the JSON schema validation mediator here. -->
- </sequence>
-``` -
-| Property | Default Value | Description |
-|--------------------------|---------------|----------------------------------------|
-| maxPropertyCount | 100 | Maximum number of properties |
-| maxStringLength | 100 | Maximum length of a string |
-| maxArrayElementCount | 100 | Maximum number of elements in an array |
-| maxKeyLength | 100 | Maximum length of a key |
-| maxJsonDepth | 100 | Maximum depth of the JSON message |
- -### Editing the sequence through registry artifacts - -To edit the existing sequence, follow the steps below. - -1. Log in to the Management Console. -2. Navigate to `/_system/governance/apimgt/customsequences/in/json_validator.xml` -3. Edit the `json_validator.xml` file. -4. Go to the API Publisher and re-publish your API for the changes to take effect. - -### Applying the JSON validator policy - -You can apply the predefined JSON policy through the UI. Follow the instructions below to apply the json\_validator in sequence. - -- Create an API or edit an existing API. - -- Go to **Message Mediation Policies** under the **Implement** tab. - -- Select **Enable Message Mediation**. Select `json_validator` from the drop-down menu for **In Flow**. - ![]({{base_path}}/assets/attachments/126559464/126559465.jpg) -- Click **Save and Publish** to save the changes.
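To make the limits in the table above concrete, here is a small standalone sketch — an illustration, not the gateway's actual implementation — of how a payload can be checked against such limits before any further processing:

```python
import json

# Illustrative limits mirroring the json_validator properties above.
LIMITS = {
    "maxPropertyCount": 100,
    "maxStringLength": 100,
    "maxArrayElementCount": 100,
    "maxKeyLength": 100,
    "maxJsonDepth": 100,
}

def validate_json(payload, limits=LIMITS):
    """Raise ValueError if the JSON payload violates any limit."""
    def walk(node, depth):
        if depth > limits["maxJsonDepth"]:
            raise ValueError("Max JSON Depth Reached")
        if isinstance(node, dict):
            if len(node) > limits["maxPropertyCount"]:
                raise ValueError("Max Property Count Reached")
            for key, value in node.items():
                if len(key) > limits["maxKeyLength"]:
                    raise ValueError("Max Key Length Reached")
                walk(value, depth + 1)
        elif isinstance(node, list):
            if len(node) > limits["maxArrayElementCount"]:
                raise ValueError("Max Array Element Count Reached")
            for item in node:
                walk(item, depth + 1)
        elif isinstance(node, str):
            if len(node) > limits["maxStringLength"]:
                raise ValueError("Max String Length Reached")
    walk(json.loads(payload), 1)
```

For example, with every limit lowered to 5, a payload containing the key `glossary` (eight characters) fails with `Max Key Length Reached`, which corresponds to the gateway's response shown in the testing section of this page.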
- -### Testing the JSON threat protector - -You can edit the sequence to set the property values according to your requirements. A sample request and response, with each property value set to 5, are given below. - -- [**Request**](#2fabe5e92ef64a3a999bb756d894221e) -- [**Response**](#6da49ce3d2cf4091a885d78334d2513e) - -Note that the keys in this request exceed the maximum key length. - -``` bash - The request message: - curl -X POST "https://localhost:8243/jsonpolicy/1.0.0/addpayload" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer b227d70b-ca56-3439-8698-ffb90345e1b5" -d "{ \"glossary\": \"value\" \"GlossSee\": \"markup\" }" -``` - -``` xml
- <am:fault xmlns:am="http://wso2.org/apimanager">
-    <am:code>400</am:code>
-    <am:message>Bad Request</am:message>
-    <am:description>Request is failed due to JSON schema validation failure: Max Key Length Reached</am:description>
- </am:fault>
-``` - -!!! warning "Performance impact" - The JSON schema mediator builds the message at the mediation level. This degrades the performance for 10KB messages with 300 concurrent users by a factor of 5.2 compared to the normal flow. - - diff --git a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/regular-expression-threat-protection-for-api-gateway.md b/en/docs/design/api-security/threat-protection/gateway-threat-protectors/regular-expression-threat-protection-for-api-gateway.md deleted file mode 100644 index 5f273ba841..0000000000 --- a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/regular-expression-threat-protection-for-api-gateway.md +++ /dev/null @@ -1,133 +0,0 @@ -# Regular Expression Threat Protection for API Gateway - -WSO2 API Manager provides predefined regex patterns to protect requests against SQL injection attacks. The attacks may depend on the API traffic at runtime, so API developers should identify the common attacks and select the appropriate restrictive measures. This feature extracts data from XML and JSON payloads, query parameters, the URI path, and headers, and validates the content against predefined regular expressions.
If any predefined regex pattern matches the content, the API request is considered a threat and is rejected. This secures the backend resources from activities that make the system vulnerable. You can configure your own restriction patterns to thwart various attacks such as the following: - -- JavaScript Injection -- Server-side Include Injection -- XPath Injection -- Java Exception Injection -- XPath Abbreviated Syntax Injection - -#### Blacklisting patterns - -We recommend the following patterns for blacklisting.
| Name | Pattern |
|------|---------|
| SQL Injection | `.*'.*\|.*ALTER.*\|.*ALTER TABLE.*\|.*ALTER VIEW.*\|.*CREATE DATABASE.*\|.*CREATE PROCEDURE.*\|.*CREATE SCHEMA.*\|.*create table.*\|.*CREATE VIEW.*\|.*DELETE.*\|.*DROP DATABASE.*\|.*DROP PROCEDURE.*\|.*DROP.*\|.*SELECT.*` |
| Server-side Include Injection Attack | `.*#include.*\|.*#exec.*\|.*#echo.*\|.*#config.*` |
| Java Exception Injection | `.*Exception in thread.*` |
| XPath Injection | `.*'.*\|.*or.*\|.*1=1.*\|.*ALTER.*\|.*ALTER TABLE.*\|.*ALTER VIEW.*\|.*CREATE DATABASE.*\|.*CREATE PROCEDURE.*\|.*CREATE SCHEMA.*\|.*create table.*\|.*CREATE VIEW.*\|.*DELETE.*\|.*DROP DATABASE.*\|.*DROP PROCEDURE.*\|.*DROP.*\|.*SELECT.*` |
| JavaScript Injection | `<\s*script\b[^>]*>[^<]+<\s*/\s*script\s*>` |
| XPath Expanded Syntax Injection | `/?(ancestor(-or-self)?\|descendant(-or-self)?\|following(-sibling))` |
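To illustrate how such blacklist patterns behave, the sketch below (not the gateway's actual mediator) applies the SQL injection pattern from the table above to request content. The product's matching details are governed by the configured sequence; using `re.IGNORECASE` here is an assumption made so that lowercase keywords are also caught:

```python
import re

# SQL injection pattern from the blacklist table above.
SQL_INJECTION = re.compile(
    r".*'.*|.*ALTER.*|.*ALTER TABLE.*|.*ALTER VIEW.*|"
    r".*CREATE DATABASE.*|.*CREATE PROCEDURE.*|.*CREATE SCHEMA.*|"
    r".*create table.*|.*CREATE VIEW.*|.*DELETE.*|"
    r".*DROP DATABASE.*|.*DROP PROCEDURE.*|.*DROP.*|.*SELECT.*",
    re.IGNORECASE,  # assumption: match blacklisted keywords regardless of case
)

def is_sql_injection(content):
    """Return True if the content matches the SQL injection blacklist."""
    return SQL_INJECTION.search(content) is not None
```

Note how broad such patterns are: a single apostrophe is enough to flag the content, which is why the patterns should be tuned to the actual API traffic before being enforced.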
- -- [Editing the sequence through registry artifacts](#Am300RegularExpressionThreatProtectionforAPIGateway-Editingthesequencethroughregistryartifacts) -- [Applying the Regular Expression Policy](#Am300RegularExpressionThreatProtectionforAPIGateway-ApplyingtheRegularExpressionPolicy) -- [Testing the regex threat protector](#Am300RegularExpressionThreatProtectionforAPIGateway-Testingtheregexthreatprotector) - -### Editing the sequence through registry artifacts - -To edit the existing sequence, follow the steps below. - -1. Log in to the Management Console. -2. Navigate to `/_system/governance/apimgt/customsequences/in/regex_policy.xml` -3. Edit the `regex_policy.xml` file. -4. Go to the API Publisher and re-publish your API for the changes to take effect. - -### Applying the Regular Expression Policy - -You can apply the predefined Regular Expression Policy through the UI. Follow the instructions below to apply the regex\_policy in sequence. - -1. Create an API or edit an existing API. -2. Go to **Message Mediation Policies** under the **Implement** tab. -3. Select **Enable Message Mediation**. Select `regex_policy` from the drop-down menu for **In Flow**. - ![]({{base_path}}/assets/attachments/126559459/126559460.png) -4. Click **Save and Publish** to save the changes. - -Each request is sanitized through the regular expression threat protector. You can add or modify the regex patterns according to your requirements. - -The regex\_policy sequence is given below. - -``` xml
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="regex_policy">
-    <property name="threatType" value="SQL-Injection"/>
-    <property name="regex" value=".*'.*|.*ALTER.*|.*ALTER TABLE.*|.*ALTER VIEW.*|.*CREATE DATABASE.*|.*CREATE PROCEDURE.*|.*CREATE SCHEMA.*|.*create table.*|.*CREATE VIEW.*|.*DELETE.*|.*DROP DATABASE.*|.*DROP PROCEDURE.*|.*DROP.*|.*SELECT.*"/>
-    <property name="enabledCheckBody" value="true"/>
-    <property name="enabledCheckHeaders" value="true"/>
-    <property name="enabledCheckPathParams" value="true"/>
-    <!-- The product's default sequence engages the regular expression protector mediator here. -->
-</sequence>
-``` - -!!! note - If you need to validate only the request headers, you can disable the `enabledCheckBody` and `enabledCheckPathParams` properties by setting their values to `false`. - - -### Testing the regex threat protector - -You can test this feature by sending an SQL injection attack in the XML message body. A sample request and response are given below.
- -- [**Message**](#10673ba9a16d49dcaf1b6a073de9cf4d) -- [**Response**](#90b129a29c8c4b74869eb1676bb3f705) - -``` xml
- <breakfast_menu>
-    <food>
-        <name>Homestyle Breakfast</name>
-        <price>drop table</price>
-        <description>Two eggs, bacon or sausage, toast, and our ever-popular hash browns</description>
-        <calories>950</calories>
-    </food>
- </breakfast_menu>
-``` - -``` xml
- <am:fault xmlns:am="http://wso2.org/apimanager">
-    <am:code>400</am:code>
-    <am:message>Bad Request</am:message>
-    <am:description>SQL-Injection Threat detected in Payload</am:description>
- </am:fault>
-``` - -!!! warning "Performance impact" - The regex mediator builds the entire message and performs string processing to find potentially harmful constructs underneath the message body. This drops the performance for 10KB messages with 300 concurrent users by a factor of 3.6 compared to the normal flow. The performance decrease may accelerate along with the message size. - - diff --git a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/xml-threat-protection-for-api-gateway.md b/en/docs/design/api-security/threat-protection/gateway-threat-protectors/xml-threat-protection-for-api-gateway.md deleted file mode 100644 index 566822f2a1..0000000000 --- a/en/docs/design/api-security/threat-protection/gateway-threat-protectors/xml-threat-protection-for-api-gateway.md +++ /dev/null @@ -1,249 +0,0 @@ -# XML Threat Protection for API Gateway - -The XML threat protector in WSO2 API Manager validates the XML payload for vulnerabilities based on the preconfigured limits. It uses the following methodologies to protect the gateway from XML-based attacks. - -- [Detecting the malformed, vulnerable XML messages through limitations](#Am300XMLThreatProtectionforAPIGateway-detectvulnerability) - -- [XML schema validation](#Am300XMLThreatProtectionforAPIGateway-XMLSchemaValidation) - -#### Detecting the malformed, vulnerable XML messages through limitations - -The xml\_validator sequence specifies the properties to be limited in the payload. A sample xml\_validator sequence is given below. - -``` xml
-<sequence xmlns="http://ws.apache.org/ns/synapse" name="xml_validator">
-    <property name="dtdEnabled" value="false"/>
-    <property name="externalEntitiesEnabled" value="true"/>
-    <property name="maxXMLDepth" value="100"/>
-    <property name="maxElementCount" value="100"/>
-    <property name="maxAttributeCount" value="100"/>
-    <property name="maxAttributeLength" value="100"/>
-    <property name="entityExpansionLimit" value="100"/>
-    <property name="maxChildrenPerElement" value="100"/>
-    <!-- The product's default sequence engages the XML validator mediator and the schema validation configuration here. -->
-</sequence>
-``` - -Users can enable or disable XML payload limits and schema validation.
Some examples are shown below. - -- [**Disabling the XML payload validation**](#b95bd611fb2144d0940b193f34addf5b) -- [**Disabling the XML schema validation**](#70c795c618f04f2cb9983858b263298d) - -``` java - -``` - -``` java - -``` - -##### XML payload validation properties - -- Disable the DTD payload in the XML properties to avoid attacks - -- You can turn on/off external entities of the payload. An example is given below with the elements of the XML request body, that can be configured . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Property | Default Value | Description |
-|----------|---------------|-------------|
-| `dtdEnabled` | false | Enables/disables DTD processing in the XML request message. |
-| `externalEntitiesEnabled` | true | Enables/disables the resolution of external entities in the XML request message. |
-| `maxXMLDepth` | 100 | Maximum depth of the XML request message. |
-| `maxElementCount` | 100 | Maximum number of allowed elements in the XML request message. |
-| `maxAttributeCount` | 100 | Maximum count of allowed attributes in the XML request message. |
-| `maxAttributeLength` | 100 | Maximum allowed length of each attribute value in characters. |
-| `entityExpansionLimit` | 100 | Maximum allowed entity expansion limit of the XML request message. |
-| `maxChildrenPerElement` | 100 | Maximum number of child elements allowed in the XML request message. |
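The depth and element-count limits in the table above can be computed for any incoming XML payload with only a few lines of code. The sketch below is an illustration using the standard library, not the gateway's implementation:

```python
import xml.etree.ElementTree as ET

def xml_depth(element, depth=1):
    """Maximum nesting depth of the element tree."""
    return max([depth] + [xml_depth(child, depth + 1) for child in element])

def element_count(element):
    """Total number of elements in the tree, including the root."""
    return 1 + sum(element_count(child) for child in element)

def check_limits(payload, max_xml_depth=100, max_element_count=100):
    """Raise ValueError if the payload violates either limit."""
    root = ET.fromstring(payload)
    if xml_depth(root) > max_xml_depth:
        raise ValueError("Maximum Depth limit Exceeded")
    if element_count(root) > max_element_count:
        raise ValueError("Maximum Element Count limit Exceeded")
```

For example, `<menu><food><name>x</name><price>1</price></food></menu>` has a depth of 3 and contains 4 elements, so it passes the default limits but fails if `max_element_count` is lowered to 3.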
- -#### XML schema validation - -You can define XML schemas per resource to validate each request. For example, to add an XML schema to the resource `/userapi/1.0.0/addResource/value`, follow the steps below. - -1. Define the resource in the `case` regex. -2. Define the relevant schema URL and add it as shown below. -3. You can define the buffer size of the request message depending on your requirement. An example is given below. - -Each request is sanitized through the XML threat protector. API developers can modify each property according to their requirements. - -- [Editing the sequence through registry artifacts](#Am300XMLThreatProtectionforAPIGateway-Editingthesequencethroughregistryartifacts) -- [Applying the XML validator policy](#Am300XMLThreatProtectionforAPIGateway-ApplyingtheXMLvalidatorpolicy) -- [Testing the XML threat protector](#Am300XMLThreatProtectionforAPIGateway-TestingtheXMLthreatprotector) -- [Testing the schema validation](#Am300XMLThreatProtectionforAPIGateway-Testingtheschemavalidation) - -### Editing the sequence through registry artifacts - -To edit the existing sequence, follow the steps below. - -1. Log in to the Management Console. -2. Navigate to `/_system/governance/apimgt/customsequences/in/xml_validator.xml` -3. Edit the `xml_validator.xml` file. -4. Go to the API Publisher and re-publish your API for the changes to take effect. - -### Applying the XML validator policy - -You can apply the predefined XML Policy through the UI. Follow the instructions below to apply the xml\_validator in sequence. - -- Create an API or edit an existing API. - -- Go to **Message Mediation Policies** under the **Implement** tab. - -- Select **Enable Message Mediation**. Select `xml_validator` from the drop-down menu for **In Flow**. - ![]({{base_path}}/assets/attachments/126559467/126559468.jpg) -- Click **Save and Publish** to save the changes.
### Testing the XML threat protector

You can edit the sequence to set the property values according to your requirements. A sample request and response, with the property values set to 30, is given below. Note that the `.xsd` URL for the relevant resource has been hosted.

- [**Request**](#389c50828aa24292b0657e037c09c635)
- [**Response**](#159d32ca825c41a480037880ce2e6413)

``` bash
curl -X POST "https://192.168.8.101:8243/xmlPolicy/1.0.0/addResource" -H "accept: application/json" -H "Content-Type: application/xml" -H "Authorization: Bearer 2901c002-f626-372c-9be3-fc54b2c8d65f" -d "<an XML payload containing more than 30 elements>"
```

``` bash
400
Bad Request
XML Validation Failed: due to Maximum Element Count limit (30) Exceeded
```

### Testing the schema validation

A sample request and response to test the schema validation is given below.

- [**Request**](#45b87273c80b44ffb18a3f8fe4f5b8f6)
- [**Response**](#194a5a4652e94e609d80ba175c16b449)
- [**.xsd URL**](#db80409dd4d941dc972837213bc340e5)

``` bash
curl -X POST "https://192.168.8.101:8243/xmlPolicy/1.0.0/addResource" -H "accept: application/json" -H "Content-Type: application/xml" -H "Authorization: Bearer 2901c002-f626-372c-9be3-fc54b2c8d65f" -d "<an XML payload whose root element is not declared in the schema>"
```

``` bash
400
Bad Request
Error occurred while parsing XML payload : org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'inline_model'.
```

!!! warning
    **Performance impact**

    The XML mediator builds the message at the mediator level. For 10KB messages with 300 concurrent users, the flow is about 5.6 times slower than the normal flow. The performance degradation can grow with the message size.

diff --git a/en/docs/includes/deploy/k8s-setup-note.md b/en/docs/includes/deploy/k8s-setup-note.md
deleted file mode 100644
index 1c84e0c0cc..0000000000
--- a/en/docs/includes/deploy/k8s-setup-note.md
+++ /dev/null
@@ -1,4 +0,0 @@

!!! note
    If you are using [Rancher Desktop](https://rancherdesktop.io/), disable the default Traefik ingress controller in order to deploy the Nginx ingress controller. Refer to the [Rancher Docs](https://docs.rancherdesktop.io/faq/#q-how-can-i-disable-traefik-and-will-doing-so-remove-traefik-resources) for more information.

diff --git a/en/docs/includes/design/add-oas-example.md b/en/docs/includes/design/add-oas-example.md
deleted file mode 100644
index d69d708c23..0000000000
--- a/en/docs/includes/design/add-oas-example.md
+++ /dev/null
@@ -1,139 +0,0 @@

With OpenAPI 3.0 you can provide the expected payloads and headers in the following formats.

**Single Example for an Operation**

=== "Format"
    ```yaml
    <path>:
      <http-method>:
        responses:
          <response code>:
            description: <description>
            headers:
              <header>:
                example: <example header value>
            content:
              <media type>:
                example: <example>
    ```

=== "Example"
    ```yaml
    /pet/findByStatus:
      get:
        responses:
          '200':
            description: OK
            headers:
              x-wso2-example:
                example: example header value
            content:
              application/json:
                example:
                  mock response: hello world
    ```

**Multiple Examples for an Operation**

=== "Format"
    ```yaml
    <path>:
      <http-method>:
        responses:
          <response code>:
            description: <description>
            headers:
              <header>:
                example: <example header value>
            content:
              <media type>:
                examples:
                  <example name>:
                    value: <example>
    ```

=== "Example"
    ```yaml
    /pet/findByStatus:
      get:
        responses:
          50X:
            description: Service Unavailable
            headers:
              x-wso2-example:
                example: example header value
            content:
              application/json:
                examples:
                  ref1:
                    value:
                      mock response: hello world
                  ref2:
                    value:
                      mock response: Welcome
          default:
            description: default response
            headers:
              x-wso2-example:
                example: default header value
            content:
              application/json:
                examples:
                  ref1:
                    value:
                      mock response: default hello world
                  ref2:
                    value:
                      mock response: default Welcome
    ```

| **Place Holder** | **Usage** |
|-----------------|--------------------|
| `response code` | Can be a 3-digit status code or a wildcard format like 2XX. `default` can also be provided instead of a particular status code. |
| `header` | Header name. You can provide multiple headers similarly under `headers`. |
| `media type` | Mock response content type. Provide allowed content types for the resource. |
| `example` | Provide the content body as a simple string or as an object. If an object is given as the `example`, it will be parsed to JSON format. |

For more information on OpenAPI response body example specifications, visit [Request and Response Body Examples](https://swagger.io/docs/specification/adding-examples/).

!!! example
    You can find a complete OpenAPI example for Mock Implementation here: [OpenAPI for Mock Implementation](https://github.com/wso2/product-microgateway/blob/main/samples/openAPI-definitions/mock-impl-sample.yaml)

    If you take the example in **Multiple Examples for an Operation** mentioned previously and update the OpenAPI definition with it, you can use the `Prefer` header and the `Accept` header to get different examples for the same resource operation.
    Using the `Prefer` header, you can specify which `code` and/or `example` should be returned as the response.
Invoking `GET` for `/pet/findByStatus` will return the default example as given below.

=== "Request"
    ```bash
    curl -X GET https://localhost:9095/v3/1.0.6/pet/findByStatus
    ```

=== "Response"
    ```bash
    < HTTP/1.1 200 OK
    < content-type: application/json
    < x-wso2-example: "default header value"
    <
    {"mock response":"default hello world"}
    ```

Invoking `GET` for `/pet/findByStatus` with the `Prefer` header will return the matched example for the particular code and example reference.

=== "Request"
    ```bash
    curl -H 'Prefer: code=503, example=ref2' -X GET https://localhost:9095/v3/1.0.6/pet/findByStatus
    ```

=== "Response"
    ```bash
    < HTTP/1.1 503 Service Unavailable
    < content-type: application/json
    < x-wso2-example: "example header value"
    <
    {"mock response":"Welcome"}
    ```

diff --git a/en/docs/includes/design/create-streaming-api/create-a-sse-streaming-api.md b/en/docs/includes/design/create-streaming-api/create-a-sse-streaming-api.md
deleted file mode 100644
index 531c941d10..0000000000
--- a/en/docs/includes/design/create-streaming-api/create-a-sse-streaming-api.md
+++ /dev/null
@@ -1,160 +0,0 @@

# Create a Server Sent Events API

## Overview

A Server-Sent Events (SSE) API is a streaming API in WSO2 API Manager (WSO2 API-M) that is implemented based on the [SSE](https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events) specification. SSE is an HTTP-based protocol that allows one-way communication, similar to WebHooks, from the server to the client. The SSE server transfers events over an already established connection without creating new connections. Therefore, the SSE protocol has a lower delivery latency compared to typical HTTP. WSO2 API Manager allows API developers to integrate an SSE backend with an API and to receive events from the backend.
You can create an SSE API from scratch in WSO2 API-M and export the SSE APIs that are created within WSO2 API-M as AsyncAPI definitions. Alternatively, you can also import [existing AsyncAPI definitions to create SSE APIs in WSO2 API-M](../../../../use-cases/streaming-usecase/create-streaming-api/create-a-streaming-api-from-an-asyncapi-definition).

This section guides you through the process of creating an API from scratch, which will expose an SSE backend via WSO2 API Manager. After an SSE API is created, you will be able to add different topics; each topic can be mapped to a different path of the backend and used to manage it.

## How it works

SSE APIs use regular HTTP requests for a persistent connection and get multiplexing over HTTP/2 out-of-the-box. If the connection drops, the EventSource fires an error event and automatically tries to reconnect. The server can also control the timeout before the client tries to reconnect. Clients can send a unique ID with messages. When a client tries to reconnect after a dropped connection, it sends the last known ID. The server can then see that the client missed "n" messages and send the backlog of missed messages on reconnection.

## Example usage

For example, stock market applications use SSE APIs to send messages in a uni-directional manner from the server to the client. If users need the latest stock market prices, they subscribe to the respective channel that publishes stock market stats. Thereafter, the server keeps sending the user the latest stock market prices as and when they get updated, providing an immediate user experience.

## Basic flow

Follow the instructions below to create the API using the basic flow.

### Step 1 - Design an SSE API

1. {!includes/sign-in-publisher.md!}

2. Click **CREATE API**, go to **Streaming API**, and click **SSE API**.
    !!! note
        The **CREATE** button will only appear for a user who has the creator role permission.

    [![Design New Streaming API](../../../assets/img/design/create-api/streaming-api/design-new-streaming-api.png)](../../../assets/img/design/create-api/streaming-api/design-new-streaming-api.png)

3. Enter API details.
    | Field | Sample value |
    |-------|--------------|
    | **Name** | ServerSentEvents |
    | **Context** | `/events` <br/><br/> The API context is used by the Gateway to identify the API. Therefore, the API context must be unique. This context is the API's root context when invoking the API through the Gateway. <br/><br/> You can define the API's version as a parameter of its context by adding `{version}` into the context. For example, `{version}/event`. The API Manager assigns the actual version of the API to the `{version}` parameter internally. For example, `https://localhost:8243/1.0.0/event`. Note that the version appears before the context, allowing you to group your APIs based on the versions. |
    | **Version** | 1.0.0 |
    | **Protocol** | SSE |
    | **Endpoint** | `http://localhost:8080` <br/><br/> You need to have a Server-Sent Events server running locally for this purpose. |

    SSE Create API Page

4. Click **CREATE** to create the API.

    The overview page of the newly created API appears.

    [![SSE API overview page](../../../assets/img/design/create-api/streaming-api/sse-api-overview-page.png)](../../../assets/img/design/create-api/streaming-api/sse-api-overview-page.png)

### Step 2 - Configure the Topics

Topics of an SSE API are always **Subscribe only**, where the flow of events will be from the server (backend) to the client. By default, an SSE API will have a topic with the name `/*`.

1. Click **Topics** under **API Configurations** to navigate to the **Topics** page.

2. Modify the topics as follows and click **Save** to update them.

    1. Optionally, click delete, as shown below, to delete an existing topic.

        SSE API Delete Existing Topic

    2. Select **sub** under **Types**, enter the **Topic Name**, and click **+** as shown below, to add a new topic.

        SSE API Add Topic

        The newly added topic is displayed as follows.

        SSE API Newly Added Topic

### Step 3 - View the AsyncAPI Definition

Click **AsyncAPI Definition** under **API Configurations**.

The AsyncAPI definition of the streaming API, which you just created, appears.

SSE API AsyncAPI Definition

### Step 4 - Configure the Runtime Configurations

1. Click **Runtime** under **API Configurations**.

    Transport Level Security defines the transport protocol on which the API is exposed.

    [![SSE API Runtime Configurations Page](../../../assets/img/design/create-api/streaming-api/sse-api-runtime-configurations-page.png)](../../../assets/img/design/create-api/streaming-api/sse-api-runtime-configurations-page.png)

2. If you wish to limit the API availability to only one transport (e.g., HTTPS), uncheck the appropriate checkbox under **Transport Level Security**.

    Both HTTP and HTTPS transports are selected by default.

Now, you have successfully created and configured a Streaming API.
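Behind the scenes, the events that an SSE API streams to subscribers use the SSE wire format: blocks of `field: value` lines separated by a blank line. The short Python sketch below is illustrative only and independent of WSO2 API-M; it parses such a stream and tracks the `id:` field that a client would send back as the `Last-Event-ID` header when reconnecting:

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Events framing: fields accumulate until a blank
    line dispatches the event. The last seen `id:` is attached to each
    event so a client can resume with the Last-Event-ID header."""
    events, data, event_id = [], [], None
    for raw in stream_lines:
        line = raw.rstrip("\n")
        if not line:  # a blank line dispatches the buffered event
            if data:
                events.append({"id": event_id, "data": "\n".join(data)})
            data = []
        elif line.startswith("data:"):
            data.append(line[len("data:"):].lstrip())
        elif line.startswith("id:"):
            event_id = line[len("id:"):].strip()
    return events

# A canned stream standing in for the body of an open HTTP response.
sample = ["id: 1\n", "data: AAPL 182.50\n", "\n", "id: 2\n", "data: AAPL 182.75\n", "\n"]
for event in parse_sse(sample):
    print(event["id"], event["data"])  # 1 AAPL 182.50, then 2 AAPL 182.75
```

A real client would read these lines from the Gateway's open HTTP response rather than a canned list, and would reconnect with `Last-Event-ID` set to the last `id` it processed.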
Next, let's [Publish your API](../../../../deploy-and-publish/publish-on-dev-portal/publish-an-api). - -## End-to-end tutorial - -Learn more by trying out an end-to-end tutorial on Creating and Publishing a SSE API, which uses the default Streaming Provider that works with WSO2 API Manager, namely the WSO2 Streaming Integrator. - -## See Also - -{!includes/design/stream-more-links.md!} - diff --git a/en/docs/includes/design/create-streaming-api/create-a-streaming-api-from-an-asyncapi-definition.md b/en/docs/includes/design/create-streaming-api/create-a-streaming-api-from-an-asyncapi-definition.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/en/docs/includes/design/stream-more-links.md b/en/docs/includes/design/stream-more-links.md index 2c9867c98a..0601ce731e 100644 --- a/en/docs/includes/design/stream-more-links.md +++ b/en/docs/includes/design/stream-more-links.md @@ -1,14 +1,13 @@ - - Learn more on the concepts that you need to know when creating a Streaming API: - - [Endpoints](../../../../design/endpoints/endpoint-types/) - - [API Security](../../../../design/api-security/api-authentication/secure-apis-using-oauth2-tokens) - - [Rate Limiting](../../../../design/rate-limiting/rate-limiting-for-streaming-apis/) - - [Life Cycle Management](../../../../design/lifecycle-management/api-lifecycle/) - - [API Monetization](../../../../design/api-monetization/monetizing-an-api/) - - [API Visibility](../../../../design/advanced-topics/control-api-visibility-and-subscription-availability-in-developer-portal/) - - [API Documentation](../../../../design/api-documentation/add-api-documentation/) - - [Custom Properties](../../../../design/create-api/adding-custom-properties-to-apis/) + - [Endpoints](../../../../design/endpoints/endpoint-types/) + - [API Security](../../../../design/api-security/api-authentication/secure-apis-using-oauth2-tokens) + - [Rate Limiting](../../../../design/rate-limiting/rate-limiting-for-streaming-apis/) + - [Life Cycle 
Management](../../../../design/lifecycle-management/api-lifecycle/) + - [API Monetization](../../../../design/api-monetization/monetizing-an-api/) + - [API Visibility](../../../../design/advanced-topics/control-api-visibility-and-subscription-availability-in-developer-portal/) + - [API Documentation](../../../../design/api-documentation/add-api-documentation/) + - [Custom Properties](../../../../design/create-api/adding-custom-properties-to-apis/) - Learn how to test a Streaming API. For an example, see [Test a WebSub/WebHook API](../../../../use-cases/streaming-usecase/create-streaming-api/test-a-websub-api). diff --git a/en/docs/includes/handling-mtls-ssl-termination.md b/en/docs/includes/handling-mtls-ssl-termination.md deleted file mode 100644 index 21003b2ed4..0000000000 --- a/en/docs/includes/handling-mtls-ssl-termination.md +++ /dev/null @@ -1,13 +0,0 @@ -### Handling MTLS when SSL is terminated by the Load Balancer or Reverse Proxy - -When SSL termination of API requests takes place at the Load Balancer or Reverse Proxy, the following prerequisites need to be met by the Load Balancer. - -- Terminate the mutual SSL connection from the client. -- Pass the client SSL certificate to the Gateway in an HTTP Header. - - For more information, see the [Nginx documentation](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate). - -The following diagram illustrates how Mutual SSL works in such an environment. - -[![MTLS Load Balancer]({{base_path}}/assets/img/learn/mtls-loadbalancer.png)]({{base_path}}/assets/img/learn/mtls-loadbalancer.png) - diff --git a/en/docs/includes/prerequisites-apim.md b/en/docs/includes/prerequisites-apim.md deleted file mode 100644 index a8e05dc1ef..0000000000 --- a/en/docs/includes/prerequisites-apim.md +++ /dev/null @@ -1,3 +0,0 @@ -## Prerequisites - -[Download and install WSO2 API Manager](../../../install-and-setup/install/installation-prerequisites/). 
diff --git a/en/docs/install-and-setup/install/admin-product-startup-options.md b/en/docs/install-and-setup/install/admin-product-startup-options.md
deleted file mode 100644
index caec22d1d6..0000000000
--- a/en/docs/install-and-setup/install/admin-product-startup-options.md
+++ /dev/null
@@ -1,83 +0,0 @@

# Product Startup Options

Given below are the options that are available when starting a WSO2 product. The product startup scripts are stored in the `<PRODUCT_HOME>/bin/` directory. When you execute the startup script, you can pass a system property by appending it to the startup command as shown below.

``` bash
sh api-manager.sh -<system-property>
```

For example:

``` bash
./api-manager.sh -Dsetup    (In Linux)
api-manager.bat -Dsetup     (In Windows)
```

Listed below are some general options that can be used for starting the server.

| Startup Option | Description |
|---------------------|-----------------------------------------------------------------------------------------------------|
| -start | Starts the Carbon server using "nohup" in the background. This option is not available for Windows. |
| -stop | Stops the Carbon server process. This option is not available for Windows. |
| -restart | Restarts the Carbon server process. This option is not available for Windows. |
| -cleanRegistry | Cleans the registry space. **Caution:** All registry data will be lost. |
| -debug <port> | Starts the server in remote debugging mode. The remote debugging port should be specified. |
| -version | Shows the version of the product that you are running. |
| -help | Lists all the available commands and system properties. |

Listed below are some system properties that can be used when starting the server.
| Startup Option | Description |
|----------------|-------------|
| -DosgiConsole=[port] | Starts the Carbon server with the Equinox OSGi console. If the optional 'port' parameter is provided, a telnet port will be opened. |
| -DosgiDebugOptions=[options-file] | Starts the Carbon server with OSGi debugging enabled. Debug options are loaded from the `<PRODUCT_HOME>/repository/conf/etc/osgi-debug.options` file. |
| -Dsetup | Cleans the registry and other configurations, recreates the DB, re-populates the configuration, and starts the server. **Note:** It is not recommended to use this option in a production environment. Instead, you can manually run the DB scripts directly in the database. |
| -DworkerNode | Starts the product as a worker node, which means the front-end features of your product will not be enabled. **Note:** From Carbon 4.4.1 onwards, you can also start the worker profile by setting the `-DworkerNode=true` system property in the product startup script before the script is executed. |
| -DserverRoles=<roles> | A comma-separated list of roles used in deploying Carbon applications. |
| -Dprofile=<profileName> | Starts the server with the specified profile, e.g., the worker profile. |
| -Dtenant.idle.time=<time> | If a tenant is idle for the specified time, the tenant will be unloaded. The default tenant idle time is 30 minutes. This is required in clustered setups that have master and worker nodes. |

diff --git a/en/docs/install-and-setup/install/installing-the-product/installing-api-m-analytics-as-a-windows-service.md b/en/docs/install-and-setup/install/installing-the-product/installing-api-m-analytics-as-a-windows-service.md
deleted file mode 100644
index 23117c0f8d..0000000000
--- a/en/docs/install-and-setup/install/installing-the-product/installing-api-m-analytics-as-a-windows-service.md
+++ /dev/null
@@ -1,253 +0,0 @@

# Running API Manager 3.2.0 Analytics as a Windows Service

!!! note
    **Before you begin:**

    - See [our compatibility matrix]({{base_path}}/install-and-setup/ProductCompatibility) to find out if this version of the product is fully tested on your OS.

### Prerequisites

- Install the JDK and set up the `JAVA_HOME` environment variable.
- Download and install a service wrapper library to use for running WSO2 API Manager as a Windows service. WSO2 recommends Yet Another Java Service Wrapper (YAJSW) version [11.03](https://sourceforge.net/projects/yajsw/files/yajsw/yajsw-stable11.03/yajsw-stable-11.03.zip/download) or [12.14](https://sourceforge.net/projects/yajsw/files/yajsw/yajsw-stable-12.14/). Several WSO2 products provide a default `wrapper.conf` file in their `/bin/yajsw/` directory. The instructions below describe how to set up this file.

!!! important
    Note that JDK 11 might not be compatible with YAJSW 11.03. Use JDK 8 for YAJSW 11.03 and JDK 11 for YAJSW 12.14.

### Setting up API Analytics as a Windows Service

Use the following steps to install API Manager Analytics 3.2.0 as a Windows service.

1. Download the `wrapperconfs.zip` file and unzip it.

2. Add the following entry to the `pax-logging.properties` file of each of the dashboard and worker profiles of the Analytics server. The `pax-logging.properties` file is located in the `conf/<profile>/etc` directory.

    ```
    org.ops4j.pax.logging.log4j2.config.file=${carbon.home}/conf/${wso2.runtime}/log4j2.xml
    ```

3.
Create a directory for YAJSW.

4. Create two subdirectories for worker and dashboard in the YAJSW directory created in step 3.

5. Download the `yajsw-stable-12.14.zip` file from [here](https://sourceforge.net/projects/yajsw/files/yajsw/yajsw-stable-12.14/) and extract it.

6. Copy the extracted `yajsw-stable-12.14` to each of the two subdirectories created in step 4. This can be another version of YAJSW.

7. Copy the `wrapper.conf` file for each service (worker and dashboard) to their corresponding `/conf` directories. For example, the worker `wrapper.conf` needs to be copied to the `/worker/yajsw-stable-12.14/conf` directory.

8. Set the environment variables for `` and ``.

### Setting up the YAJSW wrapper configuration file

The configuration file used for wrapping Java applications by YAJSW is `wrapper.conf`, which is located in the `/conf/` directory. The configuration file in the `/bin/yajsw/` directory of many WSO2 products can be used as a reference for this. Following is the minimal `wrapper.conf` configuration for running a WSO2 product as a Windows service. Open your `wrapper.conf` file in the `/conf/` directory, set its properties as follows, and save it.

!!! info
    If you want to set additional properties from an external registry at runtime, store sensitive information like usernames and passwords for connecting to the registry in a properties file and secure it with [secure vault]({{base_path}}/administer/product-security/General/logins-and-passwords/admin-carbon-secure-vault-implementation).

!!! note
    **Manual Configurations**

    Add the following class path to the `wrapper.conf` file manually to avoid errors in the WSO2 API Manager Management Console:

    ``` bash
    wrapper.java.classpath.4 = ${carbon_home}/repository/components/plugins/commons-lang_2.6.0.wso2v1.jar
    ```

!!! tip
    You may encounter the following issue when starting Windows services, when the file "java" or a "dll" used by Java cannot be found by YAJSW.
- - ```bash - "Error 2: The system cannot find the file specified" - ``` - - This can be resolved by providing the "complete java path" for the wrapper.java.command as follows. - - ```bash - wrapper.java.command = ${JAVA_HOME}/bin/java - ``` - -**Minimal wrapper.conf configuration** - -``` bash - #******************************************************************** - # working directory - #******************************************************************** - wrapper.working.dir=${carbon_home}/ - # Java Main class. - # YAJSW: default is "org.rzo.yajsw.app.WrapperJVMMain" - # DO NOT SET THIS PROPERTY UNLESS YOU HAVE YOUR OWN IMPLEMENTATION - # wrapper.java.mainclass= - #******************************************************************** - # tmp folder - # yajsw creates temporary files named in_.. out_.. err_.. jna.. - # per default these are placed in jna.tmpdir. - # jna.tmpdir is set in setenv batch file to /tmp - #******************************************************************** - wrapper.tmp.path = ${jna_tmpdir} - #******************************************************************** - # Application main class or native executable - # One of the following properties MUST be defined - #******************************************************************** - # Java Application main class - wrapper.java.app.mainclass=org.wso2.carbon.bootstrap.Bootstrap - # Log Level for console output. (See docs for log levels) - wrapper.console.loglevel=INFO - # Log file to use for wrapper output logging. - wrapper.logfile=${wrapper_home}\/log\/wrapper.log - # Format of output for the log file. (See docs for formats) - #wrapper.logfile.format=LPTM - # Log Level for log file output. (See docs for log levels) - #wrapper.logfile.loglevel=INFO - # Maximum size that the log file will be allowed to grow to before - # the log is rolled. Size is specified in bytes. The default value - # of 0, disables log rolling by size. May abbreviate with the 'k' (kB) or - # 'm' (mB) suffix. 
For example: 10m = 10 megabytes. - # If wrapper.logfile does not contain the string ROLLNUM it will be automatically added as suffix of the file name - wrapper.logfile.maxsize=10m - # Maximum number of rolled log files which will be allowed before old - # files are deleted. The default value of 0 implies no limit. - wrapper.logfile.maxfiles=10 - # Title to use when running as a console - wrapper.console.title=WSO2 Carbon - #******************************************************************** - # Wrapper Windows Service and Posix Daemon Properties - #******************************************************************** - # Name of the service - wrapper.ntservice.name=WSO2CARBON - # Display name of the service - wrapper.ntservice.displayname=WSO2 Carbon - # Description of the service - wrapper.ntservice.description=Carbon Kernel - #******************************************************************** - # Wrapper System Tray Properties - #******************************************************************** - # enable system tray - wrapper.tray = true - # TCP/IP port. If none is defined multicast discovery is used to find the port - # Set the port in case multicast is not possible. 
- wrapper.tray.port = 15002 - #******************************************************************** - # Exit Code Properties - # Restart on non zero exit code - #******************************************************************** - wrapper.on_exit.0=SHUTDOWN - wrapper.on_exit.default=RESTART - #******************************************************************** - # Trigger actions on console output - #******************************************************************** - # On Exception show message in system tray - wrapper.filter.trigger.0=Exception - wrapper.filter.script.0=${wrapper_home}/scripts/trayMessage.gv - wrapper.filter.script.0.args=Exception - #******************************************************************** - # genConfig: further Properties generated by genConfig - #******************************************************************** - placeHolderSoGenPropsComeHere= - wrapper.java.command = java - wrapper.java.classpath.1 = ${carbon_home}/bin/*.jar - wrapper.java.classpath.2 = ${carbon_home}/lib/commons-lang-*.jar - wrapper.java.classpath.3 = ${carbon_home}/lib/*.jar - wrapper.app.parameter.1 = org.wso2.carbon.bootstrap.Bootstrap - wrapper.app.parameter.2 = RUN - wrapper.java.additional.1 = -Xbootclasspath/a:${carbon_home}/lib/xboot/*.jar - wrapper.java.additional.2 = -Xms256m - wrapper.java.additional.3 = -Xmx1024m - wrapper.java.additional.4 = -XX:MaxPermSize=256m - wrapper.java.additional.5 = -XX:+HeapDumpOnOutOfMemoryError - wrapper.java.additional.6 = -XX:HeapDumpPath=${carbon_home}/repository/logs/heap-dump.hprof - wrapper.java.additional.7 = -Dcom.sun.management.jmxremote - wrapper.java.additional.8 = -Dcarbon.registry.root=\/ - wrapper.java.additional.9 = -Dcarbon.home=${carbon_home} - wrapper.java.additional.10 = -Dwso2.server.standalone=true - wrapper.java.additional.11 = -Djava.command=${java_home}/bin/java - wrapper.java.additional.12 = -Djava.io.tmpdir=${carbon_home}/tmp - wrapper.java.additional.13 = 
-Dcatalina.base=${carbon_home}/lib/tomcat - wrapper.java.additional.14 = -Djava.util.logging.config.file=${carbon_home}/repository/conf/etc/logging-bridge.properties - wrapper.java.additional.15 = -Dcarbon.config.dir.path=${carbon_home}/repository/conf - wrapper.java.additional.16 = -Dcarbon.logs.path=${carbon_home}/repository/logs - wrapper.java.additional.17 = -Dcomponents.repo=${carbon_home}/repository/components/plugins - wrapper.java.additional.18 = -Dconf.location=${carbon_home}/repository/conf - wrapper.java.additional.19 = -Dcom.atomikos.icatch.file=${carbon_home}/lib/transactions.properties - wrapper.java.additional.20 = -Dcom.atomikos.icatch.hide_init_file_path=true - wrapper.java.additional.21 = -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true - wrapper.java.additional.22 = -Dcom.sun.jndi.ldap.connect.pool.authentication=simple - wrapper.java.additional.23 = -Dcom.sun.jndi.ldap.connect.pool.timeout=3000 - wrapper.java.additional.24 = -Dorg.terracotta.quartz.skipUpdateCheck=true - wrapper.java.additional.25 = -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false - wrapper.java.additional.26 = -Dfile.encoding=UTF8 - wrapper.java.additional.27 = -DworkerNode=false - wrapper.java.additional.28 = -Dhttpclient.hostnameVerifier=DefaultAndLocalhost - wrapper.java.additional.29 = -Dcarbon.new.config.dir.path=${carbon_home}/repository/resources/conf -``` - -### Setting up CARBON\_HOME - -Extract WSO2 API Manager that you want to run as a Windows service, and then set the Windows environment variable `CARBON_HOME` to the extracted product directory location. - -### Running the product in console mode - -You will now verify that YAJSW is configured correctly for running the WSO2 API Manager as a Windows service. - -1. Open a Windows command prompt and go to the `/bat/` directory. For example: - - ``` java - cd C:\Documents and Settings\yajsw_home\bat - ``` - -2. 
Start the wrapper in console mode using the following command: - - ``` java - runConsole.bat - ``` - - For example: - - ![]({{base_path}}/assets/attachments/28717183/29364287.png) - -If the configurations are set properly for YAJSW, you will see console output similar to the following and can now access the WSO2 management console from your web browser via . - -![]({{base_path}}/assets/attachments/28717183/29364286.png) - -### Working with the WSO2CARBON service - -To install the Carbon-based WSO2 API Manager as a Windows service, execute the following command in the `/bat/` directory: - -``` java -installService.bat -``` - -The console will display a message confirming that the WSO2CARBON service was installed. - -![]({{base_path}}/assets/attachments/28717183/29364285.png) - -To start the service, execute the following command in the same console window: - -``` java -startService.bat -``` - -The console will display a message confirming that the WSO2CARBON service was started. - -![]({{base_path}}/assets/attachments/28717183/29364288.png) - -To stop the service, execute the following command in the same console window: - -``` java -stopService.bat -``` - -The console will display a message confirming that the WSO2CARBON service has stopped. - -![]({{base_path}}/assets/attachments/28717183/29364290.png) - -To uninstall the service, execute the following command in the same console window: - -``` java -uninstallService.bat -``` - -The console will display a message confirming that the WSO2CARBON service was removed. 
- -![]({{base_path}}/assets/attachments/28717183/29364291.png) diff --git a/en/docs/install-and-setup/setup/advance-configurations/configure-certificate-revocation.md b/en/docs/install-and-setup/setup/advance-configurations/configure-certificate-revocation.md deleted file mode 100644 index 9b5cee2cb2..0000000000 --- a/en/docs/install-and-setup/setup/advance-configurations/configure-certificate-revocation.md +++ /dev/null @@ -1,43 +0,0 @@ -# Verifying Certificate Revocation - -The default HTTPS transport listener (Secured Pass-Through) can verify with the certificate authority whether a certificate is still trusted before it completes an SSL connection. If the certificate authority has revoked the certificate, the connection will not be completed. - -When this feature is enabled, the transport listener verifies client certificates when a client tries to make an HTTPS connection with the server. Therefore, the client needs to send its public certificate along with the requests to the server. - -After this feature is enabled, the server attempts to use the Online Certificate Status Protocol (OCSP) to verify with the certificate authority at the handshake phase of the SSL protocol. If OCSP is not supported by the certificate authority, the server uses Certificate Revocation Lists (CRL) instead. The verification process checks all the certificates in a certificate chain. - -To enable this feature for the HTTP Pass-Through, add the following parameters to the ```/repository/conf/deployment.toml``` file and set ```enable``` to ```true```. -This adds these parameters to the Passthrough HTTP Multi SSL Listener in the ```/repository/conf/axis2/axis2.xml``` file. -The other configurations can be changed as required. The default configurations are mentioned below. 
- -```toml -[transport.passthru_https.listener.cert_revocation_validation] -enable = false -cache_size = 1024 -cache_delay = 1000 -allow_full_cert_chain_validation = true -allow_cert_expiry_validation = false -``` - -When ```allow_full_cert_chain_validation``` is set to ```true``` it is required to send the complete certificate chain in the request. -The ```allow_cert_expiry_validation``` can be set to ```true``` if the certificate expiry validation is required. - -If the ```allow_full_cert_chain_validation``` is set to ```false``` a single client certificate is expected in the request and the revocation validation will be done for that certificate. For this to happen it is required to add the immediate issuer of the client certificate in the server's trust store. -Same as above, the ```allow_cert_expiry_validation``` can be set to ```true``` if the certificate expiry validation is required. - -In the instances of custom listener profiles are added, following configuration can be used to configure the custom ```/repository/resources/security/listenerprofiles.xml``` file. - -``` - - true - 1024 - 1000 - false - true - -``` diff --git a/en/docs/install-and-setup/setup/distributed-deployment/clustering-gateway-for-ha-using-rsync.md b/en/docs/install-and-setup/setup/distributed-deployment/clustering-gateway-for-ha-using-rsync.md deleted file mode 100644 index 001678bd82..0000000000 --- a/en/docs/install-and-setup/setup/distributed-deployment/clustering-gateway-for-ha-using-rsync.md +++ /dev/null @@ -1,79 +0,0 @@ -# Configuring rsync for Deployment Synchronization - -Deployment synchronization can be done using [rsync](https://download.samba.org/pub/rsync/rsync.html), which is a file copying tool. These changes must be done in the manager node and in the same directory. - -1. Create a file named `workers-list.txt` , somewhere in your machine, that lists all the worker nodes in the deployment. The following is a sample of the file where there are two worker nodes. 
- - !!! tip - Different nodes are separated by new lines. - - **workers-list.txt** - - ``` java - ubuntu@192.168.1.1:~/setup/192.168.1.1/as/as_worker/repository/deployment/server - ubuntu@192.168.1.2:~/setup/192.168.1.2/as/as_worker/repository/deployment/server - ``` - - !!! note - If you have configured tenants in worker nodes, you need to add the `repository/tenants` directory of the worker node to the list to synchronize tenant space. For example, if the node `ubuntu@192.168.1.1` needs to be synced with both the super tenant and the tenant space, the following two entries should be added to the `workers-list.txt` file. - - **workers-list.txt** - - ``` java - ubuntu@192.168.1.1:~/setup/192.168.1.1/apim/apim_worker/repository/deployment/server - ubuntu@192.168.1.1:~/setup/192.168.1.1/apim/apim_worker/repository/tenants - ``` - -2. Create a file to synchronize the `/repository/deployment/server` folders between the manager and all worker nodes. - - !!! note - You must create your own SSH key and define it as the `pem_file` . Alternatively, you can use an existing SSH key. For information on generating and using the SSH Keys, go to the [SSH documentation](https://www.ssh.com/ssh/keygen/) . Specify the `manager_server_dir` depending on the location in your local machine. Change the `logs.txt` file path and the lock location based on where they are located in your machine. 
- - **rsync-for-carbon-depsync.sh** - - ``` java - #!/bin/sh - manager_server_dir=~/wso2as-5.2.1/repository/deployment/server/ - pem_file=~/.ssh/carbon-440-test.pem - - - #delete the lock on exit - trap 'rm -rf /var/lock/depsync-lock' EXIT - - mkdir /tmp/carbon-rsync-logs/ - - - #keep a lock to stop parallel runs - if mkdir /var/lock/depsync-lock; then - echo "Locking succeeded" >&2 - else - echo "Lock failed - exit" >&2 - exit 1 - fi - - #get the workers-list.txt - pushd `dirname $0` > /dev/null - SCRIPTPATH=`pwd` - popd > /dev/null - echo $SCRIPTPATH - - - for x in `cat ${SCRIPTPATH}/workers-list.txt` - do - echo ================================================== >> /tmp/carbon-rsync-logs/logs.txt; - echo Syncing $x; - rsync --delete -arve "ssh -i $pem_file -o StrictHostKeyChecking=no" $manager_server_dir $x >> /tmp/carbon-rsync-logs/logs.txt - echo ================================================== >> /tmp/carbon-rsync-logs/logs.txt; - done - ``` - -3. Create a Cron job that executes the above file every minute for deployment synchronization. Do this by running the following command in your command line. - - !!! note - You can only run the Cron job on one given node (master) at a given time. If you switch the Cron job to another node, you must stop the Cron job on the existing node and start a new Cron job on the new node after updating it with the latest files. 
- - - ``` java - * * * * * /home/ubuntu/setup/rsync-for-depsync/rsync-for-depsync.sh - ``` - \ No newline at end of file diff --git a/en/docs/install-and-setup/setup/distributed-deployment/configuring-the-gateway-in-a-distributed-environment-with-rsync.md b/en/docs/install-and-setup/setup/distributed-deployment/configuring-the-gateway-in-a-distributed-environment-with-rsync.md deleted file mode 100644 index 8901967453..0000000000 --- a/en/docs/install-and-setup/setup/distributed-deployment/configuring-the-gateway-in-a-distributed-environment-with-rsync.md +++ /dev/null @@ -1,129 +0,0 @@ -# Configuring the Gateway in a Distributed Environment with rsync - -!!! note -WSO2 recommends using a shared file system over rsync as the content synchronization mechanism. For more information, see [Distributed Deployment of the Gateway](../distributed-deployment-of-the-gateway/). - - -Use remote synchronization (rsync) only if you are unable to use a shared file system. rsync grants write permission to only one node, which must therefore act as the Gateway Manager; as a result, an rsync-based deployment is vulnerable to a single point of failure. - -Follow the instructions below to configure the API-M Gateway in a distributed environment when using rsync as the content synchronization mechanism: - -- [Step 1 - Configure the load balancer](#step-1-configure-the-load-balancer) -- [Step 2 - Configure the Gateway Manager](#step-2-configure-the-gateway-manager) -- [Step 3 - Configure the Gateway Worker](#step-3-configure-the-gateway-worker) -- [Step 4 - Optionally configure Hazelcast](#step-4-optionally-configure-hazelcast) -- [Step 5 - Start the Gateway Nodes](#step-5-start-the-gateway-nodes) - -The configurations in this topic are based on the following pattern: a basic Gateway cluster in which the worker nodes and manager nodes are separated. 
-![]({{base_path}}/assets/attachments/103334495/103334496.png) - -### Step 1 - Configure the load balancer - -For more information, see [Configuring the Proxy Server and the Load Balancer](../../configuring-the-proxy-server-and-the-load-balancer/). - -### Step 2 - Configure the Gateway Manager - -These nodes refer to the management nodes that specialize in the management of the setup. Only management nodes are authorized to add new artifacts into the system or make configuration changes. Management nodes are usually behind an internal firewall and are exposed to clients running within the organization only. This section involves setting up the Gateway node and enabling it to work with the other components in the distributed setup. - -??? Info "Click here for information on configuring the Gateway Manager" - - 1. Configure the `deployment.toml` file. - 1. Open `/repository/conf/deployment.toml` file on the management node and add the cluster hostname. - ``` toml - [server] - hostname = "mgt.am.wso2.com" - ``` - - 2. Specify the following incoming connection configurations - - ``` - [transport.http] - properties.port = 9763 - properties.proxyPort = 80 - - [transport.https] - properties.port = 9443 - properties.proxyPort = 443 - ``` - - The TCP `port` number is the value that this `Connector` uses to create a server socket and await incoming connections. The operating system will allow only one server application to listen to a particular port number on a particular IP address. - - 3. Map the hostnames to IPs. - Open the server's `/etc/hosts` file and add the following. - - ``` plain - am.wso2.com - ``` - - **Example Format** - - ``` java - xxx.xxx.xxx.xx4 am.wso2.com - ``` - - Once you replicate these configurations for all the manager nodes, your Gateway manager is configured. - -### Step 3 - Configure the Gateway Worker - -Worker nodes specialize in serving requests to deployment artifacts and reading them. They can be exposed to external clients. - -??? 
Info "Click here for information on configuring the Gateway Worker" - - 1. Configure the `deployment.toml` file. - 1. Open the `/repository/conf/deployment.toml` file on the worker node and add the cluster hostname. - ``` toml - [server] - hostname = "am.wso2.com" - ``` - - 2. Specify the following incoming connection configurations: - - ``` toml - [transport.http] - properties.port = 9763 - properties.proxyPort = 80 - - [transport.https] - properties.port = 9443 - properties.proxyPort = 443 - ``` - - 3. Map the hostnames to IPs. - Open the server's `/etc/hosts` file and add the following in order to map the hostnames to the specified real IPs. - - ``` plain - mgt.am.wso2.com - ``` - - 4. Configure rsync. - For information on configuring rsync, see [Configuring rsync for Deployment Synchronization](../../configuring-rsync-for-deployment-synchronization/). - -### Step 4 - Optionally configure Hazelcast - -You can seamlessly deploy WSO2 API Manager using local caching in a clustered setup without Hazelcast clustering. However, there are edge case scenarios where you need to enable Hazelcast clustering. For more information, see [Working with Hazelcast Clustering](../working-with-hazelcast-clustering/) to identify whether you need Hazelcast clustering and to configure it. - -### Step 5 - Start the Gateway Nodes - -Start the Gateway Manager and then the Gateway Worker nodes. - -??? Info "Click here for information on starting the Gateway nodes" - - #### Step 5.1 - Start the Gateway Manager - - Start the Gateway Manager by typing the following command in the command prompt. - - ``` java - sh /bin/api-manager.sh - ``` - - #### Step 5.2 - Start the Gateway Worker - - !!! tip - It is recommended to delete the `/repository/deployment/server` directory and create an empty server directory in the worker node. This is done to avoid any SVN conflicts that may arise. 
Note that when you do this, you have to restart the worker node after you start it in order to avoid any errors from occurring . - Start the Gateway Worker by typing the following command in the command prompt. - - ``` java - sh /bin/api-manager.sh -Dprofile=gateway-worker - ``` - - The additional `-Dprofile=gateway-worker` argument indicates that this is a worker node specific to the Gateway. You need to use this parameter to make a server read-only. Changes (i.e., writing or making modifications to the deployment repository, etc.) can not be made in the Gateway worker nodes. Furthermore, starting a node as a Gateway worker ensures that the Developer Portal and Publisher related functions are disabled in the respective node. This parameter also ensures that the node starts in the worker profile, where the UI bundles are not activated and only the backend bundles are activated when the server starts up. For more information on the various product profiles available in WSO2 API Manager, see [API Manager product profiles](../product-profiles/) . diff --git a/en/docs/install-and-setup/setup/security/configuring-tls-termination.md b/en/docs/install-and-setup/setup/security/configuring-tls-termination.md deleted file mode 100644 index 9352a25d53..0000000000 --- a/en/docs/install-and-setup/setup/security/configuring-tls-termination.md +++ /dev/null @@ -1,25 +0,0 @@ -# Configuring TLS Termination - -When you have Carbon servers fronted by a load balancer, you have the option of terminating SSL for HTTPS requests. This means that the load balancer will be decrypting incoming HTTPS messages and forwarding them to the Carbon servers as HTTP. This is useful when you want to reduce the load on your Carbon servers due to encryption. To achieve this, the load balancer should be configured with TLS termination and the Tomcat RemoteIpValve should be enabled for Carbon servers. 
- -When you work with Carbon servers, this will allow you to access admin services and the admin console of your product using HTTP (without SSL). - -Given below are the steps you need to follow: - -- [Step 1: Configuring the load balancer with TLS termination](#ConfiguringTLSTermination-Step1:ConfiguringtheloadbalancerwithTLStermination) -- [Step 2: Enabling RemoteIpValve for Carbon servers](#ConfiguringTLSTermination-Step2:EnablingRemoteIpValveforCarbonservers) - -### Step 1: Configuring the load balancer with TLS termination - -See the documentation of the load balancer that you are using for instructions on how to enable TLS termination. For example, see [NGINX SSL Termination](https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/). - -### Step 2: Enabling RemoteIpValve for Carbon servers - -You can enable Tomcat's RemoteIpValve for your Carbon server by adding the valve to the `catalina-server.xml` file (stored in the `/repository/conf/tomcat` directory). This valve should be specified under the `` element (shown below) in the `catalina-server.xml` file. See the [Tomcat documentation](https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html) for more information about `RemoteIpValve`. - -``` java - - ............ - - -``` diff --git a/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md b/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md deleted file mode 100644 index bcb12490a9..0000000000 --- a/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md +++ /dev/null @@ -1,362 +0,0 @@ -# Encrypting Passwords with Cipher Tool - -The instructions on this page explain how plain text passwords in configuration files can be encrypted using the secure vault implementation that is built into WSO2 products. 
Note that you can customize the default secure vault configurations in the product by implementing a new secret repository, call back handler etc. Read more about the [Secure Vault implementation](https://docs.wso2.com/display/ADMIN44x/Carbon+Secure+Vault+Implementation) in WSO2 products. - -In any WSO2 product that is based on Carbon 4.4.0 or a later version, the Cipher Tool feature will be installed by default. You can use this tool to easily encrypt passwords or other elements in configuration files. - -!!! note - - If you are a developer who is building a Carbon product, see the topic on enabling [Cipher Tool for password encryption](https://docs.wso2.com/display/Carbon4410/Enabling+Cipher+Tool+for+Password+Encryption) for instructions on how to include the Cipher Tool as a feature in your product build. - - The default keystore that is shipped with your WSO2 product (i.e. `wso2carbon.jks` ) is used for password encryption by default. See this [link](https://docs.wso2.com/display/ADMIN44x/Creating+New+Keystores) for details on how to set up and configure new keystores for encrypting plain text passwords. - - -Follow the topics given below for instructions. - -- [Before you begin](#EncryptingPasswordswithCipherTool-Beforeyoubegin) -- [Encrypting passwords using the automated process](#EncryptingPasswordswithCipherTool-automatedEncryptingpasswordsusingtheautomatedprocess) -- [Encrypting passwords manually](#EncryptingPasswordswithCipherTool-manual_processEncryptingpasswordsmanually) -- [Changing encrypted passwords](#EncryptingPasswordswithCipherTool-changing_encrypted_passwordsChangingencryptedpasswords) - -### Before you begin - -If you are using Windows, you need to have **Ant** ( ) installed before using the Cipher Tool. - -### Encrypting passwords using the automated process - -This automated process can only be used for passwords that can be given as an XPath. 
If you cannot give an XPath for the password that you want to encrypt, you must use the [manual encryption process](#EncryptingPasswordswithCipherTool-manual_process) explained in the next section. - -Follow the steps given below to have passwords encrypted using the automated process: - -1. The first step is to update the `cipher-tool.properties` file and the `cipher-text.properties` file with information of the passwords that you want to encrypt. - - !!! info - By default, the `cipher-tool.properties` and `cipher-text.properties` files that are shipped with your product will contain information on the most common passwords that require encryption. If a required password is missing in the default files, you can **add them manually** . - - - Follow the steps given below. - - 1. Open the `cipher-tool.properties` file stored in the `/repository/conf/security` folder. This file should contain information about the configuration files in which the passwords (that require encryption) are located. The following format is used: - - ``` java - =//, - ``` - - !!! info - **Important!** - - - The `` should be the same value that is hard-coded in the relevant Carbon component. - - The `` specifies the path to the XML file that contains the password. This can be the relative file path, or the absolute file path (starting from `` ). - - - The `` specifies the XPath to the XML **element** / **attribute** / **tag** that should be encrypted. See the examples given below. - - The flag that follows the XPath should be set to 'false' if you are encrypting the value of an **XML element,** or the value of an **XML attribute's tag.** The flag should be 'true' if you are encrypting the **tag** of an **XML attribute** . See the examples given below. 
- - - When using Secure Vault, as you use the password aliases in the `/repository/conf/carbon.xml` file, make sure to define these aliases in the following files, which are in the `/repository/conf/security` directory as follows: - - - Define your password in the `cipher-text.properties` file. - - ``` java - Carbon.Security.InternalKeyStore.Password=[your_password] - Carbon.Security.InternalKeyStore.KeyPassword=[your_password] - ``` - - - Define the XPath of your password in the `cipher-tool.properties` file. - - ``` java - Carbon.Security.InternalKeyStore.Password=repository/conf/carbon.xml//Server/Security/InternalKeyStore/Password,false - Carbon.Security.InternalKeyStore.KeyPassword=repository/conf/carbon.xml//Server/Security/InternalKeyStore/KeyPassword,false - ``` - - !!! note - Only applicable when using WSO2 API Manager Analytics - - When using Secure Vault with WSO2 API Manager Analytics (WSO2 API-M Analytics), make sure to define the password aliases in the following files, which are in the `/repository/conf/security` directory as follows: - - - Define your password in the `cipher-text.properties` file. - - ``` java - DataBridge.Config.keyStorePassword=[your_password] - Analytics.DASPassword=[your_password] - Analytics.DASRestApiPassword=[your_password] - ``` - - - Define the XPath of your password in the `cipher-tool.properties` file. - - ``` java - DataBridge.Config.keyStorePassword=repository/conf/data-bridge/data-bridge-config.xml//dataBridgeConfiguration/keyStorePassword,false - Analytics.DASPassword=repository/conf/api-manager.xml//APIManager/analytics/DASPassword,true - Analytics.DASRestApiPassword=repository/conf/api-manager.xml//APIManager/analytics/DASRestApiPassword,true - ``` - - - **Example 1:** Consider the admin user's password in the `user-mgt.xml` file shown below. - - ``` java - - - - true - admin - - admin - admin - - ........ - - ........ 
- - - ``` - - To encrypt this password, the `cipher-tool.properties` file should contain the details shown below. Note that this password is a value given to an XML **element** (which is 'Password'). Therefore, the XPath ends with the element name, and the flag that follows the XPath is set to 'false'. - - ``` java - UserManager.AdminUser.Password=repository/conf/user-mgt.xml//UserManager/Realm/Configuration/AdminUser/Password,false - ``` - - **Example 2:** Consider the password that is used to [connect to an LDAP user store](https://docs.wso2.com/display/ADMIN44x/Configuring+the+Primary+User+Store) (configured in the `user-mgt.xml` file) shown below. - - ``` java - admin - ``` - - To encrypt this password, the `cipher-tool.properties` file should be updated as shown below. Note that there are two possible alias values you can use for this attribute. In this example, the 'Property' **element** of the XML file uses the 'name' **attribute** with the "ConnectionPassword" **tag** . The password we are encrypting is the value of this "ConnectionPassword" tag. This is denoted in the XPath as 'Property\[@name='ConnectionPassword'\]', and the flag that follows the XPath is set to 'false'. - - - Using the `UserStoreManager.Property.ConnectionPassword` alias: - - ``` java - UserStoreManager.Property.ConnectionPassword=repository/conf/user-mgt.xml//UserManager/Realm/UserStoreManager/Property[@name='ConnectionPassword'],false - ``` - - - Using the `UserManager.Configuration.Property.ConnectionPassword` alias: - - ``` java - UserManager.Configuration.Property.ConnectionPassword=repository/conf/user-mgt.xml//UserManager/Realm/UserStoreManager/Property[@name='ConnectionPassword'],false - ``` - - !!! note - If you are trying the above example, be sure that only the relevant user store manager is enabled in the `user-mgt.xml` file. - - - **Example 3:** Consider the keystore password specified in the `catalina-server.xml` file shown below. 
- - ``` java - - ``` - - To encrypt this password, the `cipher-tool.properties` file should contain the details shown below. In this example, 'Connector' is the XML **element** , and 'keystorePass' is an **attribute** of that element. The password value that we are encrypting is the **tag** of the XML attribute. This is denoted in the XPath as 'Connector\[@keystorePass\]', and the flag that follows the XPath is set to ‘true’. - - ``` java - Server.Service.Connector.keystorePass=repository/conf/tomcat/catalina-server.xml//Server/Service/Connector[@keystorePass],true - ``` - - 2. Open the `cipher-text.properties` file stored in the `/repository/conf/security` folder. This file should contain the secret alias names and the corresponding plaintext passwords (enclosed within square brackets) as shown below. - - ``` java - =[plain_text_password] - ``` - - Shown below are the records in the `cipher-text.properties` file for the three examples discussed above. - - ``` java - //Example 1: Encrypting the admin user's password in the user-mgt.xml file. - UserManager.AdminUser.Password=[admin] - //Example 2: Encrypting the LDAP connection password in the user-mgt.xml file. Use one of the following: - UserStoreManager.Property.ConnectionPassword=[admin] - # UserManager.Configuration.Property.ConnectionPassword=[admin] - //Example 3: Encrypting the keystore password in the catalina-server.xml file. - Server.Service.Connector.keystorePass=[wso2carbon] - ``` - - !!! note - If your password contains a backslash character (\\) you need to use an alias with the escape characters. For example, if your password is `admin\}` the value should be given as shown in the example below. - - ``` java - UserStoreManager.Property.ConnectionPassword=[admin\\}] - ``` - - -2. Open a command prompt and go to the `/bin` directory, where the cipher tool scripts (for Windows and Linux) are stored. - -3. 
Execute the cipher tool script from the command prompt using the command relevant to your OS: - - - On Windows: `./ciphertool.bat -Dconfigure` - - - On Linux: `./ciphertool.sh -Dconfigure` - -4. The following message will be prompted:  "\[Please Enter Primary KeyStore Password of Carbon Server :\]". Enter the keystore password (which is "wso2carbon" for the default [keystore](https://docs.wso2.com/display/ADMIN44x/Using+Asymmetric+Encryption) ) and proceed. If the script execution is successful, you will see the following message: "Secret Configurations are written to the property file successfully". - - !!! note - If you are using the cipher tool for the first time, the - `Dconfigure` command will first initialize the tool for your product. The tool will then start encrypting the plain text passwords you specified in the `cipher-text.properties` file. - - Shown below is an example of an alias and the corresponding plaintext password (in square brackets) in the `cipher-text.properties` file: - - ``` java - UserManager.AdminUser.Password=[admin] - ``` - - If a password is not specified in the `cipher-text.properties` file for an alias, the user needs to provide it through the command line.  Check whether the alias is a known password alias in Carbon configurations. If the tool modifies the configuration element and file, you must replace the configuration element with the alias name. Define a Secret Callback in the configuration file and add proper namespaces for defining the Secure Vault. - - -5. Now, to verify the password encryption: - - - Open the `cipher-text.properties` file and see that the plain text passwords are replaced by a cipher value. - - - Open the `secret-conf.properties` file from the `/repository/conf/security/` folder and see that the default configurations are changed. - -### Encrypting passwords manually - -This manual process can be used for encrypting any password in a configuration file. 
However, if you want to encrypt any elements that cannot use an XPath to specify the location in a configuration file, you must use manual encryption. It is not possible to use the [automated encryption process](#EncryptingPasswordswithCipherTool-automated) if an XPath is not specified for the element. - -For example, consider the `log4j.properties` file given below, which does not use XPath notations. As shown below, the password of the `LOGEVENT` appender is set to `admin` : - -``` java - # LOGEVENT is set to be a LogEventAppender using a PatternLayout to send logs to LOGEVENT - log4j.appender.LOGEVENT=org.wso2.carbon.logging.service.appender.LogEventAppender - log4j.appender.LOGEVENT.url=tcp://localhost:7611 - log4j.appender.LOGEVENT.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout - log4j.appender.LOGEVENT.columnList=%T,%S,%A,%d,%c,%p,%m,%I,%Stacktrace - log4j.appender.LOGEVENT.userName=admin - log4j.appender.LOGEVENT.password=admin - log4j.appender.LOGEVENT.processingLimit=1000 - log4j.appender.LOGEVENT.maxTolerableConsecutiveFailure=20 -``` - -Since we cannot use the [automated process](#EncryptingPasswordswithCipherTool-automated) to encrypt the `admin` password shown above, follow the steps given below to encrypt it manually. - -1. Download and install a WSO2 product. -2. Open a command prompt and go to the `/bin` directory, where the cipher tool scripts (for Windows and Linux) are stored. - -3. You must first enable the Cipher tool for the product by executing the `-` Dconfigure command with the cipher tool script as shown below. - - - On Linux: `./ciphertool.sh -Dconfigure` - - - On Windows: `./ciphertool.bat -Dconfigure` - - !!! note - If you are using the cipher tool for the first time, this command will first initialize the tool for your product. The tool will then encrypt any plain text passwords that are specified in the `cipher-text.properties` file. 
See the [automated encryption process](#EncryptingPasswordswithCipherTool-automated) for more information. - - -4. Now, you can start encrypting the admin password manually. Execute the Cipher tool using the relevant command for your OS: - - - On Linux: `./ciphertool.sh ` - - - On Windows: `./ciphertool.bat ` - -5. You will be asked to enter the primary key password, which is by default 'wso2carbon'. Enter the password and proceed. -6. You will now be asked to enter the plain text password that you want to encrypt. Enter the following element as the password and proceed: - - ``` java - Enter Plain Text Value :admin - ``` - - !!! info - Note that in certain configuration files, the password that requires encryption may not be specified as a single value as it is in the log4j.properties file. For example, the jndi.properties file used in WSO2 ESB contains the password in the connection URL. In such cases, you need to encrypt the entire connection URL as explained [here](#EncryptingPasswordswithCipherTool-encrypting_jndi) . - - -7. You will receive the encrypted value. For example: - - ``` java - Encrypted value is: - gaMpTzAccMScaHllsZLXspm1i4HLI0M/srL5pB8jyknRKQ2zT7NuCvt1+qEkElRLgwlrohz3lkuE0KFuapXrCSs5pxfGMOLn4/k7dNs2SlwbsG8C++/ - ZfUuft1Sl6cqvDRM55fQwzCPfybl713HvKu3oDaJ9VKgSbvHlQj6zqzg= - ``` - -8. Open the `cipher-text.properties` file, stored in the `/repository/conf/security` folder. - -9. Add the encrypted password against the secret alias as shown below. - - ``` java - log4j.appender.LOGEVENT.password=cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZ - DxVeoa6RjyA1JRHutZ4SfzfSgSzy2GQJ/2jQIw70IeT5EQEAR8XLGaqlsE5IlNoe9dhyLiPXEPRGq4k/BgUQD - YiBg0nU7wRsR8YXrvf+ak8ulX2yGv0Sf8= - ``` - -10. Now, open the `log4j.properties` file, stored in the `/repository/conf` folder and replace the plain text element with the alias of the encrypted value as shown below. - - ``` java - # LOGEVENT is set to be a LogEventAppender using a PatternLayout to send logs to LOGEVENT - .... 
- log4j.appender.LOGEVENT.password=secretAlias:log4j.appender.LOGEVENT.password - .... - ``` - -11. If you are encrypting a password in the `/repository/conf/identity/EndpointConfig.properties` file, you need to add the encrypted values of the keys in the `EndpointConfig.properties` file itself. - - !!! note - This step is **only applicable** if you are encrypting a password in the `EndpointConfig.properties` file. - - - For example, if you have encrypted values for the following keys. - - -`Carbon.Security.KeyStore.Password` - - -`Carbon.Security.TrustStore.Password` - - Then you need to add a new key named `protectedTokens` in the `/repository/conf/identity/EndpointConfig.properties` file and add the above keys using comma separated values shown below: - - ``` java - protectedTokens=Carbon.Security.KeyStore.Password,Carbon.Security.TrustStore.Password - ``` - - As we have already disabled this feature by setting "tenantListEnabled=false" in the EndpointConfig.properties, the mutual SSL is not required. Therefore, add below property as well to the properties. - -``` java - mutualSSLManagerEnabled=false -``` - -Another example of a configuration file that uses passwords without an XPath notation is the jndi.properties file. This file is used in WSO2 Enterprise Service Bus (WSO2 ESB) for the purpose of connecting to a message broker. You can read more about this functionality from [here](https://docs.wso2.com/display/ESB490/Configure+with+WSO2+Message+Broker) . As shown below, this file contains a password value (admin) in the connection URL ( ' '). To encrypt this password, you can follow the same manual process [explained above](#EncryptingPasswordswithCipherTool-encrypting_log4j) . However, you must encrypt the entire connection URL ( ' ') and not just the password value given in the URL. 
-
-``` java
-# register some connection factories
-# connectionfactory.[jndiname] = [ConnectionURL]
-connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
-
-# register some queues in JNDI using the form
-# queue.[jndiName] = [physicalName]
-queue.MyQueue = example.MyQueue
-
-# register some topics in JNDI using the form
-# topic.[jndiName] = [physicalName]
-topic.MyTopic = example.MyTopic
-```
-!!! note
-    ***Note that the following instructions are currently under review.***
-
-If the passwords in your `jndi.properties` file contain special characters, note the following:
-
-- It is not possible to use the `@` symbol in the username or password.
-- It is not possible to use the percentage (%) sign in the password as-is, because the percentage (%) sign acts as the escape character when the connection URL is parsed, and a literal `%` therefore causes a parsing exception. If you need a percentage (%) sign in the connection string, replace it with its URL-encoded form. For example, if you need to pass `adm%in` as the password, encode the `%` symbol and send the password as `adm%25in`.
-    For a list of possible URL parsing patterns, see the [URL encoding reference](http://www.w3schools.com/tags/ref_urlencode.asp).
-
-
-### Changing encrypted passwords
-
-To change a password that has already been encrypted, follow the steps below:
-
-1. Shut down the server.
-
-2. Open a command prompt and go to the `/bin` directory, where the cipher tool scripts (for Windows and Linux) are stored.
-
-3. Execute the following command for your OS:
-
-    - On Linux: `./ciphertool.sh -Dchange`
-
-    - On Windows: `./ciphertool.bat -Dchange`
-
-    !!! note
-        If you are using the cipher tool for the first time, this command will first initialize the tool for your product.
The tool will then encrypt any plain text passwords that are specified in the `cipher-text.properties` file for [automatic encryption](#EncryptingPasswordswithCipherTool-automated). - - -4. It will prompt for the primary keystore password. Enter the keystore password (which is "wso2carbon" for the default keystore). - -5. The alias values of all the passwords that you encrypted will now be shown in a numbered list. - -6. The system will then prompt you to select the alias of the password which you want to change. Enter the list number of the password alias. - -7. The system will then prompt you (twice) to enter the new password. Enter your new password. - -!!! info -If you have encrypted passwords as explained above, note that these passwords have to be decrypted again for the server to be usable. That is, the passwords have to be resolved by a system administrator during server startup. The [Resolving Passwords](https://docs.wso2.com/display/ADMIN44x/Resolving+Encrypted+Passwords) topic explains how encrypted passwords are resolved. - - diff --git a/en/docs/install-and-setup/setup/setting-up-databases/changing-default-databases/changing-to-mysql-cluster.md b/en/docs/install-and-setup/setup/setting-up-databases/changing-default-databases/changing-to-mysql-cluster.md deleted file mode 100644 index a9c3cee4db..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/changing-default-databases/changing-to-mysql-cluster.md +++ /dev/null @@ -1,3 +0,0 @@ -# Setting up a MySQL Cluster - -For instructions on setting up any WSO2 product with a MySQL cluster, see [this article](http://wso2.com/library/articles/2013/04/deploying-wso2-platform-mysql-cluster/) , which is published in the WSO2 library. 
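The secret-alias mechanism described in the cipher tool section above (a `secretAlias:` reference in a configuration file being replaced by the decrypted value during startup) can be sketched roughly as follows. This is an illustration only, not the actual Secure Vault implementation; the function name and the pre-decrypted `secrets` map are assumptions:

```python
def resolve_secrets(properties_text, secrets):
    """Replace 'secretAlias:<alias>' property values with resolved secrets.

    `secrets` maps an alias to its plain-text value, standing in for what a
    vault would produce after decrypting cipher-text.properties at startup.
    """
    resolved = {}
    for line in properties_text.splitlines():
        # Skip comments and lines that are not key=value pairs.
        if "=" not in line or line.lstrip().startswith("#"):
            continue
        key, _, value = line.partition("=")
        value = value.strip()
        if value.startswith("secretAlias:"):
            alias = value[len("secretAlias:"):]
            value = secrets[alias]  # decrypted value substituted at startup
        resolved[key.strip()] = value
    return resolved

config = "log4j.appender.LOGEVENT.password=secretAlias:log4j.appender.LOGEVENT.password"
secrets = {"log4j.appender.LOGEVENT.password": "admin"}
print(resolve_secrets(config, secrets))
```

The key point the sketch captures is that the configuration file never holds the plain-text password; only the alias lookup at startup produces it.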
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md deleted file mode 100644 index d811568f9c..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md +++ /dev/null @@ -1,29 +0,0 @@
-# Searching the Registry
-
-The management console provides the facility to search all resources in the registry.
-
-1. Log in to the product's management console and select **Search -> Metadata** under the **Registry** menu.
-    ![]({{base_path}}/assets/attachments/126562657/126562660.png)
-2. The **Search** page opens.
-    ![]({{base_path}}/assets/attachments/126562657/126562659.png) You can define search criteria using the following parameters:
-
-    - Resource name
-    - **Created/updated date range** - The date when a resource was created/updated
-
-        !!! info
-            Created/updated dates must be in MM/DD/YYYY format. Alternatively, you can pick the date from the calendar interface provided.
-            ![]({{base_path}}/assets/attachments/126562657/126562658.png)
-
-    - **Created/updated author** - The person who created/updated the resource
-    - Tags and comments
-    - Property Name
-    - Property Value
-    - Media Type
-
-        !!! info
-            To search for matches containing a specific pattern, use the "%" symbol.
-
-3. Fill in the search criteria and click **Search** to see the results in the **Search Results** page.
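The docs above do not spell out the exact semantics of the "%" pattern symbol; assuming it behaves like the SQL `LIKE` wildcard (matching any run of characters), the matching can be sketched as:

```python
import re

def matches_pattern(pattern, names):
    """Match resource names against a search string where '%' is a wildcard,
    analogous to an SQL LIKE pattern. Everything else matches literally."""
    # Escape the literal segments and join them with '.*' for each '%'.
    regex = re.compile("^" + ".*".join(map(re.escape, pattern.split("%"))) + "$")
    return [n for n in names if regex.match(n)]

resources = ["OrderService", "OrderPolicy", "InventoryService"]
print(matches_pattern("%Service", resources))  # ['OrderService', 'InventoryService']
```

For instance, `%Service` finds every resource name ending in "Service", while `Order%` finds every name starting with "Order".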
- diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-working-with-users-roles-and-permissions.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-working-with-users-roles-and-permissions.md deleted file mode 100644 index 01284b9bb7..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-working-with-users-roles-and-permissions.md +++ /dev/null @@ -1,15 +0,0 @@ -# Working with Users, Roles and Permissions - -The user management functionality allows you to configure the users that can access your product and the permissions that determine how each user can work with your system. - -The default user management configuration in a WSO2 product is as follows: - -- The default H2 database in the WSO2 product is configured as the User Store that stores all the information on users, roles and permissions. -- An **Admin** user and **Admin** password are configured by default. -- The default **Admin** role connected to the **Admin** user has all permissions granted. - -According to the default configuration explained above, you can simply log into the management console of the product with the **Admin** user and get started right away. 
-
-Follow the links given below to understand how user management works in WSO2 products, and for step-by-step instructions on how to change/update the default configuration:
-
-
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/adding-a-resource.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/adding-a-resource.md deleted file mode 100644 index 5fb9bad668..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/adding-a-resource.md +++ /dev/null @@ -1,76 +0,0 @@
-# Adding a Resource
-
-You can add a resource to a particular collection for more convenient use of the Registry.
-
-Follow the instructions below to add a new child entry to a collection.
-
-1. To add a new resource, click on the *Add Resource* link.
-
-![]({{base_path}}/assets/attachments/126562631/126562638.png)
-
-2. In the *Add Resource* panel, select a *Method* from the drop-down menu.
-
-The following methods are available:
-
-- **[Upload content from file](#AddingaResource-1)**
-- **[Import content from URL](#AddingaResource-2)**
-- **[Create Text content](#AddingaResource-3)**
-- **[Create custom content](#AddingaResource-4)**
-
-![]({{base_path}}/assets/attachments/126562631/126562637.png)
-
-### Uploading Content from File
-
-1. If this method is selected, specify the following options:
-
-- File - The path of the file from which to fetch content (XML, WSDL, JAR etc.). Use the *Browse* button to upload a file.
-- Name - The unique name of the resource.
-- Media type - Add a configured media type, or leave this unspecified to use the default media type.
-- Description - The description of the resource.
-
-2. Click *Add* once the information is added, as shown in the example below.
-
-![]({{base_path}}/assets/attachments/126562631/126562635.png)
-
-### Importing Content from URL
-
-1. 
If this method is selected, specify the following options:
-
-- URL - The full URL of the resource from which to fetch content.
-- Name - The unique name of the resource.
-- Media type - Add a configured media type, or leave this unspecified to use the default media type.
-- Description - The description of the resource.
-
-2. Click *Add* once the information is added.
-
-![]({{base_path}}/assets/attachments/126562631/126562633.png)
-
-### Text Content Creation
-
-1. If this method is selected, specify the following options:
-
-- Name - The unique name of the resource.
-- Media type - Add a configured media type, or leave this unspecified to use the default media type.
-- Description - The description of the resource.
-- Content - The resource content. You can use either the *Rich Text Editor* or the *Plain Text Editor* to enter it.
-
-2. Click *Add* once the information is added.
-
-![]({{base_path}}/assets/attachments/126562631/126562632.png)
-
-### Custom Content Creation
-
-1. If this method is selected, choose the *Media Type* from the drop-down menu and click *Create Content*.
-
-![]({{base_path}}/assets/attachments/126562631/126562636.png)
-
-**Media Types**
-
-Each collection and resource created and stored in the repository has an associated media type. However, you also have the option to leave this unspecified, in which case the default media type is applied. There are two main ways to configure media types for resources.
-
-- The first method is a one-time configuration, done by modifying the "mime.types" file found in the <CARBON\_HOME>\\repository\\conf\\etc directory. This can be done only once, before the initial start-up of the server.
-- The second method is to configure the media types via the server administration console. The first method does not apply to collections, for which the only available mechanism is to configure the media types via the server administration console.
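The extension-to-media-type resolution that a mime.types-style table drives can be sketched as below. The table entries and the fallback value are illustrative assumptions, not the product's actual defaults:

```python
# Minimal stand-in for a mime.types-style mapping of file extensions
# to media types. Entries here are examples, not the shipped file.
MIME_TYPES = {
    "wsdl": "application/wsdl+xml",
    "xml": "application/xml",
    "jar": "application/java-archive",
}
DEFAULT_MEDIA_TYPE = "application/octet-stream"  # assumed fallback

def media_type_for(resource_name):
    """Resolve a resource's media type from its file extension,
    falling back to the default when no mapping exists."""
    ext = resource_name.rsplit(".", 1)[-1].lower() if "." in resource_name else ""
    return MIME_TYPES.get(ext, DEFAULT_MEDIA_TYPE)

print(media_type_for("StockQuote.wsdl"))  # application/wsdl+xml
```

Leaving a resource's media type unspecified corresponds to the fallback branch here: the default is applied when no explicit mapping matches.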
-
-Initially, the media types defined in the mime.types file will be available for resources, and a set of default media types will be available for collections.
-
-Managing media types for resources can be done via the server administration console, by editing the properties of the /system/mime.types/index collection. This collection contains two resources: collection and custom.ui. To manage the media types of collections and custom user interfaces, edit the properties of these two resources.
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md deleted file mode 100644 index 5b47af664e..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md +++ /dev/null @@ -1,21 +0,0 @@
-# Editing collections using the Entries panel
-
-If you select a collection, you can see in its detailed view the Entries panel, with details of the child collections and resources it contains. It provides a UI to view details and to add new resources, collections and links, as follows:
-
-![]({{base_path}}/assets/attachments/126562643/126562644.png)
-
-- Add Resource
-- Add Collection
-- Create Link
-- Child resource/collection information such as name, created date and author
-- The Info link, specifying the media type, feed and rating
-- The Actions link to rename, move, copy or delete a resource/collection
-
-    !!! info
-        You cannot move/copy resources and collections across registry mounts if they have dependencies or associations. You can only move/copy within a mount.
For more information on mounts, see the WSO2 Governance Registry documentation: [Remote Instance and Mount Configuration Details](http://docs.wso2.org/display/Governance460/Remote+Instance+and+Mount+Configuration+Details).
-
-    !!! info
-        These options are not available for all resources/collections.
-
-
-
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/link-creation.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/link-creation.md deleted file mode 100644 index a7ec1ea8d8..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/link-creation.md +++ /dev/null @@ -1,21 +0,0 @@
-# Link Creation
-
-Follow the instructions below to create a link on a resource/collection.
-
-1. Symbolic links and Remote links can be created in a similar way to adding a normal resource. To add a link, click *Create Link* in the *Entries* panel.
-
-![]({{base_path}}/assets/attachments/126562639/126562641.png)
-
-2. Select the type of link to add from the drop-down menu.
-
-### A Symbolic Link
-
-When adding a Symbolic link, enter a name for the link and the path of the existing resource or collection being linked. This creates a link to that particular resource.
-
-![]({{base_path}}/assets/attachments/126562639/126562640.png)
-
-### A Remote Link
-
-You can mount a collection from a remotely-deployed registry instance into your own registry instance by adding a Remote link. Provide a name for the Remote link in the name field. Choose the instance to mount, and in the path field give the path of the remote collection that you need to mount; otherwise, the root collection will be mounted.
-
-![]({{base_path}}/assets/attachments/126562639/126562642.png)
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/metadata.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/metadata.md deleted file mode 100644 index e7e04888c8..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/metadata.md +++ /dev/null @@ -1,51 +0,0 @@
-# Metadata
-
-**The Metadata panel** allows you to manage resource metadata and revisions using the [Create Checkpoint](#Metadata-Checkpoint) and [View Versions](#Metadata-Versions) options. Each time you create a checkpoint, it is added as a new revision of the resource. Revisions are a useful way to facilitate disaster recovery and fault tolerance in the registry. By creating a revision, a user essentially saves a snapshot of the current state of a resource or collection that can be restored at a later date. The registry's checkpoint and restoration mechanisms are similar to System Restore in Microsoft Windows.
-
-The **Metadata** panel displays the following properties of the resource or the collection:
-
-- **Created** - The time the resource was created and the author of the resource.
-- **Last Updated** - The time the resource was last updated and the author of the alterations.
-- **Media Type** - The associated media type of the resource/collection. For more information about media types, see [Adding a Resource](https://docs.wso2.com/display/ADMIN44x/Adding+a+Resource).
-- **Checkpoint** - Allows you to create a checkpoint (URL for the permanent link) of a resource/collection.
-- **Versions** - Allows you to view the versions of a resource/collection.
-- **Permalink** - Holds the resource URL in both HTTP and HTTPS. (e.g., `http://10.100.2.76:9763/registry/resource/_system/governance/trunk/services/test`)
-- **Description** - The description of the resource/collection.
-
-For example,
-
-![]({{base_path}}/assets/attachments/22185146/22514191.png)
-#### Creating a checkpoint
-
-To create a checkpoint, click on the **Create Checkpoint** link:
-
-![]({{base_path}}/assets/attachments/126562605/126562606.png)
-
-!!! info
-**NOTE**: When checkpoints are created, properties, comments, ratings and tags are also taken into consideration. If you do not want them to be versioned along with the resource content, you can disable this by making changes to the [Static Configuration](https://docs.wso2.com/display/Governance460/Configuration+for+Static+%28One-time%29+and+Auto+Versioning+Resources). However, these changes need to be made before the server starts for the first time.
-
-
-#### Viewing Versions
-
-To view the resource versions, click on the **View versions** link:
-
-![]({{base_path}}/assets/attachments/126562605/126562611.png)
-
-The versions are then displayed. For example,
-
-![]({{base_path}}/assets/attachments/22185146/22514195.png)
-This page gives the following information:
-
-- The number of the resource/collection version
-- The date of the last modification and the author who made the last alterations
-- **Actions**
-    - **Details** - Opens the **Browse** page of a resource/collection version to view its details
-    - **Restore** - Restores the selected version
-    - **Delete Version History** - Deletes the version history
-
-!!! info
-    To learn more about restoring a previous version, see [Managing Versions of a Resource](https://docs.wso2.com/display/Governance460/Managing+Versions+of+a+Resource).
-
-!!! info
-    **NOTE**: Versions and checkpoints are not available for [Symbolic Links](https://docs.wso2.com/display/Governance460/Link+Creation#LinkCreation-ASymbolicLink) and [Remote Links](https://docs.wso2.com/display/Governance460/Link+Creation#LinkCreation-ARemoteLink).
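The checkpoint/restore idea behind registry revisions (each checkpoint saves a snapshot that can later be restored, System Restore-style) can be illustrated with a toy model; this is not the registry's implementation, just the mechanism:

```python
class VersionedResource:
    """Toy illustration of checkpoint/restore semantics:
    each checkpoint appends a snapshot of the current content."""

    def __init__(self, content):
        self.content = content
        self._versions = []

    def create_checkpoint(self):
        """Save a snapshot and return its 1-based version number."""
        self._versions.append(self.content)
        return len(self._versions)

    def restore(self, version):
        """Roll the live content back to a saved snapshot."""
        self.content = self._versions[version - 1]

res = VersionedResource("v1 payload")
res.create_checkpoint()          # version 1
res.content = "v2 payload"       # later edit
res.create_checkpoint()          # version 2
res.restore(1)
print(res.content)               # v1 payload
```

Deleting the version history in this model would simply clear `_versions`; the live content is unaffected, which matches the panel offering "Delete Version History" separately from restore.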
-
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/properties.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/properties.md deleted file mode 100644 index 230e6a9932..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/properties.md +++ /dev/null @@ -1,10 +0,0 @@
-# Properties
-
-The **Properties** panel displays the properties of the currently selected resource or collection. New properties can be added, while existing properties can be edited or deleted.
-
-1. To add a property, click on the **Add New Property** link in the **Properties** panel.
-    ![]({{base_path}}/assets/attachments/126562613/126562618.png)
-2. Enter a unique name and a value for the property, and click **Add**.
-    ![]({{base_path}}/assets/attachments/126562613/126562617.png)
-3. After adding the property, you can edit or delete it using the **Edit** and **Delete** links associated with it.
-
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md deleted file mode 100644 index 12c442f113..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md +++ /dev/null @@ -1,34 +0,0 @@
-# Role Permissions
-
-When you select a collection in the registry, the **Permissions** panel opens with the defined role permissions available. It allows you to specify which role has access to perform which operations on a registry resource or collection.
-
-#### Adding new role permissions
-
-1. In the **New Role Permissions** section, select a role from the drop-down list. This list is populated by all user roles configured in the system.
-    ![]({{base_path}}/assets/attachments/126562645/126562646.png)
-
-    !!! info
-        The `wso2.anonymous.role` is a special role that represents a user who is not logged in to the management console. Granting `Read` access to this role means that no authentication is required to access resources using the respective Permalinks.
-
-        The **`everyone`** role is a special role that represents a user who is logged in to the management console. Granting `Read` access to this role means that any user who has logged in to the management console with sufficient permissions to access the Resource Browser can read the respective resource. Granting `Write` or `Delete` access means that any such user can make changes to the respective resource.
-
-
-2. Select one of the following actions:
-
-    - **Read**
-    - **Write**
-    - **Delete**
-    - **Authorize** - A special permission that gives a role the ability to grant and revoke permissions to/from others
-
-3. Select whether to allow or deny the action, and click **Add Permission**. For example:
-    ![]({{base_path}}/assets/attachments/126562645/126562647.png)
-
-    !!! info
-        `Deny` permissions have higher priority than `Allow` permissions. That is, a `Deny` permission always overrides an `Allow` permission assigned to a role.
-
-        `Deny` permission must be given at the collection level. For example, to deny the write/delete action on a given policy file, set the Write/Delete actions for the role to `Deny` in `/trunk/policies`. If you set the `Deny` permission above the collection level (e.g., / or /\_system), it will not be applied to the user's role.
-
-
-4. The new permission appears in the list.
-    ![]({{base_path}}/assets/attachments/126562645/126562648.png) From here, you can edit the permissions by selecting and clearing the check boxes. After editing the permissions, click **Apply All Permissions** to save the alterations.
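The deny-overrides-allow rule described above can be sketched as a small evaluation function. This is an illustration of the precedence rule only, with assumed role and action names, not the registry's authorization code:

```python
def is_allowed(role_permissions, roles, action):
    """Evaluate role permissions with the precedence rule described above:
    a Deny for any of the user's roles always overrides an Allow."""
    decisions = [role_permissions.get((role, action)) for role in roles]
    if "deny" in decisions:
        return False          # Deny wins, regardless of any Allow
    return "allow" in decisions  # otherwise an explicit Allow is required

# (role, action) -> decision; names are hypothetical examples.
perms = {
    ("everyone", "read"): "allow",
    ("engineer", "write"): "allow",
    ("contractor", "write"): "deny",
}

# A user holding both roles is denied write access: Deny overrides Allow.
print(is_allowed(perms, ["engineer", "contractor"], "write"))  # False
```

Note the final line of the function: with neither a Deny nor an Allow recorded, access is refused, i.e. the default is deny.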
- diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-all-partitions-in-a-single-server.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-all-partitions-in-a-single-server.md deleted file mode 100644 index b42360b3c5..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-all-partitions-in-a-single-server.md +++ /dev/null @@ -1,10 +0,0 @@ -# All Partitions in a Single Server - -#### Strategy 1: Local Registry - -![]({{base_path}}/assets/attachments/21037149/21331970.png) -Figure 1: All registry partitions in a single server instance. - -The entire registry space is local to a single server instance and not shared. This is recommended for a stand-alone deployment of a single product instance, but can also be used if there are two or more instances of a product that do not require sharing data or configuration among them. - -This strategy requires no additional configuration. diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md deleted file mode 100644 index 12c2fdbfbd..0000000000 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md +++ /dev/null @@ -1,204 +0,0 @@ -# Config and Governance Partitions in a Remote Registry - -In this deployment strategy, the configuration and governance spaces are shared among instances of a group/cluster. 
For example, two WSO2 Application Server instances that have been configured to operate in a clustered environment can have a single configuration and governance registry that is shared across each node of the cluster. A separate instance of the WSO2 Governance Registry is used to provide the space used in common. - -![]({{base_path}}/assets/attachments/21037149/21331972.png) -Figure 2: Config and governance partitions in the remote Governance Registry instance . - -Configuration steps are given in the following sections. - -- [Creating the Database](#ConfigandGovernancePartitionsinaRemoteRegistry-Database) -- [Configuring Governance Registry as the Remote Registry Instance](#ConfigandGovernancePartitionsinaRemoteRegistry-RemoteRegistry) -- [Configuring Carbon Server Nodes](#ConfigandGovernancePartitionsinaRemoteRegistry-CarbonServerNodes) - -### Creating the database - -In a production setup, it is recommended to use an Oracle or MySQL database for the governance registry. As an example, we use MySQL database named ‘registrydb’. Instructions are as follows: - -1. Access MySQL using the command: - -``` java - mysql -u root -p -``` - -2. Enter the password when prompted. - -3. Create 'registrydb' database. - -``` java - create database registrydb; -``` - -The MySQL database for G-Reg is now created. - -### Configuring Governance Registry as the remote registry instance - -Database configurations are stored in $CARBON\_HOME/repository/conf/datasources/ master-datasources.xml file for all carbon servers. By default, all WSO2 products use the in-built H2 database. Since Governance Registry in this example is using a MySQL database named 'registrydb', the master-datasources.xml file needs to be configured so that the datasource used for the registry and user manager in Governance Registry is the said MySQL database. - -1. 
Download and extract the WSO2 Governance Registry distribution from [http://wso2.com/products/governance-registry](http://wso2.com/products/governance-registry/).
-
-2. Navigate to the $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file, where G-REG\_HOME is the Governance Registry distribution home. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration:
-
-``` xml
-<datasource>
-    <name>WSO2_CARBON_DB</name>
-    <description>The datasource used for registry and user manager</description>
-    <jndiConfig>
-        <name>jdbc/WSO2CarbonDB</name>
-    </jndiConfig>
-    <definition type="RDBMS">
-        <configuration>
-            <url>jdbc:mysql://x.x.x.x:3306/registrydb</url>
-            <username>root</username>
-            <password>root</password>
-            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
-            <maxActive>50</maxActive>
-            <maxWait>60000</maxWait>
-            <testOnBorrow>true</testOnBorrow>
-            <validationQuery>SELECT 1</validationQuery>
-            <validationInterval>30000</validationInterval>
-        </configuration>
-    </definition>
-</datasource>
-```
-
-Change the values of the following elements according to your environment.
-
-- <url> : The URL of the MySQL database.
-- <username> and <password> : The username and password of the MySQL database.
-- <validationQuery> : The query used to validate and test the health of the DB connection.
-- <validationInterval> : The interval at which the DB connection validations should run.
-
-3. Navigate to the $G-REG\_HOME/repository/conf/axis2/axis2.xml file in all Carbon-based product instances to be connected with the remote registry, and enable tribes clustering with the following configuration.
-
-``` xml
-
-```
-
-The above configuration is required only when caching is enabled for the Carbon server instances, that is, when the <enableCache> parameter is set to true. It provides cache invalidation whenever registry resources are updated.
-
-4. Copy the MySQL JDBC connector jar ( ) to the G-REG\_HOME/repository/components/lib directory.
-
-5. Start the Governance Registry server with -Dsetup so that all the required tables are created in the database. For example, on Linux:
-
-``` java
-sh wso2server.sh -Dsetup
-```
-
-!!! warning
-    Deprecation of -DSetup
-
-    When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema.
Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option**. Note that the proper practice is for the DBA to run the DDL statements manually, so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization.
-
-
-The Governance Registry server is now running, with all the required user manager and registry tables for the server created in the ‘registrydb’ database.
-
-### Configuring server nodes
-
-Now that the shared registry is configured, let's take a look at the configuration of the Carbon server nodes that use the shared, remote registry.
-
-1. Download and extract the relevant WSO2 product distribution from the 'Products' menu of [https://wso2.com](https://wso2.com/). In this example, we use two server instances (of any product) by the names CARBON-Node1 and CARBON-Node2.
-
-2. We use the same datasource used for the Governance Registry above as the registry space of the Carbon-based product instances.
-
-***Configuring the master-datasources.xml file***
-
-3. Configure $CARBON\_HOME/repository/conf/datasources/master-datasources.xml, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product you downloaded in step 1. Then, add the following datasource for the registry space:
-
-``` xml
-<datasource>
-    <name>WSO2_CARBON_DB_GREG</name>
-    <description>The datasource used for registry and user manager</description>
-    <jndiConfig>
-        <name>jdbc/WSO2CarbonDB_GREG</name>
-    </jndiConfig>
-    <definition type="RDBMS">
-        <configuration>
-            <url>jdbc:mysql://x.x.x.x:3306/registrydb</url>
-            <username>root</username>
-            <password>root</password>
-            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
-            <maxActive>50</maxActive>
-            <maxWait>60000</maxWait>
-            <testOnBorrow>true</testOnBorrow>
-            <validationQuery>SELECT 1</validationQuery>
-            <validationInterval>30000</validationInterval>
-        </configuration>
-    </definition>
-</datasource>
-```
-
-Change the values of the relevant elements accordingly.
-
-***Configuring the registry.xml file***
-
-4. Navigate to the $CARBON\_HOME/repository/conf/registry.xml file and specify the following configurations for both server instances set up in step 1.
Add a new db config that points to the datasource configured in step 3 above. For example:
-
-``` xml
-<dbConfig name="remote_registry">
-    <dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
-</dbConfig>
-```
-
-Specify the remote Governance Registry instance with the following configuration:
-
-``` xml
-<remoteInstance url="https://x.x.x.x:9443/registry">
-    <id>instanceid</id>
-    <dbConfig>remote_registry</dbConfig>
-    <cacheId>root@https://x.x.x.x:9443/registry</cacheId>
-    <readOnly>false</readOnly>
-    <enableCache>true</enableCache>
-    <registryRoot>/</registryRoot>
-</remoteInstance>
-```
-
-Change the values of the following elements according to your environment.
-
-- <remoteInstance url> : The URL of the remote G-Reg instance.
-- <dbConfig> : The dbConfig name specified for the registry database configuration.
-- <cacheId> : Provides information on where the cached resource resides.
-- <enableCache> : Whether caching is enabled on the Carbon server instance.
-
-Define the registry partitions using the remote Governance Registry instance. In this deployment strategy, we mount the config and governance partitions of the Carbon-based product instances to the remote Governance Registry instance. This is graphically represented in Figure 2 at the beginning.
-
-``` xml
-<mount path="/_system/config" overwrite="true">
-    <instanceId>instanceid</instanceId>
-    <targetPath>/_system/config</targetPath>
-</mount>
-<mount path="/_system/governance" overwrite="true">
-    <instanceId>instanceid</instanceId>
-    <targetPath>/_system/governance</targetPath>
-</mount>
-```
-
-- mount path : The registry collection of the Carbon server instance that needs to be mounted.
-- mount overwrite : Defines whether an existing collection/resource at the given path should be overwritten. Possible values are:
-    - true - The existing collection/resource in the specified location will always be deleted and overwritten with the resource(s) in the remote registry instance.
-    - false - The resource(s) will not be overwritten. An error will be logged if a resource exists at the existing location.
-    - virtual - If the existing location has a resource/collection, it will be preserved, but a virtual view of the remote registry resource(s) can be viewed. The original resource/collection can be viewed once the remote registry configuration is removed.
-- target path : The path on the remote Governance Registry instance where the registry collection is mounted.
In each of the mounting configurations, we specify the actual mount path and the target mount path. The `targetPath` can be any meaningful name.
-
-***Configuring the axis2.xml file***
-
-1. Navigate to the $CARBON\_HOME/repository/conf/axis2/axis2.xml file, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product to be connected with the remote registry. Enable Carbon clustering by copying the following configuration to all Carbon server instances:
-
-``` xml
-
-```
-
-2. Copy the 'MySQL JDBC connector jar' ( [http://dev.mysql.com/downloads/connector/j/5.1.html](http://dev.mysql.com/downloads/connector/j/5.1.html) ) to $CARBON\_HOME/repository/components/lib in both Carbon server instances.
-
-3. Start both servers and note the log entries that indicate successful mounting to the remote Governance Registry instance. For example,
-
-![]({{base_path}}/assets/attachments/21037149/21332021.png)
-4. Navigate to the registry browser in the Carbon server's management console and note the config and governance partitions, which indicate successful mounting to the remote registry instance.
For example,

![]({{base_path}}/assets/attachments/21037149/21332022.png)
\ No newline at end of file
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md
deleted file mode 100644
index b702aa96e1..0000000000
--- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md
+++ /dev/null
@@ -1,409 +0,0 @@
# Config and Governance Partitions in Separate Nodes

In this deployment strategy, let's assume 2 clusters of Carbon-based product Foo and Carbon-based product Bar that share a governance registry space by the name G-Reg 1. In addition, the product Foo cluster shares a configuration registry space by the name G-Reg 2 and the product Bar cluster shares a configuration registry space by the name G-Reg 3.

![]({{base_path}}/assets/attachments/126562675/126562676.png)
Figure 4: Config and governance partitions in separate registry instances.

Configuration steps are given in the following sections.

- [Creating the Database](#ConfigandGovernancePartitionsinSeparateNodes-Database)
- [Configuring the Remote Registry Instances](#ConfigandGovernancePartitionsinSeparateNodes-RemoteRegistry)
- Configuring Foo Product Cluster
- Configuring Bar Product Cluster

### Creating the database

In a production setup, it is recommended to use an Oracle or MySQL database for the governance registry. As an example, we use a MySQL database named ‘registrydb’. Instructions are as follows:

1. Access MySQL using the command:

``` shell
mysql -u root -p
```

2. Enter the password when prompted.

3. Create the 'registrydb' database.
``` sql
create database registrydb;
```

The MySQL database for G-Reg 1 is now created. Similarly, create 'registrydb2' and 'registrydb3' as the MySQL databases for G-Reg 2 and G-Reg 3 respectively.

### Configuring the Remote Registry instances

Database configurations are stored in the $CARBON\_HOME/repository/conf/datasources/master-datasources.xml file for all Carbon servers. By default, all WSO2 products use the in-built H2 database. Since the Governance Registry nodes (G-Reg 1, G-Reg 2 and G-Reg 3) in this example use MySQL databases ('registrydb', 'registrydb2' and 'registrydb3' respectively), the master-datasources.xml file of each node needs to be configured so that the datasources used for the registry, user manager and configuration partitions in Governance Registry are the said MySQL databases.

1. Download and extract the WSO2 Governance Registry distribution from [http://wso2.com/products/governance-registry/](http://wso2.com/products/governance-registry/).

2. First, navigate to the $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file, where G-REG\_HOME is the distribution home of the Governance Registry of G-Reg 1. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration:

``` xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.41:3306/registrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```

Change the values of the following elements according to your environment.

- <url> : URL of the MySQL database.
- <username> and <password> : Username and password of the MySQL database.
- <validationQuery> : Query used to validate and test the health of the DB connection.
- <validationInterval> : Interval at which the DB connection validations should run.

3. 
Similarly, replace the existing WSO2\_CARBON\_DB datasource in G-Reg 2 with the following:

``` xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.42:3306/registrydb2</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```

4. Repeat the same for G-Reg 3 as follows.

``` xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.43:3306/registrydb3</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```

5. Navigate to the $G-REG\_HOME/repository/conf/axis2/axis2.xml file in all instances and enable clustering with the following configuration.

``` xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
```

The above configuration is required only when caching is enabled for the Carbon server instances and the <enableCache> parameter is set to true. This provides cache invalidation at the event of any updates on the registry resources.

6. Copy the MySQL JDBC connector jar to the G-REG\_HOME/repository/components/lib directory in G-Reg 1, G-Reg 2 and G-Reg 3.

7. Start the Governance Registry servers with -Dsetup so that all the required tables are created in the databases. For example, in Linux:

``` shell
sh wso2server.sh -Dsetup
```

!!! warning
    Deprecation of -DSetup

    When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option**. Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization.
The Governance Registry server instances are now running, with all the required user manager and registry tables created in the ‘registrydb’, ‘registrydb2’ and ‘registrydb3’ databases.

### Configuring the Foo product cluster

Now that the shared registry nodes are configured, let's take a look at the configuration of the Carbon server clusters that share the remote registry instances. Namely, the Foo product cluster shares G-Reg 1 and G-Reg 2, while the Bar product cluster shares G-Reg 1 and G-Reg 3.

Include the following configurations in the master node of the Foo product cluster.

***Configuring master-datasources.xml file***

1. Configure the $CARBON\_HOME/repository/conf/datasources/master-datasources.xml file, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product. Then, add the following datasources for the registry space.

``` xml
<datasource>
    <name>WSO2_CARBON_DB_GREG</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.41:3306/registrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

<datasource>
    <name>WSO2_CARBON_DB_GREG_CONFIG</name>
    <description>The datasource used for configuration partition</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG_CONFIG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.42:3306/registrydb2</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```
Change the values of the relevant elements according to your environment.

***Configuring registry.xml file***

2. Navigate to the $CARBON\_HOME/repository/conf/registry.xml file and specify the following configurations.

Add a new db config to the datasource configuration done in step 1 above.
For example,

``` xml
<dbConfig name="governance_registry">
    <dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
</dbConfig>

<dbConfig name="config_registry">
    <dataSource>jdbc/WSO2CarbonDB_GREG_CONFIG</dataSource>
</dbConfig>
```

Specify the remote Governance Registry instances with the following configuration:

``` xml
<remoteInstance url="https://10.20.30.41:9443/registry">
    <id>governanceRegistryInstance</id>
    <dbConfig>governance_registry</dbConfig>
    <cacheId>root@https://10.20.30.41:9443/registry</cacheId>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<remoteInstance url="https://10.20.30.42:9443/registry">
    <id>configRegistryInstance</id>
    <dbConfig>config_registry</dbConfig>
    <cacheId>root@https://10.20.30.42:9443/registry</cacheId>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
```
Change the values of the following elements according to your environment.

- <remoteInstance url> : URL of the remote G-Reg instance.
- <dbConfig> : The dbConfig name specified for the registry database configuration.
- <cacheId> : This provides information on where the cache resource resides.
- <enableCache> : Whether caching is enabled on the Carbon server instance.

!!! info
    Note

    When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change.


Define the registry partitions using the remote Governance Registry instances.

``` xml
<mount path="/_system/config" overwrite="true">
    <instanceId>configRegistryInstance</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

<mount path="/_system/governance" overwrite="true">
    <instanceId>governanceRegistryInstance</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```

- mount path : The registry collection of the Carbon server instance that needs to be mounted.
- mount overwrite : Defines whether an existing collection/resource at the given path should be overwritten. Possible values are:
    - true - The existing collection/resource in the specified location will always be deleted and overwritten with the resource/s in the remote registry instance.
    - false - The resource/s will not be overwritten. An error will be logged if a resource exists at the existing location.
    - virtual - If the existing location has a resource/collection, it will be preserved, but a virtual view of the remote registry resource/s can be viewed. The original resource/collection can be viewed once the remote registry configuration is removed.
- target path : Path in the remote Governance Registry instance where the registry collection is mounted.

***Configuring axis2.xml file***

3. Navigate to the $CARBON\_HOME/repository/conf/axis2/axis2.xml file and enable Carbon clustering by copying the following configuration to all Carbon server instances:

    ``` xml
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    ```

4. Copy the MySQL JDBC connector jar ([http://dev.mysql.com/downloads/connector/j/5.1.html](http://dev.mysql.com/downloads/connector/j/5.1.html)) to $CARBON\_HOME/repository/components/lib in the Carbon server instances of the Foo product cluster.

### Configuring the Bar product cluster

The instructions here are similar to those for the Foo product cluster discussed above. The difference is that the Bar product cluster shares the G-Reg 1 (governance space) and G-Reg 3 (config space) remote registry spaces, whereas the Foo product cluster shares G-Reg 1 and G-Reg 2 (config space).

Include the following configurations in the master node of the Bar product cluster.

***Configuring master-datasources.xml file***

1. Configure the $CARBON\_HOME/repository/conf/datasources/master-datasources.xml file, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product. Then, add the following datasources for the registry space.

``` xml
<datasource>
    <name>WSO2_CARBON_DB_GREG</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.41:3306/registrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

<datasource>
    <name>WSO2_CARBON_DB_GREG_CONFIG</name>
    <description>The datasource used for configuration partition</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG_CONFIG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://10.20.30.43:3306/registrydb3</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```
Change the values of the relevant elements according to your environment.

***Configuring registry.xml file***

2. Navigate to the $CARBON\_HOME/repository/conf/registry.xml file and specify the following configurations.
Add a new db config to the datasource configuration done in step 1 above. For example,

``` xml
<dbConfig name="governance_registry">
    <dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
</dbConfig>

<dbConfig name="config_registry">
    <dataSource>jdbc/WSO2CarbonDB_GREG_CONFIG</dataSource>
</dbConfig>
```

Specify the remote Governance Registry instances with the following configuration:

``` xml
<remoteInstance url="https://10.20.30.41:9443/registry">
    <id>governanceRegistryInstance</id>
    <dbConfig>governance_registry</dbConfig>
    <cacheId>root@https://10.20.30.41:9443/registry</cacheId>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<remoteInstance url="https://10.20.30.43:9443/registry">
    <id>configRegistryInstance</id>
    <dbConfig>config_registry</dbConfig>
    <cacheId>root@https://10.20.30.43:9443/registry</cacheId>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
```
Change the values of the following elements according to your environment.

- <remoteInstance url> : URL of the remote G-Reg instance.
- <dbConfig> : The dbConfig name specified for the registry database configuration.
- <cacheId> : This provides information on where the cache resource resides.
- <enableCache> : Whether caching is enabled on the Carbon server instance.

!!! info
    Note

    When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change.


Define the registry partitions using the remote Governance Registry instances.

``` xml
<mount path="/_system/config" overwrite="true">
    <instanceId>configRegistryInstance</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

<mount path="/_system/governance" overwrite="true">
    <instanceId>governanceRegistryInstance</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```

- mount path : The registry collection of the Carbon server instance that needs to be mounted.
- mount overwrite : Defines whether an existing collection/resource at the given path should be overwritten. Possible values are:
    - true - The existing collection/resource in the specified location will always be deleted and overwritten with the resource/s in the remote registry instance.
    - false - The resource/s will not be overwritten. An error will be logged if a resource exists at the existing location.
    - virtual - If the existing location has a resource/collection, it will be preserved, but a virtual view of the remote registry resource/s can be viewed. The original resource/collection can be viewed once the remote registry configuration is removed.
- target path : Path in the remote Governance Registry instance where the registry collection is mounted. In each of the mounting configurations, we specify the actual mount path and target mount path. The `targetPath` can be any meaningful name.

***Configuring axis2.xml file***

3. Navigate to the $CARBON\_HOME/repository/conf/axis2/axis2.xml file and enable Carbon clustering by copying the following configuration to all Carbon server instances:

    ``` xml
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    ```

4. Copy the MySQL JDBC connector jar ([http://dev.mysql.com/downloads/connector/j/5.1.html](http://dev.mysql.com/downloads/connector/j/5.1.html)) to $CARBON\_HOME/repository/components/lib in the Carbon server instances of the Bar product cluster.

5. Start both clusters and note the log entries that indicate successful mounting to the remote Governance Registry nodes.

6. Navigate to the registry browser in the Carbon server's management console of a selected node and note the config and governance partitions indicating successful mounting to the remote registry instances.
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md
deleted file mode 100644
index ef869b415f..0000000000
--- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md
+++ /dev/null
@@ -1,196 +0,0 @@
# Governance Partition in a Remote Registry

In this deployment strategy, only the governance partition is shared among the instances of a group/cluster.
For example, a WSO2 Application Server instance and a WSO2 ESB instance that have been configured to operate in a clustered environment can have a single governance registry that is shared across each node of the cluster. A separate instance of the WSO2 Governance Registry (G-Reg) is used to provide the space used in common.

![]({{base_path}}/assets/attachments/126562673/126562674.png)
Figure 3: Governance partition in the remote Governance Registry instance.

Configuration steps are given in the following sections.

- [Creating the Database](#GovernancePartitioninaRemoteRegistry-Database)
- [Configuring Governance Registry Instance](#GovernancePartitioninaRemoteRegistry-RemoteRegistry)
- [Configuring Carbon Server Nodes](#GovernancePartitioninaRemoteRegistry-CarbonServerNodes)

### Creating the database

In a production setup, it is recommended to use an Oracle or MySQL database for the governance registry. As an example, we use a MySQL database named ‘registrydb’. Instructions are as follows:

1. Access MySQL using the command:

``` shell
mysql -u root -p
```

2. Enter the password when prompted.

3. Create the 'registrydb' database.

``` sql
create database registrydb;
```

The MySQL database for G-Reg is now created.

### Configuring Governance Registry instance

Database configurations are stored in the $CARBON\_HOME/repository/conf/datasources/master-datasources.xml file for all Carbon servers. By default, all WSO2 products use the in-built H2 database. Since the Governance Registry in this example uses a MySQL database named 'registrydb', the master-datasources.xml file needs to be configured so that the datasource used for the registry and user manager in Governance Registry is the said MySQL database.

1. Download and extract the WSO2 Governance Registry distribution from [http://wso2.com/products/governance-registry/](http://wso2.com/products/governance-registry/).

2. 
Navigate to the $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file, where G-REG\_HOME is the Governance Registry distribution home. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration:

``` xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/registrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```

Change the values of the following elements according to your environment.

- <url> : URL of the MySQL database.
- <username> and <password> : Username and password of the MySQL database.
- <validationQuery> : Query used to validate and test the health of the DB connection.
- <validationInterval> : Interval at which the DB connection validations should run.

3. Navigate to the $G-REG\_HOME/repository/conf/axis2/axis2.xml file in all Carbon-based product instances to be connected with the remote registry, and enable clustering with the following configuration.

``` xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
```

The above configuration is required only when caching is enabled for the Carbon server instances and the <enableCache> parameter is set to true. This provides cache invalidation at the event of any updates on the registry resources.

4. Copy the MySQL JDBC connector jar to the G-REG\_HOME/repository/components/lib directory.

5. Start the Governance Registry server with -Dsetup so that all the required tables are created in the database. For example, in Linux:

``` shell
sh wso2server.sh -Dsetup
```

!!! warning
    Deprecation of -DSetup

    When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option**. Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization.


The Governance Registry server is now running, with all the required user manager and registry tables created in the ‘registrydb’ database.

### Configuring server nodes

Now that the shared registry is configured, let's take a look at the configuration of the Carbon server nodes that use the shared, remote registry.

1. Download and extract the relevant WSO2 product distribution from the 'Products' menu of [https://wso2.com](https://wso2.com/). In this example, we use two server instances (of any product) by the names CARBON-Node1 and CARBON-Node2, and the configuration is given for one server instance. Similar steps apply to the other server instance as well.

2. We use the same datasource used for the Governance Registry above as the registry space of the Carbon-based product instances.

***Configuring master-datasources.xml file***

3. Configure the $CARBON\_HOME/repository/conf/datasources/master-datasources.xml file, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product you downloaded in step 1. Then, add the following datasource for the registry space.

``` xml
<datasource>
    <name>WSO2_CARBON_DB_GREG</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://x.x.x.x:3306/registrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```

Change the values of the relevant elements accordingly.

***Configuring registry.xml file***

4. Navigate to the $CARBON\_HOME/repository/conf/registry.xml file and specify the following configurations for both server instances set up in step 1.
Add a new db config to the datasource configuration done in step 3 above. For example,

``` xml
<dbConfig name="remote_registry">
    <dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
</dbConfig>
```

Specify the remote Governance Registry instance with the following configuration:

``` xml
<remoteInstance url="https://x.x.x.x:9443/registry">
    <id>instanceid</id>
    <dbConfig>remote_registry</dbConfig>
    <cacheId>root@https://x.x.x.x:9443/registry</cacheId>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
```

Change the values of the following elements according to your environment.

- <remoteInstance url> : URL of the remote G-Reg instance.
- <dbConfig> : The dbConfig name specified for the registry database configuration.
- <cacheId> : This provides information on where the cache resource resides.
- <enableCache> : Whether caching is enabled on the Carbon server instance.

Define the registry partitions using the remote Governance Registry instance. In this deployment strategy, we are mounting the governance partition of the Carbon-based product instances to the remote Governance Registry instance. This is graphically represented in Figure 3 at the beginning.

``` xml
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```

- mount path : The registry collection of the Carbon server instance that needs to be mounted.
- mount overwrite : Defines whether an existing collection/resource at the given path should be overwritten. Possible values are:
    - true - The existing collection/resource in the specified location will always be deleted and overwritten with the resource/s in the remote registry instance.
    - false - The resource/s will not be overwritten. An error will be logged if a resource exists at the existing location.
    - virtual - If the existing location has a resource/collection, it will be preserved, but a virtual view of the remote registry resource/s can be viewed. The original resource/collection can be viewed once the remote registry configuration is removed.
- target path : Path in the remote Governance Registry instance where the registry collection is mounted.

***Configuring axis2.xml file***

5. 
Navigate to the $CARBON\_HOME/repository/conf/axis2/axis2.xml file, where CARBON\_HOME is the distribution home of any WSO2 Carbon-based product to be connected with the remote registry. Enable Carbon clustering by copying the following configuration to all Carbon server instances:

``` xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
```

!!! info
    Note

    The above configuration is required only when caching is enabled for the Carbon server instances and the <enableCache> parameter is set to true. This provides cache invalidation at the event of any updates on the registry resources.

6. Copy the MySQL JDBC connector jar ([http://dev.mysql.com/downloads/connector/j/5.1.html](http://dev.mysql.com/downloads/connector/j/5.1.html)) to $CARBON\_HOME/repository/components/lib in both Carbon server instances.

7. Start both servers and note the log entries that indicate successful mounting to the remote Governance Registry instance. Also navigate to the registry browser in the Carbon server's management console and note the governance partition indicating successful mounting to the remote registry instance.

diff --git a/en/docs/install-and-setup/setup/single-node/deploying-api-manager-using-single-node-instances.md b/en/docs/install-and-setup/setup/single-node/deploying-api-manager-using-single-node-instances.md
deleted file mode 100644
index f43f432b07..0000000000
--- a/en/docs/install-and-setup/setup/single-node/deploying-api-manager-using-single-node-instances.md
+++ /dev/null
@@ -1,59 +0,0 @@
# Deploying API Manager using Single Node Instances

In a typical production deployment, API Manager is deployed as components (Publisher, Store, Gateway, Key Manager and Traffic Manager). While this provides very high performance and a high level of scalability, it may be too complex if you want to run API Manager as a small to medium scale API Management solution. A WSO2 API-M single node deployment, which has all the API-M components in one instance, is simple to set up and requires fewer resources when compared with a distributed deployment. It is ideal for any organization that wants to start small and iteratively build up a robust API Management Platform.
- -WSO2 provides two options for organizations that are interested in setting up a small to medium scale API Management solution. - -- Setting up on WSO2 API Cloud, which is a subscription based API Management solution. You can access this service by creating an account in [WSO2 API Cloud](http://wso2.com/cloud/api-cloud/) . - -- If you are interested in setting up a single node API Manager instance, which has all the API-M components in one instance,  on-premise, you can [download](https://wso2.com/api-manager/) the latest version of API Manager and follow the instructions given below to set up the instance. - -### Prerequisites - -| | | -|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Hardware | Ensure that the minimum hardware requirements mentioned in the [hardware requirements](https://docs.wso2.com/display/ADMIN44x/Production+Deployment+Guidelines) section are met. Since this is an all-in-one deployment, it is recommended to use a higher hardware specification. You can further fine tune your operating system for production by [tuning performance](https://docs.wso2.com/display/AM210/Tuning+Performance) . For more information on installing the product on different operating systems, see [Installing the Product](https://docs.wso2.com/display/AM210/Installing+the+Product) . 
|
| Software | Oracle JDK 1.8 |

You can deploy a single node API Manager instance in the following methods:

- [Single node deployment](#DeployingAPIManagerusingSingleNodeInstances-Singlenodedeployment)
- [Active/active deployment](#DeployingAPIManagerusingSingleNodeInstances-Active/activedeployment)

### Single node deployment

In this setup, API traffic is served by one all-in-one instance of WSO2 API Manager.

![]({{base_path}}/assets/attachments/103334465/103334466.png)

| Pros | Cons |
|------|------|
| <ul><li>Production support is required only for a single API Manager node (you receive 24\*7 WSO2 production support).</li><li>Deployment is up and running within hours.</li><li>Can handle up to 43 million API calls a day (up to 500 API calls a second).</li><li>Minimum hardware/cloud infrastructure requirements (only one node).</li><li>Suitable for anyone new to API Management.</li></ul> | <ul><li>Deployment does not provide High Availability.</li><li>Not network friendly. Deploying on a demilitarized zone (DMZ) would require a Reverse Proxy.</li></ul> |

!!! info
    For more information on manually configuring the production servers from scratch, see Configuring a Single Node.

### Active/active deployment

In this setup, API traffic is served by two single node (all-in-one) instances of WSO2 API Manager.

![]({{base_path}}/assets/attachments/103334465/103334467.png)

!!! info
    For more information on manually configuring the production servers from scratch, see Configuring an Active-Active Deployment.

| Pros | Cons |
|------|------|
| <ul><li>The system is highly available.</li><li>Production support is required for 2 API Manager nodes (you receive 24\*7 WSO2 production support).</li><li>Can handle up to 86 million API calls a day (up to 1000 API calls a second).</li><li>Deployment is up and running within hours.</li></ul> | <ul><li>Not network friendly. Deploying on a DMZ would require a Reverse Proxy.</li></ul> |

diff --git a/en/docs/learn/api-security/api-authentication/secure-apis-using-api-keys.md b/en/docs/learn/api-security/api-authentication/secure-apis-using-api-keys.md
deleted file mode 100644
index a730412e0f..0000000000
--- a/en/docs/learn/api-security/api-authentication/secure-apis-using-api-keys.md
+++ /dev/null
@@ -1,247 +0,0 @@
# Secure APIs with API Keys

An API key is the simplest form of application-based security that you can configure for an API. You can obtain an API key for a client application from WSO2 API Manager's Developer Portal, via the UI or via the REST APIs. Thereafter, the client application can use the API key to invoke the APIs that are secured with the API key security scheme.

WSO2 API Manager uses a self-contained JSON Web Token (JWT) as the API key, and this JWT access token is generated via the Developer Portal without communicating with the Key Manager.

When an API is invoked specifying an API key as the authentication method, the API-M Gateway performs the following two basic validations:

- Signature validation
- Subscription validation

## Prerequisites for API keys

- The API key should be a valid JWT signed using the primary keystore private key of the Developer Portal.

- The expected token format is as follows:

    `base64(header).base64(payload).base64(signature)`

- The public certificate of the private key that is used to sign the tokens should be added to the trust store under the `"gateway_certificate_alias"` alias. For more information, see [Import the public certificate into the client trust store.](#import)
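Since the API key is a self-contained JWT in the `base64(header).base64(payload).base64(signature)` format shown above, its payload can be inspected offline. The following is a minimal, illustrative Python sketch (the helper function and the toy token are our own, not part of WSO2 API Manager); it deliberately skips signature verification, which the Gateway still performs against the certificate in the trust store:

``` python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Return the (unverified) claims from a JWT's payload segment."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64url(data: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(data).encode()).decode().rstrip("=")

# Toy token for illustration only -- not a real WSO2 API key.
token = ".".join([
    _b64url({"alg": "RS256", "typ": "JWT"}),
    _b64url({"sub": "alice",
             "subscribedAPIs": [{"name": "PizzaShackAPI", "version": "1.0.0"}]}),
    "c2lnbmF0dXJl",  # placeholder signature segment
])

claims = decode_jwt_payload(token)
print(claims["subscribedAPIs"][0]["name"])  # PizzaShackAPI
```

Decoding like this is useful for checking whether a key carries the subscription information for a given API before using it; it is a debugging aid only and never replaces the Gateway's signature validation.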
!!! note
    The prerequisite about importing the public certificate does not apply if you use the default certificates, which are shipped with the product itself.

## Validation of API subscriptions

Subscription validation is mandatory for API keys. Keys generated before an application subscribes to an API do not contain the subscription information in the token details, so those keys are not allowed to access that specific API. Therefore, generate API keys only after the application has subscribed to the required API.

## Using API keys to secure an API

Follow the instructions below to use API key authentication in WSO2 API Manager.

### Step 1 - Create and publish an API

Create and publish an API that is secured with the API key security scheme as the application-level security. Let's work with the sample PizzaShack API for this purpose.

1. Sign in to the Publisher.

    `https://<hostname>:9443/publisher`

2. Click **DEPLOY SAMPLE API** to deploy the sample PizzaShack API.

3. Click **Runtime Configurations** and select **Application Level Security**.

4. Select **API Key** and click **SAVE**.

    [![Configure API key authentication]({{base_path}}/assets/img/learn/api-key-option.png)]({{base_path}}/assets/img/learn/api-key-option.png)

### Step 2 - Generate the API Key

1. Sign in to the Developer Portal.

    `https://<hostname>:9443/devportal`

2. Click **APIs** and click on the respective API (e.g., `PizzaShackAPI`).

3. Click **Subscriptions**.

4. Select an application and select a throttling policy.
-    !!! note
-        API keys work with any application type, whether JWT or OAuth.
-
-5. Click **Subscribe**.
-
-    [![Subscribe to the API]({{base_path}}/assets/img/learn/subscribe-to-api.png)]({{base_path}}/assets/img/learn/subscribe-to-api.png)
-
-6. Click **MANAGE APP**, corresponding to the application that you used to subscribe to the API.
-
-    [![View list of credentials]({{base_path}}/assets/img/learn/view-credentials-manage-app.png)]({{base_path}}/assets/img/learn/view-credentials-manage-app.png)
-
-7. Click **API KEY** and click **GENERATE KEY**.
-
-    [![Generate API key]({{base_path}}/assets/img/learn/generate-api-key.png)]({{base_path}}/assets/img/learn/generate-api-key.png)
-
-8. Optionally, define a validity period for the token.
-
-    By default, the API key does not expire. However, you can optionally define a validity period for the token as follows:
-
-    1. When you click **Generate Keys**, uncheck the **API Key with infinite validity period** option in the pop-up.
-
-    2. Enter the expiry time in seconds.
-
-9. Copy the API key.
-
-    [![Copy API key]({{base_path}}/assets/img/learn/copy-api-key.png)]({{base_path}}/assets/img/learn/copy-api-key.png)
-
-### Step 3 - Invoke the API
-
-Invoke the API using the API key.
-
-You can use any one of the following methods to invoke the API.
-
-- Specify the API key in the `apikey` header.
-
-    === "Format"
-        ``` bash
-        curl -k -X GET "https://localhost:8243/pizzashack/1.0.0/menu" -H "accept: application/json" -H "apikey: <API key>"
-        ```
-
-    === "Example"
-        ``` bash
-        curl -k -X GET "https://localhost:8243/pizzashack/1.0.0/menu" -H "accept: application/json" -H "apikey: eyJ4NXQiOiJaalJtWVRNd05USmpPV1U1TW1Jek1qZ3pOREkzWTJJeU1tSXlZMkV6TWpkaFpqVmlNamMwWmc9PSIsImtpZCI6ImdhdGV3YXlfY2VydGlmaWNhdGVfYWxpYXMiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJrYW5jaGFuYSIsImFwcGxpY2F0aW9uIjp7Im93bmVyIjoia2FuY2hhbmEiLCJ0aWVyIjoiVW5saW1pdGVkIiwibmFtZSI6IkRlZmF1bHRBcHBsaWNhdGlvbiIsImlkIjozNSwidXVpZCI6IjFmYjBiYjZlLTNiNWUtNDVmZS04Y2I1LTEwN2QzMGJmOTU0NyJ9LCJ0aWVySW5mbyI6eyJVbmxpbWl0ZWQiOnsic3RvcE9uUXVvdGFSZWFjaCI6dHJ1ZSwic3Bpa2VBcnJlc3RMaW1pdCI6MCwic3Bpa2VBcnJlc3RVbml0IjpudWxsfX0sImtleXR5cGUiOiJQUk9EVUNUSU9OIiwic3Vic2NyaWJlZEFQSXMiOlt7InN1YnNjcmliZXJUZW5hbnREb21haW4iOiJjYXJib24uc3VwZXIiLCJuYW1lIjoiUGl6emFTaGFja0FQSSIsImNvbnRleHQiOiJcL3Bpenphc2hhY2tcLzEuMC4wIiwicHVibGlzaGVyIjoiYWRtaW4iLCJ2ZXJzaW9uIjoiMS4wLjAiLCJzdWJzY3JpcHRpb25UaWVyIjoiVW5saW1pdGVkIn0seyJzdWJzY3JpYmVyVGVuYW50RG9tYWluIjoiY2FyYm9uLnN1cGVyIiwibmFtZSI6IlBpenphU2hhY2tBUEkiLCJjb250ZXh0IjoiXC9waXp6YXNoYWNrXC8xLjAuMCIsInB1Ymxpc2hlciI6ImFkbWluIiwidmVyc2lvbiI6IjEuMC4wIiwic3Vic2NyaXB0aW9uVGllciI6IlVubGltaXRlZCJ9XSwiaWF0IjoxNTcxNzY1Njk2LCJqdGkiOiJhOWVmMDFmYi1kNDA1LTQ0YTYtOWVkMi02ZTdhZjUyZGQ3ODMifQ==.KbxcrZv7buRSqtyI44eCGA_4mrGTRc0-ik4hmsYsmoFs5wbTXrcC1vZ7-fe9KMEWnyW6VeWJq-PnqDZzc4wOno02YMlUH9kGZ6bWj3z4RH9vVLd_xeBV50EXEDm7MbyeI-t7ADMYoOWOBBafNfiigm_86gj7LfeoSkGjsreFIJyhWIxepm3lO54cfYcDJAk3pB-T2bKC0aHJzFn_N_HuBN9lOy2yCPdJyoThQEbedBwtvh8WlTNKh7kL9Nj2E1ZwhKli0M9tuIsp08aztwUP3a-QPF4oIx4Lid0rYIr5jyQCHHor55wtzxJKH2VayZnEFIdySEjQBBj7SAfjcLXvXw=="
-        ```
-
-    === "Response"
-        ``` bash
-        [{"name":"BBQ Chicken Bacon","description":"Grilled white chicken, hickory-smoked bacon and fresh sliced onions in barbeque sauce","price":"24.99","icon":"/images/6.png"},{"name":"Chicken Parmesan","description":"Grilled chicken, fresh tomatoes, feta and mozzarella cheese","price":"11.99","icon":"/images/1.png"},{"name":"Chilly Chicken Cordon Bleu","description":"Spinash Alfredo sauce topped with grilled chicken, ham, onions and mozzarella","price":"23.99","icon":"/images/10.png"},{"name":"Double Bacon 6Cheese","description":"Hickory-smoked bacon, Julienne cut Canadian bacon, Parmesan, mozzarella, Romano, Asiago and and Fontina cheese","price":"20.99","icon":"/images/9.png"},{"name":"Garden Fresh","description":"Slices onions and green peppers, gourmet mushrooms, black olives and ripe Roma tomatoes","price":"11.99","icon":"/images/3.png"},{"name":"Grilled Chicken Club","description":"Grilled white chicken, hickory-smoked bacon and fresh sliced onions topped with Roma tomatoes","price":"14.99","icon":"/images/8.png"},{"name":"Hawaiian BBQ Chicken","description":"Grilled white chicken, hickory-smoked bacon, barbeque sauce topped with sweet pine-apple","price":"12.99","icon":"/images/7.png"},{"name":"Spicy Italian","description":"Pepperoni and a double portion of spicy Italian sausage","price":"23.99","icon":"/images/2.png"},{"name":"Spinach Alfredo","description":"Rich and creamy blend of spinach and garlic Parmesan with Alfredo sauce","price":"25.99","icon":"/images/5.png"},{"name":"Tuscan Six Cheese","description":"Six cheese blend of mozzarella, Parmesan, Romano, Asiago and Fontina","price":"24.99","icon":"/images/4.png"}]
-        ```
-
-- Specify the API key as a query parameter in the API request.
-
-    - `<API key>` - Encode the API key using a URL encoder (e.g., [https://www.urlencoder.org](https://www.urlencoder.org)).
-
-    === "Format"
-        ``` bash
-        curl -k -X GET "https://localhost:8243/pizzashack/1.0.0/menu?apikey=<URL-encoded API key>"
-        ```
-
-    === "Example"
-        ``` bash
-        curl -k -X GET "https://localhost:8243/pizzashack/1.0.0/menu?apikey=eyJ4NXQiOiJaalJtWVRNd05USmpPV1U1TW1Jek1qZ3pOREkzWTJJeU1tSXlZMkV6TWpkaFpqVmlNamMwWmc9PSIsImtpZCI6ImdhdGV3YXlfY2VydGlmaWNhdGVfYWxpYXMiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJrYW5jaGFuYSIsImFwcGxpY2F0aW9uIjp7Im93bmVyIjoia2FuY2hhbmEiLCJ0aWVyIjoiVW5saW1pdGVkIiwibmFtZSI6IkRlZmF1bHRBcHBsaWNhdGlvbiIsImlkIjozNSwidXVpZCI6IjFmYjBiYjZlLTNiNWUtNDVmZS04Y2I1LTEwN2QzMGJmOTU0NyJ9LCJ0aWVySW5mbyI6eyJVbmxpbWl0ZWQiOnsic3RvcE9uUXVvdGFSZWFjaCI6dHJ1ZSwic3Bpa2VBcnJlc3RMaW1pdCI6MCwic3Bpa2VBcnJlc3RVbml0IjpudWxsfX0sImtleXR5cGUiOiJQUk9EVUNUSU9OIiwic3Vic2NyaWJlZEFQSXMiOlt7InN1YnNjcmliZXJUZW5hbnREb21haW4iOiJjYXJib24uc3VwZXIiLCJuYW1lIjoiUGl6emFTaGFja0FQSSIsImNvbnRleHQiOiJcL3Bpenphc2hhY2tcLzEuMC4wIiwicHVibGlzaGVyIjoiYWRtaW4iLCJ2ZXJzaW9uIjoiMS4wLjAiLCJzdWJzY3JpcHRpb25UaWVyIjoiVW5saW1pdGVkIn0seyJzdWJzY3JpYmVyVGVuYW50RG9tYWluIjoiY2FyYm9uLnN1cGVyIiwibmFtZSI6IlBpenphU2hhY2tBUEkiLCJjb250ZXh0IjoiXC9waXp6YXNoYWNrXC8xLjAuMCIsInB1Ymxpc2hlciI6ImFkbWluIiwidmVyc2lvbiI6IjEuMC4wIiwic3Vic2NyaXB0aW9uVGllciI6IlVubGltaXRlZCJ9XSwiaWF0IjoxNTcxNzY1Njk2LCJqdGkiOiJhOWVmMDFmYi1kNDA1LTQ0YTYtOWVkMi02ZTdhZjUyZGQ3ODMifQ%3D%3D.KbxcrZv7buRSqtyI44eCGA_4mrGTRc0-ik4hmsYsmoFs5wbTXrcC1vZ7-fe9KMEWnyW6VeWJq-PnqDZzc4wOno02YMlUH9kGZ6bWj3z4RH9vVLd_xeBV50EXEDm7MbyeI-t7ADMYoOWOBBafNfiigm_86gj7LfeoSkGjsreFIJyhWIxepm3lO54cfYcDJAk3pB-T2bKC0aHJzFn_N_HuBN9lOy2yCPdJyoThQEbedBwtvh8WlTNKh7kL9Nj2E1ZwhKli0M9tuIsp08aztwUP3a-QPF4oIx4Lid0rYIr5jyQCHHor55wtzxJKH2VayZnEFIdySEjQBBj7SAfjcLXvXw%3D%3D"
        ```
-
-    === "Response"
-        ``` bash
-        [{"name":"BBQ Chicken Bacon","description":"Grilled white chicken, hickory-smoked bacon and fresh sliced onions in barbeque sauce","price":"24.99","icon":"/images/6.png"},{"name":"Chicken Parmesan","description":"Grilled chicken, fresh tomatoes, feta and mozzarella cheese","price":"11.99","icon":"/images/1.png"},{"name":"Chilly Chicken Cordon Bleu","description":"Spinash Alfredo sauce topped with grilled chicken, ham, onions and mozzarella","price":"23.99","icon":"/images/10.png"},{"name":"Double Bacon 6Cheese","description":"Hickory-smoked bacon, Julienne cut Canadian bacon, Parmesan, mozzarella, Romano, Asiago and and Fontina cheese","price":"20.99","icon":"/images/9.png"},{"name":"Garden Fresh","description":"Slices onions and green peppers, gourmet mushrooms, black olives and ripe Roma tomatoes","price":"11.99","icon":"/images/3.png"},{"name":"Grilled Chicken Club","description":"Grilled white chicken, hickory-smoked bacon and fresh sliced onions topped with Roma tomatoes","price":"14.99","icon":"/images/8.png"},{"name":"Hawaiian BBQ Chicken","description":"Grilled white chicken, hickory-smoked bacon, barbeque sauce topped with sweet pine-apple","price":"12.99","icon":"/images/7.png"},{"name":"Spicy Italian","description":"Pepperoni and a double portion of spicy Italian sausage","price":"23.99","icon":"/images/2.png"},{"name":"Spinach Alfredo","description":"Rich and creamy blend of spinach and garlic Parmesan with Alfredo sauce","price":"25.99","icon":"/images/5.png"},{"name":"Tuscan Six Cheese","description":"Six cheese blend of mozzarella, Parmesan, Romano, Asiago and Fontina","price":"24.99","icon":"/images/4.png"}]
-        ```
-
-## Additional Information
-
-### Importing the public certificate into the client trust store
-
-!!! note
-    Make sure to import the Developer Portal certificate into the API-M Gateway's client trust store under the same alias.
-
-Follow the instructions below to import the public certificate into the client trust store.
-
-1. Navigate to the `<API-M_HOME>/repository/resources/security/` directory.
-
-2. Run the following command to export the public certificate from WSO2 API Manager's key store (`wso2carbon.jks`).
-
-    `keytool -export -alias wso2carbon -file wso2.crt -keystore wso2carbon.jks`
-
-3. Enter `wso2carbon`, the default password of the key store, when prompted.
-
-4. Run the following command to import the public certificate into the trust store.
-
-    ```
-    keytool -import -trustcacerts -keystore client-truststore.jks -alias gateway_certificate_alias -file wso2.crt
-    ```
-
-5. Enter `wso2carbon`, the default password of the trust store, when prompted.
-
-### Changing the alias name in the JWT
-
-By default, the alias name is `gateway_certificate_alias`. Follow the instructions below if you need to change the alias name in the JWT.
-
-1. Navigate to the `<API-M_HOME>/repository/conf/deployment.toml` file.
-
-2. Configure the `api_key_alias` value under `[apim.devportal]` as follows:
-
-    ```
-    [apim.devportal]
-    api_key_alias = "<alias>"
-    ```
-
-### Configuring custom keystores
-
-You can also configure and use a custom keystore in API Manager to sign the API keys. Given below is a sample TOML configuration for a custom keystore in the API Manager server. For more information, see the [Configuration Catalog]({{base_path}}/reference/config-catalog/).
-
-To configure custom keystores, add the following to the `<API-M_HOME>/repository/conf/deployment.toml` file.
-
-``` toml
-[custom_keystore.APIKeyKeyStore]
-file_name = "apikeysigner.jks"
-type = "JKS"
-password = "wso2carbon"
-alias = "apikeysigner"
-key_password = "wso2carbon"
-```
-
-If you have generated a custom keystore and need to use it to sign the API keys, configure the following TOML settings to define which keystore and certificates should be used. Given below is a sample TOML configuration that refers to a custom keystore named `APIKeyKeyStore` and the certificate with the alias `apikeysigner`.
-To configure a custom keystore to sign the API keys in the Developer Portal node, add the following to the `<API-M_HOME>/repository/conf/deployment.toml` file.
-
-``` toml
-[apim.devportal]
-api_key_keystore = "APIKeyKeyStore"
-api_key_alias = "apikeysigner"
-```
-
-### API key restriction for IP address and HTTP referrer
-
-After issuing an API key for an application, it can be used by anyone to invoke an API subscribed to the application. However, if an unauthorized party gets hold of the token, they can make unwanted invocations of the APIs. To prevent this, you can define the authorized parties when generating a token.
-
-WSO2 API Manager allows API keys to be restricted based on two approaches.
-
-#### 1) IP address restriction
-
-The IP address restriction ensures that only clients with specific IP addresses can use the token. The IP addresses can be specified in the following formats.
-
-- IPv4 (e.g., `192.168.1.2`)
-- IPv6 (e.g., `2002:eb8::2`)
-- IP range in CIDR notation (e.g., `152.12.0.0/13`, `1001:ab8::/14`)
-
-**Generating an API key with an IP restriction**
-
-1. Navigate to the API key generation window of the specific application in the Developer Portal.
-
-2. Select `IP Addresses`, add the IP addresses in the text input as shown below, and generate the key.
-
-    [![IP Restricted API key]({{base_path}}/assets/img/learn/ip-api-key.png)]({{base_path}}/assets/img/learn/ip-api-key.png)
-
-#### 2) HTTP referer restriction
-
-When the HTTP referer restriction is enabled, only the specified HTTP referrers can use the token. Therefore, when API clients run in web browsers, you can use this restriction to limit access to an API to specific web pages only. The referrer can be specified in the following formats.
-
-- A specific URL with an exact path: `www.example.com/path`
-- Any URL in a single subdomain, using a wildcard asterisk (*): `sub.example.com/*`
-- Any subdomain or path URLs in a single domain, using wildcard asterisks (\*): `*.example.com/*`
-
-**Generating an API key with the HTTP referer restriction**
-
-1. Navigate to the API key generation window of that specific application in the Developer Portal.
-
-2. Select `HTTP Referrers (Web Sites)`, add the referrers in the text input as shown below, and generate the key.
-
-    [![HTTP Referer Restricted API key]({{base_path}}/assets/img/learn/http-referer-api-key.png)]({{base_path}}/assets/img/learn/http-referer-api-key.png)
diff --git a/en/docs/observe/api-manager-analytics/default-ports-of-wso2-api-m-analytics.md b/en/docs/observe/api-manager-analytics/default-ports-of-wso2-api-m-analytics.md
deleted file mode 100644
index 98f7554d33..0000000000
--- a/en/docs/observe/api-manager-analytics/default-ports-of-wso2-api-m-analytics.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Default Ports of WSO2 API-M Analytics
-
-Given below are the specific ports used by WSO2 API-M Analytics.
-
-- 7712 - Thrift SSL port for secure transport, where the client (Gateway) is authenticated to use WSO2 API-M Analytics.
-- 7612 - Thrift TCP port where WSO2 API-M Analytics receives events from clients (Gateways).
-- 7444 - The default port for the Siddhi Store REST API.
-- 9444 - MSF4J HTTPS port used to upload analytics data from the Microgateway.
-- 9643 - Default port for the analytics dashboard portal.
diff --git a/en/docs/observe/api-manager-analytics/general-data-protection-regulation-gdpr-for-wso2-api-manager-analytics.md b/en/docs/observe/api-manager-analytics/general-data-protection-regulation-gdpr-for-wso2-api-manager-analytics.md
deleted file mode 100644
index 7b35794555..0000000000
--- a/en/docs/observe/api-manager-analytics/general-data-protection-regulation-gdpr-for-wso2-api-manager-analytics.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# General Data Protection Regulation (GDPR) for WSO2 API Manager Analytics
-
-In API Manager Analytics, Personally Identifiable Information (PII) of a user can be included in the log files and in the data sources associated with the API Manager Analytics distribution.
-
-To handle the PII of a user in compliance with GDPR, execute the following steps.
-
-1. Remove personally identifiable information of a user from the API Manager Analytics logs via the Forget-me tool.
-2. Obfuscate personally identifiable information of a user stored in the data sources associated with API Manager Analytics via the GDPR-Client.
-
-## Removing personally identifiable information via the Forget-me tool
-
-!!! tip "Before you begin:"
-
-    Note that this tool is designed to run in offline mode (i.e., the server should be shut down or run on another machine) in order to prevent unnecessary load on the server.
-
-**Step 1: Configure the config.json file**
-
-The `<API-M_ANALYTICS_HOME>/wso2/tools/identity-anonymization-tool/conf/config.json` file specifies the log file locations from which persisted user data needs to be removed.
-
-Replace the content in the `config.json` file with the content below, and then replace `<API-M_ANALYTICS_HOME>` with the path to the API Manager Analytics distribution.
-
-If you have configured logs with PII to be saved in another location, you can add it to this list of processors.
-
-``` js
-{
-  "processors" : [
-    "log-file"
-  ],
-  "directories": [
-    {
-      "dir": "worker-logs",
-      "type": "log-file",
-      "processor" : "log-file",
-      "log-file-path" : "<API-M_ANALYTICS_HOME>/wso2/worker/logs",
-      "log-file-name-regex" : "(.)*(log|out)"
-    },
-    {
-      "dir": "dashboard-logs",
-      "type": "log-file",
-      "processor" : "log-file",
-      "log-file-path" : "<API-M_ANALYTICS_HOME>/wso2/dashboard/logs",
-      "log-file-name-regex" : "(.)*(log|out)"
-    }
-  ]
-}
-```
-
-For information on changing these configurations, see [Configuring the config.json file]({{base_path}}/administer/product-security/general-data-protection-regulation-gdpr-for-wso2-api-manager/#configuring-the-master-configuration-file) in the Product Administration Guide.
-
-**Step 2: Execute the Forget-me tool**
-
-1. Open a new terminal window and navigate to the `<API-M_ANALYTICS_HOME>/bin` directory.
-2. Execute one of the following commands depending on your operating system:
-    - On Linux/Mac OS: `./forgetme.sh -U <username>`
-    - On Windows: `forgetme.bat -U <username>`
-
-!!!note
-    When specifying the `<username>`, provide the full tenant-qualified username.
-
-    e.g.,
-    ```
-    ./forgetme.sh -U user1@abc.com
-    ```
-
-## Obfuscate personally identifiable information of a user stored in the data sources via the GDPR-Client
-
-The gdpr-client tool obfuscates personally identifiable information of a user stored in the databases related to API Manager Analytics.
-
-This tool removes the following personally identifiable information of a specified user:
-
-1. Username
-2. User email address
-3. User IP address
-
-!!! tip "Before you begin:"
-
-    Note that this tool is designed to run in offline mode (i.e., the worker and dashboard servers should be shut down or run on another machine) in order to prevent unnecessary load on the server. If this tool runs in online mode (i.e., when the servers are running), DB lock situations may occur on the H2 databases.
-
-    If you have configured any JDBC database other than the default H2 database, copy the relevant JDBC driver to the `<API-M_ANALYTICS_HOME>/wso2/tools/gdpr-client/lib` directory.
-
-**Step 1: Configure the conf.yaml file**
-
-The `<API-M_ANALYTICS_HOME>/wso2/tools/gdpr-client/conf/conf.yaml` file specifies the configurations for the data sources that store personally identifiable information (PII) of a user. This `conf.yaml` file also lists the tables and column names in which the PII of a user is stored.
-
-1. Replace the `<API-M_ANALYTICS_HOME>` reference (used in the database configurations and secure vault configurations) with the relevant absolute path to the API Manager Analytics home directory.
-2. If you have configured any JDBC database other than the default H2 database for the data sources defined in the `conf.yaml` file (`APIM_ANALYTICS_DB` and `DASHBOARD_DB`), you need to edit the relevant data source configuration.
-
-    For example, if you have configured MySQL for `APIM_ANALYTICS_DB`, you need to add the relevant MySQL JDBC driver to the `<API-M_ANALYTICS_HOME>/wso2/tools/gdpr-client/lib` directory and edit the `APIM_ANALYTICS_DB` data source as shown below.
-
-    ```yaml
-    wso2.datasources:
-      dataSources:
-        - name: APIM_ANALYTICS_DB
-          description: "The datasource used for APIM statistics aggregated data."
-          definition:
-            type: RDBMS
-            configuration:
-              jdbcUrl: 'jdbc:mysql://localhost:3306/APIM_ANALYTICS_DB'
-              username: wso2carbon
-              password: wso2carbon
-              driverClassName: com.mysql.jdbc.Driver
-              maxPoolSize: 1
-              idleTimeout: 60000
-              connectionTestQuery: SELECT 1
-              validationTimeout: 30000
-              isAutoCommit: false
-    ```
-
-**Step 2: Execute the gdpr-client tool**
-
-1. Open a new terminal window and navigate to the `<API-M_ANALYTICS_HOME>/bin` directory.
-2. Execute one of the following commands depending on your operating system:
-    - On Linux/Mac OS: `./gdprclient.sh -U <username> -T <tenant-domain> -E <email> -I <IP-address>`
-    - On Windows: `gdprclient.bat -U <username> -T <tenant-domain> -E <email> -I <IP-address>`
-
-    e.g.,
-    ```
-    ./gdprclient.sh -U user1 -T abc.com -E user1@abc.com -I 127.0.0.1
-    ```
-
-    !!!warning
-        Before running the command, make sure that you have finalized the command-line options given with the command. For example, if you run `./gdprclient.sh -U user1 -T abc.com -E user1@abc.com`, the IP address of the user is not updated (only the username and the email address are replaced). You cannot rerun the tool later to update the user's IP address with a pseudonym value, because by then the username has already been replaced with the pseudonym value. If you need to remove the IP address as well, execute `./gdprclient.sh -U user1 -T abc.com -E user1@abc.com -I 127.0.0.1` in the first place.
-
-        Similarly, if you do not provide the `-E <email>` option, no user-associated email address is replaced, and you cannot rerun the tool to replace the email value afterwards.
-
-    **The following is the list of all the command-line options that can be used with gdpr-client.**
-
-    | **Option** | **Description** | **Mandatory/ Optional** | **Example** |
-    | -- | -- | -- | -- |
-    | -U | Username (without appending the user tenant domain). | Mandatory | -U john |
-    | -T | Tenant domain of the user. If this option is not provided, `carbon.super` is used as the tenant domain by default. | Optional | -T abc.com |
-    | -E | User email. If this option is not provided, the stored references (in database tables) of the user email are not removed. You cannot rerun the tool and replace the email references of the user afterwards if you did not provide this option in the first run. | Optional | -E john@abc.com |
-    | -I | User IP address. If this option is not provided, the stored references (in database tables) of the user IP address are not removed. You cannot rerun the tool and replace the user IP address references afterwards if you did not provide this option in the first run. | Optional | -I 123.3.5.2 |
-    | -pu | Pseudonym with which the username and email are replaced. If this option is not provided, a random UUID value is generated by default. | Optional | -pu "123-343-435-545-dfd-4" |
-    | -sha256 | If this option is provided, a SHA256 hash value is generated as the pseudonym to obfuscate the username and user email address. | Optional | -sha256 |
diff --git a/en/docs/observe/api-manager-analytics/integrating-with-google-analytics.md b/en/docs/observe/api-manager-analytics/integrating-with-google-analytics.md
deleted file mode 100644
index 608532708b..0000000000
--- a/en/docs/observe/api-manager-analytics/integrating-with-google-analytics.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Integrating with Google Analytics
-
-You can configure the API Manager to track runtime statistics of API invocations through [Google Analytics](http://www.google.com/analytics). Google Analytics is a service that allows you to track visits to a website and generate detailed statistics on them.
-
-This guide explains how to set up API Manager to feed runtime statistics to Google Analytics for summarization and display.
-
-1. Set up a Google Analytics account, if you are not already subscribed, and receive a Tracking ID, which is of the format "UA-XXXXXXXX-X". A Tracking ID is issued at the time an account is created with Google Analytics.
-2. Log in to the API Manager management console (`https://localhost:9443/carbon`) using admin/admin credentials and go to the **Main -> Resources -> Browse** menu.
-
-    ![Browse Management Console]({{base_path}}/assets/img/learn/management-console-browse.png)
-
-3. Navigate to the `/_system/governance/apimgt/statistics/ga-config.xml` file.
-
-    ![ga-config file]({{base_path}}/assets/img/learn/ga-config-xml.png)
-
-4. Change the `<Enabled>` element to `true`, set your tracking ID in the `<TrackingID>` element, and click **Save**.
-
-    ![Enable Google Analytics Tracking]({{base_path}}/assets/img/learn/enable-google-analytics.png)
-
-5. If you want to enable tracking for tenants, log in to the management console with a tenant's credentials, click **Source View**, and then add the following parameter to the `org.wso2.carbon.mediation.registry.WSO2Registry` registry definition near the top (repeat this step for each tenant):
-
-    `15000`
-
-    The following screenshot illustrates this change:
-
-    ![Screenshot of the service bus source view with the registry configuration highlighted]({{base_path}}/assets/img/learn/service-bus-configuration.png)
-
-6. API Manager is now integrated with Google Analytics. A user who has subscribed to a published API through the Developer Portal should see a `Real-Time` icon after logging in to their Google Analytics account. Click this icon and select **Overview**.
-
-7. Invoke the above API using the embedded [WSO2 REST Client]({{base_path}}/consume/invoke-apis/invoke-apis-using-tools/invoke-an-api-using-the-integrated-api-console/) (or any third-party REST client such as cURL).
-
-    #### Real-time statistics
-
-8. This is one invocation of the API. Accordingly, Google Analytics graphs and statistics are displayed at runtime. This example displays the **PageViews** per second graph and 1 user as active.
-
-    ![Google Analytics Graphs]({{base_path}}/assets/img/learn/google-analytics-graphs.png)
-
-    #### Report statistics
-
-    Google Analytics reporting statistics take more than 24 hours from the time of invocation to populate. Shown below is a sample dashboard with populated statistics.
-
-    ![Google Analytics Report]({{base_path}}/assets/img/learn/google-analytics-report.png)
-
-    There are widgets with statistics related to Audience, Traffic, Page Content, Visit Duration, etc. You can add any widget of your preference to the dashboard.
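For reference, the `ga-config.xml` registry resource edited in step 4 above has roughly the following shape. Only the `<Enabled>` and `<TrackingID>` elements are confirmed by the steps above; the surrounding element names in this sketch are illustrative, so verify them against the actual file in your registry:

```xml
<!-- Illustrative sketch only: element names other than <Enabled> and
     <TrackingID> may differ in your ga-config.xml -->
<GoogleAnalyticsTracking>
    <!-- Set to true to turn on publishing of invocation statistics -->
    <Enabled>true</Enabled>
    <!-- The Tracking ID issued when the Google Analytics account was created -->
    <TrackingID>UA-XXXXXXXX-X</TrackingID>
</GoogleAnalyticsTracking>
```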
-
diff --git a/en/docs/observe/api-manager-analytics/publishing-events-to-analytics-cloud.md b/en/docs/observe/api-manager-analytics/publishing-events-to-analytics-cloud.md
deleted file mode 100644
index 334af5b681..0000000000
--- a/en/docs/observe/api-manager-analytics/publishing-events-to-analytics-cloud.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Publishing Events to Analytics Cloud
-
-In order to view analytics, you need to publish events to the Analytics Cloud and view them in a dashboard available there.
-
-To publish events to the Analytics Cloud, you need to configure WSO2 API Manager to enable analytics.
-
-1. Enable analytics. Open `<API-M_HOME>/repository/conf/deployment.toml` and enable analytics as follows.
-
-    ```toml
-    [apim.analytics]
-    enable = true
-    ```
-
-2. The Analytics Cloud configuration endpoint is required to publish events to the cloud. You need to configure the `auth_token` property with an auth token. The configuration endpoint and auth token configurations are as follows.
-
-    ```toml
-    [apim.analytics]
-    enable = true
-    config_endpoint = "https://analytics-event-auth.st.choreo.dev/auth/v1"
-    auth_token = "<auth-token>"
-    ```
-
-3. Restart the API Manager and try to invoke APIs.
diff --git a/en/docs/observe/api-manager-analytics/troubleshooting-analytics.md b/en/docs/observe/api-manager-analytics/troubleshooting-analytics.md
deleted file mode 100644
index 74e3899792..0000000000
--- a/en/docs/observe/api-manager-analytics/troubleshooting-analytics.md
+++ /dev/null
@@ -1,2 +0,0 @@
-!!! Note
-    Content to be added. WIP.
\ No newline at end of file
diff --git a/en/docs/publish/api-microgateway/internal-communication-protocol.md b/en/docs/publish/api-microgateway/internal-communication-protocol.md
deleted file mode 100644
index 7595d57f1f..0000000000
--- a/en/docs/publish/api-microgateway/internal-communication-protocol.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Communication Protocol of API Microgateway Components
-
-WSO2 API Microgateway uses an implementation of Envoy's [xDS protocol]({{envoy_path}}/api-docs/xds_protocol#xds-rest-and-grpc-protocol) to communicate between its components, specifically for the Adapter -> Enforcer and Adapter -> Router communication. The Adapter -> Router communication is already implemented by Envoy and its control plane; WSO2 implements the same protocol for the Adapter -> Enforcer communication.
-
-The Envoy xDS protocol is implemented on top of gRPC. This allows both the server and the client to stream data to each other: clients can request the data they need from the server, and the server can push the requested data back to the client when new data becomes available.
-
-## WSO2 xDS Implementation
-
-The WSO2 xDS implementation is mainly used for communication between the Adapter and the Enforcer. Over this communication link, the Enforcer receives the latest updates of the resources it requires during startup and at runtime. These resources can be APIs, configurations, subscriptions, revoked tokens, etc. The Enforcer then uses this data to populate in-memory data structures and validate requests based on the provided configurations.
-
-The following is the request/response flow of the Adapter -> Enforcer xDS communication.
-
-1. During startup, the Enforcer sends the initial [`DiscoveryRequest`]({{envoy_path}}/api-v3/service/discovery/v3/discovery.proto#service-discovery-v3-discoveryrequest) to the Adapter.
-    - This request mainly specifies the resource type (e.g., API, Config, Application, Subscription) expected by the xDS client (the Enforcer).
-2. The Adapter checks whether new resources are available in its cache for the requested resource type.
-    - If available, the Adapter sends a [`DiscoveryResponse`]({{envoy_path}}/api-v3/service/discovery/v3/discovery.proto#service-discovery-v3-discoveryresponse).
-    - If the resource is unavailable, the Adapter does not respond to the client immediately. It waits until a new resource update happens for the requested resource type.
-    - As soon as the new resource is added to the Adapter's xDS cache, it responds to the initial client request with a `DiscoveryResponse`.
-3. When the Enforcer receives a new `DiscoveryResponse`, it extracts the resources from the response and populates the in-memory data structures used for request validation.
-4. The Enforcer then sends an Ack/Nack `DiscoveryRequest` to the Adapter.
-    - If the Enforcer is able to process the `DiscoveryResponse` successfully, it sends a new `DiscoveryRequest` as an `Ack` for the last received version of the resource.
-    - If the Enforcer is unable to process the `DiscoveryResponse` successfully, it sends a new `DiscoveryRequest` as a `Nack`. The version information of this request contains the version of the last successfully processed resource.
-5. The Adapter keeps track of the last version `Ack`ed by each Enforcer node and uses that information to decide when and what to send in the next `DiscoveryResponse` to the Enforcer.
-    - When a new resource cache update happens in the Adapter, it notifies all subscribed Enforcer nodes of this change. If an Enforcer node `Ack`ed the response, the Adapter sends another response to that Enforcer node only when a new resource version update happens in the Adapter's resource cache.
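The Ack/Nack flow above can be sketched as a small state machine. The following is a simplified, self-contained Go model of that version/nonce bookkeeping — not the actual WSO2 Adapter/Enforcer code — where the type and method names are illustrative only:

```go
package main

import "fmt"

// DiscoveryRequest mirrors the xDS request fields that matter here.
type DiscoveryRequest struct {
	TypeURL     string // requested resource type, e.g. an API or Subscription type
	VersionInfo string // last successfully processed version ("" initially)
}

// DiscoveryResponse carries a resource snapshot plus its version.
type DiscoveryResponse struct {
	TypeURL     string
	VersionInfo string
	Resources   []string
}

// Adapter holds a resource cache with a version counter.
type Adapter struct {
	version   int
	resources []string
}

// Update simulates a cache update in the Adapter (e.g., a new API deployed).
func (a *Adapter) Update(res ...string) {
	a.version++
	a.resources = append(a.resources, res...)
}

// Respond answers a DiscoveryRequest only when the client is behind the cache;
// a real server would instead park the stream until Update happens.
func (a *Adapter) Respond(req DiscoveryRequest) (DiscoveryResponse, bool) {
	current := fmt.Sprintf("%d", a.version)
	if req.VersionInfo == current {
		return DiscoveryResponse{}, false // nothing new: wait for the next update
	}
	return DiscoveryResponse{TypeURL: req.TypeURL, VersionInfo: current, Resources: a.resources}, true
}

// Enforcer tracks the last version it managed to apply.
type Enforcer struct {
	acked string
}

// Apply processes a response and returns the next request: an Ack (version
// advances) on success, a Nack (last good version kept) on failure.
func (e *Enforcer) Apply(resp DiscoveryResponse, processedOK bool) DiscoveryRequest {
	if processedOK {
		e.acked = resp.VersionInfo // Ack: advance to the received version
	}
	return DiscoveryRequest{TypeURL: resp.TypeURL, VersionInfo: e.acked}
}

func main() {
	adapter := &Adapter{}
	enforcer := &Enforcer{}

	adapter.Update("PizzaShackAPI") // Adapter cache moves to version 1
	req := DiscoveryRequest{TypeURL: "wso2.discovery.api.Api"}

	resp, sent := adapter.Respond(req)
	req = enforcer.Apply(resp, true)   // Enforcer processes the snapshot and Acks
	fmt.Println(sent, req.VersionInfo) // true 1

	_, sent = adapter.Respond(req) // nothing to send until the next Update
	fmt.Println(sent)              // false
}
```

The key design point the sketch captures is that the server is version-driven: it pushes a new `DiscoveryResponse` only when its cache version moves past the client's last `Ack`ed version, which is why a `Nack` (keeping the old version in the request) causes no re-push until a genuinely new snapshot exists.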
diff --git a/en/docs/publish/api-microgateway/quick-start-guide.md b/en/docs/publish/api-microgateway/quick-start-guide.md
deleted file mode 100644
index 8bba6488d2..0000000000
--- a/en/docs/publish/api-microgateway/quick-start-guide.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Quick Start Guide
-
-## Design Your First API
-
-This section is a step-by-step guide to create, publish, and invoke an API using WSO2 API Microgateway.
-
-### Before you begin...
-
-1. Install [Docker](https://docs.docker.com/engine/install/).
-2. Install [Docker Compose](https://docs.docker.com/compose/install/).
-
-### Objectives
-
-1. Set up the Microgateway and the CLI tool (APICTL).
-2. Create and deploy an API project.
-3. Invoke the API using a generated key.
-
-Let's get started...
-
-### Step 1 - Set up the Microgateway and the CLI tool (APICTL)
-
-1. Download the CLI tool (APICTL) and the Microgateway distribution from the Assets on the
-    [GitHub release page](https://github.com/wso2/product-microgateway/releases) and
-    extract them to a folder of your choice.
-
-    From here onwards, the location where the CLI tool was extracted is referred to as `CLI_HOME` and the location where the Microgateway distribution was extracted is referred to as `MG_HOME`.
-
-2. Using your command-line client tool, add the `CLI_HOME` folder to your `PATH` variable.
-
-    ``` bash
-    export PATH=$PATH:<CLI_HOME>
-    ```
-
-### Step 2 - Create and deploy an API project
-
-1. Let's create our first project, named "petstore", by adding the
-    [OpenAPI definition](https://petstore.swagger.io/v2/swagger.json) of the Petstore API.
-    You can do that by executing the following command using your command-line tool.
-
-    ``` bash
-    apictl init petstore --oas https://petstore.swagger.io/v2/swagger.json
-    ```
-
-    The project is now initialized. You should notice a directory named "petstore" being created in the location
-    where you executed the command.
-
-2. Now let's start the Microgateway on Docker by executing the Docker Compose script inside `MG_HOME`.
    Navigate to `MG_HOME` and execute the following command.

    ``` bash
    docker-compose up -d
    ```

    Once the containers are up and running, you can monitor their status using the following command.

    ``` bash
    docker ps | grep mg-
    ```

3. Now let's deploy our first API to the Microgateway using the project created earlier.
   Navigate to the location where the petstore project was initialized.
   Execute the following command to deploy the API to the Microgateway.

    ``` bash
    apictl mg deploy --host https://localhost:9843 --file petstore -u admin -p admin -k
    ```

    !!! note
        The user credentials can be configured in the configurations of the `MG_HOME` distribution.
        `admin:admin` is the default set of credentials accepted by the Microgateway adapter.
        Go to `MG_HOME/resources/conf/config.toml` and modify it as below.

        ``` toml
        [[adapter.server.users]]
        username = "admin"
        password = "admin"
        ```

### Step 3 - Invoke the API

1. The next step is to invoke the API using a REST tool. Since APIs on the Microgateway are secured by default,
   we need a valid token in order to invoke the API.
   Use the following sample token accepted by the Microgateway to access the API.
   Let's set the token as a command-line variable.
- - ``` bash - TOKEN=eyJ4NXQiOiJNell4TW1Ga09HWXdNV0kwWldObU5EY3hOR1l3WW1NNFpUQTNNV0kyTkRBelpHUXpOR00wWkdSbE5qSmtPREZrWkRSaU9URmtNV0ZoTXpVMlpHVmxOZyIsImtpZCI6Ik16WXhNbUZrT0dZd01XSTBaV05tTkRjeE5HWXdZbU00WlRBM01XSTJOREF6WkdRek5HTTBaR1JsTmpKa09ERmtaRFJpT1RGa01XRmhNelUyWkdWbE5nX1JTMjU2IiwiYWxnIjoiUlMyNTYifQ==.eyJhdWQiOiJBT2syNFF6WndRXzYyb2QyNDdXQnVtd0VFZndhIiwic3ViIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwibmJmIjoxNTk2MDA5NTU2LCJhenAiOiJBT2syNFF6WndRXzYyb2QyNDdXQnVtd0VFZndhIiwic2NvcGUiOiJhbV9hcHBsaWNhdGlvbl9zY29wZSBkZWZhdWx0IiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3Q6OTQ0My9vYXV0aDIvdG9rZW4iLCJrZXl0eXBlIjoiUFJPRFVDVElPTiIsImV4cCI6MTYyNzU0NTU1NiwiaWF0IjoxNTk2MDA5NTU2LCJqdGkiOiIyN2ZkMWY4Ny01ZTI1LTQ1NjktYTJkYi04MDA3MTFlZTJjZWMifQ==.otDREOsUUmXuSbIVII7FR59HAWqtXh6WWCSX6NDylVIFfED3GbLkopo6rwCh2EX6yiP-vGTqX8sB9Zfn784cIfD3jz2hCZqOqNzSUrzamZrWui4hlYC6qt4YviMbR9LNtxxu7uQD7QMbpZQiJ5owslaASWQvFTJgBmss5t7cnurrfkatj5AkzVdKOTGxcZZPX8WrV_Mo2-rLbYMslgb2jCptgvi29VMPo9GlAFecoMsSwywL8sMyf7AJ3y4XW5Uzq7vDGxojDam7jI5W8uLVVolZPDstqqZYzxpPJ2hBFC_OZgWG3LqhUgsYNReDKKeWUIEieK7QPgjetOZ5Geb1mA== - ``` - -2. We can now invoke the API running on the microgateway using cURL as below. - - ``` bash - curl -X GET "https://localhost:9095/v2/pet/findByStatus?status=available" -H "accept: application/json" -H "Authorization:Bearer $TOKEN" -k - ``` - -Congratulations! You have successfully created your first API, and invoked it via the API Microgateway. diff --git a/en/docs/reference/customize-product/customizations/adding-a-user-signup-workflow-using-bps.md b/en/docs/reference/customize-product/customizations/adding-a-user-signup-workflow-using-bps.md deleted file mode 100644 index 6fa20a6e5b..0000000000 --- a/en/docs/reference/customize-product/customizations/adding-a-user-signup-workflow-using-bps.md +++ /dev/null @@ -1,247 +0,0 @@ -# Adding a User Signup Workflow - -This section explains how to attach a custom workflow to the user signup operation in the API Manager. - -!!! 
note - You can either use the **Enterprise Integrator(EI)** or the **Business Process Server(BPS)** for the business process tasks with API Manager during the Workflow configuration process. - -!!! tip - **Before you begin** , if you have changed the API Manager's default user and role, make sure you do the following changes : - - 1. Change the credentials of the workflow configurations in the registry resource `_system/governance/apimgt/applicationdata/workflow-extensions.xml` . - 2. Point the database that has the API Manager user permissions to Enterprise Integrator(EI)/Business Process Server(BPS). - 3. Share any LDAPs, if exist. - 4. Unzip the `/business-processes/user-signup/UserApprovalTask-1.0.0.zip` file, update the role as follows in the `UserApprovalTask.ht` file, and ZIP the `UserApprovalTask.ht` folder. - - **Format** - - ``` java - - [new-role-name] - - ``` - -#### Configuring the Enterprise Integrator - -!!! note - Follow this sub section, only if you will be using the **Enterprise Integrator(EI)** for the business process tasks. If not please refer the sub section for [Configuring the Business Process Server](#configuring-the-business-process-server). - -1. Download [WSO2 Enterprise Integrator](https://wso2.com/integration). - -2. Make sure that an offset of 2 is added to the default EI port in the `/wso2/business-process/conf/carbon.xml` file. This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/reference/guides/changing-the-default-ports-with-offset). - - ``` xml - 2 - ``` - - !!! 
tip - If you **change the EI port offset to a value other than 2 or run the API Manager and EI on different machines** (therefore, want to set the `hostname` to a different value than `localhost` ), you do the following : - - - Search and replace the value 9765 in all the files (.epr) inside `/business-processes` folder with the  new port (9763 + port offset). - - !!! note - **Note:** Make sure that the port offset is updated in the following files as well. Note that the zipped files should be unzipped for you to be able to see the files - - -`/business-processes/user-signup/HumanTask/UserApprovalTask-1.0.0.zip/UserApprovalTask.wsdl` - - -`/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip/UserApprovalTask.wsdl` - - -`/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip/WorkflowCallbackService.wsdl` - - -3. Open the `/wso2/business-process/conf/humantask.xml` file and `/wso2/business-process/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. For further information on this configuration see [Configuring Human Task Coordination](https://docs.wso2.com/display/BPS360/Configuring+Human+Task+Coordination) . - - ``` xml - true - ``` - -4. Copy the following 2 files from the `/business-processes/epr` folder to the `/wso2/business-process/repository/conf/epr` folder. - - - `/business-processes/epr/UserSignupProcess.epr` - - `/business-processes/epr/UserSignupService.epr` - - !!! note - - If the `/wso2/business-process/repository/conf/epr` folder isn't there, please create it. - - - Make sure to give the correct credentials in the `/wso2/business-process/repository/conf/epr` files. - - 1. Update the `/business-processes/epr/UserSignupProcess.epr` file according to the port offset configured in API Manager. (Default port 8243). - - ```java - https://localhost:8243/services/WorkflowCallbackService - ``` - - 2. 
Update the `/business-processes/epr/UserSignupService.epr` file according to the port offset of EI. (Default port 9763 + 2). - - ```java - http://localhost:9765/services/UserApprovalService - ``` - -5. [Start the EI server](https://docs.wso2.com/display/EI650/Running+the+Product#RunningtheProduct-Startingtheserver) and log in to its management console ( `https://:9443+/carbon` ). - -
    !!! warning
        If you are using Mac OS with High Sierra, you may encounter the following warning when logging in to the management console, due to a compression issue that exists in the High Sierra SDK.

        `WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request)`

        To avoid this issue, open `/wso2/business-process/conf/tomcat/catalina-server.xml`, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the EI.
    - -6. Select **BPEL** under the **Processes** > **Add** menu and upload the `/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip` file to EI. This is the business process archive file. - - ![Add BPEL to EI]({{base_path}}/assets/img/learn/bpel-upload-signup-workflow.png) - -7. Select **Add** under the **Human Tasks** menu and upload the `/business-processes/user-signup/HumanTask/UserApprovalTask-1.0.0.zip` file to EI. This is the human task archived file. - - ![Add Human Task to EI]({{base_path}}/assets/img/learn/add-human-task-signup.png) - -#### Configuring the Business Process Server - -!!! note - Follow this sub section, only if you will be using the **Business Process Server(BPS)** for the business process tasks. If not please refer the sub section for [Configuring the Enterprise Integrator](#configuring-the-enterprise-integrator). - -1. Download [WSO2 Business Process Server](https://wso2.com/api-manager/) . - -2. Set an offset of 2 to the default BPS port in the `/repository/conf/carbon.xml` file. This prevents port conflicts that occur when you start more than one WSO2 product on the same server. For more information, see [Changing the Default Ports with Offset]({{base_path}}/reference/guides/changing-the-default-ports-with-offset). - - ``` xml - 2 - ``` - -!!! tip - If you **change the BPS port offset to a value other than 2 or run the API Manager and BPS on different machines** (therefore, want to set the `hostname` to a different value than `localhost` ), you do the following: - - - Search and replace the value 9765 in all the files (.epr) inside `/business-processes` folder with the  new port (9763 + port offset). - - !!! note - **Note:** Make sure that the port offset is updated in the following files as well. 
Note that the zipped files should be unzipped for you to be able to see the files - - -`/business-processes/user-signup/HumanTask/UserApprovalTask-1.0.0.zip/UserApprovalTask.wsdl` - - -`/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip/UserApprovalTask.wsdl` - - -`/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip/WorkflowCallbackService.wsdl` - -3. Open the `/repository/conf/humantask.xml` file and `/repository/conf/b4p-coordination-config.xml` file and set the `TaskCoordinationEnabled` property to true. For further information on this configuration see [Configuring Human Task Coordination](https://docs.wso2.com/display/BPS360/Configuring+Human+Task+Coordination) . - - ``` xml - true - ``` - -4. Copy the following 2 files from the `/business-processes/epr` folder to the `/repository/conf/epr` folder. - - - `/business-processes/epr/UserSignupProcess.epr` - - `/business-processes/epr/UserSignupService.epr` - - !!! note - - If the `/repository/conf/epr` folder isn't there, please create it . - - Make sure to give the correct credentials in the `/repository/conf/epr` files. - - - 1. Update the `/business-processes/epr/UserSignupProcess.epr` file according to API Manager. - - ```java - https://localhost:8243/services/WorkflowCallbackService - ``` - - 2. Update the `/business-processes/epr/UserSignupService.epr` file according to BPS. - - ```java - http://localhost:9765/services/UserApprovalService - ``` - -5. [Start the BPS server](https://docs.wso2.com/display/AM260/Running+the+Product#RunningtheProduct-Startingtheserver) and log in to its management console ( `https://:9443+/carbon` ). - -
    !!! warning
        If you are using Mac OS with High Sierra, you may encounter the following warning when logging in to the management console, due to a compression issue that exists in the High Sierra SDK.

        `WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:, ip:xxx.xxx.xx.xx, method:POST, uri:/carbon/admin/login_action.jsp, error:required token is missing from the request)`

        To avoid this issue, open `/repository/conf/tomcat/catalina-server.xml`, change `compression="on"` to `compression="off"` in the Connector configuration, and restart the BPS.
    - -6. Select **BPEL** under the **Processes** > **Add** menu and upload the `/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip` file to BPS. This is the business process archive file. - - ![Add BPEL to BPS]({{base_path}}/assets/img/learn/bpel-upload-signup-workflow.png) - -7. Select **Add** under the **Human Tasks** menu and upload the `/business-processes/user-signup/HumanTask/UserApprovalTask-1.0.0.zip` file to BPS. This is the human task archived file. - - ![Add Human Task to BPS]({{base_path}}/assets/img/learn/add-human-task-signup.png) - -#### Configuring the API Manager - -1. Open the `/repository/deployment/server/webapps/admin/src/main/webapp/site/conf/site.json` file and configure **workFlowServerURL** under **workflows** to point to the EI/BPS server (e.g. `"workFlowServerURL": "https://localhost:9445/services/"` ) - -!!! note - When enabling the workflow, make sure to **import the certificate** of API Manager into the client-truststore of the EI/BPS server and also import the certificate of EI/BPS into the client-truststore of API Manager. - - Paths to the directory containing the client-truststore of each product are : - - 1. API-M - '/repository/resources/security' - 2. EI - '/wso2/business-process/repository/resources/security' - 3. BPS - '/repository/resources/security' - -#### Engaging the WS Workflow Executor in the API Manager - -1. Log in to API-M management console ( `https://:9443/carbon` ) and select **Browse** under **Resources**. - - ![Browse resources]({{base_path}}/assets/img/learn/browse-resources.png) - -2. Go to `/_system/governance/apimgt/applicationdata/workflow-extensions.xml` resource, disable the **Simple Workflow Executor** and enable **WS Workflow Executor**. Also specify the service endpoint where the workflow engine is hosted and the credentials required to access the said service via basic authentication (i.e., username/password based authentication). - - ```html/xml - - ... 
- - http://localhost:9765/services/UserSignupProcess/ - admin - admin - https://localhost:8243/services/WorkflowCallbackService - - ... - - ``` - !!! info - All workflow process services of the EI/BPS run on port 9765 because you changed its default port (9763) with an offset of 2. - -3. Go to the Developer Portal Web interface of API Manager and sign up / register as a new user. - - -
    *(Figure: the **Register now** option in the Developer Portal)*
    - - - It invokes the signup process and creates a Human Task instance that holds the execution of the BPEL until some action is performed on it. - -4. Note the message that appears if the BPEL is invoked correctly, saying that the request is successfully submitted. - -5. Log in to the [Admin Portal](`https://localhost:9443/admin`) (`https://:9443/admin`) of API Manager giving the admin username and password. - -6. Navigate to **Tasks** > **User Creation** and approve the user signup task listed. This will resume the BPEL process and complete the signup process. - -7. Go back to the Developer Portal and see that the user is now registered. - -Whenever a user tries to sign up to the Developer Portal, a request of the following format is sent to the workflow endpoint: - -```xml - - - - - sampleuser - foo.com - c0aad878-278c-4439-8d7e-712ee71d3f1c - https://localhost:8243/services/WorkflowCallbackService - - - -``` - -Elements of the above configuration are described below: - -| Element | Description | -|---------------------------------------------------- |-------------------------------------------------------------------------------------------------------------------| -|`userName` | The user name requested by the user | -|`tenantDomain` | Domain to which the user belongs to | -|`workflowExternalRef` | The unique reference against which a workflow is tracked. This needs to be sent from the workflow engine to the API Manager at the time of workflow completion. | -|`callBackURL` | The URL to which the workflow completion request is sent by the workflow engine, at the time of workflow completion. This property is configured under the `callBackURL` property in the `workflow-extensions.xml registry` file. 
| diff --git a/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/advanced-customization.md b/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/advanced-customization.md deleted file mode 100644 index 295aa847cb..0000000000 --- a/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/advanced-customization.md +++ /dev/null @@ -1,106 +0,0 @@
# Advanced Customization

In most cases, the user interface of the WSO2 API-M Developer Portal and Publisher Portal can be customized without editing the React codebase or the CSS. You need to modify the React codebase only for advanced customizations.

## Adding advanced UI customizations to WSO2 API-M UIs

Follow the instructions below to add advanced UI customizations to the Developer Portal and/or the Publisher.

!!! note "Prerequisites"
    - **NodeJS** (minimum 8.12.0) - This is a platform required for ReactJS development.
    - **NPM** (minimum 5.7.0)

1. Navigate to the `/repository/deployment/server/webapps` directory in a terminal and run the following command.

    ``` bash
    npm install
    ```

    This installs the dependencies for the `lerna` package manager.

2. Run the following command from the same directory.

    ``` bash
    npm run bootstrap
    ```

    This installs the local package dependencies in the Publisher and Developer Portal applications.

3. Run the command given below in the relevant application to start the npm build: run `npm run build:dev` from the `devportal` folder for the Developer Portal, or from the `publisher` folder for the Publisher. Note that it continuously watches for any changes and rebuilds the project.

    **For example, to customize the Developer Portal:**

    1. Navigate to the `/repository/deployment/server/webapps/devportal/src/main/webapp/` directory.

    2. Run the following command.
    ``` bash
    npm run build:dev
    ```

4. Make the UI-related changes in the respective folder, based on the WSO2 API-M console.

    - If you need to rewrite the UI completely, you can make changes in the following directories.
        - Developer Portal - `devportal/src/main/webapp/source`
        - Publisher Portal - `publisher/src/main/webapp/source`
    - If you want to override a specific React component or a file from the `source/src/` directory, make the changes in the following directories, copying only the desired file/files.
        - Developer Portal - `devportal/src/main/webapp/override/src`
        - Publisher Portal - `publisher/src/main/webapp/override/src`

#### Overriding the API Documentation and Overview components

```sh
override
└── src
    ├── Readme.txt
    └── app
        └── components
            └── Apis
                └── Details
                    ├── Documents
                    │   └── Documentation.jsx
                    └── Overview.jsx
```

#### Adding new files to the override folder

```sh
override
└── src
    ├── Readme.txt
    └── app
        └── components
            └── Apis
                └── Details
                    ├── Documents
                    │   └── Documentation.jsx
                    ├── Overview.jsx
                    └── NewFile.jsx
```

You can import **NewFile.jsx** by adding the **AppOverride** prefix to the import and providing the full path relative to the override directory.

```js
import NewFile from 'AppOverride/src/app/components/Apis/Details/NewFile.jsx';
```

A compilation error will show up if you try to import **NewFile.jsx** from **Overview.jsx** as follows.

```js
import NewFile from './NewFile.jsx';
```

## Development

During active development, the watch mode works with the overridden files. Adding new files and directories does not trigger a new webpack build.

## Production Build

Make sure you do a production build after you finish development, with the command given below. The output of the production build contains minified JavaScript files optimized for web browsers.
``` bash
npm run build:prod
```
diff --git a/en/docs/reference/enabling-authentication-session-persistence.md b/en/docs/reference/enabling-authentication-session-persistence.md deleted file mode 100644 index d6ab5b1a4e..0000000000 --- a/en/docs/reference/enabling-authentication-session-persistence.md +++ /dev/null @@ -1,77 +0,0 @@
# Enabling Authentication Session Persistence

This topic covers sessions in WSO2 API Manager (WSO2 API-M) and the process of enabling session persistence for these sessions. This is particularly useful when the remember-me option is selected when logging in to either the service provider or WSO2 API-M.

To enable authentication session persistence, uncomment the following configuration in the `/repository/conf/identity/identity.xml` file, under the `Server` and `JDBCPersistenceManager` elements.

``` xml
<SessionDataPersist>
    <Enable>true</Enable>
    <Temporary>false</Temporary>
    <PoolSize>100</PoolSize>
    <SessionDataCleanUp>
        <Enable>true</Enable>
        <CleanUpTimeOut>20160</CleanUpTimeOut>
        <CleanUpPeriod>1140</CleanUpPeriod>
    </SessionDataCleanUp>
    <OperationDataCleanUp>
        <Enable>true</Enable>
        <CleanUpPeriod>720</CleanUpPeriod>
    </OperationDataCleanUp>
</SessionDataPersist>
```

The following table describes the elements of the configuration mentioned above.
| Configuration element | Description |
|-----------------------|-------------|
| `Enable` | Enables the persistence of session data. This must be set to `true` if you wish to enable session persistence. |
| `Temporary` | Setting this to `true` enables persistence of the temporary caches that are created within an authentication request. |
| `PoolSize` | To improve performance, OAuth2 access tokens are persisted asynchronously in the database using a thread pool. This value is the number of threads in that thread pool. |
| `SessionDataCleanUp` | This section of the configuration relates to cleaning up session data. |
| `Enable` | Setting this to `true` enables the cleanup task and ensures that it starts running. |
| `CleanUpTimeOut` | The timeout value (in minutes) of the session data that is removed by the cleanup task. The default value is 2 weeks. |
| `CleanUpPeriod` | The time period (in minutes) at which the cleanup task runs. The default value is 1 day. |
| `OperationDataCleanUp` | This section of the configuration relates to cleaning up operation data. |
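The cleanup-related values are specified in minutes; a quick arithmetic check confirms that the `CleanUpTimeOut` default of 20160 minutes is indeed 2 weeks:

``` bash
# 20160 minutes -> days: divide by 60 (minutes per hour) and 24 (hours per day).
echo $((20160 / 60 / 24))
# → 14
```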
    - -!!! note -**Note** : If Single Sign-On is to work, you must enable at least one of the two configurations mentioned in this topic. - - -**Related Topics** - -- See [Configuring Single Sign-on with SAML2](https://docs.wso2.com/display/AM260/Configuring+Single+Sign-on+with+SAML2) for more information - diff --git a/en/docs/reference/guides/business-scenarios.md b/en/docs/reference/guides/business-scenarios.md deleted file mode 100644 index 4bcf5849f3..0000000000 --- a/en/docs/reference/guides/business-scenarios.md +++ /dev/null @@ -1,3 +0,0 @@ -# Business Scenarios - - diff --git a/en/docs/reference/guides/database-upgrade-guide.md b/en/docs/reference/guides/database-upgrade-guide.md deleted file mode 100644 index 67525dc9f9..0000000000 --- a/en/docs/reference/guides/database-upgrade-guide.md +++ /dev/null @@ -1,56 +0,0 @@ -# Database Upgrade Guide - -This page takes you through the general steps for upgrading product versions based on Carbon 4.4.6 to Carbon 4.4.7. - -### Preparing to Upgrade - -The following are the specific prerequisites you must complete before an upgrade: - -- Before you upgrade to the latest version of a product, you create a staging database, which is essentially an empty database. Note that you should NOT connect a new product version to an older database that has not been upgraded. - -- Make a backup of the database and the `` directory prior to upgrading. The `` directory can simply be copied to the new directory. - -- Stop all the Carbon servers connected to the database before running the migration scripts. - - !!! note - Note that the upgrade should be done during a period when there is low traffic on the system. - - -#### Limitations - -- This upgrade can only be done if the database type is the same. For example, if you are using MySQL currently and you need to migrate to Oracle in the new version, these scripts will not work. -- You cannot roll back an upgrade. 
However, it is possible to restore a backup of the previous server and retry the upgrade process.

#### Downtime

The downtime is limited to the time taken for switching databases when the staging database is promoted to the actual production status.

### Upgrading the configurations

There are no database changes between Carbon 4.4.6 and Carbon 4.4.7. Therefore, only the new configuration options in Carbon 4.4.7 need to be updated for the new environment, as explained below.

1. Copy the data from the old database to the staging database you created. This becomes the new database for your new version of Carbon.
2. Download Carbon 4.4.7 and connect it to your staging database.
3. Update the configuration files in Carbon 4.4.7 as required.
4. Copy the following directories from the old server to the new server.

    1. To migrate the super tenant settings, copy the `/repository/deployment/server` directory.
    2. If multitenancy is used, copy the `/repository/tenants` directory.

    !!! note
        Note that configurations should not be copied directly between servers.

5. Start the server.

### Going into production

The following are recommended tests to run on the staging system.

- Create multiple user stores and try adding users to different user stores.
- Create multiple tenants and add different user stores to the different tenants. Thereafter, add users to the various user stores.

Once the above tests run successfully, it is safe to consider the upgrade ready for production. However, it is advised to also test any features that are used in production.
diff --git a/en/docs/reference/other.md b/en/docs/reference/other.md deleted file mode 100644 index 5fb48b86d5..0000000000 --- a/en/docs/reference/other.md +++ /dev/null @@ -1,4 +0,0 @@ -# Other - -- FAQ - diff --git a/en/docs/reference/using-the-registry-rest-api.md b/en/docs/reference/using-the-registry-rest-api.md deleted file mode 100644 index 468900dc36..0000000000 --- a/en/docs/reference/using-the-registry-rest-api.md +++ /dev/null @@ -1,21 +0,0 @@ -# Using the Registry REST API - - You can use the registry REST API to perform CRUD operations on registry resources. This is not packed with WSO2 API Manager by default. Follow the instructions below to use the registry REST API with WSO2 API Manager. - -1. Download the [registry REST API webapp]({{base_path}}/assets/attachments/resource.war). -2. Copy the webapp to the `/repository/deployment/server/webapps` directory. -3. Invoke the registry REST API. - - For an example, you can use the following cURL command to get the content of the `app-tiers.xml` file, in the following registry path `_system/governance/apimgt/applicationdata` - - === "Format" - ``` bash - curl -X GET -H "Authorization: Basic =" -H "Content-Type: application/json" -H "Cache-Control: no-cache" "https://:/resource/1.0.0/artifact/_system/governance/apimgt/applicationdata/app-tiers.xml" -k - ``` - - === "Sample" - ``` bash - curl -X GET -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" "https://localhost:9443/resource/1.0.0/artifact/_system/governance/apimgt/applicationdata/app-tiers.xml" -k - ``` - - For a complete reference of the available REST API operations, go to [Resources with REST API](https://docs.wso2.com/display/Governance540/Resources+with+REST+API). 
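The `Basic YWRtaW46YWRtaW4=` value in the sample above is simply the Base64 encoding of the `username:password` pair (`admin:admin`). You can generate the header value for your own credentials as follows:

``` bash
# Produce the value for the HTTP Basic "Authorization" header from user:pass.
printf '%s' 'admin:admin' | base64
# → YWRtaW46YWRtaW4=
```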
\ No newline at end of file diff --git a/en/docs/reference/working-with-audit-logs.md b/en/docs/reference/working-with-audit-logs.md deleted file mode 100644 index 4cbe67f286..0000000000 --- a/en/docs/reference/working-with-audit-logs.md +++ /dev/null @@ -1,49 +0,0 @@ -# Working with Audit Logs - -Auditing is a primary requirement when it comes to monitoring production servers. For examples, DevOps need to have a clear mechanism for identifying who did what, and to filter possible system violations or breaches. -Audit logs or audit trails contain a set of log entries that describe a sequence of actions that occurred over a period of time. Audit logs allow you to trace all the actions of a single user, or all the actions or changes introduced to a certain module in the system etc. over a period of time. For example, it captures all the actions of a single user from the first point of logging in to the server. - -Audit logs are enabled by default in WSO2 API Manager (WSO2 API-M) via the following configurations, which are in the `/repository/conf/log4j.properties` file. - -``` java - # Configure audit log for auditing purposeslog4j.logger.AUDIT_LOG=INFO, AUDIT_LOGFILE - log4j.appender.AUDIT_LOGFILE=org.apache.log4j.DailyRollingFileAppender - log4j.appender.AUDIT_LOGFILE.File=${carbon.home}/repository/logs/audit.log - log4j.appender.AUDIT_LOGFILE.Append=true - log4j.appender.AUDIT_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout - log4j.appender.AUDIT_LOGFILE.layout.ConversionPattern=[%d] %P%5p - %x %m %n - log4j.appender.AUDIT_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S] - log4j.appender.AUDIT_LOGFILE.threshold=INFO - log4j.additivity.AUDIT_LOG=false -``` - -!!! info -The audit logs that get created when running WSO2 API-M are stored in the `audit.log` file, which is located in the `/repository/logs` directory. 
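Because each audit entry records the acting user in its `performedBy` field, the log can be filtered per user with `grep`. The sketch below is self-contained and uses a temporary file standing in for the real `audit.log`:

``` bash
# Sandboxed example: in a live deployment the real file is
# <API-M_HOME>/repository/logs/audit.log.
AUDIT_SAMPLE="$(mktemp)"
cat > "$AUDIT_SAMPLE" <<'EOF'
[2017-06-07 22:28:06,027]  INFO -  {"performedBy":"admin","action":"created","typ":"API"}
[2017-06-07 22:30:11,102]  INFO -  {"performedBy":"alice","action":"updated","typ":"API"}
EOF
# Count the entries performed by the "admin" user:
grep -c '"performedBy":"admin"' "$AUDIT_SAMPLE"
# → 1
rm -f "$AUDIT_SAMPLE"
```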
- - -### Audit log actions - -In WSO2 API-M, audit logs can be enabled for the following user actions in the Publisher and Developer Portal. - -#### Publisher - -| Action | Sample Format | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sign in to the Publisher | `[2017-06-07 22:26:22,506]  INFO -  'devona@carbon.super [-1234]' logged in at [2017-06-07 22:26:22,501+0530]`| -| Create an API | `[2017-06-07 22:28:06,027]  INFO -  {"performedBy":"admin","action":"created","typ":"API","info":"{\"provider\":\"admin\",\"name\":\"PhoneVerification\",\"context\":\"\\\/phoneverify\\\/1.0.0\",\"version\":\"1.0.0\"}"}`| -| Update an API | `[2017-06-08 10:22:49,657]  INFO -  {"performedBy":"admin","action":"updated","typ":"API","info":"{\"provider\":\"admin\",\"name\":\"PhoneVerification\",\"context\":\"\\\/phoneverify\\\/1.0.0\",\"version\":\"1.0.0\"}"}` | -| Delete an API | `[2017-06-08 10:15:55,369]  INFO -  {"performedBy":"admin","action":"deleted","typ":"API","info":"{\"provider\":\"admin\",\"name\":\"PhoneVerification\",\"version\":\"1.0.0\"}"}`| - -#### Developer Portal - -| Action | Sample Format | -|---------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sign in to the Developer Portal | `[2017-06-07 22:34:54,684]  INFO -  'admin@carbon.super [-1234]' logged in at [2017-06-07 22:34:54,682+0530]`| -| Sign up via the Developer Portal | `[2017-06-07 22:55:34,054]  INFO -  Initiator : admin@carbon.super | Action : Update Roles of User | Target : Kimmmy | Data : { Roles : [] } 
| Result : Success`| -| Create an application | `[2017-06-07 22:40:17,625]  INFO -  {"performedBy":"admin","action":"created","typ":"Application","info":"{\"tier\":\"20PerMin\",\"name\":\"TestApp\",\"callbackURL\":null}"}`| -| Update an application | `[2017-06-07 22:44:25,931]  INFO -  {"performedBy":"admin","action":"updated","typ":"Application","info":"{\"tier\":\"20PerMin\",\"name\":\"MobileApp\",\"callbackURL\":\"\",\"status\":\"APPROVED\"}"}`| -| Delete an application | `[2017-06-07 22:45:59,093]  INFO -  {"performedBy":"admin","action":"deleted","typ":"Application","info":"{\"tier\":\"20PerMin\",\"name\":\"MobileApp\",\"callbackURL\":\"\"}"}`| -| Subscribe to an application | `[2017-06-07 22:36:48,826]  INFO -  {"performedBy":"admin","action":"created","typ":"Subscription","info":"{\"application_name\":\"DefaultApplication\",\"tier\":\"Gold\",\"provider\":\"admin\",\"api_name\":\"PhoneVerification\",\"application_id\":1}"}` | -| Unsubscribe from an application | `[2017-06-07 22:38:08,277]  INFO -  {"performedBy":"admin","action":"deleted","typ":"Subscription","info":"{\"application_name\":\"DefaultApplication\",\"provider\":\"admin\",\"api_name\":\"PhoneVerification\",\"application_id\":1}"}`| - - diff --git a/en/docs/wip/README.md b/en/docs/wip/README.md deleted file mode 100644 index 8ad9c7c190..0000000000 --- a/en/docs/wip/README.md +++ /dev/null @@ -1,2 +0,0 @@ -#WIP -This folder includes documents that are work in progress/ needs review etc. 
diff --git a/en/docs/wip/assets/img/api-key-option.png b/en/docs/wip/assets/img/api-key-option.png deleted file mode 100755 index 4ba1e24595..0000000000 Binary files a/en/docs/wip/assets/img/api-key-option.png and /dev/null differ diff --git a/en/docs/wip/assets/img/copy-api-key.png b/en/docs/wip/assets/img/copy-api-key.png deleted file mode 100755 index dc63d6f3d2..0000000000 Binary files a/en/docs/wip/assets/img/copy-api-key.png and /dev/null differ diff --git a/en/docs/wip/assets/img/generate-api-key.png b/en/docs/wip/assets/img/generate-api-key.png deleted file mode 100644 index 21db0d0d4b..0000000000 Binary files a/en/docs/wip/assets/img/generate-api-key.png and /dev/null differ diff --git a/en/docs/wip/assets/img/view-credentials-manage-app.png b/en/docs/wip/assets/img/view-credentials-manage-app.png deleted file mode 100644 index 3a6dac6742..0000000000 Binary files a/en/docs/wip/assets/img/view-credentials-manage-app.png and /dev/null differ diff --git a/en/docs/wip/deleted-pages/changing-to-embedded-derby.md b/en/docs/wip/deleted-pages/changing-to-embedded-derby.md deleted file mode 100644 index 1bf1346248..0000000000 --- a/en/docs/wip/deleted-pages/changing-to-embedded-derby.md +++ /dev/null @@ -1,230 +0,0 @@ -# Setting up Embedded Derby
-
-The following section describes how to set up an embedded Apache Derby database to replace the default H2 database in your WSO2 product:
-
-### Setting up the database
-
-Follow the steps below to set up an embedded Derby database:
-
-1. Download [Apache Derby](http://apache.mesi.com.ar/db/derby/db-derby-10.8.2.2/).
-2. Install Apache Derby on your computer.
-
-    !!! info
-    For instructions on installing Apache Derby, see the [Apache Derby documentation](http://db.apache.org/derby/manuals/).
-
-
-## What's next
-
-By default, all WSO2 products are configured to use the embedded H2 database. To configure your product to use embedded Derby instead, see [Changing to Embedded Derby](https://docs.wso2.com/display/ADMIN44x/Changing+to+Embedded+Derby).
-
-
-# Changing to Embedded Derby
-
-The following sections describe how to replace the default H2 database with embedded Derby:
-
-- [Setting up datasource configurations](#ChangingtoEmbeddedDerby-Settingupdatasourceconfigurations)
-- [Creating database tables](#ChangingtoEmbeddedDerby-Creatingdatabasetables)
-
-!!! tip
-Before you begin
-
-You need to set up embedded Derby before following the steps to configure your product with it. For more information, see [Setting up Embedded Derby](https://docs.wso2.com/display/ADMIN44x/Setting+up+Embedded+Derby).
-
-
-### Setting up datasource configurations
-
-A datasource is used to establish the connection to a database. By default, the `WSO2_CARBON_DB` datasource is used to connect to the default H2 database, which stores registry and user management data. After setting up the embedded Derby database to replace the default H2 database, either [change the default configurations of the `WSO2_CARBON_DB` datasource](#ChangingtoEmbeddedDerby-ChangingthedefaultWSO2_CARBON_DBdatasource), or [configure a new datasource](#ChangingtoEmbeddedDerby-Configuringnewdatasourcestomanageregistryorusermanagementdata) and point it to the new database, as explained below.
-
-#### Changing the default WSO2\_CARBON\_DB datasource
-
-Follow the steps below to change the type of the default `WSO2_CARBON_DB` datasource.
-
-Edit the default datasource configuration in the `<PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml` file as shown below.
-
-``` xml
-<datasource>
-    <name>WSO2_CARBON_DB</name>
-    <description>The datasource used for registry and user manager</description>
-    <jndiConfig>
-        <name>jdbc/WSO2CarbonDB</name>
-    </jndiConfig>
-    <definition type="RDBMS">
-        <configuration>
-            <url>jdbc:derby:repository/database/WSO2CARBON_DB;create=true</url>
-            <username>regadmin</username>
-            <password>regadmin</password>
-            <driverClassName>org.apache.derby.jdbc.EmbeddedDriver</driverClassName>
-            <maxActive>80</maxActive>
-            <maxWait>60000</maxWait>
-            <minIdle>5</minIdle>
-            <testOnBorrow>true</testOnBorrow>
-            <validationQuery>SELECT 1</validationQuery>
-            <validationInterval>30000</validationInterval>
-            <defaultAutoCommit>false</defaultAutoCommit>
-        </configuration>
-    </definition>
-</datasource>
-```
-
-The elements in the above configuration are described below:
-
-| Element | Description |
-|-------------------------------|-------------|
-| **url** | The URL of the database. For embedded Derby, this points to the database directory relative to the product installation; the default port for a Derby network server instance is 1527. |
-| **username** and **password** | The name and password of the database user. |
-| **driverClassName** | The class name of the database driver. |
-| **maxActive** | The maximum number of active connections that can be allocated at the same time from this pool. Enter any negative value to denote an unlimited number of active connections. |
-| **maxWait** | The maximum number of milliseconds that should elapse (when there are no available connections in the pool) before the system throws an exception. Enter zero or a negative value to wait indefinitely. |
-| **minIdle** | The minimum number of connections that can remain idle in the pool without extra ones being created. Enter zero to create none. |
-| **testOnBorrow** | Whether connections are validated before being borrowed from the pool. If a connection fails to validate, it is dropped from the pool and another attempt is made to borrow another. |
-| **validationQuery** | The SQL query that is used to validate connections from this pool before returning them to the caller. |
-| **validationInterval** | The minimum interval (in milliseconds) between validations, used to avoid excess validation. If a connection is due for validation but was validated within this interval, it is not validated again. |
-| **defaultAutoCommit** | This property is **not** applicable to the Carbon database in WSO2 products because auto committing is usually handled at the code level, i.e., the default auto commit configuration specified for the RDBMS driver takes effect instead of this property. Typically, auto committing is enabled for RDBMS drivers by default. When auto committing is enabled, each SQL statement is committed to the database as an individual transaction, as opposed to committing multiple statements as a single transaction. |
-
-!!! info
-For more information on other parameters that can be defined in the `<PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml` file, see [Tomcat JDBC Connection Pool](http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes).
-
-
-!!! warning
-The following elements are available only as a **WUM** update, effective from 14th September 2018 (2018-09-14). For more information, see [Updating WSO2 Products](https://docs.wso2.com/display/ADMIN44x/Updating+WSO2+Products).
-This WUM update is only applicable to Carbon 4.4.11 and is shipped out-of-the-box with Carbon versions newer than Carbon 4.4.35. For more information on Carbon compatibility, see [Release Matrix](https://wso2.com/products/carbon/release-matrix/).
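The `testOnBorrow`/`validationInterval` interaction described above can be modeled as simple bookkeeping: validate on borrow only when the last successful validation is older than the interval. The following is an illustrative sketch of that logic, not WSO2 or Tomcat pool code:

```python
import time

class ValidationTracker:
    """Sketch of testOnBorrow + validationInterval: a connection is re-validated
    on borrow only if more than validation_interval_ms milliseconds have passed
    since its last successful validation."""

    def __init__(self, validation_interval_ms=30000):
        self.validation_interval_ms = validation_interval_ms
        self.last_validated_ms = {}  # connection id -> last validation time (ms)

    def needs_validation(self, conn_id, now_ms=None):
        now_ms = time.time() * 1000 if now_ms is None else now_ms
        last = self.last_validated_ms.get(conn_id)
        return last is None or (now_ms - last) > self.validation_interval_ms

    def mark_validated(self, conn_id, now_ms=None):
        self.last_validated_ms[conn_id] = time.time() * 1000 if now_ms is None else now_ms

tracker = ValidationTracker(validation_interval_ms=30000)
print(tracker.needs_validation("conn-1", now_ms=0))      # True: never validated
tracker.mark_validated("conn-1", now_ms=0)
print(tracker.needs_validation("conn-1", now_ms=10000))  # False: inside the 30s window
print(tracker.needs_validation("conn-1", now_ms=40000))  # True: interval elapsed
```

This is why a short `validationInterval` trades extra `validationQuery` round trips for fresher connection checks.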
-
-
-| **Element** | **Description** |
-|----------------------|-----------------|
-| **commitOnReturn** | If `defaultAutoCommit=false`, you can set `commitOnReturn=true` so that the pool completes the transaction by calling commit on the connection as it is returned to the pool. However, if `rollbackOnReturn=true`, this attribute is ignored. The default value is false. |
-| **rollbackOnReturn** | If `defaultAutoCommit=false`, you can set `rollbackOnReturn=true` so that the pool terminates the transaction by calling rollback on the connection as it is returned to the pool. The default value is false. |
-
-**Configuring the connection pool behavior on return**
-
-When a database connection is returned to the pool, by default the product rolls back the pending transactions if `defaultAutoCommit=true`. However, if required, you can disable this default behavior by disabling the `ConnectionRollbackOnReturnInterceptor` (a JDBC-Pool JDBC interceptor) and setting the connection pool behavior on return via the datasource configurations, using the following options.
-
-!!! warning
-Disabling the `ConnectionRollbackOnReturnInterceptor` is only possible with the **WUM** update and is effective from 14th September 2018 (2018-09-14). For more information on updating WSO2 API Manager, see [Updating WSO2 Products](https://docs.wso2.com/display/ADMIN44x/Updating+WSO2+Products). This WUM update is only applicable to Carbon 4.4.11.
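The `commitOnReturn`/`rollbackOnReturn` semantics in the table can be summarised as a small decision function. This is an illustrative sketch of the precedence rules described there (rollback wins over commit when autocommit is disabled), not the actual pool implementation:

```python
def on_connection_return(conn, default_auto_commit, commit_on_return, rollback_on_return):
    """Sketch of the pool-return semantics: with autocommit disabled,
    rollbackOnReturn takes precedence over commitOnReturn; with autocommit
    enabled the driver has already committed each statement individually."""
    if default_auto_commit:
        return "noop"          # nothing pending: each statement was auto-committed
    if rollback_on_return:
        conn.rollback()
        return "rollback"      # rollbackOnReturn=true: commitOnReturn is ignored
    if commit_on_return:
        conn.commit()
        return "commit"
    return "noop"

class FakeConnection:
    """Stand-in recording which JDBC-style call the pool would make."""
    def __init__(self):
        self.calls = []
    def commit(self):
        self.calls.append("commit")
    def rollback(self):
        self.calls.append("rollback")

conn = FakeConnection()
print(on_connection_return(conn, default_auto_commit=False,
                           commit_on_return=True, rollback_on_return=True))   # rollback
print(on_connection_return(conn, default_auto_commit=False,
                           commit_on_return=True, rollback_on_return=False))  # commit
```

The first call prints `rollback` even though `commit_on_return` is set, matching the note that `commitOnReturn` is ignored when `rollbackOnReturn=true`.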
-
-
-- **Configure the connection pool to commit pending transactions on connection return**
-    1. Navigate to either one of the following locations based on your OS.
-        - On Linux/Mac OS: `<PRODUCT_HOME>/bin/api-manager.sh`
-        - On Windows: `<PRODUCT_HOME>\bin\api-manager.bat`
-    2. Add the following JVM option:
-
-        ``` java
-        -Dndatasource.disable.rollbackOnReturn=true \
-        ```
-
-    3. Navigate to the `<PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml` file.
-    4. Disable `defaultAutoCommit` by defining it as false.
-    5. Add the `commitOnReturn` property and set it to true for all the datasources, including the custom datasources.
-
-        ``` xml
-        <datasources>
-            ...
-            <datasource>
-                <definition type="RDBMS">
-                    <configuration>
-                        ...
-                        <defaultAutoCommit>false</defaultAutoCommit>
-                        <commitOnReturn>true</commitOnReturn>
-                        ...
-                    </configuration>
-                </definition>
-            </datasource>
-            ...
-        </datasources>
-        ```
-
-- **Configure the connection pool to rollback pending transactions on connection return**
-
-    1. Navigate to the `<PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml` file.
-    2. Disable `defaultAutoCommit` by defining it as false.
-
-    3. Add the `rollbackOnReturn` property to the datasources.
-
-        ``` xml
-        <datasources>
-            ...
-            <datasource>
-                <definition type="RDBMS">
-                    <configuration>
-                        ...
-                        <defaultAutoCommit>false</defaultAutoCommit>
-                        <rollbackOnReturn>true</rollbackOnReturn>
-                        ...
                    </configuration>
-                </definition>
-            </datasource>
-            ...
-        </datasources>
-        ```
-
-#### Configuring new datasources to manage registry or user management data
-
-Follow the steps below to configure new datasources to point to the new database(s) you create to manage registry and/or user management data separately.
-
-1. Add a new datasource with configurations similar to the [`WSO2_CARBON_DB` datasource](#ChangingtoEmbeddedDerby-ChangingthedefaultWSO2_CARBON_DBdatasource) above to the `<PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml` file. Change its elements to your custom values. For instructions, see [Setting up datasource configurations](#ChangingtoEmbeddedDerby-Settingupdatasourceconfigurations).
-2. If you are setting up a separate database to store registry-related data, update the following configuration in the `<PRODUCT_HOME>/repository/conf/registry.xml` file.
-
-    ``` xml
-    <dbConfig name="wso2registry">
-        <dataSource>jdbc/MY_DATASOURCE_NAME</dataSource>
-    </dbConfig>
-    ```
-
-3.
If you are setting up a separate database to store user management data, update the following configuration in the `<PRODUCT_HOME>/repository/conf/user-mgt.xml` file.
-
-    ``` xml
-    <Configuration>
-        <Property name="dataSource">jdbc/MY_DATASOURCE_NAME</Property>
-    </Configuration>
-    ```
-
-### Creating database tables
-
-You can create database tables by executing the database scripts as follows:
-
-1. Run the `ij` tool located in the `<DERBY_HOME>/bin/` directory, as illustrated below:
-    ![]({{base_path}}/assets/attachments/126562586/126562587.png)
-2. Create the database and connect to it using the following command inside the `ij` prompt:
-
-        connect 'jdbc:derby:repository/database/WSO2CARBON_DB;create=true';
-
-    !!! info
-    Replace the database file path in the above command with the full path to your database.
-
-
-3. Exit from the `ij` tool by typing the `exit` command.
-
-        exit;
-
-4. Log in to the `ij` tool with the username and password that you set in the datasource configuration:
-`connect 'jdbc:derby:repository/database/WSO2CARBON_DB' user 'regadmin' password 'regadmin';`
-5. Use the scripts given in the following locations to create the database tables:
-
-    - To create tables for the **registry and user manager database (`WSO2CARBON_DB`)**, run the command below:
-
-        ``` sql
-        run '<PRODUCT_HOME>/dbscripts/derby.sql';
-        ```
-
-        !!! info
-        The product is now running using the embedded Apache Derby database.
-
-
-6. Restart the server.
-
-!!! info
-You can create database tables automatically **when starting the product for the first time** by using the `-Dsetup` parameter as follows.
-
-- For Windows: `<PRODUCT_HOME>/bin/api-manager.bat -Dsetup`
-
-- For Linux: `<PRODUCT_HOME>/bin/api-manager.sh -Dsetup`
-
-!!! warning
-Deprecation of -DSetup
-
-When proper Database Administration (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable.
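Running a DDL script statement by statement, the way the `ij` `run` command does and the way a DBA would review DDL manually, can be sketched as follows. This is illustrative only: `sqlite3` is used as an embeddable stand-in for Derby, and the two `CREATE TABLE` statements are hypothetical excerpts, not the real `derby.sql` contents:

```python
import sqlite3

def run_ddl_script(conn, script_text):
    """Split a .sql script on ';' into statements and execute them one by one,
    so each DDL statement can be inspected before it runs."""
    for statement in script_text.split(";"):
        statement = statement.strip()
        if statement:
            conn.execute(statement)

# Hypothetical two-statement excerpt standing in for the real DDL script.
script = """
CREATE TABLE REG_RESOURCE (REG_ID INTEGER PRIMARY KEY, REG_NAME VARCHAR(256));
CREATE TABLE UM_USER (UM_ID INTEGER PRIMARY KEY, UM_USER_NAME VARCHAR(255));
"""
conn = sqlite3.connect(":memory:")
run_ddl_script(conn, script)
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['REG_RESOURCE', 'UM_USER']
```

Executing statements individually like this mirrors the DBA practice recommended in the `-DSetup` deprecation note: each DDL statement can be examined and optimized before it is applied.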
**As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option** . Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization. - - -!!! info -The product is configured to run using an embedded Apache Derby database. - -!!! info -In contrast to setting up with remote Derby, when setting up with the embedded mode, set the database driver name (the `driverClassName` element) to `org.apache.derby.jdbc.EmbeddedDriver` and the database URL (the `url` element) to the database directory location relative to the installation. In the above sample configuration, it is inside the `/WSO2_CARBON_DB/` directory. - - diff --git a/en/docs/wip/deleted-pages/changing-to-embedded-h2.md b/en/docs/wip/deleted-pages/changing-to-embedded-h2.md deleted file mode 100644 index 72fe67e905..0000000000 --- a/en/docs/wip/deleted-pages/changing-to-embedded-h2.md +++ /dev/null @@ -1,246 +0,0 @@ -# Setting up Embedded H2 - -The following sections describe how to set up an embedded H2 database to replace the default H2 database in your WSO2 product. - -!!! warning - H2 is not recommended in production - - The embedded H2 database is NOT recommended in enterprise testing and production environments. It has lower performance, clustering limitations, and can cause file corruption failures. Please use an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MS SQL instead. - - You can use the embedded H2 database in development environments and as the local registry in a registry mount. - - -## Setting up the database - -Download and install the H2 database engine on your computer. - -!!! 
info
    For instructions on installing H2, see the [H2 installation guide](http://www.h2database.com/html/quickstart.html).
-
-
-## Setting up the drivers
-
-WSO2 currently ships H2 database engine version 1.4.199 and its related H2 database driver. If you want to use a different H2 database driver, follow the instructions below:
-
-1. Delete the following H2 database-related JAR file, which is shipped with WSO2 products:
-`<PRODUCT_HOME>/repository/components/plugins/h2_1.4.199.wso2v1.jar`
-2. Find the JAR file of the new H2 database driver (`<H2_HOME>/bin/h2-*.jar`, where `<H2_HOME>` is the H2 installation directory) and copy it to your WSO2 product's `<PRODUCT_HOME>/repository/components/lib/` directory.
-
-## Changing the Carbon database to Embedded H2
-The following sections describe how to replace the default H2 database with Embedded H2:
-
-- [Setting up datasource configurations](#ChangingtoEmbeddedH2-Settingupdatasourceconfigurations)
-- [Creating database tables](#ChangingtoEmbeddedH2-Creatingdatabasetables)
-
-!!! tip
-    Before you begin,
-
-    You need to set up Embedded H2 before following the instructions to configure your product with it. For more information, see [Setting up Embedded H2]({{base_path}}/install-and-setup/setting-up-databases/changing-default-databases/changing-to-embedded-h2/).
-
-
-### Setting up datasource configurations
-
-A datasource is used to establish the connection to a database. By default, the `WSO2_CARBON_DB` datasource is used to connect to the default H2 database, which stores registry and user management data.
After setting up the Embedded H2 database to replace the default H2 database, either [change the default configurations of the WSO2_CARBON_DB datasource](#ChangingtoEmbeddedH2-Changingthedefaultdatabase), or [configure a new datasource](#ChangingtoEmbeddedH2-Configuringnewdatasourcestomanageregistryorusermanagementdata) and point it to the new database, as explained below.
-
-#### Changing the default WSO2\_CARBON\_DB datasource
-
-Follow the instructions below to change the type of the default `WSO2_CARBON_DB` datasource.
-
-Edit the default datasource configuration in the `<PRODUCT_HOME>/repository/conf/deployment.toml` file as shown below.
-
-    ```toml
-    [database.carbon_db]
-    type = "h2"
-    url = "jdbc:h2:./repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000"
-    username = "wso2carbon"
-    password = "wso2carbon"
-    driver = "org.h2.Driver"
-    validationQuery = "SELECT 1"
-    ```
-
-The elements in the above configuration are described below:
-
-| Element | Description |
-|-------------------------------|-------------|
-| **url** | The URL of the database. For embedded H2, this is a database file path relative to `<PRODUCT_HOME>` rather than a host and port. |
-| **username** and **password** | The name and password of the database user |
-| **driver** | The class name of the database driver |
-| **maxActive** | The maximum number of active connections that can be allocated at the same time from this pool. Enter any negative value to denote an unlimited number of active connections. |
-| **maxWait** | The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception. You can enter zero or a negative value to wait indefinitely. |
-| **minIdle** | The minimum number of connections that can remain idle in the pool without extra ones being created. Enter zero to create none. |
-| **testOnBorrow** | Whether connections are validated before being borrowed from the pool. If a connection fails to validate, it is dropped from the pool, and another attempt is made to borrow another. |
-| **validationQuery** | The SQL query that is used to validate connections from this pool before returning them to the caller. |
-| **validationInterval** | The minimum interval (in milliseconds) between validations, used to avoid excess validation. If a connection is due for validation but was validated within this interval, it is not validated again. |
-| **defaultAutoCommit** | This property is **not** applicable to the Carbon database in WSO2 products because auto committing is usually handled at the code level, i.e., the default auto commit configuration specified for the RDBMS driver takes effect instead of this property. Typically, auto committing is enabled for RDBMS drivers by default. When auto committing is enabled, each SQL statement is committed to the database as an individual transaction, as opposed to committing multiple statements as a single transaction. |
-
-!!! info
-    For more information on other parameters that can be defined in the `<PRODUCT_HOME>/repository/conf/deployment.toml` file, see [Tomcat JDBC Connection Pool](http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes).
-
-
-!!! warning
-    The following elements are available only as a **WUM** update, effective from 14th September 2018 (2018-09-14). For more information, see [Updating WSO2 API Manager]({{base_path}}/administer/product-administration/updating-wso2-api-manager).
-    This WUM update is only applicable to Carbon 4.4.11 and is shipped out-of-the-box with Carbon versions newer than Carbon 4.4.35. For more information on Carbon compatibility, see [Release Matrix](https://wso2.com/products/carbon/release-matrix/).
-
-
-| **Element** | **Description** |
-|----------------------|-----------------|
-| **commitOnReturn** | If `defaultAutoCommit=false`, you can set `commitOnReturn=true` so that the pool completes the transaction by calling commit on the connection as it is returned to the pool. However, if `rollbackOnReturn=true`, this attribute is ignored. The default value is false. |
-| **rollbackOnReturn** | If `defaultAutoCommit=false`, you can set `rollbackOnReturn=true` so that the pool terminates the transaction by calling rollback on the connection as it is returned to the pool. The default value is false. |
-
-**Configuring the connection pool behavior on return**
-
-When a database connection is returned to the pool, by default the product rolls back the pending transactions if `defaultAutoCommit=true`. However, if required, you can disable this default behavior by disabling the `ConnectionRollbackOnReturnInterceptor` (a JDBC-Pool JDBC interceptor) and setting the connection pool behavior on return via the datasource configurations, using the following options.
-
-!!! warning
-    Disabling the `ConnectionRollbackOnReturnInterceptor` is only possible with the **WUM** update and is effective from 14th September 2018 (2018-09-14). For more information on updating WSO2 API Manager, see [Updating WSO2 API Manager]({{base_path}}/administer/product-administration/updating-wso2-api-manager). This WUM update is only applicable to Carbon 4.4.11.
-
-
-- **Configure the connection pool to commit pending transactions on connection return**
-    1. Navigate to either one of the following locations based on your OS.
-        - On Linux/Mac OS: `<PRODUCT_HOME>/bin/api-manager.sh`
-        - On Windows: `<PRODUCT_HOME>\bin\api-manager.bat`
-    2. Add the following JVM option:
-
-        ``` java
-        -Dndatasource.disable.rollbackOnReturn=true
-        ```
-
-    3. Navigate to the `<PRODUCT_HOME>/repository/conf/deployment.toml` file.
-    4. Disable `defaultAutoCommit` by defining it as `false`.
-    5. Add the `commitOnReturn` property and set it to `true` for all the datasources, including the custom datasources.
-
-        ``` toml
-        [database.shared_db]
-        type = "mysql"
-        url = "jdbc:mysql://localhost:3306/shared_db"
-        username = "regadmin"
-        password = "regadmin"
-        pool_options.maxActive = 100
-        pool_options.maxWait = 10000
-        pool_options.validationInterval = 10000
-        pool_options.defaultAutoCommit = false
-        pool_options.commitOnReturn = true
-
-        [database.apim_db]
-        type = "mysql"
-        url = "jdbc:mysql://localhost:3306/apim_db"
-        username = "apimadmin"
-        password = "apimadmin"
-        pool_options.maxActive = 50
-        pool_options.maxWait = 30000
-        pool_options.validationInterval = 10000
-        pool_options.defaultAutoCommit = false
-        pool_options.commitOnReturn = true
-        ```
-
-- **Configure the connection pool to rollback pending transactions on connection return**
-
-    1. Navigate to the `<PRODUCT_HOME>/repository/conf/deployment.toml` file.
-    2. Disable `defaultAutoCommit` by defining it as false.
-    3. Add the `rollbackOnReturn` property to the datasources.
-
-        ``` toml
-        [database.shared_db]
-        type = "mysql"
-        url = "jdbc:mysql://localhost:3306/shared_db"
-        username = "regadmin"
-        password = "regadmin"
-        pool_options.maxActive = 100
-        pool_options.maxWait = 10000
-        pool_options.validationInterval = 10000
-        pool_options.defaultAutoCommit = false
-        pool_options.rollbackOnReturn = true
-
-        [database.apim_db]
-        type = "mysql"
-        url = "jdbc:mysql://localhost:3306/apim_db"
-        username = "apimadmin"
-        password = "apimadmin"
-        pool_options.maxActive = 50
-        pool_options.maxWait = 30000
-        pool_options.validationInterval = 10000
-        pool_options.defaultAutoCommit = false
-        pool_options.rollbackOnReturn = true
-        ```
-
-#### Configuring new datasources to manage registry or user management data
-
-Follow the instructions below to configure new datasources to point to the new database(s) you create to manage registry and/or user management data separately.
-
-1. Add a new datasource with configurations similar to the [WSO2_CARBON_DB datasource](#changing-the-default-wso2_carbon_db-datasource) above to the `<PRODUCT_HOME>/repository/conf/deployment.toml` file. Change its elements to your custom values. For instructions, see [Setting up datasource configurations](#setting-up-datasource-configurations).
-2. If you are setting up a separate database to store registry-related data, update the following configurations in the `<PRODUCT_HOME>/repository/conf/deployment.toml` file.
-
-
-    === "Format"
-        ```toml
-        [database.config]
-        dataSource = "jdbc/MY_DATASOURCE_NAME"
-        ```
-
-    === "Example"
-        ```toml
-        [database.config]
-        dataSource = "jdbc/WSO2_CARBON_DB"
-        ```
-
-3. If you are setting up a separate database to store user management data, update the following configurations in the `<PRODUCT_HOME>/repository/conf/deployment.toml` file.
-
-    === "Format"
-        ```toml
-        [user_store]
-        dataSource = "jdbc/MY_DATASOURCE_NAME"
-        ```
-
-    === "Example"
-        ```toml
-        [user_store]
-        dataSource = "jdbc/WSO2_CARBON_DB"
-        ```
-
-### Creating database tables
-
-To create the database tables, connect to the database that you created earlier and run the following script in the H2 shell or web console:
-
-- To create tables in the registry and user manager database (`WSO2CARBON_DB`), use the following script:
-
-    ``` text
-    <PRODUCT_HOME>/dbscripts/h2.sql
-    ```
-
-Follow the instructions below to run the script in the Web console:
-
-1. Run the `./h2.sh` command to start the Web console.
-2. Copy the script text from the SQL file.
-3. Paste it into the console.
-4. Click **Run**.
-5. Restart the server.
-
-!!! info
-    You can create database tables automatically **when starting the product for the first time** by using the `-Dsetup` parameter as follows:
-
-    - For Windows: `<PRODUCT_HOME>/bin/api-manager.bat -Dsetup`
-
-    - For Linux: `<PRODUCT_HOME>/bin/api-manager.sh -Dsetup`
-
-!!! warning
-    Deprecation of -DSetup
-    When proper Database Administration (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option**. Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization.
-
-
-
-
diff --git a/en/docs/wip/deleted-pages/changing-to-ibm-informix.md b/en/docs/wip/deleted-pages/changing-to-ibm-informix.md deleted file mode 100644 index ba6c64b5ac..0000000000 --- a/en/docs/wip/deleted-pages/changing-to-ibm-informix.md +++ /dev/null @@ -1,180 +0,0 @@ -# Changing to IBM Informix
-
-By default, WSO2 API Manager uses the embedded H2 database as the database for storing user management and registry data. Given below are the steps you need to follow in order to use IBM Informix for this purpose.
-
-## Setting up IBM Informix
-
-The following sections describe how to set up an IBM Informix database to replace the default H2 database in your WSO2 product:
-
-- [Setting up the database and users](#setting-up-the-database-and-users)
-- [Setting up the drivers](#setting-up-the-drivers)
-- [Executing db scripts on IBM Informix database](#executing-db-scripts-on-ibm-informix-database)
-
-### Setting up the database and users
-
-Create the database and users in Informix. For instructions on creating the database and users, see the [Informix product documentation](http://www-947.ibm.com/support/entry/portal/all_documentation_links/information_management/informix_servers?productContext=-1122713425).
-
-!!! tip
-    Make the following changes to the default database when creating the Informix database.
-
-    - Define the page size as 4K or higher when creating the dbspace, as shown in the following command (denoted by `-k 4`):
-
-        ``` shell
-        onspaces -c -d testspace4 -k 4 -p /usr/informix/logdir/data5.dat -o 100 -s 3000000
-        ```
-
-    - Add the following system environment variables:
-
-        ``` text
-        export DB_LOCALE=en_US.UTF-8
-        export CLIENT_LOCALE=en_US.UTF-8
-        ```
-
-    - Create an sbspace other than the dbspace by executing the following command:
-
-        ``` shell
-        onspaces -c -S testspace4 -k 4 -p /usr/informix/logdir/data5.dat -o 100 -s 3000000
-        ```
-
-    - Add the following entry to the `/etc/onconfig` file, and replace the given example sbspace name (i.e. `testspace4`) with your sbspace name:
-
-        ``` text
-        SBSPACENAME testspace4
-        ```
-
-
-### Setting up the drivers
-
-1. Unzip the WSO2 API Manager pack. Let's refer to it as `<API-M_HOME>`.
-
-1. Download the [Informix JDBC driver](https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.jdbc_pg.doc/ids_jdbc_013.htm).
-
-1. Copy the relevant JAR file for your JRE version to the `<API-M_HOME>/repository/components/lib/` directory in all the nodes of the cluster.
-
-!!! info
-    Use Informix JDBC driver version 3.70.JC8, 4.10.JC2, or higher.
-
-### Executing db scripts on IBM Informix database
-
-1. To create tables in the registry and user manager database (`WSO2_SHARED_DB`), use the script: `<API-M_HOME>/dbscripts/informix.sql`
-
-1. To create tables in the apim database (`WSO2AM_DB`), use the script: `<API-M_HOME>/dbscripts/apimgt/informix.sql`
-
-
-## Changing the Carbon database to IBM Informix
-
-- [Creating the datasource connection to IBM Informix](#creating-the-datasource-connection-to-ibm-informix)
-
-### Creating the datasource connection to IBM Informix
-
-A datasource is used to establish the connection to a database. By default, the `SHARED_DB` and `AM_DB` datasources are configured in the `deployment.toml` file for the purpose of connecting to the default H2 databases.
-
-After setting up the IBM Informix database to replace the default H2 database, either change the default configurations of the `SHARED_DB` and `AM_DB` datasources, or configure a new datasource and point it to the new database, as explained below.
-
-Follow the steps below to change the type of the default datasource.
-
-1. Open the `<API-M_HOME>/repository/conf/deployment.toml` configuration file and locate the `[database.shared_db]` and `[database.apim_db]` configuration elements.
-
-1. Update the URL pointing to your IBM Informix database, the username and password required to access the database, the IBM Informix driver details, and the validation query for validating the connection, as shown below.
-
-    | Element | Description |
-    |-------------------------------|-------------|
-    | **type** | The database type used |
-    | **url** | The URL of the database. The default port for IBM Informix is 1533. |
-    | **username** and **password** | The name and password of the database user |
-    | **driverClassName** | The class name of the database driver |
-    | **validationQuery** | The SQL query that will be used to validate connections from this pool before returning them to the caller.|
-
-    !!! tip
-        Add the following configuration to the connection URL when specifying it, as shown in the example below:
-
-        `CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;`
-
-    Sample configuration is shown below:
-
-    === "Format"
-        ``` toml
-        url = "jdbc:informix-sqli://localhost:1533/<DATABASE_NAME>;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;"
-        username = "<USER_NAME>"
-        password = "<PASSWORD>"
-        driver = "com.informix.jdbc.IfxDriver"
-        validationQuery = "SELECT 1"
-        ```
-
-    === "Example"
-        ``` toml
-        [database.shared_db]
-        url = "jdbc:informix-sqli://localhost:1533/shared_db;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;"
-        username = "regadmin"
-        password = "regadmin"
-        driver = "com.informix.jdbc.IfxDriver"
-        validationQuery = "SELECT 1"
-
-        [database.apim_db]
-        url = "jdbc:informix-sqli://localhost:1533/apim_db;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;"
-        username = "apimadmin"
-        password = "apimadmin"
-        driver = "com.informix.jdbc.IfxDriver"
-        validationQuery = "SELECT 1"
-        ```
-
-1. You can update the configuration elements given below for your database connection.
- - | Element | Description | - |------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - | **maxActive** | The maximum number of active connections that can be allocated at the same time from this pool. Enter any negative value to denote an unlimited number of active connections. | - | **maxWait** | The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception. You can enter zero or a negative value to wait indefinitely. | - | **minIdle** | The minimum number of active connections that can remain idle in the pool without extra ones being created, or enter zero to create none. | - | **testOnBorrow** | The indication of whether objects will be validated before being borrowed from the pool. If the object fails to validate, it will be dropped from the pool, and another attempt will be made to borrow another. | - | **validationInterval** | The indication to avoid excess validation, and only run validation at the most, at this frequency (time in milliseconds). If a connection is due for validation but has been validated previously within this interval, it will not be validated again. | - | **defaultAutoCommit** | This property is **not** applicable to the Carbon database in WSO2 products because auto committing is usually handled at the code level, i.e., the default auto commit configuration specified for the RDBMS driver will be effective instead of this property element. Typically, auto committing is enabled for RDBMS drivers by default. 
When auto committing is enabled, each SQL statement will be committed to the database as an individual transaction, as opposed to committing multiple statements as a single transaction.| - | **commitOnReturn** | If `defaultAutoCommit =false`, then you can set `commitOnReturn =true`, so that the pool can complete the transaction by calling the commit on the connection as it is returned to the pool. However, If `rollbackOnReturn =true` then this attribute is ignored. The default value is false.| - | **rollbackOnReturn** | If `defaultAutoCommit =false`, then you can set `rollbackOnReturn =true` so that the pool can terminate the transaction by calling rollback on the connection as it is returned to the pool. The default value is false.| - - Sample configuration is shown below: - - === "Format" - ``` toml - url = "jdbc:informix-sqli://localhost:1533/;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;" - username = "" - password = "" - driver = "com.informix.jdbc.IfxDriver" - validationQuery = "SELECT 1" - pool_options. = - pool_options. = - ... - ``` - - === "Example" - ``` toml - [database.shared_db] - url = "jdbc:informix-sqli://localhost:1533/shared_db;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;" - username = "regadmin" - password = "regadmin" - driver = "com.informix.jdbc.IfxDriver" - validationQuery = "SELECT 1" - pool_options.maxActive = 100 - pool_options.maxWait = 10000 - pool_options.validationInterval = 10000 - - [database.apim_db] - url = "jdbc:informix-sqli://localhost:1533/apim_db;CLIENT_LOCALE=en_US.utf8;DB_LOCALE=en_us.utf8;IFX_USE_STRENC=true;" - username = "apimadmin" - password = "apimadmin" - driver = "com.informix.jdbc.IfxDriver" - validationQuery = "SELECT 1" - pool_options.maxActive = 50 - pool_options.maxWait = 30000 - ``` - - !!! 
info - For more information on other parameters that can be defined in the `/repository/conf/deployment.toml` file, see [Tomcat JDBC Connection Pool](http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes). - -1. Restart the server. - - !!! note - To give the Key Manager, Publisher, and Developer Portal components access to the user management data with shared permissions, JDBCUserStoreManager has been configured by default. For more information, refer [Configuring Userstores]({{base_path}}/administer/product-administration/managing-users-and-roles/managing-user-stores/configure-primary-user-store/configuring-a-jdbc-user-store). diff --git a/en/docs/wip/deleted-pages/changing-to-remote-h2.md b/en/docs/wip/deleted-pages/changing-to-remote-h2.md deleted file mode 100644 index 69bf7ddde4..0000000000 --- a/en/docs/wip/deleted-pages/changing-to-remote-h2.md +++ /dev/null @@ -1,143 +0,0 @@ -# Changing to Remote H2 - -By default, WSO2 API Manager uses the embedded H2 database as the database for storing user management and registry data. Given below are the instructions you need to follow in order to use remote H2 for this purpose. - -!!! warning - H2 is not recommended in production. - - The embedded H2 database is NOT recommended in enterprise testing and production environments. 
It has lower performance, clustering limitations, and can cause file corruption failures. Please use an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MS SQL instead. - - You can use the embedded H2 database in development environments and as the local registry in a registry mount. - -## Setting up remote H2 - -The following sections describe how to set up a remote H2 database to replace the default embedded H2 database in your WSO2 product: - -- [Setting up the drivers](#setting-up-the-drivers) -- [Executing db scripts to create tables on remote H2 database](#executing-db-scripts-to-create-tables-on-remote-h2-database) - -### Setting up the drivers - -1. Unzip the WSO2 API Manager pack. Let's refer to it as `<API-M_HOME>`. - -1. Download the [h2 zip file](http://www.h2database.com/html/download.html), and extract it. - -1. Copy the JAR file to the `<API-M_HOME>/repository/components/lib/` directory in all the nodes of the cluster. - -### Executing db scripts to create tables on remote H2 database - -1. Run the `./h2.sh` command to start the H2 web console. - -1. To create tables in the registry and user manager database (`WSO2_SHARED_DB`), use the script: `<API-M_HOME>/dbscripts/h2.sql` - -1. To create tables in the API Manager database (`WSO2AM_DB`), use the script: `<API-M_HOME>/dbscripts/apimgt/h2.sql` - - -## Changing the Carbon database to remote H2 - -- [Creating the datasource connection to remote H2](#creating-the-datasource-connection-to-remote-h2) - -### Creating the datasource connection to remote H2 - -A datasource is used to establish the connection to a database. By default, the `WSO2_SHARED_DB` and `WSO2AM_DB` datasources are configured in the `deployment.toml` file for the purpose of connecting to the default embedded H2 databases.
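 - As an alternative to the H2 web console, the db scripts above can be run non-interactively with H2's bundled `RunScript` tool. The following is a sketch only: the H2 JAR file name, the `<API-M_HOME>` placeholder, and the `regadmin`/`apimadmin` credentials are assumptions that must be adjusted to your environment, and the remote H2 server must already be running on port 9092.

``` shell
# Create the registry/user manager tables on the remote H2 server.
java -cp h2-*.jar org.h2.tools.RunScript \
    -url "jdbc:h2:tcp://localhost:9092/~/shared_db" \
    -user regadmin -password regadmin \
    -script "<API-M_HOME>/dbscripts/h2.sql"

# Create the API Manager tables.
java -cp h2-*.jar org.h2.tools.RunScript \
    -url "jdbc:h2:tcp://localhost:9092/~/apim_db" \
    -user apimadmin -password apimadmin \
    -script "<API-M_HOME>/dbscripts/apimgt/h2.sql"
```

This is convenient for scripted or repeatable environment setups where opening the web console is not practical.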
- -After setting up the remote H2 database to replace the default embedded H2 database, either change the default configurations of the `WSO2_SHARED_DB` and `WSO2AM_DB` datasource, or configure a new datasource to point it to the new database as explained below. - -!!! note - **If you are configuring API-M in a distributed setup**, do the changes in all the WSO2 API-M components. - -Follow the instructions below to change the type of the default datasource. - -1. Open the `/repository/conf/deployment.toml` configuration file and locate the `[database.shared_db]` and `[database.apim_db]` configuration elements. - -1. You simply have to update the URL pointing to your remote H2 database, the username, and password required to access the database as shown below. - - | Element | Description | - |-------------------------------|-------------------------------------------------------------| - | **type** | The database type used | - | **url** | The URL of the database. The default port for H2 remote is 9092 | - | **username** and **password** | The name and password of the database user | - - Sample configuration is shown below: - - === "Format" - ``` toml - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/" - username = "" - password = "" - ``` - - === "Example" - ``` toml - [database.shared_db] - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/~/shared_db" - username = "regadmin" - password = "regadmin" - - [database.apim_db] - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/~/apim_db" - username = "apimadmin" - password = "apimadmin" - ``` - -1. You can update the configuration elements given below for your database connection. 
- - | Element | Description | - |------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - | **maxActive** | The maximum number of active connections that can be allocated at the same time from this pool. Enter any negative value to denote an unlimited number of active connections. | - | **maxWait** | The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception. You can enter zero or a negative value to wait indefinitely. | - | **minIdle** | The minimum number of active connections that can remain idle in the pool without extra ones being created, or enter zero to create none. | - | **testOnBorrow** | The indication of whether objects will be validated before being borrowed from the pool. If the object fails to validate, it will be dropped from the pool, and another attempt will be made to borrow another. | - | **validationQuery** | The SQL query that will be used to validate connections from this pool before returning them to the caller. | - | **validationInterval** | The indication to avoid excess validation, and only run validation at the most, at this frequency (time in milliseconds). If a connection is due for validation but has been validated previously within this interval, it will not be validated again. | - | **defaultAutoCommit** | This property is **not** applicable to the Carbon database in WSO2 products because auto committing is usually handled at the code level, i.e., the default auto commit configuration specified for the RDBMS driver will be effective instead of this property element. 
Typically, auto committing is enabled for RDBMS drivers by default. When auto committing is enabled, each SQL statement will be committed to the database as an individual transaction, as opposed to committing multiple statements as a single transaction.| - | **commitOnReturn** | If `defaultAutoCommit =false`, then you can set `commitOnReturn =true`, so that the pool can complete the transaction by calling the commit on the connection as it is returned to the pool. However, If `rollbackOnReturn =true` then this attribute is ignored. The default value is false.| - | **rollbackOnReturn** | If `defaultAutoCommit =false`, then you can set `rollbackOnReturn =true` so that the pool can terminate the transaction by calling rollback on the connection as it is returned to the pool. The default value is false.| - - Sample configuration is shown below: - - === "Format" - ``` toml - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/" - username = "" - password = "" - pool_options. = - pool_options. = - ... - ``` - - === "Example" - ``` toml - [database.shared_db] - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/~/shared_db" - username = "regadmin" - password = "regadmin" - pool_options.maxActive = 100 - pool_options.maxWait = 10000 - pool_options.validationInterval = 10000 - - [database.apim_db] - type = "h2" - url = "jdbc:h2:tcp://localhost:9092/~/apim_db" - username = "apimadmin" - password = "apimadmin" - pool_options.maxActive = 50 - pool_options.maxWait = 30000 - ``` - - !!! info - For more information on other parameters that can be defined in the `/repository/conf/deployment.toml` file, see [Tomcat JDBC Connection Pool](http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Tomcat_JDBC_Enhanced_Attributes). - -1. Restart the server. - - !!! 
note - To give the Key Manager, Publisher, and Developer Portal components access to the user management data with shared permissions, JDBCUserStoreManager has been configured by default. For more information, refer [Configuring Userstores]({{base_path}}/administer/product-administration/managing-users-and-roles/managing-user-stores/configure-primary-user-store/configuring-a-jdbc-user-store). diff --git a/en/docs/wip/deleted-pages/distributed-deployment-of-api-manager.md b/en/docs/wip/deleted-pages/distributed-deployment-of-api-manager.md deleted file mode 100644 index 5bc4fdffde..0000000000 --- a/en/docs/wip/deleted-pages/distributed-deployment-of-api-manager.md +++ /dev/null @@ -1,15 +0,0 @@ -# Distributed Deployment of API Manager - -[WSO2 API Manager](https://wso2.com/api-manager/) (WSO2 API-M) is a complete API management solution, used for creating and publishing APIs, creating and managing a developer community, and routing API traffic in a scalable manner. WSO2 API-M includes the following five components: Publisher, Developer Portal, Gateway, Key Manager, and Traffic Manager. - -Typically, when you get started with WSO2 API Manager in a development environment, you deploy WSO2 API Manager as a single instance with all its components on a single server. 
For details, see [All-in-One Deployment Overview]({{base_path}}/install-and-setup/deploying-wso2-api-manager/single-node/all-in-one-deployment-overview/). - -However, in a production deployment, these components are deployed in a distributed manner. Therefore, you can create a distributed deployment of WSO2 API-M's five main components. This page describes how to set up and deploy WSO2 API-M as a distributed deployment. - -!!! note - Note that your configurations may vary depending on the WSO2 API Manager deployment pattern that you choose. If you are using multi-tenancy, all nodes should use the same user store, as all servers are servicing the same set of tenants, and it has to share the same Governance Registry space across all nodes. - - -- [Understanding the Distributed Deployment of WSO2 API-M]({{base_path}}/install-and-setup/deploying-wso2-api-manager/distributed-deployment/understanding-the-distributed-deployment-of-wso2-api-m/#understanding-the-distributed-deployment) -- [Deploying WSO2 API-M in a Distributed Setup]({{base_path}}/install-and-setup/deploying-wso2-api-manager/distributed-deployment/deploying-wso2-api-m-in-a-distributed-setup/) - diff --git a/en/docs/wip/need-to-update/create-and-publish-an-api.md b/en/docs/wip/need-to-update/create-and-publish-an-api.md deleted file mode 100644 index 192cb01eaa..0000000000 --- a/en/docs/wip/need-to-update/create-and-publish-an-api.md +++ /dev/null @@ -1,225 +0,0 @@ -# Create and Publish an API - -**API creation** is the process of linking an existing backend API implementation to the [API Publisher](_Key_Concepts_) so that you can manage and monitor the [API's lifecycle](_Key_Concepts_) , documentation, security, community, and subscriptions. Alternatively, you can provide the API implementation in-line in the [API Publisher](_Key_Concepts_) itself. - -!!! 
note - Click the following topics for a description of the concepts that you need to know when creating an API: - -- [API visibility](_Key_Concepts_) -- [Resources](_Key_Concepts_) -- [Endpoints](_Key_Concepts_) -- [Throttling tiers](_Key_Concepts_) -- [Sequences](_Key_Concepts_) -- [Response caching](_Configuring_Caching_) - - -1. Sign in to the WSO2 API Publisher. -`https://:9443/publisher` (e.g., `https://localhost:9443/publisher`). Use **admin** as the username and password. -2. Close the interactive tutorial that starts automatically if you are a first-time user, and click **ADD NEW API**. - - !!! tip - You can go back to the interactive tutorial at a later stage by clicking **API Walkthrough** on the top right corner. - - - ![]({{base_path}}/assets/attachments/103327814/103327786.png) - -3. Click **Design a New REST API** and click **Start Creating**. - ![]({{base_path}}/assets/attachments/103327814/103327785.png) -4. Give the information in the table below and click **Add** to add the resource. - - | Field | Sample value | - |-------|--------------| - | Name | PhoneVerification | - | Context | `/phoneverify` | - | Version | 1.0.0 | - | Access Control | All | - | Visibility on Store | Public | - | Tags | phone, checkNumbers | - | Resources (URL pattern) | `CheckPhoneNumber` | - | Request types | GET, POST | - - !!! info - The API context is used by the Gateway to identify the API. Therefore, the API context must be unique. This context is the API's root context when invoking the API through the Gateway. - !!! tip - You can define the API's version as a parameter of its context by adding `{version}` into the context. For example, `{version}/phoneverify`. The API Manager assigns the actual version of the API to the `{version}` parameter internally. For example, `https://localhost:8243/1.0.0/phoneverify`. Note that the version appears before the context, allowing you to group your APIs based on the versions. - !!! tip - Tags can be used to filter out APIs matching certain search criteria. It is recommended that you add tags that explain the functionality and purpose of the API, as subscribers can search for APIs based on the tags. - !!! info - The selection of the HTTP method should match the actual backend resource. For example, if the actual backend contains the GET method to retrieve details of a phone number, that resource should match a GET resource type with a proper context. - - ![]({{base_path}}/assets/attachments/103327814/103327784.png) - - For more information on URL patterns, see [API Resources](_Key_Concepts_). - -5. After you add the resource, click its `GET` method to expand it. Update the value for **Produces** as `application/xml` and the value for **Consumes** as `application/json`. - - !!! note - In the resource definition, you define the MIME types. **Consumes** refers to the MIME type of request accepted by the backend service and **Produces** refers to the MIME type of response produced by the backend service that you define as the endpoint of the API. - - -6. Next, add the following parameters. You use these parameters to invoke the API using the integrated API Console, which is explained in later tutorials. - - | Parameter Name | Description | Parameter Type | Data Type | Required | - |--------------------------------------------|-----------------------------------------------|----------------|-----------|----------| - | `PhoneNumber` | Give the phone number to be validated | query | string | True | - | `LicenseKey`| Give the license key as 0 for testing purposes | query | string | True | - - ![]({{base_path}}/assets/attachments/103327814/103327783.png) - - !!! info - HTTP POST - - By design, the HTTP POST method specifies that the web server accepts data enclosed within the body of the request. Therefore, when adding a POST method, API Manager adds the payload parameter to the POST method by default. - - !!! 
note - Import or Edit API definition - - ![]({{base_path}}/assets/attachments/103327814/103327782.png) - - To import an existing swagger definition from a file or a URL, click **Import** . Click **Edit Source** to manually edit the API swagger definition. - - -7. Once done, click **Next: Implement >** . - Alternatively, click **Save** to save all the changes made to the API. You can come back later to edit it further by selecting the API and clicking **Edit** . For details about the states of the API, see Manage the API Lifecycle . - - !!! info - The following parameter types can be defined according to the resource parameters you add. - - | Parameter Type | Description | - |-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - | `query`| Contains the fields added as part of the invocation URL that holds the data to be used to call the backend service. | - | `header`| Contains the case-sensitive names followed by a colon (:) and then by its value that carries additional information with the request which defines the operating parameters of the transaction. | - | `formData` | Contains a property list of attribute names and values that are included in the body of the message. | - | `body`| An arbitrary amount of data of any type sent with a POST message | - - You can use the following data type categories, supported by [swagger](http://docs.swagger.io/spec.html#433-data-type-fields) . - - - [`primitive `](http://docs.swagger.io/spec.html#431-primitives) (input/output) - -`containers` (as arrays/sets) (input/output) - -`complex` (as models) (input/output) - - [`void `](http://docs.swagger.io/spec.html#432-void) (output) - - [`file `](http://docs.swagger.io/spec.html#434-file) (input) - - -8. Click the **Managed API** option. 
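 - The resource and query parameters defined in the steps above correspond to a fragment of the API's swagger (OpenAPI 2.0) definition, which you can view via **Edit Source**. The snippet below is a hand-written sketch for orientation, not output generated by the Publisher, and surrounding fields may differ:

``` json
"/CheckPhoneNumber": {
    "get": {
        "produces": ["application/xml"],
        "consumes": ["application/json"],
        "parameters": [
            {"name": "PhoneNumber", "in": "query", "required": true, "type": "string"},
            {"name": "LicenseKey", "in": "query", "required": true, "type": "string"}
        ],
        "responses": {"200": {"description": "OK"}}
    }
}
```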
- - ![]({{base_path}}/assets/attachments/103327814/103327781.png) - -9. The **Implement** tab opens. Enter the information in the table below. - - - - - - - - - - - - - - - - - - - - - - -
    | Field | Sample value |
    |-------|--------------|
    | Endpoint type | HTTP/REST endpoint |
    | Production endpoint | `http://ws.cdyne.com/phoneverify/phoneverify.asmx` (This sample service has two operations: `CheckPhoneNumber` and `CheckPhoneNumbers`. Let's use `CheckPhoneNumber` here. To verify the URL, click the **Test** button next to it; this is the actual endpoint where the API implementation can be found.) |
    | Sandbox endpoint | `http://ws.cdyne.com/phoneverify/phoneverify.asmx` (To verify the URL, click the **Test** button next to it.) |

    !!! info
        Load balanced and failover endpoints

        The load balanced and failover endpoint types are not selected in this example. For details about these endpoint types, see Working with Endpoints and ESB Endpoints.

    - - For more information on Endpoints, see [Working with Endpoints](https://docs.wso2.com/display/AM2xx/Working+with+Endpoints) . - - ![]({{base_path}}/assets/attachments/103327814/103327780.png) - For additional information, see [Enabling CORS for APIs](_Enabling_CORS_for_APIs_) and [Adding Mediation Extensions](_Adding_Mediation_Extensions_) . For details on adding and managing certificates, see [Dynamic SSL Certificate Installation](_Add_SSL_Certificates_for_Endpoints_) . - - !!! info - You can deploy your API as a **Prototyped API** in the **Implement** tab. A prototyped API is usually a mock implementation made public in order to get feedback about its usability. You can implement it **inline** or by specifying an **endpoint** . - - ![]({{base_path}}/assets/attachments/103327814/103327779.png) - - You can invoke the API without a subscription after publishing the API to the Developer Portal. For more information, see [Deploy and Test as a Prototype](_Deploy_and_Test_Mock_APIs_) . - - -10. Click **Next: Manage >** and enter the information in the table below. - - - - - - - - - - - - - - - - - - - - - -
    | Field | Sample value | Description |
    |-------|--------------|-------------|
    | Transports | HTTP and HTTPS | The transport protocol on which the API is exposed. Both HTTP and HTTPS transports are selected by default. If you want to limit API availability to only one transport (e.g., HTTPS), clear the checkbox of the other transport. |
    | Subscription Tiers | Select all | The API can be available at different levels of service. Tiers allow you to limit the number of successful hits to an API during a given period. |

    !!! warning
        You can only try out HTTPS-based APIs via the API Console because the Developer Portal runs on HTTPS.

    - - ![]({{base_path}}/assets/attachments/103327814/103327778.png) - - !!! info - Make Default Version - - **Make this the Default Version** checkbox ensures that the API is available in the Gateway without a version specified in the production and sandbox URLs. This option allows you to create a new version of an API and set it as the default version. Then, you can invoke the same resources in the client applications without changing the API gateway URL. This allows you to create new versions of an API with changes, while at the same time allowing existing client applications to be invoked without the client having to change the URLs. - - ![]({{base_path}}/assets/attachments/103327814/103327777.png) - - - For more information on **maximum backend throughput** and **advanced throttling policies** , see [Working with Throttling](_Rate_Limiting_) . - -11. Click **Save & Publish** . This publishes the API that you just created to the Developer Portal so that subscribers can use it. - - !!! tip - You can save partially complete or completed APIs without publishing it. Select the API and click on the **Lifecycle** tab to manage the API Lifecycle . - - -You have created an API. - -**Related Tutorials** - -- [Create and Publish an API from a Swagger Definition](_Create_and_Publish_an_API_from_a_Swagger_Definition_) -- [Create a Prototyped API with an Inline Script](_Create_a_Mock_API_with_an_Inline_Script_) -- [Create a WebSocket API](_Create_a_WebSocket_API_) -- [Create and Publish a SOAP API](_Create_and_Publish_a_SOAP_API_) - diff --git a/en/docs/wip/scope-role-mapping.md b/en/docs/wip/scope-role-mapping.md deleted file mode 100644 index 872dd12528..0000000000 --- a/en/docs/wip/scope-role-mapping.md +++ /dev/null @@ -1,36 +0,0 @@ -# Scope Mapping - -Internal REST API scopes and their role mappings are stored in the `tenant-conf.json` file. 
In earlier versions of WSO2 API Manager, users had to manually update the `tenant-conf.json` file in order to modify the scope-to-role mappings. You can now do this through the Admin Portal instead. - -## Modify an Existing Scope Mapping - -1. Sign in to the Admin Portal. - - (`https://:9443/admin`) - -2. Navigate to **Settings --> Scope Mapping**. - - The list of the existing REST API scopes along with their current role bindings appears. - -3. Identify the updated role binding for the scope. - - For example, if you have a role named 'manager' and you need to allow 'manager' users to access the REST API resources protected by the `apim_publish` scope, then the role binding of that scope needs to be updated as follows: - - ``` - apim_publish : admin,Internal/publisher,manager - ``` -4. Locate the `apim_publish` scope in the table and click **Edit**. - -5. Enter the modified role list and click **Save**. - -## Role Mapping - -If you need to rename the `admin` role in your environment as `manager`, you would generally have to replace `admin` with `manager` in all the scope mappings in the scope mapping table, which is a tedious task. - -However, when using scope mapping in WSO2 API-M, you can simply map the role names by adding a row to the role mapping table that instructs API-M to map the `admin` role to the `manager` role, as explained below: - -1. Enter `admin` as the original role name. - -2. Enter `manager` as the mapped role(s) list. - -3. Click **Add**. 
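 - Behind the scenes, the Admin Portal persists these bindings in `tenant-conf.json`. For reference, a scope binding like the one above is stored roughly as follows; this is a sketch, and the exact key names and scope identifiers can vary across API-M versions:

``` json
"RESTAPIScopes": {
    "Scope": [
        {
            "Name": "apim_publish",
            "Roles": "admin,Internal/publisher,manager"
        }
    ]
}
```

Editing this file directly remains possible, but the Admin Portal is the recommended route since it applies the change without requiring manual file edits on each tenant.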
diff --git a/en/mkdocs.yml b/en/mkdocs.yml
index a7e0eda0cc..ce3ab0e2e1 100644
--- a/en/mkdocs.yml
+++ b/en/mkdocs.yml
@@ -239,20 +239,6 @@ nav:
     - Revision Deployment Workflow: deploy-and-publish/deploy-on-gateway/deploy-api/revision-deployment-workflow.md
     - API Gateway:
         - Overview of the WSO2 API Gateway: deploy-and-publish/deploy-on-gateway/api-gateway/overview-of-the-api-gateway.md
-        #- Publish APIs Overview: deploy-and-publish/deploy-on-gateway/publish-api-overview.md
-        #- Message Mediation:
-            #- Specify a Mediation Flow Using a Policy: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/specifying-mediation-flow-based-on-policy.md
-            #- Change the Default Mediation Flow of API Requests: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/changing-the-default-mediation-flow-of-api-requests.md
-            #- Create and Upload Using Integration Studio: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/creating-and-uploading-using-integration-studio.md
-            #- Add Dynamic Endpoints: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/adding-dynamic-endpoints.md
-            #- Remove Specific Request Headers From Response: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/removing-specific-request-headers-from-response.md
-            #- Pass a Custom Authorization Token to the Backend: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/passing-a-custom-authorization-token-to-the-backend.md
-            #- URL Mapping: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/mapping-the-parameters-of-your-backend-urls-with-the-api-publisher-urls.md
-            #- Disable Message Chunking: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/disabling-message-chunking.md
-            #- Transform API Message Payload: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/transforming-api-message-payload.md
-            #- Add a Non-Blocking Send Operation: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/adding-a-non-blocking-send-operation.md
-            #- Add a Class Mediator: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/adding-a-class-mediator.md
-            #- Configure message builders and formatters: deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/configuring-message-builders-formatters.md
         - Gateway Policies: deploy-and-publish/deploy-on-gateway/api-gateway/gateway-policies.md
         - Response Caching: deploy-and-publish/deploy-on-gateway/api-gateway/response-caching.md
         - Threat Protectors:
@@ -745,7 +731,6 @@ plugins:
       'install-and-setup/deploying-wso2-api-manager/distributed-deployment/configure-a-third-party-key-manager.md': 'https://apim.docs.wso2.com/en/4.4.0/install-and-setup/setup/distributed-deployment/configure-a-third-party-key-manager/'
       'install-and-setup/deploying-wso2-api-manager/distributed-deployment/configuring-wso2-identity-server-as-a-key-manager.md': 'https://apim.docs.wso2.com/en/4.4.0/install-and-setup/setup/distributed-deployment/configuring-wso2-identity-server-as-a-key-manager/'
       'install-and-setup/deploying-wso2-api-manager/configuring-message-builders-formatters.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/configuring-message-builders-formatters/'
-      ##'install-and-setup/deploying-wso2-api-manager/configuring-rsync-for-deployment-synchronization.md': 'https://apim.docs.wso2.com/en/3.2.0/install-and-setup/setup/distributed-deployment/clustering-gateway-for-ha-using-rsync/'
       'install-and-setup/deploying-wso2-api-manager/deploying-api-manager-with-kubernetes-or-openshift-resources.md': 'https://apim.docs.wso2.com/en/4.4.0/install-and-setup/install/deploying-api-manager-with-kubernetes-or-openshift-resources/'
       'install-and-setup/deploying-wso2-api-manager/production-deployment-guidelines.md': 'https://apim.docs.wso2.com/en/4.4.0/install-and-setup/setup/deployment-best-practices/production-deployment-guidelines/'
       'install-and-setup/deploying-wso2-api-manager/security-guidelines-for-production-deployment.md': 'https://apim.docs.wso2.com/en/4.4.0/install-and-setup/setup/deployment-best-practices/security-guidelines-for-production-deployment/'
@@ -776,7 +761,6 @@ plugins:
      'getting-started/quick-start-guide.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/quick-start-guide/quick-start-guide/'
      'getting-started/about-this-release.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/about-this-release/'
      'get-started/quick-start-guide/quick-start-guide.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/api-manager-quick-start-guide/'
-     'get-started/quick-start-guide/integration-qsg.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/integration-quick-start-guide/'
      'get-started/quick-start-guide/streaming-qsg.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/streaming-quick-start-guide/'
      'learn/design-api/create-api/create-a-rest-api.md': 'https://apim.docs.wso2.com/en/4.4.0/design/create-api/create-rest-api/create-a-rest-api/'
      'learn/design-api/create-api/create-a-rest-api-from-an-openapi-definition.md': 'https://apim.docs.wso2.com/en/4.4.0/design/create-api/create-rest-api/create-a-rest-api-from-an-openapi-definition/'
@@ -787,7 +771,6 @@ plugins:
      'learn/design-api/create-api/test-a-rest-api.md': 'https://apim.docs.wso2.com/en/4.4.0/design/create-api/create-rest-api/test-a-rest-api/'
      'learn/api-gateway/overview-of-the-api-gateway.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/overview-of-the-api-gateway/'
      'learn/api-gateway/message-mediation/changing-the-default-mediation-flow-of-api-requests.md': 'https://apim.docs.wso2.com/en/4.4.0/design/api-policies/attach-policy/'
-     'learn/api-gateway/message-mediation/creating-and-uploading-using-integration-studio.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/creating-and-uploading-using-integration-studio/'
      'learn/api-gateway/message-mediation/adding-dynamic-endpoints.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/adding-dynamic-endpoints/'
      'learn/api-gateway/message-mediation/removing-specific-request-headers-from-response.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/removing-specific-request-headers-from-response/'
      'learn/api-gateway/message-mediation/passing-a-custom-authorization-token-to-the-backend.md': 'https://apim.docs.wso2.com/en/4.4.0/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/passing-a-custom-authorization-token-to-the-backend/'
@@ -1063,10 +1046,6 @@ plugins:
      'observe/api-manager-analytics/analytics-pages/analytics-pages-devices.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/viewing/analytics-pages-devices/'
      'observe/api-manager-analytics/analytics-pages/analytics-pages-alerts.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/viewing/analytics-pages-alerts/'
      'observe/api-manager-analytics/analytics-pages/analytics-pages-report.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/viewing/analytics-pages-report/'
-     #'observe/api-manager-analytics/analytics-usecases/finding-faulty-apis.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/viewing/usecases/finding-faulty-apis/'
-     #'observe/api-manager-analytics/analytics-qos.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/qos/'
-     #'observe/api-manager-analytics/querying-apim-analytics.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/analytics-api-guide/'
-     #'observe/api-manager-analytics/troubleshooting-analytics.md':
      'observe/api-manager-analytics/overview-of-api-analytics.md': 'https://apim.docs.wso2.com/en/4.4.0/api-analytics/getting-started-guide/'
      'get-started/architecture.md': 'https://apim.docs.wso2.com/en/4.4.0/get-started/apim-architecture/'
      'api-analytics/getting-started-guide.md': 'https://apim.docs.wso2.com/en/latest/api-analytics/choreo-analytics/getting-started-guide/'
@@ -1081,8 +1060,6 @@ plugins:
      'administer/logging-and-monitoring/monitoring/working-with-observability.md': 'https://apim.docs.wso2.com/en/4.4.0/observe/api-manager/monitoring-correlation-logs/'
      'administer/logging-and-monitoring/logging/monitoring-http-access-logs.md': 'https://apim.docs.wso2.com/en/4.4.0/observe/api-manager/monitoring-http-access-logs/'
      'administer/logging-and-monitoring/logging/monitoring-audit-logs.md': 'https://apim.docs.wso2.com/en/4.4.0/observe/api-manager/monitoring-audit-logs/'
-     'administer/logging-and-monitoring/monitoring/monitoring-with-opentracing.md': 'https://apim.docs.wso2.com/en/4.4.0/observe/api-manager/traces/monitoring-with-opentracing/'
-     'administer/logging-and-monitoring/monitoring/jmx-based-monitoring.md': 'https://apim.docs.wso2.com/en/4.4.0/observe/api-manager/metrics/jmx-based-monitoring/'
      'administer/logging-and-monitoring/logging/setting-up-logging.md': 'https://apim.docs.wso2.com/en/4.4.0/administer/logging-and-monitoring/logging/configuring-logging/'
      'api-analytics/viewing/view-overview-of-api-analytics.md': 'https://wso2.com/choreo/docs/monitoring-and-insights/usage-insights/#overview'
      'api-analytics/viewing/view-api-analytics-on-traffic.md': 'https://wso2.com/choreo/docs/monitoring-and-insights/usage-insights/#traffic'