diff --git a/siebel-cloud-manager/access-siebel-oke/access-siebel-oke.md b/siebel-cloud-manager/access-siebel-oke/access-siebel-oke.md
index 378092ddc..37f327b4a 100644
--- a/siebel-cloud-manager/access-siebel-oke/access-siebel-oke.md
+++ b/siebel-cloud-manager/access-siebel-oke/access-siebel-oke.md
@@ -2,69 +2,63 @@
## Introduction
-In this lab, we shall access the Siebel Kubernetes Cluster that hosts the newly deployed Siebel CRM. In this way, we can see the various pods, services, and other Kubernetes resources running in the cluster and manage them as required.
+In this lab, we access the Siebel Kubernetes Cluster hosting our deployed Siebel CRM Enterprise. We'll see the various pods, services, and other Kubernetes resources running in the cluster.
-Estimated Time: 20 minutes
+Estimated Time: 15 minutes
### Objectives
In this lab, you will:
-* Gather connection details of the Siebel Kubernetes Cluster
-* Set up access to the Cluster from Siebel Cloud Manager instance
* View the Siebel CRM environment in Kubernetes
### Prerequisites
* SSH Key
-## Task 1: Gather connection details of the Siebel Kubernetes Cluster
+## Task 1: View the Siebel CRM environment in Kubernetes
-1. Log in to Oracle Cloud Console and navigate to **Developer Services** and **Kubernetes Clusters (OKE)**
+1. First, log in to the SCM instance via SSH as the **opc** user, using the SSH private key created in Lab 1.
-2. In the **List Scope** section on the left side panel, choose the compartment **SiebelLab**
+2. Once logged in to the SCM via SSH, enter the following command.
-3. Drill down on the cluster name, **siebellab\_oke\_cluster**
-
-4. Click ***Access Cluster***
-
-5. Click ***Local Access*** and note the commands mentioned.
+ ```
+ docker exec -it cloudmanager bash
+ ```
-## Task 2: Set up access to connect to the Cluster from Siebel Cloud Manager instance
+ This drops us into a shell inside the cloudmanager container.
-1. Connect to the Siebel Cloud Manager instance through PuTTY using the ssh private key that we had created in Lab 1. Enter the username as **opc**
+3. In Lab 4, we deployed Siebel CRM and, via a REST API call, we received an environment ID, referred to as **env_id**, which we used to follow the progress of the deployment.
-2. Execute the following command to enter the Siebel Cloud Manager container.
+    Now we want to interrogate the Kubernetes resources deployed for that environment. To begin, source the environment's Kubernetes profile with the following command.
```
- $ docker exec -it cloudmanager bash
+ source /home/opc/siebel/{env_id}/k8sprofile
```
-3. Now, execute the commands mentioned in the **Local Access** page from Oracle Cloud Console one by one in the PuTTY session. The commands would look like this,
+4. To view all the resources that were created as part of the new Siebel CRM environment, execute the following command (assuming you kept the name 'SiebelLab' in your environment payload).
```
- $ mkdir -p $HOME/.kube
- ```
- ```
- $ oci ce cluster create-kubeconfig --cluster-id {OCID_of_the_Cluster} --file $HOME/.kube/config --region us-ashburn-1 --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
- ```
- ```
- $ export KUBECONFIG=$HOME/.kube/config
+ $ kubectl -n siebellab get all
```
-4. The config information will be written to the **$HOME/.kube/config** file of the Siebel Cloud Manager container. We are now ready to access the cluster.
-## Task 3: View the Siebel CRM environment in Kubernetes
+ ![Siebel Cluster Details Screenshot](./images/sbl-cluster-details.png)
-1. To view all the resources that were created as part of the new Siebel CRM environment, execute the following command.
+ In the above screenshot, the **siebelcgw-0** pod represents the Siebel Gateway, the **edge-0** pod represents the Siebel Server, and the **quantum-0** pod represents the Siebel Application Interface.
- ```
- $ kubectl -n siebellab get all
- ```
+5. You can verify the version of the Siebel container in use as follows.
- ![Siebel Cluster Details Screenshot](./images/sbl-cluster-details.png)
+ ```
+    $ kubectl -n siebellab describe pod edge-0 | grep -i version
+ ```
-In the above screenshot, the **siebelcgw-0** pod represents the Siebel Gateway, the **edge-0** pod represents the Siebel Server, and the **quantum-0** pod represents the Siebel Application Interface.
+ or
+
+ ```
+    $ kubectl -n siebellab describe pod edge-0 | grep -i image
+ ```
-2. To enter a particular pod to execute commands or check services' status and so on, execute the following command.
+
+6. To enter a particular pod to execute commands, check service status, and so on, execute the following command.
```
$ kubectl -n siebellab exec -it {Pod_Name} -- /bin/bash
@@ -73,12 +67,32 @@ In the above screenshot, the **siebelcgw-0** pod represents the Siebel Gateway,
```
$ kubectl -n siebellab exec -it edge-0 -- /bin/bash
```
+
+7. From within the edge-0 pod, you can now connect to the database should you need to.
+
+ ```
+ $ sqlplus admin/{admin_password}@siebellab_tp
+ ```
+
+    When revisiting the vault secret for the password, be sure to click the radio button that shows the decoded Base64 value if you want to copy and paste the value you used.
+
+ ![Vault Secret](./images/plaintext-vault-secret.png)
+
+
+8. We can also view the typical Siebel processes running within the container.
+
+ ```
+ $ ps -ef
+ ```
+
+ ![Edge-0 Processes](./images/edge-0-processes.png)
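
As an aside, the `k8sprofile` file we sourced at the start of this task is what points `kubectl` at the right cluster. Below is a minimal sketch of such a file, assuming it simply exports `KUBECONFIG`; the real file is generated by SCM and may contain more than this.

```shell
# Illustrative only: a k8sprofile-style file typically exports KUBECONFIG so
# that subsequent kubectl commands target the environment's cluster.
# (The real file is generated by SCM and may set additional variables.)
mkdir -p /tmp/demo-env
cat > /tmp/demo-env/k8sprofile <<'EOF'
export KUBECONFIG=$HOME/.kube/config
EOF

# Sourcing the file puts the variable into the current shell.
source /tmp/demo-env/k8sprofile
echo "KUBECONFIG is now $KUBECONFIG"
```

This is why the `kubectl` commands above work without any extra flags: they pick up the cluster configuration from the sourced environment.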
+
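
One more aside on the vault secret: if you copied the Base64-encoded value rather than the decoded one, you can recover the plain text locally. The sample string below is just the example password `WElcome123###` re-encoded, not a real secret.

```shell
# Decode a Base64-encoded vault secret value on the command line.
# 'V0VsY29tZTEyMyMjIw==' is the sample password 'WElcome123###' re-encoded.
echo 'V0VsY29tZTEyMyMjIw==' | base64 -d && echo
```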
## Summary
-In this lab, we accessed the Siebel Kubernetes Cluster and viewed the various resources that support the Siebel CRM environment.
+In this lab, we accessed the Siebel Kubernetes Cluster, viewed the various resources that support the Siebel CRM environment, and delved briefly into the edge-0 container.
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Duncan Ford, Software Engineer; Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
\ No newline at end of file
+* **Last Updated By/Date** - Duncan Ford, Software Engineer, May 2024
\ No newline at end of file
diff --git a/siebel-cloud-manager/access-siebel-oke/images/edge-0-processes.png b/siebel-cloud-manager/access-siebel-oke/images/edge-0-processes.png
new file mode 100644
index 000000000..559bfdc18
Binary files /dev/null and b/siebel-cloud-manager/access-siebel-oke/images/edge-0-processes.png differ
diff --git a/siebel-cloud-manager/access-siebel-oke/images/plaintext-vault-secret.png b/siebel-cloud-manager/access-siebel-oke/images/plaintext-vault-secret.png
new file mode 100644
index 000000000..3e7e89655
Binary files /dev/null and b/siebel-cloud-manager/access-siebel-oke/images/plaintext-vault-secret.png differ
diff --git a/siebel-cloud-manager/access-siebel-oke/images/sbl-cluster-details.png b/siebel-cloud-manager/access-siebel-oke/images/sbl-cluster-details.png
index fb3986bdd..329b12656 100644
Binary files a/siebel-cloud-manager/access-siebel-oke/images/sbl-cluster-details.png and b/siebel-cloud-manager/access-siebel-oke/images/sbl-cluster-details.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/deploy-siebel-crm.md b/siebel-cloud-manager/deploy-siebel-crm/deploy-siebel-crm.md
index 37dc856c6..0fa363f83 100644
--- a/siebel-cloud-manager/deploy-siebel-crm/deploy-siebel-crm.md
+++ b/siebel-cloud-manager/deploy-siebel-crm/deploy-siebel-crm.md
@@ -2,120 +2,200 @@
## Introduction
-In this lab, we will deploy the Siebel CRM. We shall prepare a JSON payload containing required environment details and send a **POST** request to the Siebel Cloud Manager API.
+In this lab, we will deploy Siebel CRM. We prepare a JSON payload containing the required environment details and send a **POST** request to the Siebel Cloud Manager (**SCM**) API to initiate the deployment process.
Estimated Time: 1 hour
### Objectives
In this lab, you will:
-* Generate an Oracle Cloud Infrastructure user Auth Token
+
+* Generate an Oracle Cloud Infrastructure (**OCI**) User Auth Token
* Prepare a payload to deploy Siebel CRM
* Install and set up Postman
-* Execute the payload
+* Submit the payload for processing
* Monitor the deployment
* Launch the Siebel Application
### Prerequisites
-* GitLab and Siebel Cloud Manager Instance
+* GitLab and SCM Instances
* GitLab Access Token
-## Task 1: Generate an Oracle Cloud Infrastructure user Auth Token
+## Task 1: Generate an OCI User Auth Token
-We need to log in to the Oracle Cloud Infrastructure Tenancy and generate an auth token for our user. This token will be used as the **registry_password** in the payload.
+We need to log in to the OCI Tenancy and generate an [auth token for our user](https://docs.oracle.com/en-us/iaas/Content/Registry/Tasks/registrygettingauthtoken.htm). When SCM needs to store Siebel containers in the tenancy's container registry, this token will be supplied as the **registry_password** in the payload.
-1. Log in to the OCI tenancy. On the console, click the ***Profile Icon***
+1. Log in to the OCI tenancy.
-2. Click the first option that has the user id mentioned to reach the **User Details** page.
+2. Click the **Profile** icon at the top right and then click **My Profile**
![OCI Profile Icon](./images/oci-prof-icon.png)
3. On the left side panel, in the **Resources** section, click ***Auth Tokens***
-4. Click ***Generate Token*** and give the **Description** as below.
+ ![OCI Profile Icon](./images/oci-prof-auth-tokens.png)
- ```
- Token for SiebelCM
- ```
+4. Click **Generate Token**
![OCI Generate Token for User](./images/click-gen-token.png)
-5. Copy the auth token that got generated before closing the window as it will not be shown again.
+5. Populate the **Description** with a useful name, such as the suggestion below, then click **Generate Token**.
+
+ ![OCI Generate Token for User](./images/oci-gen-token-dialog.png)
+
+ ```
+ SCM Registry Access
+ ```
+
+6. Copy the generated auth token and store it somewhere safe before closing the window, as it will not be accessible again. However, you can always delete the token and create a new one if required.
![OCI User Token Displayed](./images/oci-user-token-display.png)
## Task 2: Prepare a payload to deploy Siebel CRM
-After we have completed all the prior tasks, we can use Siebel Cloud Manager to deploy Siebel CRM
-on Oracle Cloud Infrastructure. To do this, we shall first prepare a suitable JSON payload and then execute this payload on Siebel Cloud
-Manager.
+We're now ready to use SCM to deploy Siebel CRM on OCI. To do this, we shall first prepare a suitable JSON payload and then **POST** this payload to SCM for processing.
-1. Consider a sample payload below. Substitute parameter values inside **{}** as required.
+1. Consider the sample payload below. Substitute parameter values inside **{}** as required.
```
- {
+
+ {
"name": "SiebelLab",
"siebel": {
- "registry_url": "{Available_Endpoints_In_Your_Region}",
- "registry_user": "{User_Id_To_Connect_To_Container_Registry}",
- "registry_password": "{User_Auth_Token}",
- "database_type": "Vanilla",
- "industry": "Sales"
- },
- "infrastructure": {
- "gitlab_url": "https://{Public IP of Gitlab Instance}",
- "gitlab_accesstoken": "{Gitlab_Access_Token}",
- "gitlab_user": "root",
- "gitlab_selfsigned_cacert": "/home/opc/certs/rootCA.crt"
- },
- "database": {
- "db_type": "ATP",
- "atp": {
- "admin_password": "WElcome123###",
- "storage_in_tbs": 1,
- "cpu_cores": 2
- }
- },
- "size": {
- "kubernetes_node_shape": "VM.Standard.E4.Flex",
- "kubernetes_node_count": 3,
- "node_shape_config": {
- "memory_in_gbs": 20,
- "ocpus": 2
- },
- "ses_resource_limits": {
- "cpu": 2,
- "memory": "15Gi"
- },
- "cgw_resource_limits": {
- "cpu": 2,
- "memory": "15Gi"
- },
- "sai_resource_limits": {
- "cpu": 1,
- "memory": "15Gi"
- }
+ "registry_url": "{Available_Registry_Endpoint_In_Your_Region}",
+ "registry_user": "{User_Id_To_Connect_To_Container_Registry}",
+ "registry_password": "{User_Auth_Token}",
+ "database_type": "Vanilla",
+ "industry": "Financial Services"
+ },
+ "infrastructure": {
+ "gitlab_url": "https://{Public IP of Gitlab Instance}",
+ "gitlab_accesstoken": "{Gitlab_Access_Token}",
+ "gitlab_user": "root",
+ "gitlab_selfsigned_cacert": "/home/opc/certs/rootCA.crt"
+ },
+ "database": {
+ "db_type": "ATP",
+ "atp": {
+ "admin_password": "{OCID_For_Vault_Secret_For_Admin_Password}",
+ "storage_in_tbs": "1",
+ "cpu_cores": "2",
+ "wallet_password": "{OCID_For_Vault_Secret_For_Wallet_Password}"
+ },
+ "auth_info": {
+ "table_owner_password": "{OCID_For_Vault_Secret_For_TBLO_Password}",
+ "table_owner_user": "SIEBEL",
+ "default_user_password": "{OCID_For_Vault_Secret_For_Default_User}",
+ "anonymous_user_password": "{OCID_For_Vault_Secret_For_Anonymous_User_Password}",
+ "siebel_admin_password": "{OCID_For_Vault_Secret_For_Admin_Password}",
+ "siebel_admin_username": "SADMIN"
+ }
+ },
+ "size": {
+ "kubernetes_node_shape": "VM.Standard.E4.Flex",
+ "kubernetes_node_count": 3,
+ "node_shape_config": {
+ "memory_in_gbs": 20,
+ "ocpus": 2
+ },
+ "ses_resource_limits": {
+ "cpu": "2",
+ "memory": "15Gi"
+ },
+ "cgw_resource_limits": {
+ "cpu": "2",
+ "memory": "15Gi"
+ },
+ "sai_resource_limits": {
+ "cpu": "1",
+ "memory": "15Gi"
+ }
+ }
}
- }
+
```
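
Before moving on, it's worth checking that your edited payload is still valid JSON. Below is a minimal local check using Python's built-in `json` module; the file name `payload.json` and the trimmed contents are our own choices for illustration.

```shell
# Write a trimmed example payload to disk, then check that it parses as JSON.
# Use your full, substituted payload in practice.
cat > payload.json <<'EOF'
{
  "name": "SiebelLab",
  "database": { "db_type": "ATP" }
}
EOF

# json.tool exits non-zero and reports the offending line/column on bad JSON.
python3 -m json.tool payload.json > /dev/null && echo "payload.json is valid JSON"
```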
- The below table describes certain important payload parameters. For the latest list of parameters, their description, and an example payload, please follow the Siebel Cloud Manager documentation attached to this Oracle Support article - **Using Siebel Cloud Manager to Deploy Siebel CRM on OCI (Doc ID 2828904.1).**
+### Payload Parameter Options
+
+ The below table describes a couple of key payload parameters. For further details, please review the full SCM documentation in the [Siebel Bookshelf](https://www.oracle.com/documentation/siebel-crm-libraries.html) for the version you are deploying; e.g. [24.4](https://docs.oracle.com/cd/F26413_52/books/DeploySCM/c-deploying-siebel-crm-on-oci.html#Parameters-in-Payload-Content)
+
+ ![Payload parameter documentation for 24.4](./images/payload-parameter-documentation.png)
+
| Payload Parameter | Description |
|---|---|
| registry_url | Specify the URL of the Docker container registry. If you are using the OCI registry in your tenancy, then use the container registry from the same region as the Siebel Cloud Manager instance. For example, for the Ashburn region, you might use iad.ocir.io. For other regions, see [https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registryprerequisites.htm](https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registryprerequisites.htm) |
| registry_user | Specify the OCI user ID in either of the following formats,
Federated tenancies: {tenancy-namespace}/oracleidentitycloudservice/{username}
Non-Federated tenancies: {tenancy-namespace}/{username}
Refer to [https://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionslogintoocir.htm](https://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionslogintoocir.htm) |
-| db_type | Specifies the database type |
-| registry_url | Refer to [https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registryprerequisites.htm#regional-availability](https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registryprerequisites.htm#regional-availability) |
+| database_type | "Vanilla" or "Sample" |
+| db_type | Specifies the database type. Options are "ATP", "DBCS_VM", "BYOD" |
+| industry | Provides a one-string method to deploy a given swathe of CRM functionality. Review the documentation for the version you are deploying. Valid values for 24.4 are "Automotive", "Financial Services", "Life Sciences", "Sales", "Service", "Partner Relationship Management", "Public Sector", "Telecommunications", "Loyalty", "Consumer Goods", "Hospitality" |
+
+## Task 3: Create vault secrets for passwords
+
+**Note:**
+
+The passwords you create in your vault for the auth_info section need to comply with the database password requirements, which are currently set to the most restrictive across all supported database types, namely DBCS. This means your actual password, as of the time of writing, needs to be 9 to 30 characters long and contain at least 2 upper case characters, 2 lower case characters, 2 numbers, and 2 special characters. The special characters must be from the following set:
+
+ * underscore _
+ * hash #
+ * dash -
+
+Furthermore, don't include dictionary words in the password.
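
These rules can be sanity-checked locally before you create the secret. Below is a hedged sketch in shell; the function name and messages are our own, not part of SCM, and the dictionary-word rule is not checked.

```shell
# Hypothetical helper (not part of SCM): checks a candidate password against
# the rules above: 9-30 chars, >=2 upper, >=2 lower, >=2 digits, >=2 of _ # -
check_scm_password() {
  local p="$1"
  local len=${#p}
  if [ "$len" -lt 9 ] || [ "$len" -gt 30 ]; then
    echo "length $len out of range"; return 1
  fi
  [ "$(printf '%s' "$p" | tr -cd 'A-Z' | wc -c)" -ge 2 ] || { echo "need 2 upper"; return 1; }
  [ "$(printf '%s' "$p" | tr -cd 'a-z' | wc -c)" -ge 2 ] || { echo "need 2 lower"; return 1; }
  [ "$(printf '%s' "$p" | tr -cd '0-9' | wc -c)" -ge 2 ] || { echo "need 2 digits"; return 1; }
  [ "$(printf '%s' "$p" | tr -cd '_#-' | wc -c)" -ge 2 ] || { echo "need 2 special (_ # -)"; return 1; }
  echo "ok"
}

check_scm_password 'Xy_7#Qz9rT2w'        # meets every rule, prints "ok"
check_scm_password 'Welcome_123' || true # rejected: only one upper case letter
```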
+
+1. Navigate to the OCI menu for **Vault** under **Identity & Security**
+
+ ![OCI Menu - Identity & Security - Vault](./images/oci-menu-vault.png)
+
+2. Ensure you've selected the correct compartment. This was created during Lab 2.
+
+ ![Compartment - Vaults](./images/oci-compartment-vaults.png)
+
+3. Click the name of the vault that was created.
+
+ ![SiebelCM Vault - Encryption Keys](./images/siebelcm-vault-encryption-keys.png)
+
+4. Click **Create Key** to create a new master encryption key for the secrets we're about to create.
+
+ ![SiebelCM Vault - Encryption Keys](./images/siebelcm-vault-create-key.png)
+
+5. After a few moments, the state of the new encryption key will change from **Creating** to **Enabled**, at which point you can proceed.
+
+ ![SiebelCM Vault - Encryption Keys](./images/siebelcm-vault-key-created.png)
+
+6. Now select **Secrets** at the bottom left of the screen.
+
+ ![SiebelCM Vault - Secrets](./images/oci-siebelcm-vault-secrets.png)
+
+7. Now create the secret, or secrets, you need using the **Create Secret** button. For simplicity in the live lab, we'll create a single secret and use it for all of the auth_info passwords. Give it a name, being sure to follow the note above so that the password will be accepted, and select the encryption key we just created.
+
+ ![Create secret for auth_info section](./images/auth-info-secret.png)
+
+8. Click **Create Secret** to finalise the creation of the secret. As with the encryption key, the secret is ready when its state changes from **Creating** to **Active**.
+
+ ![Auth Info secret active](./images/auth-info-secret-active.png)
+
+9. Now click the name of the secret and copy the OCID of the newly created secret. It's this OCID value that is required for the auth_info values.
+
+ ![Auth Info secret details](./images/auth-info-secret-details.png)
+
+10. Now repeat this process to create two additional secrets, one each for the ATP database admin password and the wallet password.
+
+ ![All vault secrets](./images/all-vault-secrets.png)
+
+## Task 4: Log in to Postman
-## Task 3: Install and set up Postman
+We need to submit the payload above to Siebel Cloud Manager in order to get things rolling. There are many options for this, but we'll use an application called Postman.
-Postman is an API platform for building and using APIs. Postman can be either downloaded and installed in the local or we can also use its web version. For this lab, we shall use the web version.
+Postman is an API platform for building and using APIs. Postman can be either downloaded and installed locally or used directly from the web. For this lab, we shall use the web version, but for higher security you should install it locally.
+
+**Note:**
+
+At the time of writing, these steps were taken using Google Chrome. Firefox did not render the Postman user interface correctly.
1. Go to this website - [https://www.postman.com/downloads/](https://www.postman.com/downloads/)
-2. In the **Postman on the web** section, click ***Try the web version***
+2. In the **Postman on the web** section, click **Try the web version**
![Try the Web version](./images/try-web-version-postman.png)
@@ -123,21 +203,21 @@ Postman is an API platform for building and using APIs. Postman can be either do
![Create New Account](./images/create-new-account.png)
-4. Once the account has been successfully created, we will be directed to [https://web.postman.co/home](https://web.postman.co/home)
+4. Once we complete the account onboarding process, we arrive at our new workspace. You can return here any time by visiting [https://web.postman.co/workspaces](https://web.postman.co/workspaces)
-5. Click ***Create New Workspace*** under the **Workspaces** menu.
+ ![Completed Onboarding](./images/postman-completed-onboarding.png)
-6. Specify a **Name** of your choice and Choose **Personal** under **Visibility**. Click ***Create Workspace***.
+## Task 5: Execute the payload
- ![Create New Workspace](./images/create-workspace.png)
+1. Click **Send an API request**. This creates a new empty request for us to populate.
-## Task 4: Execute the payload
+ ![Create New Request](./images/postman-new-request.png)
-1. Click ***New*** button and choose **HTTP Request**.
+2. In the horizontal menu, navigate to **Authorization**. Choose the Type **Basic Auth** from the drop-down.
- ![Create New Request](./images/create-new-req.png)
+ ![Create New Request](./images/postman-request-basic-auth.png)
-2. In the horizontal menu, navigate to **Authorization**. Choose the Type **Basic Auth** from the drop-down. Give the **Username** and **Password** as below.
+3. Populate the **Username** and **Password** as follows.
**Username**
@@ -145,162 +225,139 @@ Postman is an API platform for building and using APIs. Postman can be either do
**Password**
- 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
-
- ![Give Authorization and credentials](./images/req_auth.png)
+    The password is randomly generated when SCM is deployed. Obtain the value for your instance by connecting to the SCM instance using SSH (or PuTTY). The admin password can be found in **/home/opc/config/api_creds.ini**
-3. Navigate to **Body** menu and select the **raw** radio button. Change the format from **Text** to **JSON**.
+ ![Give Authorization and credentials](./images/req_auth.png)
- ![Choose Body and Format](./images/msg-body.png)
+    e.g.
+ ```
+ [opc@scm2024xxxx-siebel-cm config]$ pwd
-4. Paste the Payload in the body section.
+ /home/opc/config
-5. Set the following attributes for the request.
+ [opc@scm2024xxxx-siebel-cm config]$ more api_creds.ini
- **Method**
+ [basic_auth]
- POST
-
- **Request URL**
+ basic_auth_password = m4799g6z7d5DTp6l-oDUFifPl8FxOHtFv3UEMWmcOVgK34DxiWxxxx
+ ```
- http://{Public IP of the Siebel Cloud Manager Instance}:16690/scm/api/v1.0/environment
+ You may prefer to use Postman's capability to set the Authorization parameters for all requests in a collection.
-6. Click ***Send***
+ ![Postman collection authorization settings](./images/postman-collection-auth.png)
-7. Save the response to a file as this has vital information on the Siebel environment that we are creating.
+ If you do this, set **Auth Type** for individual requests to **Inherit auth from parent**
-8. Note the value of **env_id** from the response.
+4. Navigate to **Body** menu and select the **raw** radio button. Verify that **JSON** is selected as the format.
- ![Env ID from the log](./images/env-id.png)
-
- **Note:**
-
- If you have selected "Advanced Network Configuration" in the Lab 2 - Task 2, then will encounter below error while submitting the Payload.
-
- Schema validation error :
- {'infrastructure': {'_schema': [' Provide siebel environment subnet cidr ranges for advanced network configuration.']}
-
- To Resolve this issue, you will need to correct the cidr blocks in the payload file. The detailed update can be refered in Doc ID 2862505.1.
-
- 1. Modify your payload under the "infrastructure" section so that the subnet/CIDR range is added along with gitlab information.
-
- ```
- "infrastructure":
- {
- ...
- "siebel_public_subnet_cidr" : "xx.x.x.x/xx",
- "siebel_private_subnet_cidr" : "xx.x.x.x/xx",
- "siebel_atp_subnet_cidr" : "xx.x.x.x/xx",
- "siebel_cluster_subnet_cidr" : "xx.x.x.x/xx"
- }
- ```
- 2. Once the above lines have been incorporated for subnet_cidr is added for public/private/atp/cluster, save the payload in Postman or the tool used for running payload.
+ ![Choose Body and Format](./images/msg-body.png)
- 3. Re-execute the payload now for the POST request.
- 4. Check to see if the payload creates the self-link and executes the stages for the OCI deployment of Siebel application for Lift & Shift or Greenfield.
+5. Paste the populated payload into the body section, indicated by the arrow above.
- **Important**
- Starting from SCM Version 22.8, the parameter name changed from "siebel\_public\_subnet\_cidr" to "siebel\_lb\_subnet\_cidr"
+6. Set the following attributes for the request.
+ **Method**
+ POST
+ **Request URL**
+ http://{Public IP of the Siebel Cloud Manager Instance}:16690/scm/api/v1.0/environment
+ ![POST and URL](./images/postman-post-and-url.png)
+7. At this stage, your request should look a little like this.
-## Task 5: Monitor the deployment
+ ![Choose Body and Format](./images/postman-payload-ready.png)
-After sending a post request with our payload, the Siebel Cloud Manager will prepare and deploy the Siebel CRM environment stack.
+8. Click **Save** to save your work. Give it a name and place it in a collection, creating a new collection if necessary.
-1. With the **env_id** that we noted earlier, send a **GET** request to Siebel Cloud Manager from Postman as below.
+9. Click **Send**
- **Method**
+10. If you receive errors in the response and make changes to the payload, be sure to save each time. There is a lot of copying and pasting involved in assembling the payload at present. You can validate the JSON using various online tools; search for "JSON validator" for some ideas.
- GET
+11. Save the response to a file as this has vital information on the Siebel environment that we are creating.
- **Request URL**
+12. Note the value of **env_id** from the response
- http://{Public IP of the Siebel Cloud Manager Instance}:16690/scm/api/v1.0/environment/{env_id}
+ ![Env ID from the log](./images/env-id.png)
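
As an alternative to Postman, the same POST can be issued from a command line with curl, using the same URL, credentials, and payload. This is a sketch with the same placeholders used elsewhere in this lab; `payload.json` is assumed to hold your populated payload.

    ```
    $ curl -u admin:{basic_auth_password} -H "Content-Type: application/json" -d @payload.json http://{Public IP of the Siebel Cloud Manager Instance}:16690/scm/api/v1.0/environment
    ```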
- The log file will be returned as a response.
+## Task 6: Monitor the deployment
- ![Deployment status log](./images/deploy-status-log.png)
+After sending a post request with our payload, the Siebel Cloud Manager will prepare and deploy the Siebel CRM environment stack. This will take a while.
- The response body will have a section named **stages** that indicates the particular stage of deployment. The **status** parameter in each stage can have values such as **passed, in-progress, failed**, etc.
+1. We can monitor the state of the deployment using the **env_id** that we noted earlier. To do this, create a **GET** request to Siebel Cloud Manager from Postman as below.
- We can also monitor the **Oracle Resource Manager (ORM)** stack logs from the Oracle Cloud console to see the progress of the stack deployment.
+ Start by creating a new request in Postman as follows:
-2. In the Oracle Cloud console, navigate to **Developer Services > Stacks**.
+ **Method**
-3. In the list scope section on the left side panel, choose the compartment **scm-siebel-cm**.
+ GET
-4. Drill down on the stack name and then drill down on the job name.
+ **Request URL**
- Monitor the logs here as it will be mentioning the resources' creation status.
+ http://{Public IP of the Siebel Cloud Manager Instance}:16690/scm/api/v1.0/environment/{env_id}
- The ORM stack deployment is just one of the many **stages** of the overall Siebel CRM deployment. To check the other stages and their status regularly, send a **GET** request to the Siebel Cloud Manager API as mentioned earlier.
+    Ensure you populate the Authorization section with the username **admin** and the password for your SCM deployment if you didn't already set up Authorization at the collection level.
+
+ The log file generated during the deployment process will be returned as a response.
- Each stage has its log and the path to it can be found in the respective section itself. In case required, login to the Siebel Cloud Manager instance using its public IP address and view the required log.
+ The response body will have a section named **stages** that indicates the particular stage of deployment. The **status** parameter in each stage can have values such as **passed, in-progress, failed**, etc.
-5. (Optional) If the ORM stack deployment fails due to any of the following types of errors, then send a **PUT** request to rerun the job.
+ ![Deployment status log](./images/deploy-status-log.png)
- ```
- Error: 400-InvalidParameter
+2. We can also monitor the **Oracle Resource Manager (ORM)** stack logs from the Oracle Cloud console to see the progress of the stack deployment.
- Provider version: 4.20.0, released on 2021-03-31. This provider is 36 updates behind to current.
+ ![OCI Menu - Resource Manager Stacks](./images/oci-resource-manager-stacks.png)
- Service: FileStorageFileSystem
- Error Message: Ocid
- 'ocid1.compartment.oc1..aaaaaaaabwvdshyuwbyfpx72m4lq6yni673m2ewf7qrou7ha5dvaxrjeogfa' not found in Compartment Tree!
+3. In the list scope section on the left side panel, choose the compartment **scm{date}-siebel-cm**.
- OPC request ID: a42cd5b1927359f403a56e8eabb378b8/47793109B015FB5F54CE70BC905ACF70/968A739323FC3A76967AD9E94862A1E2
- Suggestion: Please update the parameter(s) in the Terraform config as per error message Ocid
- 'ocid1.compartment.oc1..aaaaaaaabwvdshyuwbyfpx72m4lq6yni673m2ewf7qrou7ha5dvaxrjeogfa' not found in Compartment Tree!
+ ![OCI Compartment Stacks](./images/oci-compartment-stacks.png)
- on modules/storage/main.tf line 1, in resource "oci_file_storage_file_system" "siebelCM_Fss"
- 1: resource "oci_file_storage_file_system" "siebelCM_Fss" {
- ```
+4. Click on the stack name (not the Gitlab one) and then drill down on the job name.
-6. (Optional) Send a **Put** request to rerun the stack as below in case any of the above errors were encountered.
+ ![OCI Stack Job Progress](./images/oci-stack-job-progress.png)
- **Method**
+    Monitor the logs to observe the resources' creation statuses.
- PUT
+    The ORM stack deployment is just one of the many **stages** of the overall Siebel CRM deployment. To check the other stages and their statuses regularly, repeat step 1 above to send a **GET** request to the Siebel Cloud Manager API.
- **Request URL**
+ Each stage has its own log and the path to it can be found in the respective section as **log_location**. If required, SSH to the Siebel Cloud Manager and view the specific log.
- http://{Public IP of the Siebel Cloud Manager Instance}:16690/api/v1/environments/{env_id}?rerun=true
+ ![SCM Deployment Stage Log](./images/scm-deployment-stage-log.png)
-7. After all the **stages** have been completed successfully as **passed**, the list of relevant application URLs will be mentioned towards the end of the log as shown below.
+5. After all the **stages** have completed successfully with the status **passed**, the application URLs for the deployed industry will be listed at the end of the response.
![All Stages Passed](./images/appln-urls.png)
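
Once you have saved a response to a file, you can summarise the stage statuses from the command line rather than scrolling the whole body. The sketch below assumes a saved file named `response.json`; the stage names are illustrative, but the **stages**/**status** structure mirrors the response described above.

```shell
# Build a small stand-in response file so the example is self-contained.
# The stage names are made up; the stages/status structure follows the text.
cat > response.json <<'EOF'
{ "stages": [ { "name": "infrastructure", "status": "passed" },
              { "name": "siebel_deploy",  "status": "in-progress" } ] }
EOF

# Print one "name: status" line per deployment stage.
python3 - <<'EOF'
import json

with open("response.json") as f:
    data = json.load(f)
for stage in data["stages"]:
    print(f"{stage['name']}: {stage['status']}")
EOF
```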
-
## Task 7: Launch the Siebel Application
1. Launch the application URL in a new browser session and enter the below credentials.
+ ![Siebel Login](./images/sbl-login.png)
+
**User Id**
- sadmin
+ {siebel_admin_username}
**Password**
- SiebelAdmin123
+    {siebel_admin_password}
- This password can be found in the log that was retrieved with the **GET** request and this can be found against the **userpassword** parameter under the **database** section.
+    These values correspond to the values you stored in the vault secrets whose OCIDs were sent in the deployment payload.
+
+ ![All Stages Passed](./images/scm-deployment-payload-database.png)
- ![Siebel Login](./images/sbl-login.png)
## Summary
-We have successfully deployed a new Siebel CRM environment using the Siebel Cloud Manager. Please follow the Siebel Cloud Manager documentation to understand the different payload parameters that can be customized while deploying the Siebel CRM.
+We have successfully deployed a new Siebel CRM environment using the Siebel Cloud Manager. Please follow the Siebel Cloud Manager documentation to understand the different payload parameters that can be customized while deploying Siebel CRM.
In the next lab, you can view and manage the Siebel Kubernetes Cluster, connect to the Siebel pods, and perform administration and management tasks.
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Duncan Ford, Software Engineer; Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
\ No newline at end of file
+* **Last Updated By/Date** - Duncan Ford, Software Engineer, May 2024
\ No newline at end of file
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/all-vault-secrets.png b/siebel-cloud-manager/deploy-siebel-crm/images/all-vault-secrets.png
new file mode 100644
index 000000000..1847f5baf
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/all-vault-secrets.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/appln-urls.png b/siebel-cloud-manager/deploy-siebel-crm/images/appln-urls.png
index fe81bccdc..fe4accf10 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/appln-urls.png and b/siebel-cloud-manager/deploy-siebel-crm/images/appln-urls.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-active.png b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-active.png
new file mode 100644
index 000000000..9ae6239b9
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-active.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-details.png b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-details.png
new file mode 100644
index 000000000..973e3ff31
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret-details.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret.png b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret.png
new file mode 100644
index 000000000..19d56683b
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/auth-info-secret.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/click-gen-token.png b/siebel-cloud-manager/deploy-siebel-crm/images/click-gen-token.png
index 421b57b65..7f4861675 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/click-gen-token.png and b/siebel-cloud-manager/deploy-siebel-crm/images/click-gen-token.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/create-blank-workspace.png b/siebel-cloud-manager/deploy-siebel-crm/images/create-blank-workspace.png
new file mode 100644
index 000000000..f9ed9874d
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/create-blank-workspace.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/create-new-account.png b/siebel-cloud-manager/deploy-siebel-crm/images/create-new-account.png
index 97318504c..a80781e24 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/create-new-account.png and b/siebel-cloud-manager/deploy-siebel-crm/images/create-new-account.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/create-new-req.png b/siebel-cloud-manager/deploy-siebel-crm/images/create-new-req.png
deleted file mode 100644
index 7869b90c8..000000000
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/create-new-req.png and /dev/null differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/create-workspace.png b/siebel-cloud-manager/deploy-siebel-crm/images/create-workspace.png
deleted file mode 100644
index 8d6ca7ec4..000000000
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/create-workspace.png and /dev/null differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/deploy-status-log.png b/siebel-cloud-manager/deploy-siebel-crm/images/deploy-status-log.png
index 7aad5810b..dfbbb8473 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/deploy-status-log.png and b/siebel-cloud-manager/deploy-siebel-crm/images/deploy-status-log.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/env-id.png b/siebel-cloud-manager/deploy-siebel-crm/images/env-id.png
index f7c184e13..2b9b42606 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/env-id.png and b/siebel-cloud-manager/deploy-siebel-crm/images/env-id.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/msg-body.png b/siebel-cloud-manager/deploy-siebel-crm/images/msg-body.png
index ae07b19ab..b648d82ff 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/msg-body.png and b/siebel-cloud-manager/deploy-siebel-crm/images/msg-body.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-stacks.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-stacks.png
new file mode 100644
index 000000000..17ace60bb
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-stacks.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-vaults.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-vaults.png
new file mode 100644
index 000000000..6314a42ef
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-compartment-vaults.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-gen-token-dialog.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-gen-token-dialog.png
new file mode 100644
index 000000000..af48c97a7
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-gen-token-dialog.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-menu-vault.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-menu-vault.png
new file mode 100644
index 000000000..cbd165c10
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-menu-vault.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-auth-tokens.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-auth-tokens.png
new file mode 100644
index 000000000..ba33d2a6b
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-auth-tokens.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-icon.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-icon.png
index 7b93235f9..ce8e7cf0f 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-icon.png and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-prof-icon.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-resource-manager-stacks.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-resource-manager-stacks.png
new file mode 100644
index 000000000..1ed3a5590
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-resource-manager-stacks.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-siebelcm-vault-secrets.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-siebelcm-vault-secrets.png
new file mode 100644
index 000000000..e639795f5
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-siebelcm-vault-secrets.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-stack-job-progress.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-stack-job-progress.png
new file mode 100644
index 000000000..1f39f76db
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-stack-job-progress.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/oci-user-token-display.png b/siebel-cloud-manager/deploy-siebel-crm/images/oci-user-token-display.png
index 782e3e02d..15d5cd6bb 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/oci-user-token-display.png and b/siebel-cloud-manager/deploy-siebel-crm/images/oci-user-token-display.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/payload-parameter-documentation.png b/siebel-cloud-manager/deploy-siebel-crm/images/payload-parameter-documentation.png
new file mode 100644
index 000000000..91348f07a
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/payload-parameter-documentation.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-collection-auth.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-collection-auth.png
new file mode 100644
index 000000000..ded8c9574
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-collection-auth.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-completed-onboarding.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-completed-onboarding.png
new file mode 100644
index 000000000..021cd008b
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-completed-onboarding.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-new-request.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-new-request.png
new file mode 100644
index 000000000..d653f2057
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-new-request.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-payload-ready.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-payload-ready.png
new file mode 100644
index 000000000..d2ce571ae
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-payload-ready.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-post-and-url.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-post-and-url.png
new file mode 100644
index 000000000..fa548c15d
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-post-and-url.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/postman-request-basic-auth.png b/siebel-cloud-manager/deploy-siebel-crm/images/postman-request-basic-auth.png
new file mode 100644
index 000000000..7a92f1f31
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/postman-request-basic-auth.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/req_auth.png b/siebel-cloud-manager/deploy-siebel-crm/images/req_auth.png
index 8f68c929a..23b0534fd 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/req_auth.png and b/siebel-cloud-manager/deploy-siebel-crm/images/req_auth.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/sbl-login.png b/siebel-cloud-manager/deploy-siebel-crm/images/sbl-login.png
index dad8e48d9..be000238d 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/sbl-login.png and b/siebel-cloud-manager/deploy-siebel-crm/images/sbl-login.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-payload-database.png b/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-payload-database.png
new file mode 100644
index 000000000..55914c89a
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-payload-database.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-stage-log.png b/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-stage-log.png
new file mode 100644
index 000000000..d43ffd7ef
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/scm-deployment-stage-log.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-create-key.png b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-create-key.png
new file mode 100644
index 000000000..dbb0eb72b
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-create-key.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-encryption-keys.png b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-encryption-keys.png
new file mode 100644
index 000000000..f13b9aa8e
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-encryption-keys.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-key-created.png b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-key-created.png
new file mode 100644
index 000000000..672f0c907
Binary files /dev/null and b/siebel-cloud-manager/deploy-siebel-crm/images/siebelcm-vault-key-created.png differ
diff --git a/siebel-cloud-manager/deploy-siebel-crm/images/try-web-version-postman.png b/siebel-cloud-manager/deploy-siebel-crm/images/try-web-version-postman.png
index 00e0ad57a..a7949cf0c 100644
Binary files a/siebel-cloud-manager/deploy-siebel-crm/images/try-web-version-postman.png and b/siebel-cloud-manager/deploy-siebel-crm/images/try-web-version-postman.png differ
diff --git a/siebel-cloud-manager/introduction/introduction.md b/siebel-cloud-manager/introduction/introduction.md
index 52254fcf7..dcd091570 100644
--- a/siebel-cloud-manager/introduction/introduction.md
+++ b/siebel-cloud-manager/introduction/introduction.md
@@ -4,45 +4,46 @@
This workshop showcases the deployment of a new Siebel CRM environment on Oracle Cloud Infrastructure (OCI) using the Siebel Cloud Manager.
-The Siebel Cloud Manager is a new REST-based continuous deployment tool used for automating the deployment of Siebel CRM on Oracle Cloud Infrastructure, whether you start from the existing on-premises deployment of Siebel CRM or create a new deployment of Siebel CRM on OCI. The Siebel CRM runs as Docker Containers in Oracle Kubernetes Engine (OKE).
+Siebel Cloud Manager is a new REST-based continuous-deployment tool used for automating the deployment of Siebel CRM on Oracle Cloud Infrastructure. Customers have the option to lift and shift an existing on-premises environment or set up a fresh 'greenfield' deployment. Siebel CRM is deployed as a set of containers managed by Oracle Kubernetes Engine (OKE).
-For the complete documentation on Siebel Cloud Manager, visit the support article **Using Siebel Cloud Manager to Deploy Siebel CRM on OCI (Doc ID 2828904.1)** and download the attached pdf document.
+For the latest documentation on Siebel Cloud Manager, visit [Siebel Bookshelf](https://www.oracle.com/documentation/siebel-crm-libraries.html) and review the guide titled **Deploying Siebel CRM on OCI using Siebel Cloud Manager** for the appropriate release you are deploying.
Estimated Time: 2 hours 20 minutes
Notes:
-* The workshop is quite detailed and technical. PLEASE take your time and DO NOT skip any steps.
-* IP addresses and URLs in the screenshots in this workbook may differ from what you use in the labs, as these are dynamically generated.
-* For security purposes, some sensitive text (such as IP addresses) may be redacted in the screenshots in this workbook.
+* The workshop is quite detailed and technical. Take your time and do not skip any steps.
+* IP addresses and URLs in the screenshots in this workbook may differ from what you see, as they are dynamically generated.
+* For security purposes, some sensitive text (such as IP addresses) has been redacted in the screenshots in this workbook.
* Replace **{}** characters and the string inside them with the relevant values wherever applicable as they are placeholders; for example, **{Application_Name}** will be **Siebel**
-
-UNIX commands (usually executed in an SSH session using PuTTY) are displayed in a monospace font within a box, as follows:
+UNIX commands (usually executed in a console-based SSH session) are displayed in a monospace font within a box as follows:
```
-$ sudo yum install wget -y $ wget -O bitnami-mean-linux-installer.run https://bitnami.com/stack/mean/download_latest/linux-x64
+$ sudo yum install wget -y
+$ wget -O bitnami-mean-linux-installer.run https://bitnami.com/stack/mean/download_latest/linux-x64
```
### Workshop Overview
This workshop uses the following components:
-* Trial accounts (one per attendee)
+* A Trial or Paid OCI Tenancy
- - Virtual Cloud Network and related resources
- - User-generated using Resource Manager and provided Terraform script
+* Virtual Cloud Network and related resources
+ - User-generated using Resource Manager and provided Terraform script
- - GitLab Instance
- - Deployed through Architecture Center's GitLab stack
+* GitLab Instance
+ - Deployed through Architecture Center's GitLab stack
- - Siebel Cloud Manager instance
- - Provisioned from OCI Marketplace Image
+* Siebel Cloud Manager instance
+ - Provisioned from an OCI Marketplace Image
- - Oracle Kubernetes Engine (OKE) and related resources
- - Created by Siebel Cloud Manager
+* Oracle Kubernetes Engine and related resources
+ - Provisioned and configured by Siebel Cloud Manager
- - Siebel CRM Application
+* Siebel CRM Application
+ - Deployed as a set of pods on Oracle Kubernetes Engine
### Objectives
@@ -60,8 +61,8 @@ In this lab, you will:
You will need the following to complete this workshop:
* A secure remote login (Secure Shell, or SSH) utility
- - Such as PuTTY - downloaded from [here](https://www.ssh.com/ssh/putty/download)
-* Basic understanding of Dockers & Kubernetes, and Unix commands.
+ - e.g. PuTTY, which can be downloaded [here](https://www.ssh.com/ssh/putty/download)
+* Basic understanding of Containers, Kubernetes, and Unix commands.
## Appendix
### Terminology
@@ -90,6 +91,6 @@ The following terms are commonly employed in Oracle Siebel cloud operations and
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect; Duncan Ford, Siebel Software Engineer
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
+* **Last Updated By/Date** - Duncan Ford, Siebel Software Engineer, April 2024
diff --git a/siebel-cloud-manager/provision-scm/images/assign-public-ip-address.png b/siebel-cloud-manager/provision-scm/images/assign-public-ip-address.png
new file mode 100644
index 000000000..b091e1b20
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/assign-public-ip-address.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/click-configure-variables.png b/siebel-cloud-manager/provision-scm/images/click-configure-variables.png
new file mode 100644
index 000000000..7b1b0d249
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/click-configure-variables.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/click-create.png b/siebel-cloud-manager/provision-scm/images/click-create.png
new file mode 100644
index 000000000..a2c2c8c7e
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/click-create.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/compartment-for-stack.png b/siebel-cloud-manager/provision-scm/images/compartment-for-stack.png
new file mode 100644
index 000000000..3311cca90
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/compartment-for-stack.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/compute-instances.png b/siebel-cloud-manager/provision-scm/images/compute-instances.png
new file mode 100644
index 000000000..5ada55018
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/compute-instances.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/copy-the-ocid.png b/siebel-cloud-manager/provision-scm/images/copy-the-ocid.png
index 1b4fe7844..e4fa1a320 100644
Binary files a/siebel-cloud-manager/provision-scm/images/copy-the-ocid.png and b/siebel-cloud-manager/provision-scm/images/copy-the-ocid.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/create-new-vault.png b/siebel-cloud-manager/provision-scm/images/create-new-vault.png
new file mode 100644
index 000000000..963f8b1ee
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/create-new-vault.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/docker-ps-output.png b/siebel-cloud-manager/provision-scm/images/docker-ps-output.png
new file mode 100644
index 000000000..f5bf44728
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/docker-ps-output.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/enter-resource-prefix.png b/siebel-cloud-manager/provision-scm/images/enter-resource-prefix.png
new file mode 100644
index 000000000..78b89408c
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/enter-resource-prefix.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/hamburger.png b/siebel-cloud-manager/provision-scm/images/hamburger.png
new file mode 100644
index 000000000..66ae6eded
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/hamburger.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/launch-stack-Button.png b/siebel-cloud-manager/provision-scm/images/launch-stack-Button.png
index 17d7c8820..ea72bc6a0 100644
Binary files a/siebel-cloud-manager/provision-scm/images/launch-stack-Button.png and b/siebel-cloud-manager/provision-scm/images/launch-stack-Button.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/navigate-compartment.png b/siebel-cloud-manager/provision-scm/images/navigate-compartment.png
index 68855445a..dafa4acfa 100644
Binary files a/siebel-cloud-manager/provision-scm/images/navigate-compartment.png and b/siebel-cloud-manager/provision-scm/images/navigate-compartment.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/navigate-marketplace.png b/siebel-cloud-manager/provision-scm/images/navigate-marketplace.png
index fb7b7c6ce..abe1a07a0 100644
Binary files a/siebel-cloud-manager/provision-scm/images/navigate-marketplace.png and b/siebel-cloud-manager/provision-scm/images/navigate-marketplace.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/network-ip-config.png b/siebel-cloud-manager/provision-scm/images/network-ip-config.png
new file mode 100644
index 000000000..7a23426e2
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/network-ip-config.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-instance-details.png b/siebel-cloud-manager/provision-scm/images/scm-instance-details.png
new file mode 100644
index 000000000..a89d010f2
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-instance-details.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-instance-subnet-details.png b/siebel-cloud-manager/provision-scm/images/scm-instance-subnet-details.png
new file mode 100644
index 000000000..3c7f09204
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-instance-subnet-details.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-instance.png b/siebel-cloud-manager/provision-scm/images/scm-instance.png
new file mode 100644
index 000000000..c44fc89a4
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-instance.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-security-list-ingress.png b/siebel-cloud-manager/provision-scm/images/scm-security-list-ingress.png
new file mode 100644
index 000000000..d7ce8c888
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-security-list-ingress.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-apply.png b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-apply.png
new file mode 100644
index 000000000..c0573d45e
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-apply.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-complete.png b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-complete.png
new file mode 100644
index 000000000..150e3c8aa
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-complete.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-outputs.png b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-outputs.png
new file mode 100644
index 000000000..4e691b107
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/scm-stack-terraform-outputs.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/scm-ui.png b/siebel-cloud-manager/provision-scm/images/scm-ui.png
index acaaa56b6..f66243b47 100644
Binary files a/siebel-cloud-manager/provision-scm/images/scm-ui.png and b/siebel-cloud-manager/provision-scm/images/scm-ui.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/search-scm.png b/siebel-cloud-manager/provision-scm/images/search-scm.png
index 490925a5d..1671551e2 100644
Binary files a/siebel-cloud-manager/provision-scm/images/search-scm.png and b/siebel-cloud-manager/provision-scm/images/search-scm.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/select-instance-type.png b/siebel-cloud-manager/provision-scm/images/select-instance-type.png
new file mode 100644
index 000000000..7b74e2f63
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/select-instance-type.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/select-scm-instance-resources.png b/siebel-cloud-manager/provision-scm/images/select-scm-instance-resources.png
new file mode 100644
index 000000000..018dd14a7
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/select-scm-instance-resources.png differ
diff --git a/siebel-cloud-manager/provision-scm/images/select-ssh-key-for-stack.png b/siebel-cloud-manager/provision-scm/images/select-ssh-key-for-stack.png
new file mode 100644
index 000000000..73ab6c1ff
Binary files /dev/null and b/siebel-cloud-manager/provision-scm/images/select-ssh-key-for-stack.png differ
diff --git a/siebel-cloud-manager/provision-scm/provision-scm.md b/siebel-cloud-manager/provision-scm/provision-scm.md
index 904f28fdd..1e46d7d02 100644
--- a/siebel-cloud-manager/provision-scm/provision-scm.md
+++ b/siebel-cloud-manager/provision-scm/provision-scm.md
@@ -2,7 +2,7 @@
## Introduction
-In this lab, we shall first create a new compartment to organize all our lab-related cloud resources. Later, we shall deploy the Siebel Cloud Manager stack to provision a virtual machine that will have the Siebel Cloud Manager application pre-installed.
+In this lab, we will first create a new compartment to organize all our lab-related cloud resources. Later, we'll deploy the Siebel Cloud Manager stack to provision a virtual machine that has the Siebel Cloud Manager application pre-installed.
Estimated Time: 20 minutes
@@ -22,13 +22,17 @@ In this lab, you will:
## Task 1: Create a compartment
-1. Log in to the Oracle Cloud Infrastructure Tenancy and in the console, click the **hamburger icon** and navigate to **Identity and Security** then **Compartments**
+1. Log in to your Oracle Cloud Infrastructure Tenancy and, in the console, click the icon at the top left, sometimes referred to as the **hamburger icon**.
- ![Navigate to Compartment](./images/navigate-compartment.png)
+ ![Click the nav menu button, or hamburger icon](./images/hamburger.png)
-2. On the compartments page, click **Create Compartment**
+2. Navigate to **Identity & Security** then **Compartments**
-3. Give the **Name** and **Description** as below. In the **Parent Compartment** field choose either the root compartment or any other compartment to which the one we are creating now will be a child.
+ ![Navigate to OCI's Compartments page](./images/navigate-compartment.png)
+
+3. On the Compartments page, click **Create Compartment**
+
+4. Give a **Name** and **Description**; an example is given below. In the **Parent Compartment** field choose either the root compartment or the compartment your OCI administrator has indicated should be the parent.
**Name**
@@ -38,81 +42,143 @@ In this lab, you will:
Compartment for all Siebel Cloud Manager resources
- ![Create Compartment](./images/create-compartment.png)
+ ![Create a Compartment with the example values just given](./images/create-compartment.png)
+
+5. Click **Create Compartment** and the compartment will be created.
-4. Click ***Create Compartment*** and the compartment will be created.
+6. Click the new compartment that now appears in the list. You may need to click the parent compartment you selected first to see it.
-5. Note the **OCID** of this compartment.
+7. Note the **OCID** of this compartment. You may want to copy it to a digital notepad for use later.
- ![Copy the Compartment OCID](./images/copy-the-ocid.png)
+ ![Copy the allocated Compartment's OCID value](./images/copy-the-ocid.png)
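    If you prefer the command line, the OCID can also be looked up with the OCI CLI (a sketch, assuming the CLI is installed and configured; `{parent_compartment_ocid}` is a placeholder for the tenancy or parent compartment OCID you chose):

    ```
    oci iam compartment list --compartment-id {parent_compartment_ocid} \
      --query "data[?name=='SiebelCloudManager'].id | [0]" --raw-output
    ```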
-## Task 2: Create a Siebel Cloud Manager instance from the Marketplace image
+## Task 2: Create a Siebel Cloud Manager instance from the Marketplace Image
-Siebel Cloud Manager stack will create the following resources,
+The Siebel Cloud Manager Stack will create the following resources:
- - A subcompartment
- - Network resources like VCN, public/private subnets, security lists, route table and rules, dynamic group, policies, etc.
- - A compute instance with the Siebel Cloud Manager application pre-installed
+ - A sub-compartment
+ - Network resources, i.e., a VCN, public/private subnets, security lists, route tables and rules, a dynamic group, policies, etc.
+ - A compute instance with the Siebel Cloud Manager application pre-installed as a container on a Linux operating system
1. In the Oracle Cloud Console, click the ***hamburger icon***
+ ![Click the nav menu button, or hamburger icon](./images/hamburger.png)
+
2. Navigate to **Marketplace** and **All Applications**
- ![Navigate to Marketplace](./images/navigate-marketplace.png)
+ ![Navigate to Marketplace, then All Applications](./images/navigate-marketplace.png)
-3. In the search bar, type in **siebel** and hit search. Click ***Siebel Cloud Manager*** image.
+3. In the search bar, type in **siebel**. The list of available applications will change as you type. Click the card for ***Siebel Cloud Manager (SCM)***.
- ![Search Siebel](./images/search-scm.png)
+ ![Search for Siebel and click Siebel Cloud Manager](./images/search-scm.png)
4. On the page that appears, choose the latest **Version** and the compartment as **SiebelCloudManager**
-5. Click ***Launch Stack***
+5. If you wish to proceed, you must select the checkbox to confirm that you have reviewed and agree to the terms of service.
- ![Launch Stack](./images/launch-stack-Button.png)
+6. Click ***Launch Stack***
-6. On the **Create Stack** page, click ***Configure Variables***
+ ![Click 'Launch Stack'](./images/launch-stack-Button.png)
-7. Fill up the following,
+7. On the **Create Stack** page, click ***Configure Variables***
- a. **Root Compartment OCID:** This is the OCID of **SiebelCloudManager** compartment.
+ ![Click 'Configure Variables'](./images/click-configure-variables.png)
- b. **Cloud manager public ssh key:** Either upload or paste the public SSH key that we created as part of Lab 1.
+8. Fill in the following details as described below, at least on your first run through the process:
- c. **Resource prefix to name the OCI resources:** scm
+ a. **Root Compartment OCID:** This is the OCID of the **SiebelCloudManager** compartment from Task 1, Step 7. If you didn't note it somewhere easy to copy from earlier, open a second browser tab to find and copy the OCID now.
- d. Click ***Next*** and review the Stack Information and Configuration Variables. Check **Run Apply**
+ ![Populate Root Compartment OCID](./images/compartment-for-stack.png)
- e. Click ***Create***
+ b. **Cloud manager public ssh key:** Either upload or paste the public SSH key that was created as part of Lab 1. You will use the matching private key to log in via SSH.
-8. Now, we will be directed to the **Stack Details** page and we will see that our terraform apply job is running.
+ ![Select ssh key for SCM instance](./images/select-ssh-key-for-stack.png)
- The **Logs** section will show the progress of the apply job. This can be monitored to check the various resources that are getting created. In case there are any errors, they will be displayed too. After running for a while, the stack apply job's state will show **Succeeded**.
+ c. **Resource prefix to name the OCI resources:** We suggest the value ***scm*** combined with the current date in some format (for example, **scm20241001**).
-9. In the **Logs** section of the stack apply job, make a note of the following information:
- * **CloudManagerApplication**: The URL for running Siebel Cloud Manager, which uses the public IP address and port number of the newly created instance.
- The output would appear as below.
- ```
- CloudManagerApplication = "http://{Public IP of Siebel Cloud Manager}:16690/"
- ```
+ ![Populate resource prefix](./images/enter-resource-prefix.png)
+
+ d. We don't need a powerful compute shape for Siebel Cloud Manager. For the CloudManager instance configuration, start by selecting ***VM.Optimized3.Flex***. Don't worry if your list of options looks different; if in doubt, review the available shapes in the OCI documentation and select something comparable.
+
+ ![Select instance type](./images/select-instance-type.png)
+
+ e. With a flexible shape such as the one we've selected, you can tailor the number of CPUs and the amount of memory. We suggest something modest, as the workload on the SCM instance itself is not high. The minimum values are 2 CPUs and 15 GB of memory; select higher values if you expect to need more compute.
+
+ ![Select instance resource options](./images/select-scm-instance-resources.png)
+
+ f. Ensure you assign a public IP address initially. This allows easy initial access for SSH and the SCM UI. No other ports are exposed by default, and you can later restrict access to SCM's web interface to a specific IP address or range, or configure a bastion service, VPN access, and so on.
+
+ ![Assign public ip address](./images/assign-public-ip-address.png)
+
+ g. For Key Management, choose to create a new vault unless your cloud administrator has already assigned one to you, in which case enter the OCID of that vault.
+
+ ![Create a new vault or populate OCID of existing vault](./images/create-new-vault.png)
+
+ h. For the network IP configuration, we leave this blank for the lab, meaning we accept the default VCN configuration. If you're using an existing VCN in your tenancy, consult your cloud administrator on how to fill this in.
+
+ ![Leave Network IP configuration at the default value](./images/network-ip-config.png)
+
+ i. Click ***Next*** at the bottom left of the form and review the Stack Information and Configuration Variables. Note that **Run Apply** is enabled by default.
+
+ ![Review configuration](./images/click-create.png)
+
+ j. Click ***Create*** at the bottom left.
+
+
+8. We are now directed to the **Stack Details** page, where we can see that a Terraform **apply** job is running.
+
+ ![Terraform job status](./images/scm-stack-terraform-apply.png)
+
+ The **Logs** section shows the progress of the apply job and can be monitored to check the various resources being created. Any errors are displayed here too. After running for a while, the stack apply job's state should show **Succeeded**.
+
+ ![Terraform job complete](./images/scm-stack-terraform-complete.png)
+
+9. In the **Outputs** section, take note of the value for **CloudManagerApplication**. This is the URL to access your instance of Siebel Cloud Manager.
+
+ ![Terraform job outputs](./images/scm-stack-terraform-outputs.png)
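Step 8b above assumes you already created a key pair in Lab 1. If you need a fresh pair, here is a minimal sketch; the filename `scm_key` is just an example, and the empty passphrase (`-N ""`) is for brevity only — use a passphrase in practice, as the SSH transcript later in this lab does.

```shell
# Generate an example ed25519 key pair in a throwaway directory.
# The .pub file is what you paste into the stack form; keep scm_key private.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/scm_key" -C "scm-lab"
chmod 600 "$keydir/scm_key"   # private key readable only by you
cat "$keydir/scm_key.pub"     # public key to paste into the form
```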
## Task 3: Verify the Siebel Cloud Manager application
Once the stack is successfully deployed, the Siebel Cloud Manager instance can be accessed from **Compute** and **Instances** section.
-We shall now verify if the Siebel Cloud Manager application is running.
+ ![Compute - Instances](./images/compute-instances.png)
+
+ ![SCM - Instance](./images/scm-instance.png)
+
+We shall now verify that the Siebel Cloud Manager application is running.
+
+1. Connect to this instance using an SSH client (such as OpenSSH or PuTTY) with the private key matching the public key you pasted into the stack creation job above. Enter the username as **opc**.
+  - In the **Outputs** listed above, you can also use the **CloudManagedSSHConnection** value as a guide
-1. Connect to this instance through an ssh client such as PuTTY using the ssh private key that we had used to create this instance. Enter the username as **opc**
+ ```
+ [~]$ ssh -i cloudshellkey opc@***.***.***.***
+
+ The authenticity of host '***.***.***.*** (***.***.***.***)' can't be established.
+ ED25519 key fingerprint is SHA256:hygK------------------------------------------------.
+ This key is not known by any other names
+ Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+ Warning: Permanently added '***.***.***.***' (ED25519) to the list of known hosts.
+
+ Enter passphrase for key 'cloudshellkey':
+
+ [opc@scm2024****-**-siebel-cm ~]$
+ ```
2. Run the following command.
```
$ docker ps
- (The above command will give the output of the running Siebel cloud manager container)
```
- ![docker ps command output](./images/docker-ps.png)
-3. To check the response, launch the Cloud Manager Application URL that was shown in the stack apply job.
+ The above command shows the running Siebel Cloud Manager container; the output will look something like this
+
+
+ ![docker ps command output](./images/docker-ps-output.png)
+
+
+3. Now launch the Cloud Manager application using the URL shown in the stack job outputs.
+
```
CloudManagerApplication = "http://{Public IP of Siebel Cloud Manager}:16690/"
```
@@ -120,17 +186,28 @@ We shall now verify if the Siebel Cloud Manager application is running.
The above page indicates that the Siebel Cloud Manager application is up and running.
-## Summary
+4. As you may infer, this page is publicly visible to the entire internet. If you wish to constrain access, you can do so flexibly by controlling the security list for the instance's subnet. Click the instance listed above to access the instance details.
-In this lab, the Siebel Cloud Manager instance has been provisioned. In the next lab, we will install and configure a GitLab instance.
+5. Click on the subnet link.
-You may now **proceed to the next lab**.
+![Siebel Cloud Manager Instance Details](./images/scm-instance-details.png)
+6. Click on **security-list-for-cm**
+![Click on security list to configure](./images/scm-instance-subnet-details.png)
+7. Now adjust the ingress rules to your needs. While port 22 is open globally, access is limited by your SSH key. You may wish to restrict port 16690 to a specific CIDR range.
+
+![Adjust ingress rules](./images/scm-security-list-ingress.png)
+
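As a quick alternative to the browser, you can also check reachability of the SCM endpoint from the command line. A hedged sketch, with the documentation placeholder `203.0.113.10` standing in for your instance's public IP:

```shell
# Returns success only when the URL answers with HTTP 200.
# 203.0.113.10 below is a placeholder, not a real SCM host.
check_scm() {
  local code
  code=$(curl -s -o /dev/null --connect-timeout 5 -w '%{http_code}' "$1") || return 1
  [ "$code" = "200" ]
}

check_scm "http://203.0.113.10:16690/" && echo "SCM is up" || echo "SCM not reachable"
```

Running this from an address outside your allowed CIDR range should fail, which is a handy way to confirm your ingress rules took effect.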
+## Summary
+
+In this lab, the Siebel Cloud Manager instance was provisioned. In the next lab, we will install and configure a GitLab instance.
+
+You may now **proceed to the next lab**.
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Duncan Ford, Software Engineer; Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
\ No newline at end of file
+* **Last Updated By/Date** - Duncan Ford, Software Engineer, October 2024
\ No newline at end of file
diff --git a/siebel-cloud-manager/setup-gitlab/images/access-token-generated.png b/siebel-cloud-manager/setup-gitlab/images/access-token-generated.png
index 92ed1871b..9fd9566a8 100644
Binary files a/siebel-cloud-manager/setup-gitlab/images/access-token-generated.png and b/siebel-cloud-manager/setup-gitlab/images/access-token-generated.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/deployment-region.png b/siebel-cloud-manager/setup-gitlab/images/deployment-region.png
new file mode 100644
index 000000000..2a1d675c0
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/deployment-region.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/drilldown-on-gitlab-server.png b/siebel-cloud-manager/setup-gitlab/images/drilldown-on-gitlab-server.png
deleted file mode 100644
index d7b836f74..000000000
Binary files a/siebel-cloud-manager/setup-gitlab/images/drilldown-on-gitlab-server.png and /dev/null differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-access-token.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-access-token.png
index dbdc4df33..404a18c7a 100644
Binary files a/siebel-cloud-manager/setup-gitlab/images/gitlab-access-token.png and b/siebel-cloud-manager/setup-gitlab/images/gitlab-access-token.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-add-new-token.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-add-new-token.png
new file mode 100644
index 000000000..3d137fd62
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/gitlab-add-new-token.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-change-password.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-change-password.png
new file mode 100644
index 000000000..c92dd53a0
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/gitlab-change-password.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-edit-profile.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-edit-profile.png
new file mode 100644
index 000000000..1fb5fa671
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/gitlab-edit-profile.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-initial-login.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-initial-login.png
new file mode 100644
index 000000000..74f189d5d
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/gitlab-initial-login.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-profile-icon.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-profile-icon.png
deleted file mode 100644
index f9c27a5ea..000000000
Binary files a/siebel-cloud-manager/setup-gitlab/images/gitlab-profile-icon.png and /dev/null differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/gitlab-stack-information.png b/siebel-cloud-manager/setup-gitlab/images/gitlab-stack-information.png
index 0a914b2ae..cd1aaca03 100644
Binary files a/siebel-cloud-manager/setup-gitlab/images/gitlab-stack-information.png and b/siebel-cloud-manager/setup-gitlab/images/gitlab-stack-information.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/images/note-public-ip-address.png b/siebel-cloud-manager/setup-gitlab/images/note-public-ip-address.png
new file mode 100644
index 000000000..34bea9ffb
Binary files /dev/null and b/siebel-cloud-manager/setup-gitlab/images/note-public-ip-address.png differ
diff --git a/siebel-cloud-manager/setup-gitlab/setup-gitlab.md b/siebel-cloud-manager/setup-gitlab/setup-gitlab.md
index 589ab3c78..b6d28b541 100644
--- a/siebel-cloud-manager/setup-gitlab/setup-gitlab.md
+++ b/siebel-cloud-manager/setup-gitlab/setup-gitlab.md
@@ -4,7 +4,7 @@
In this lab, we will install and configure a GitLab instance.
-Siebel Cloud Manager uses GitLab to store the configuration of each deployment that it performs. Then, it will access the configuration files from GitLab to do the actual deployment.
+Siebel Cloud Manager uses GitLab to store the configuration of each deployment that it performs. Changes made to the GitLab content are automatically reflected in the deployed environment.
Estimated Time: 40 minutes
@@ -13,13 +13,12 @@ Estimated Time: 40 minutes
In this lab, you will:
* Deploy the GitLab stack
* Configure HTTPS for GitLab
-* Upgrade GitLab for enhanced security
* Generate a GitLab Access Token
### Prerequisites
* Oracle Cloud Infrastructure tenancy access
-* PuTTY Client and SSH Key
+* SSH Client and SSH Key
## Task 1: Deploy the GitLab stack
@@ -27,17 +26,25 @@ In this task, we will visit the Oracle Architecture Center to deploy the GitLab
During the stack creation, review all default values displayed. Confirm each value or enter a new value as appropriate for our task.
-1. Go to this document about deploying GitLab - [https://docs.oracle.com/en/solutions/deploy-gitlab-ci-cd-oci/index.html](https://docs.oracle.com/en/solutions/deploy-gitlab-ci-cd-oci/index.html)
+1. Click the following link to review documentation about deploying GitLab - [https://docs.oracle.com/en/solutions/deploy-gitlab-ci-cd-oci/index.html](https://docs.oracle.com/en/solutions/deploy-gitlab-ci-cd-oci/index.html)
2. In the Deploy section of this page, click the **Deploy to Oracle Cloud** link.
![In the Deploy section, click the Deploy to Oracle Cloud link](./images/deploy-to-oracle-cloud.png " ")
-3. In the stack information section, specify the compartment in which to create the GitLab stack and leave the default values for **Working Directory**, **Name**, **and Description** . Click ***Next***
+3. Check that the correct region is still selected, as the deploy link may have changed it.
+
+ ![Change deployment region](./images/deployment-region.png " ")
+
+4. In the **Stack information** section, specify the **Compartment** in which to create the GitLab stack. In the SCM provisioning lab, a new sub-compartment was automatically provisioned as part of Task 2. To keep everything together, specify that compartment here.
![GitLab Stack Information](./images/gitlab-stack-information.png " ")
-4. In **Compute Configuration** section, specify the new compartment **scm-siebel-cm** (created by the Siebel Cloud Manager stack) and the rest of the options such as **Availability Domain, Instance name, DNS Hostname Label, Flex Shape OCPUs, and Compute Image** as shown below. Leave the default values for **External URL, Tag key name, and Tag value**
+
+5. Leave the default values for **Working Directory**, **Name**, **and Description**. Click ***Next***
+
+
+6. In **Compute Configuration** section, specify the same compartment again for **Compute Compartment**. Complete the remaining options as shown below. Leave the default values for **External URL, Tag key name, and Tag value**
**Availability Domain:**
@@ -51,20 +58,29 @@ During the stack creation, review all default values displayed. Confirm each val
gitlabserver
- **Flex Shape OCPUs:**
+ **Compute Shape:**
+
+ VM.Standard.E3.Flex
- 1
**Flex Shape Memory:**
- 6
+ 6
+
+ **Flex Shape OCPUs:**
+
+ 1
**Compute Image**
- (Choose any image from the list)
+ (Choose the most recent image from the list)
+
+ **Public SSH Key string**
+
+ (Paste in (or choose) the SSH public key you used in the previous lab)
**Network Compartment**
- scm-siebel-cm
+ (Choose the same compartment you selected for the Compute Compartment above)
**Network Strategy**
@@ -72,7 +88,7 @@ During the stack creation, review all default values displayed. Confirm each val
**Existing VCN**
- (Choose the VCN that was previously created by the Siebel Cloud Manager stack)
+ (Choose the VCN that was previously created by the Siebel Cloud Manager stack - it should be the only option for the selected compartment)
**Subnet Type**
@@ -86,56 +102,59 @@ During the stack creation, review all default values displayed. Confirm each val
Use Recommended Configuration
-5. Click ***Next***
+7. Click ***Next***
-6. Verify the configuration variables. To immediately provision the resources defined in the Terraform configuration, check **Run Apply**
+8. Verify the configuration variables. To immediately provision the resources defined in the Terraform configuration, check that **Run Apply** is ticked
-7. Click ***Create***
+9. Click ***Create***
- The GitLab stack **apply** job will run successfully.
+ The GitLab stack **apply** job should run successfully.
## Task 2: Configure HTTPS for GitLab
1. On the Oracle Cloud Console page, navigate to **Compute** and **Instances**.
-2. In the **List Scope** section on the left side panel, choose our compartment **scm-siebel-cm**.
-
-3. Drill down on the instance name **gitlab-server** and note the Public IP address.
+2. In the **List Scope** section on the left side panel, choose the compartment used for the lab, **scm{date}-siebel-cm**.
- ![Drilldown on GitLab server](./images/drilldown-on-gitlab-server.png " ")
+3. Note the Public IP address.
-4. Connect to this instance through an ssh client such as PuTTY using the ssh private key that we had created in Lab 1. Enter the username as **opc**
+ ![Note public IP](./images/note-public-ip-address.png " ")
-5. After successful login, execute the following command to change to root user.
+4. Connect to this instance via SSH, or through a client such as PuTTY, using the private key created in Lab 1 and the username **opc**
```
- $ sudo su
+ $ ssh -i cloudshellkey opc@{Public IP of GitLab instance}
```
-6. In the **/etc/gitlab/gitlab.rb** file, edit the **external_url** parameter as shown below using the vi editor.
+5. You can check the installed GitLab version as follows
```
- $ vi /etc/gitlab/gitlab.rb
+ $ sudo gitlab-rake gitlab:env:info
+ ```
- external_url 'https://{Public IP of the GitLab Instance}'
+6. After successful login, execute the following command to change to root user.
+
+ ```
+ $ sudo su
```
-7. In the same file, disable the **letsencrypt** feature by setting its value to **false**
+7. In the **/etc/gitlab/gitlab.rb** file, edit the **external_url** parameter as shown below using the vi editor, then save your changes and close the file.
```
$ vi /etc/gitlab/gitlab.rb
- letsencrypt['enable'] = false
+ external_url 'https://{Public IP of the GitLab Instance}'
```
-8. Create the self-signed certificates using OpenSSL. Run the following commands one by one.
- > **Note:** The self-signed certificates are only for this lab. For a real-world implementation, use Certificate Authority (CA) signed certificates for security reasons.
+8. Create a self-signed certificate using OpenSSL. Run the following commands one by one to begin the process.
+
+ > **Note:** Self-signed certificates are suggested only for this lab. In production, use Certificate Authority (CA) signed certificates. See [GitLab's documentation on troubleshooting SSL issues](https://docs.gitlab.com/omnibus/settings/ssl/ssl_troubleshooting.html)
```
- $ sudo mkdir -p /etc/gitlab/ssl
+ $ mkdir -p /etc/gitlab/ssl
```
```
- $ sudo chmod 755 /etc/gitlab/ssl
+ $ chmod 755 /etc/gitlab/ssl
```
```
$ cd /etc/gitlab/ssl
@@ -152,13 +171,20 @@ During the stack creation, review all default values displayed. Confirm each val
```
$ openssl req -new -key {Public IP of GitLab instance}.key -out {Public IP of GitLab instance}.csr -subj "/CN=localhost"
```
-9. Create a configuration file named **device-csr.conf** under **/etc/gitlab/ssl** directory with the following content.
+9. Create a configuration file named **device-csr.conf**
+
+ ```
+ $ vi device-csr.conf
+ ```
+
+10. Populate this file with the following content, then save and close the file
```
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
+
[req_distinguished_name]
C = US
ST = UT
@@ -166,42 +192,54 @@ During the stack creation, review all default values displayed. Confirm each val
O = Oracle
OU = Corp
CN = localhost
+
[v3_req]
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
+
[alt_names]
IP.1 = {Public IP of GitLab instance}
```
-10. After the file **device-csr.conf** is created, run the following command.
+
+11. Run the following command to create the self-signed certificate
```
$ openssl x509 -req -in {Public IP of GitLab instance}.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out {Public IP of GitLab instance}.crt -extfile device-csr.conf -extensions v3_req -days 365
```
-11. Reconfigure GitLab by running the below command.
+
+12. Reconfigure GitLab by running the below command.
```
- $ sudo gitlab-ctl reconfigure
+ $ gitlab-ctl reconfigure
```
-12. Copy the **rootCA.crt** from **/etc/gitlab/ssl** folder to the Siebel Cloud Manager instance's **/home/opc/certs** folder.
-## Task 3: Upgrade GitLab for enhanced security
+13. Prepare to copy **rootCA.crt** from the GitLab machine to the SCM machine by creating a file on the GitLab machine containing the SSH private key.
-To avoid **CVE-2021-22205** vulnerability, we shall now upgrade GitLab.
+ ```
+ $ vi ~/cloudshellkey
+ ```
+14. Paste in the private key you created earlier then save the file and close it.
-1. In the GitLab instance's terminal, run the following command.
+15. Change the permissions on the private key so that it is not readable by other users
```
- $ sudo yum install -y gitlab-ee-13.8.8-ee.0.el8
+ $ chmod 600 ~/cloudshellkey
```
-2. Once the above command finishes upgrading GitLab, restart GitLab by executing the below command.
+16. Copy the **rootCA.crt** file from the **/etc/gitlab/ssl** folder to the Siebel Cloud Manager instance's **/home/opc/certs** folder.
```
- $ sudo gitlab-ctl restart
+ $ scp -i ~/cloudshellkey rootCA.crt opc@{Public IP address for SCM instance}:/home/opc/certs
```
+## Task 3: Plan to keep your GitLab instance up to date
+
+1. Review [GitLab's documentation for updating](https://docs.gitlab.com/ee/update/)
+
+2. Keep an eye on [GitLab's releases page](https://about.gitlab.com/releases/categories/releases/) to stay aware of new releases, the issues they fix, and the features they offer
+
## Task 4: Generate a GitLab Access Token
We need to generate a GitLab Access Token that will be used in Lab 4 where we deploy Siebel CRM.
@@ -211,28 +249,35 @@ We need to generate a GitLab Access Token that will be used in Lab 4 where we de
```
https://{Public IP of GitLab Instance}
```
+
> **Note:** Sometimes, we might encounter the **502 Error** upon launching the GitLab URL and refreshing the page at the time should display the right content.
-
-2. Give a new password for the **root** user per the prompt, confirm the password, and click ***Change your password***.
+ > **Note:** As we're using a self-signed certificate, your browser will likely warn you and ask for explicit confirmation before proceeding despite the security risk.
- ![GitLab New Password](./images/gitlab-new-password.png)
+2. Obtain the initial root user password, randomly generated during installation, from the GitLab machine.
+
+ ```
+ $ ssh -i cloudshellkey opc@{Public IP of GitLab instance}
+ ```
+ ```
+ $ sudo more /etc/gitlab/initial_root_password
+ ```
-3. We will be prompted to log in to GitLab. Enter the following credentials and click ***Sign in***.
+2. Log in as **root** with the initial root password.
- **Username**
+ ![GitLab Login](./images/gitlab-initial-login.png)
- root
+3. If you wish to change the root password from the default, now is a good time; the file above is deleted 24 hours after initial setup. Click on your user avatar and select **Edit Profile**
- **Password**
+ ![GitLab Edit Profile](./images/gitlab-edit-profile.png)
- {The new password that was set earlier}
+4. Now click the **Password** option on the left menu, and fill in the form to change the password, then click **Save password**.
-4. After logging in, click the ***Profile Icon*** in the right-hand top corner and click ***Settings***.
+ ![GitLab Edit Profile](./images/gitlab-change-password.png)
- ![GitLab Profile Icon](./images/gitlab-profile-icon.png)
+5. In the left side panel, navigate to ***Access Tokens*** and then click **Add new token** on the right hand side.
-5. In the left side panel, navigate to ***Access Tokens*** page.
+ ![GitLab Profile Icon](./images/gitlab-add-new-token.png)
6. Give the following values for the respective fields.
@@ -242,7 +287,7 @@ We need to generate a GitLab Access Token that will be used in Lab 4 where we de
**Expires at:**
- {Give a distant future date}
+ {Give a future date not more than a year away}
**Scope:**
@@ -250,9 +295,9 @@ We need to generate a GitLab Access Token that will be used in Lab 4 where we de
![GitLab Access Token](./images/gitlab-access-token.png)
-7. Click ***Create Personal Access Token***.
+7. Scroll down and click ***Create Personal Access Token***.
-8. Note the token displayed in the **Your new personal access token** field.
+8. Note the token displayed in the **Your new personal access token** field (click the eye icon to see, or the copy icon to copy to the clipboard).
![Access Token Generated](./images/access-token-generated.png)
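Before moving on, you may want to confirm the token works. A hedged sketch using the standard GitLab REST API (v4): the `/api/v4/user` endpoint returns the account that owns the token. The IP and token values below are placeholders, and `-k` skips verification of our self-signed certificate.

```shell
# Hypothetical values: substitute your instance IP and the token from step 8.
GITLAB_URL="https://203.0.113.10"
GITLAB_TOKEN="glpat-xxxxxxxxxxxxxxxxxxxx"

# Succeeds only if the API answers HTTP 200 for this token.
token_ok() {
  local code
  code=$(curl -k -s -o /dev/null --connect-timeout 5 \
         --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
         -w '%{http_code}' "$GITLAB_URL/api/v4/user") || return 1
  [ "$code" = "200" ]
}

token_ok && echo "token valid" || echo "token check failed"
```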
@@ -264,6 +309,6 @@ You may now **proceed to the next lab**.
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Duncan Ford, Software Engineer; Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
\ No newline at end of file
+* **Last Updated By/Date** - Duncan Ford, Software Engineer, October 2024
\ No newline at end of file
diff --git a/siebel-cloud-manager/teardown-scm/images/oci-developer-services-stacks.png b/siebel-cloud-manager/teardown-scm/images/oci-developer-services-stacks.png
new file mode 100644
index 000000000..994f1f4f8
Binary files /dev/null and b/siebel-cloud-manager/teardown-scm/images/oci-developer-services-stacks.png differ
diff --git a/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy-pane.png b/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy-pane.png
new file mode 100644
index 000000000..2b56cb190
Binary files /dev/null and b/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy-pane.png differ
diff --git a/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy.png b/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy.png
new file mode 100644
index 000000000..d33dfeb2e
Binary files /dev/null and b/siebel-cloud-manager/teardown-scm/images/oci-stack-destroy.png differ
diff --git a/siebel-cloud-manager/teardown-scm/images/scm-deployment-stacks.png b/siebel-cloud-manager/teardown-scm/images/scm-deployment-stacks.png
new file mode 100644
index 000000000..c0f54b588
Binary files /dev/null and b/siebel-cloud-manager/teardown-scm/images/scm-deployment-stacks.png differ
diff --git a/siebel-cloud-manager/teardown-scm/teardown-scm.md b/siebel-cloud-manager/teardown-scm/teardown-scm.md
index 4a2477142..418a4e31d 100644
--- a/siebel-cloud-manager/teardown-scm/teardown-scm.md
+++ b/siebel-cloud-manager/teardown-scm/teardown-scm.md
@@ -13,31 +13,37 @@ Estimated Time: 20 minutes
### Prerequisites
* Oracle Cloud Infrastructure tenancy access
-* User with 'manage' access to **SiebelCloudManager** and **scm-siebel-cm** compartments
+* User with 'manage' access to **SiebelCloudManager** and **scm{date}-siebel-cm** compartments
## Task 1: Destroy Siebel CRM environment Stack
1. From the Oracle Cloud Console, navigate to **Developer Services** and **Stacks**.
-2. In the **List Scope** section on the left side panel, choose **siebellab_compartment**.
+ ![OCI Menu - Stacks](./images/oci-developer-services-stacks.png)
-3. From the **Stack List**, drill down on the stack name. The stack name would be of the below format.
+2. In the **List Scope** section on the left side panel, find and select **scm{date}-siebel-cm**.
+
+ ![SCM Deployment - Stacks](./images/scm-deployment-stacks.png)
+
+3. From the list, drill down on the **SiebelLab** stack name. The stack name will be in the following format.
```
Siebel_siebellab_{timestamp}
```
- ![Stack Drilldown](./images/stack-drilldown.png)
-
4. On the **Stack Details** page, click ***Destroy***.
-5. On the **Destroy** page, leave the default **Job** name and click ***Destroy***. The Destroy job will run for a while and succeed.
+ ![SCM Stack - Destroy](./images/oci-stack-destroy.png)
+
+5. On the **Destroy** pane, leave the default **Job** name and click ***Destroy***. The Destroy job will run for a while and should succeed.
+
+ ![SCM Stack Destroy - Pane](./images/oci-stack-destroy-pane.png)
## Task 2: Destroy GitLab instance
1. From the Oracle Cloud Console, navigate to **Developer Services** and **Stacks**.
-2. In the **List Scope** section on the left side panel, choose **scm-siebel-cm**.
+2. In the **List Scope** section on the left side panel, choose **scm{date}-siebel-cm**.
3. From the **Stack List**, drill down on the GitLab stack name.
@@ -53,12 +59,13 @@ Estimated Time: 20 minutes
4. Run **Destroy** job for this stack too as done earlier. This job will run for a while and succeed.
+
## Summary
-In this lab, we have destroyed all the resources that were created for this workshop.
+In this lab, we destroyed nearly all the resources created for this workshop. You could also delete the compartment and the vault, but you may wish to reuse them if you are going to repeat the lab straight away to gain more comfort with the process.
## Acknowledgements
-* **Author:** Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
+* **Author:** Duncan Ford, Software Engineer; Shyam Mohandas, Principal Cloud Architect; Sampath Nandha, Principal Cloud Architect
* **Contributors** - Vinodh Kolluri, Raj Aggarwal, Mark Farrier, Sandeep Kumar
-* **Last Updated By/Date** - Sampath Nandha, Principal Cloud Architect, March 2023
\ No newline at end of file
+* **Last Updated By/Date** - Duncan Ford, Software Engineer, October 2024
\ No newline at end of file
diff --git a/siebel-cloud-manager/workshops/sandbox/manifest.json b/siebel-cloud-manager/workshops/sandbox/manifest.json
index 0b37be7f8..1af25f40f 100644
--- a/siebel-cloud-manager/workshops/sandbox/manifest.json
+++ b/siebel-cloud-manager/workshops/sandbox/manifest.json
@@ -10,7 +10,7 @@
{
"title": "Get Started",
"description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable).",
- "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login-livelabs2.md"
},
{
"title": "Lab 1: Create SSH Keys Using Oracle Cloud Shell",
diff --git a/siebel-cloud-manager/workshops/tenancy/manifest.json b/siebel-cloud-manager/workshops/tenancy/manifest.json
index 2ea983b30..32478e800 100644
--- a/siebel-cloud-manager/workshops/tenancy/manifest.json
+++ b/siebel-cloud-manager/workshops/tenancy/manifest.json
@@ -10,8 +10,8 @@
{
"title": "Get Started",
"description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable).",
- "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login-livelabs2.md"
- },
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+ },
{
"title": "Lab 1: Create SSH Keys Using Oracle Cloud Shell",
"description": "Create SSH keys for use with this lab",