From 6bc8b2d89c529efc00e4c89243b2525bfbb3ce0d Mon Sep 17 00:00:00 2001 From: MoritzWeber Date: Tue, 26 Sep 2023 14:15:13 +0200 Subject: [PATCH 1/6] docs: Add installation guide to mkdocs There is also a less detailed installation guide in the README. These instructions should be merged in a later PR. --- docs/user/docs/installation.md | 172 +++++++++++++++++++++++++++++++++ docs/user/mkdocs.yml | 4 + 2 files changed, 176 insertions(+) create mode 100644 docs/user/docs/installation.md diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md new file mode 100644 index 000000000..a4d3cd0b7 --- /dev/null +++ b/docs/user/docs/installation.md @@ -0,0 +1,172 @@ + + +# Install the Collaboration Manager + +This guide will help you set up the Capella Collaboration Manager on a +Kubernetes cluster. The setup of the basic installation is straightforward, but +we'll also delve into the more complex TeamForCapella support that requires +building custom Docker images. + +During development, we also took into account that the application can be +installed in highly restricted environments. An internet connection is not +necessarily required. + +## Step 1: Set up a Kubernetes cluster + +Kubernetes allows us to make operations as simple as possible later on. Updates +can be fully automated. In addition, Kubernetes allows us to ensure a secure +operation through standardized security hardening. + +You can use an existing cloud service to create a Kubernetes cluster. We have +running production deployments on Microsoft AKS and Amazon EKS. The application +is designed in such a way that no cluster scope is necessary. All operations +run at the namespace level, so it even runs in shared OpenShift clusters. But +even if you simply have a Linux server at your disposal, this is no obstacle. +Setting up a cluster is easier than you think. + +If you already have a running cluster, have `kubectl` up and running and can +reach the cluster, then you can skip this step. 
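Before continuing, it helps to confirm that `kubectl` can actually reach the cluster and that your user is allowed to create namespaces. The commands below are a suggested sanity check, not part of the official guide; the `awk` summary is just one way to count unready nodes:

```zsh
# Confirm connectivity: prints the control-plane endpoint if the cluster is reachable.
kubectl cluster-info

# Confirm permissions: the installing user should be able to create namespaces.
kubectl auth can-i create namespaces

# Count nodes that are not "Ready" (STATUS is the second column of `kubectl get nodes`).
not_ready=$(kubectl get nodes --no-headers | awk '$2 != "Ready" {c++} END {print c+0}')
echo "Nodes not ready: $not_ready"
```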
+ +We provide instructions for some environments. If you set up the application in +a different environment, we would be happy to receive a PR to help other users +in the future. + +=== "microK8s" + + !!! info + We have tested the instructions with Ubuntu Server 22.04. + + 1. Run steps 1-4 of the official microK8s [`Getting started`](https://microk8s.io/docs/getting-started) guide. + + 2. Enable all required add-ons: + ```zsh + microk8s enable hostpath-storage # For workspace storage + microk8s enable rbac # For role-based access control + microk8s enable ingress # For load balancing + ``` + 3. If you don't have any external registry available and TeamForCapella support is required, enable the registry: + ```zsh + microk8s enable registry + ``` + 4. Copy the `kubectl` configuration to the host, so that `helm` can pick it up: + ```zsh + mkdir -p $HOME/.kube + microk8s config > $HOME/.kube/config + chmod 600 $HOME/.kube/config # Nobody else should be able to read the configuration + ``` + +=== "k3d" + + We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com. + +=== "OpenShift" + + We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com. + +## Step 2: Set up the required namespaces (optional) + +The Collaboration Manager requires two different namespaces. For security and +overview reasons, they are separated: + + +- Capella Collaboration Manager control namespace: In this namespace, we run + the core application. It has full control over the sessions namespace and consists of the following services: + - Frontend + - Backend + - Documentation + - Guacamole + - Prometheus + - Grafana (Loki), can be disabled in the `values.yaml` + +- Sessions namespace. 
The namespace is controlled by the control namespace and you won't need to touch it. In the session namespace, the following services run:
  - Storage for persistent workspaces
  - Storage for Jupyter file-shares
  - Pipeline jobs for nightly TeamForCapella to Git synchronisation
  - Session containers (Capella, Papyrus, Jupyter, pure::variants)

1. Create the two required namespaces:

   ```zsh
   kubectl create namespace collab-manager # If you use another name, please update the following commands and use your namespace name.
   kubectl create namespace collab-sessions # If you use another name, please update the `values.yaml` accordingly.
   ```

2. Set the `collab-manager` as default namespace in the default context (optional):

   ```zsh
   kubectl config set-context --current --namespace=collab-manager
   ```

## Step 3: Install helm

Follow the official instructions to install Helm:
[Installing helm](https://helm.sh/docs/intro/install/)

Verify that `helm` is working by executing the command:

```zsh
helm version
```

## Step 4: Clone the GitHub repository

Navigate to a persistent location on your server, e.g. `/opt`. Then clone the
GitHub repository by running:

```zsh
git clone https://github.com/DSD-DBS/capella-collab-manager.git
```

## Step 5: Configure the environment / Create the `values.yaml`

Copy the
[`values.yaml`](https://github.com/DSD-DBS/capella-collab-manager/blob/main/helm/values.yaml)
to a persistent and secure location on your server or deployment environment.
The `local` directory in the Collaboration Manager is gitignored. We recommend
putting the custom `values.yaml` in this directory.

Adjust all values according to your needs.
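One possible way to stage the file — the paths are illustrative and assume the repository was cloned to `/opt` as suggested above; only the `local` directory and the restrictive permissions follow from the text:

```zsh
# Keep the customized values.yaml inside the gitignored local/ directory
# and restrict it to the current user, since it will contain secrets.
mkdir -p /opt/capella-collab-manager/local
cp /opt/capella-collab-manager/helm/values.yaml /opt/capella-collab-manager/local/values.yaml
chmod 600 /opt/capella-collab-manager/local/values.yaml
```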

## Step 6: Install the application in the cluster

Run the following commands in the root directory of the repository:

```zsh
helm dependency update ./helm
helm upgrade --install \
    --namespace collab-manager \
    --values <path-to-your-values.yaml> \
    prod \
    ./helm
```

## Step 7: Initialize the Guacamole database

The Guacamole database is not initialized by default; it has to be done
manually. Run the following command to initialize the PostgreSQL database:

```zsh
kubectl exec --container prod-guacamole-guacamole deployment/prod-guacamole-guacamole -- /opt/guacamole/bin/initdb.sh --postgresql | \
  kubectl exec -i deployment/prod-guacamole-postgres -- psql -U guacamole guacamole
```

## Step 8: Check the application status

Run `kubectl get pods` to see the status of all components. Once all containers
are running, verify the installation state by running:

```zsh
curl http://localhost/api/v1/health/general
```

It should return the following JSON:

```json
{ "guacamole": true, "database": true, "operator": true }
```

If a value is false, check the backend logs for more information.
diff --git a/docs/user/mkdocs.yml b/docs/user/mkdocs.yml
index ad46e5b34..f03ef9a31 100644
--- a/docs/user/mkdocs.yml
+++ b/docs/user/mkdocs.yml
@@ -4,8 +4,12 @@ site_name: Capella Collaboration Manager Documentation

theme:
  name: material
+  features:
+    - content.code.copy
+
nav:
  - Introduction: index.md
+  - Installation: installation.md
  - Projects:
      - Get access to a project: projects/access.md
      - Add a user to a project: projects/add-user.md

From 13385e7f64fb6a3c32c1edc6478e78429ab0d1ec Mon Sep 17 00:00:00 2001
From: MoritzWeber
Date: Tue, 26 Sep 2023 17:06:18 +0200
Subject: [PATCH 2/6] docs: Move installation instructions to user docs

The resource section was added to the user documentation.
The installation part of the README was moved to the user docs.
--- README.md | 38 +--------------------------------- docs/user/docs/installation.md | 12 +++++++++++ 2 files changed, 13 insertions(+), 37 deletions(-) diff --git a/README.md b/README.md index 2bef6ec96..089b2c3a1 100644 --- a/README.md +++ b/README.md @@ -144,43 +144,7 @@ running in a few minutes. ### Deployment -### Install/Upgrade on a cluster - -1. Ensure your `kubectl` configuration points to the right cluster -1. Make sure that your cluster has enough resources. - The minimum required resources are 3 [Kubernetes CPU cores](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu) - and around 2,5GiB of memory for the management platform. - Depending on the load, the instance can scale up and is limited to 10 Kubernetes CPU cores cores and ~8GiB of memory. - - Each session requires a minimum of 0.4 Kubernetes CPU cores and 1.6Gi of memory. - A session can scale up until it reaches 2 Kubernetes CPU cores and 6Gi of memory. - -1. The setup requires at least one Docker container registry, which has to be accessible from the cluster. - All images have to be pushed to the registry before continuing. - -1. Copy `helm/values.yaml` to `deployments/yourinstance.values.yaml` and - set all required values in the `deployments/yourinstance.values.yaml` configuration file. - - If you're upgrading the instance, please make sure to compare the changes in the `values.yaml` before continuing. - -1. If it doesn't exist yet, create namespace for the sessions in your kubernetes cluster: - - ```sh - kubectl create namespace - ``` - -1. Run the following command to deploy to your kubernetes cluster: - - ```sh - helm upgrade --install production -n -f deployments/yourinstance.values.yaml helm - ``` - -1. Set up the database for guacamole: [Initializing the PostgreSQL database](https://guacamole.apache.org/doc/gug/guacamole-docker.html#initializing-the-postgresql-database) -1. 
Verify the status of all pods with
-
-   ```zsh
-   kubectl -n <namespace> get pods
-   ```
+The Installation Guide has been moved to the [general documentation](https://capella-collaboration-manager.readthedocs.io/en/latest/installation/).

### Uninstall the environment

diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md
index a4d3cd0b7..8522e421c 100644
--- a/docs/user/docs/installation.md
+++ b/docs/user/docs/installation.md
@@ -66,6 +66,18 @@ in the future.

    We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com.

+## Step 2: Validate the available resources
+
+The minimum required resources are 3
+[Kubernetes CPU cores](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
+and around 2.5 GiB of memory for the management platform. Depending on the load,
+the instance can scale up and is limited to 10 Kubernetes CPU cores and
+~8 GiB of memory.
+
+Each session requires a minimum of 0.4 Kubernetes CPU cores and 1.6Gi of
+memory. A session can scale up until it reaches 2 Kubernetes CPU cores and 6Gi
+of memory.
+
## Step 2: Set up the required namespaces (optional)

The Collaboration Manager requires two different namespaces. For security and

From 76d67bc2972c8e34ed2c6370db32556521f482ad Mon Sep 17 00:00:00 2001
From: MoritzWeber
Date: Tue, 26 Sep 2023 17:27:50 +0200
Subject: [PATCH 3/6] docs: Add installation instructions for T4C client

---
 docs/user/docs/installation.md | 67 ++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 7 deletions(-)

diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md
index 8522e421c..d7b6bc342 100644
--- a/docs/user/docs/installation.md
+++ b/docs/user/docs/installation.md
@@ -50,6 +50,7 @@ in the future.
3.
If you don't have any external registry available and TeamForCapella support is required, enable the registry:
   ```zsh
   microk8s enable registry
+  export DOCKER_REGISTRY=localhost:32000
   ```
4. Copy the `kubectl` configuration to the host, so that `helm` can pick it up:
   ```zsh
   mkdir -p $HOME/.kube
   microk8s config > $HOME/.kube/config
   chmod 600 $HOME/.kube/config # Nobody else should be able to read the configuration
   ```
@@ -78,7 +79,7 @@ Each session requires a minimum of 0.4 Kubernetes CPU cores and 1.6Gi of
memory. A session can scale up until it reaches 2 Kubernetes CPU cores and 6Gi
of memory.

-## Step 2: Set up the required namespaces (optional)
+## Step 3: Set up the required namespaces (optional)

The Collaboration Manager requires two different namespaces. For security and
overview reasons, they are separated:
@@ -113,7 +114,7 @@ overview reasons, they are separated:
    kubectl config set-context --current --namespace=collab-manager
    ```

-## Step 3: Install helm
+## Step 4: Install helm

Follow the official instructions to install Helm:
[Installing helm](https://helm.sh/docs/intro/install/)
@@ -124,7 +125,7 @@ Verify that `helm` is working by executing the command:
helm version
```

-## Step 4: Clone the Github repository
+## Step 5: Clone the GitHub repository

Navigate to a persistent location on your server, e.g. `/opt`. Then clone the
GitHub repository by running:
@@ -133,7 +134,7 @@ Github repository by running:
git clone https://github.com/DSD-DBS/capella-collab-manager.git
```

-## Step 5: Configure the environment / Create the `values.yaml`
+## Step 6: Configure the environment / Create the `values.yaml`

Copy the
[`values.yaml`](https://github.com/DSD-DBS/capella-collab-manager/blob/main/helm/values.yaml)
to a persistent and secure location on your server or deployment environment.
The `local` directory in the Collaboration Manager is gitignored. We recommend
putting the custom `values.yaml` in this directory.

+Make sure to set restrictive permissions on the `values.yaml`:
+
+```zsh
+chmod 600 values.yaml
+```
+
Adjust all values according to your needs.
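Before running the actual `helm upgrade`, the chart can be rendered offline to catch mistakes in the customized values early. This is only a suggested check, not part of the official instructions, and the `local/values.yaml` path is an assumption:

```zsh
# Render the chart locally without touching the cluster; a non-zero exit code
# points to a problem in the values file or the templates.
helm dependency update ./helm
helm template ./helm --values local/values.yaml > /dev/null \
  && echo "values render cleanly"
```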
-## Step 6: Install the application in the cluster
+## Step 7: Install the application in the cluster

Run the following commands in the root directory of the repository:
@@ -156,7 +163,7 @@ helm upgrade --install \
    ./helm
```

-## Step 7: Initialize the Guacamole database
+## Step 8: Initialize the Guacamole database

The Guacamole database is not initialized by default; it has to be done
manually. Run the following command to initialize the PostgreSQL database:
@@ -166,7 +173,7 @@ kubectl exec --container prod-guacamole-guacamole deployment/prod-guacamole-guac
  kubectl exec -i deployment/prod-guacamole-postgres -- psql -U guacamole guacamole
```

-## Step 8: Check the application status
+## Step 9: Check the application status

Run `kubectl get pods` to see the status of all components. Once all containers
are running, verify the installation state by running:
@@ -182,3 +189,49 @@ It should return the following JSON:
```

If a value is false, check the backend logs for more information.
+
+## Step 10: Add TeamForCapella support
+
+!!! info "TeamForCapella server required"
+    The setup of the TeamForCapella server and license server itself will
+    not be part of this tutorial. To proceed, you'll need to have a running and
+    reachable TeamForCapella server.
+
+!!! info "Container registry required"
+    For the TeamForCapella support, you'll need to build your own Docker images. In order to use them in the cluster, an external or internal container registry is required.
+
+1. Install [GNU make](https://www.gnu.org/software/make/manual/make.html) >=
+   3.82
+1. Navigate to the root of the capella-collab-manager repository.
+1. Clone the capella-dockerimages repository:
+   ```zsh
+   git clone https://github.com/DSD-DBS/capella-dockerimages
+   ```
+1.
Prepare the `capella/base` and `t4c/client/base` images according to the
+   Capella Docker images documentation (only the preparation section is
+   needed):
+
+   - [`capella/base`](https://dsd-dbs.github.io/capella-dockerimages/capella/base/#preparation)
+   - [`t4c/client/base`](https://dsd-dbs.github.io/capella-dockerimages/capella/t4c/base/#preparation)
+
+1. Set the following environment variables:
+
+   ```zsh
+   export PUSH_IMAGES=1 # Auto-push images to the container registry after build
+   export DOCKER_REGISTRY=<registry> # Location of your remote or local container registry
+   export CAPELLA_BUILD_TYPE=offline # Don't download Capella during each build
+   export CAPELLA_VERSIONS="5.2.0 6.0.0 6.1.0" # Space-separated list of Capella versions to build
+   ```
+
+1. Then, build the `t4c/client/remote` images (the ones that we'll use in the
+   Collaboration Manager):
+
+   ```zsh
+   make t4c/client/remote
+   ```
+
+1. In the Collaboration Manager UI, change the Docker image of the tool to
+   `<registry>/t4c/client/remote:<capella-version>-latest`

From ff727d590ceea9cad3b70867d4ba66fbe7186d4c Mon Sep 17 00:00:00 2001
From: MoritzWeber
Date: Wed, 27 Sep 2023 08:56:25 +0200
Subject: [PATCH 4/6] docs: Add instructions to change Guacamole password

...to the installation guide.
---
 docs/user/docs/installation.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md
index d7b6bc342..58df167cf 100644
--- a/docs/user/docs/installation.md
+++ b/docs/user/docs/installation.md
@@ -173,6 +173,21 @@ kubectl exec --container prod-guacamole-guacamole deployment/prod-guacamole-guac
  kubectl exec -i deployment/prod-guacamole-postgres -- psql -U guacamole guacamole
```

+After the initialization, the Guacamole password defaults to `guacadmin`. We
+have to change it to a more secure password:
+
+1. Open the Guacamole web interface and log in with `guacadmin` /
+   `guacadmin`.
+1. Click on the `guacadmin` user at the top-right corner of the screen, then
+   select "Settings".
+1.
Select the tab "Preferences".
+1. In the "Change password" section, enter `guacadmin` as the current password.
+   Generate a secure password and enter it for "New password" and confirm it.
+   Then, click "Update password".
+1. Log out and verify that the combination `guacadmin` / `guacadmin` no longer
+   works.
+1. Update the key `guacamole.password` in the `values.yaml` and repeat step 7.

## Step 9: Check the application status

Run `kubectl get pods` to see the status of all components. Once all containers

From 2a9753bc61a2c99ded2502158ee8196e4e9034e8 Mon Sep 17 00:00:00 2001
From: MoritzWeber
Date: Wed, 27 Sep 2023 16:48:02 +0200
Subject: [PATCH 5/6] docs: Add instructions for microK8s NFS

---
 docs/user/docs/installation.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md
index 58df167cf..32bffcb2a 100644
--- a/docs/user/docs/installation.md
+++ b/docs/user/docs/installation.md
@@ -43,7 +43,7 @@ in the future.

    2. Enable all required add-ons:
       ```zsh
-      microk8s enable hostpath-storage # For workspace storage
+      microk8s enable hostpath-storage # For persistent storage
       microk8s enable rbac # For role-based access control
       microk8s enable ingress # For load balancing
       ```
@@ -58,6 +58,13 @@ in the future.
       microk8s config > $HOME/.kube/config
       chmod 600 $HOME/.kube/config # Nobody else should be able to read the configuration
       ```
+    5. Optional, but recommended: Set up an NFS for workspaces and Jupyter file-shares.
+       The default `hostpath-storage` of microK8s doesn't enforce the specified capacity on PVCs.
+       This can be exploited by a user uploading so much data to their workspace that
+       the server runs out of disk space.
+
+       Please follow the official instructions:
+       Make sure to update the storageClass in the `values.yaml` in step 6 to `nfs-csi`.

=== "k3d"

@@ -239,6 +246,7 @@ If a value is false, check the backend logs for more information.
   export DOCKER_REGISTRY=<registry> # Location of your remote or local container registry
   export CAPELLA_BUILD_TYPE=offline # Don't download Capella during each build
   export CAPELLA_VERSIONS="5.2.0 6.0.0 6.1.0" # Space-separated list of Capella versions to build
+  export CAPELLA_DROPINS="" # Comma-separated list of dropins

1. Then, build the `t4c/client/remote` images (the ones that we'll use in the

From 2dda1d46de1015660ed17ab0e77ae79d17321fdc Mon Sep 17 00:00:00 2001
From: MoritzWeber
Date: Thu, 28 Sep 2023 11:38:41 +0200
Subject: [PATCH 6/6] docs: Add instructions for NFS user mapping

---
 README.md                      |  2 +-
 docs/user/docs/installation.md | 44 ++++++++++++++++++++++++----------
 2 files changed, 32 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index 089b2c3a1..91ee814b3 100644
--- a/README.md
+++ b/README.md
@@ -144,7 +144,7 @@ running in a few minutes.

### Deployment

-The Installation Guide has been moved to the [general documentation](https://capella-collaboration-manager.readthedocs.io/en/latest/installation/).
+You can find the installation guide for deployment in the [general documentation](https://capella-collaboration-manager.readthedocs.io/en/latest/installation/).

### Uninstall the environment

diff --git a/docs/user/docs/installation.md b/docs/user/docs/installation.md
index 32bffcb2a..b10a7110b 100644
--- a/docs/user/docs/installation.md
+++ b/docs/user/docs/installation.md
@@ -3,7 +3,7 @@
 ~ SPDX-License-Identifier: Apache-2.0 -->

-# Install the Collaboration Manager
+# Installation of the Collaboration Manager

This guide will help you set up the Capella Collaboration Manager on a
Kubernetes cluster. The setup of the basic installation is straightforward, but
@@ -24,15 +24,15 @@
You can use an existing cloud service to create a Kubernetes cluster.
We have
running production deployments on Microsoft AKS and Amazon EKS. The application
is designed in such a way that no cluster scope is necessary. All operations
run at the namespace level, so it even runs in shared OpenShift clusters. But
-even if you simply have a Linux server at your disposal, this is no obstacle.
-Setting up a cluster is easier than you think.
+even if you simply have a Linux server at your disposal, this is no obstacle.

If you already have a running cluster, have `kubectl` up and running and can
reach the cluster, then you can skip this step.

We provide instructions for some environments. If you set up the application in
-a different environment, we would be happy to receive a PR to help other users
-in the future.
+a different environment, please document the installation and obstacles that
+you find, and we would be happy to receive a PR to help other users in the
+future.

=== "microK8s"

@@ -63,16 +63,34 @@
       This can be exploited by a user uploading so much data to their workspace that
       the server runs out of disk space.

-       Please follow the official instructions:
-       Make sure to update the storageClass in the `values.yaml` in step 6 to `nfs-csi`.
+       Please follow the official instructions:
+
- Make sure to update the storageClass in the `values.yaml` in step 6 to `nfs-csi`. + Please follow the official instructions: . + + Make sure to update the `backend.storageClassName` in the `values.yaml` in step 6 to `nfs-csi`. + All new Jupyter file-shares and personal workspaces will use the new storage class then. + + !!! warning "User mapping for non-root containers" + If you want to run the session containers as non-root, you can set the `runAsUser` value in the `podSecurityContext` of the values.yaml. + In the default configuration, `runAsUser` is set to `1004370000`. + + Unfortunately our setup NFS does not respect the `fsGroup` option. Therefore, all volumes are mounted with `nobody:nogroup` per default. + This will lead to permission errors and crashing session containers. + + To fix it, change the `/etc/exports` file and modify the options for the create file-share to: + ``` + (rw,sync,no_subtree_check,all_squash,anonuid=,anongid=0) + ``` + + Replace `` with the value of the `runAsUser` value of the Kubernetes Pod security context. + + Then, apply the new configuration by running `exportfs -ra`. === "k3d" - We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com. + We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com or open an issue in this repository. === "OpenShift" - We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com. + We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com or open an issue in this repository. 
## Step 2: Validate the available resources

@@ -172,12 +190,12 @@ helm upgrade --install \

## Step 8: Initialize the Guacamole database

-The Guacamole database is not initialized by default; it has to be done
-manually. Run the following command to initialize the PostgreSQL database:
+The Guacamole database is not initialized automatically. Run the following
+command to initialize the PostgreSQL database:

```zsh
-kubectl exec --container prod-guacamole-guacamole deployment/prod-guacamole-guacamole -- /opt/guacamole/bin/initdb.sh --postgresql | \
-  kubectl exec -i deployment/prod-guacamole-postgres -- psql -U guacamole guacamole
+kubectl exec --container <release-name>-guacamole-guacamole deployment/<release-name>-guacamole-guacamole -- /opt/guacamole/bin/initdb.sh --postgresql | \
+  kubectl exec -i deployment/<release-name>-guacamole-postgres -- psql -U guacamole guacamole
```

After the initialization, the Guacamole password defaults to `guacadmin`. We