The image builder template builds the following images and pushes them to any Docker registry:
Please add the following section to your `.gitlab-ci.yml`:
```yaml
include:
  - remote: https://raw.githubusercontent.com/DSD-DBS/capella-collab-manager/${CAPELLA_COLLABORATION_MANAGER_REVISION}/ci-templates/gitlab/image-builder.yml
```
The built images are tagged with the revision they were built with (e.g., when running for `main`, the tag would be `:main`). All characters matching the regex `[^a-zA-Z0-9.]` are replaced with `-`; a branch named `feature/docs`, for example, would result in the tag `:feature-docs`.
You have to add the following environment variables on repository level. Make sure to enable the "Expand variable reference" flag.
- `PRIVATE_GPG_PATH`: Path to the private GPG key used to decrypt the `secret.docker.json` file (more about this file below)
- `FRONTEND_IMAGE_NAME` (defaults to `capella/collab/frontend`)
- `BACKEND_IMAGE_NAME` (defaults to `capella/collab/backend`)
- `DOCS_IMAGE_NAME` (defaults to `capella/collab/docs`)
- `GUACAMOLE_IMAGE_NAME` (defaults to `capella/collab/guacamole`)

In addition, you can adjust the following variables when running a pipeline:
- `FRONTEND`: Build the frontend image?
- `BACKEND`: Build the backend image?
- `DOCS`: Build the docs image?
- `GUACAMOLE`: Build the guacamole image?
- `TARGET`: The target for which you want to build the images (more information on why this is important below)

This is the (minimal) configuration. For more advanced configuration options, please refer to the image-builder Gitlab template.
We make use of Mozilla SOPS files to store secrets used in the image builder template. Therefore, you need to have a directory `$TARGET` for each target with a `secret.docker.json` inside. You can create the `secret.docker.json` by running the following command:

```sh
sops -e --output ./<target>/secret.docker.json input.json
```
The `input.json` in this command is a placeholder for your own input file, which should have the following structure:

```json
{
  "registry_unencrypted": "<registry>",
  "username_unencrypted": "<username>",
  "password": "<unencrypted password>"
}
```
Verify that you can open the secret file with `sops ./<target>/secret.docker.json`. When it works, delete the `input.json`!
In addition, you will need a `.sops.yaml` at the root level having the following structure:

```yaml
creation_rules:
  - path_regex: .*
    key_groups:
      - pgp:
          - <GPG fingerprint>
```
Any time you update the `.sops.yaml` (i.e., adding or removing a fingerprint), you will have to run `sops updatekeys ./<target>/secret.docker.json` to ensure that only authorized persons can decrypt the secret file.
Lastly, please ensure that your Gitlab runner's GPG fingerprint is present in the `.sops.yaml` so that it can decrypt the secret values.
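If you need to look up the fingerprint or hand the private key over to the runner, a minimal sketch (assuming GnuPG is installed and the key already exists in your keyring; the output filename is just an example):

```sh
# Print the fingerprints of the keys in the local keyring;
# the runner's fingerprint is the one to list in .sops.yaml
gpg --list-secret-keys --fingerprint

# Export the runner's private key so it can be provided via PRIVATE_GPG_PATH
gpg --armor --export-secret-keys <fingerprint> > private.gpg
```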
The Kubernetes deploy template is used to deploy the Capella Collaboration Manager to a Kubernetes cluster using Helm.
Please add the following section to your `.gitlab-ci.yml`:
```yaml
include:
  - remote: https://raw.githubusercontent.com/DSD-DBS/capella-collab-manager/${CAPELLA_COLLABORATION_MANAGER_REVISION}/ci-templates/gitlab/k8s-deploy.yml
```
You have to add the following environment variables on repository level. Make sure to enable the "Expand variable reference" flag.
- `PRIVATE_GPG_PATH`: Path to the private GPG key used to decrypt the `secret.k8s.json` files.
- `GRAFANA_HELM_CHART` (optional): This variable is used to set the URL for the Grafana Helm chart. It is useful if your deployment environment has limited access, so you can specify a URL that is accessible for you.

In addition, you can adjust the following variables when running a pipeline:
- `TARGET`: The target for which you want to build the images (more information on why this is important below)
- `REVISION`: The revision of the Capella Collaboration Manager repository you want to use

For the `k8s-deploy.yml` you need to have some secret SOPS files in place. First of all, you need a `secret.docker.json` file as described in the image builder section above. In addition, you need to have a `secret.k8s.json` in each target directory, created from a JSON file with the following structure:
```json
{
  "server_unencrypted": "<k8s server>",
  "namespace_unencrypted": "<namespace>",
  "release_unencrypted": "<release>",
  "username_unencrypted": "<username>",
  "token": "<unencrypted token>"
}
```
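Analogous to the `secret.docker.json` above, the Kubernetes credentials can be encrypted with SOPS (the input filename is a placeholder for your own plain-text file):

```sh
sops -e --output ./<target>/secret.k8s.json input.k8s.json
```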
In addition, you need to have a `general.values.yaml` containing all the `values.yaml` values that do not have to be encrypted, and a `secret.values.yaml` only containing the values that should be encrypted (please do not use YAML anchors in the `secret.values.yaml` file and do not use the `_unencrypted` suffix in the variable names).
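A minimal sketch of how the secret values file could be encrypted, assuming you prepared a plain-text `plain.values.yaml` first (the filename is an assumption, not prescribed by the template):

```sh
sops -e --output ./<target>/secret.values.yaml plain.values.yaml
```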
Please delete the plain-text files containing secrets directly after encrypting them.

The tree inside your Gitlab repository should look like this:
```
├── .gitlab-ci.yml
├── .sops.yaml
├── environment.prod.ts
├── favicon.ico
├── target1
│   ├── general.values.yaml
│   ├── secret.values.yaml
│   ├── secret.docker.json
│   └── secret.k8s.json
├── target2
│   ├── general.values.yaml
│   ├── secret.values.yaml
│   ├── secret.docker.json
│   └── secret.k8s.json
└── ...
```
This is the (minimal) configuration. For more advanced configuration options, please refer to the k8s-deploy Gitlab template.
The Collaboration Manager repository contains a few tools that may come in handy when you're an administrator of a Collaboration Manager setup.
The CLI (Command Line Interface) tool allows you to back up and restore users' workspaces.
For the tools to work, you'll need access to the Kubernetes cluster the Collaboration Manager is running on, in particular the namespace used to spawn sessions. A quick way to verify this access is shown below.
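A simple smoke test, assuming `kubectl` is already configured against the cluster (the namespace name is a placeholder for your setup):

```sh
kubectl -n <sessions-namespace> get pods
```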
In order to use the CLI tooling, you'll need to have a local copy of the collab-manager application and Python 3.11 installed.
```sh
git clone https://github.com/DSD-DBS/capella-collab-manager.git
cd capella-collab-manager/backend
python -m venv .venv
source .venv/bin/activate
pip install .
```
Once your environment is set up, you can use the CLI tooling. The tooling is located in a module:
```sh
python -m capellacollab.cli --help
```
This gives you the help information. The CLI tool currently has one subcommand: `ws`, short for workspace.
```
Usage: python -m capellacollab.cli [OPTIONS] COMMAND [ARGS]...

Options:
  --install-completion [bash|zsh|fish|powershell|pwsh]
                          Install completion for the specified shell.
  --show-completion [bash|zsh|fish|powershell|pwsh]
                          Show completion for the specified shell, to
                          copy it or customize the installation.
  --help                  Show this message and exit.

Commands:
  ws
```
You can discover the CLI on your own by printing the help messages of the subcommands:
```sh
python -m capellacollab.cli ws --help
python -m capellacollab.cli ws backup --help
```
This guide describes the steps to get started with the Capella Collaboration Manager.
Before you start, make sure you have a running environment. For instructions on how to set up such an environment, please refer to the Development installation guide.
First, open a browser and go to http://localhost:8080.
You will be welcomed by a friendly screen and you can log in. The default setup is running an OAuth mock service for authentication.
As username, provide `admin` for the admin user. If you have changed the username or want to test another user, enter your custom username.
You'll be returned to the Collaboration Manager. Now you can start a session. Select Persistent Workspace and hit Request Session.
The system will now schedule and start a fresh workspace. Wait a bit for the workspace to become available.
Once the session is ready, click Connect to Session and a new tab should open. After a few seconds, you should see the Capella splash screen and a workspace will be shown in your browser.
This introduction only scratches the surface of what's possible with the Collaboration Manager.
+More advanced features include:
Are you interested in the platform and want to integrate it into your environment? We'd like to know more about your use case so that we can take it into account in future development. Please feel free to contact us: set@deutschebahn.com
You can also try out the platform locally; the README provides instructions for this. For production deployments, you can learn more here: Production installation
This guide will help you set up the Capella Collaboration Manager on a Kubernetes cluster. The setup of the basic installation is straightforward, but we'll also delve into the more complex TeamForCapella support that requires building custom Docker images.
During development, we also took into account that the application can be installed in highly restricted environments. An internet connection is not necessarily required.
Kubernetes allows us to make operations as simple as possible later on. Updates can be fully automated. In addition, Kubernetes allows us to ensure secure operation through standardized security hardening.
You can use an existing cloud service to create a Kubernetes cluster. We have running production deployments on Microsoft AKS and Amazon EKS. The application is designed in such a way that no cluster scope is necessary. All operations run at the namespace level, so it even runs in shared OpenShift clusters. And if you simply have a Linux server at your disposal, that is no obstacle either.
If you already have a running cluster, have `kubectl` up and running, and can reach the cluster, then you can skip this step.
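If you want to double-check that the cluster is reachable before continuing, a simple smoke test is:

```sh
kubectl get nodes
```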
We provide instructions for some environments. If you set up the application in a different environment, please document the installation and obstacles that you find, and we would be happy to receive a PR to help other users in the future.
+Info
+We have tested the instructions with Ubuntu Server 22.04.
Run steps 1-4 of the official microK8s Getting started guide.
Enable all required add-ons:

```sh
microk8s enable hostpath-storage # For persistent storage
microk8s enable rbac             # For role-based access control
microk8s enable ingress          # For load balancing
```
Enable the built-in registry and point `DOCKER_REGISTRY` at it:

```sh
microk8s enable registry
export DOCKER_REGISTRY=localhost:32000
```
Copy the `kubectl` configuration to the host, so that `helm` can pick it up:

```sh
mkdir -p $HOME/.kube
microk8s config > $HOME/.kube/config
chmod 600 $HOME/.kube/config # Nobody else should be able to read the configuration
```
Optional, but recommended: Set up an NFS for workspaces and Jupyter file-shares. The default `hostpath-storage` of microK8s doesn't enforce the specified capacity on PVCs. This can be exploited by a user uploading so much data to their workspace that the server runs out of disk storage.
Please follow the official instructions: https://microk8s.io/docs/nfs. Make sure to update the `backend.storageClassName` in the `values.yaml` in step 6 to `nfs-csi` (see the sketch below). All new Jupyter file-shares and personal workspaces will then use the new storage class.
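A minimal sketch of the corresponding `values.yaml` change, assuming the key sits at the top level as `backend.storageClassName`:

```yaml
backend:
  storageClassName: nfs-csi
```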
User mapping for non-root containers

If you want to run the session containers as non-root, you can set the `runAsUser` value in the `podSecurityContext` of the `values.yaml`. In the default configuration, `runAsUser` is set to `1004370000`.
Unfortunately, our NFS setup does not respect the `fsGroup` option. Therefore, all volumes are mounted as `nobody:nogroup` by default. This will lead to permission errors and crashing session containers.
To fix it, change the `/etc/exports` file and modify the options for the created file-share to:

```
(rw,sync,no_subtree_check,all_squash,anonuid=<user-id-of-session-containers>,anongid=0)
```

Replace `<user-id-of-session-containers>` with the `runAsUser` value of the Kubernetes Pod security context. Then, apply the new configuration by running `exportfs -ra`.
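For illustration, a complete `/etc/exports` entry could look like the following; the export path and client range are placeholders for your environment, and the `anonuid` is the default `runAsUser` mentioned above:

```
/srv/nfs/sessions 10.0.0.0/8(rw,sync,no_subtree_check,all_squash,anonuid=1004370000,anongid=0)
```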
We are constantly working on expanding our documentation. This installation method is currently not documented. If it is relevant, please feel free to contact us at set@deutschebahn.com or open an issue in this repository.
The minimum required resources are 3 Kubernetes CPU cores and around 2.5 GiB of memory for the management platform. Depending on the load, the instance can scale up and is limited to 10 Kubernetes CPU cores and ~8 GiB of memory.
Each session requires a minimum of 0.4 Kubernetes CPU cores and 1.6 GiB of memory. A session can scale up until it reaches 2 Kubernetes CPU cores and 6 GiB of memory.
The Collaboration Manager requires two different namespaces. For security and overview reasons, they are separated:
+ +Capella Collaboration Manager control namespace: In this namespace, we run + the core application. It has full control over the sessions namespace and consists of the following services:
+values.yaml
Sessions namespace. The namespace is controlled by the control namespace and you won't need to touch it. In the session namespace, the following services run:
+Create the two required namespaces:
```sh
kubectl create namespace collab-manager  # If you use another name, please update the following commands and use your namespace name.
kubectl create namespace collab-sessions # If you use another name, please update the `values.yaml` accordingly.
```
Set the `collab-manager` as default namespace in the default context (optional):

```sh
kubectl config set-context --current --namespace=collab-manager
```
Follow the official instructions to install Helm: Installing Helm
Verify that `helm` is working by executing the command:

```sh
helm version
```
Navigate to a persistent location on your server, e.g. `/opt`. Then clone the Github repository by running:

```sh
git clone https://github.com/DSD-DBS/capella-collab-manager.git
```
Copy the `values.yaml` to a persistent and secure location on your server or deployment environment.
The `local` directory in the Collaboration Manager repository is gitignored. We recommend putting the custom `values.yaml` in this directory, for example as sketched below.
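For example, assuming the chart's default values file lives at `helm/values.yaml` in the cloned repository (an assumption about the layout, adjust the path if needed):

```sh
mkdir -p local
cp helm/values.yaml local/values.yaml
```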
Make sure to set restrictive permissions on the `values.yaml`:

```sh
chmod 600 values.yaml
```
Adjust all values according to your needs.
+Run the following commands in the root directory of the repository:
```sh
helm dependency update ./helm
helm upgrade --install \
  --namespace collab-manager \
  --values <path-to-your-custom-values.yaml> \
  <release-name> \
  ./helm
```
The Guacamole database is not initialized automatically. Run the following command to initialize the PostgreSQL database:
```sh
kubectl exec --container <release-name>-guacamole-guacamole deployment/<release-name>-guacamole-guacamole -- /opt/guacamole/bin/initdb.sh --postgresql | \
  kubectl exec -i deployment/<release-name>-guacamole-postgres -- psql -U guacamole guacamole
```
After the initialization, the Guacamole password defaults to guacadmin
. We
+have to change it to a more secure password:
guacadmin
/
+ guacadmin
.guacadmin
user at the top-right corner of the screen, then
+ select "Settings".guacadmin
as current password.
+ Generate a secure password and enter it for "New password" and confirm it.
+ Then, click "Update password"guacadmin
/ guacadmin
no longer
+ works.guacamole.password
in the values.yaml
and repeat step 7.Run kubectl get pods
to see the status of all components. Once all containers
+are running, verify the installation state by running:
curl http://localhost/api/v1/health/general
+
It should return the following JSON:
+{ "guacamole": true, "database": true, "operator": true }
+
If a value is false, check the backend logs for more information.
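To inspect the backend logs, a command along these lines can help; the deployment name is an assumption based on the release name pattern, adjust it to your setup:

```sh
kubectl -n collab-manager logs deployment/<release-name>-backend
```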
+TeamForCapella server required
The setup of the TeamForCapella server and license server itself will not be part of this tutorial. To proceed, you'll need to have a running and reachable TeamForCapella server.
+Container registry required
For the TeamForCapella support, you'll need to build your own Docker images. In order to use them in the cluster, an external or internal container registry is required.
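If the registry requires authentication, make sure your local Docker client is logged in before building and pushing (the registry host is a placeholder):

```sh
docker login <your-registry>
```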
```sh
git clone https://github.com/DSD-DBS/capella-dockerimages
```
Prepare the `capella/base` and `t4c/client/base` images according to the Capella Docker images documentation (only the preparation section is needed).
Set the following environment variables:

```sh
export PUSH_IMAGES=1 # Auto-push images to the container registry after build
export DOCKER_REGISTRY=<your-registry> # Location of your remote or local container registry
export CAPELLA_BUILD_TYPE=offline # Don't download Capella during each build
export CAPELLA_VERSIONS="5.2.0 6.0.0 6.1.0" # Space separated list of Capella versions to build
export CAPELLA_DROPINS="" # Comma separated list of dropins
```
Then, build the `t4c/client/remote` images (the ones that we'll use in the Collaboration Manager):

```sh
make t4c/client/remote
```
In the Collaboration Manager UI, change the Docker image of the tool to `<registry>/t4c/client/remote:<capella-version>-latest`.
We're sorry to see you go

If you have any suggestions for us to improve, please share them with us, either privately via set@deutschebahn.com or via a Github issue.
If you want to uninstall the management portal, you can run the following command:
```sh
helm uninstall <release-name> -n <namespace>
```
or delete the management portal namespace:
```sh
kubectl delete namespace <namespace>
```
The previous command doesn't clean the sessions namespace. Please clean it manually by running (this also removes all persistent workspaces!):
```sh
kubectl -n <sessions-namespace> delete all --all
```
or just delete the namespace:
```sh
kubectl delete namespace <sessions-namespace>
```