A web platform for collaboration on MBSE and Capella projects.
Copyright 2021 - 2024 DB InfraGO AG, licensed under Apache 2.0 License (see full text here)
Turn your local MBSE and Capella experience into a browser-based collaboration platform for model-based projects. Designed to enable co-working across multiple organizations. Here are some of the key features:
- Run MBSE related tools (Capella, Papyrus, Eclipse, pure::variants, Jupyter, etc.) in a browser
- Supports both Git and TeamForCapella co-working models
- Single sign-on (SSO) via OAuth2
- No need to install or maintain local Capella clients - clients are created on demand in an underlying Kubernetes cluster
- Access to projects and models is self-managed by project admins, model owners or delegates
- Within a project, a user can have read-only or read & write access. Read-only users don't consume licenses in TeamForCapella projects.
- Integration with Git repository management for backup and workflow automation around the models.
- Diagram cache integration: Display Capella diagrams in the browser within seconds.
- Model badge integration: Each model displays an automatically generated model complexity badge.
- Automatic "garbage collection": Unused sessions are terminated to free up resources and reduce cost.
- Jupyter integration to talk to Capella models from the workspace and to automate tasks.
In addition, we have integrated commercial products:
- TeamForCapella
  - Automatic repository monitoring
  - UI to create and delete models
  - Automatic license injection into sessions
  - Nightly synchronization from TeamForCapella repositories to Git repositories
  - Automatic access management via session tokens
- pure::variants
  - Automatic license injection
  - Access to licenses is self-managed by project admins
We've prepared a small video, where we showcase the diagram cache feature and show how you can use Capella and Jupyter in split view in the browser:
collab-mgr-demo.mp4
To deploy the application you need:
- Docker >= 20.10.X
- kubectl >= 1.24 (Stargazer)
- helm >= 3.9.X
- Make >= 3.82, recommended 4.X
- Python >= 3.10
If you'd like to run it locally, these tools are additionally required:
- k3d - a lightweight k8s cluster
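To quickly check whether these tools are available in suitable versions, you can run something along these lines:

```sh
# Print the installed versions of the required tools
docker --version
kubectl version --client
helm version --short
make --version
python3 --version
k3d version
```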
When you have all that installed you can do the following:
```sh
git clone --recurse-submodules https://github.com/DSD-DBS/capella-collab-manager.git
cd capella-collab-manager

# Create a local k3d cluster and test the registry reachability
make create-cluster reach-registry
```
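To confirm that the cluster came up, you can, for example, list the k3d clusters and check the node status:

```sh
# The new cluster should show up here ...
k3d cluster list
# ... and its node(s) should be in the "Ready" state
kubectl get nodes
```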
Then, choose one of the four options and run the corresponding command. The options can be changed at any time later:
Note: Currently, we only provide amd64 images. If you want to run the application on arm64, you need to build the images yourself (option 3 or 4).
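If you are unsure which architecture your machine uses, you can check it like this (x86_64 corresponds to amd64; aarch64/arm64 means you need a local image build):

```sh
uname -m
```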
1. Fetch management portal and session images from GitHub (without TeamForCapella support). This option is recommended for the first deployment.

   ```sh
   export DOCKER_REGISTRY=ghcr.io/dsd-dbs/capella-collab-manager
   export CAPELLACOLLAB_SESSIONS_REGISTRY=ghcr.io/dsd-dbs/capella-dockerimages
   DEVELOPMENT_MODE=1 make helm-deploy open
   ```

2. Build management portal images and fetch session images from GitHub (without initial TeamForCapella support)

   ```sh
   export CAPELLACOLLAB_SESSIONS_REGISTRY=ghcr.io/dsd-dbs/capella-dockerimages
   DEVELOPMENT_MODE=1 make build helm-deploy open rollout
   ```

3. Build management portal and session images locally (without initial TeamForCapella support)

   To reduce the build time, the default configuration only builds images for Capella 6.0.0. If you want to build images for more versions, set the environment variable `CAPELLA_VERSIONS` to a space-separated list of semantic Capella versions:

   ```sh
   export CAPELLA_VERSIONS="6.0.0 6.1.0"
   export BUILD_ARCHITECTURE=amd64 # or arm64
   ```

   Then, run the following command:

   ```sh
   DEVELOPMENT_MODE=1 make deploy
   ```

4. Build Capella and TeamForCapella images locally (with initial TeamForCapella support)

   Read and execute the preparation in the Capella Docker images documentation: TeamForCapella client base.

   Then, run the following command:

   ```sh
   DEVELOPMENT_MODE=1 make deploy-t4c
   ```
The deployment can take a while, but it shouldn't take more than about 5 minutes. Please wait until all services are in the "Running" state.
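To follow the progress, you can watch the pods across all namespaces (the exact namespace names depend on your local configuration):

```sh
# Wait until all pods report the "Running" (or "Completed") status
kubectl get pods --all-namespaces --watch
```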
If all goes well, you should find Capella Collaboration Manager running on https://localhost:443/.
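As a quick smoke test, you can, for example, request the frontend from the command line (the -k flag skips certificate verification in case the local setup serves a self-signed certificate):

```sh
curl -kI https://localhost:443/
```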
If you want to see the individual services in the web-based Kubernetes dashboard, you can run the following command:
```sh
make dashboard
```
If something goes wrong, please open an issue on GitHub.
To clean up the environment, run:
```sh
make delete-cluster
k3d registry delete k3d-myregistry.localhost
```
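If you want to double-check that everything was removed, k3d can list the remaining clusters and registries:

```sh
# Both lists should no longer contain the entries created above
k3d cluster list
k3d registry list
```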
Once the cluster is installed and all services are running (`kubectl get pods`), you can get started. Follow our Getting started guide and be up and running in a few minutes.
You can find the installation guide for a production deployment in the general documentation.
The Capella Collaboration Manager consists of several components:
- A frontend - what you see in the browser
- A backend service - for managing projects, users and sessions
- Guacamole, to expose the sessions via the browser
- Databases, for state persistence
- Prometheus for session monitoring
- Grafana Loki for logs management
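If you are curious which of these components ended up in your cluster, a rough name-based filter over all pods can help; the exact pod names depend on your release name and namespaces:

```sh
kubectl get pods --all-namespaces | grep -iE 'backend|frontend|guacamole|postgres|prometheus|loki'
```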
External software can also be linked. These parts can be installed separately:
- Optional: A Git server (used for read-only sessions and Git backups)
- Optional: A TeamForCapella server
- Optional: A pure::variants server
We'd love to see your bug reports and improvement suggestions! Please take a look at our developer documentation. You'll also find instructions on how to set up a local development environment.