diff --git a/CHANGELOG.md b/CHANGELOG.md
index 37ac621..e14c98e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,7 @@
### Changed
- renamed backend resources for bindings.
- Changed the namespace used for bindings on the consumer clusters from `kube-bind` to `klutch-bind`.
+ Changed the namespace used for bindings on the App Clusters from `kube-bind` to `klutch-bind`.
This change automatically applies to new bindings.
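For operators checking an existing App Cluster, the rename can be sanity-checked with a small sketch (namespace names come from the entry above; the kubectl call is commented out because it needs a live cluster):

```shell
# Namespace rename noted above: kube-bind -> klutch-bind (applies to new bindings).
old_ns="kube-bind"
new_ns="klutch-bind"
# On a live App Cluster you could confirm with:
#   kubectl get namespace "$new_ns"
echo "new bindings are created in namespace: $new_ns"
```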
diff --git a/docs/architecture_overview.svg b/docs/architecture_overview.svg
index 2a12623..7c89fcc 100644
--- a/docs/architecture_overview.svg
+++ b/docs/architecture_overview.svg
@@ -1,4 +1,4 @@
-
\ No newline at end of file
+
App Cluster 1
Control Plane Cluster
App Cluster n
Request Services
via Proxy Claim
App Developers
Configures and manages
available Resources
Platform Operator
App Developer
Request Services
via Proxy Claim
Sync up Resource specifications; Sync down status and additional info
Creates Resources
On-premise Infrastructure
Cloud Providers
bind
bind
\ No newline at end of file
diff --git a/docs/core_concepts.md b/docs/core_concepts.md
index 39a5821..740d949 100644
--- a/docs/core_concepts.md
+++ b/docs/core_concepts.md
@@ -22,20 +22,20 @@ fundamental to understanding Klutch's architecture and operation:
Klutch operates on a multi-cluster model:
-- **Central Management Cluster:**
+- **Control Plane Cluster:**
- Hosts Crossplane and its providers
- Runs the bind backend
-- **Developer Cluster:**
+- **App Cluster:**
- Hosts klutch-bind CLI for creating bindings to remote resources
- - Hosts the konnector for state synchronization between the Custom Resources (CRs) in the central and developer cluster(s)
+ - Hosts the konnector for state synchronization between the Custom Resources (CRs) in the Control Plane and App cluster(s)
### 2. State Synchronization
The konnector component performs bidirectional state synchronization:
-- Watches for changes in CRs on developer clusters
-- Propagates these changes to the central management cluster
-- Updates the status of resources in developer clusters based on the actual state in the central cluster
+- Watches for changes in CRs on App Clusters
+- Propagates these changes to the Control Plane Cluster
+- Updates the status of resources in App Clusters based on the actual state in the Control Plane Cluster
### 3. Authentication and Authorization
@@ -44,13 +44,13 @@ Klutch implements a token-based auth system:
- Uses OIDC for initial authentication
- The bind backend verifies tokens with the OIDC provider (e.g., Keycloak)
-### 4. Remote / Proxy Resources
+### 4. Proxy Claims
-To manage remote resources, Klutch uses the concept of proxy resources:
+To manage remote resources, Klutch uses the concept of Proxy Claims:
-- Proxy resources are CRs that represent remote resources in the central cluster
-- Proxy resources map to [Crossplane Composite Resources (XRs)](https://docs.crossplane.io/master/concepts/composite-resources/)
-- The resource management Klutch does is all based on the yaml files the user manages on their consumer clusters.
+- Proxy Claims are applied in the App Cluster.
+- These Proxy Claims are mapped to [Crossplane Composite Resources Claims (XRCs)](https://docs.crossplane.io/latest/concepts/claims/)
+ in the Control Plane Cluster.
- The App Clusters are the source of truth for what resources should exist.
### 5. Binding Mechanism
@@ -58,19 +58,19 @@ To manage remote resources, Klutch uses the concept of proxy resources:
Klutch's binding mechanism enables API usage across multiple Kubernetes clusters:
- **Klutch-bind CLI:** enables usage of Klutch's APIs across different Kubernetes clusters by synchronizing state
-between the management cluster and API consumer clusters. The CLI
-initiates the OIDC auth process and installs the konnector into the user's cluster.
+between the Control Plane Cluster and API App Clusters. The CLI initiates the OIDC auth process and installs the
+konnector into the user's cluster.
-- **bind backend:** The backend authenticates new users via OIDC before creating a binding space on the consumer cluster
+- **bind backend:** The backend authenticates new users via OIDC before creating a binding space on the App Cluster
for them. The backend implementation is open to different approaches, as long as they follow the standard.
-- **konnector:** this component gets installed in the consumer's cluster and is responsible for synchronization between
-the management cluster and the consumer cluster.
+- **konnector:** this component gets installed in the App Cluster and is responsible for synchronization between
+the Control Plane Cluster and the App Cluster.
### 6. Provider Integration
Klutch leverages [Crossplane's provider](https://docs.crossplane.io/master/concepts/providers/) model:
- Supports any provider that adheres to Crossplane's provider specification
-- Platform operators can install and configure providers in the central cluster
+- Platform operators can install and configure providers in the Control Plane Cluster
- Providers handle the actual interaction with cloud APIs or infrastructure management tools
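As a concrete illustration of the Proxy Claim flow described above, a claim applied in an App Cluster might look like the following sketch. The API group, version, and field names are assumptions for illustration only; the `PostgresqlInstance` kind is borrowed from the tutorials.

```yaml
# Hypothetical Proxy Claim applied in an App Cluster. The konnector syncs it
# to the Control Plane Cluster, where it maps to a Crossplane XRC.
apiVersion: anynines.com/v1   # API group/version assumed for illustration
kind: PostgresqlInstance
metadata:
  name: example-pg
  namespace: default
spec:
  service: a8s-postgresql     # field names are illustrative, not authoritative
```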
diff --git a/docs/for-developers/developer_interactions.svg b/docs/for-developers/developer_interactions.svg
index a30ca31..110044d 100644
--- a/docs/for-developers/developer_interactions.svg
+++ b/docs/for-developers/developer_interactions.svg
@@ -1,4 +1,4 @@
-
Developer
Kubernetes Cluster (Klutch-enabled)
Klutch Central Management Cluster
Cloud Provider / On-premise Infrastructure
Use kubectl to create a Custom Resource → Process Custom Resource → Provision remote resource(s) → Resource status updates → Sync status and optionally additional objects → Monitor resource (kubectl get/describe) → View resource status → Update resource (modify & reapply YAML) → Process update → Update remote resource → Delete resource (kubectl delete) → Process deletion → Delete remote resource
\ No newline at end of file
+
App Developer
App Cluster (Klutch-enabled)
Klutch Control Plane Cluster
Cloud Provider / On-premise Infrastructure
Use kubectl to create a Custom Resource → Process Custom Resource → Provision remote resource(s) → Resource status updates → Sync status and optionally additional objects → Monitor resource (kubectl get/describe) → View resource status → Update resource (modify & reapply YAML) → Process update → Update remote resource → Delete resource (kubectl delete) → Process deletion → Delete remote resource
\ No newline at end of file
diff --git a/docs/for-developers/index.md b/docs/for-developers/index.md
index 6c0a7ce..7404a64 100644
--- a/docs/for-developers/index.md
+++ b/docs/for-developers/index.md
@@ -24,16 +24,16 @@ manage cloud or on-premise resources directly from your Kubernetes cluster using
Before you begin, ensure you have:
-1. [kubectl](https://kubernetes.io/docs/tasks/tools/) installed and configured to interact with your developer cluster.
+1. [kubectl](https://kubernetes.io/docs/tasks/tools/) installed and configured to interact with your App Cluster.
2. Access to a Klutch-enabled Kubernetes cluster.
- If your cluster wasn't set up by a platform operator, you need to use the klutch-bind CLI to connect to the central
- management cluster and bind to the resources you intend to use. For instructions on using the klutch-bind CLI, refer
+ If your cluster wasn't set up by a platform operator, you need to use the klutch-bind CLI to connect to the Control
+ Plane Cluster and bind to the resources you intend to use. For instructions on using the klutch-bind CLI, refer
to the ["For Platform Operators"](../platform-operator/index.md) section.
## Available Resource Types
-The types of resources you can create depend on the service bindings configured in your developer cluster. These can
+The types of resources you can create depend on the service bindings configured in your App Cluster. These can
include databases, message queues, storage solutions, or any other services available in supported cloud providers or
on-premise infrastructure.
@@ -133,7 +133,7 @@ If you encounter issues while creating or managing resources, the following step
resolve the problem. Depending on your specific situation and level of access, some or all of these steps may be
applicable:
-1. Check the resource status and events in your developer cluster:
+1. Check the resource status and events in your App Cluster:
```bash
kubectl describe <resource-type> <resource-name>
@@ -141,16 +141,16 @@ applicable:
Look for events or status messages that might indicate the issue.
-2. Examine the logs of konnector (the component of Klutch running in your developer cluster):
+2. Examine the logs of konnector (the component of Klutch running in your App Cluster):
```bash
kubectl logs -n klutch-bind deployment/konnector
```
- This may show issues related to the communication between your developer cluster and the central management cluster.
+ This may show issues related to the communication between your App Cluster and the Control Plane Cluster.
-3. If you have access to the central management cluster and are familiar with the Crossplane setup and configuration,
- you can perform additional troubleshooting steps in the central management cluster. Refer to the latest [official Crossplane troubleshooting guide](https://docs.crossplane.io/latest/guides/troubleshoot-crossplane/) for comprehensive instructions.
+3. If you have access to the Control Plane Cluster and are familiar with the Crossplane setup and configuration,
+ you can perform additional troubleshooting steps in the Control Plane Cluster. Refer to the latest [official Crossplane troubleshooting guide](https://docs.crossplane.io/latest/guides/troubleshoot-crossplane/) for comprehensive instructions.
Some key steps you can take include:
@@ -196,7 +196,7 @@ applicable:
kubectl get packages
```
-Remember that as a developer, your access is typically limited to your developer cluster. Many advanced troubleshooting
-steps, especially those involving the central management cluster or Crossplane configuration, may require collaboration
+Remember that as a developer, your access is typically limited to your App Cluster. Many advanced troubleshooting
+steps, especially those involving the Control Plane Cluster or Crossplane configuration, may require collaboration
with your platform operators or additional permissions. If you suspect a bug in Klutch, please consider opening an issue
in the relevant GitHub repository with a detailed description of the problem and steps to reproduce it.
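The first two troubleshooting steps above can be collected into a short script for reference (the resource type and name are placeholders you would substitute for your own binding; the commands are echoed rather than executed so the flow is visible at a glance):

```shell
# Placeholders for the resource being debugged.
kind="postgresqlinstance"
name="my-db"
# Step 1: check status and events in the App Cluster.
describe_cmd="kubectl describe $kind $name"
# Step 2: inspect konnector logs for sync problems.
logs_cmd="kubectl logs -n klutch-bind deployment/konnector"
printf '%s\n' "$describe_cmd" "$logs_cmd"
```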
diff --git a/docs/for-developers/tutorials/developer_tutorial.svg b/docs/for-developers/tutorials/developer_tutorial.svg
index e4ed427..8556979 100644
--- a/docs/for-developers/tutorials/developer_tutorial.svg
+++ b/docs/for-developers/tutorials/developer_tutorial.svg
@@ -1,4 +1,4 @@
-
Using kubectl in Developer Cluster
Create Remote PostgreSQL Resource
Use Remote PostgreSQL Resource
Using a9s CLI
Deploy Central Management Cluster
Set up Developer Cluster
Bind APIs
Start
End
\ No newline at end of file
+
Using kubectl in App Cluster
Create Remote PostgreSQL Resource
Use Remote PostgreSQL Resource
Using a9s CLI
Deploy Control Plane Cluster
Set up App Cluster
Bind APIs
Start
End
\ No newline at end of file
diff --git a/docs/for-developers/tutorials/index.md b/docs/for-developers/tutorials/index.md
index b4ff8e6..89abe49 100644
--- a/docs/for-developers/tutorials/index.md
+++ b/docs/for-developers/tutorials/index.md
@@ -13,18 +13,18 @@ keywords:
## Local Klutch Deployment
This tutorial guides you through deploying Klutch locally using two interconnected local clusters that simulate a
-developer and a central management cluster. It covers setting up both clusters, connecting them using Klutch, and
-showcases how developers can request and utilize resources in the developer cluster that are actually provisioned in the
-central management cluster.
+pair of App and Control Plane Clusters. It covers setting up both clusters, connecting them using Klutch, and showing how
+developers can request and utilize resources in the App Cluster that are actually provisioned in the Control Plane
+Cluster.
### Overview
In this tutorial, you'll perform the following steps in your local environment:
-1. Deploy a central management cluster (which will also host resources)
-2. Set up a developer cluster
-3. Bind APIs from the development to Central management cluster
-4. Create and use remote resources from the development cluster (in this case Postgresql service)
+1. Deploy a Control Plane Cluster (which will also host resources)
+2. Set up an App Cluster
+3. Bind APIs from the App Cluster to the Control Plane Cluster
+4. Create and use remote resources from the App Cluster (in this case a PostgreSQL service)
We'll use the open source [a9s CLI](https://github.com/anynines/a9s-cli-v2) to streamline this process, making it easy
to follow along and understand each step.
@@ -47,7 +47,7 @@ If you work with Kubernetes regularly, you probably have these standard tools al
To follow along with this tutorial, you need to install the following specialized tools:
-1. [kubectl-bind](https://docs.k8s.anynines.com/docs/develop/platform-operator/central-management-cluster-setup/#binding-a-consumer-cluster-interactive)
+1. [kubectl-bind](https://docs.k8s.anynines.com/docs/develop/platform-operator/central-management-cluster-setup/#binding-an-app-cluster-interactive)
2. [a9s cli](https://docs.a9s-cli.anynines.com/docs/a9s-cli/)
### Network Access
@@ -63,14 +63,13 @@ Ensure your machine can reach the following external resources:
## Step 1: Run the Deployment Command
-In this step, we'll set up both the central management cluster and the developer cluster for Klutch using **a single
-command**. This command will install all components needed by Klutch, including the a8s framework with the PostgreSQL
-operator.
+In this step, we'll set up both the Control Plane Cluster and the App Cluster for Klutch using **a single command**.
+This command will install all components needed by Klutch, including the a8s framework with the PostgreSQL operator.
:::note
-This step does not automatically create bindings between the developer cluster and the resources in the central
-management cluster. You'll need to create these bindings using a web UI in a later step.
+This step does not automatically create bindings between the App Cluster and the resources in the Control Plane Cluster.
+You'll need to create these bindings using a web UI in a later step.
:::
@@ -82,7 +81,7 @@ a9s klutch deploy --port 8080 --yes
:::note
-- The ```--port 8080``` flag specifies the port on which the central management cluster's ingress will listen. You can
+- The ```--port 8080``` flag specifies the port on which the Control Plane Cluster's ingress will listen. You can
change this if needed.
- The ```--yes``` flag skips all confirmation prompts, speeding up the process.
@@ -90,9 +89,9 @@ a9s klutch deploy --port 8080 --yes
What this command does:
-1. Deploys the central management (producer) cluster with all required components.
+1. Deploys the Control Plane Cluster with all required components.
2. Installs the [a8s framework](https://k8s.anynines.com/for-postgres/) with the PostgreSQL Kubernetes operator.
-3. Creates a developer (consumer) Kind cluster.
+3. Creates an App Cluster with Kind.
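Assuming the cluster names shown in the deployment output further below, you can sanity-check the result with `kind get clusters`; a minimal sketch:

```shell
# Cluster names used by the deploy step (taken from the sample output below).
control_plane="klutch-control-plane"
app="klutch-app"
# On a real machine: kind get clusters   # should list both names
printf '%s\n' "$control_plane" "$app"
```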
:::tip
@@ -103,7 +102,7 @@ For a hands-off deployment, keep the `--yes` flag to skip all prompts.
:::
-### 1.1 Central Management (Producer) Cluster Deployment
+### 1.1 Control Plane Cluster Deployment
The CLI automatically:
@@ -126,10 +125,10 @@ You'll see progress updates and YAML files being applied for each component.
-After setting up the management cluster, the CLI:
+After setting up the Control Plane Cluster, the CLI:
-- Creates a new Kind cluster named "klutch-consumer"
+- Creates a new Kind cluster named "klutch-app"
-At the moment this is an empty Kind cluster. Klutch components will be added in the next step, when the consumer
-cluster is "bound" to the central management cluster. Stay tuned!
+At the moment this is an empty Kind cluster. Klutch components will be added in the next step, when the App Cluster is
+"bound" to the Control Plane Cluster. Stay tuned!
### Deployment Output
@@ -143,7 +142,7 @@ Checking Prerequisites...
✅ Found kind at path /opt/homebrew/bin/kind.
...
-Creating cluster "klutch-management"...
+Creating cluster "klutch-control-plane"...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
@@ -163,8 +162,8 @@ Applying the a8s Data Service manifests...
✅ The a8s System appears to be ready.
...
-Deploying a Consumer Kind cluster...
-Creating cluster "klutch-consumer" ...
+Deploying an App Cluster with Kind...
+Creating cluster "klutch-app" ...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
@@ -173,24 +172,24 @@ Creating cluster "klutch-consumer" ...
Summary
You've successfully accomplished the following steps:
-✅ Deployed a Klutch management Kind cluster.
+✅ Deployed a Klutch Control Plane Cluster with Kind.
✅ Deployed Dex Idp and the anynines klutch-bind backend.
✅ Deployed Crossplane and the Kubernetes provider.
✅ Deployed the Klutch Crossplane configuration package.
-✅ Deployed Klutch API Service Export Templates to make the Klutch Crossplane APIs available to consumer clusters.
+✅ Deployed Klutch API Service Export Templates to make the Klutch Crossplane APIs available to App Clusters.
✅ Deployed the a8s Stack.
-✅ Deployed a consumer cluster.
-🎉 You are now ready to bind APIs from the consumer cluster using the `a9s klutch bind` command.
+✅ Deployed an App Cluster.
+🎉 You are now ready to bind APIs from the App Cluster using the `a9s klutch bind` command.
```
-## Step 2: Bind Resource APIs from the Consumer Cluster
+## Step 2: Bind Resource APIs from the App Cluster
-After setting up both clusters, the next step is to bind APIs from the consumer cluster to the management cluster. We'll
+After setting up both clusters, the next step is to bind APIs from the App Cluster to the Control Plane Cluster. We'll
bind two APIs: `postgresqlinstance` and `servicebinding`.
-This operation also sets up an agent in the cluster to keep resources in sync between the consumer cluster and the
-central management cluster.
+This operation also sets up an agent in the cluster to keep resources in sync between the App Cluster and the Control
+Plane Cluster.
Execute the following command to initiate the binding process:
@@ -221,7 +220,7 @@ Checking Prerequisites...
...
The following command will be executed for you:
-/opt/homebrew/bin/kubectl bind http://192.168.0.91:8080/export --konnector-image public.ecr.aws/w5n9a2g2/anynines/konnector:v1.3.0 --context kind-klutch-consumer
+/opt/homebrew/bin/kubectl bind http://192.168.0.91:8080/export --konnector-image public.ecr.aws/w5n9a2g2/anynines/konnector:v1.3.0 --context kind-klutch-app
```
Next, a browser window will open for authentication. Use these demo credentials:
@@ -252,7 +251,7 @@ Do you accept this Permission? [No,Yes]
You've successfully accomplished the following steps:
✅ Called the kubectl bind plugin to start the interactive binding process
+✅ Authorized the Control Plane Cluster to manage the selected API on your App Cluster.
+✅ Authorized the management cluster to manage the selected API on your App Cluster.
✅ You've bound the postgresqlinstances resource. You can now apply instances of this resource, for example with the
following yaml:
@@ -269,12 +268,12 @@ spec:
name: a8s-postgresql
```
-To bind to `servicebinding`, repeat the [same process](#step-2-bind-resource-apis-from-the-consumer-cluster), but click
+To bind to `servicebinding`, repeat the [same process](#step-2-bind-resource-apis-from-the-app-cluster), but click
on `Bind` under the `servicebinding` API in the web UI.
## Step 3: Create and Use a PostgreSQL Instance
-After binding the PostgresqlInstance, you can create and use PostgreSQL instances in your consumer cluster. This section
+After binding the PostgresqlInstance, you can create and use PostgreSQL instances in your App Cluster. This section
will guide you through creating an instance and using it with a simple blogpost application.
### 3.1 Create a PostgreSQL Instance
@@ -297,7 +296,7 @@ spec:
OR download the yaml manifest
-Apply the file to your developer cluster:
+Apply the file to your App Cluster:
```bash
kubectl apply -f pg-instance.yaml
@@ -333,7 +332,7 @@ kubectl apply -f service-binding.yaml
### 3.3 Configure Local Network for Testing
Before deploying our application, we need to configure the local network to make the PostgreSQL service available in the
-developer cluster. This step is for local testing purposes and may vary significantly in a production environment.
+App Cluster. This step is for local testing purposes and may vary significantly in a production environment.
Create a file named `external-pg-service.yaml` with the following content:
@@ -367,11 +366,11 @@ Apply the file:
kubectl apply -f <(eval "echo \"$(cat external-pg-service.yaml)\"")
```
-#### Set up port forwarding in the management cluster
+#### Set up port forwarding in the Control Plane Cluster
a. Open a new terminal window.
- b. Switch the kubectl context to the management cluster:
+ b. Switch the kubectl context to the Control Plane Cluster:
```bash
-kubectl config use-context kind-klutch-management
+kubectl config use-context kind-klutch-control-plane
@@ -397,7 +396,7 @@ kubectl apply -f <(eval "echo \"$(cat external-pg-service.yaml)\"")
### 3.4 Deploy a Blogpost Application
Now, let's deploy a simple blogpost application that uses our PostgreSQL service. Return to the terminal window where
-your **kubectl context** is set to the **developer cluster**.
+your **kubectl context** is set to the **App Cluster**.
Create a file named `blogpost-app.yaml` with the following content:
@@ -495,12 +494,12 @@ If you need to start over or remove the clusters created by Klutch, use the foll
a9s klutch delete
```
-This command will remove both the management and developer clusters that were created during the Klutch deployment
+This command will remove both the Control Plane Cluster and App Clusters that were created during the Klutch deployment
process.
:::note
-Use this command with caution as it will delete all resources and data in both the management and developer clusters.
+Use this command with caution, as it will delete all resources and data in both the Control Plane and App Clusters.
Make sure to back up any important data before proceeding.
:::
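For reference, the tutorial's command flow condenses to the following sequence (flags exactly as used above; each command must be run from the kubectl context indicated in its step):

```shell
# End-to-end sequence from this tutorial, collected in order.
step1="a9s klutch deploy --port 8080 --yes"   # Step 1: create both clusters
step2="a9s klutch bind"                       # Step 2: bind APIs (run once per API)
step3="kubectl apply -f pg-instance.yaml"     # Step 3.1: create a PostgreSQL instance
step4="kubectl apply -f service-binding.yaml" # Step 3.2: create a service binding
cleanup="a9s klutch delete"                   # removes both Kind clusters
printf '%s\n' "$step1" "$step2" "$step3" "$step4" "$cleanup"
```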
diff --git a/docs/index.md b/docs/index.md
index 38c4792..9bff11a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -15,18 +15,18 @@ keywords:
## Overview
-Klutch is a Kubernetes-native platform that simplifies the orchestration of resources and services across diverse cloud
-environments and fleets of Kubernetes clusters. It enables on-the-fly provisioning of services by multiple consumer
+Klutch is a Kubernetes-native platform that simplifies the orchestration of resources and data services across diverse cloud
+environments and fleets of Kubernetes clusters. It enables on-the-fly provisioning of services by multiple application
Kubernetes clusters using a declarative interface. It caters to both the needs of platform operators and application
-developers, as it simplifies both the hosting as well as the consumption of services.
+developers, as it simplifies both the hosting and the consumption of data services.
### Key Features
-- **Multi-cluster Management**: Orchestrate resources and services across fleets of Kubernetes clusters.
-- **On-demand Provisioning**: Allow consumer clusters to provision services dynamically as needed.
-- **Declarative Interface**: Utilize a unified Kubernetes-native declarative approach for service provisioning and
+- **Multi-cluster Management**: Orchestrate resources and data services across fleets of Kubernetes clusters.
+- **On-demand Provisioning**: Allow App Clusters to provision services dynamically as needed.
+- **Declarative Interface**: Utilize a unified Kubernetes-native declarative approach for data service provisioning and
consumption.
-- **Unified Control**: Manage resources and services across multiple environments from a single point.
+- **Unified Control**: Manage resources and data services across multiple environments from a single point.
- **Dual-focused Design**: Simplify operations for both platform operators and application developers.
- **Extensible Architecture**: Plugin-based architecture facilitates easy integration of new resource types and cloud
providers.
@@ -40,27 +40,27 @@ developers, as it simplifies both the hosting as well as the consumption of serv
1. **Developers**
- Interact with their Kubernetes clusters to request and use remote resources.
- - Define service requirements using Custom Resources, initiating automated provisioning and deployment processes.
+ - Define data service requirements using Custom Resources, initiating automated provisioning and deployment processes.
-2. **Developer Kubernetes Clusters**
+2. **App Clusters**
- - Request services from the Central Management Cluster.
+ - Request services from the Control Plane Cluster.
- Utilize Klutch-bind to subscribe and enable usage of Klutch's APIs across different Kubernetes clusters.
- - Synchronize resource specifications, status, and additional information with the Central Management Cluster.
+ - Synchronize resource specifications, status, and additional information with the Control Plane Cluster.
3. **Platform Operators**
- - Configure and manage available remote resources through the Central Management Cluster.
+ - Configure and manage available remote resources through the Control Plane Cluster.
- Oversee the entire system, ensuring smooth operation.
-4. **Central Management Clusters**
+4. **Control Plane Clusters**
- Manage the entire ecosystem using the centralized control plane.
- - Process service requests from developer clusters.
+ - Process service requests from App Clusters.
- Utilize Crossplane for managing and provisioning cloud-native resources across multiple cloud providers and
on-premise environments.
- - Manage bidirectional synchronization of resource specifications, status, and additional information with developer
- clusters.
+ - Manage bidirectional synchronization of resource specifications, status, and additional information with App
+ Clusters.
- Key functionalities include:
- Maintain a list of available resources.
- Handle the actual provisioning of resources in target environments.
diff --git a/docs/klutch_components.svg b/docs/klutch_components.svg
index e7d3aec..4fe69c6 100644
--- a/docs/klutch_components.svg
+++ b/docs/klutch_components.svg
@@ -1,4 +1,4 @@
-
Crossplane
Crossplane
OIDC (e.g. keycloak)
OIDC (e.g. keycloak)
Crossplane provider(s)
Crossplane provider(...
crossplane-api
crossplane-api
Remote resources
Remote resources
Proxy resources CRDs
Proxy resources CRDs
konnector
konnector
bind backend
bind backend
klutch-bind CLI
klutch-bind CLI
Install
Install
Provide
Provide
Provide
Provide
Use
Use
Make available
Make avail...
Verify Tokens
Verify Tok...
Synchronize objects
Synchroniz...
Create new binding
Create new...
Installs
Installs
Central Management Cluster
Central Management Cluster
Developer Cluster
Developer Cluster
Legend
Legend
Component
Component
Data structure / API
Data structure / API
Third party component
Third party component
Request Services
via Custom Resources
Request Services...
Application Developer
Applicat...
Configures and manages
available Resources
Configures...
Platform Operator
Platform...
On-premise Infrastructure
On-premise Infrastru...
Cloud providers
Cloud providers
Deployed at
Deployed at
Proxy resources CR
Proxy resources CR
Authenticate
Authentica...
Sync
Sync
\ No newline at end of file
+
Crossplane
OIDC (e.g. keycloak)
Crossplane provider(s)
crossplane-api
Remote resources
Composite Resource Claims CRDs
konnector
bind backend
klutch-bind CLI
Install
Provide
Provide
Use
Make available
Verify Tokens
Synchronize objects
Create new binding
Installs
Control Plane Cluster
App Cluster
Legend
Component
Data structure / API
Third party component
Request Services
via Custom Resources
App Developer
Configures and manages
available Resources
Platform Operator
On-premise Infrastructure
Cloud Providers
Deployed at
Proxy Claim
Authenticate
Sync
\ No newline at end of file
diff --git a/docs/platform-operator/central-management-cluster-setup/klutch-deployment.png b/docs/platform-operator/central-management-cluster-setup/klutch-deployment.png
deleted file mode 100644
index 9ca57d2..0000000
Binary files a/docs/platform-operator/central-management-cluster-setup/klutch-deployment.png and /dev/null differ
diff --git a/docs/platform-operator/central-management-cluster-setup/index.md b/docs/platform-operator/control-plane-cluster-setup/index.md
similarity index 89%
rename from docs/platform-operator/central-management-cluster-setup/index.md
rename to docs/platform-operator/control-plane-cluster-setup/index.md
index 1d3962a..6960eaa 100644
--- a/docs/platform-operator/central-management-cluster-setup/index.md
+++ b/docs/platform-operator/control-plane-cluster-setup/index.md
@@ -1,8 +1,8 @@
---
id: klutch-po-data-services
-title: Setting up Central Management and Developer Clusters
+title: Setting up Control Plane and App Clusters
tags:
- - central management cluster
+ - control plane cluster
- kubernetes
- data services
- platform operator
@@ -11,18 +11,17 @@ keywords:
- platform operator
---
-Below are the instructions for setting up Klutch's central management cluster, which includes
+Below are the instructions for setting up Klutch's Control Plane Cluster, which includes
deploying [Crossplane](https://www.crossplane.io/). While other cloud providers are supported as
well, for the purpose of this example, we will use [AWS](https://aws.amazon.com/). These
-instructions also cover the configuration of the consumer cluster - i.e. the cluster from which data
-services will be consumed or used - with bindings to services exported in the central management
-cluster.
+instructions also cover the configuration of the App Cluster - i.e. the cluster from which data
+services will be used - with bindings to services exported in the Control Plane Cluster.
## Prerequisites
- Provision an EKS cluster
- - Use a minimum of 3 nodes if you want to host highly available services on the central management
- cluster, each node should at least be t3a.xlarge or equivalent.
+ - Use a minimum of 3 nodes if you want to host highly available services on the Control Plane
+   Cluster; each node should be at least t3a.xlarge or equivalent.
- In general, the Klutch control plane itself can also run with just one worker node.
- Set up a VPC with 3 subnets.
- Make sure [eksctl](https://eksctl.io/introduction/#getting-started) is installed and configured
@@ -31,32 +30,32 @@ cluster.
## Overview
To successfully manage data services using Klutch, several components must be deployed. Konnector is
-deployed on each consumer cluster that wants to manage its data services with Klutch. Klutch itself
-is deployed on a central management cluster. Konnector is configured to correctly interact with
-klutch-bind running in Klutch so each service running on the consumer cluster doesn't need to be
+deployed on each App Cluster that wants to manage its data services with Klutch. Klutch itself
+is deployed on a Control Plane Cluster. Konnector is configured to correctly interact with
+klutch-bind running in Klutch so each service running on the App Cluster doesn't need to be
configured to call Klutch. Instead, the services can use Klutch to manage their data services by
interacting with Konnector.
-![Deploy Klutch and its related components](klutch-deployment.png)
The following instructions will install the services that are necessary to use Klutch. First, the
-Crossplane provider `provider-anynines` is installed in the management cluster. This is done by
+Crossplane provider `provider-anynines` is installed in the Control Plane Cluster. This is done by
installing both the provider itself and configuration that the provider needs to run properly.
-Then, the klutch-bind backend is deployed in the management cluster. The installation for
-klutch-bind includes permission configuration that needs to be set up so the developer cluster can
+Then, the klutch-bind backend is deployed in the Control Plane Cluster. The installation for
+klutch-bind includes permission configuration that needs to be set up so the App Cluster can
properly access the backend.
-Lastly, Konnector must be [installed on the developer cluster](./setup-developer-cluster.md). After
-installation, Konnector is bound to the klutch-bind backend. This is how the developer cluster can call
-Klutch in the management cluster.
+Lastly, Konnector must be [installed on the App Cluster](./setup-app-cluster.md). After
+installation, Konnector is bound to the klutch-bind backend. This is how the App Cluster can call
+Klutch in the Control Plane Cluster.
The current instructions only include deployment of `provider-anynines`. This product is currently
in development and more providers can be expected soon!
-## Setup Klutch central management cluster
+## Setup Klutch Control Plane Cluster
-We will now set up the central management Kubernetes cluster that you've set up in the previous step
+We will now configure the Kubernetes Control Plane Cluster that you provisioned in the previous step
so that we can deploy Klutch on it.
### Deploy Crossplane and provider-anynines
@@ -152,8 +151,8 @@ values that require updating include:
| -------------------------------- | ----------------------------------------------------------------------------- |
| `` | Cookies signing key - run `openssl rand -base64 32` to create it |
| `` | Cookies encryption key - run `openssl rand -base64 32` to create it |
-| `` | The base64 encoded certificate of the central management Kubernetes cluster |
-| `` | URL of the Kubernetes API server of the central management Kubernetes cluster |
+| `` | The base64 encoded certificate of the Control Plane Cluster |
+| `` | URL of the Kubernetes API server of the Control Plane Cluster |
| `` | OIDC client url |
| `` | OIDC client secret |
| `` | the URL of the Klutch backend service, see [backend-host](#backend-host) |
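As the table notes, both cookie keys are 32 random bytes encoded in base64. A minimal sketch of generating them with the `openssl rand` command the table already references (run it once per key):

```shell
# Generate a base64-encoded 32-byte key for cookie signing,
# then run it again for the cookie encryption key.
openssl rand -base64 32
openssl rand -base64 32
```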
@@ -183,18 +182,18 @@ ACME protocol. If a different approach is preferred, please update the `Issuer`
#### Kubernetes cluster certificate
-The base64 encoded certificate of the central management Kubernetes cluster can be found in the
+The base64 encoded certificate of the Control Plane Cluster can be found in the
KubeConfig of that cluster, specifically in `clusters.certificate-authority-data`.
#### Kubernetes api external name
-The URL of the Kubernetes API server of the central management Kubernetes cluster, .i.e. the
+The URL of the Kubernetes API server of the Control Plane Cluster, i.e. the
Kubernetes API server's external hostname can be found in kubeConfig `clusters.server`.
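Both values live side by side in the kubeconfig. As an illustrative sketch (the dict below stands in for the parsed kubeconfig YAML, and the cluster name and values are made up), this is where they can be read from:

```python
# Minimal illustration of where the two values live inside a kubeconfig.
# A real kubeconfig is YAML; a dict stands in for the parsed document here.
kubeconfig = {
    "clusters": [
        {
            "name": "control-plane",  # hypothetical cluster name
            "cluster": {
                "certificate-authority-data": "LS0tLS1CRUdJTi...",  # base64 CA cert
                "server": "https://api.control-plane.example.com:6443",
            },
        }
    ]
}

def cluster_values(cfg, name):
    """Return (certificate-authority-data, server) for the named cluster."""
    for entry in cfg["clusters"]:
        if entry["name"] == name:
            cluster = entry["cluster"]
            return cluster["certificate-authority-data"], cluster["server"]
    raise KeyError(name)

ca_data, server = cluster_values(kubeconfig, "control-plane")
print(server)  # https://api.control-plane.example.com:6443
```

The certificate value is already base64 encoded in the kubeconfig, so it can be used as-is.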
#### backend-host
During the [deployment of Klutch](#deployment) a service of type `LoadBalancer` was created. This
-load balancer can be used to connect to Klutch from a developer (or consumer) cluster. To obtain the
+load balancer can be used to connect to Klutch from an App Cluster. To obtain the
required information about the service, execute the following command:
```bash
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 1.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 1.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 1.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 1.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 10.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 10.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 10.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 10.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 11.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 11.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 11.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 11.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 12.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 12.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 12.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 12.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 2.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 2.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 2.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 2.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 3.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 3.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 3.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 3.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 4.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 4.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 4.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 4.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 5.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 5.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 5.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 5.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 6.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 6.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 6.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 6.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 7.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 7.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 7.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 7.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 8.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 8.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 8.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 8.png
diff --git a/docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 9.png b/docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 9.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/keycloak screenshots/Step 9.png
rename to docs/platform-operator/control-plane-cluster-setup/keycloak screenshots/Step 9.png
diff --git a/docs/platform-operator/central-management-cluster-setup/klutch-bind-ui.png b/docs/platform-operator/control-plane-cluster-setup/klutch-bind-ui.png
similarity index 100%
rename from docs/platform-operator/central-management-cluster-setup/klutch-bind-ui.png
rename to docs/platform-operator/control-plane-cluster-setup/klutch-bind-ui.png
diff --git a/docs/platform-operator/control-plane-cluster-setup/klutch-deployment.png b/docs/platform-operator/control-plane-cluster-setup/klutch-deployment.png
new file mode 100644
index 0000000..53751af
Binary files /dev/null and b/docs/platform-operator/control-plane-cluster-setup/klutch-deployment.png differ
diff --git a/docs/platform-operator/central-management-cluster-setup/oidc-keycloak.md b/docs/platform-operator/control-plane-cluster-setup/oidc-keycloak.md
similarity index 90%
rename from docs/platform-operator/central-management-cluster-setup/oidc-keycloak.md
rename to docs/platform-operator/control-plane-cluster-setup/oidc-keycloak.md
index 59b3963..40149a2 100644
--- a/docs/platform-operator/central-management-cluster-setup/oidc-keycloak.md
+++ b/docs/platform-operator/control-plane-cluster-setup/oidc-keycloak.md
@@ -17,8 +17,8 @@ Keycloak.
## OpenID Connect (OIDC) client for the backend
-The Klutch backend needs to be configured as an OIDC client so that consumers can authenticate
-against it and set up service accounts for [developer (consumer) clusters](./setup-developer-cluster.md)
+The Klutch backend needs to be configured as an OIDC client so that developers can authenticate
+against it and set up service accounts for [App Clusters](./setup-app-cluster.md)
to connect (bind) to data services. For this purpose, create a new OIDC client for the Klutch
backend. In our example we call the client `klutch-bind-backend`.
diff --git a/docs/platform-operator/central-management-cluster-setup/provider-anynines.md b/docs/platform-operator/control-plane-cluster-setup/provider-anynines.md
similarity index 97%
rename from docs/platform-operator/central-management-cluster-setup/provider-anynines.md
rename to docs/platform-operator/control-plane-cluster-setup/provider-anynines.md
index 49897c5..aade769 100644
--- a/docs/platform-operator/central-management-cluster-setup/provider-anynines.md
+++ b/docs/platform-operator/control-plane-cluster-setup/provider-anynines.md
@@ -2,7 +2,7 @@
title: Configuring Crossplane Provider provider-anynines
sidebar_position: 2
tags:
- - central management cluster
+ - control plane cluster
- kubernetes
- a9s data services
- platform operator
@@ -18,7 +18,7 @@ Klutch, you can use `provider-anynines` to talk to the service broker.
In order to follow along with this manual, you need a working installation of the CloudFoundry
service broker and a pair of credentials. The service broker must be reachable from the network of
-the Central Management Cluster's worker nodes.
+the Control Plane Cluster's worker nodes.
### Install ProviderConfig
diff --git a/docs/platform-operator/central-management-cluster-setup/setup-developer-cluster.md b/docs/platform-operator/control-plane-cluster-setup/setup-app-cluster.md
similarity index 92%
rename from docs/platform-operator/central-management-cluster-setup/setup-developer-cluster.md
rename to docs/platform-operator/control-plane-cluster-setup/setup-app-cluster.md
index b20eb5a..6284bb1 100644
--- a/docs/platform-operator/central-management-cluster-setup/setup-developer-cluster.md
+++ b/docs/platform-operator/control-plane-cluster-setup/setup-app-cluster.md
@@ -1,21 +1,21 @@
---
-title: Set up developer (consumer) clusters
+title: Set up App Clusters
sidebar_position: 1
tags:
- - central management cluster
+ - app cluster
- kubernetes
- a9s data services
- - platform operator
+ - application developer
keywords:
- a9s data services
- - platform operator
+ - application developer
---
import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
-## Set up a developer cluster
+## Set up an App Cluster
-We use klutch-bind to make the a9s Kubernetes API available inside a developer cluster. In order to
+We use klutch-bind to make the a9s Kubernetes API available inside an App Cluster. In order to
utilize the `kubectl bind` command, you'll need to have the `kubectl-bind` binary installed and
properly added to your system's path. Download the appropriate `kubectl-bind` binary for your
system's architecture from the provided options:
@@ -155,8 +155,8 @@ kubectl bind
---
-We proceed by binding the developer cluster with the Klutch backend. This will allow users of the
-developer cluster to set up new data service instances in the environment managed by the Klutch
+We proceed by binding the App Cluster with the Klutch backend. This will allow users of the
+App Cluster to set up new data service instances in the environment managed by the Klutch
backend. To create this binding, execute the following commands:
1. In the following line, replace `` with the hostname of the Klutch backend:
@@ -173,18 +173,18 @@ You can select the service to bind by using the web UI, as shown in the followin
![Bind an a9s Data Service using the web UI](klutch-bind-ui.png)
-And that's it, you have now successfully configured both the provider and developer clusters.
+And that's it, you have now successfully configured both the Control Plane and App clusters.
# Install Konnector without klutch-bind CLI
-When provisioning a developer cluster from an automated CI flow, it may be desirable to avoid
+When provisioning an App Cluster from an automated CI flow, it may be desirable to avoid
additional dependencies like the `kubectl bind` CLI binary or the anynines `helper` CLI. For those
cases it is possible to deploy the `Konnector` component using a plain Kubernetes manifest.
:::note
-These steps will only install the generic Konnector component. They will not bind the developer cluster
-to the central management cluster yet.
+These steps will only install the generic Konnector component. They will not bind the App Cluster
+to the Control Plane Cluster yet.
:::
diff --git a/docs/platform-operator/monitoring.md b/docs/platform-operator/monitoring.md
index a02d0c9..f05248c 100644
--- a/docs/platform-operator/monitoring.md
+++ b/docs/platform-operator/monitoring.md
@@ -7,7 +7,7 @@ title: Monitoring
The platform components contain facilities to monitor their health. This page describes these
facilities and how to use them.
-## Central Management Cluster
+## Control Plane Cluster
### Crossplane providers
@@ -100,7 +100,7 @@ details about the cause of the failure.
For easier integration in monitoring systems, the anynines provider exposes an HTTP endpoint
accumulating the healthiness of all `ProviderConfigs`. The endpoint is called `/backend-status` and
is reachable on port **8081** of the `provider-anynines` pods. By default the endpoint is not
-exposed in any way. To make it reachable from inside the central management cluster, create a
+exposed in any way. To make it reachable from inside the Control Plane Cluster, create a
service such as this:
```yaml
@@ -157,7 +157,7 @@ NAME READY UP-TO-DATE AVAILABLE AGE
anynines-backend 1/1 1 1 1h
```
-## Developer cluster
+## App Cluster
### Konnector
diff --git a/docs/platform-operator/update-cluster-components/index.md b/docs/platform-operator/update-cluster-components/index.md
index 45825f1..2f1642e 100644
--- a/docs/platform-operator/update-cluster-components/index.md
+++ b/docs/platform-operator/update-cluster-components/index.md
@@ -1,7 +1,7 @@
---
title: Updating cluster components
tags:
- - developer cluster
+ - app cluster
- kubernetes
- a9s data services
- platform operator
@@ -12,7 +12,7 @@ keywords:
This page documents the update process for all Klutch components.
-## Updating provider cluster
+## Updating Control Plane Cluster
### Crossplane runtime
@@ -39,7 +39,7 @@ kubectl patch configurations/anynines-dataservices \
--type merge -p '{"spec":{"package":"public.ecr.aws/w5n9a2g2/anynines/dataservices:v1.3.0"}}'
```
-### Provider backend
+### Control Plane Cluster backend
:::warning
@@ -48,11 +48,11 @@ Please read the change log before updating, and follow any migration instruction
:::
1. Install the latest CRDs for the backend according to the
- [backend installation instructions](../central-management-cluster-setup/index.md#prerequisites-2)
+ [backend installation instructions](../control-plane-cluster-setup/index.md#prerequisites-2)
2. Update the Klutch backend deployment according to the
- [installation instructions](../central-management-cluster-setup/index.md#deploy-the-klutch-backend)
+ [installation instructions](../control-plane-cluster-setup/index.md#deploy-the-klutch-backend)
-3. If the new version also introduces new data service types, follow the binding creation steps
-   [follow the binding creation steps](../central-management-cluster-setup/setup-developer-cluster.md)
-   to install them in consumer clusters.
+3. If the new version also introduces new data service types,
+   [follow the binding creation steps](../control-plane-cluster-setup/setup-app-cluster.md)
+   to install them in App Clusters.
## Downtime during update
@@ -67,9 +67,9 @@ complete.
:::
-## Updating developer cluster
+## Updating App Cluster
-The developer cluster contains only one component: the `konnector` deployment. To update the
+The App Cluster contains only one component: the `konnector` deployment. To update the
-`konnector`, simply change it's container image to the new one. The latest image can be found by
+`konnector`, simply change its container image to the new one. The latest image can be found by
checking out the tab "Image tags" for this image in our
[image registry](https://gallery.ecr.aws/w5n9a2g2/anynines/konnector).
diff --git a/docs/platform-operator/update-cluster-components/update-provider-apis.md b/docs/platform-operator/update-cluster-components/update-provider-apis.md
index 9e76378..344423a 100644
--- a/docs/platform-operator/update-cluster-components/update-provider-apis.md
+++ b/docs/platform-operator/update-cluster-components/update-provider-apis.md
@@ -2,7 +2,7 @@
id: klutch-po-update-provider-apis
title: "Background Info: API binding propagation"
tags:
- - provider cluster
+ - control plane cluster
- kubernetes
- a9s data services
- platform operator
@@ -21,23 +21,23 @@ system or for debugging. This information is not needed for normal system operat
## Journey of an API
In this section we will cover the journey of an API definition through the stack. Initially, the API
-is defined in a Crossplane Configuration Package. When this package is installed to the central
-management cluster, the API definitions are extracted by Crossplane. To make the resource packages
-available for developer clusters, the Platform Operator defines an `APIServiceExporttemplate`. When a
-binding is created the developer cluster will create an `APIServiceExportRequest` on the central
-management cluster.
+is defined in a Crossplane Configuration Package. When this package is installed to the Control
+Plane Cluster, the API definitions are extracted by Crossplane. To make the resource packages
+available for App Clusters, the Platform Operator defines an `APIServiceExportTemplate`. When a
+binding is created, the App Cluster will create an `APIServiceExportRequest` on the Control
+Plane Cluster.
-Upon creation of the `APIServiceExportRequest` the Klutch backend will grant the developer cluster's
+Upon creation of the `APIServiceExportRequest`, the Klutch backend will grant the App Cluster's
Kubernetes service account the necessary permissions to interact with the requested API and its
related objects. Afterwards the Klutch backend creates an `APIServiceExport` object that contains a
snapshot of the bound CRD at the time of binding.
The application developer then applies an `APIServiceBinding` object to their cluster. In the
binding process this is done by executing the `kubectl bind` command. This event is picked up by the
-`Konnector` installed in the developer cluster. The `Konnector` will read the `APIServiceBinding`
-object and attempt to find a matching `APIServiceExport` on the central management cluster. If a
+`Konnector` installed in the App Cluster. The `Konnector` will read the `APIServiceBinding`
+object and attempt to find a matching `APIServiceExport` on the Control Plane Cluster. If a
matching Object is found the `Konnector` reads the API schema from the `APIServiceExport` and
-creates a Custom Resource Definition (CRD) with a matching schema on the developer cluster. This
+creates a Custom Resource Definition (CRD) with a matching schema on the App Cluster. This
process runs continuously and will pick up changes and new APIs as they are added.
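The binding reconciliation described above can be sketched as a toy loop step (all names, fields, and types below are illustrative stand-ins, not the real Klutch or kube-bind API):

```python
# Toy model of the Konnector reconciliation described above: for each
# APIServiceBinding, find the matching APIServiceExport on the Control Plane
# Cluster and mirror its schema as a CRD on the App Cluster.

exports = {  # APIServiceExports on the Control Plane Cluster, keyed by name
    "postgresqlinstances": {"kind": "PostgresqlInstance", "version": "v1"},
}

local_crds = {}  # CRDs the Konnector materializes on the App Cluster

def reconcile_binding(binding_name):
    """Mirror the matching export's schema locally; return True if bound."""
    export = exports.get(binding_name)
    if export is None:
        return False  # no matching APIServiceExport yet; retry later
    local_crds[binding_name] = dict(export)  # create/update the local CRD
    return True

assert reconcile_binding("postgresqlinstances")
print(local_crds["postgresqlinstances"])  # {'kind': 'PostgresqlInstance', 'version': 'v1'}
```

Because the real process runs continuously, a change to an export is picked up on a later pass and the local CRD converges to the exported schema.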
## Updating provider APIs
@@ -55,18 +55,18 @@ does not have any breaking changes.
Klutch will not introduce breaking changes to the data services APIs until safe migrations are
supported.
-Coming soon, updates to APIs on the central management cluster will be automatically distributed to
-the developer clusters.
+Coming soon, updates to APIs on the Control Plane Cluster will be automatically distributed to
+the App Clusters.
When a change to a CRD that is referenced by an `APIServiceExportTemplate` is detected, all
-`APIServiceExport`s will be modified to include the new change. The `Konnector` on developer clusters
+`APIServiceExport`s will be modified to include the new change. The `Konnector` on App Clusters
-will detect this change in the `APIServiceExport` and update the local CRDs acccordingly.
+will detect this change in the `APIServiceExport` and update the local CRDs accordingly.
## Adding new APIs
-Adding a new API - e.g. a new data dervice - requires a new binding creation. This means the
-creation of a `APIServiceExport` on the central management cluster and the creation of a
-`APIServiceBinding` on the developer cluster.
+Adding a new API - e.g. a new data service - requires creating a new binding. This means the
+creation of an `APIServiceExport` on the Control Plane Cluster and the creation of an
+`APIServiceBinding` on the App Cluster.
The easiest way to create them is to follow the process for new bindings using `kubectl bind` as
described above.