
Merge pull request #11 from anynines/standarize_vocabulary_used_in_docs
Standarize vocabulary used in docs
iliasmavridis authored Sep 26, 2024
2 parents 2e36ef8 + 0026d92 commit 972f3f2
Showing 31 changed files with 155 additions and 157 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -5,7 +5,7 @@
### Changed

- renamed backend resources for bindings.
Changed the namespace used for bindings on the consumer clusters from `kube-bind` to `klutch-bind`.
Changed the namespace used for bindings on the App Clusters from `kube-bind` to `klutch-bind`.

This change automatically applies to new bindings.

2 changes: 1 addition & 1 deletion docs/architecture_overview.svg
34 changes: 17 additions & 17 deletions docs/core_concepts.md
@@ -22,20 +22,20 @@ fundamental to understanding Klutch's architecture and operation:

Klutch operates on a multi-cluster model:

- **Central Management Cluster:**
- **Control Plane Cluster:**
- Hosts Crossplane and its providers
- Runs the bind backend
- **Developer Cluster:**
- **App Cluster:**
- Hosts klutch-bind CLI for creating bindings to remote resources
- Hosts the konnector for state synchronization between the Custom Resources (CRs) in the central and developer cluster(s)
- Hosts the konnector for state synchronization between the Custom Resources (CRs) in the Control Plane and App cluster(s)

### 2. State Synchronization

The konnector component performs bidirectional state synchronization:

- Watches for changes in CRs on developer clusters
- Propagates these changes to the central management cluster
- Updates the status of resources in developer clusters based on the actual state in the central cluster
- Watches for changes in CRs on App Clusters
- Propagates these changes to the Control Plane Cluster
- Updates the status of resources in App Clusters based on the actual state in the Control Plane Cluster
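
For example, a quick way to observe this loop from the App Cluster side (the resource type and name below are
placeholders) is to watch a bound resource and the konnector's logs:

```bash
# Placeholders only — substitute a bound resource type and name that exist in your App Cluster.
kubectl get postgresqlinstances example-instance -w    # status fields update once state syncs back
kubectl logs -n klutch-bind deployment/konnector       # shows the konnector's synchronization activity
```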

### 3. Authentication and Authorization

@@ -44,33 +44,33 @@ Klutch implements a token-based auth system:
- Uses OIDC for initial authentication
- The bind backend verifies tokens with the OIDC provider (e.g., Keycloak)

### 4. Remote / Proxy Resources
### 4. Proxy Claims

To manage remote resources, Klutch uses the concept of proxy resources:
To manage remote resources, Klutch uses the concept of Proxy Claims:

- Proxy resources are CRs that represent remote resources in the central cluster
- Proxy resources map to [Crossplane Composite Resources (XRs)](https://docs.crossplane.io/master/concepts/composite-resources/)
- The resource management Klutch does is all based on the yaml files the user manages on their consumer clusters.
- Proxy Claims are applied in the App Cluster.
- These Proxy Claims are mapped to [Crossplane Composite Resources Claims (XRCs)](https://docs.crossplane.io/latest/concepts/claims/)
in the Control Plane Cluster.
- The App Clusters are the source of truth for what resources should exist.
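
As a rough sketch, the same logical resource can be inspected from both sides. The kubectl context names are
placeholders, and Crossplane's generated claim CRDs are typically listable via the `claim` category:

```bash
# Proxy Claim as the developer sees it in the App Cluster:
kubectl --context <app-cluster-context> get postgresqlinstances
# Corresponding Crossplane claim (XRC) in the Control Plane Cluster:
kubectl --context <control-plane-context> get claim --all-namespaces
```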

### 5. Binding Mechanism

Klutch's binding mechanism enables API usage across multiple Kubernetes clusters:

- **Klutch-bind CLI:** enables usage of Klutch's APIs across different Kubernetes clusters by synchronizing state
between the management cluster and API consumer clusters. The CLI
initiates the OIDC auth process and installs the konnector into the user's cluster.
between the Control Plane Cluster and API App Clusters. The CLI initiates the OIDC auth process and installs the
konnector into the user's cluster.

- **bind backend:** The backend authenticates new users via OIDC before creating a binding space on the consumer cluster
- **bind backend:** The backend authenticates new users via OIDC before creating a binding space on the App Cluster
for them. The backend implementation is open to different approaches, as long as they follow the standard.

- **konnector:** this component gets installed in the consumer's cluster and is responsible for synchronization between
the management cluster and the consumer cluster.
- **konnector:** this component gets installed in the App Cluster and is responsible for synchronization between
the Control Plane Cluster and the App Cluster.
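
For illustration, the binding command that the a9s CLI constructs in the local deployment tutorial looks like this
(the export URL, konnector image tag, and kubeconfig context are environment-specific):

```bash
kubectl bind http://192.168.0.91:8080/export \
  --konnector-image public.ecr.aws/w5n9a2g2/anynines/konnector:v1.3.0 \
  --context kind-klutch-app
```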

### 6. Provider Integration

Klutch leverages [Crossplane's provider](https://docs.crossplane.io/master/concepts/providers/) model:

- Supports any provider that adheres to Crossplane's provider specification
- Platform operators can install and configure providers in the central cluster
- Platform operators can install and configure providers in the Control Plane Cluster
- Providers handle the actual interaction with cloud APIs or infrastructure management tools
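
As a minimal sketch, installing an additional provider in the Control Plane Cluster could look like the following;
`Provider` from `pkg.crossplane.io/v1` is standard Crossplane, but the package reference here is only an example:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-kubernetes
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.14.0  # example package and version
EOF
```
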
2 changes: 1 addition & 1 deletion docs/for-developers/developer_interactions.svg
22 changes: 11 additions & 11 deletions docs/for-developers/index.md
@@ -24,16 +24,16 @@ manage cloud or on-premise resources directly from your Kubernetes cluster using

Before you begin, ensure you have:

1. [kubectl](https://kubernetes.io/docs/tasks/tools/) installed and configured to interact with your developer cluster.
1. [kubectl](https://kubernetes.io/docs/tasks/tools/) installed and configured to interact with your App Cluster.
2. Access to a Klutch-enabled Kubernetes cluster.

If your cluster wasn't set up by a platform operator, you need to use the klutch-bind CLI to connect to the central
management cluster and bind to the resources you intend to use. For instructions on using the klutch-bind CLI, refer
If your cluster wasn't set up by a platform operator, you need to use the klutch-bind CLI to connect to the Control
Plane Cluster and bind to the resources you intend to use. For instructions on using the klutch-bind CLI, refer
to the ["For Platform Operators"](../platform-operator/index.md) section.

## Available Resource Types

The types of resources you can create depend on the service bindings configured in your developer cluster. These can
The types of resources you can create depend on the service bindings configured in your App Cluster. These can
include databases, message queues, storage solutions, or any other services available in supported cloud providers or
on-premise infrastructure.
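
A simple way to discover which bound resource types are currently available in your App Cluster (the exact API groups
and kinds depend on your bindings) is:

```bash
kubectl api-resources | grep -i -E 'postgresql|servicebinding'
```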

@@ -133,24 +133,24 @@ If you encounter issues while creating or managing resources, the following step
resolve the problem. Depending on your specific situation and level of access, some or all of these steps may be
applicable:

1. Check the resource status and events in your developer cluster:
1. Check the resource status and events in your App Cluster:

```bash
kubectl describe <resource-type> <resource-name>
```

Look for events or status messages that might indicate the issue.

2. Examine the logs of konnector (the component of Klutch running in your developer cluster):
2. Examine the logs of konnector (the component of Klutch running in your App Cluster):

```bash
kubectl logs -n klutch-bind deployment/konnector
```

This may show issues related to the communication between your developer cluster and the central management cluster.
This may show issues related to the communication between your App Cluster and the Control Plane Cluster.

3. If you have access to the central management cluster and are familiar with the Crossplane setup and configuration,
you can perform additional troubleshooting steps in the central management cluster. Refer to the latest [official Crossplane troubleshooting guide](https://docs.crossplane.io/latest/guides/troubleshoot-crossplane/) for comprehensive instructions.
3. If you have access to the Control Plane Cluster and are familiar with the Crossplane setup and configuration,
you can perform additional troubleshooting steps in the Control Plane Cluster. Refer to the latest [official Crossplane troubleshooting guide](https://docs.crossplane.io/latest/guides/troubleshoot-crossplane/) for comprehensive instructions.

Some key steps you can take include:

@@ -196,7 +196,7 @@ applicable:
kubectl get packages
```

Remember that as a developer, your access is typically limited to your developer cluster. Many advanced troubleshooting
steps, especially those involving the central management cluster or Crossplane configuration, may require collaboration
Remember that as a developer, your access is typically limited to your App Cluster. Many advanced troubleshooting
steps, especially those involving the Control Plane Cluster or Crossplane configuration, may require collaboration
with your platform operators or additional permissions. If you suspect a bug in Klutch, please consider opening an issue
in the relevant GitHub repository with a detailed description of the problem and steps to reproduce it.
2 changes: 1 addition & 1 deletion docs/for-developers/tutorials/developer_tutorial.svg
83 changes: 41 additions & 42 deletions docs/for-developers/tutorials/index.md
@@ -13,18 +13,18 @@ keywords:
## Local Klutch Deployment

This tutorial guides you through deploying Klutch locally using two interconnected local clusters that simulate an
developer and a central management cluster. It covers setting up both clusters, connecting them using Klutch, and
showcases how developers can request and utilize resources in the developer cluster that are actually provisioned in the
central management cluster.
App Cluster and a Control Plane Cluster. It covers setting up both clusters and connecting them using Klutch, and
shows how developers can request and use resources in the App Cluster that are actually provisioned in the Control
Plane Cluster.

### Overview

In this tutorial, you'll perform the following steps in your local environment:

1. Deploy a central management cluster (which will also host resources)
2. Set up a developer cluster
3. Bind APIs from the development to Central management cluster
4. Create and use remote resources from the development cluster (in this case Postgresql service)
1. Deploy a Control Plane Cluster (which will also host resources)
2. Set up an App Cluster
3. Bind APIs from the App Cluster to the Control Plane Cluster
4. Create and use remote resources from the App Cluster (in this case, a PostgreSQL service)

We'll use the open source [a9s CLI](https://github.com/anynines/a9s-cli-v2) to streamline this process, making it easy
to follow along and understand each step.
@@ -47,7 +47,7 @@ If you work with Kubernetes regularly, you probably have these standard tools al

To follow along with this tutorial, you need to install the following specialized tools:

1. [kubectl-bind](https://docs.k8s.anynines.com/docs/develop/platform-operator/central-management-cluster-setup/#binding-a-consumer-cluster-interactive)
1. [kubectl-bind](https://docs.k8s.anynines.com/docs/develop/platform-operator/central-management-cluster-setup/#binding-an-app-cluster-interactive)
2. [a9s cli](https://docs.a9s-cli.anynines.com/docs/a9s-cli/)

### Network Access
@@ -63,14 +63,13 @@ Ensure your machine can reach the following external resources:

## Step 1: Run the Deployment Command

In this step, we'll set up both the central management cluster and the developer cluster for Klutch using **a single
command**. This command will install all components needed by Klutch, including the a8s framework with the PostgreSQL
operator.
In this step, we'll set up both the Control Plane Cluster and the App Cluster for Klutch using **a single command**.
This command will install all components needed by Klutch, including the a8s framework with the PostgreSQL operator.

:::note

This step does not automatically create bindings between the developer cluster and the resources in the central
management cluster. You'll need to create these bindings using a web UI in a later step.
This step does not automatically create bindings between the App Cluster and the resources in the Control Plane Cluster.
You'll need to create these bindings using a web UI in a later step.

:::

@@ -82,17 +81,17 @@ a9s klutch deploy --port 8080 --yes

:::note

- The ```--port 8080``` flag specifies the port on which the central management cluster's ingress will listen. You can
- The ```--port 8080``` flag specifies the port on which the Control Plane Cluster's ingress will listen. You can
change this if needed.
- The ```--yes``` flag skips all confirmation prompts, speeding up the process.

:::

What this command does:

1. Deploys the central management (producer) cluster with all required components.
1. Deploys the Control Plane Cluster with all required components.
2. Installs the [a8s framework](https://k8s.anynines.com/for-postgres/) with the PostgreSQL Kubernetes operator.
3. Creates a developer (consumer) Kind cluster.
3. Creates an App Cluster with Kind.
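
If you want to confirm that both clusters were created, a quick check is the following (the cluster names shown are
assumptions based on the current CLI output and may differ):

```bash
kind get clusters
# Expected to list something like:
#   klutch-control-plane
#   klutch-app
```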

:::tip

@@ -103,7 +102,7 @@ For a hands-off deployment, keep the `--yes` flag to skip all prompts.

:::

### 1.1 Central Management (Producer) Cluster Deployment
### 1.1 Control Plane Cluster Deployment

The CLI automatically:

@@ -126,10 +125,10 @@ You'll see progress updates and YAML files being applied for each component.

After setting up the management cluster, the CLI:

- Creates a new Kind cluster named "klutch-consumer"
- Creates a new Kind cluster named "klutch-app"

At the moment this is an empty Kind cluster. Klutch components will be added in the next step, when the consumer
cluster is "bound" to the central management cluster. Stay tuned!
At the moment this is an empty Kind cluster. Klutch components will be added in the next step, when the App Cluster is
"bound" to the Control Plane Cluster. Stay tuned!

### Deployment Output

Expand All @@ -143,7 +142,7 @@ Checking Prerequisites...
✅ Found kind at path /opt/homebrew/bin/kind.
...

Creating cluster "klutch-management"...
Creating cluster "klutch-control-plane"...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
@@ -163,8 +162,8 @@ Applying the a8s Data Service manifests...
✅ The a8s System appears to be ready.
...

Deploying a Consumer Kind cluster...
Creating cluster "klutch-consumer" ...
Deploying an App Cluster with Kind...
Creating cluster "klutch-app" ...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
@@ -173,24 +172,24 @@ Creating cluster "klutch-consumer" ...

Summary
You've successfully accomplished the following steps:
✅ Deployed a Klutch management Kind cluster.
✅ Deployed a Klutch Control Plane Cluster with Kind.
✅ Deployed Dex Idp and the anynines klutch-bind backend.
✅ Deployed Crossplane and the Kubernetes provider.
✅ Deployed the Klutch Crossplane configuration package.
✅ Deployed Klutch API Service Export Templates to make the Klutch Crossplane APIs available to consumer clusters.
✅ Deployed Klutch API Service Export Templates to make the Klutch Crossplane APIs available to App Clusters.
✅ Deployed the a8s Stack.
✅ Deployed a consumer cluster.
🎉 You are now ready to bind APIs from the consumer cluster using the `a9s klutch bind` command.
✅ Deployed an App Cluster.
🎉 You are now ready to bind APIs from the App Cluster using the `a9s klutch bind` command.
```
## Step 2: Bind Resource APIs from the Consumer Cluster
## Step 2: Bind Resource APIs from the App Cluster
After setting up both clusters, the next step is to bind APIs from the consumer cluster to the management cluster. We'll
After setting up both clusters, the next step is to bind APIs from the App Cluster to the Control Plane Cluster. We'll
bind two APIs: `postgresqlinstance` and `servicebinding`.

This operation also sets up an agent in the cluster to keep resources in sync between the consumer cluster and the
central management cluster.
This operation also sets up an agent in the cluster to keep resources in sync between the App Cluster and the Control
Plane Cluster.

Execute the following command to initiate the binding process:

@@ -221,7 +220,7 @@ Checking Prerequisites...
...
The following command will be executed for you:
/opt/homebrew/bin/kubectl bind http://192.168.0.91:8080/export --konnector-image public.ecr.aws/w5n9a2g2/anynines/konnector:v1.3.0 --context kind-klutch-consumer
/opt/homebrew/bin/kubectl bind http://192.168.0.91:8080/export --konnector-image public.ecr.aws/w5n9a2g2/anynines/konnector:v1.3.0 --context kind-klutch-app
```
Next, a browser window will open for authentication. Use these demo credentials:
@@ -252,7 +251,7 @@ Do you accept this Permission? [No,Yes]
You've successfully accomplished the following steps:
✅ Called the kubectl bind plugin to start the interactive binding process
✅ Authorized the management cluster to manage the selected API on your consumer cluster.
✅ Authorized the management cluster to manage the selected API on your App Cluster.
✅ You've bound the postgresqlinstances resource. You can now apply instances of this resource, for example with the
following yaml:
@@ -269,12 +268,12 @@ spec:
name: a8s-postgresql
```

To bind to `servicebinding`, repeat the [same process](#step-2-bind-resource-apis-from-the-consumer-cluster), but click
To bind to `servicebinding`, repeat the [same process](#step-2-bind-resource-apis-from-the-app-cluster), but click
on `Bind` under the `servicebinding` API in the web UI.
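
Once both bindings are in place, a quick way to confirm that the sync agent is running in your App Cluster is:

```bash
kubectl get deployment konnector -n klutch-bind
```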

## Step 3: Create and Use a PostgreSQL Instance

After binding the PostgresqlInstance, you can create and use PostgreSQL instances in your consumer cluster. This section
After binding the PostgresqlInstance, you can create and use PostgreSQL instances in your App Cluster. This section
will guide you through creating an instance and using it with a simple blogpost application.

### 3.1 Create a PostgreSQL Instance
@@ -297,7 +296,7 @@ spec:

<a href="/dev_files/pg-instance.yaml" target="_blank" download>OR download the yaml manifest</a>

Apply the file to your developer cluster:
Apply the file to your App Cluster:

```bash
kubectl apply -f pg-instance.yaml
@@ -333,7 +332,7 @@ kubectl apply -f service-binding.yaml
### 3.3 Configure Local Network for Testing
Before deploying our application, we need to configure the local network to make the PostgreSQL service available in the
developer cluster. This step is for local testing purposes and may vary significantly in a production environment.
App Cluster. This step is for local testing purposes and may vary significantly in a production environment.
Create a file named `external-pg-service.yaml` with the following content:
@@ -367,11 +366,11 @@ Apply the file:
kubectl apply -f <(eval "echo \"$(cat external-pg-service.yaml)\"")
```
#### Set up port forwarding in the management cluster
#### Set up port forwarding in the Control Plane Cluster
a. Open a new terminal window.
b. Switch the kubectl context to the management cluster:
b. Switch the kubectl context to the Control Plane Cluster:
```bash
kubectl config use-context kind-klutch-management
@@ -397,7 +396,7 @@ kubectl apply -f <(eval "echo \"$(cat external-pg-service.yaml)\"")
### 3.4 Deploy a Blogpost Application
Now, let's deploy a simple blogpost application that uses our PostgreSQL service. Return to the terminal window where
your **kubectl context** is set to the **developer cluster**.
your **kubectl context** is set to the **App Cluster**.

Create a file named `blogpost-app.yaml` with the following content:

Expand Down Expand Up @@ -495,12 +494,12 @@ If you need to start over or remove the clusters created by Klutch, use the foll
a9s klutch delete
```

This command will remove both the management and developer clusters that were created during the Klutch deployment
This command will remove both the Control Plane Cluster and the App Cluster that were created during the Klutch deployment
process.

:::note

Use this command with caution as it will delete all resources and data in both the management and developer clusters.
Use this command with caution as it will delete all resources and data in both the Control Plane and App Clusters.
Make sure to back up any important data before proceeding.

:::