Very minor capitalization, punctuation, and Kubernetes style guide (#37)
adjustments for:

- Hosted control plane documentation for AWS, Azure, and vSphere.
- Template parameters for AWS.
- Expanded glossary entries with new definitions.
- Content in BYO templates and main template.
randybias authored Nov 7, 2024
1 parent 85c72b2 commit dfd8770
Showing 7 changed files with 57 additions and 33 deletions.
16 changes: 8 additions & 8 deletions docs/clustertemplates/aws/hosted-control-plane.md
@@ -1,14 +1,14 @@
# AWS Hosted control plane deployment

This section covers setting up for a K0smotron hosted control plane on AWS.
This section covers setting up for a k0smotron hosted control plane on AWS.

## Prerequisites

- Management Kubernetes cluster (v1.28+) deployed on AWS with HMC installed on it
- Default storage class configured on the management cluster
- VPC id for the worker nodes
- VPC ID for the worker nodes
- Subnet ID which will be used along with AZ information
- AMI id which will be used to deploy worker nodes
- AMI ID which will be used to deploy worker nodes

Keep in mind that all control plane components for all managed clusters will
reside in the management cluster.
@@ -31,30 +31,30 @@ and other provider controllers will need a large amount of resources to run.
**VPC ID**

```bash
kubectl get awscluster <cluster name> -o go-template='{{.spec.network.vpc.id}}'
kubectl get awscluster <cluster-name> -o go-template='{{.spec.network.vpc.id}}'
```

**Subnet ID**

```bash
kubectl get awscluster <cluster name> -o go-template='{{(index .spec.network.subnets 0).resourceID}}'
kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).resourceID}}'
```

**Availability zone**

```bash
kubectl get awscluster <cluster name> -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}'
kubectl get awscluster <cluster-name> -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}'
```

**Security group**
```bash
kubectl get awscluster <cluster name> -o go-template='{{.status.networkStatus.securityGroups.node.id}}'
kubectl get awscluster <cluster-name> -o go-template='{{.status.networkStatus.securityGroups.node.id}}'
```

**AMI id**

```bash
kubectl get awsmachinetemplate <cluster name>-worker-mt -o go-template='{{.spec.template.spec.ami.id}}'
kubectl get awsmachinetemplate <cluster-name>-worker-mt -o go-template='{{.spec.template.spec.ami.id}}'
```
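
For convenience, the values gathered above can be captured into shell variables before they are plugged into the hosted control plane template values. A minimal sketch (the cluster name and variable names are illustrative, not part of this commit):

```bash
# Illustrative helper: collect the values queried above into shell variables.
CLUSTER=<cluster-name>   # placeholder

VPC_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{.spec.network.vpc.id}}')
SUBNET_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{(index .spec.network.subnets 0).resourceID}}')
AZ=$(kubectl get awscluster "$CLUSTER" -o go-template='{{(index .spec.network.subnets 0).availabilityZone}}')
SG_ID=$(kubectl get awscluster "$CLUSTER" -o go-template='{{.status.networkStatus.securityGroups.node.id}}')
AMI_ID=$(kubectl get awsmachinetemplate "$CLUSTER"-worker-mt -o go-template='{{.spec.template.spec.ami.id}}')

echo "VPC=$VPC_ID SUBNET=$SUBNET_ID AZ=$AZ SG=$SG_ID AMI=$AMI_ID"
```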

If you want to use different VPCs/regions for your management or managed
2 changes: 1 addition & 1 deletion docs/clustertemplates/aws/template-parameters.md
@@ -2,7 +2,7 @@

## AWS AMI

By default AMI id will be looked up automatically using the latest Amazon Linux 2 image.
By default AMI ID will be looked up automatically using the latest Amazon Linux 2 image.

You can override lookup parameters to search your desired image automatically or
use AMI ID directly.
20 changes: 10 additions & 10 deletions docs/clustertemplates/azure/hosted-control-plane.md
@@ -22,43 +22,43 @@ If you deployed your Azure Kubernetes cluster using Cluster API Provider Azure
**Location**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{.spec.location}}'
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.location}}'
```

**Subscription ID**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{.spec.subscriptionID}}'
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.subscriptionID}}'
```

**Resource group**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{.spec.resourceGroup}}'
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.resourceGroup}}'
```

**vnet name**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{.spec.networkSpec.vnet.name}}'
kubectl get azurecluster <cluster-name> -o go-template='{{.spec.networkSpec.vnet.name}}'
```

**Subnet name**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{(index .spec.networkSpec.subnets 1).name}}'
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).name}}'
```

**Route table name**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{(index .spec.networkSpec.subnets 1).routeTable.name}}'
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).routeTable.name}}'
```

**Security group name**

```bash
kubectl get azurecluster <cluster name> -o go-template='{{(index .spec.networkSpec.subnets 1).securityGroup.name}}'
kubectl get azurecluster <cluster-name> -o go-template='{{(index .spec.networkSpec.subnets 1).securityGroup.name}}'
```


@@ -112,7 +112,7 @@ spec:
Then you can render it using the command:
```bash
kubectl get azurecluster <management cluster name> -o go-template="$(cat template.yaml)"
kubectl get azurecluster <management-cluster-name> -o go-template="$(cat template.yaml)"
```

## Cluster creation
@@ -124,7 +124,7 @@ the `AzureCluster` object due to current limitations (see
To do so you need to execute the following command:

```bash
kubectl patch azurecluster <cluster name> --type=merge --subresource status --patch 'status: {ready: true}'
kubectl patch azurecluster <cluster-name> --type=merge --subresource status --patch 'status: {ready: true}'
```
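
To confirm the patch took effect, the status can be checked afterwards (an illustrative sanity check, not part of this commit):

```bash
# Expected to print "true" once the AzureCluster has been marked ready.
kubectl get azurecluster <cluster-name> -o go-template='{{.status.ready}}'
```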

## Important notes on the cluster deletion
@@ -139,7 +139,7 @@ which will cause cluster deletion to stuck indefinitely.
To place finalizer you can execute the following command:

```bash
kubectl patch azurecluster <cluster name> --type=merge --patch 'metadata: {finalizers: [manual]}'
kubectl patch azurecluster <cluster-name> --type=merge --patch 'metadata: {finalizers: [manual]}'
```
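
Before removing the `ManagedCluster`, the finalizer can be verified with a quick check (illustrative, not part of this commit):

```bash
# Expected to include the manually added "manual" finalizer.
kubectl get azurecluster <cluster-name> -o go-template='{{.metadata.finalizers}}'
```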

When finalizer is placed you can remove the `ManagedCluster` as usual. Check that
13 changes: 6 additions & 7 deletions docs/clustertemplates/vsphere/hosted-control-plane.md
@@ -14,13 +14,12 @@ reside in the management cluster.
Hosted CP template has mostly identical parameters with standalone CP, you can
check them in the [template parameters](template-parameters.md) section.

> NOTE: **Important note on control plane endpoint IP**
> Since vSphere provider requires that user will provide control plane endpoint
> IP before deploying the cluster you should make sure that this IP will be the
> same that will be assigned to the k0smotron LB service. Thus you must provide
> control plane endpoint IP to the k0smotron service via annotation which is
> accepted by your LB provider (in the following example `kube-vip` annotation
> is used)
> NOTE: **Important Note on Control Plane Endpoint IP Address**
> The vSphere provider requires the control plane endpoint IP to be specified
> before deploying the cluster. Ensure that this IP matches the IP assigned to
> the k0smotron load balancer (LB) service. Provide the control plane endpoint
> IP to the k0smotron service via an annotation accepted by your LB provider
> (e.g., the `kube-vip` annotation in the example below).
```yaml
apiVersion: hmc.mirantis.com/v1alpha1
25 changes: 25 additions & 0 deletions docs/glossary.md
@@ -11,10 +11,26 @@ controller, CNI, and/or CSI. While from the perspective of how they are deployed
they are no different from other Kubernetes services, we define them as distinct
from the apps and services deployed as part of the applications.

### Cluster API (CAPI)
CAPI is a Kubernetes project that provides a declarative way to manage the
lifecycle of Kubernetes clusters. It abstracts the underlying infrastructure,
allowing users to create, scale, upgrade, and delete clusters using a
consistent API. CAPI is extensible via providers that offer infrastructure-
specific functionality, such as AWS, Azure, and vSphere.
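
For illustration only (not part of this change), a minimal CAPI `Cluster` object ties the cluster to provider-specific control plane and infrastructure objects; the names and referenced kinds below are hypothetical:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-cluster               # hypothetical name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:                    # provider-specific control plane object
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-control-plane
  infrastructureRef:                  # provider-specific infrastructure object
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: example-cluster
```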

### CAPI provider (see also [Infrastructure provider](#infrastructure-provider-see-also-capi-provider))
A CAPI provider is a Kubernetes CAPI extension that allows 2A to manage and
drive the creation of clusters on a specific infrastructure via API calls.

### CAPA
CAPA stands for Cluster API Provider for AWS.

### CAPV
CAPV stands for Cluster API Provider for vSphere.

### CAPZ
CAPZ stands for Cluster API Provider for Azure.

### Cloud Controller Manager
Cloud Controller Manager (CCM) is a Kubernetes component that embeds logic to
manage a specific infrastructure provider.
@@ -30,6 +46,15 @@ references other CRs with infrastructure-specific credentials such as access
keys, passwords, certificates, etc. This means that a credential is specific to
the CAPI provider that uses it.

### Hosted Control Plane (HCP)
An HCP is a Kubernetes control plane that runs outside of the clusters it
manages. Instead of running the control plane components (like the API server,
controller manager, and etcd) within the same cluster as the worker nodes, the
control plane is hosted on a separate, often centralized, infrastructure. This
approach can provide benefits such as easier management, improved security, and
better resource utilization, as the control plane can be scaled independently
of the worker nodes.

### Infrastructure provider (see also [CAPI provider](#capi-provider-see-also-infrastructure-provider))
An infrastructure provider (aka `InfrastructureProvider`) is a Kubernetes custom
resource (CR) that defines the infrastructure-specific configuration needed for
10 changes: 5 additions & 5 deletions docs/template/byo-templates.md
@@ -7,7 +7,7 @@ external Helm repository. Label it with `hmc.mirantis.com/managed: "true"`.
2. Create a [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) object referencing the `HelmRepository` as a
`sourceRef`, specifying the name and version of your Helm chart. Label it with `hmc.mirantis.com/managed: "true"`.
3. Create a `ClusterTemplate`, `ServiceTemplate` or `ProviderTemplate` object referencing this helm chart in
`spec.helm.chartRef`. `chartRef` is a field of the
`.spec.helm.chartRef`. `chartRef` is a field of the
[CrossNamespaceSourceReference](https://fluxcd.io/flux/components/helm/api/v2/#helm.toolkit.fluxcd.io/v2.CrossNamespaceSourceReference) kind.
For `ClusterTemplate` and `ServiceTemplate` configure the namespace where this template should reside
(`metadata.namespace`).
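
As an illustration of steps 1–3 (not part of this commit; resource names, chart name, and versions are hypothetical, and the Flux API versions depend on the Flux release in use):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: my-templates                       # hypothetical
  namespace: hmc-system
  labels:
    hmc.mirantis.com/managed: "true"
spec:
  url: https://charts.example.com          # hypothetical Helm repository URL
  interval: 10m
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmChart
metadata:
  name: my-cluster-template                # hypothetical
  namespace: hmc-system
  labels:
    hmc.mirantis.com/managed: "true"
spec:
  chart: my-cluster-template               # hypothetical chart name
  version: 0.1.0
  interval: 10m
  sourceRef:
    kind: HelmRepository
    name: my-templates
---
apiVersion: hmc.mirantis.com/v1alpha1
kind: ClusterTemplate
metadata:
  name: my-cluster-template                # hypothetical
  namespace: hmc-system
spec:
  helm:
    chartRef:
      kind: HelmChart
      name: my-cluster-template
      namespace: hmc-system
```
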
@@ -75,7 +75,7 @@ spec:
The `*Template` should follow the rules mentioned below:

`spec.providers` should contain the list of required Cluster API providers: `infrastructure`, `bootstrap` and
`.spec.providers` should contain the list of required Cluster API providers: `infrastructure`, `bootstrap` and
`control-plane`. As an alternative, the referenced helm chart may contain the specific annotations in the `Chart.yaml`
(value is a list of providers divided by comma). These fields are only used for validation. For example:

@@ -153,7 +153,7 @@ Given compatibility attributes will be then set accordingly in the `.status` field.
Compatibility contract versions are key-value pairs, where the key is **the name of the provider**,
and the value is the provider contract version required to be supported by the provider.

Example with the `spec`:
Example with the `.spec`:

```yaml
apiVersion: hmc.mirantis.com/v1alpha1
@@ -171,7 +171,7 @@ and the value is the provider contract version required to be supported by the provider.
infrastructure-aws: v1beta2
```

Example with the `annotations` in the `Chart.yaml`:
Example with the `.annotations` in the `Chart.yaml`:

```yaml
annotations:
@@ -186,7 +186,7 @@ Kubernetes version to match against the related `ClusterTemplate` objects.
Kubernetes version to match against the related `ClusterTemplate` objects.
Given compatibility values will be then set accordingly in the `.status` field.

Example with the `spec`:
Example with the `.spec`:

```yaml
apiVersion: hmc.mirantis.com/v1alpha1
4 changes: 2 additions & 2 deletions docs/template/main.md
@@ -23,7 +23,7 @@ and the upgrade sequences for them.
The example of the Cluster Template Management:

1. Create `ClusterTemplateChain` object in the system namespace (defaults to `hmc-system`). Properly configure
the list of `availableUpgrades` for the specified `ClusterTemplate` if the upgrade is allowed. For example:
the list of `.spec.supportedTemplates[].availableUpgrades` for the specified `ClusterTemplate` if the upgrade is allowed. For example:

```yaml
apiVersion: hmc.mirantis.com/v1alpha1
@@ -39,7 +39,7 @@ spec:
- name: aws-standalone-cp-0-0-2
```
2. Edit `TemplateManagement` object and configure the `spec.accessRules`.
2. Edit `TemplateManagement` object and configure the `.spec.accessRules`.
For example, to apply all templates and upgrade sequences defined in the `aws` `ClusterTemplateChain` to the
`default` namespace, the following `accessRule` should be added:

