Document plural up
Also backfills some missing Terraform docs. I think we probably need to devote an entire section to the Terraform provider, but not handling that for now. We'd also ideally embed a getting-started video in here.
michaeljguarino committed Feb 2, 2024
1 parent 2ff55cc commit 344a632
Showing 7 changed files with 219 additions and 26 deletions.
57 changes: 44 additions & 13 deletions pages/deployments/cli-quickstart.md
@@ -14,36 +14,67 @@ This guide goes over how to deploy your services with the Plural CLI. At the end

## Onboard to Plural and install the Plural Console

If you haven't already, you'll need to follow the Plural guide to install Console. You can use the guide for the [in-browser Cloud Shell](/getting-started/cloud-shell-quickstart) or the [CLI](/getting-started/quickstart) to get started.
If you haven't already, you'll need to follow the Plural guide to install Console. There are two recommended ways to do this:

{% callout severity="info" %}
`plural cd` is an alias for `plural deployments`, and can be used interchangeably within the CLI.
- [Bring Your Own Cluster](/deployments/existing-cluster) - you've created a Kubernetes cluster your way with all the main prerequisites. You can use Helm to install the management plane, then use the Console to manage itself from there.
- `plural up` - a single command to create a new management cluster on the major clouds, wire up a basic GitOps setup, and get you started.

Both are pretty flexible, and even if you choose the BYOK method, we recommend looking through some of our example `plural up` repos for ideas on how to use our CRDs and Terraform provider alongside the other tools they'll commonly touch. You can see an example `plural up` repository [here](https://github.com/pluralsh/plural-up-demo).

## Use `plural up` to create your first management console

First you'll want to follow our guide [here](/getting-started/cli-quickstart) to install our CLI. Once you've done that, simply run:

```sh
plural up # optionally add --service-account <email> to manage this console under a shared service account
```

{% callout severity="warn" %}
`plural up` is best run in an empty repo. That lets it OAuth to GitHub/GitLab and create the repository for you, and register deploy keys so your new console can pull from it.
{% /callout %}

## Set Environment Variables
This will do a few things:

- create a new repo to house your IaC and YAML manifests
- execute Terraform to create the new cluster
- execute another Terraform stack to provision the GitOps setup for the Plural Console and any other services you'd like to deploy from that repo

We've also generated a README that gives an overview of how the repo can be used for things like:

- creating and registering new workload clusters with Terraform
- adding new services in the main infra repo
- handling updates to the cluster Terraform at your own pace

If you haven't already, you'll need to set your Console URL and Console token. Set them with:
## Set Up the `plural cd` CLI

If you'd like to configure the Plural CLI to communicate with your new Console instance, the configuration process is simple: you'll need to set your Console URL and Console token. Set them with:

```
PLURAL_CONSOLE_URL
PLURAL_CONSOLE_TOKEN
```
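For example, in a POSIX shell you might export these before invoking the CLI (the URL and token below are placeholders, not real values):

```shell
# Placeholder values -- substitute your actual Console URL and an API token
# generated from your Console user profile.
export PLURAL_CONSOLE_URL="https://console.example.plural.sh"
export PLURAL_CONSOLE_TOKEN="console-api-token"
```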

## Create Clusters
Alternatively, you can run `plural cd login` to write them to a config file under `~/.plural`.

To deploy additional clusters, use the `plural cd clusters create` command. As an example:
{% callout severity="info" %}
`plural cd` is an alias for `plural deployments`, and can be used interchangeably within the CLI.
{% /callout %}

```
plural cd clusters create --handle <CLUSTER_HANDLE> --version <K8s_VERSION> CLUSTER_NAME
```
## List Clusters, Services, Repositories

The following commands list the clusters, services, and repositories that have already been registered:

To import an existing cluster, see the guide for [existing clusters](/deployments/existing-cluster).
```sh
plural cd clusters list
plural cd services list @{cluster-handle}
plural cd repositories list
```

## Import Git Repositories and Deploy services

You'll then need to import the Git repository containing your service and its associated Kubernetes manifests. To do so, use `plural cd repositories create`:

```
```sh
plural cd repositories create --url <REPO_URL>
```

@@ -53,7 +84,7 @@ To then deploy your service, find the repo ID for the service you want to deploy

You can then use the `plural cd services create` command:

```
```sh
plural cd services create --name <SERVICE_NAME> --namespace <SERVICE_NAMESPACE> --repo-id <REPO_ID> --git-ref <GIT_REF> --git-folder <GIT_FOLDER> CLUSTER_ID

```
7 changes: 6 additions & 1 deletion pages/deployments/cluster-cost.md
@@ -19,9 +19,14 @@ We recommend you configure it as a global service, as that will ensure it's inst

## How Kubernetes Node Autoscaling Works

Kubernetes has its own mechanism of managing autoscaling. Instead of using familiar patterns like EC2 autoscaling groups, ultimately keyed on CPU or memory utilization, kubernetes will add or removed nodes based on whether there are outstanding pods that cannot be scheduled to any worker given the currently configured pod requests.
Kubernetes has its own mechanism for managing autoscaling. Instead of using familiar patterns like EC2 autoscaling groups, ultimately keyed on CPU or memory utilization, Kubernetes will add or remove nodes based on whether there are outstanding pods that cannot be scheduled to any worker given the currently configured pod requests.

There are also other constraints that will cause Kubernetes to autoscale, e.g. pods with scheduling constraints preventing them from being scheduled on the same nodes as others (thus requiring a new node), pods that must be in a specific availability zone or node pool, or pods that must remain in place due to a PodDisruptionBudget.

This does lead to much more powerful autoscaling constructs, but can cause some confusion for new users. To leverage Kubernetes autoscaling properly, be sure you're setting sane resource requests on your pods, and keep those edge cases in mind.
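As a sketch of what sane resource requests look like in practice — this example uses the standard Terraform `kubernetes` provider with placeholder names and values, and isn't Plural-specific:

```tf
resource "kubernetes_deployment" "api" {
  metadata {
    name = "api" # placeholder workload name
  }
  spec {
    replicas = 3
    selector {
      match_labels = { app = "api" }
    }
    template {
      metadata {
        labels = { app = "api" }
      }
      spec {
        container {
          name  = "api"
          image = "nginx:1.25"
          # Explicit requests give the cluster autoscaler its scheduling signal:
          # if pending pods can't fit on any existing node, a new node is added.
          resources {
            requests = {
              cpu    = "250m"
              memory = "512Mi"
            }
          }
        }
      }
    }
  }
}
```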

70 changes: 69 additions & 1 deletion pages/deployments/cluster-create.md
@@ -35,4 +35,72 @@ If you click on a cluster and go to its properties page, you should be able to a

## Create Using Terraform

Coming Soon!
```tf
resource "plural_provider" "aws_provider" {
  name  = "aws"
  cloud = "aws"
  cloud_settings = {
    aws = {
      # access_key_id     = "" # Required, can be sourced from PLURAL_AWS_ACCESS_KEY_ID
      # secret_access_key = "" # Required, can be sourced from PLURAL_AWS_SECRET_ACCESS_KEY
    }
  }
}

data "plural_provider" "aws_provider" {
  cloud = "aws"
}

resource "plural_cluster" "aws_cluster" {
  name        = "aws-cluster-tf"
  handle      = "awstf"
  version     = "1.24"
  provider_id = data.plural_provider.aws_provider.id
  cloud       = "aws"
  protect     = false
  cloud_settings = {
    aws = {
      region = "us-east-1"
    }
  }
  node_pools = {
    pool1 = {
      name          = "pool1"
      min_size      = 1
      max_size      = 5
      instance_type = "t3.large"
    }
    pool2 = {
      name          = "pool2"
      min_size      = 1
      max_size      = 5
      instance_type = "t3.large"
      labels = {
        "key1" = "value1"
        "key2" = "value2"
      }
      taints = [
        {
          key    = "test"
          value  = "test"
          effect = "NoSchedule"
        }
      ]
    }
    pool3 = {
      name          = "pool3"
      min_size      = 1
      max_size      = 5
      instance_type = "t3.large"
      cloud_settings = {
        aws = {
          launch_template_id = "test"
        }
      }
    }
  }
  tags = {
    "managed-by" = "terraform-provider-plural"
  }
}
```
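If other stacks need to reference the created cluster, you can expose its identifiers as outputs. A minimal sketch — the `id` attribute is assumed to be exported by the resource, while `handle` mirrors the argument set above:

```tf
output "workload_cluster_id" {
  value = plural_cluster.aws_cluster.id
}

output "workload_cluster_handle" {
  value = plural_cluster.aws_cluster.handle
}
```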
15 changes: 14 additions & 1 deletion pages/deployments/credentials.md
@@ -17,4 +17,17 @@ Once created, you'll be able to see a new service named `capi-{cloud}` in your m

## Create using Terraform

Coming Soon!
```tf
resource "plural_provider" "aws_provider" {
  name  = "aws"
  cloud = "aws"
  cloud_settings = {
    aws = {
      # access_key_id     = "" # Required, can be sourced from PLURAL_AWS_ACCESS_KEY_ID
      # secret_access_key = "" # Required, can be sourced from PLURAL_AWS_SECRET_ACCESS_KEY
    }
  }
}
```
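Rather than committing keys, you can rely on the environment variables mentioned in the comments above. The values here are placeholders — use your real AWS access key pair:

```shell
# Placeholder credentials -- substitute a real AWS access key pair.
export PLURAL_AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export PLURAL_AWS_SECRET_ACCESS_KEY="your-secret-access-key"
```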

You can find more examples in the [terraform-provider-plural examples](https://github.com/pluralsh/terraform-provider-plural/blob/main/example).
18 changes: 15 additions & 3 deletions pages/deployments/import-cluster.md
@@ -5,7 +5,9 @@ description: Set Up deployments to an existing, self-managed cluster

## Overview

Most users will have created a significant amount of kubernetes infrastructure with tooling like terraform, pulumi or other forms of infrastructure automation. You can easily configure deployments to these clusters by installing our agent with a single command, and Plural CD will manage that agent from then on without any manual intervention.
Most users will have created a significant amount of Kubernetes infrastructure with tooling like Terraform, Pulumi, or other forms of infrastructure automation. It's also very common for users to prefer sticking with their tried-and-true IaC patterns rather than wrangling Cluster API, which we completely appreciate and wish to support fully.

You can easily configure deployments to these clusters by installing our agent with a single command, and Plural CD will manage that agent from then on without any manual intervention.

## Installation

@@ -23,10 +25,20 @@ plural cd bootstrap --name {name} --tag {name}={value} --tag {name2}={value2}

## Terraform

You can also set up a BYOK cluster via terraform with the following:
You can also set up a BYOK cluster via Terraform with the following (this example assumes an EKS cluster already created elsewhere in Terraform):

```tf
resource "plural_cluster" "this" {
  handle  = "your-cluster-handle"
  name    = "human-readable-name"
  tags    = var.tags
  protect = true # or false
  kubeconfig = {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```
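The `data.aws_eks_cluster` references above assume EKS lookups like the following exist elsewhere in your configuration (the cluster name is a placeholder):

```tf
# Look up the existing EKS cluster's endpoint and CA certificate
data "aws_eks_cluster" "cluster" {
  name = "my-existing-eks-cluster" # placeholder name
}

# Fetch a short-lived auth token for that cluster
data "aws_eks_cluster_auth" "cluster" {
  name = "my-existing-eks-cluster" # placeholder name
}
```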

## Networking Considerations
66 changes: 65 additions & 1 deletion pages/deployments/services-deploy.md
@@ -27,6 +27,70 @@ You can find the repo-id for your desired repository by running `plural cd repos

You should then see your service show up when calling `plural cd services list`.

## Create Using GitOps

We definitely recommend reading over our [operator docs](/deployments/using-operator) to see the various CRDs you can use to define your services and the patterns available there. For most use cases this will be the most robust workflow.

## Create Using Terraform

Coming Soon!
There are still times when you'd want to use Terraform to create a service; a common pattern is bootstrapping the environment for a team, or something similar.

```tf
data "plural_cluster" "byok_workload_cluster" {
  handle = "gcp-workload-cluster"
}

data "plural_git_repository" "cd-test" {
  url = "https://github.com/pluralsh/plrl-cd-test.git"
}

resource "plural_service_deployment" "cd-test" {
  # Required
  name      = "tf-cd-test"
  namespace = "tf-cd-test"
  cluster = {
    handle = data.plural_cluster.byok_workload_cluster.handle
  }
  repository = {
    id     = data.plural_git_repository.cd-test.id
    ref    = "main"
    folder = "kubernetes"
  }

  # Optional
  version   = "0.0.2"
  docs_path = "doc"
  protect   = false
  configuration = [
    {
      name : "host"
      value : "tf-cd-test.gcp.plural.sh"
    },
    {
      name : "tag"
      value : "sha-4d01e86"
    }
  ]
  sync_config = {
    namespace_metadata = {
      annotations = {
        "testannotationkey" : "testannotationvalue"
      }
      labels = {
        "testlabelkey" : "testlabelvalue"
      }
    }
  }

  depends_on = [
    data.plural_cluster.byok_workload_cluster,
    data.plural_git_repository.cd-test
  ]
}
```

You can see more examples [here](https://github.com/pluralsh/terraform-provider-plural/blob/main/example/service/).
12 changes: 6 additions & 6 deletions src/NavData.tsx
@@ -58,7 +58,7 @@ const rootNavData: NavMenu = deepFreeze([
],
},
{
title: 'Plural Continuous Deployment',
title: 'Plural Fleet Management',
sections: [
{
href: '/getting-started/deployments',
@@ -72,10 +72,10 @@
title: 'Quickstart with our CLI',
href: '/deployments/cli-quickstart',
},
{
href: '/deployments/browser-quickstart',
title: 'Quickstart from your Browser',
},
// {
// href: '/deployments/browser-quickstart',
// title: 'Quickstart from your Browser',
// },
{
href: '/deployments/existing-cluster',
title: 'Set Up on your own Cluster',
@@ -132,7 +132,7 @@
},
{
href: '/deployments/cluster-create',
title: 'Create Workload Clusters',
title: 'Create Cluster API Workload Clusters',
},
// {
// href: '/deployments/cluster-config',
