---
author: [email protected]
title: Deploying Production clusters at D2iQ
date: 2022-06-01
draft: true
---

Deploying a cluster with DKP is easy, but in most production environments the cluster must fit into an existing ecosystem: it has to respect permissions and user roles, deal with Docker registry authentication, or use a special certificate issuer.

# What is a production cluster?
In this post we explain how a DKP cluster can be created in a reproducible way that uses SSO instead of static credentials and avoids running into rate limits on Docker Hub pulls or ACME-based certificate requests.


# DKP on AWS
This example assumes AWS. Most of it will definitely work on other clouds or on-premises, but the details of permissions and configuration may differ slightly. To follow along, you will also need equivalent IAM permissions in your own account.

## Shared bootstrap cluster
In many cases the `dkp` CLI with its built-in bootstrap cluster is the easiest way to deploy DKP 2.x clusters. To keep deployments reproducible and avoid depending on an operator's machine, especially for large initial clusters, we will use EKS [^1] instead. This reduces the dependency on the operator's machine to a minimum.
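Assuming `eksctl` and the AWS CLI are available, a shared bootstrap cluster could be created roughly like this; the cluster name, region, and node sizing below are illustrative assumptions, not requirements:

```shell
# Create a small EKS cluster to serve as the shared bootstrap cluster.
eksctl create cluster \
  --name dkp-bootstrap \
  --region us-west-2 \
  --nodes 1 \
  --node-type m5.large

# Point kubectl/dkp at the EKS cluster instead of a local
# kind-based bootstrap cluster.
aws eks update-kubeconfig --name dkp-bootstrap --region us-west-2
```

With `KUBECONFIG` pointing at this cluster, every operator works against the same bootstrap environment.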

## Konvoy Image Builder
We expect a custom image ID to be known and passed as an AMI ID to the following commands. Wherever `<AMI>` is mentioned, an image built with Konvoy Image Builder (KIB) is assumed.
Please consult the [Image Builder Docs](https://docs.d2iq.com/dkp/konvoy/2.2/image-builder/) for more details.
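For reference, building such an image typically looks like the following sketch; the manifest path and region are assumptions, and the manifests shipped with your DKP version may differ:

```shell
# Build a custom AMI with Konvoy Image Builder; the resulting AMI ID
# is what we refer to as <AMI> in the commands below.
konvoy-image build images/ami/centos-79.yaml --region us-west-2
```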

## Terraform
We'll use Terraform[^2] to maintain the IAM policy, role, and instance profiles. Alternatives would be plain AWS CLI usage or CloudFormation templates, but for us Terraform is the easiest way to maintain clusters with multiple operators, as it provides shared state and locking mechanisms out of the box[^3].
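As a sketch, the shared state and locking mentioned above can be configured through Terraform's S3 backend; bucket, table, and region names here are placeholders:

```hcl
terraform {
  backend "s3" {
    # Shared state for all operators.
    bucket = "example-dkp-terraform-state"
    key    = "dkp/production/terraform.tfstate"
    region = "us-west-2"

    # State locking via DynamoDB prevents concurrent applies.
    dynamodb_table = "example-terraform-locks"
    encrypt        = true
  }
}
```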

## Authentication
D2iQ uses OneLogin as its SSO provider, but any OIDC identity provider[^4] will work. In many cases, using Google as the identity provider is the simplest solution[^5].
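DKP brokers OIDC logins through Dex, so an identity provider is typically hooked up as a Dex connector. A minimal sketch for Google, with placeholder client credentials and cluster domain; how the connector is applied to the cluster depends on your DKP version:

```yaml
connectors:
  - type: oidc
    id: google
    name: Google
    config:
      # Google's OIDC issuer; endpoints are found via discovery.
      issuer: https://accounts.google.com
      clientID: <CLIENT_ID>
      clientSecret: <CLIENT_SECRET>
      # Must match the redirect URI registered with the provider.
      redirectURI: https://<CLUSTER_DOMAIN>/dex/callback
```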


# Getting things started
Before we spawn the cluster, some groundwork needs to be laid. As described above, we'll use Terraform for that.


## AWS IAM policy, role and instance profiles
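A minimal Terraform sketch of these resources might look as follows. The names are placeholders and the policy statement is heavily truncated; consult the DKP documentation for the full permission set the nodes require:

```hcl
# IAM role the control-plane EC2 instances will assume.
resource "aws_iam_role" "control_plane" {
  name = "dkp-control-plane"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Inline policy; the actions shown are only a small illustrative subset.
resource "aws_iam_role_policy" "control_plane" {
  name = "dkp-control-plane"
  role = aws_iam_role.control_plane.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:Describe*", "elasticloadbalancing:*"]
      Resource = "*"
    }]
  })
}

# Instance profile that attaches the role to EC2 instances.
resource "aws_iam_instance_profile" "control_plane" {
  name = "dkp-control-plane"
  role = aws_iam_role.control_plane.name
}
```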


[^1]: Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service by AWS: https://aws.amazon.com/eks/
[^2]: https://terraform.io
[^3]: https://www.terraform.io/language/settings/backends/s3
[^4]: https://en.wikipedia.org/wiki/List_of_OAuth_providers
[^5]: https://developers.google.com/identity/protocols/oauth2/openid-connect