Document setting up grafana agent, github runners, split ingress
Documenting some more advanced workflows around setting up monitoring, ingress, self-hosted github actions runners.
1 parent a6305b2 · commit 9c9cd5e · 6 changed files with 120 additions and 0 deletions
---
title: Advanced Configuration
description: Fine-tuning your Plural Console to meet your requirements
---
```sh
plural cd services update @{cluster-handle}/{service-name} --conf {name}={value}
```
Feel free to run `plural cd services update --help` for more documentation as well.

## Self-Hosted Runners
Many users will want to host their console in a private network. In that case, a standard hosted GitHub Actions runner will not have network access to the console API and so cannot execute `plural cd` commands. The solution is to leverage GitHub's self-hosted runners, which let you run your Actions in an adjacent network while maintaining the security posture of your console. We've added a few add-ons to make this setup trivially easy; you'll want to:

- Install the `github-actions-controller` add-on to set up the Kubernetes operator that manages runners in a cluster. You'll likely want this installed in your management cluster for network adjacency.
- Install the `plrl-github-actions-runner` add-on in that same cluster to create a runner set you can schedule jobs on.
Once both are deployed, you can create your first job; it'll likely look something like this:

```yaml
jobs:
  # some previous jobs...
  update-service:
    needs: [docker-build]
    runs-on: plrl-github-actions-runner
    env:
      PLURAL_CONSOLE_TOKEN: ${{ secrets.PLURAL_CONSOLE_TOKEN }}
      PLURAL_CONSOLE_URL: ${{ secrets.PLURAL_CONSOLE_URL }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: installing plural
        uses: pluralsh/[email protected]
      - name: Using short sha
        run: echo ${GITHUB_SHA::7}
      - name: Update service
        run: plural cd services update @mgmt/marketing --conf tag=sha-${GITHUB_SHA::7}
```
Note that the `runs-on` attribute is what schedules this job on the plrl-github-actions runner. It's also worth looking into the control mechanisms GitHub provides to gate which repositories and workflows can leverage self-hosted runners, to manage the security tradeoffs they pose.

{% callout severity="warning" %}
GitHub recommends you don't use self-hosted runners on public repositories due to the complexity required to prevent workflows from being run by fork repository pull requests.
{% /callout %}

## Addendum
Since the plural CLI is a standalone Go binary, it can easily be injected into any CI framework in much the same way: install it, then execute the appropriate CLI command to modify your service once a deployable artifact has been built.
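As a sketch of what that looks like outside GitHub Actions, the step below derives the same short-sha tag and builds the equivalent `plural cd services update` invocation. The commit sha value, secret names, and the `@mgmt/marketing` service handle are illustrative; only the `plural cd services update` command itself comes from the docs above.

```shell
#!/usr/bin/env bash
# Illustrative generic CI step. In a real pipeline, COMMIT_SHA would come from
# your CI system's environment, and PLURAL_CONSOLE_URL / PLURAL_CONSOLE_TOKEN
# would be injected from its secret store.
COMMIT_SHA="9c9cd5ea1b2c3d4e5f60718293a4b5c6d7e8f901"  # hardcoded for illustration

# Same short-sha derivation as the GitHub Actions example (first 7 characters).
SHORT_SHA="${COMMIT_SHA:0:7}"

# Build the update command; in a real pipeline you'd run this directly after
# pushing your image.
CMD="plural cd services update @mgmt/marketing --conf tag=sha-${SHORT_SHA}"
echo "$CMD"
```

The same pattern ports directly to GitLab CI, CircleCI, or Jenkins: any runner that can reach the console API and hold the two environment variables can run this step.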
---
title: Network Configuration
description: Modifying ingress controller and setting up public/private endpoints for your console
---

## Overview
There are a few strategies you can take to harden the network security of your console, or to align it with how you typically secure Kubernetes ingresses. We'll note a few of them here.

## Bringing Your Own Ingress
Our helm chart can reconfigure the ingress class for your console. This is useful if you already have an ingress controller with CIDR ranges and WAF setups built in. The helm values change is simple:

```yaml
ingress:
  ingressClass: <new-ingress-class>
  # potentially you might also want to add some annotations
  annotations:
    new.ingress.annotations: <value>

kas:
  ingress:
    ingressClass: <new-ingress-class>
```
Both KAS and the console leverage websockets for some portion of their functionality. In the case of the console, websockets are also far more performant with connection stickiness in place. Some ingress controllers have inconsistent websocket support (or require paid versions to unlock it), which is worth keeping in mind.

Also, we configure the ingresses with cert-manager by default. Some orgs set a wildcard cert at the ingress level, in which case you'd want to disable the ingress-level certs.
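As one example of tuning for the websocket behavior mentioned above: if you bring ingress-nginx, its standard annotations can enable cookie-based stickiness and lengthen proxy timeouts for long-lived connections. The annotation names below are ingress-nginx's; whether the chart forwards `ingress.annotations` onto the console's ingress object in exactly this shape is an assumption to verify against the chart's values.

```yaml
ingress:
  ingressClass: nginx
  annotations:
    # ingress-nginx: cookie-based session affinity, for connection stickiness
    nginx.ingress.kubernetes.io/affinity: cookie
    # ingress-nginx: lengthen proxy timeouts so idle websockets aren't dropped
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```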
## Public/Private Ingress

Another setup we support is splitting the console ingress between public and private. This allows you to host the entirety of the console's API in a private network, while exposing the subset needed to serve the APIs the deployment agents poll. These APIs are minimal; they only provide:

- read access to the services deployable to an agent
- a ping endpoint for a given cluster, sending the cluster version and a timestamp
- the ability to update the components created for a service by an agent

This is a relatively easy way to ensure network connectivity to end clusters in a pretty broad network topology, though there are of course other, more advanced setups a team can attempt. The basic setup is as follows:
```yaml
ingress:
  ingressClass: internal-nginx # or another private ingress controller

externalIngress:
  hostname: console-ext.your.subdomain # or whatever you'd like to rename it
```
This will create a second, limited ingress exposing only the APIs listed above via path routing. In this setup, we'd also recommend keeping the KAS service on a similar network to the external ingress.

There are still additional tactics you can use to harden this setup. For instance, allowlisting the CIDR ranges of the NAT gateways for the networks your target clusters reside in can provide robust firewalling for the ingresses you've configured.
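A sketch of that NAT-gateway allowlisting, assuming the external ingress is served by ingress-nginx: its `whitelist-source-range` annotation restricts client IPs at the controller. Whether the chart's `externalIngress` block accepts an `annotations` map like this is an assumption to confirm against the chart's values, and the CIDR ranges below are placeholders from documentation-reserved blocks.

```yaml
externalIngress:
  hostname: console-ext.your.subdomain
  # Assumption: the chart forwards these annotations to the external ingress object.
  annotations:
    # ingress-nginx: only accept traffic from your clusters' NAT egress IPs
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32,198.51.100.0/24"
```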