Commit
update(k8s-metacollector/README.md): update readme.
Signed-off-by: Aldo Lacuku <[email protected]>
alacuku committed Dec 20, 2023
1 parent 0400452 commit 897bc26
Showing 3 changed files with 247 additions and 127 deletions.
57 changes: 57 additions & 0 deletions charts/k8s-metacollector/README.gotmpl
@@ -0,0 +1,57 @@
# k8s-metacollector

[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

## Introduction

This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application will be deployed as a Kubernetes Deployment with a replica count of 1. In order for the application to work correctly, the following resources will be created:
* ServiceAccount;
* ClusterRole;
* ClusterRoleBinding;
* Service;
* ServiceMonitor (optional);

*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load balancing mechanisms implemented by Kubernetes.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with default values and release name `k8s-metacollector` run:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace
```

After a few seconds, k8s-metacollector should be running in the `metacollector` namespace.
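
You can verify the installation with a quick check (this assumes `kubectl` is configured against the same cluster):

```bash
# Check that the k8s-metacollector pod is up and running
kubectl get pods --namespace metacollector

# Check that the ClusterIP service exposing the gRPC, metrics and health-probe ports exists
kubectl get service --namespace metacollector
```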

### Enabling ServiceMonitor
Assuming that Prometheus scrapes only the ServiceMonitors that have a `release` label, the following command installs the chart and labels the ServiceMonitor:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
--create-namespace \
--namespace metacollector \
--set serviceMonitor.create=true \
--set serviceMonitor.labels.release="kube-prometheus-stack"
```
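
To confirm that the ServiceMonitor was created and carries the expected label (this assumes the Prometheus Operator CRDs are already installed in the cluster):

```bash
# The "release" label must match the one selected by your Prometheus serviceMonitorSelector
kubectl get servicemonitors --namespace metacollector --show-labels
```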

## Uninstalling the Chart
To uninstall the `k8s-metacollector` release in namespace `metacollector`:
```bash
helm uninstall k8s-metacollector --namespace metacollector
```
The command removes all the Kubernetes resources associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the {{ template "chart.name" . }} chart v{{ template "chart.version" . }} and their default values. See `values.yaml` for the full list.

{{ template "chart.valuesSection" . }}
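
Values can also be provided through a custom values file instead of repeated `--set` flags. The snippet below is only a sketch: the file name `custom-values.yaml` and the resource requests/limits are illustrative, while the parameter names come from the table above:

```bash
# Create a custom values file and install the chart with it
cat > custom-values.yaml <<'EOF'
image:
  tag: "main"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
serviceMonitor:
  create: true
  labels:
    release: kube-prometheus-stack
EOF

helm install k8s-metacollector falcosecurity/k8s-metacollector \
  --namespace metacollector \
  --create-namespace \
  -f custom-values.yaml
```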
177 changes: 120 additions & 57 deletions charts/k8s-metacollector/README.md
@@ -1,65 +1,128 @@
# k8s-metacollector

![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.0.0](https://img.shields.io/badge/AppVersion-0.0.0-informational?style=flat-square)
[k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

A Helm chart for Kubernetes
## Introduction

This chart installs the [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) in a Kubernetes cluster. The main application will be deployed as a Kubernetes Deployment with a replica count of 1. In order for the application to work correctly, the following resources will be created:
* ServiceAccount;
* ClusterRole;
* ClusterRoleBinding;
* Service;
* ServiceMonitor (optional);

*Note*: Increasing the number of replicas is not recommended. The [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) does not implement memory sharding techniques. Furthermore, events are distributed over `gRPC` using `streams`, which does not work well with the load balancing mechanisms implemented by Kubernetes.

## Adding `falcosecurity` repository

Before installing the chart, add the `falcosecurity` charts repository:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with default values and release name `k8s-metacollector` run:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector --namespace metacollector --create-namespace
```

After a few seconds, k8s-metacollector should be running in the `metacollector` namespace.
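
You can verify the installation with a quick check (this assumes `kubectl` is configured against the same cluster):

```bash
# Check that the k8s-metacollector pod is up and running
kubectl get pods --namespace metacollector

# Check that the ClusterIP service exposing the gRPC, metrics and health-probe ports exists
kubectl get service --namespace metacollector
```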

### Enabling ServiceMonitor
Assuming that Prometheus scrapes only the ServiceMonitors that have a `release` label, the following command installs the chart and labels the ServiceMonitor:

```bash
helm install k8s-metacollector falcosecurity/k8s-metacollector \
--create-namespace \
--namespace metacollector \
--set serviceMonitor.create=true \
--set serviceMonitor.labels.release="kube-prometheus-stack"
```
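
To confirm that the ServiceMonitor was created and carries the expected label (this assumes the Prometheus Operator CRDs are already installed in the cluster):

```bash
# The "release" label must match the one selected by your Prometheus serviceMonitorSelector
kubectl get servicemonitors --namespace metacollector --show-labels
```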

## Uninstalling the Chart
To uninstall the `k8s-metacollector` release in namespace `metacollector`:
```bash
helm uninstall k8s-metacollector --namespace metacollector
```
The command removes all the Kubernetes resources associated with the chart and deletes the release.

## Configuration

The following table lists the main configurable parameters of the k8s-metacollector chart v0.1.0 and their default values. See `values.yaml` for the full list.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| containerSecurityContext.capabilities.drop[0] | string | `"ALL"` | |
| fullnameOverride | string | `""` | |
| healthChecks.livenessProbe.httpGet.path | string | `"/healthz"` | |
| healthChecks.livenessProbe.httpGet.port | int | `8081` | |
| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | |
| healthChecks.livenessProbe.periodSeconds | int | `15` | |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | |
| healthChecks.readinessProbe.httpGet.path | string | `"/readyz"` | |
| healthChecks.readinessProbe.httpGet.port | int | `8081` | |
| healthChecks.readinessProbe.initialDelaySeconds | int | `45` | |
| healthChecks.readinessProbe.periodSeconds | int | `15` | |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.pullSecrets | list | `[]` | |
| image.registry | string | `"docker.io"` | |
| image.repository | string | `"falcosecurity/k8s-metacollector"` | |
| image.tag | string | `"main"` | |
| nameOverride | string | `""` | |
| namespaceOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podAnnotations | object | `{}` | |
| podSecurityContext.fsGroup | int | `1000` | |
| podSecurityContext.runAsGroup | int | `1000` | |
| podSecurityContext.runAsNonRoot | bool | `true` | |
| podSecurityContext.runAsUser | int | `1000` | |
| replicaCount | int | `1` | |
| resources | object | `{}` | |
| service.enabled | bool | `true` | |
| service.ports.broker-grpc.port | int | `45000` | |
| service.ports.broker-grpc.protocol | string | `"TCP"` | |
| service.ports.broker-grpc.targetPort | string | `"broker-grpc"` | |
| service.ports.health-probe.port | int | `8081` | |
| service.ports.health-probe.protocol | string | `"TCP"` | |
| service.ports.health-probe.targetPort | string | `"health-probe"` | |
| service.ports.metrics.port | int | `8080` | |
| service.ports.metrics.protocol | string | `"TCP"` | |
| service.ports.metrics.targetPort | string | `"metrics"` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `""` | |
| serviceMonitor.enabled | bool | `false` | |
| serviceMonitor.interval | string | `"1m"` | |
| serviceMonitor.labels | object | `{}` | |
| serviceMonitor.path | string | `"/metrics"` | |
| serviceMonitor.relabelings | list | `[]` | |
| serviceMonitor.scheme | string | `"http"` | |
| serviceMonitor.scrapeTimeout | string | `"30s"` | |
| serviceMonitor.targetLabels | list | `[]` | |
| serviceMonitor.tlsConfig | object | `{}` | |
| tolerations | list | `[]` | |

----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)
| affinity | object | `{}` | affinity allows pod placement based on node characteristics, or any other custom labels assigned to nodes. |
| containerSecurityContext | object | `{"capabilities":{"drop":["ALL"]}}` | containerSecurityContext holds the security settings for the container. |
| containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | capabilities fine-grained privileges that can be assigned to processes. |
| containerSecurityContext.capabilities.drop | list | `["ALL"]` | drop drops the given set of privileges. |
| fullnameOverride | string | `""` | fullNameOverride same as nameOverride but for the full name. |
| healthChecks | object | `{"livenessProbe":{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5},"readinessProbe":{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5}}` | healthChecks contains the configuration for liveness and readiness probes. |
| healthChecks.livenessProbe | object | `{"httpGet":{"path":"/healthz","port":8081},"initialDelaySeconds":60,"periodSeconds":15,"timeoutSeconds":5}` | livenessProbe is a diagnostic mechanism used to determine whether a container within a Pod is still running and healthy. |
| healthChecks.livenessProbe.httpGet | object | `{"path":"/healthz","port":8081}` | httpGet specifies that the liveness probe will make an HTTP GET request to check the health of the container. |
| healthChecks.livenessProbe.httpGet.path | string | `"/healthz"` | path is the specific endpoint on which the HTTP GET request will be made. |
| healthChecks.livenessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/healthz" endpoint. |
| healthChecks.livenessProbe.initialDelaySeconds | int | `60` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.livenessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the liveness probe will be repeated. |
| healthChecks.livenessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. |
| healthChecks.readinessProbe | object | `{"httpGet":{"path":"/readyz","port":8081},"initialDelaySeconds":45,"periodSeconds":15,"timeoutSeconds":5}` | readinessProbe is a mechanism used to determine whether a container within a Pod is ready to serve traffic. |
| healthChecks.readinessProbe.httpGet | object | `{"path":"/readyz","port":8081}` | httpGet specifies that the readiness probe will make an HTTP GET request to check whether the container is ready. |
| healthChecks.readinessProbe.httpGet.path | string | `"/readyz"` | path is the specific endpoint on which the HTTP GET request will be made. |
| healthChecks.readinessProbe.httpGet.port | int | `8081` | port is the port on which the container exposes the "/readyz" endpoint. |
| healthChecks.readinessProbe.initialDelaySeconds | int | `45` | initialDelaySeconds tells the kubelet that it should wait X seconds before performing the first probe. |
| healthChecks.readinessProbe.periodSeconds | int | `15` | periodSeconds specifies the interval at which the readiness probe will be repeated. |
| healthChecks.readinessProbe.timeoutSeconds | int | `5` | timeoutSeconds is the number of seconds after which the probe times out. |
| image | object | `{"pullPolicy":"IfNotPresent","pullSecrets":[],"registry":"docker.io","repository":"falcosecurity/k8s-metacollector","tag":"main"}` | image is the configuration for the k8s-metacollector image. |
| image.pullPolicy | string | `"IfNotPresent"` | pullPolicy is the policy used to determine when a node should attempt to pull the container image. |
| image.pullSecrets | list | `[]` | pullSecrets is a list of secrets containing credentials used when pulling from private/secure registries. |
| image.registry | string | `"docker.io"` | registry is the image registry to pull from. |
| image.repository | string | `"falcosecurity/k8s-metacollector"` | repository is the image repository to pull from. |
| image.tag | string | `"main"` | tag is image tag to pull. Overrides the image tag whose default is the chart appVersion. |
| nameOverride | string | `""` | nameOverride is the new name used to override the release name used for k8s-metacollector components. |
| namespaceOverride | string | `""` | namespaceOverride overrides the deployment namespace. It's useful for multi-namespace deployments in combined charts. |
| nodeSelector | object | `{}` | nodeSelector specifies a set of key-value pairs that must match labels assigned to nodes for the Pod to be eligible for scheduling on that node. |
| podAnnotations | object | `{}` | podAnnotations are custom annotations to be added to the pod. |
| podSecurityContext | object | `{"fsGroup":1000,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000}` | These settings are overridden by the ones specified for the container when there is overlap. |
| podSecurityContext.fsGroup | int | `1000` | fsGroup specifies the group ID (GID) that should be used for the volume mounted within a Pod. |
| podSecurityContext.runAsGroup | int | `1000` | runAsGroup specifies the group ID (GID) that the containers inside the pod should run as. |
| podSecurityContext.runAsNonRoot | bool | `true` | runAsNonRoot when set to true enforces that the specified container runs as a non-root user. |
| podSecurityContext.runAsUser | int | `1000` | runAsUser specifies the user ID (UID) that the containers inside the pod should run as. |
| replicaCount | int | `1` | replicaCount is the number of identical copies of the k8s-metacollector. |
| resources | object | `{}` | resources defines the computing resources (CPU and memory) that are allocated to the containers running within the Pod. |
| service | object | `{"create":true,"ports":{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}},"type":"ClusterIP"}` | service exposes the k8s-metacollector services to be accessed from within the cluster. ref: https://kubernetes.io/docs/concepts/services-networking/service/ |
| service.create | bool | `true` | create specifies whether a service should be created. |
| service.ports | object | `{"broker-grpc":{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"},"health-probe":{"port":8081,"protocol":"TCP","targetPort":"health-probe"},"metrics":{"port":8080,"protocol":"TCP","targetPort":"metrics"}}` | ports denotes all the ports on which the Service will listen. |
| service.ports.broker-grpc | object | `{"port":45000,"protocol":"TCP","targetPort":"broker-grpc"}` | broker-grpc denotes a listening service named "broker-grpc". |
| service.ports.broker-grpc.port | int | `45000` | port is the port on which the Service will listen. |
| service.ports.broker-grpc.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.broker-grpc.targetPort | string | `"broker-grpc"` | targetPort is the port on which the Pod is listening. |
| service.ports.health-probe | object | `{"port":8081,"protocol":"TCP","targetPort":"health-probe"}` | health-probe denotes a listening service named "health-probe" |
| service.ports.health-probe.port | int | `8081` | port is the port on which the Service will listen. |
| service.ports.health-probe.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.health-probe.targetPort | string | `"health-probe"` | targetPort is the port on which the Pod is listening. |
| service.ports.metrics | object | `{"port":8080,"protocol":"TCP","targetPort":"metrics"}` | metrics denotes a listening service named "metrics". |
| service.ports.metrics.port | int | `8080` | port is the port on which the Service will listen. |
| service.ports.metrics.protocol | string | `"TCP"` | protocol specifies the network protocol that the Service should use for the associated port. |
| service.ports.metrics.targetPort | string | `"metrics"` | targetPort is the port on which the Pod is listening. |
| service.type | string | `"ClusterIP"` | type denotes the service type. Setting it to "ClusterIP" ensures that the service is accessible only from within the cluster. |
| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | serviceAccount is the configuration for the service account. |
| serviceAccount.annotations | object | `{}` | annotations to add to the service account. |
| serviceAccount.create | bool | `true` | create specifies whether a service account should be created. |
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the full name template. |
| serviceMonitor | object | `{"create":false,"interval":"1m","labels":{},"path":"/metrics","relabelings":[],"scheme":"http","scrapeTimeout":"30s","targetLabels":[],"tlsConfig":{}}` | serviceMonitor holds the configuration for the ServiceMonitor CRD. A ServiceMonitor is a custom resource definition (CRD) used to configure how Prometheus should discover and scrape metrics from the k8s-metacollector service. |
| serviceMonitor.create | bool | `false` | create specifies whether a ServiceMonitor CRD should be created for the Prometheus Operator. https://github.com/coreos/prometheus-operator Enable it only if the ServiceMonitor CRD is installed in your cluster. |
| serviceMonitor.interval | string | `"1m"` | interval specifies the time interval at which Prometheus should scrape metrics from the service. |
| serviceMonitor.labels | object | `{}` | labels set of labels to be applied to the ServiceMonitor resource. If your Prometheus deployment is configured to use serviceMonitorSelector, then add the right label here in order for the ServiceMonitor to be selected for target discovery. |
| serviceMonitor.path | string | `"/metrics"` | path at which the metrics are exposed by the k8s-metacollector. |
| serviceMonitor.relabelings | list | `[]` | relabelings configures the relabeling rules to apply to the target’s metadata labels. |
| serviceMonitor.scheme | string | `"http"` | scheme specifies network protocol used by the metrics endpoint. In this case HTTP. |
| serviceMonitor.scrapeTimeout | string | `"30s"` | scrapeTimeout determines the maximum time Prometheus should wait for a target to respond to a scrape request. If the target does not respond within the specified timeout, Prometheus considers the scrape as failed for that target. |
| serviceMonitor.targetLabels | list | `[]` | targetLabels defines the labels which are transferred from the associated Kubernetes service object onto the ingested metrics. |
| serviceMonitor.tlsConfig | object | `{}` | tlsConfig specifies TLS (Transport Layer Security) configuration for secure communication when scraping metrics from a service. It allows you to define the details of the TLS connection, such as CA certificate, client certificate, and client key. Currently, the k8s-metacollector does not support TLS configuration for the metrics endpoint. |
| tolerations | list | `[]` | tolerations are applied to pods and allow them to be scheduled on nodes with matching taints. |
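
Values can also be provided through a custom values file instead of repeated `--set` flags. The snippet below is only a sketch: the file name `custom-values.yaml` and the resource requests/limits are illustrative, while the parameter names come from the table above:

```bash
# Create a custom values file and install the chart with it
cat > custom-values.yaml <<'EOF'
image:
  tag: "main"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
serviceMonitor:
  create: true
  labels:
    release: kube-prometheus-stack
EOF

helm install k8s-metacollector falcosecurity/k8s-metacollector \
  --namespace metacollector \
  --create-namespace \
  -f custom-values.yaml
```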
