diff --git a/charts/alertmanager/.helmignore b/charts/alertmanager/.helmignore
new file mode 100644
index 000000000000..7653e97e6621
--- /dev/null
+++ b/charts/alertmanager/.helmignore
@@ -0,0 +1,25 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*.orig
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
+
+unittests/
diff --git a/charts/alertmanager/Chart.yaml b/charts/alertmanager/Chart.yaml
new file mode 100644
index 000000000000..4350fc092720
--- /dev/null
+++ b/charts/alertmanager/Chart.yaml
@@ -0,0 +1,23 @@
+apiVersion: v2
+name: alertmanager
+description: The Alertmanager handles alerts sent by client applications such as the Prometheus server.
+home: https://prometheus.io/
+icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
+sources:
+  - https://github.com/prometheus/alertmanager
+type: application
+version: 1.8.1
+appVersion: v0.26.0
+kubeVersion: ">=1.19.0-0"
+keywords:
+  - monitoring
+maintainers:
+  - name: monotek
+    email: monotek23@gmail.com
+  - name: naseemkullah
+    email: naseem@transit.app
+annotations:
+  "artifacthub.io/license": Apache-2.0
+  "artifacthub.io/links": |
+    - name: Chart Source
+      url: https://github.com/prometheus-community/helm-charts
diff --git a/charts/alertmanager/README.md b/charts/alertmanager/README.md
new file mode 100644
index 000000000000..d3f4df73a272
--- /dev/null
+++ b/charts/alertmanager/README.md
@@ -0,0 +1,62 @@
+# Alertmanager
+
+As per [prometheus.io documentation](https://prometheus.io/docs/alerting/latest/alertmanager/):
+> The Alertmanager handles alerts sent by client applications such as the
+> Prometheus server. It takes care of deduplicating, grouping, and routing them
+> to the correct receiver integration such as email, PagerDuty, or OpsGenie. It
+> also takes care of silencing and inhibition of alerts.
+
+## Prerequisites
+
+Kubernetes 1.19+
+
+## Get Repository Info
+
+```console
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+```
+
+_See [`helm repo`](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Install Chart
+
+```console
+helm install [RELEASE_NAME] prometheus-community/alertmanager
+```
+
+_See [configuration](#configuration) below._
+
+_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
+
+## Uninstall Chart
+
+```console
+helm uninstall [RELEASE_NAME]
+```
+
+This removes all the Kubernetes components associated with the chart and deletes the release.
+
+_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
+
+## Upgrading Chart
+
+```console
+helm upgrade [RELEASE_NAME] [CHART] --install
+```
+
+_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
+
+### To 1.0
+
+The [configmap-reload](https://github.com/jimmidyson/configmap-reload) container was replaced by the [prometheus-config-reloader](https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader).
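+
+If you previously pinned the old reloader image, the equivalent values now target the new reloader. A minimal sketch (these keys exist in this chart's values.yaml; the tag shown is the chart default):
+
+```yaml
+configmapReload:
+  enabled: true
+  image:
+    repository: prometheus-operator/prometheus-config-reloader
+    tag: v0.66.0
+```
+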
+Extra command-line arguments specified via `configmapReload.extraArgs` are not compatible with the new prometheus-config-reloader and will break it. Refer to the [sources](https://github.com/prometheus-operator/prometheus-operator/blob/main/cmd/prometheus-config-reloader/main.go) to make the appropriate adjustments to the extra command-line arguments.
+The `networking.k8s.io/v1beta1` Ingress API is no longer supported. Use [`networking.k8s.io/v1`](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingressclass-v122) instead.
+
+## Configuration
+
+See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run this configuration command:
+
+```console
+helm show values prometheus-community/alertmanager
+```
diff --git a/charts/alertmanager/ci/config-reload-values.yaml b/charts/alertmanager/ci/config-reload-values.yaml
new file mode 100644
index 000000000000..cba5de8e2927
--- /dev/null
+++ b/charts/alertmanager/ci/config-reload-values.yaml
@@ -0,0 +1,2 @@
+configmapReload:
+  enabled: true
diff --git a/charts/alertmanager/templates/NOTES.txt b/charts/alertmanager/templates/NOTES.txt
new file mode 100644
index 000000000000..46ea5bee59bb
--- /dev/null
+++ b/charts/alertmanager/templates/NOTES.txt
@@ -0,0 +1,21 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range $host := .Values.ingress.hosts }}
+  {{- range .paths }}
+  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
+  {{- end }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+  export NODE_PORT=$(kubectl get --namespace {{ include "alertmanager.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "alertmanager.fullname" . }})
+  export NODE_IP=$(kubectl get nodes --namespace {{ include "alertmanager.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
+  echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch its status by running 'kubectl get --namespace {{ include "alertmanager.namespace" . }} svc -w {{ include "alertmanager.fullname" . }}'
+  export SERVICE_IP=$(kubectl get svc --namespace {{ include "alertmanager.namespace" . }} {{ include "alertmanager.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
+  echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+  export POD_NAME=$(kubectl get pods --namespace {{ include "alertmanager.namespace" . }} -l "app.kubernetes.io/name={{ include "alertmanager.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+  echo "Visit http://127.0.0.1:{{ .Values.service.port }} to use your application"
+  kubectl --namespace {{ include "alertmanager.namespace" . }} port-forward $POD_NAME {{ .Values.service.port }}:9093
+{{- end }}
diff --git a/charts/alertmanager/templates/_helpers.tpl b/charts/alertmanager/templates/_helpers.tpl
new file mode 100644
index 000000000000..827b6ee9f7da
--- /dev/null
+++ b/charts/alertmanager/templates/_helpers.tpl
@@ -0,0 +1,92 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
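+Uses .Values.nameOverride when set, falling back to .Chart.Name; the result is truncated to 63 characters (the Kubernetes name length limit).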
+*/}} +{{- define "alertmanager.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "alertmanager.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "alertmanager.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "alertmanager.labels" -}} +helm.sh/chart: {{ include "alertmanager.chart" . }} +{{ include "alertmanager.selectorLabels" . }} +{{- with .Chart.AppVersion }} +app.kubernetes.io/version: {{ . | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "alertmanager.selectorLabels" -}} +app.kubernetes.io/name: {{ include "alertmanager.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "alertmanager.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "alertmanager.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +Define Ingress apiVersion +*/}} +{{- define "alertmanager.ingress.apiVersion" -}} +{{- printf "networking.k8s.io/v1" }} +{{- end }} + +{{/* +Define Pdb apiVersion +*/}} +{{- define "alertmanager.pdb.apiVersion" -}} +{{- if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +{{- printf "policy/v1" }} +{{- else }} +{{- printf "policy/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Allow overriding alertmanager namespace +*/}} +{{- define "alertmanager.namespace" -}} +{{- if .Values.namespaceOverride -}} +{{- .Values.namespaceOverride -}} +{{- else -}} +{{- .Release.Namespace -}} +{{- end -}} +{{- end -}} diff --git a/charts/alertmanager/templates/_images.tpl b/charts/alertmanager/templates/_images.tpl new file mode 100644 index 000000000000..ac03db7fb4f0 --- /dev/null +++ b/charts/alertmanager/templates/_images.tpl @@ -0,0 +1,45 @@ +{{- define "alertmanager.common.images.image" -}} +{{- $registryName := "" }} +{{- if and .global .global.imageRegistry }} + {{- $registryName = .global.imageRegistry }} +{{- end }} +{{- $repositoryName := .imageRoot.repository -}} +{{- $separator := ":" -}} +{{- $termination := "" -}} +{{- if and .global .global.tag }} + {{- $termination = .global.tag | toString }} +{{- end }} +{{- if .imageRoot.registry }} + {{- $registryName = .imageRoot.registry -}} +{{- end -}} +{{- if empty $registryName }} + {{- if .imageRoot.defaultRegistry }} + {{- $registryName = .imageRoot.defaultRegistry }} + {{- end -}} +{{- end -}} +{{- if .imageRoot.tag }} + {{- $termination = .imageRoot.tag | toString -}} +{{- end -}} +{{- if .imageRoot.digest }} + {{- $separator = "@" -}} + {{- $termination = .imageRoot.digest | toString -}} +{{- end -}} +{{- if 
$registryName }} + {{- printf "%s/%s%s%s" $registryName $repositoryName $separator $termination -}} +{{- else }} + {{- printf "%s%s%s" $repositoryName $separator $termination -}} +{{- end }} +{{- end -}} + +{{- define "alertmanager.image" -}} +{{- $_dict := (dict "imageRoot" .Values.image "global" .Values.global) }} +{{- if empty $_dict.imageRoot.tag }} + {{- $_ := set $_dict.imageRoot "tag" .Chart.AppVersion }} +{{- end }} +{{- include "alertmanager.common.images.image" $_dict }} +{{- end -}} + +{{- define "alertmanager.configmapReload.image" -}} +{{- $_dict := (dict "imageRoot" .Values.configmapReload.image "global" .Values.global) }} +{{- include "alertmanager.common.images.image" $_dict }} +{{- end -}} \ No newline at end of file diff --git a/charts/alertmanager/templates/_node-selectors.tpl b/charts/alertmanager/templates/_node-selectors.tpl new file mode 100644 index 000000000000..c76cc8d5bd9c --- /dev/null +++ b/charts/alertmanager/templates/_node-selectors.tpl @@ -0,0 +1,16 @@ +{{- define "alertmanager.common.nodeSelectors.nodeSelector" -}} +{{- if and .global .global.nodeSelector }} + {{- $nodeSelector := .global.nodeSelector -}} + {{- if .nodeSelector }} + {{- $nodeSelector = merge .nodeSelector $nodeSelector -}} + {{- end -}} + {{- toYaml $nodeSelector }} +{{- else }} + {{- toYaml .nodeSelector }} +{{- end }} +{{- end -}} + +{{- define "alertmanager.nodeSelector" -}} +{{- $_dict := (dict "nodeSelector" .Values.nodeSelector "global" .Values.global) }} +{{- include "alertmanager.common.nodeSelectors.nodeSelector" $_dict }} +{{- end -}} diff --git a/charts/alertmanager/templates/configmap.yaml b/charts/alertmanager/templates/configmap.yaml new file mode 100644 index 000000000000..9e5882dc8625 --- /dev/null +++ b/charts/alertmanager/templates/configmap.yaml @@ -0,0 +1,21 @@ +{{- if .Values.config.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.configAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +data: + alertmanager.yml: | + {{- $config := omit .Values.config "enabled" }} + {{- toYaml $config | default "{}" | nindent 4 }} + {{- range $key, $value := .Values.templates }} + {{ $key }}: |- + {{- $value | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/alertmanager/templates/ingress.yaml b/charts/alertmanager/templates/ingress.yaml new file mode 100644 index 000000000000..e729a8ad3bab --- /dev/null +++ b/charts/alertmanager/templates/ingress.yaml @@ -0,0 +1,44 @@ +{{- if .Values.ingress.enabled }} +{{- $fullName := include "alertmanager.fullname" . }} +{{- $svcPort := .Values.service.port }} +apiVersion: {{ include "alertmanager.ingress.apiVersion" . }} +kind: Ingress +metadata: + name: {{ $fullName }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + {{- if .Values.ingress.className }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.tls }} + tls: + {{- range .Values.ingress.tls }} + - hosts: + {{- range .hosts }} + - {{ . 
| quote }} + {{- end }} + secretName: {{ .secretName }} + {{- end }} + {{- end }} + rules: + {{- range .Values.ingress.hosts }} + - host: {{ .host | quote }} + http: + paths: + {{- range .paths }} + - path: {{ .path }} + pathType: {{ .pathType }} + backend: + service: + name: {{ $fullName }} + port: + number: {{ $svcPort }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/alertmanager/templates/ingressperreplica.yaml b/charts/alertmanager/templates/ingressperreplica.yaml new file mode 100644 index 000000000000..6f5a023500c5 --- /dev/null +++ b/charts/alertmanager/templates/ingressperreplica.yaml @@ -0,0 +1,56 @@ +{{- if and .Values.servicePerReplica.enabled .Values.ingressPerReplica.enabled }} +{{- $pathType := .Values.ingressPerReplica.pathType }} +{{- $count := .Values.replicaCount | int -}} +{{- $servicePort := .Values.service.port -}} +{{- $ingressValues := .Values.ingressPerReplica -}} +{{- $fullName := include "alertmanager.fullname" . }} +apiVersion: v1 +kind: List +metadata: + name: {{ $fullName }}-ingressperreplica + namespace: {{ include "alertmanager.namespace" . }} +items: +{{- range $i, $e := until $count }} + - kind: Ingress + apiVersion: {{ include "alertmanager.ingress.apiVersion" $ }} + metadata: + name: {{ $fullName }}-{{ $i }} + namespace: {{ include "alertmanager.namespace" $ }} + labels: + {{- include "alertmanager.labels" $ | nindent 8 }} + {{- if $ingressValues.labels }} +{{ toYaml $ingressValues.labels | indent 8 }} + {{- end }} + {{- if $ingressValues.annotations }} + annotations: +{{ toYaml $ingressValues.annotations | indent 8 }} + {{- end }} + spec: + {{- if $ingressValues.className }} + ingressClassName: {{ $ingressValues.className }} + {{- end }} + rules: + - host: {{ $ingressValues.hostPrefix }}-{{ $i }}.{{ $ingressValues.hostDomain }} + http: + paths: + {{- range $p := $ingressValues.paths }} + - path: {{ tpl $p $ }} + pathType: {{ $pathType }} + backend: + service: + name: {{ $fullName }}-{{ $i }} + port: + name: http + {{- end -}} + {{- if or $ingressValues.tlsSecretName $ingressValues.tlsSecretPerReplica.enabled }} + tls: + - hosts: + - {{ $ingressValues.hostPrefix }}-{{ $i }}.{{ $ingressValues.hostDomain }} + {{- if $ingressValues.tlsSecretPerReplica.enabled }} + secretName: {{ $ingressValues.tlsSecretPerReplica.prefix }}-{{ $i }} + {{- else }} + secretName: {{ $ingressValues.tlsSecretName }} + {{- end }} + {{- end }} +{{- end -}} +{{- end -}} diff --git a/charts/alertmanager/templates/pdb.yaml b/charts/alertmanager/templates/pdb.yaml new file mode 100644 index 000000000000..103e9ecde161 --- /dev/null +++ b/charts/alertmanager/templates/pdb.yaml @@ -0,0 +1,14 @@ +{{- if .Values.podDisruptionBudget }} +apiVersion: {{ include "alertmanager.pdb.apiVersion" . }} +kind: PodDisruptionBudget +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + selector: + matchLabels: + {{- include "alertmanager.selectorLabels" . | nindent 6 }} + {{- toYaml .Values.podDisruptionBudget | nindent 2 }} +{{- end }} diff --git a/charts/alertmanager/templates/serviceaccount.yaml b/charts/alertmanager/templates/serviceaccount.yaml new file mode 100644 index 000000000000..bc9ccaaff949 --- /dev/null +++ b/charts/alertmanager/templates/serviceaccount.yaml @@ -0,0 +1,14 @@ +{{- if .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "alertmanager.serviceAccountName" . 
}} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} +{{- end }} diff --git a/charts/alertmanager/templates/serviceperreplica.yaml b/charts/alertmanager/templates/serviceperreplica.yaml new file mode 100644 index 000000000000..faa75b3ba2e3 --- /dev/null +++ b/charts/alertmanager/templates/serviceperreplica.yaml @@ -0,0 +1,44 @@ +{{- if and .Values.servicePerReplica.enabled }} +{{- $count := .Values.replicaCount | int -}} +{{- $serviceValues := .Values.servicePerReplica -}} +apiVersion: v1 +kind: List +metadata: + name: {{ include "alertmanager.fullname" . }}-serviceperreplica + namespace: {{ include "alertmanager.namespace" . }} +items: +{{- range $i, $e := until $count }} + - apiVersion: v1 + kind: Service + metadata: + name: {{ include "alertmanager.fullname" $ }}-{{ $i }} + namespace: {{ include "alertmanager.namespace" $ }} + labels: + {{- include "alertmanager.labels" $ | nindent 8 }} + {{- if $serviceValues.annotations }} + annotations: +{{ toYaml $serviceValues.annotations | indent 8 }} + {{- end }} + spec: + {{- if $serviceValues.clusterIP }} + clusterIP: {{ $serviceValues.clusterIP }} + {{- end }} + {{- if $serviceValues.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := $serviceValues.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} + {{- end }} + {{- if ne $serviceValues.type "ClusterIP" }} + externalTrafficPolicy: {{ $serviceValues.externalTrafficPolicy }} + {{- end }} + ports: + - name: http + port: {{ $.Values.service.port }} + targetPort: http + selector: + {{- include "alertmanager.selectorLabels" $ | nindent 8 }} + statefulset.kubernetes.io/pod-name: {{ include "alertmanager.fullname" $ }}-{{ $i }} + type: "{{ $serviceValues.type }}" +{{- end }} +{{- end }} diff --git a/charts/alertmanager/templates/services.yaml b/charts/alertmanager/templates/services.yaml new file mode 100644 index 000000000000..9637ae7586de --- /dev/null +++ b/charts/alertmanager/templates/services.yaml @@ -0,0 +1,71 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.service.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.service.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + type: {{ .Values.service.type }} + {{- with .Values.service.loadBalancerIP }} + loadBalancerIP: {{ . }} + {{- end }} + {{- with .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := . }} + - {{ $cidr }} + {{- end }} + {{- end }} + ports: + - port: {{ .Values.service.port }} + targetPort: http + protocol: TCP + name: http + {{- if (and (eq .Values.service.type "NodePort") .Values.service.nodePort) }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + {{- with .Values.service.extraPorts }} + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + {{- include "alertmanager.selectorLabels" . | nindent 4 }} +--- +apiVersion: v1 +kind: Service +metadata: + name: {{ include "alertmanager.fullname" . }}-headless + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.service.labels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + clusterIP: None + ports: + - port: {{ .Values.service.port }} + targetPort: http + protocol: TCP + name: http + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - port: {{ .Values.service.clusterPort }} + targetPort: clusterpeer-tcp + protocol: TCP + name: cluster-tcp + - port: {{ .Values.service.clusterPort }} + targetPort: clusterpeer-udp + protocol: UDP + name: cluster-udp + {{- end }} + {{- with .Values.service.extraPorts }} + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + {{- include "alertmanager.selectorLabels" . | nindent 4 }} diff --git a/charts/alertmanager/templates/statefulset.yaml b/charts/alertmanager/templates/statefulset.yaml new file mode 100644 index 000000000000..bf50e674eb1a --- /dev/null +++ b/charts/alertmanager/templates/statefulset.yaml @@ -0,0 +1,248 @@ +{{- $svcClusterPort := .Values.service.clusterPort }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.statefulSet.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + replicas: {{ .Values.replicaCount }} + minReadySeconds: {{ .Values.minReadySeconds }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + selector: + matchLabels: + {{- include "alertmanager.selectorLabels" . | nindent 6 }} + serviceName: {{ include "alertmanager.fullname" . }}-headless + template: + metadata: + labels: + {{- include "alertmanager.selectorLabels" . | nindent 8 }} + {{- with .Values.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + annotations: + {{- if not .Values.configmapReload.enabled }} + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + spec: + automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} + {{- with .Values.imagePullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "alertmanager.serviceAccountName" . }} + {{- with .Values.dnsConfig }} + dnsConfig: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hostAliases }} + hostAliases: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with (include "alertmanager.nodeSelector" .) }} + nodeSelector: + {{- . | nindent 8 }} + {{- end }} + {{- with .Values.schedulerName }} + schedulerName: {{ . }} + {{- end }} + {{- if or .Values.podAntiAffinity .Values.affinity }} + affinity: + {{- end }} + {{- with .Values.affinity }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if eq .Values.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "alertmanager.name" . }}]} + {{- else if eq .Values.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "alertmanager.name" . }}]} + {{- end }} + {{- with .Values.priorityClassName }} + priorityClassName: {{ . 
}} + {{- end }} + {{- with .Values.topologySpreadConstraints }} + topologySpreadConstraints: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + securityContext: + {{- toYaml .Values.podSecurityContext | nindent 8 }} + {{- with .Values.extraInitContainers }} + initContainers: + {{- toYaml . | nindent 8 }} + {{- end }} + containers: + {{- if .Values.configmapReload.enabled }} + - name: {{ .Chart.Name }}-{{ .Values.configmapReload.name }} + image: "{{ include "alertmanager.configmapReload.image" . }}" + imagePullPolicy: "{{ .Values.configmapReload.image.pullPolicy }}" + {{- with .Values.configmapReload.extraEnv }} + env: + {{- toYaml . | nindent 12 }} + {{- end }} + args: + {{- if and (hasKey .Values.configmapReload.extraArgs "config-file" | not) (hasKey .Values.configmapReload.extraArgs "watched-dir" | not) }} + - --watched-dir=/etc/alertmanager + {{- end }} + {{- if not (hasKey .Values.configmapReload.extraArgs "reload-url") }} + - --reload-url=http://127.0.0.1:9093/-/reload + {{- end }} + {{- range $key, $value := .Values.configmapReload.extraArgs }} + - --{{ $key }}={{ $value }} + {{- end }} + resources: + {{- toYaml .Values.configmapReload.resources | nindent 12 }} + {{- with .Values.configmapReload.containerPort }} + ports: + - containerPort: {{ . }} + {{- end }} + {{- with .Values.configmapReload.securityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: config + mountPath: /etc/alertmanager + {{- if .Values.configmapReload.extraVolumeMounts }} + {{- toYaml .Values.configmapReload.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- end }} + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.securityContext | nindent 12 }} + image: "{{ include "alertmanager.image" . }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + env: + - name: POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP + {{- if .Values.extraEnv }} + {{- toYaml .Values.extraEnv | nindent 12 }} + {{- end }} + {{- with .Values.command }} + command: + {{- toYaml . | nindent 12 }} + {{- end }} + args: + - --storage.path=/alertmanager + {{- if not (hasKey .Values.extraArgs "config.file") }} + - --config.file=/etc/alertmanager/alertmanager.yml + {{- end }} + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - --cluster.advertise-address=[$(POD_IP)]:{{ $svcClusterPort }} + - --cluster.listen-address=0.0.0.0:{{ $svcClusterPort }} + {{- end }} + {{- if gt (int .Values.replicaCount) 1}} + {{- $fullName := include "alertmanager.fullname" . 
}} + {{- range $i := until (int .Values.replicaCount) }} + - --cluster.peer={{ $fullName }}-{{ $i }}.{{ $fullName }}-headless:{{ $svcClusterPort }} + {{- end }} + {{- end }} + {{- if .Values.additionalPeers }} + {{- range $item := .Values.additionalPeers }} + - --cluster.peer={{ $item }} + {{- end }} + {{- end }} + {{- range $key, $value := .Values.extraArgs }} + - --{{ $key }}={{ $value }} + {{- end }} + ports: + - name: http + containerPort: 9093 + protocol: TCP + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - name: clusterpeer-tcp + containerPort: {{ $svcClusterPort }} + protocol: TCP + - name: clusterpeer-udp + containerPort: {{ $svcClusterPort }} + protocol: UDP + {{- end }} + livenessProbe: + {{- toYaml .Values.livenessProbe | nindent 12 }} + readinessProbe: + {{- toYaml .Values.readinessProbe | nindent 12 }} + resources: + {{- toYaml .Values.resources | nindent 12 }} + volumeMounts: + {{- if .Values.config.enabled }} + - name: config + mountPath: /etc/alertmanager + {{- end }} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + - name: storage + mountPath: /alertmanager + {{- if .Values.extraVolumeMounts }} + {{- toYaml .Values.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- with .Values.extraContainers }} + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + {{- if .Values.config.enabled }} + - name: config + configMap: + name: {{ include "alertmanager.fullname" . }} + {{- end }} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- end }} + {{- if .Values.extraVolumes }} + {{- toYaml .Values.extraVolumes | nindent 8 }} + {{- end }} + {{- if .Values.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: storage + spec: + accessModes: + {{- toYaml .Values.persistence.accessModes | nindent 10 }} + resources: + requests: + storage: {{ .Values.persistence.size }} + {{- if .Values.persistence.storageClass }} + {{- if (eq "-" .Values.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: {{ .Values.persistence.storageClass }} + {{- end }} + {{- end }} + {{- else }} + - name: storage + emptyDir: {} + {{- end }} diff --git a/charts/alertmanager/templates/tests/test-connection.yaml b/charts/alertmanager/templates/tests/test-connection.yaml new file mode 100644 index 000000000000..410eba5bd8ba --- /dev/null +++ b/charts/alertmanager/templates/tests/test-connection.yaml @@ -0,0 +1,20 @@ +{{- if .Values.testFramework.enabled }} +apiVersion: v1 +kind: Pod +metadata: + name: "{{ include "alertmanager.fullname" . }}-test-connection" + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.testFramework.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + containers: + - name: wget + image: busybox + command: ['wget'] + args: ['{{ include "alertmanager.fullname" . 
}}:{{ .Values.service.port }}'] + restartPolicy: Never +{{- end }} diff --git a/charts/alertmanager/unittests/__snapshot__/ingress_test.yaml.snap b/charts/alertmanager/unittests/__snapshot__/ingress_test.yaml.snap new file mode 100644 index 000000000000..b10604e45f9c --- /dev/null +++ b/charts/alertmanager/unittests/__snapshot__/ingress_test.yaml.snap @@ -0,0 +1,26 @@ +should match snapshot of default values: + 1: | + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + labels: + app.kubernetes.io/instance: RELEASE-NAME + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: alertmanager + app.kubernetes.io/version: 1.0.0 + helm.sh/chart: alertmanager-1.0.0 + name: RELEASE-NAME-alertmanager + namespace: NAMESPACE + spec: + ingressClassName: nginx-test + rules: + - host: alertmanager.domain.com + http: + paths: + - backend: + service: + name: RELEASE-NAME-alertmanager + port: + number: 9093 + path: / + pathType: ImplementationSpecific diff --git a/charts/alertmanager/unittests/ingress_test.yaml b/charts/alertmanager/unittests/ingress_test.yaml new file mode 100644 index 000000000000..41b5e9c81c68 --- /dev/null +++ b/charts/alertmanager/unittests/ingress_test.yaml @@ -0,0 +1,43 @@ +suite: test ingress +templates: + - ingress.yaml +tests: + - it: should be empty if ingress is not enabled + asserts: + - hasDocuments: + count: 0 + - it: should have apiVersion networking.k8s.io/v1 for k8s >= 1.19 + set: + ingress.enabled: true + capabilities: + majorVersion: 1 + minorVersion: 19 + asserts: + - hasDocuments: + count: 1 + - isKind: + of: Ingress + - isAPIVersion: + of: networking.k8s.io/v1 + - it: should have an ingressClassName for k8s >= 1.19 + set: + ingress.enabled: true + ingress.className: nginx-test + capabilities: + majorVersion: 1 + minorVersion: 19 + asserts: + - hasDocuments: + count: 1 + - equal: + path: spec.ingressClassName + value: nginx-test + - it: should match snapshot of default values + set: + ingress.enabled: true + ingress.className: nginx-test + chart: + version: 1.0.0 + appVersion: 1.0.0 + asserts: + - matchSnapshot: { } diff --git a/charts/alertmanager/values.schema.json b/charts/alertmanager/values.schema.json new file mode 100644 index 000000000000..1cab60e23bbf --- /dev/null +++ b/charts/alertmanager/values.schema.json @@ -0,0 +1,488 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "properties": { + "additionalPeers": { + "type": "array" + }, + "affinity": { + "properties": {}, + "type": "object" + }, + "automountServiceAccountToken": { + "type": "boolean" + }, + "command": { + "type": "array" + }, + "config": { + "properties": { + "enabled": { + "type": "boolean" + }, + "global": { + "properties": {}, + "type": "object" + }, + "receivers": { + "items": { + "properties": { + "name": { + "type": "string" + } + }, + "type": "object" + }, + "type": "array" + }, + "route": { + "properties": { + "group_interval": { + "type": "string" + }, + "group_wait": { + "type": "string" + }, + "receiver": { + "type": "string" + }, + "repeat_interval": { + "type": "string" + } + }, + "type": "object" + }, + "templates": { + "items": { + "type": "string" + }, + "type": "array" + } + }, + "type": "object" + }, + "configAnnotations": { + "properties": {}, + "type": "object" + }, + "configmapReload": { + "properties": { + "enabled": { + "type": "boolean" + }, + "extraArgs": { + "properties": {}, + "type": "object" + }, + "extraEnv": { + "type": "array" + }, + "extraVolumeMounts": { + "type": "array" + }, + "image": { + "properties": { + "defaultRegistry": 
{ + "type": "string" + }, + "pullPolicy": { + "type": "string" + }, + "registry": { + "type": "string" + }, + "repository": { + "type": "string" + }, + "tag": { + "type": "string" + } + }, + "type": "object" + }, + "name": { + "type": "string" + }, + "resources": { + "properties": {}, + "type": "object" + }, + "securityContext": { + "properties": {}, + "type": "object" + } + }, + "type": "object" + }, + "dnsConfig": { + "properties": {}, + "type": "object" + }, + "extraArgs": { + "properties": {}, + "type": "object" + }, + "extraContainers": { + "type": "array" + }, + "extraEnv": { + "type": "array" + }, + "extraInitContainers": { + "type": "array" + }, + "extraSecretMounts": { + "type": "array" + }, + "extraVolumeMounts": { + "type": "array" + }, + "extraVolumes": { + "type": "array" + }, + "fullnameOverride": { + "type": "string" + }, + "global": { + "properties": {}, + "type": "object" + }, + "hostAliases": { + "type": "array" + }, + "image": { + "properties": { + "defaultRegistry": { + "type": "string" + }, + "pullPolicy": { + "type": "string" + }, + "registry": { + "type": "string" + }, + "repository": { + "type": "string" + }, + "tag": { + "type": "string" + } + }, + "type": "object" + }, + "imagePullSecrets": { + "type": "array" + }, + "ingress": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + }, + "className": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "hosts": { + "items": { + "properties": { + "host": { + "type": "string" + }, + "paths": { + "items": { + "properties": { + "path": { + "type": "string" + }, + "pathType": { + "type": "string" + } + }, + "type": "object" + }, + "type": "array" + } + }, + "type": "object" + }, + "type": "array" + }, + "tls": { + "type": "array" + } + }, + "type": "object" + }, + "ingressPerReplica": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + }, + "className": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "hostDomain": { + "type": "string" + }, + "hostPrefix": { + "type": "string" + }, + "labels": { + "properties": {}, + "type": "object" + }, + "pathType": { + "type": "string" + }, + "paths": { + "items": { + "type": "string" + }, + "type": "array" + }, + "tlsSecretName": { + "type": "string" + }, + "tlsSecretPerReplica": { + "properties": { + "enabled": { + "type": "boolean" + }, + "prefix": { + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "livenessProbe": { + "properties": { + "httpGet": { + "properties": { + "path": { + "type": "string" + }, + "port": { + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "minReadySeconds": { + "type": "integer" + }, + "nameOverride": { + "type": "string" + }, + "namespaceOverride": { + "type": "string" + }, + "nodeSelector": { + "properties": {}, + "type": "object" + }, + "persistence": { + "properties": { + "accessModes": { + "items": { + "type": "string" + }, + "type": "array" + }, + "enabled": { + "type": "boolean" + }, + "size": { + "type": "string" + } + }, + "type": "object" + }, + "podAnnotations": { + "properties": {}, + "type": "object" + }, + "podAntiAffinity": { + "type": "string" + }, + "podAntiAffinityTopologyKey": { + "type": "string" + }, + "podDisruptionBudget": { + "properties": {}, + "type": "object" + }, + "podLabels": { + "properties": {}, + "type": "object" + }, + "podSecurityContext": { + "properties": { + "fsGroup": { + "type": "integer" + } + }, + "type": "object" + }, + "priorityClassName": { + "type": "string" + 
}, + "readinessProbe": { + "properties": { + "httpGet": { + "properties": { + "path": { + "type": "string" + }, + "port": { + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "replicaCount": { + "type": "integer" + }, + "resources": { + "properties": {}, + "type": "object" + }, + "revisionHistoryLimit": { + "type": "integer" + }, + "schedulerName": { + "type": "string" + }, + "securityContext": { + "properties": { + "runAsGroup": { + "type": "integer" + }, + "runAsNonRoot": { + "type": "boolean" + }, + "runAsUser": { + "type": "integer" + } + }, + "type": "object" + }, + "service": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + }, + "clusterPort": { + "type": "integer" + }, + "extraPorts": { + "type": "array" + }, + "labels": { + "properties": {}, + "type": "object" + }, + "loadBalancerIP": { + "type": "string" + }, + "loadBalancerSourceRanges": { + "type": "array" + }, + "port": { + "type": "integer" + }, + "type": { + "type": "string" + } + }, + "type": "object" + }, + "serviceAccount": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + }, + "create": { + "type": "boolean" + }, + "name": { + "type": "string" + } + }, + "type": "object" + }, + "servicePerReplica": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + }, + "enabled": { + "type": "boolean" + }, + "externalTrafficPolicy": { + "type": "string" + }, + "loadBalancerSourceRanges": { + "type": "array" + }, + "type": { + "type": "string" + } + }, + "type": "object" + }, + "statefulSet": { + "properties": { + "annotations": { + "properties": {}, + "type": "object" + } + }, + "type": "object" + }, + "templates": { + "properties": {}, + "type": "object" + }, + "testFramework": { + "properties": { + "annotations": { + "properties": { + "helm.sh/hook": { + "type": "string" + } + }, + "type": "object" + }, + "enabled": { + "type": "boolean" + } + }, + "type": "object" + }, + "tolerations": { + "type": "array" + }, + "topologySpreadConstraints": { + "type": "array" + } + }, + "type": "object" +} \ No newline at end of file diff --git a/charts/alertmanager/values.yaml b/charts/alertmanager/values.yaml new file mode 100644 index 000000000000..28a9487f2dcd --- /dev/null +++ b/charts/alertmanager/values.yaml @@ -0,0 +1,376 @@ +# yaml-language-server: $schema=values.schema.json +# Default values for alertmanager. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +global: {} + +replicaCount: 1 + +# Number of old history to retain to allow rollback +# Default Kubernetes value is set to 10 +revisionHistoryLimit: 10 + +image: + defaultRegistry: quay.io + registry: "" + repository: prometheus/alertmanager + pullPolicy: IfNotPresent + # Overrides the image tag whose default is the chart appVersion. + tag: "" + +extraArgs: {} + +## Additional Alertmanager Secret mounts +# Defines additional mounts with secrets. Secrets must be manually created in the namespace. 
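+# Each entry becomes a secret volume plus a corresponding volumeMount on the Alertmanager container (see templates/statefulset.yaml).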
+extraSecretMounts: []
+  # - name: secret-files
+  #   mountPath: /etc/secrets
+  #   subPath: ""
+  #   secretName: alertmanager-secret-files
+  #   readOnly: true
+
+imagePullSecrets: []
+nameOverride: ""
+fullnameOverride: ""
+## namespaceOverride overrides the namespace which the resources will be deployed in
+namespaceOverride: ""
+
+automountServiceAccountToken: true
+
+serviceAccount:
+  # Specifies whether a service account should be created
+  create: true
+  # Annotations to add to the service account
+  annotations: {}
+  # The name of the service account to use.
+  # If not set and create is true, a name is generated using the fullname template
+  name: ""
+
+# Sets priorityClassName in alertmanager pod
+priorityClassName: ""
+
+# Sets schedulerName in alertmanager pod
+schedulerName: ""
+
+podSecurityContext:
+  fsGroup: 65534
+dnsConfig: {}
+  # nameservers:
+  #   - 1.2.3.4
+  # searches:
+  #   - ns1.svc.cluster-domain.example
+  #   - my.dns.search.suffix
+  # options:
+  #   - name: ndots
+  #     value: "2"
+  #   - name: edns0
+hostAliases: []
+  # - ip: "127.0.0.1"
+  #   hostnames:
+  #     - "foo.local"
+  #     - "bar.local"
+  # - ip: "10.1.2.3"
+  #   hostnames:
+  #     - "foo.remote"
+  #     - "bar.remote"
+securityContext:
+  # capabilities:
+  #   drop:
+  #     - ALL
+  # readOnlyRootFilesystem: true
+  runAsUser: 65534
+  runAsNonRoot: true
+  runAsGroup: 65534
+
+additionalPeers: []
+
+## Additional InitContainers to initialize the pod
+##
+extraInitContainers: []
+
+## Additional containers to add to the StatefulSet. This allows setting up sidecar containers,
+## e.g. a proxy to integrate Alertmanager with an external tool like Teams that has no direct integration.
+##
+extraContainers: []
+
+livenessProbe:
+  httpGet:
+    path: /
+    port: http
+
+readinessProbe:
+  httpGet:
+    path: /
+    port: http
+
+service:
+  annotations: {}
+  labels: {}
+  type: ClusterIP
+  port: 9093
+  clusterPort: 9094
+  loadBalancerIP: ""  # Assign an external IP when the Service type is LoadBalancer
+  loadBalancerSourceRanges: []  # Only allow access to loadBalancerIP from these IPs
+  # If you want to force a specific nodePort, set it here. Must be used with service.type=NodePort
+  # nodePort:
+
+  # Optionally specify extra list of additional ports exposed on both services
+  extraPorts: []
+
+# Configuration for creating a separate Service for each statefulset Alertmanager replica
+#
+servicePerReplica:
+  enabled: false
+  annotations: {}
+
+  # LoadBalancer source IP ranges
+  # Only used if servicePerReplica.type is "LoadBalancer"
+  loadBalancerSourceRanges: []
+
+  # Denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints
+  #
+  externalTrafficPolicy: Cluster
+
+  # Service type
+  #
+  type: ClusterIP
+
+ingress:
+  enabled: false
+  className: ""
+  annotations: {}
+    # kubernetes.io/ingress.class: nginx
+    # kubernetes.io/tls-acme: "true"
+  hosts:
+    - host: alertmanager.domain.com
+      paths:
+        - path: /
+          pathType: ImplementationSpecific
+  tls: []
+  #  - secretName: chart-example-tls
+  #    hosts:
+  #      - alertmanager.domain.com
+
+# Configuration for creating an Ingress that will map to each Alertmanager replica service
+# alertmanager.servicePerReplica must be enabled
+#
+ingressPerReplica:
+  enabled: false
+
+  # className for the ingresses
+  #
+  className: ""
+
+  annotations: {}
+  labels: {}
+
+  # Final form of the hostname for each per replica ingress is
+  # {{ ingressPerReplica.hostPrefix }}-{{ $replicaNumber }}.{{ ingressPerReplica.hostDomain }}
+  #
+  # Prefix for the per replica ingress that will have `-$replicaNumber`
+  # appended to the end
+  hostPrefix: "alertmanager"
+  # Domain that will be used for the per replica ingress
+  hostDomain: "domain.com"
+
+  # Paths to use for ingress rules
+  #
+  paths:
+    - /
+
+  # PathType for ingress rules
+  #
+  pathType: ImplementationSpecific
+
+  # Secret name containing the TLS certificate for alertmanager per replica ingress
+  # Secret must be manually created in the namespace
+  tlsSecretName: ""
+
+  # Separate secret for each per replica Ingress. Can be used together with cert-manager
+  #
+  tlsSecretPerReplica:
+    enabled: false
+    # Final form of the secret for each per replica ingress is
+    # {{ tlsSecretPerReplica.prefix }}-{{ $replicaNumber }}
+    #
+    prefix: "alertmanager"
+
+resources: {}
+  # We usually recommend not to specify default resources and to leave this as a conscious
+  # choice for the user. This also increases chances charts run on environments with little
+  # resources, such as Minikube. If you do want to specify resources, uncomment the following
+  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+  # limits:
+  #   cpu: 100m
+  #   memory: 128Mi
+  # requests:
+  #   cpu: 10m
+  #   memory: 32Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+
+## Pod anti-affinity can prevent the scheduler from placing Alertmanager replicas on the same node.
+## The default value "soft" means that the scheduler should *prefer* not to schedule two replica pods onto the same node, but no guarantee is provided.
+## The value "hard" means that the scheduler is *required* not to schedule two replica pods onto the same node.
+## The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured.
+##
+podAntiAffinity: ""
+
+## If anti-affinity is enabled, sets the topologyKey to use for anti-affinity.
+## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone
+##
+podAntiAffinityTopologyKey: kubernetes.io/hostname
+
+## Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
+## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +topologySpreadConstraints: [] + # - maxSkew: 1 + # topologyKey: failure-domain.beta.kubernetes.io/zone + # whenUnsatisfiable: DoNotSchedule + # labelSelector: + # matchLabels: + # app.kubernetes.io/instance: alertmanager + +statefulSet: + annotations: {} + +## Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to +## be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). +## This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds +## feature gate. +## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds +minReadySeconds: 0 + +podAnnotations: {} +podLabels: {} + +# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ +podDisruptionBudget: {} + # maxUnavailable: 1 + # minAvailable: 1 + +command: [] + +persistence: + enabled: true + ## Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. + ## + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 50Mi + +configAnnotations: {} + ## For example if you want to provide private data from a secret vault + ## https://github.com/banzaicloud/bank-vaults/tree/main/charts/vault-secrets-webhook + ## P.s.: Add option `configMapMutation: true` for vault-secrets-webhook + # vault.security.banzaicloud.io/vault-role: "admin" + # vault.security.banzaicloud.io/vault-addr: "https://vault.vault.svc.cluster.local:8200" + # vault.security.banzaicloud.io/vault-skip-verify: "true" + # vault.security.banzaicloud.io/vault-path: "kubernetes" + ## Example for inject secret + # slack_api_url: '${vault:secret/data/slack-hook-alerts#URL}' + +config: + enabled: true + global: {} + # slack_api_url: '' + + templates: + - '/etc/alertmanager/*.tmpl' + + receivers: + - name: default-receiver + # slack_configs: + # - channel: '@you' + # send_resolved: true + + route: + group_wait: 10s + group_interval: 5m + receiver: default-receiver + repeat_interval: 3h + +## Monitors ConfigMap changes and POSTs to a URL +## Ref: https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader +## +configmapReload: + ## If false, the configmap-reload container will not be deployed + ## + enabled: false + + ## configmap-reload container name + ## + name: configmap-reload + + ## configmap-reload container image + ## + image: + defaultRegistry: quay.io + registry: "" + repository: prometheus-operator/prometheus-config-reloader + tag: v0.66.0 + pullPolicy: IfNotPresent + + # containerPort: 9533 + + ## configmap-reload resource requests and limits + ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## + resources: {} + + extraArgs: {} + + ## Optionally specify extra list of additional volumeMounts + extraVolumeMounts: [] + # - name: extras + # mountPath: /usr/share/extras + # readOnly: true + + ## Optionally specify extra environment variables to add to alertmanager container + extraEnv: [] + # - name: FOO + # value: BAR + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsUser: 65534 + # runAsNonRoot: true + # runAsGroup: 65534 + +templates: {} +# alertmanager.tmpl: |- + 
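+# For example (a hypothetical template; each key under `templates` becomes a file in
+# /etc/alertmanager, which Alertmanager loads via config.templates '/etc/alertmanager/*.tmpl'):
+# custom_title.tmpl: |-
+#   {{ define "myorg.title" }}[{{ .Status | toUpper }}] {{ .CommonLabels.alertname }}{{ end }}
+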
+## Optionally specify extra list of additional volumeMounts +extraVolumeMounts: [] + # - name: extras + # mountPath: /usr/share/extras + # readOnly: true + +## Optionally specify extra list of additional volumes +extraVolumes: [] + # - name: extras + # emptyDir: {} + +## Optionally specify extra environment variables to add to alertmanager container +extraEnv: [] + # - name: FOO + # value: BAR + +testFramework: + enabled: false + annotations: + "helm.sh/hook": test-success + # "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
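+
+# When testFramework.enabled is true, the connection test pod
+# (templates/tests/test-connection.yaml) can be run with: helm test [RELEASE_NAME]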