[scalar-manager] Create the Helm chart for Scalar Manager #254
Changes from 2 commits
@@ -5,13 +5,17 @@ type: application
 version: 2.0.0-SNAPSHOT
 appVersion: 2.0.0-SNAPSHOT
 deprecated: false
 icon: https://scalar-labs.com/wp-content/themes/scalar/assets/img/logo_scalar.svg
 keywords:
 - scalardb
 - scalardl
 - scalar-manager
+- scalardb-cluster
+- scalardl-ledger
+- scalardl-auditor
+- scalar-admin-for-kubernetes
 home: https://scalar-labs.com/
 sources:
 - https://github.com/scalar-labs/scalar-manager
 - https://github.com/scalar-labs/scalar-manager-api
 - https://github.com/scalar-labs/scalar-manager-web
 maintainers:
 - name: Takanori Yokoyama
   email: [email protected]
@@ -54,9 +54,9 @@ app.kubernetes.io/instance: {{ .Release.Name }}
 Create the name of the service account to use
 */}}
 {{- define "scalar-manager.serviceAccountName" -}}
-{{- if .Values.serviceAccount.serviceAccountName }}
-{{- .Values.serviceAccount.serviceAccountName }}
+{{- if .Values.serviceAccount.create }}
+{{- default (include "scalar-manager.fullname" .) .Values.serviceAccount.name }}
 {{- else }}
-{{- print (include "scalar-manager.fullname" .) "-sa" | trunc 63 | trimSuffix "-" }}
+{{- default "default" .Values.serviceAccount.name }}
 {{- end }}
 {{- end }}

Comment: As I mentioned in another comment, to keep configuration consistent between the Scalar Helm charts, I think it would be better to use the same implementation as the ScalarDB Cluster chart. What do you think?
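The updated helper follows the convention used by `helm create` (and, per the review, the ScalarDB Cluster chart): `serviceAccount.create` decides whether the chart creates a ServiceAccount, and `serviceAccount.name` overrides the generated name. A minimal `values.yaml` sketch of the three resulting cases (the concrete account names are hypothetical):

```yaml
# Case 1: the chart creates a ServiceAccount named after the release (default).
serviceAccount:
  create: true
  name: ""       # empty: falls back to the "scalar-manager.fullname" helper

# Case 2: the chart creates a ServiceAccount with an explicit name.
#serviceAccount:
#  create: true
#  name: my-custom-sa       # hypothetical

# Case 3: reuse an existing ServiceAccount; with no name given,
# the namespace's "default" account is used.
#serviceAccount:
#  create: false
#  name: existing-sa        # hypothetical
```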
This file was deleted.
This file was deleted.
This file was deleted.
@@ -0,0 +1,10 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
rules:
- apiGroups: ["", "apps", "batch", "rbac.authorization.k8s.io"]
  resources: ["pods", "deployments", "services", "namespaces", "configmaps", "secrets", "cronjobs", "serviceaccounts", "roles", "rolebindings", "jobs"]
  verbs: ["get", "list", "create", "patch", "delete", "update"]

Comment: Just a question. What is the reason why we use a ClusterRole here? In other words, does Scalar Manager access resources across namespaces (or cluster-scoped resources) that need the cluster-wide permissions?
Comment: I put all resources and verbs in one rule for convenience.

Comment: In general, you should define separate rules for each API group. For example, the current configuration grants permissions to the `pods` resource in the `apps`, `batch`, and `rbac.authorization.k8s.io` API groups as well, even though `pods` only exists in the core (`""`) group.

Comment: Thank you for your advice. The rules became:

rules:
- apiGroups: [""]
  resources: ["pods", "services", "namespaces", "configmaps", "secrets", "serviceaccounts"]
  verbs: ["get", "list", "create", "patch", "delete", "update"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "list", "create", "delete"]

In the core group, for example, this application doesn't delete or update pods. If so, should we separate the resources with proper verbs in general practice? Like, for example:

rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets", "serviceaccounts"]
  verbs: ["get", "list", "create", "patch", "delete", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "namespaces"]
  verbs: ["get", "list"]

Comment: Yes, that is correct. Based on the principle of least privilege, unnecessary privileges should not be granted.

Comment: @superbrothers Sorry, I recalled that we probably need these permissions because the application in this Helm chart (Scalar Manager) uses Helm to install another Helm chart. Allow me to keep them.
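For reference, the least-privilege split discussed above still has to live in a ClusterRole as long as cluster-scoped resources (such as `namespaces`) are involved, because a namespaced `Role` can only grant access to namespaced resources within its own namespace. If cluster-wide access ever became unnecessary, a sketch of the namespaced alternative could look like this (the name and namespace are hypothetical, and the rules are illustrative, not the chart's actual requirements):

```yaml
# Hypothetical namespaced alternative to a ClusterRole: only covers
# namespaced resources, and only within metadata.namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scalar-manager-example   # hypothetical
  namespace: default             # hypothetical
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets", "serviceaccounts"]
  verbs: ["get", "list", "create", "patch", "delete", "update"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
```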
@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
  name: {{ include "scalar-manager.fullname" . }}
  namespace: {{ .Release.Namespace }}
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: {{ include "scalar-manager.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
@@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/app: {{ include "scalar-manager.fullname" . }}
    {{- include "scalar-manager.labels" . | nindent 4 }}
data:
  managed-clusters: "[]"
  paused-states: "[]"
  paused-states-updated-at: "0"

Comment: Just a question. I think this ConfigMap is used for storing cluster and pause information, and this chart creates it at install time. Do we need to create this ConfigMap in the chart? I want to confirm whether the Scalar Manager API can create it by itself. (If Scalar Manager can create it, we may not need this manifest.)

Comment: As we discussed, we will update the Scalar Manager API to create the ConfigMap itself.

Comment: As we discussed, we noticed an issue with this approach.

Comment: We address this issue on the Scalar Manager API side.

Comment (suggested change): I think this ConfigMap name is also used for the other resources in this chart. So, I think it would be better for the name to describe what this ConfigMap is for. What do you think?

Comment: I think it would be better to add the label in the common `scalar-manager.labels` helper. In that case, we can add it in one place. Is there any reason why you set `app.kubernetes.io/app` here?

Comment: As we discussed, we will move this label to …
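For illustration, assuming a release named `my-manager` (hypothetical) installed into `default`, the template above would render roughly as follows. The `app.kubernetes.io/app` label matches the `CONFIG_MAP_LABEL_NAME` / `CONFIG_MAP_LABEL_VALUE` environment variables passed to the API container elsewhere in this chart, which suggests the API locates the ConfigMap by that label:

```yaml
# Rendered sketch (assumed release name "my-manager"; the labels added by the
# "scalar-manager.labels" helper are omitted for brevity).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-manager-scalar-manager
  namespace: default
  labels:
    app.kubernetes.io/app: my-manager-scalar-manager
data:
  managed-clusters: "[]"          # presumably a JSON array of registered clusters
  paused-states: "[]"             # presumably a JSON array of pause records
  paused-states-updated-at: "0"   # presumably a timestamp of the last update
```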
@@ -0,0 +1,101 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "scalar-manager.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "scalar-manager.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/scalar-manager/configmap.yaml") . | sha256sum }}

Comment (on the checksum/config annotation): I think this annotation is for restarting the pods automatically when the ConfigMap content changes. If the above understanding is correct, this annotation doesn't seem necessary here. So, I think we can remove this annotation.

Comment: As we discussed, we will remove this annotation.

Comment: As we discussed in another comment, we decided to use …
        {{- if .Values.podAnnotations }}
        {{- toYaml .Values.podAnnotations | nindent 8 }}
        {{- end }}
      labels:
        {{- include "scalar-manager.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- toYaml .Values.podLabels | nindent 8 }}
        {{- end }}
    spec:
      restartPolicy: Always
      serviceAccountName: {{ include "scalar-manager.serviceAccountName" . }}
      automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
      terminationGracePeriodSeconds: 90

Comment (on terminationGracePeriodSeconds): Just a question. How long does Scalar Manager take to stop gracefully?

Comment: As we discussed, we can use the default value, so we can remove this configuration from here.
      containers:
        - name: {{ .Chart.Name }}-api
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag | default .Chart.AppVersion }}"
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          ports:
            - containerPort: 8080
          imagePullPolicy: {{ .Values.api.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          env:
            - name: GRAFANA_KUBERNETES_SERVICE_LABEL_NAME
              value: {{ .Values.api.grafanaKubernetesServiceLabelName | quote }}
            - name: GRAFANA_KUBERNETES_SERVICE_LABEL_VALUE
              value: {{ .Values.api.grafanaKubernetesServiceLabelValue | quote }}
            - name: GRAFANA_KUBERNETES_SERVICE_PORT_NAME
              value: {{ .Values.api.grafanaKubernetesServicePortName | quote }}
            - name: PROMETHEUS_KUBERNETES_SERVICE_LABEL_NAME
              value: {{ .Values.api.prometheusKubernetesServiceLabelName | quote }}
            - name: PROMETHEUS_KUBERNETES_SERVICE_LABEL_VALUE
              value: {{ .Values.api.prometheusKubernetesServiceLabelValue | quote }}
            - name: PROMETHEUS_KUBERNETES_SERVICE_PORT_NAME
              value: {{ .Values.api.prometheusKubernetesServicePortName | quote }}
            - name: LOKI_KUBERNETES_SERVICE_LABEL_NAME
              value: {{ .Values.api.lokiKubernetesServiceLabelName | quote }}
            - name: LOKI_KUBERNETES_SERVICE_LABEL_VALUE
              value: {{ .Values.api.lokiKubernetesServiceLabelValue | quote }}
            - name: LOKI_KUBERNETES_SERVICE_PORT_NAME
              value: {{ .Values.api.lokiKubernetesServicePortName | quote }}
            - name: HELM_SCALAR_REPOSITORY_NAME
              value: {{ .Values.api.helmScalarRepositoryName | quote }}
            - name: HELM_SCALAR_REPOSITORY_URL
              value: {{ .Values.api.helmScalarRepositoryUrl | quote }}
            - name: HELM_SCALAR_ADMIN_FOR_KUBERNETES_CHART_NAME
              value: {{ .Values.api.helmScalarAdminForKubernetesChartName | quote }}
            - name: HELM_SCALAR_ADMIN_FOR_KUBERNETES_CHART_VERSION
              value: {{ .Values.api.helmScalarAdminForKubernetesChartVersion | quote }}
            - name: CONFIG_MAP_LABEL_NAME
              value: "app.kubernetes.io/app"
            - name: CONFIG_MAP_LABEL_VALUE
              value: {{ include "scalar-manager.fullname" . | quote }}
            - name: PAUSED_STATE_RETENTION_STORAGE
              value: {{ .Values.api.pausedStateRetentionStorage | quote }}
            - name: PAUSED_STATE_RETENTION_MAX_NUMBER
              value: {{ .Values.api.pausedStateRetentionMaxNumber | quote }}
        - name: {{ .Chart.Name }}-web

Comment: If I remember correctly, Scalar Manager works as follows.

+-[Kubernetes Cluster A]---+  +-[Kubernetes Cluster B]---+  +-[Kubernetes Cluster C]---+
|                          |  |                          |  |                          |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|  |  Scalar products   |  |  |  |  Scalar products   |  |  |  |  Scalar products   |  |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|                          |  |                          |  |                          |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|  | Scalar Manager API |  |  |  | Scalar Manager API |  |  |  | Scalar Manager API |  |
|  +--------------------+  |  |  +--------------------+  |  |  +--------------------+  |
|            |             |  |            |             |  |            |             |
+------------+-------------+  +------------+-------------+  +------------+-------------+
             |                             |                             |
             |                             |                             |
             |                             |                             |
             +-----------------------------+-----------------------------+
                                           |
                                           |
                                           |
                                 +---------+----------+
                                 | Scalar Manager Web |
                                 +--------------------+

So, we don't need to deploy Scalar Manager Web in every Kubernetes cluster. Vice versa, it would be better to deploy Scalar Manager API in each Kubernetes cluster. Is my understanding correct? (Sorry, I might be missing some Scalar Manager specifications...)

Comment: As we discussed, at this time, we must deploy the API and Web containers together.
          image: "{{ .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Chart.AppVersion }}"
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          ports:
            - containerPort: 3000
          imagePullPolicy: {{ .Values.web.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}

Comment: Just a confirmation. Is it fine to set the same securityContext for both the api and web containers?

Comment: As we discussed, we will separate the securityContext settings for each container.
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
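The `checksum/config` annotation debated above is a common Helm pattern rather than anything Scalar-specific: hashing a template's rendered content into a pod annotation changes the pod template whenever the rendered config changes, so `helm upgrade` triggers a rolling restart. A generic sketch (the template path here is hypothetical):

```yaml
# Generic checksum-annotation pattern: any change to the rendered
# configmap.yaml changes the annotation value, which changes the pod
# template and forces a rolling restart on "helm upgrade".
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

Note that the pattern only takes effect on `helm upgrade`; ConfigMap changes written by an application at runtime do not re-render templates, which appears to be the reviewer's point about its usefulness here.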
@@ -1,16 +1,16 @@
 apiVersion: v1
 kind: Service
 metadata:
-  namespace: {{ .Release.Namespace }}
   name: {{ include "scalar-manager.fullname" . }}
+  namespace: {{ .Release.Namespace }}
   labels:
     {{- include "scalar-manager.labels" . | nindent 4 }}
 spec:
   type: {{ .Values.service.type }}
   ports:
-    - port: {{ .Values.service.port }}
-      targetPort: {{ .Values.scalarManager.port }}
-      protocol: TCP
-      name: http
+    - protocol: TCP
+      name: web
+      port: {{ .Values.service.port }}
+      targetPort: 3000
   selector:
     {{- include "scalar-manager.selectorLabels" . | nindent 4 }}
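As a sketch of how the ports map, assuming hypothetical values `service.type: ClusterIP` and `service.port: 8000` and a release named `my-manager`, the Service would render roughly as:

```yaml
# Hypothetical rendered Service: traffic to the Service's port 8000 is
# forwarded to port 3000 (the web container) on the pods matched by the
# selector labels.
apiVersion: v1
kind: Service
metadata:
  name: my-manager-scalar-manager
  namespace: default
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: web
      port: 8000
      targetPort: 3000
```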
@@ -1,9 +1,7 @@
-{{- if not .Values.serviceAccount.serviceAccountName }}
 apiVersion: v1
 kind: ServiceAccount
 metadata:
-  namespace: {{ .Release.Namespace }}
   name: {{ include "scalar-manager.serviceAccountName" . }}
+  namespace: {{ .Release.Namespace }}
   labels:
     {{- include "scalar-manager.labels" . | nindent 4 }}
-{{- end }}
Comment: Now we only have 1.0.0-SNAPSHOT images. I use 2.0.0-SNAPSHOT here because we will bump the SNAPSHOT image version to 2.0.0 after we officially release Scalar Manager.