diff --git a/stable/mission-control/Chart.yaml b/stable/mission-control/Chart.yaml
index b72d7dcbe..b159681c9 100644
--- a/stable/mission-control/Chart.yaml
+++ b/stable/mission-control/Chart.yaml
@@ -1,8 +1,8 @@
 apiVersion: v1
 name: mission-control
 description: A Helm chart for JFrog Mission Control
-version: 0.9.4
-appVersion: 3.4.3
+version: 1.0.0
+appVersion: 3.5.0
 home: https://jfrog.com/mission-control/
 icon: https://raw.githubusercontent.com/JFrogDev/artifactory-dcos/master/images/jfrog_med.png
 keywords:
diff --git a/stable/mission-control/README.md b/stable/mission-control/README.md
index ca846f86b..b42da7451 100644
--- a/stable/mission-control/README.md
+++ b/stable/mission-control/README.md
@@ -29,6 +29,23 @@ helm repo add jfrog https://charts.jfrog.io
 ```bash
 helm install --name mission-control jfrog/mission-control
 ```
+### Create a unique MC Key
+A Mission Control HA cluster uses a unique mc key. By default, the chart has one set in `values.yaml` (`missionControl.mcKey`).
+
+**This key is for demo purposes only and should not be used in a production environment!**
+
+You should generate a unique key and pass it to the chart at install/upgrade time.
+```bash
+# Create a key
+export MC_KEY=$(openssl rand -hex 16)
+echo ${MC_KEY}
+
+# Pass the created mc key to helm
+helm install --name mission-control --set missionControl.mcKey=${MC_KEY} jfrog/mission-control
+```
+
+**NOTE:** Make sure to pass the same mc key on all future calls to `helm install` and `helm upgrade`! This means always passing `--set missionControl.mcKey=${MC_KEY}`.
+
 ## Set Mission Control base URL
 * Get mission-control url by running following commands:
@@ -45,13 +62,62 @@ helm upgrade --name mission-control --set missionControl.missionControlUrl=$MISS
 **NOTE:** It might take a few minutes for Mission Control's public IP to become available, and the nodes to complete initial setup. Follow the instructions outputted by the install command to get the Mission Control IP and URL to access it.
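The exact commands depend on your environment; the following is a minimal sketch, assuming the release is named `mission-control`, its service ends up with that same name and is of type `LoadBalancer`, and `${MC_KEY}` is the key created above (adjust the service name and namespace to your setup):

```bash
# Watch the service until an external IP is assigned (service name assumed to be "mission-control")
kubectl get svc -w mission-control

# Capture the LoadBalancer IP and build the base URL (external port defaults to 80)
export SERVICE_IP=$(kubectl get svc mission-control -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export MISSION_CONTROL_URL="http://${SERVICE_IP}"

# Set the base URL on the running release, passing the same mc key used at install time
helm upgrade mission-control jfrog/mission-control \
  --set missionControl.missionControlUrl=${MISSION_CONTROL_URL} \
  --set missionControl.mcKey=${MC_KEY}
```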
-### Updating Mission Control
+## Upgrade
 Once you have a new chart version, you can update your deployment with
 ```
 helm upgrade mission-control jfrog/mission-control
 ```
-### Use an external Database
+**NOTE:** Check for any version specific upgrade notes in [CHANGELOG.md]
+
+### Non-compatible upgrades
+In cases where a new version is not compatible with the existing deployed version (see CHANGELOG.md), you should:
+* Deploy the new version alongside the old version (set a new release name)
+* Copy configurations and data from the old deployment to the new one (the following instructions were tested for a chart migration from 0.9.x to 1.0.0)
+
+Copy data and config from the old deployment to the local filesystem:
+```bash
+kubectl cp <old_pod_name>:/usr/share/elasticsearch/data <local_path>/mission-control-data/elastic_data -n <old_namespace>
+kubectl cp <old_postgresql_pod_name>:/var/lib/postgresql/data <local_path>/mission-control-data/postgres_data -n <old_namespace>
+kubectl cp <old_pod_name>:/var/opt/jfrog/mission-control/etc/mission-control.properties <local_path>/mission-control-data/mission-control.properties -n <old_namespace> -c mission-control
+kubectl cp <old_pod_name>:/var/opt/jfrog/mission-control/data/security/mc.key <local_path>/mission-control-data/mc.key -n <old_namespace> -c mission-control
+```
+
+Copy data and config from the local filesystem to the new deployment:
+```bash
+kubectl cp <local_path>/mission-control-data/mc.key <new_pod_name>:/var/opt/jfrog/mission-control/data/security/mc.key -n <new_namespace> -c mission-control
+kubectl cp <local_path>/mission-control-data/mission-control.properties <new_pod_name>:/var/opt/jfrog/mission-control/etc/mission-control.properties -n <new_namespace> -c mission-control
+kubectl cp <local_path>/mission-control-data/elastic_data <new_pod_name>:/usr/share/elasticsearch -n <new_namespace> -c elasticsearch
+kubectl cp <local_path>/mission-control-data/postgres_data <new_postgresql_pod_name>:/var/lib/postgresql -n <new_namespace>
+
+kubectl exec -it <new_postgresql_pod_name> -n <new_namespace> -- bash
+  rm -fr /var/lib/postgresql/data
+  cp -fr /var/lib/postgresql/postgres_data/* /var/lib/postgresql/data/
+  rm -fr /var/lib/postgresql/postgres_data
+kubectl exec -it <new_pod_name> -n <new_namespace> -c elasticsearch -- bash
+  rm -fr /usr/share/elasticsearch/data
+  cp -fr /usr/share/elasticsearch/elastic_data/* /usr/share/elasticsearch/data
+  rm -fr /usr/share/elasticsearch/elastic_data
+```
+* Restart the new deployment
+```bash
+kubectl scale deployment <postgresql_deployment_name> --replicas=0 -n <new_namespace>
+kubectl scale statefulset <mission_control_statefulset_name> --replicas=0 -n <new_namespace>
+
+kubectl scale deployment <postgresql_deployment_name> --replicas=1 -n <new_namespace>
+kubectl scale statefulset <mission_control_statefulset_name> --replicas=1 -n <new_namespace>
+```
+* Once the new release is up and ready, update the mission-control base URL with the new DNS name
+ * Log in to the mission-control pod:
+```bash
+kubectl exec -it <new_pod_name> -n <new_namespace> -c mission-control -- bash
+```
+ * Update the mission-control base URL by running the Update Base URL call from the [Mission Control REST API](https://www.jfrog.com/confluence/display/MC/Mission+Control+REST+API#MissionControlRESTAPI-UpdateBaseURL)
+* A new mc.key will be generated after this upgrade; save a copy of this key. **NOTE**: It must be passed on all future calls to `helm install` and `helm upgrade`!
+```bash
+export MC_KEY=$(kubectl exec -it <new_pod_name> -n <new_namespace> -c mission-control -- cat /var/opt/jfrog/mission-control/data/security/mc.key)
+```
+* Remove the old release
+
+### Use an external Database
+
+#### PostgreSQL
 There are cases where you will want to use an external **PostgreSQL** and not the enclosed **PostgreSQL**.
 See more details on [configuring the database](https://www.jfrog.com/confluence/display/MC/Using+External+Databases#UsingExternalDatabases-ExternalizingPostgreSQL)
@@ -67,7 +133,7 @@ This can be done with the following parameters
 ```
 ```
 **NOTE:** You must set `postgresql.enabled=false` in order for the chart to use the `database.*` parameters. Without it, they will be ignored!
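For reference, a minimal sketch of such an install follows. `postgresql.enabled`, `database.host`, `database.port`, and `database.name` appear in this chart's templates; the `database.user` and `database.password` keys (and the `${DB_*}` variables) are assumptions here, so verify the exact key names in `values.yaml`, or use the existing-secrets option described in the next section.

```bash
# Point the chart at an external PostgreSQL instead of the bundled one.
# postgresql.enabled=false is required for the database.* values to take effect.
helm install --name mission-control \
  --set postgresql.enabled=false \
  --set database.host=${DB_HOST} \
  --set database.port=${DB_PORT} \
  --set database.name=${DB_NAME} \
  --set database.user=${DB_USER} \
  --set database.password=${DB_PASSWORD} \
  --set missionControl.mcKey=${MC_KEY} \
  jfrog/mission-control
```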
-#### Use existing secrets for PostgreSQL connection details
+##### Use existing secrets for PostgreSQL connection details
 You can use already existing secrets for managing the database connection details.
 Pass them to the install command with the following parameters
@@ -84,6 +150,20 @@ export POSTGRES_PASSWORD_SECRET_KEY=
 ...
 ```
+#### Elasticsearch
+
+There are cases where you will want to use an external **Elasticsearch** and not the enclosed **Elasticsearch**.
+
+This can be done with the following parameters
+```bash
+...
+--set elasticsearch.enabled=false \
+--set elasticsearch.url=${ES_URL} \
+--set elasticsearch.username=${ES_USERNAME} \
+--set elasticsearch.password=${ES_PASSWORD} \
+...
+```
+
 ### Logger sidecars
 This chart provides the option to add sidecars to tail various logs from Mission Control containers. See the available values in `values.yaml`
@@ -116,28 +196,11 @@ The following table lists the configurable parameters of the mission-control cha
 | `initContainerImage` | Init Container Image | `alpine:3.6` |
 | `imagePullPolicy` | Container pull policy | `IfNotPresent` |
 | `imagePullSecrets` | Docker registry pull secret | |
+| `replicaCount` | Number of replicas | `1` |
 | `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
 | `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the fullname template |
 | `rbac.create` | Specifies whether RBAC resources should be created | `true` |
 | `rbac.role.rules` | Rules to create | `[]` |
-| `mongodb.enabled` | Enable Mongodb | `false` |
-| `mongodb.image.tag` | Mongodb docker image tag | `3.6.8-debian-9` |
-| `mongodb.image.pullPolicy` | Mongodb Container pull policy | `IfNotPresent` |
-| `mongodb.persistence.enabled` | Mongodb persistence volume enabled | `true` |
-| `mongodb.persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
-| `mongodb.persistence.storageClass` | Storage class of backing PVC | `generic` |
-| `mongodb.persistence.size` | Mongodb persistence volume size | `50Gi` |
-| `mongodb.livenessProbe.initialDelaySeconds` | Mongodb delay before liveness probe is initiated | `40` |
-| `mongodb.readinessProbe.initialDelaySeconds` | Mongodb delay before readiness probe is initiated | `30` |
-| `mongodb.mongodbExtraFlags` | MongoDB additional command line flags | `["--wiredTigerCacheSizeGB=1"]` |
-| `mongodb.usePassword` | Enable password authentication | `false` |
-| `mongodb.db.adminUser` | Mongodb Database Admin User | `admin` |
-| `mongodb.db.adminPassword` | Mongodb Database Password for Admin user | ` ` |
-| `mongodb.db.mcUser` | Mongodb Database Mission Control User | `mission_platform` |
-| `mongodb.db.mcPassword` | Mongodb Database Password for Mission Control user | ` ` |
-| `mongodb.db.insightUser` | Mongodb Database Insight User | `jfrog_insight` |
-| `mongodb.db.insightPassword` | Mongodb Database password for Insight User | ` ` |
-| `mongodb.db.insightSchedulerDb` | Mongodb Database for Scheduler | `insight_scheduler` |
 | `postgresql.enabled` | Enable PostgreSQL | `true` |
 | `postgresql.imageTag` | PostgreSQL docker image tag | `9.6.11` |
 | `postgresql.image.pullPolicy` | PostgreSQL Container pull policy | `IfNotPresent` |
@@ -192,11 +255,13 @@ The following table lists the configurable parameters of the mission-control cha
 | `elasticsearch.javaOpts.xms` | Elasticsearch ES_JAVA_OPTS -Xms | ` ` |
 | `elasticsearch.javaOpts.xmx` | Elasticsearch ES_JAVA_OPTS -Xmx | ` ` |
 | `elasticsearch.env.clusterName` | Elasticsearch Cluster
Name | `es-cluster` | +| `elasticsearch.env.minimumMasterNodes` | The value for discovery.zen.minimum_master_nodes. Should be set to (replicaCount / 2) + 1 | `1` | | `logger.image.repository` | repository for logger image | `busybox` | | `logger.image.tag` | tag for logger image | `1.30` | | `missionControl.name` | Mission Control name | `mission-control` | | `missionControl.image` | Container image | `docker.jfrog.io/jfrog/mission-control` | | `missionControl.version` | Container image tag | `.Chart.AppVersion` | +| `missionControl.mcKey` | Mission Control mc Key. Can be generated with `openssl rand -hex 16` |`bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb`| | `missionControl.customInitContainers` | Custom init containers | ` ` | | `missionControl.service.type` | Mission Control service type | `LoadBalancer` | | `missionControl.externalPort` | Mission Control service external port | `80` | diff --git a/stable/mission-control/charts/elasticsearch/.helmignore b/stable/mission-control/charts/elasticsearch/.helmignore deleted file mode 100644 index f0c131944..000000000 --- a/stable/mission-control/charts/elasticsearch/.helmignore +++ /dev/null @@ -1,21 +0,0 @@ -# Patterns to ignore when building packages. -# This supports shell glob matching, relative path matching, and -# negation (prefixed with !). Only one pattern per line. -.DS_Store -# Common VCS dirs -.git/ -.gitignore -.bzr/ -.bzrignore -.hg/ -.hgignore -.svn/ -# Common backup files -*.swp -*.bak -*.tmp -*~ -# Various IDEs -.project -.idea/ -*.tmproj diff --git a/stable/mission-control/charts/elasticsearch/Chart.yaml b/stable/mission-control/charts/elasticsearch/Chart.yaml deleted file mode 100644 index f2799774b..000000000 --- a/stable/mission-control/charts/elasticsearch/Chart.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: v1 -description: A Helm chart for ElasticSearch -home: https://www.elastic.co/products/elasticsearch -icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg -maintainers: -- email: jainishs@jfrog.com - name: jainishshah17 -- email: eldada@jfrog.com - name: eldada -name: elasticsearch -version: 0.1.0 diff --git a/stable/mission-control/charts/elasticsearch/README.md b/stable/mission-control/charts/elasticsearch/README.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/stable/mission-control/charts/elasticsearch/templates/NOTES.txt b/stable/mission-control/charts/elasticsearch/templates/NOTES.txt deleted file mode 100644 index ab589d35f..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/NOTES.txt +++ /dev/null @@ -1,32 +0,0 @@ -Congratulations. You have just deployed Elasticsearch! - -Elasticsearch can be accessed: - - * Within your cluster, at the following DNS name at port 9200: - - {{ template "elasticsearch.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - - * From outside the cluster, run these commands in the same shell: - {{- if contains "NodePort" .Values.service.type }} - - export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.fullname" . }}) - export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") - echo http://$NODE_IP:$NODE_PORT - - {{- else if contains "LoadBalancer" .Values.service.type }} - - WARNING: You have likely exposed your Elasticsearch cluster direct to the internet. - Elasticsearch does not implement any security for public facing clusters by default. 
- As a minimum level of security; switch to ClusterIP/NodePort and place an Nginx gateway infront of the cluster in order to lock down access to dangerous HTTP endpoints and verbs. - - NOTE: It may take a few minutes for the LoadBalancer IP to be available. - You can watch the status of by running 'kubectl get svc -w {{ template "elasticsearch.fullname" . }}' - - export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') - echo http://$SERVICE_IP:9200 - {{- else if contains "ClusterIP" .Values.service.type }} - - export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") - echo "Visit http://127.0.0.1:9200 to use Elasticsearch" - kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200 - {{- end }} \ No newline at end of file diff --git a/stable/mission-control/charts/elasticsearch/templates/_helpers.tpl b/stable/mission-control/charts/elasticsearch/templates/_helpers.tpl deleted file mode 100644 index 0620cd369..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/_helpers.tpl +++ /dev/null @@ -1,16 +0,0 @@ -{{/* vim: set filetype=mustache: */}} -{{/* -Expand the name of the chart. -*/}} -{{- define "elasticsearch.name" -}} -{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} -{{- end -}} - -{{/* -Create a default fully qualified app name. -We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). -*/}} -{{- define "elasticsearch.fullname" -}} -{{- $name := default .Chart.Name .Values.nameOverride -}} -{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} -{{- end -}} diff --git a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-deployment.yaml b/stable/mission-control/charts/elasticsearch/templates/elasticsearch-deployment.yaml deleted file mode 100644 index d1e6a56fa..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-deployment.yaml +++ /dev/null @@ -1,110 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: {{ template "elasticsearch.fullname" . }} - labels: - app: {{ template "elasticsearch.name" . }} - chart: {{ .Chart.Name }}-{{ .Chart.Version }} - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} -spec: - replicas: {{ .Values.replicaCount }} - template: - metadata: - labels: - app: {{ template "elasticsearch.name" . }} - release: {{ .Release.Name }} - spec: - {{- if .Values.imagePullSecrets }} - imagePullSecrets: - - name: {{ .Values.imagePullSecrets }} - {{- end }} - initContainers: - - name: init-data - image: "{{ .Values.initContainerImage }}" - securityContext: - privileged: true - command: - - '/bin/sh' - - '-c' - - > - chmod -R 777 {{ .Values.persistence.mountPath }}; - sysctl -w vm.max_map_count={{ .Values.env.maxMapCount }} - volumeMounts: - - name: elasticsearch-data - mountPath: {{ .Values.persistence.mountPath | quote }} - containers: - - name: {{ template "elasticsearch.fullname" . 
}} - image: {{ .Values.image.repository }}:{{ .Values.image.version }} - imagePullPolicy: {{ .Values.imagePullPolicy }} - env: - - name: 'cluster.name' - value: {{ .Values.env.clusterName }} - - name: 'network.host' - value: {{ .Values.env.networkHost }} - - name: 'transport.host' - value: {{ .Values.env.transportHost }} - - name: 'xpack.security.enabled' - value: {{ .Values.env.xpackSecurityEnabled | quote }} - - name: ES_JAVA_OPTS - value: " - {{- if .Values.javaOpts.xms }} - -Xms{{ .Values.javaOpts.xms }} - {{- end }} - {{- if .Values.javaOpts.xmx }} - -Xmx{{ .Values.javaOpts.xmx }} - {{- end }} - " - - name: ELASTIC_SEARCH_URL - value: {{ .Values.env.esUrl }} - - name: ELASTIC_SEARCH_USERNAME - value: {{ .Values.env.esUsername }} - - name: ELASTIC_SEARCH_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "elasticsearch.fullname" . }} - key: esPassword - lifecycle: - postStart: - exec: - command: - - '/bin/sh' - - '-c' - - > - sleep 5; - mkdir -p /var/log/elasticsearch; - bash /scripts/setup.sh > /var/log/elasticsearch/setup-$(date +%Y%m%d%H%M%S).log 2>&1 - ports: - - containerPort: {{ .Values.internalHttpPort }} - protocol: TCP - - containerPort: {{ .Values.internalTransportPort }} - protocol: TCP - volumeMounts: - - name: setup-script - mountPath: "/scripts" - - name: elasticsearch-data - mountPath: {{ .Values.persistence.mountPath | quote }} - resources: -{{ toYaml .Values.resources | indent 10 }} - livenessProbe: - httpGet: - path: /_cluster/health?local=true - port: 9200 - initialDelaySeconds: 90 - periodSeconds: 10 - readinessProbe: - httpGet: - path: /_cluster/health?local=true - port: 9200 - initialDelaySeconds: 60 - volumes: - - name: setup-script - configMap: - name: {{ template "elasticsearch.fullname" . }}-setup-script - - name: elasticsearch-data - {{- if .Values.persistence.enabled }} - persistentVolumeClaim: - claimName: {{ if .Values.persistence.existingClaim }}{{ .Values.persistence.existingClaim }}{{ else }}{{ template "elasticsearch.fullname" . }}{{ end }} - {{- else }} - emptyDir: {} - {{- end }} \ No newline at end of file diff --git a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-pvc.yaml b/stable/mission-control/charts/elasticsearch/templates/elasticsearch-pvc.yaml deleted file mode 100644 index 93bcbbea5..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-pvc.yaml +++ /dev/null @@ -1,24 +0,0 @@ -{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: {{ template "elasticsearch.fullname" . }} - labels: - app: {{ template "elasticsearch.name" . 
}} - chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" - release: "{{ .Release.Name }}" - heritage: "{{ .Release.Service }}" -spec: - accessModes: - - {{ .Values.persistence.accessMode | quote }} - resources: - requests: - storage: {{ .Values.persistence.size | quote }} -{{- if .Values.persistence.storageClass }} -{{- if (eq "-" .Values.persistence.storageClass) }} - storageClassName: "" -{{- else }} - storageClassName: "{{ .Values.persistence.storageClass }}" -{{- end }} -{{- end }} -{{- end }} diff --git a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-secret.yaml b/stable/mission-control/charts/elasticsearch/templates/elasticsearch-secret.yaml deleted file mode 100644 index 727ff6461..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-secret.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - name: {{ template "elasticsearch.fullname" . }} - labels: - app: {{ template "elasticsearch.name" . }} - chart: {{ .Chart.Name }}-{{ .Chart.Version }} - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} -type: Opaque -data: - {{ if .Values.env.esPassword }} - esPassword: {{ .Values.env.esPassword | b64enc | quote }} - {{ else }} - esPassword: {{ randAlphaNum 10 | b64enc | quote }} - {{ end }} diff --git a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-setup-scripts.yaml b/stable/mission-control/charts/elasticsearch/templates/elasticsearch-setup-scripts.yaml deleted file mode 100644 index 0afb6ae28..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-setup-scripts.yaml +++ /dev/null @@ -1,223 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: {{ template "elasticsearch.fullname" . }}-setup-script - labels: - app: {{ template "elasticsearch.name" . }} - chart: {{ .Chart.Name }}-{{ .Chart.Version }} - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} -data: - setup.sh: | - #!/bin/bash - # Startup script for preparing ElasticSearch pod for running with Mission Control - - echo "Waiting for ElasticSearch to be ready" - until [[ "$(curl -s -o /dev/null -w \"%{http_code}\" ${ELASTIC_SEARCH_URL}/_cluster/health?local=true)" =~ "200" ]]; do - echo "Waiting for ElasticSearch availability" - sleep 2 - done - - mkdir -p /var/log/elasticsearch - bash /scripts/createIndices.sh > /var/log/elasticsearch/createIndices.sh.log 2>&1 - - createIndices.sh: | - #!/bin/bash - ELASTIC_SEARCH_LABEL='Elasticsearch' - - #Print the input with additional formatting to indicate a section/title - title () { - echo - echo "-----------------------------------------------------" - printf "| %-50s|\n" "$1" - echo "-----------------------------------------------------" - } - - # This function prints the echo with color. - #If invoked with a single string, treats it as INFO level - #Valid inputs for the second parameter are DEBUG and INFO - log() { - echo "" - echo -e $1 - echo "" - } - - # Utility method to display warnings and important information - warn() { - echo "" - echo -e "\033[33m $1 \033[0m" - echo "" - } - - errorExit() { - echo; echo -e "\033[31mERROR:$1\033[0m"; echo - exit 1 - } - - attempt_number=0 - elasticSearchIsNotReady(){ - #echo "in method isElasticSearchReady" - curl "$ELASTIC_SEARCH_URL" > /dev/null 2>&1 - if [ $? -ne 0 ]; then - if [ $attempt_number -gt 10 ]; then - errorExit "Unable to proceed. $ELASTIC_SEARCH_LABEL is not reachable. The command [curl $ELASTIC_SEARCH_URL] is failing. 
Gave up after $attempt_number attempts" - fi - let "attempt_number=attempt_number+1" - return 0 - else - return 1 - fi - } - - runCommand() { - curl_response= - local operation=$1 - local commandToRun=$2 - local request_body=$3 - local params=$4 - local waitTime=$5 - - if [[ ! -z "$waitTime" && "$waitTime" != "" ]]; then - sleep $waitTime - fi - - commandToRun="\"$ELASTIC_SEARCH_URL/$commandToRun\"" - if [[ ! -z "$ELASTIC_SEARCH_USERNAME" && ! -z "$ELASTIC_SEARCH_PASSWORD" ]]; then - commandToRun="$commandToRun --user $ELASTIC_SEARCH_USERNAME:$ELASTIC_SEARCH_PASSWORD" - fi - - if [[ ! -z "$params" ]]; then - commandToRun="$commandToRun $params" - fi - - if [[ ! -z "$request_body" ]]; then - commandToRun="$commandToRun -d '"$request_body"'" - fi - if [[ "$operation" == "GET" ]]; then - commandToRun="curl --silent -XGET $commandToRun" - curl_response=$(eval "${commandToRun}") - else - eval "curl --silent -X $operation ${commandToRun}" || errorExit "could not update Elastic Search" - fi - } - - setElasticSearchParams() { - log "Waiting for $ELASTIC_SEARCH_LABEL to get ready (using the command: [curl $ELASTIC_SEARCH_URL])" - while elasticSearchIsNotReady - do - sleep 5 - echo -n '.' - done - log "$ELASTIC_SEARCH_LABEL is ready. Executing commands" - runCommand "GET" "_template/storage_insight_template" "" "" 10 - #echo "$ELASTIC_SEARCH_LABEL curl response: $curl_response" - if [[ $curl_response = {} ]]; then - migrateToElastic61 - runCommand "GET" "_template/active_insight_data" - if [[ $curl_response = {} ]]; then - log "Creating new template" - runCommand "PUT" "_template/storage_insight_template" '{"template":"active_insight_data_*","aliases":{"active_insight_data":{},"search_insight_data":{}},"mappings":{"artifacts_storage":{"properties":{"used_space":{"type":"double"},"timestamp":{"type":"date"},"artifacts_size":{"type":"double"}}}}}' '-H "Content-Type: application/json"' > /dev/null 2>&1 - runCommand "PUT" "%3Cactive_insight_data_%7Bnow%2Fd%7D-1%3E" > /dev/null 2>&1 - - else - performUpgrade - fi - - fi - runCommand "GET" "_template/build_info_template" "" "" 10 - if [[ $curl_response = {} ]]; then - log "Create Build Info template" - createBuildInfoTemplate - fi - - updateBuildInfoTemplate - log "$ELASTIC_SEARCH_LABEL setup is now complete" - - log "Created build info templates" - } - - performUpgrade(){ - log "Performing upgrade" - runCommand "DELETE" "_template/active_insight_data" > /dev/null 2>&1 - runCommand "PUT" "_template/storage_insight_template" '{"template":"active_insight_data_*","aliases":{"active_insight_data":{},"search_insight_data":{}},"mappings":{"artifacts_storage":{"properties":{"used_space":{"type":"double"},"timestamp":{"type":"date"},"artifacts_size":{"type":"double"}}}}}' '-H "Content-Type: application/json"' > /dev/null 2>&1 - log "Created new template" - runCommand "GET" "_alias/active_insight_data" - if [[ $curl_response = *missing* ]]; then - runCommand "PUT" "%3Cactive_insight_data_%7Bnow%2Fd%7D-1%3E" > /dev/null 2>&1 - else - indexname=$(echo $curl_response |cut -d'"' -f 2) - log "Old index $indexname" - curl_response=$(runCommand "PUT" "%3Cactive_insight_data_%7Bnow%2Fd%7D-1%3E") - if [[ "$curl_response" = *"resource_already_exists_exception"* ]]; then - log "Index with same name exists, creating with different name" - runCommand "PUT" "%3Cactive_insight_data_%7Bnow%2Fd%7D-2%3E" > /dev/null 2>&1 - fi - log "Created new index" - runCommand "GET" "_alias/active_insight_data" - runCommand "POST" "_aliases" 
'{"actions":[{"remove":{"index":"'$indexname'","alias":"active_insight_data"}}]}' '-H "Content-Type: application/json"' > /dev/null 2>&1 - log "Removed the old index from active alias" - fi - } - - createBuildInfoTemplate(){ - runCommand "PUT" "_template/build_info_template" '{"template":"active_build_data_*","aliases":{"active_build_data":{},"search_build_data":{}},"mappings":{"build_info":{"properties":{"created_time":{"type":"date"},"timestamp":{"type":"date"},"build_name":{"type":"keyword"},"build_number":{"type":"integer"},"build_URL":{"type":"keyword"},"build_created_by":{"type":"keyword"},"project_name":{"type":"keyword"},"project_id":{"type":"keyword"},"service_id":{"type":"keyword"},"access_service_id":{"type":"keyword"},"build_promotion":{"type":"keyword"},"build_status":{"type":"keyword"},"build_duration_seconds":{"type":"integer"},"total_no_of_commits":{"type":"short"},"total_no_of_modules":{"type":"short"},"total_dependency_count":{"type":"short"},"total_artifact_count":{"type":"short"},"total_artifact_count_downloaded":{"type":"short"},"total_artifact_count_not_downloaded":{"type":"short"},"total_artifact_size":{"type":"double"},"total_dependency_size":{"type":"double"},"module_dependency":{"type":"nested","properties":{"module_name":{"type":"keyword"},"dependency_name":{"type":"keyword"},"dependency_type":{"type":"keyword"},"dependency_size":{"type":"double"}}},"module_artifacts":{"type":"nested","properties":{"module_name":{"type":"keyword"},"artifact_name":{"type":"keyword"},"artifact_size":{"type":"double"},"no_of_downloads":{"type":"short"},"last_download_by":{"type":"keyword"}}},"commits":{"type":"nested","properties":{"repo":{"type":"keyword"},"branch":{"type":"keyword"},"commit_message":{"type":"text"},"revision_no":{"type":"keyword"}}},"total_vulnerability":{"properties":{"low":{"type":"short"},"medium":{"type":"short"},"high":{"type":"short"}}},"total_open_source_violoation":{"properties":{"low":{"type":"short"},"medium":{"type":"short"},"high":{"type":"short"}}},"major_xray_issues":{"type":"long"},"minor_xray_issues":{"type":"long"},"unknown_xray_issues":{"type":"long"},"critical_xray_issues":{"type":"long"}}}}}' '-H "Content-Type: application/json"' > /dev/null 2>&1 - runCommand "PUT" "%3Cactive_build_data_%7Bnow%2Fd%7D-1%3E" > /dev/null 2>&1 - } - - updateBuildInfoTemplate(){ - runCommand "PUT" "active_build*/_mapping/build_info" 
'{"properties":{"created_time":{"type":"date"},"timestamp":{"type":"date"},"build_name":{"type":"keyword"},"build_number":{"type":"integer"},"build_URL":{"type":"keyword"},"build_created_by":{"type":"keyword"},"project_name":{"type":"keyword"},"project_id":{"type":"keyword"},"service_id":{"type":"keyword"},"access_service_id":{"type":"keyword"},"build_promotion":{"type":"keyword"},"build_status":{"type":"keyword"},"build_duration_seconds":{"type":"integer"},"total_no_of_commits":{"type":"short"},"total_no_of_modules":{"type":"short"},"total_dependency_count":{"type":"short"},"total_artifact_count":{"type":"short"},"total_artifact_count_downloaded":{"type":"short"},"total_artifact_count_not_downloaded":{"type":"short"},"total_artifact_size":{"type":"double"},"total_dependency_size":{"type":"double"},"module_dependency":{"type":"nested","properties":{"module_name":{"type":"keyword"},"dependency_name":{"type":"keyword"},"dependency_type":{"type":"keyword"},"dependency_size":{"type":"double"}}},"module_artifacts":{"type":"nested","properties":{"module_name":{"type":"keyword"},"artifact_name":{"type":"keyword"},"artifact_size":{"type":"double"},"no_of_downloads":{"type":"short"},"last_download_by":{"type":"keyword"}}},"commits":{"type":"nested","properties":{"repo":{"type":"keyword"},"branch":{"type":"keyword"},"commit_message":{"type":"text"},"revision_no":{"type":"keyword"}}},"total_vulnerability":{"properties":{"low":{"type":"short"},"medium":{"type":"short"},"high":{"type":"short"}}},"total_open_source_violoation":{"properties":{"low":{"type":"short"},"medium":{"type":"short"},"high":{"type":"short"}}},"major_xray_issues":{"type":"long"},"minor_xray_issues":{"type":"long"},"unknown_xray_issues":{"type":"long"},"critical_xray_issues":{"type":"long"}}}' '-H "Content-Type: application/json"' > /dev/null 2>&1 - log "Updated build info indices" - } - - migrateToElastic61(){ - local activeIndexPrefix="active_insight_data" - local repoStorageName="migrate-repostorage" - local storageSummaryName="migrate-storage" - local index="" - - log "Getting current indices with name : ${activeIndexPrefix}" - result=$(curl --silent "$ELASTIC_SEARCH_URL/_cat/indices/${activeIndexPrefix}*") - if [[ "$result" = *"${activeIndexPrefix}"* ]]; then - echo $result | while read indices ; do - index=$(echo $indices | awk -F " " '{print $3}') - log "Attempting migrate of index : ${index}" - indexDate=$(echo "${index}" | sed -e "s#${activeIndexPrefix}##g") - modifiedRepoStorageName=${repoStorageName}${indexDate} - modifiedStorageSummaryName=${storageSummaryName}${indexDate} - - # Reindex from each type - runCommand 'POST' '_reindex' '{"source":{"index":"'${index}'","type":"repo_storage_info"},"dest":{"index":"'${modifiedRepoStorageName}'"}}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - runCommand 'POST' '_reindex' '{"source":{"index":"'${index}'","type":"storage_summary_info"},"dest":{"index":"'${modifiedStorageSummaryName}'"}}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - - # Add type field - runCommand 'POST' ${modifiedRepoStorageName}'/_update_by_query' '{"script": {"inline": "ctx._source.type = \"repo_storage_info\"","lang": "painless"}}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - runCommand 'POST' ${modifiedStorageSummaryName}'/_update_by_query' '{"script": {"inline": "ctx._source.type = \"storage_summary_info\"","lang": "painless"}}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - - # Add the new indices to search alias - runCommand 'POST' '_aliases' '{"actions" 
: [{ "add" : { "index" : "'${modifiedRepoStorageName}'", "alias" : "search_insight_data" } }]}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - runCommand 'POST' '_aliases' '{"actions" : [{ "add" : { "index" : "'${modifiedStorageSummaryName}'", "alias" : "search_insight_data" } }]}' '-H "Content-Type: application/json"' 2 > /dev/null 2>&1 - - # Delete the old index - log "Deleting index : ${index}" - runCommand 'DELETE' "${index}" > /dev/null 2>&1 - done - fi - } - - main() { - if [[ -z $ELASTIC_SEARCH_URL ]]; then - title "$ELASTIC_SEARCH_LABEL Manual Setup" - log "This script will attempt to seed $ELASTIC_SEARCH_LABEL with the templates and indices needed by JFrog Mission Control" - - warn "Please enter the same details as you entered during installation. If the details are incorrect, you may need to rerun the installation" - - local DEFAULT_URL="http://docker.for.mac.localhost:9200" - read -p "Please enter the $ELASTIC_SEARCH_LABEL URL [$DEFAULT_URL]:" choice - : ${choice:=$DEFAULT_URL} - ELASTIC_SEARCH_URL=$choice - fi - echo "Beginning $ELASTIC_SEARCH_LABEL bootstrap" - setElasticSearchParams - } - - main diff --git a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-svc.yaml b/stable/mission-control/charts/elasticsearch/templates/elasticsearch-svc.yaml deleted file mode 100644 index cabe05af9..000000000 --- a/stable/mission-control/charts/elasticsearch/templates/elasticsearch-svc.yaml +++ /dev/null @@ -1,25 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: {{ template "elasticsearch.fullname" . }} - labels: - app: {{ template "elasticsearch.name" . }} - chart: {{ .Chart.Name }}-{{ .Chart.Version }} - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} -{{- if .Values.service.annotations }} - annotations: -{{ toYaml .Values.service.annotations | indent 4 }} -{{- end }} -spec: - type: {{ .Values.service.type }} - ports: - - name: http - port: {{ .Values.internalHttpPort }} - targetPort: {{ .Values.externalHttpPort }} - - name: transport - port: {{ .Values.internalTransportPort }} - targetPort: {{ .Values.externalTransportPort }} - selector: - app: {{ template "elasticsearch.name" . }} - release: {{ .Release.Name }} \ No newline at end of file diff --git a/stable/mission-control/charts/elasticsearch/values.yaml b/stable/mission-control/charts/elasticsearch/values.yaml deleted file mode 100644 index 28d50364f..000000000 --- a/stable/mission-control/charts/elasticsearch/values.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# Default values for elasticsearch. -# This is a YAML-formatted file. -# Beware when changing values here. You should know what you are doing! -# Access the values with {{ .Values.key.subkey }} - -# Common -initContainerImage: "alpine:3.6" -imagePullPolicy: IfNotPresent -imagePullSecrets: - -replicaCount: 1 -image: - repository: "docker.bintray.io/elasticsearch/elasticsearch" - version: 6.1.1 -resources: {} -# requests: -# memory: "2Gi" -# cpu: "100m" -# limits: -# memory: "2Gi" -# cpu: "500m" -## ElasticSearch xms and xmx should be same! 
-javaOpts: {} -# xms: "2g" -# xmx: "2g" - - -env: - clusterName: "es-cluster" - networkHost: "0.0.0.0" - transportHost: "0.0.0.0" - xpackSecurityEnabled: false - esUrl: "http://localhost:9200" - esUsername: "elastic" - esPassword: "changeme" - maxMapCount: 262144 - -persistence: - enabled: true - ## A manually managed Persistent Volume and Claim - ## Requires persistence.enabled: true - ## If defined, PVC must be created manually before volume will be bound - # existingClaim: - - mountPath: "/usr/share/elasticsearch/data" - accessMode: ReadWriteOnce - size: 100Gi - ## ElasticSearch data Persistent Volume Storage Class - ## If defined, storageClassName: - ## If set to "-", storageClassName: "", which disables dynamic provisioning - ## If undefined (the default) or set to null, no storageClassName spec is - ## set, choosing the default provisioner. (gp2 on AWS, standard on - ## GKE, AWS & OpenStack) - ## - # storageClass: "-" - -service: - type: ClusterIP - annotations: {} -externalHttpPort: 9200 -internalHttpPort: 9200 -externalTransportPort: 9300 -internalTransportPort: 9300 diff --git a/stable/mission-control/requirements.lock b/stable/mission-control/requirements.lock index fb00cb12f..a8b95344c 100644 --- a/stable/mission-control/requirements.lock +++ b/stable/mission-control/requirements.lock @@ -2,8 +2,5 @@ dependencies: - name: postgresql repository: https://kubernetes-charts.storage.googleapis.com/ version: 0.9.5 -- name: mongodb - repository: https://kubernetes-charts.storage.googleapis.com/ - version: 4.3.10 -digest: sha256:3299d564e9a61263571329d573aa1c6100869bd81d55edf949072c34ee43fcdd -generated: 2019-02-19T18:55:08.944392949+05:30 +digest: sha256:7e07fb616d953e518e3373e2c5183290b4b6e94292a233528c0d52ffd42afc77 +generated: 2019-03-22T16:29:17.502807362+05:30 diff --git a/stable/mission-control/requirements.yaml b/stable/mission-control/requirements.yaml index e9ba321bd..91ad5018c 100644 --- a/stable/mission-control/requirements.yaml +++ b/stable/mission-control/requirements.yaml @@ -2,8 +2,4 @@ dependencies: - name: postgresql version: 0.9.5 repository: https://kubernetes-charts.storage.googleapis.com/ - condition: postgresql.enabled -- name: mongodb - version: 4.3.10 - repository: https://kubernetes-charts.storage.googleapis.com/ - condition: mongodb.enabled \ No newline at end of file + condition: postgresql.enabled \ No newline at end of file diff --git a/stable/mission-control/templates/elasticsearch-scripts.yaml b/stable/mission-control/templates/elasticsearch-scripts.yaml new file mode 100644 index 000000000..4c01fa0a4 --- /dev/null +++ b/stable/mission-control/templates/elasticsearch-scripts.yaml @@ -0,0 +1,53 @@ +{{- if .Values.elasticsearch.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "mission-control.fullname" . }}-elasticsearch-scripts + labels: + app: {{ template "mission-control.name" . }} + chart: {{ template "mission-control.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + initSG.sh: | + LABEL="Elasticsearch" + SEARCH_GUARD_DIR=/usr/share/elasticsearch/plugins/search-guard-6 + : ${ELASTIC_SEARCH_URL:="http://localhost:9200"} + : ${ELASTIC_TRANSPORT_PORT:=9300} + + if [ ! 
-d "${SEARCH_GUARD_DIR}" ]; then + echo "Unable to find ${SEARCH_GUARD_DIR} (SEARCH_GUARD_DIR) directory, please install secure guard plugin and retry" + exit 1 + fi + + log() { + echo -e "$1" + } + + # Source: https://gist.github.com/sj26/88e1c6584397bb7c13bd11108a579746 + function retry { + local retries=$1 + shift + + local count=0 + until "$@"; do + exit=$? + wait=5 + if [ $count -lt $retries ]; then + echo "Retry $count/$retries exited $exit, retrying in $wait seconds..." + sleep $wait + else + echo "Retry $count/$retries exited $exit, no more retries left." + exit 1 #return $exit + fi + done + return 0 + } + + log "Waiting for $LABEL to get ready using the commands: \"curl -sL -I --output /dev/null \"$ELASTIC_SEARCH_URL\"\"" + retry 5 curl -sL -I --output /dev/null "$ELASTIC_SEARCH_URL" + + log "Initializing secure guard plugin on elasticsearch" + cd ${SEARCH_GUARD_DIR}/tools && ./sgadmin.sh -p ${ELASTIC_TRANSPORT_PORT} -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/ 1>/dev/null + exit $? +{{- end }} \ No newline at end of file diff --git a/stable/mission-control/templates/elasticsearch-secrets.yaml b/stable/mission-control/templates/elasticsearch-secrets.yaml new file mode 100644 index 000000000..9df05d123 --- /dev/null +++ b/stable/mission-control/templates/elasticsearch-secrets.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "mission-control.fullname" . }}-elasticsearch-cred + labels: + app: {{ template "mission-control.name" . }} + chart: {{ template "mission-control.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +data: + password: {{ required "A valid .Values.elasticsearch.password entry required!" .Values.elasticsearch.password | b64enc | quote }} diff --git a/stable/mission-control/templates/jfmc-setup-scripts.yaml b/stable/mission-control/templates/jfmc-setup-scripts.yaml index ad578af39..9019ecb8b 100644 --- a/stable/mission-control/templates/jfmc-setup-scripts.yaml +++ b/stable/mission-control/templates/jfmc-setup-scripts.yaml @@ -46,6 +46,8 @@ data: touch ${JFMC_PROPERTIES} || ( echo "unable to create ${JFMC_PROPERTIES} file" && exit 1 ) fi + addProperty "elastic.username" "${ELASTIC_SEARCH_USERNAME}" ${JFMC_PROPERTIES} && \ + addProperty "elastic.password" "${ELASTIC_SEARCH_PASSWORD}" ${JFMC_PROPERTIES} && \ addProperty "jfmc.db.username" "${JFMC_DB_USERNAME}" ${JFMC_PROPERTIES} && \ addProperty "jfmc.db.password" "${JFMC_DB_PASSWORD}" ${JFMC_PROPERTIES} && \ addProperty "jfex.db.username" "${JFEX_DB_USERNAME}" ${JFMC_PROPERTIES} && \ diff --git a/stable/mission-control/templates/mission-control-pvc.yaml b/stable/mission-control/templates/mission-control-pvc.yaml deleted file mode 100644 index c1abd3f5e..000000000 --- a/stable/mission-control/templates/mission-control-pvc.yaml +++ /dev/null @@ -1,24 +0,0 @@ -{{- if and .Values.missionControl.persistence.enabled (not .Values.missionControl.persistence.existingClaim) }} -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: {{ template "mission-control.fullname" . }} - labels: - app: {{ template "mission-control.name" . }} - chart: {{ template "mission-control.chart" . 
}} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} -spec: - accessModes: - - {{ .Values.missionControl.persistence.accessMode | quote }} - resources: - requests: - storage: {{ .Values.missionControl.persistence.size }} -{{- if .Values.missionControl.persistence.storageClass }} -{{- if (eq "-" .Values.missionControl.persistence.storageClass) }} - storageClassName: "" -{{- else }} - storageClassName: "{{ .Values.missionControl.persistence.storageClass }}" -{{- end }} -{{- end }} -{{- end }} diff --git a/stable/mission-control/templates/mission-control-deployment.yaml b/stable/mission-control/templates/mission-control-statefulset.yaml similarity index 72% rename from stable/mission-control/templates/mission-control-deployment.yaml rename to stable/mission-control/templates/mission-control-statefulset.yaml index 4f86976ef..061ce31f3 100644 --- a/stable/mission-control/templates/mission-control-deployment.yaml +++ b/stable/mission-control/templates/mission-control-statefulset.yaml @@ -1,5 +1,5 @@ apiVersion: apps/v1beta2 -kind: Deployment +kind: StatefulSet metadata: name: {{ template "mission-control.fullname" . }} labels: @@ -9,12 +9,10 @@ metadata: heritage: {{ .Release.Service }} release: {{ .Release.Name }} spec: - replicas: {{ .Values.missionControl.replicaCount }} + serviceName: {{ template "mission-control.fullname" . }} + replicas: {{ .Values.replicaCount }} strategy: type: RollingUpdate - rollingUpdate: - maxSurge: 0 - maxUnavailable: 1 selector: matchLabels: app: {{ template "mission-control.name" . }} @@ -35,6 +33,21 @@ spec: securityContext: fsGroup: {{ .Values.uid }} initContainers: + {{- if .Values.elasticsearch.enabled }} + - name: es-init + image: "{{ .Values.elasticsearch.initContainerImage }}" + securityContext: + privileged: true + command: + - '/bin/sh' + - '-c' + - > + chmod -R 777 {{ .Values.elasticsearch.persistence.mountPath }}; + sysctl -w vm.max_map_count={{ .Values.elasticsearch.env.maxMapCount }} + volumeMounts: + - name: elasticsearch-data + mountPath: {{ .Values.elasticsearch.persistence.mountPath | quote }} + {{- end }} - name: "wait-for-db" image: "{{ .Values.initContainerImage }}" command: @@ -44,47 +57,10 @@ spec: {{- if .Values.postgresql.enabled }} until nc -z -w 2 {{ .Release.Name }}-postgresql {{ .Values.postgresql.service.port }} && echo database ok; \ {{- else }} - {{- if and .Values.database.host .Values.database.port }} - until nc -z -w 2 {{ .Values.database.host }} {{ .Values.database.port }} && echo database ok; \ - {{- else }} until true; \ - {{- end }} - {{- end }} - {{- if .Values.mongodb.enabled }} - nc -z -w 2 {{ .Release.Name }}-mongodb 27017 && echo mongodb ok && \ - {{- end }} - {{- if .Values.elasticsearch.enabled }} - nc -z -w 2 {{ .Release.Name }}-elasticsearch {{ .Values.elasticsearch.service.port }} && echo elasticsearch ok; {{- end }} do sleep 2; done; - {{- if .Values.mongodb.enabled }} - - name: mongodb-setup - image: "{{ .Values.dbSetup.mongodb.image.repository }}:{{ .Values.dbSetup.mongodb.image.tag }}" - env: - - name: MONGODB_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . }}-mongodb-cred - key: adminPassword - - name: MONGODB_MC_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . }}-mongodb-cred - key: mcPassword - - name: MONGODB_INSIGHT_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . 
}}-mongodb-cred - key: insightPassword - command: - - 'sh' - - '-c' - - 'sh /scripts/setup.sh' - volumeMounts: - - name: mongodb-setup - mountPath: "/scripts" - {{- end }} {{- if .Values.postgresql.enabled }} - name: postgresql-setup image: "{{ .Values.dbSetup.postgresql.image.repository }}:{{ .Values.dbSetup.postgresql.image.tag }}" @@ -102,8 +78,9 @@ spec: - name: PGPASSWORD valueFrom: secretKeyRef: - {{- if .Values.postgresql.db.postgresPassword }} + {{- if .Values.postgresql.postgresPassword }} name: {{ template "mission-control.fullname" . }}-postgresql-cred + key: postgresPassword {{- else }} name: {{ .Release.Name }}-postgresql {{- end }} @@ -155,6 +132,13 @@ spec: - name: "set-properties" image: "{{ .Values.initContainerImage }}" env: + - name: ELASTIC_SEARCH_USERNAME + value: '{{ .Values.elasticsearch.username }}' + - name: ELASTIC_SEARCH_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "mission-control.fullname" . }}-elasticsearch-cred + key: password {{- if .Values.postgresql.enabled }} - name: JFMC_DB_USERNAME value: '{{ .Values.postgresql.db.jfmcUsername }}' @@ -314,6 +298,78 @@ spec: {{ tpl .Values.missionControl.customInitContainers . | indent 6}} {{- end }} containers: + {{- if .Values.elasticsearch.enabled }} + - name: {{ .Values.elasticsearch.name }} + image: "{{ .Values.elasticsearch.image.repository }}:{{ .Values.elasticsearch.image.tag }}" + imagePullPolicy: {{ .Values.elasticsearch.imagePullPolicy }} + env: + - name: 'cluster.name' + value: '{{ .Values.elasticsearch.env.clusterName }}' + - name: 'network.host' + value: '{{ .Values.elasticsearch.env.networkHost }}' + - name: 'transport.host' + value: '{{ .Values.elasticsearch.env.transportHost }}' + - name: 'http.port' + value: '{{ .Values.elasticsearch.httpPort }}' + - name: 'transport.port' + value: '{{ .Values.elasticsearch.transportPort }}' + - name: 'discovery.zen.minimum_master_nodes' + value: '{{ .Values.elasticsearch.env.minimumMasterNodes }}' + - name: 'discovery.zen.ping.unicast.hosts' + value: '{{ .Release.Name }}-mission-control' + - name: ELASTIC_SEARCH_USERNAME + value: '{{ .Values.elasticsearch.username }}' + - name: ELASTIC_SEARCH_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "mission-control.fullname" . 
}}-elasticsearch-cred + key: password + - name: ES_JAVA_OPTS + value: " + {{- if .Values.elasticsearch.javaOpts.xms }} + -Xms{{ .Values.elasticsearch.javaOpts.xms }} + {{- end }} + {{- if .Values.elasticsearch.javaOpts.xmx }} + -Xmx{{ .Values.elasticsearch.javaOpts.xmx }} + {{- end }} + " + - name: ELASTIC_SEARCH_URL + value: '{{ .Values.elasticsearch.url }}' + lifecycle: + postStart: + exec: + command: + - '/bin/bash' + - '-c' + - 'sh /scripts/initSG.sh' + ports: + - name: eshttp + containerPort: {{ .Values.elasticsearch.httpPort }} + - name: estransport + containerPort: {{ .Values.elasticsearch.transportPort }} + volumeMounts: + - name: elasticsearch-data + mountPath: {{ .Values.elasticsearch.persistence.mountPath | quote }} + - name: elasticsearch-scripts + mountPath: "/scripts" + resources: +{{ toYaml .Values.resources | indent 10 }} + livenessProbe: + exec: + command: + - '/bin/bash' + - '-c' + - 'curl -s --fail -u${ELASTIC_SEARCH_USERNAME}:${ELASTIC_SEARCH_PASSWORD} "{{ .Values.elasticsearch.url }}/_cluster/health?local=true" --output /dev/null' + initialDelaySeconds: 90 + periodSeconds: 10 + readinessProbe: + exec: + command: + - '/bin/bash' + - '-c' + - 'curl -s --fail -u${ELASTIC_SEARCH_USERNAME}:${ELASTIC_SEARCH_PASSWORD} "{{ .Values.elasticsearch.url }}/_cluster/health?local=true" --output /dev/null' + initialDelaySeconds: 40 + {{- end }} - name: {{ .Values.missionControl.name }} image: {{ .Values.missionControl.image }}:{{ default .Chart.AppVersion .Values.missionControl.version }} imagePullPolicy: {{ .Values.imagePullPolicy }} @@ -340,21 +396,6 @@ spec: value: '{{ .Values.database.port }}' - name: JFMC_DB_URL value: 'jdbc:postgresql://{{ .Values.database.host }}:{{ .Values.database.port }}/{{ .Values.database.name }}?currentSchema={{ .Values.database.jfmcSchema }}' - {{- end }} - {{- if .Values.mongodb.enabled }} - - name: SPRING_DATA_MONGODB_HOST - value: '{{ .Release.Name }}-mongodb' - - name: SPRING_DATA_MONGODB_PORT - value: '27017' - - name: SPRING_DATA_MONGODB_USERNAME - value: '{{ .Values.mongodb.db.mcUser }}' - - name: SPRING_DATA_MONGODB_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . 
}}-mongodb-cred - key: mcPassword - - name: SPRING_DATA_MONGODB_DATABASE - value: '{{ .Values.mongodb.db.missionControl }}' {{- end }} - name: INSIGHT_URL value: "http://localhost:{{ .Values.insightServer.internalHttpPort }}" @@ -386,14 +427,8 @@ spec: " - name: JFMC_SERVER_HOME value: "{{ .Values.missionControl.home }}" - - name: JFMC_LOGS_ROOT - value: "{{ .Values.missionControl.home }}/logs" - - name: JFMC_LOGS - value: "{{ .Values.missionControl.home }}/logs/{{ .Values.missionControl.appName }}" - - name: JFMC_APP_NAME - value: "{{ .Values.missionControl.appName }}" - - name: JFSC_URL - value: 'http://localhost:{{ .Values.insightScheduler.internalPort }}' + - name: JFMC_KEY + value: "{{ .Values.missionControl.mcKey }}" - name: JFMC_REPOSITORY value: "{{ .Values.missionControl.repository }}" - name: JFMC_PACKAGE @@ -427,43 +462,21 @@ spec: path: /api/v3/ping port: {{ .Values.missionControl.internalPort }} periodSeconds: 20 - initialDelaySeconds: 120 + initialDelaySeconds: 60 - name: {{ .Values.insightServer.name }} image: {{ .Values.insightServer.image }}:{{ default .Chart.AppVersion .Values.insightServer.version }} imagePullPolicy: {{ .Values.imagePullPolicy }} env: - name: JFIS_URL value: 'http://localhost:{{ .Values.insightServer.internalHttpPort }}' + - name: JFMC_ES_CLUSTER_SETUP + value: 'NO' - name: JFEX_URL value: 'http://localhost:{{ .Values.insightExecutor.internalPort }}' - name: JFSC_URL value: 'http://localhost:{{ .Values.insightScheduler.internalPort }}' - - name: JFIS_LOGS - value: "{{ .Values.insightServer.home }}/{{ .Values.insightServer.name }}/logs" - - name: JFIS_APP_NAME - value: "{{ .Values.insightServer.name }}" - name: GOMAXPROCS value: "1" - {{- if .Values.mongodb.enabled }} - - name: MONGO_URL - value: '{{ .Release.Name }}-mongodb:27017' - - name: MONGODB_USERNAME - value: '{{ .Values.mongodb.db.insightUser }}' - - name: MONGODB_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . }}-mongodb-cred - key: insightPassword - - name: MONGODB_ADMIN_USERNAME - value: '{{ .Values.mongodb.db.adminUser }}' - - name: MONGODB_ADMIN_PASSWORD - valueFrom: - secretKeyRef: - name: {{ template "mission-control.fullname" . 
}}-mongodb-cred
-              key: adminPassword
-        - name: JFMC_INSIGHT_SERVER_DB
-          value: "{{ .Values.mongodb.db.insightServerDb }}"
-        {{- end }}
         {{- if .Values.postgresql.enabled }}
         - name: DB_TYPE
           value: 'postgresql'
@@ -499,14 +512,12 @@ spec:
         - name: JFEX_DB_SCHEMA
           value: '{{ .Values.database.jfexSchema }}'
         {{- end }}
-        - name: JFMC_URL
-          value: 'http://localhost:{{ .Values.missionControl.internalPort }}'
         - name: ELASTIC_SEARCH_URL
-          value: 'http://{{ .Release.Name }}-elasticsearch:9200'
+          value: '{{ .Values.elasticsearch.url }}'
         - name: ELASTIC_SEARCH_WRITE_URL
-          value: 'http://{{ .Release.Name }}-elasticsearch:9200'
+          value: '{{ .Values.elasticsearch.url }}'
         - name: ELASTIC_SEARCH_READ_URL
-          value: 'http://{{ .Release.Name }}-elasticsearch:9200'
+          value: '{{ .Values.elasticsearch.url }}'
         - name: ELASTIC_LB_WRITE_URL
           value: ''
         - name: ELASTIC_LB_READ_URL
@@ -541,28 +552,6 @@ spec:
         env:
         - name: JFIS_URL
           value: 'http://localhost:{{ .Values.insightServer.internalHttpPort }}'
-        - name: JFMC_URL
-          value: 'http://localhost:{{ .Values.missionControl.internalPort }}'
-        {{- if .Values.mongodb.enabled }}
-        - name: MONGO_URL
-          value: '{{ .Release.Name }}-mongodb:27017'
-        - name: MONGODB_USERNAME
-          value: '{{ .Values.mongodb.db.insightUser }}'
-        - name: MONGODB_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: {{ template "mission-control.fullname" . }}-mongodb-cred
-              key: insightPassword
-        - name: MONGODB_ADMIN_USERNAME
-          value: '{{ .Values.mongodb.db.adminUser }}'
-        - name: MONGODB_ADMIN_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: {{ template "mission-control.fullname" . }}-mongodb-cred
-              key: adminPassword
-        - name: JFMC_SCHEDULER_MONGO_DB
-          value: '{{ .Values.mongodb.db.insightSchedulerDb }}'
-        {{- end }}
         - name: JFMC_EXTRA_JAVA_OPTS
           value: "
               {{- if .Values.insightScheduler.javaOpts.other }}
@@ -576,10 +565,6 @@ spec:
               {{- end}}
              -Dserver.port={{ .Values.insightScheduler.internalPort }}
              "
-        - name: JFSC_LOGS
-          value: '{{ .Values.insightScheduler.home }}/{{ .Values.insightScheduler.name }}/logs'
-        - name: JFSC_APP_NAME
-          value: '{{ .Values.insightScheduler.name }}'
         ports:
         - containerPort: {{ .Values.insightScheduler.internalPort }}
           protocol: TCP
@@ -604,30 +589,6 @@ spec:
         env:
         - name: JFIS_URL
           value: 'http://localhost:{{ .Values.insightServer.internalHttpPort }}'
-        - name: JFEX_LOGS
-          value: '{{ .Values.insightExecutor.home }}/{{ .Values.insightExecutor.name }}/logs'
-        - name: JFEX_APP_NAME
-          value: '{{ .Values.insightExecutor.name }}'
-        {{- if .Values.mongodb.enabled }}
-        - name: MONGO_URL
-          value: '{{ .Release.Name }}-mongodb:27017'
-        - name: MONGODB_USERNAME
-          value: '{{ .Values.mongodb.db.insightUser }}'
-        - name: MONGODB_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: {{ template "mission-control.fullname" . }}-mongodb-cred
-              key: insightPassword
-        - name: MONGODB_ADMIN_USERNAME
-          value: '{{ .Values.mongodb.db.adminUser }}'
-        - name: MONGODB_ADMIN_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: {{ template "mission-control.fullname" . }}-mongodb-cred
-              key: adminPassword
-        - name: JFMC_EXECUTOR_MONGO_DB
-          value: '{{ .Values.mongodb.db.insightExecutorDb }}'
-        {{- end }}
         - name: JFMC_EXTRA_JAVA_OPTS
           value: "
               {{- if .Values.insightExecutor.javaOpts.other }}
@@ -747,44 +708,155 @@ spec:
 {{ toYaml . | indent 8 }}
       {{- end }}
       volumes:
+      {{- if .Values.postgresql.enabled }}
+      - name: postgresql-setup
+        configMap:
+          name: {{ template "mission-control.fullname" . }}-postgresql-setup-script
+      {{- end }}
+      - name: jfmc-setup-scripts
+        configMap:
+          name: {{ template "mission-control.fullname" . }}-jfmc-setup-scripts
+      {{- if .Values.elasticsearch.enabled }}
+      - name: elasticsearch-scripts
+        configMap:
+          name: {{ template "mission-control.fullname" . }}-elasticsearch-scripts
+      {{- end }}
+      {{- if not .Values.missionControl.persistence.enabled }}
       - name: mission-control-data
-      {{- if .Values.missionControl.persistence.enabled }}
-        persistentVolumeClaim:
-          claimName: {{ if .Values.missionControl.persistence.existingClaim }}{{ .Values.missionControl.persistence.existingClaim }}{{ else }}{{ template "mission-control.fullname" . }}{{ end }}
-      {{- else }}
         emptyDir: {}
-      {{- end }}
       - name: insight-server-logs
-      {{- if .Values.insightServer.persistence.enabled }}
-        persistentVolumeClaim:
-          claimName: {{ if .Values.insightServer.persistence.existingClaim }}{{ .Values.insightServer.persistence.existingClaim }}{{ else }}{{ template "insight-server.fullname" . }}{{ end }}
-      {{- else }}
         emptyDir: {}
-      {{- end }}
       - name: insight-scheduler-logs
-      {{- if .Values.insightScheduler.persistence.enabled }}
-        persistentVolumeClaim:
-          claimName: {{ if .Values.insightScheduler.persistence.existingClaim }}{{ .Values.insightScheduler.persistence.existingClaim }}{{ else }}{{ template "insight-scheduler.fullname" . }}{{ end }}
-      {{- else }}
         emptyDir: {}
+      - name: elasticsearch-data
+        emptyDir: {}
+      {{- else }}
+  volumeClaimTemplates:
+  - metadata:
+      name: mission-control-data
+      labels:
+        app: {{ template "mission-control.name" . }}
+        chart: {{ template "mission-control.chart" . }}
+        release: {{ .Release.Name }}
+        heritage: {{ .Release.Service }}
+    spec:
+      {{- if .Values.missionControl.persistence.existingClaim }}
+      selector:
+        matchLabels:
+          app: {{ template "mission-control.name" . }}
+      {{- else }}
+      {{- if .Values.missionControl.persistence.storageClass }}
+      {{- if (eq "-" .Values.missionControl.persistence.storageClass) }}
+      storageClassName: ''
+      {{- else }}
+      storageClassName: '{{ .Values.missionControl.persistence.storageClass }}'
+      {{- end }}
+      {{- end }}
+      accessModes: [ '{{ .Values.missionControl.persistence.accessMode }}' ]
+      resources:
+        requests:
+          storage: {{ .Values.missionControl.persistence.size }}
       {{- end }}
-      - name: insight-executor-logs
-      {{- if .Values.insightExecutor.persistence.enabled }}
-        persistentVolumeClaim:
-          claimName: {{ if .Values.insightExecutor.persistence.existingClaim }}{{ .Values.insightExecutor.persistence.existingClaim }}{{ else }}{{ template "insight-executor.fullname" . }}{{ end }}
+  - metadata:
+      name: insight-server-logs
+      labels:
+        app: {{ template "mission-control.name" . }}
+        chart: {{ template "mission-control.chart" . }}
+        release: {{ .Release.Name }}
+        heritage: {{ .Release.Service }}
+    spec:
+      {{- if .Values.insightServer.persistence.existingClaim }}
+      selector:
+        matchLabels:
+          app: {{ template "mission-control.name" . }}
       {{- else }}
-        emptyDir: {}
+      {{- if .Values.insightServer.persistence.storageClass }}
+      {{- if (eq "-" .Values.insightServer.persistence.storageClass) }}
+      storageClassName: ''
+      {{- else }}
+      storageClassName: '{{ .Values.insightServer.persistence.storageClass }}'
+      {{- end }}
+      {{- end }}
+      accessModes: [ '{{ .Values.insightServer.persistence.accessMode }}' ]
+      resources:
+        requests:
+          storage: {{ .Values.insightServer.persistence.size }}
       {{- end }}
-      {{- if .Values.postgresql.enabled }}
-      - name: postgresql-setup
-        configMap:
-          name: {{ template "mission-control.fullname" . }}-postgresql-setup-script
+  - metadata:
+      name: insight-scheduler-logs
+      labels:
+        app: {{ template "mission-control.name" . }}
+        chart: {{ template "mission-control.chart" . }}
+        release: {{ .Release.Name }}
+        heritage: {{ .Release.Service }}
+    spec:
+      {{- if .Values.insightScheduler.persistence.existingClaim }}
+      selector:
+        matchLabels:
+          app: {{ template "mission-control.name" . }}
+      {{- else }}
+      {{- if .Values.insightScheduler.persistence.storageClass }}
+      {{- if (eq "-" .Values.insightScheduler.persistence.storageClass) }}
+      storageClassName: ''
+      {{- else }}
+      storageClassName: '{{ .Values.insightScheduler.persistence.storageClass }}'
+      {{- end }}
+      {{- end }}
+      accessModes: [ '{{ .Values.insightScheduler.persistence.accessMode }}' ]
+      resources:
+        requests:
+          storage: {{ .Values.insightScheduler.persistence.size }}
      {{- end }}
-      {{- if .Values.mongodb.enabled }}
-      - name: mongodb-setup
-        configMap:
-          name: {{ template "mission-control.fullname" . }}-setup-script
+  - metadata:
+      name: insight-executor-logs
+      labels:
+        app: {{ template "mission-control.name" . }}
+        chart: {{ template "mission-control.chart" . }}
+        release: {{ .Release.Name }}
+        heritage: {{ .Release.Service }}
+    spec:
+      {{- if .Values.insightExecutor.persistence.existingClaim }}
+      selector:
+        matchLabels:
+          app: {{ template "mission-control.name" . }}
+      {{- else }}
+      {{- if .Values.insightExecutor.persistence.storageClass }}
+      {{- if (eq "-" .Values.insightExecutor.persistence.storageClass) }}
+      storageClassName: ''
+      {{- else }}
+      storageClassName: '{{ .Values.insightExecutor.persistence.storageClass }}'
+      {{- end }}
+      {{- end }}
+      accessModes: [ '{{ .Values.insightExecutor.persistence.accessMode }}' ]
+      resources:
+        requests:
+          storage: {{ .Values.insightExecutor.persistence.size }}
      {{- end }}
-      - name: jfmc-setup-scripts
-        configMap:
-          name: {{ template "mission-control.fullname" . }}-jfmc-setup-scripts
\ No newline at end of file
+  {{- if .Values.elasticsearch.enabled }}
+  - metadata:
+      name: elasticsearch-data
+      labels:
+        app: {{ template "mission-control.name" . }}
+        chart: {{ template "mission-control.chart" . }}
+        release: {{ .Release.Name }}
+        heritage: {{ .Release.Service }}
+    spec:
+      {{- if .Values.elasticsearch.persistence.existingClaim }}
+      selector:
+        matchLabels:
+          app: {{ template "mission-control.name" . }}
+      {{- else }}
+      {{- if .Values.elasticsearch.persistence.storageClass }}
+      {{- if (eq "-" .Values.elasticsearch.persistence.storageClass) }}
+      storageClassName: ''
+      {{- else }}
+      storageClassName: '{{ .Values.elasticsearch.persistence.storageClass }}'
+      {{- end }}
+      {{- end }}
+      accessModes: [ '{{ .Values.elasticsearch.persistence.accessMode }}' ]
+      resources:
+        requests:
+          storage: {{ .Values.elasticsearch.persistence.size }}
+      {{- end }}
+  {{- end }}
+  {{- end }}
diff --git a/stable/mission-control/templates/mission-control-svc.yaml b/stable/mission-control/templates/mission-control-svc.yaml
index 7d3c70b6c..6482c3e78 100644
--- a/stable/mission-control/templates/mission-control-svc.yaml
+++ b/stable/mission-control/templates/mission-control-svc.yaml
@@ -15,6 +15,11 @@ spec:
       port: {{ .Values.missionControl.externalPort }}
       targetPort: {{ .Values.missionControl.internalPort }}
       protocol: TCP
+{{- if .Values.elasticsearch.enabled }}
+    - name: estransport
+      port: {{ .Values.elasticsearch.transportPort }}
+      targetPort: estransport
+{{- end }}
   selector:
     app: {{ template "mission-control.name" . }}
     component: {{ .Values.missionControl.name }}
diff --git a/stable/mission-control/templates/mongodb-secret.yaml b/stable/mission-control/templates/mongodb-secret.yaml
deleted file mode 100644
index dc9f4e5a8..000000000
--- a/stable/mission-control/templates/mongodb-secret.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-{{- if .Values.mongodb.enabled }}
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ template "mission-control.fullname" . }}-mongodb-cred
-  labels:
-    app: {{ template "mission-control.name" . }}
-    chart: {{ template "mission-control.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-type: Opaque
-data:
-  adminPassword: {{ required "A valid .Values.mongodb.db.adminPassword entry required!" .Values.mongodb.db.adminPassword | b64enc | quote }}
-  mcPassword: {{ required "A valid .Values.mongodb.db.mcPassword entry required!" .Values.mongodb.db.mcPassword | b64enc | quote }}
-  insightPassword: {{ required "A valid .Values.mongodb.db.insightPassword entry required!" .Values.mongodb.db.insightPassword | b64enc | quote }}
-{{- end }}
\ No newline at end of file
diff --git a/stable/mission-control/templates/mongodb-setup-scripts.yaml b/stable/mission-control/templates/mongodb-setup-scripts.yaml
deleted file mode 100644
index 1336e379e..000000000
--- a/stable/mission-control/templates/mongodb-setup-scripts.yaml
+++ /dev/null
@@ -1,79 +0,0 @@
-{{- if .Values.mongodb.enabled }}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ template "mission-control.fullname" . }}-setup-script
-  labels:
-    app: {{ template "mission-control.name" . }}
-    chart: {{ template "mission-control.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-data:
-  setup.sh: |
-    #!/bin/sh
-    # Setup script to create MongoDB users
-
-    errorExit () {
-      echo; echo "ERROR: $1"; echo; exit 1
-    }
-
-    echo "Waiting for mongodb to come up"
-    until mongo --host {{ .Release.Name }}-mongodb --port 27017 --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
-      echo "Waiting for db availability"
-      sleep 1
-    done
-    echo "DB ready. Configuring..."
-    mongo --eval "var adminPassword = '$MONGODB_ADMIN_PASSWORD', mcPassword = '$MONGODB_MC_PASSWORD', insightPassword = '$MONGODB_INSIGHT_PASSWORD';" --host {{ .Release.Name }}-mongodb --port 27017 /scripts/createMongoDBUsers.js || errorExit "DB user setup failed"
-    echo "DB config done"
-
-  createMongoDBUsers.js: |
-    // JFrog Mission-Control MongoDB Bootstrap
-
-    // Default admin user
-    var adminUser = {
-      user: "{{ .Values.mongodb.db.adminUser }}",
-      pwd: adminPassword
-    };
-
-    // Create the admin user
-    adminUser.roles = ["root"];
-    adminUser.customData = {
-      createdBy: "JFrog Mission-Control installer"
-    };
-    db.getSiblingDB(adminUser.user).auth(adminUser.user, adminUser.pwd) || db.getSiblingDB(adminUser.user).createUser(adminUser);
-
-    // Default mc user
-    var jfmcUser = {
-      user: "{{ .Values.mongodb.db.mcUser }}",
-      pwd: mcPassword,
-      roles: ["dbOwner"],
-      customData: {
-        createdBy: "JFrog Mission-Control installer"
-      }
-    };
-
-    // Default insight-server user
-    var jiUser = {
-      user: "{{ .Values.mongodb.db.insightUser }}",
-      pwd: insightPassword,
-      roles: ["dbOwner"],
-      customData: {
-        createdBy: "JFrog Mission-Control installer"
-      }
-    };
-
-    // Authenticating as admin to create mc user
-    var loginOutput = db.getSiblingDB(adminUser.user).auth(adminUser.user, adminUser.pwd);
-
-    // Check if user exists before creation
-    function createUserDB(dbName, dbUser) {
-      db.getSiblingDB(dbName).getUser(dbUser.user) || db.getSiblingDB(dbName).createUser(dbUser);
-    }
-
-    createUserDB("{{ .Values.mongodb.db.mcUser }}", jfmcUser);
-    createUserDB("insight_CUSTOM_", jiUser);
-    createUserDB("insight_team", jiUser);
-    createUserDB("{{ .Values.mongodb.db.insightSchedulerDb }}", jiUser)
-
-{{- end }}
-
diff --git a/stable/mission-control/templates/postgresql-secret.yaml b/stable/mission-control/templates/postgresql-secret.yaml
index ca68eb2ae..509ceb33c 100644
--- a/stable/mission-control/templates/postgresql-secret.yaml
+++ b/stable/mission-control/templates/postgresql-secret.yaml
@@ -30,7 +30,7 @@ data:
   {{- else }}
   jfmcPassword: {{ randAlphaNum 10 | b64enc | quote }}
   {{- end }}
-{{- if .Values.postgresql.db.postgresPassword }}
-  postgres-password: {{ .Values.postgresql.db.postgresPassword | b64enc | quote }}
+{{- if .Values.postgresql.postgresPassword }}
+  postgres-password: {{ .Values.postgresql.postgresPassword | b64enc | quote }}
 {{- end }}
 {{- end }}
\ No newline at end of file
diff --git a/stable/mission-control/templates/postgresql-setup-script.yaml b/stable/mission-control/templates/postgresql-setup-script.yaml
index 8cd66c58e..bd44211a3 100644
--- a/stable/mission-control/templates/postgresql-setup-script.yaml
+++ b/stable/mission-control/templates/postgresql-setup-script.yaml
@@ -73,7 +73,7 @@ data:
       attempt_number=${attempt_number:-0}
       ${PSQL} $POSTGRES_OPTIONS --version > /dev/null 2>&1
       outcome1=$?
-      # Execute a simple db function to verify if mongo is up and running
+      # Execute a simple db function to verify if postgres is up and running
      ${PSQL} $POSTGRES_OPTIONS -l > /dev/null 2>&1
      outcome2=$?
      if [[ $outcome1 -eq 0 ]] && [[ $outcome2 -eq 0 ]]; then
diff --git a/stable/mission-control/values.yaml b/stable/mission-control/values.yaml
index 7a0e587e1..af4f92bab 100644
--- a/stable/mission-control/values.yaml
+++ b/stable/mission-control/values.yaml
@@ -12,6 +12,9 @@ uname: jfmc
 
 imagePullSecrets:
 
+# For HA
+replicaCount: 1
+
 ## Role Based Access Control
 ## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
 rbac:
@@ -41,56 +44,12 @@ serviceAccount:
 
 ## Details required for initialization/setup of database
 dbSetup:
-  mongodb:
-    image:
-      repository: mvertes/alpine-mongo
-      tag: 3.6.3-0
-      pullPolicy: IfNotPresent
   postgresql:
     image:
       repository: postgres
       tag: 9.6.11-alpine
       pullPolicy: IfNotPresent
 
-# Sub charts
-## Configuration values for the mongodb dependency
-## ref: https://github.com/kubernetes/charts/blob/master/stable/mongodb/README.md
-##
-mongodb:
-  enabled: false
-  image:
-    tag: 3.6.8-debian-9
-    pullPolicy: IfNotPresent
-  persistence:
-    size: 50Gi
-  resources: {}
-  #  requests:
-  #    memory: "2Gi"
-  #    cpu: "100m"
-  #  limits:
-  #    memory: "2Gi"
-  #    cpu: "250m"
-  ## Make sure the --wiredTigerCacheSizeGB is no more than half the memory limit!
-  ## This is critical to protect against OOMKill by Kubernetes!
-  mongodbExtraFlags:
-  - "--wiredTigerCacheSizeGB=1"
-  usePassword: false
-  db:
-    adminUser: admin
-    adminPassword:
-    mcUser: mission_platform
-    mcPassword:
-    insightUser: jfrog_insight
-    insightPassword:
-    insightSchedulerDb: insight_scheduler
-    insightExecutorDb: insight_executor
-    insightServerDb: insight_team
-    missionControl: mission_platform
-  livenessProbe:
-    initialDelaySeconds: 40
-  readinessProbe:
-    initialDelaySeconds: 30
-
 # PostgreSQL
 ## Configuration values for the postgresql dependency
 ## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
@@ -99,7 +58,7 @@ postgresql:
   enabled: true
   imageTag: "9.6.11"
   postgresUsername: postgres
-  postgresPassword:
+  postgresPassword: postgres
   postgresConfig:
     maxConnections: "1500"
   db:
@@ -175,26 +134,47 @@ database:
 
 elasticsearch:
   enabled: true
+  name: elasticsearch
+  initContainerImage: "alpine:3.6"
+  image:
+    repository: docker.bintray.io/jfrog/elasticsearch-oss-sg
+    tag: 6.6.0
+    pullPolicy: IfNotPresent
+  ## Enter elasticsearch connection details
+  url: http://localhost:9200
+  httpPort: 9200
+  transportPort: 9300
+  username: "admin"
+  password: "admin"
+  env:
+    clusterName: "es-cluster"
+    networkHost: "0.0.0.0"
+    transportHost: "0.0.0.0"
+    maxMapCount: 262144
+  minimumMasterNodes: 1
+
   persistence:
-    size: 50Gi
-  resources: {}
-  #  requests:
-  #    memory: "2Gi"
-  #    cpu: "100m"
-  #  limits:
-  #    memory: "2Gi"
-  #    cpu: "500m"
-  ## ElasticSearch xms and xmx should be same!
+    enabled: true
+    ## A manually managed Persistent Volume and Claim
+    ## Requires persistence.enabled: true
+    ## If defined, PVC must be created manually before volume will be bound
+    # existingClaim:
+
+    mountPath: "/usr/share/elasticsearch/data"
+    accessMode: ReadWriteOnce
+    size: 100Gi
+    ## ElasticSearch data Persistent Volume Storage Class
+    ## If defined, storageClassName:
+    ## If set to "-", storageClassName: "", which disables dynamic provisioning
+    ## If undefined (the default) or set to null, no storageClassName spec is
+    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
+    ##   GKE, AWS & OpenStack)
+    ##
+    # storageClass: "-"
+
   javaOpts: {}
   #  xms: "2g"
   #  xmx: "2g"
-  env:
-    clusterName: "es-cluster"
-    esUsername: "elastic"
-    esPassword:
-
-  service:
-    port: 9200
 
   podRestartTime:
 
@@ -217,7 +197,6 @@ logger:
   tag: 1.30
 
 missionControl:
-  replicaCount: 1
   name: mission-control
   appName: jfmc-server
   home: /var/opt/jfrog/mission-control
@@ -252,6 +231,14 @@ missionControl:
     #  - jfmc-server.log
     #  - monitoring.log
 
+  ## Mission Control requires a unique mc.key
+  ## This will be used to encrypt sensitive data stored in the database
+  ## You can generate one with the command:
+  ## 'openssl rand -hex 16'
+  ## Pass it to helm with '--set missionControl.mcKey=${MC_KEY}'
+  ## IMPORTANT: You should NOT use the example mcKey for a production deployment!
+  mcKey: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+
   missionControlUrl:
   podRestartTime:
   repository: jfrog-mission-control
@@ -306,7 +293,6 @@ missionControl:
 
 
 insightServer:
-  replicaCount: 1
   name: insight-server
   home: /opt/jfrog
   image: docker.bintray.io/jfrog/insight-server
@@ -349,7 +335,6 @@ insightServer:
     #  - insight-server.log
 
 insightScheduler:
-  replicaCount: 1
   name: insight-scheduler
   home: /opt/jfrog
   image: docker.bintray.io/jfrog/insight-scheduler
@@ -362,13 +347,13 @@ insightScheduler:
   javaOpts: {}
   #  other:
   #  xms: "500m"
-  #  xmx: "3g"
+  #  xmx: "1g"
   resources: {}
   #  requests:
   #    memory: "500Mi"
   #    cpu: "100m"
   #  limits:
-  #    memory: "3.5Gi"
+  #    memory: "1.5Gi"
   #    cpu: "1"
 
   # Persistence for the logs
@@ -397,7 +382,6 @@ insightScheduler:
     #  - access.log
 
 insightExecutor:
-  replicaCount: 1
   name: insight-executor
   home: /opt/jfrog
   image: docker.bintray.io/jfrog/insight-executor
@@ -410,13 +394,13 @@ insightExecutor:
   javaOpts: {}
   #  other:
   #  xms: "500m"
-  #  xmx: "3g"
+  #  xmx: "2g"
   resources: {}
   #  requests:
   #    memory: "500Mi"
   #    cpu: "100m"
   #  limits:
-  #    memory: "3.5Gi"
+  #    memory: "2.5Gi"
   #    cpu: "1"
 
   # Persistence for the logs
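
For reference, the new `values.yaml` keys introduced above (`replicaCount`, `elasticsearch.*`, `postgresql.postgresPassword`) can all be overridden at install or upgrade time. A minimal sketch, assuming a release named `mission-control` and purely illustrative override values (none of these are chart defaults):

```bash
# Illustrative overrides for the values introduced in this chart version
export POSTGRES_PASSWORD=$(openssl rand -hex 8)

helm upgrade --install mission-control jfrog/mission-control \
  --set replicaCount=2 \
  --set postgresql.postgresPassword=${POSTGRES_PASSWORD} \
  --set elasticsearch.persistence.size=200Gi \
  --set elasticsearch.persistence.storageClass=standard
```

As with the mc key, pass the same overrides on subsequent `helm upgrade` calls so that generated secrets and volumes stay consistent across releases.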