diff --git a/.env b/.env
index e047d26..5f0a80e 100644
--- a/.env
+++ b/.env
@@ -1,5 +1,5 @@
 COMPOSE_PROJECT_NAME=elastic
-ELK_VERSION=7.17.0
+ELK_VERSION=8.0.1
 
 #----------- Resources --------------------------#
 ELASTICSEARCH_HEAP=1024m
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 0bd0125..19a049b 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -28,6 +28,4 @@ jobs:
         run: make setup && make up
       - name: Test Elasticsearch
         run: timeout 240s sh -c "until curl https://elastic:changeme@localhost:9200 --insecure --silent; do echo 'Elasticsearch Not Up, Retrying...'; sleep 3; done" && echo 'Elasticsearch is up'
-      - name: Test Kibana
-        run: timeout 240s sh -c "until curl https://localhost:5601 --insecure --silent; do echo 'Kibana Not Ready, Retrying...'; sleep 3; done" && echo 'Kibana is up'
diff --git a/LICENSE b/LICENSE
index 831d886..5d7c46f 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2020 Sherif Abdel-Naby
+Copyright (c) 2022 Sherif Abdel-Naby
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
diff --git a/Makefile b/Makefile
index e2acabb..bc98915 100644
--- a/Makefile
+++ b/Makefile
@@ -12,11 +12,6 @@ ELK_TOOLS := rubban
 ELK_NODES := elasticsearch-1 elasticsearch-2
 ELK_MAIN_SERVICES := ${ELK_SERVICES} ${ELK_MONITORING} ${ELK_TOOLS}
 ELK_ALL_SERVICES := ${ELK_MAIN_SERVICES} ${ELK_NODES} ${ELK_LOG_COLLECTION}
-# --------------------------
-
-# load .env so that Docker Swarm Commands has .env values too. (https://github.com/moby/moby/issues/29133)
-include .env
-export
 # --------------------------
 
 .PHONY: setup keystore certs all elk monitoring tools build down stop restart rm logs
@@ -77,26 +72,9 @@ images: ## Show all Images of ELK and all its extra components.
 	@docker-compose $(COMPOSE_ALL_FILES) images ${ELK_ALL_SERVICES}
 
 prune: ## Remove ELK Containers and Delete Volume Data
-	@make swarm-rm || echo ""
 	@make stop && make rm
 	@docker volume prune -f
 
-swarm-deploy-elk:
-	@make build
-	docker stack deploy -c docker-compose.yml elastic
-
-swarm-deploy-monitoring:
-	@make build
-	@docker stack deploy -c docker-compose.yml -c docker-compose.monitor.yml elastic
-
-swarm-deploy-tools:
-	@make build
-	@docker stack deploy -c docker-compose.yml -c docker-compose.tools.yml elastic
-
-swarm-rm:
-	docker stack rm elastic
-
-
 help: ## Show this help.
 	@echo "Make Application Docker Images and Containers using Docker-Compose files in 'docker' Dir."
 	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m\033[0m (default: help)\n\nTargets:\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-12s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
diff --git a/README.md b/README.md
index 6dbe59c..77b4050 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
 
 Configured to be ready to be used for Log, Metrics, APM, Alerting, Machine Learning, and Security (SIEM) usecases.
 
-    Elastic Stack Version 7^^
+    Elastic Stack Version 8^^
@@ -36,24 +36,27 @@ Elastic Stack (**ELK**) Docker Composition, preconfigured with **Security**, **M
 
 Suitable for Demoing, MVPs and small production deployments.
 
-Based on [Official Elastic Docker Images](https://www.docker.elastic.co/)
-
-Stack Version: [7.17.0](https://www.elastic.co/blog/elastic-stack-7-17-0-released)
-> You can change Elastic Stack version by setting `ELK_VERSION` in `.env` file and rebuild your images. Any version >= 7.0.0 is compatible with this template.
+Stack Version: [8.0.1](https://www.elastic.co/blog/whats-new-elastic-8-0-0) 🎉 - Based on [Official Elastic Docker Images](https://www.docker.elastic.co/)
+> You can change Elastic Stack version by setting `ELK_VERSION` in `.env` file and rebuild your images. Any version >= 8.0.0 is compatible with this template.
 
 ### Main Features 📜
 
-- Configured as Production Single Node Cluster. (With a multi-node cluster option for experimenting).
-- Deployed on a Single Docker Host or a Docker Swarm Cluster.
-- Security Enabled (under basic license).
-- SSL Enabled (enables Alerting, SIEM, and ML features).
+- Configured as a Production Single Node Cluster. (With a multi-node cluster option for experimenting).
+- Security Enabled By Default.
+- Configured to Enable:
+  - Logging & Metrics Ingestion
+  - APM
+  - Alerting
+  - Machine Learning
+  - SIEM
+  - Enabling Trial License
 - Use Docker-Compose and `.env` to configure your entire stack parameters.
 - Persist Elasticsearch's Keystore and SSL Certifications.
 - Self-Monitoring Metrics Enabled.
 - Prometheus Exporters for Stack Metrics.
+- Collect Docker Host Logs to ELK via `make collect-docker-logs`.
 - Embedded Container Healthchecks for Stack Images.
 - [Rubban](https://github.com/sherifabdlnaby/rubban) for Kibana curating tasks.
-- A command to ship your host Docker Images to the ELK.
 
 #### More points
 And comparing Elastdocker and the popular [deviantony/docker-elk](https://github.com/deviantony/docker-elk)
@@ -93,8 +96,8 @@ Elastdocker differs from `deviantony/docker-elk` in the following points.
 
 # Requirements
 
-- [Docker 17.05 or higher](https://docs.docker.com/install/)
-- [Docker-Compose 3 or higher](https://docs.docker.com/compose/install/)
+- [Docker 20.05 or higher](https://docs.docker.com/install/)
+- [Docker-Compose 1.29 or higher](https://docs.docker.com/compose/install/)
 - 4GB RAM (For Windows and MacOS make sure Docker's VM has more than 4GB+ memory.)
 
 # Setup
@@ -118,24 +121,10 @@ Elastdocker differs from `deviantony/docker-elk` in the following points.
 > - Notice that Kibana is configured to use HTTPS, so you'll need to write `https://` before `localhost:5601` in the browser.
 > - Modify `.env` file for your needs, most importantly `ELASTIC_PASSWORD` that setup your superuser `elastic`'s password, `ELASTICSEARCH_HEAP` & `LOGSTASH_HEAP` for Elasticsearch & Logstash Heap Size.
+
+> Whatever your Host (e.g AWS EC2, Azure, DigitalOcean, or on-premise server), once you expose your host to the network, ELK component will be accessible on their respective ports. Since the enabled TLS uses a self-signed certificate, it is recommended to SSL-Terminate public traffic using your signed certificates.
 
-Whatever your Host (e.g AWS EC2, Azure, DigitalOcean, or on-premise server), once you expose your host to the network, ELK component will be accessible on their respective ports. Since the enabled TLS uses a self-signed certificate, it is recommended to SSL-Terminate public traffic using your signed certificates.
-
-### Docker Swarm Support
-
-Elastdocker can be deployed to Docker Swarm using `make swarm-deploy`
-
-<details><summary>Expand</summary>
-<p>
-
-However it is not recommended to [depend on Docker Swarm](https://boxboat.com/2019/12/10/migrate-docker-swarm-to-kubernetes/); if your scale needs a multi-host cluster to host your ELK then Kubernetes is the recommended next step.
-
-Elastdocker should be used for small production workloads enough to fit on a single host.
-
-> Docker Swarm lacks some features such as `ulimits` used to disable swapping in Elasticsearch container, please change `bootstrap.memory_lock` to `false` in docker-compose.yml and find an [alternative way](https://www.elastic.co/guide/en/elasticsearch/reference/master/setup-configuration-memory.html) to disable swapping in your swarm cluster.
-
-</p>
-</details>
+> 🏃🏻‍♂️ To start ingesting logs, you can start by running `make collect-docker-logs` which will collect your host's container logs.
 
 ## Additional Commands
 
@@ -183,7 +172,7 @@ $ make prune
 
 * Some Configuration are parameterized in the `.env` file.
   * `ELASTIC_PASSWORD`, user `elastic`'s password (default: `changeme` _pls_).
-  * `ELK_VERSION` Elastic Stack Version (default: `7.17.0`)
+  * `ELK_VERSION` Elastic Stack Version (default: `8.0.1`)
   * `ELASTICSEARCH_HEAP`, how much Elasticsearch allocate from memory (default: 1GB -good for development only-)
   * `LOGSTASH_HEAP`, how much Logstash allocate from memory.
 * Other configurations which their such as cluster name, and node name, etc.
@@ -225,8 +214,23 @@ make keystore
 
 ---------------------------
 
+![Intro](https://user-images.githubusercontent.com/16992394/156664447-c24c49f4-4282-4d6a-81a7-10743cfa384e.png)
+![Alerting](https://user-images.githubusercontent.com/16992394/156664848-d14f5e58-8f80-497d-a841-914c05a4b69c.png)
+![Maps](https://user-images.githubusercontent.com/16992394/156664562-d38e11ee-b033-4b91-80bd-3a866ad65f56.png)
+![ML](https://user-images.githubusercontent.com/16992394/156664695-5c1ed4a7-82f3-47a6-ab5c-b0ce41cc0fbe.png)
+
+
 # Monitoring The Cluster
 
+### Via Self-Monitoring
+
+Head to Stack Monitoring tab in Kibana to see cluster metrics for all stack components.
+
+![Overview](https://user-images.githubusercontent.com/16992394/156664539-cc7e1a69-f1aa-4aca-93f6-7aedaabedd2c.png)
+![Moniroting](https://user-images.githubusercontent.com/16992394/156664647-78cfe2af-489d-4c35-8963-9b0a46904cf7.png)
+
+> In Production, cluster metrics should be shipped to another dedicated monitoring cluster.
+
 ### Via Prometheus Exporters
 
 If you started Prometheus Exporters using `make monitoring` command. Prometheus Exporters will expose metrics at the following ports.
@@ -237,14 +241,6 @@ If you started Prometheus Exporters using `make monitoring` command. Prometheus
 
 ![Metrics](https://user-images.githubusercontent.com/16992394/78685076-89a58900-78f1-11ea-959b-ce374fe51500.jpg)
 
-### Via Self-Monitoring
-
-Head to Stack Monitoring tab in Kibana to see cluster metrics for all stack components.
-
-![Metrics](https://user-images.githubusercontent.com/16992394/65841358-b0bb4680-e321-11e9-9a71-36a1d6fb2a41.png)
-![Metrics](https://user-images.githubusercontent.com/16992394/65841362-b6189100-e321-11e9-93e4-b7b2caa5a37d.jpg)
-
-> In Production, cluster metrics should be shipped to another dedicated monitoring cluster.
 
 # License
 [MIT License](https://raw.githubusercontent.com/sherifabdlnaby/elastdocker/master/LICENSE)
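A quick way to exercise the version bump described in the README hunks above is to pin `ELK_VERSION`, rebuild, and reuse the readiness probe from the CI workflow. A minimal sketch — the `make` targets and the default `changeme` password are taken from the diffs above; adjust them for your own `.env`:

```bash
#!/usr/bin/env bash
# Sketch: pin a new stack version, rebuild, and wait for the cluster.
# Assumes the .env file, Makefile targets, and default credentials shown in this diff.
set -euo pipefail

# Any 8.x release should work per the README note; 8.0.1 matches this PR.
sed -i 's/^ELK_VERSION=.*/ELK_VERSION=8.0.1/' .env

make setup   # keystore, certificates, and the Kibana service token
make build   # rebuild images against the new ELK_VERSION
make up      # start the stack

# Same readiness probe the CI workflow uses (self-signed TLS, hence --insecure).
until curl --silent --insecure https://elastic:changeme@localhost:9200 > /dev/null; do
  echo 'Elasticsearch not up yet, retrying...'
  sleep 3
done
echo 'Elasticsearch is up'
```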
diff --git a/docker-compose.monitor.yml b/docker-compose.monitor.yml
index b0a9e9e..c9c60c3 100644
--- a/docker-compose.monitor.yml
+++ b/docker-compose.monitor.yml
@@ -7,6 +7,7 @@ services:
     image: justwatch/elasticsearch_exporter:1.1.0
     restart: always
     command: ["--es.uri", "https://${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}@${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}",
+              "--es.ssl-skip-verify",
               "--es.all",
               "--es.snapshots",
               "--es.indices"]
diff --git a/docker-compose.setup.yml b/docker-compose.setup.yml
index 78848d6..716fb99 100644
--- a/docker-compose.setup.yml
+++ b/docker-compose.setup.yml
@@ -2,7 +2,11 @@ version: '3.5'
 
 services:
   keystore:
-    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
+    image: elastdocker/elasticsearch:${ELK_VERSION}
+    build:
+      context: elasticsearch/
+      args:
+        ELK_VERSION: ${ELK_VERSION}
     command: bash /setup/setup-keystore.sh
     user: "0"
     volumes:
@@ -12,9 +16,13 @@
       ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
 
   certs:
-    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
+    image: elastdocker/elasticsearch:${ELK_VERSION}
+    build:
+      context: elasticsearch/
+      args:
+        ELK_VERSION: ${ELK_VERSION}
     command: bash /setup/setup-certs.sh
     user: "0"
     volumes:
       - ./secrets:/secrets
-      - ./setup/:/setup/
\ No newline at end of file
+      - ./setup/:/setup
\ No newline at end of file
diff --git a/docker-compose.yml b/docker-compose.yml
index 6866c1e..3ad2834 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -13,6 +13,8 @@ volumes:
 secrets:
   elasticsearch.keystore:
     file: ./secrets/keystore/elasticsearch.keystore
+  elasticsearch.service_tokens:
+    file: ./secrets/service_tokens
   elastic.ca:
     file: ./secrets/certs/ca/ca.crt
   elasticsearch.certificate:
@@ -48,6 +50,8 @@ services:
     secrets:
       - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
+      - source: elasticsearch.service_tokens
+        target: /usr/share/elasticsearch/config/service_tokens
       - source: elastic.ca
        target: /usr/share/elasticsearch/config/certs/ca.crt
       - source: elasticsearch.certificate
@@ -105,6 +109,8 @@ services:
       ELASTIC_USERNAME: ${ELASTIC_USERNAME}
       ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
       ELASTICSEARCH_HOST_PORT: https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
+    env_file:
+      - ./secrets/.env.kibana.token
     secrets:
       - source: elastic.ca
         target: /certs/ca.crt
diff --git a/kibana/config/kibana.yml b/kibana/config/kibana.yml
index dc54111..79e6372 100644
--- a/kibana/config/kibana.yml
+++ b/kibana/config/kibana.yml
@@ -18,8 +18,7 @@ xpack.encryptedSavedObjects.encryptionKey: D12GTfrlfxSPxPlGRBlgPB5qM5GOPDV5
 xpack.reporting.encryptionKey: RSCueeHKzrqzOVTJhkjt17EMnzM96LlN
 
 ## X-Pack security credentials
-elasticsearch.username: ${ELASTIC_USERNAME}
-elasticsearch.password: ${ELASTIC_PASSWORD}
+elasticsearch.serviceAccountToken: "${KIBANA_SERVICE_ACCOUNT_TOKEN}"
 
 elasticsearch.ssl.certificateAuthorities: [ "/certs/ca.crt" ]
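With Kibana now authenticating through `elasticsearch.serviceAccountToken` rather than the `elastic` superuser, it can be handy to check the generated token by hand. A sketch, under the assumption that `make setup` has already written `./secrets/.env.kibana.token` (the format produced by the keystore script further below) and that Elasticsearch listens on `localhost:9200`:

```bash
#!/usr/bin/env bash
# Sketch: present the elastic/kibana service token to Elasticsearch as a Bearer token.
set -euo pipefail

# The env file holds a single KIBANA_SERVICE_ACCOUNT_TOKEN entry; strip the key.
TOKEN=$(sed -e 's/^KIBANA_SERVICE_ACCOUNT_TOKEN[:=][[:space:]]*//' ./secrets/.env.kibana.token)

# _authenticate echoes the identity behind the credentials; for a valid token it
# should report the elastic/kibana service account. Swap --insecure for
# --cacert ./secrets/certs/ca/ca.crt if the certificate covers localhost.
curl --silent --insecure \
     -H "Authorization: Bearer ${TOKEN}" \
     https://localhost:9200/_security/_authenticate
```

If this returns an authentication error, re-running `make setup` regenerates the token so that the `service_tokens` file mounted into Elasticsearch and the env file handed to Kibana match again.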
diff --git a/setup/setup-certs.sh b/setup/setup-certs.sh
index 4d5626f..80893bb 100644
--- a/setup/setup-certs.sh
+++ b/setup/setup-certs.sh
@@ -2,6 +2,7 @@
 set -e
 
 OUTPUT_DIR=/secrets/certs
+ZIP_CA_FILE=$OUTPUT_DIR/ca.zip
 ZIP_FILE=$OUTPUT_DIR/certs.zip
 
 printf "======= Generating Elastic Stack Certificates =======\n"
@@ -13,11 +14,17 @@ if ! command -v unzip &>/dev/null; then
 fi
 
 printf "Clearing Old Certificates if exits... \n"
-find $OUTPUT_DIR -mindepth 1 -type d -exec rm -rf -- {} +
-rm -f $ZIP_FILE
+mkdir -p $OUTPUT_DIR
+find $OUTPUT_DIR -type d -exec rm -rf -- {} +
+mkdir -p $OUTPUT_DIR/ca
 
-printf "Generating... \n"
-bin/elasticsearch-certutil cert --silent --pem --in /setup/instances.yml -out $ZIP_FILE &> /dev/null
+
+printf "Generating CA Certificates... \n"
+PASSWORD=`openssl rand -base64 32`
+/usr/share/elasticsearch/bin/elasticsearch-certutil ca --pass "$PASSWORD" --pem --out $ZIP_CA_FILE &> /dev/null
+printf "Generating Certificates... \n"
+unzip -qq $ZIP_CA_FILE -d $OUTPUT_DIR;
+/usr/share/elasticsearch/bin/elasticsearch-certutil cert --silent --pem --ca-cert $OUTPUT_DIR/ca/ca.crt --ca-key $OUTPUT_DIR/ca/ca.key --ca-pass "$PASSWORD" --in /setup/instances.yml -out $ZIP_FILE &> /dev/null
 
 printf "Unzipping Certifications... \n"
 unzip -qq $ZIP_FILE -d $OUTPUT_DIR;
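Because the certificate script above now creates its own CA and then signs the instance certificates with it, the output can be sanity-checked offline. A sketch assuming the `./secrets:/secrets` bind mount from `docker-compose.setup.yml` and an `elasticsearch` instance name — the real names live in `/setup/instances.yml`, which is not part of this diff:

```bash
#!/usr/bin/env bash
# Sketch: verify that a generated instance certificate chains up to the generated CA.
set -euo pipefail

CA=./secrets/certs/ca/ca.crt
# Hypothetical instance path; folder names follow instances.yml.
CRT=./secrets/certs/elasticsearch/elasticsearch.crt

openssl x509 -in "$CA" -noout -subject -enddate   # inspect the CA elasticsearch-certutil produced
openssl verify -CAfile "$CA" "$CRT"               # expected output: "<path>: OK"
```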
diff --git a/setup/setup-keystore.sh b/setup/setup-keystore.sh
index b241479..189046d 100644
--- a/setup/setup-keystore.sh
+++ b/setup/setup-keystore.sh
@@ -1,8 +1,12 @@
 # Exit on Error
 set -e
 
-OUTPUT_FILE=/secrets/keystore/elasticsearch.keystore
-NATIVE_FILE=/usr/share/elasticsearch/config/elasticsearch.keystore
+GENERATED_KEYSTORE=/usr/share/elasticsearch/config/elasticsearch.keystore
+OUTPUT_KEYSTORE=/secrets/keystore/elasticsearch.keystore
+
+GENERATED_SERVICE_TOKENS=/usr/share/elasticsearch/config/service_tokens
+OUTPUT_SERVICE_TOKENS=/secrets/service_tokens
+OUTPUT_KIBANA_TOKEN=/secrets/.env.kibana.token
 
 # Password Generate
 PW=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 16 ;)
@@ -14,23 +18,47 @@ printf "========== Creating Elasticsearch Keystore ==========\n"
 printf "=====================================================\n"
 elasticsearch-keystore create >> /dev/null
 
-# Setting Secrets
-echo "Elastic password is: $ELASTIC_PASSWORD"
+# Setting Secrets and Bootstrap Password
 sh /setup/keystore.sh
+echo "Elastic Bootstrap Password is: $ELASTIC_PASSWORD"
+
+# Generating Kibana Token
+echo "Generating Kibana Service Token..."
+
+# Delete old token if exists
+/usr/share/elasticsearch/bin/elasticsearch-service-tokens delete elastic/kibana default &> /dev/null || true
+
+# Generate new token
+TOKEN=$(/usr/share/elasticsearch/bin/elasticsearch-service-tokens create elastic/kibana default | cut -d '=' -f2 | tr -d ' ')
+echo "Kibana Service Token is: $TOKEN"
+echo "KIBANA_SERVICE_ACCOUNT_TOKEN: $TOKEN" > $OUTPUT_KIBANA_TOKEN
 
 # Replace current Keystore
-if [ -f "$OUTPUT_FILE" ]; then
+if [ -f "$OUTPUT_KEYSTORE" ]; then
   echo "Remove old elasticsearch.keystore"
-  rm $OUTPUT_FILE
+  rm $OUTPUT_KEYSTORE
 fi
 
 echo "Saving new elasticsearch.keystore"
-mv $NATIVE_FILE $OUTPUT_FILE
-chmod 0644 $OUTPUT_FILE
+mkdir -p "$(dirname $OUTPUT_KEYSTORE)"
+mv $GENERATED_KEYSTORE $OUTPUT_KEYSTORE
+chmod 0644 $OUTPUT_KEYSTORE
+
+# Replace current Service Tokens File
+if [ -f "$OUTPUT_SERVICE_TOKENS" ]; then
+  echo "Remove old service_tokens file"
+  rm $OUTPUT_SERVICE_TOKENS
+fi
+
+echo "Saving new service_tokens file"
+mv $GENERATED_SERVICE_TOKENS $OUTPUT_SERVICE_TOKENS
+chmod 0644 $OUTPUT_SERVICE_TOKENS
 
 printf "======= Keystore setup completed successfully =======\n"
 printf "=====================================================\n"
 printf "Remember to restart the stack, or reload secure settings if changed settings are hot-reloadable.\n"
 printf "About Reloading Settings: https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-settings.html#reloadable-secure-settings\n"
+printf "=====================================================\n"
 printf "Your 'elastic' user password is: $ELASTIC_PASSWORD\n"
+printf "Your Kibana Service Token is: $TOKEN\n"
 printf "=====================================================\n"
\ No newline at end of file
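After the keystore step above runs (via `make setup` or `make keystore`), three artifacts should exist on the host: the keystore itself, the `service_tokens` file that `docker-compose.yml` now mounts back into Elasticsearch, and the Kibana token env file. A small sketch of that check, assuming the paths used throughout this diff:

```bash
#!/usr/bin/env bash
# Sketch: confirm the secrets produced by setup-keystore.sh are where the compose files expect them.
set -euo pipefail

for f in \
  ./secrets/keystore/elasticsearch.keystore \
  ./secrets/service_tokens \
  ./secrets/.env.kibana.token
do
  [ -f "$f" ] || { echo "missing: $f (re-run 'make setup' or 'make keystore')"; exit 1; }
done

# service_tokens stores only a hash of the token, which is why the plaintext value
# is exported separately to .env.kibana.token for Kibana's use.
if grep -q 'elastic/kibana' ./secrets/service_tokens; then
  echo "elastic/kibana service token registered"
else
  echo "no elastic/kibana entry found in service_tokens" >&2
  exit 1
fi
```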