Add docker support #1

Open: wants to merge 20 commits into `main`

Commits (20)
7ae9e80
First Dockerfile that successfully builds (+requirements.txt Makefile)
craigpratt Jul 27, 2021
d966b24
Normalizing the paths to enable the use of storage volume for shared …
craigpratt Jul 28, 2021
06dd588
Fixed Dockerfile mkdir step
craigpratt Jul 29, 2021
69b6cc6
More Dockerfile changes
craigpratt Jul 29, 2021
fd62944
WIP with Rich. Docker images build for TT and FileBeat (netflow-enabled)
craigpratt Feb 24, 2022
d6fb09f
WIP
craigpratt Apr 11, 2022
56cbf69
Added docker compose and kibana.yml
craigpratt Apr 11, 2022
f9a5443
Merge branch 'main' into task/add-docker-support
craigpratt Apr 11, 2022
e159e77
WIP from working meetings with Rich
craigpratt Apr 29, 2022
f539a13
Completed docker build configs for elasticsearch and kibana
craigpratt May 2, 2022
af59350
Made a variety of changes to reflect docker-elk changes for ES 8.1.3
craigpratt May 3, 2022
b7a3621
Added empty logstash pipeline dicts. Fixed host binds.
craigpratt May 6, 2022
3d207fa
Added TattleTake entrypoint script and blank Logstash dictionaries.
craigpratt May 7, 2022
fd707ce
Reorganized the compose script. Added .env.
craigpratt May 8, 2022
efb9d85
Fixed output rule to use new/improved role password
craigpratt May 8, 2022
6822dfe
Created "fletch" subdir to contain all the scheduled flush/fetch jobs…
craigpratt May 10, 2022
b2eb29f
Removed testing entry from crontab
craigpratt May 10, 2022
f069699
Fixed a couple uses of "localhost" that should have been "0.0.0.0"
craigpratt May 10, 2022
97ad3a3
Attempt to fix logstash_internal permissions
craigpratt May 11, 2022
55131ca
Merge pull request #3 from racompton/task/fix-docker-write-perm-error
craigpratt May 11, 2022
22 changes: 22 additions & 0 deletions .env
@@ -0,0 +1,22 @@
ELASTIC_VERSION=8.1.3

## Passwords for stack users
#

# User 'elastic' (built-in)
#
# Superuser role, full access to cluster management and data indices.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
ELASTIC_PASSWORD='elasticheart'

# User 'logstash_internal' (custom)
#
# The user Logstash uses to connect and send data to Elasticsearch.
# https://www.elastic.co/guide/en/logstash/current/ls-security.html
LOGSTASH_INTERNAL_PASSWORD='stashthemanylogs'

# User 'kibana_system' (built-in)
#
# The user Kibana uses to connect and communicate with Elasticsearch.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
KIBANA_SYSTEM_PASSWORD='kibanarama'
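The compose file consumes these `.env` values through `${VAR:-}` substitution. A minimal shell sketch of that expansion rule (the variable names are from the `.env` above; the behavior shown is plain POSIX parameter expansion, which is what Compose mimics here):

```shell
# ${VAR:-default} keeps a set, non-empty value and falls back to the
# default (empty in docker-compose.yml) when the variable is unset.
ELASTIC_PASSWORD='elasticheart'
unset KIBANA_SYSTEM_PASSWORD || true

resolved_elastic="${ELASTIC_PASSWORD:-}"
resolved_kibana="${KIBANA_SYSTEM_PASSWORD:-}"
echo "elastic -> '$resolved_elastic'"
echo "kibana  -> '$resolved_kibana'"
```

So a password missing from `.env` does not break interpolation; the service just receives an empty value, which the setup script would then reject at its own layer.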
15 changes: 15 additions & 0 deletions Makefile
@@ -0,0 +1,15 @@
DOCKER_REGISTRY := placeholder.github.com
DOCKER_IMAGE_TAG := latest
DOCKER_IMAGE_PATH := tattle-tale:$(DOCKER_IMAGE_TAG)
DOCKER_SAVE_FILE := tattle-tale-$(DOCKER_IMAGE_TAG)

docker-build:
	docker build -t $(DOCKER_REGISTRY)/$(DOCKER_IMAGE_PATH) .

docker-push:
	docker login $(DOCKER_REGISTRY)
	docker push $(DOCKER_REGISTRY)/$(DOCKER_IMAGE_PATH)

docker-save:
	docker save $(DOCKER_REGISTRY)/$(DOCKER_IMAGE_PATH) | gzip > $(DOCKER_SAVE_FILE).gz
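A quick shell rendering of the image reference those Make variables assemble; `make docker-build` tags the image with exactly this string (`placeholder.github.com` is the stand-in registry name from the Makefile, not a real endpoint):

```shell
# Mirror of $(DOCKER_REGISTRY)/$(DOCKER_IMAGE_PATH) from the Makefile.
DOCKER_REGISTRY=placeholder.github.com
DOCKER_IMAGE_TAG=latest
DOCKER_IMAGE_PATH="tattle-tale:${DOCKER_IMAGE_TAG}"
IMAGE_REF="${DOCKER_REGISTRY}/${DOCKER_IMAGE_PATH}"
echo "$IMAGE_REF"   # the tag docker-build/docker-push operate on
```

Overriding `DOCKER_IMAGE_TAG` on the `make` command line would change the tag in all three targets at once.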

6 changes: 3 additions & 3 deletions README.md
@@ -22,9 +22,9 @@ Copy the files from the `filebeat` directory and put them into `/etc/filebeat`

Copy the file `delete_old_indices.sh` from the `cron.daily` directory to `/etc/cron.daily` and make the file executable (`chmod 755 /etc/cron.daily/delete_old_indices.sh`)

-Create the `/opt/tattle_tale` directory `sudo mkdir /opt/tattle_tale`
+Create the `/opt/tattle-tale` directory `sudo mkdir /opt/tattle-tale`

-Copy the `tattle_shadow.py`, `tattle_snmp_poll.py` and `tattle_tale_cfg.py` files to the `/opt/tattle_tale` directory and make `tattle_shadow.py` and `tattle_snmp_poll.py` executable (`chmod 755 <filename>`)
+Copy the `tattle_shadow.py`, `tattle_snmp_poll.py` and `tattle_tale_cfg.py` files to the `/opt/tattle-tale/bin` directory and make `tattle_shadow.py` and `tattle_snmp_poll.py` executable (`chmod 755 <filename>`)

Rename the `netflow.yml.disabled` file to `netflow.yml` in `/etc/filebeat/modules.d`
Enable the filebeat module `sudo filebeat modules enable netflow`
@@ -42,7 +42,7 @@ Edit the `tattle_tale_cfg.py` file and populate these fields:
`snmp_community = "<community string>"`


-Create the file `/opt/tattle_tale/router_list.txt` and put in the IPs of routers that will be polled (one router per line).
+Create the file `/opt/tattle-tale/etc/router_list.txt` and put in the IPs of routers that will be polled (one router per line).


Restart the ELK stack daemons:
129 changes: 129 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,129 @@
version: '3.7'

networks:
  tattle-tale:
    driver: bridge

volumes:
  setup:
  elasticsearch:

services:
  setup:
    # The 'setup' service runs a one-off script which initializes the
    # 'logstash_internal' and 'kibana_system' users inside Elasticsearch with the
    # values of the passwords defined in the '.env' file.
    #
    # This task is only performed during the *initial* startup of the stack. On all
    # subsequent runs, the service simply returns immediately, without performing
    # any modification to existing users.
    #
    # See https://github.com/deviantony/docker-elk#setting-up-user-authentication
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - setup:/state:Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - tattle-tale

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    environment:
      ES_JAVA_OPTS: -Xmx256m -Xms256m
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - tattle-tale

  logstash:
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - type: bind
        target: /usr/share/logstash/pipeline
        source: ./logstash/pipeline
        read_only: true
      - type: bind
        # This directory must be mapped to TattleTale's /opt/tattle-tale/lib/logstash directory
        # source: ${TT_HOST_LIB_DIRECTORY}/logstash
        target: /usr/share/logstash/tattle-tale
        source: ./lib/logstash
        read_only: true
    environment:
      LS_JAVA_OPTS: -Xmx256m -Xms256m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - tattle-tale
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    ports:
      # Replace 0.0.0.0 with the IP address you want the web interface to run on
      - "0.0.0.0:5601:5601"
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - tattle-tale
    depends_on:
      - elasticsearch

  filebeat:
    # See https://www.elastic.co/guide/en/beats/filebeat/8.2/running-on-docker.html#running-on-docker
    build:
      context: filebeat/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    ports:
      # Replace 0.0.0.0 with the IP of the interface receiving netflow
      - 0.0.0.0:2055:2055/udp
    networks:
      - tattle-tale
    depends_on:
      - logstash

  fletch:
    # Fetches and flushes things
    build:
      context: fletch/
    environment:
      TT_SHADOW_USER: BLAH
      TT_SHADOW_PASS: BLAH
      TT_SNMP_COMMUNITY_STRING: BLAH
      # Change to match your naming convention
      # BTW, https://regex101.com is really good for testing regex
      TT_INT_DESCRIPTION_PEER_NAME_REGEX: '\[NAME=(.+?)\]'
    volumes:
      - type: bind
        # This lib directory must also be mapped to logstash's /usr/share/logstash/tattle-tale
        # source: ${TT_HOST_LIB_DIRECTORY}
        source: ./lib
        target: /opt/tattle-tale/lib
        read_only: false
    networks:
      - tattle-tale
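The `fletch` and `logstash` bind mounts above share one host directory: `fletch` gets `./lib` read-write at `/opt/tattle-tale/lib`, while `logstash` sees only the `./lib/logstash` subtree read-only. A small sketch of pre-creating that tree on the host (a suggested step, not part of this PR) so Docker does not create the mount points as root-owned directories:

```shell
# Host-side layout implied by the compose bind mounts: fletch writes
# dictionaries under ./lib/logstash, logstash reads the same files.
mkdir -p lib/logstash
touch lib/logstash/.keep   # placeholder so the directory is non-empty
ls -d lib/logstash
```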
9 changes: 9 additions & 0 deletions elasticsearch/Dockerfile
@@ -0,0 +1,9 @@
ARG ELASTIC_VERSION

# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}

COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# Add your elasticsearch plugins setup here
# Example: RUN elasticsearch-plugin install analysis-icu
22 changes: 18 additions & 4 deletions elasticsearch/elasticsearch.yml
@@ -1,4 +1,18 @@
-path.data: /var/lib/elasticsearch
-path.logs: /var/log/elasticsearch
-indices.query.bool.max_clause_count: 8192
-search.max_buckets: 100000
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true

#xpack.license.self_generated.type: basic
#xpack.security.enabled: false
#xpack.security.transport.ssl.enabled: false
#xpack.security.http.ssl.enabled: false
#xpack.monitoring.collection.enabled: true
12 changes: 12 additions & 0 deletions filebeat/Dockerfile
@@ -0,0 +1,12 @@
ARG ELASTIC_VERSION

FROM docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}

RUN mv modules.d/netflow.yml.disabled modules.d/netflow.yml
RUN filebeat modules enable netflow
RUN sed -i "s/enabled: false/enabled: true/g" modules.d/netflow.yml
RUN sed -i "s/netflow_host: localhost/netflow_host: 0.0.0.0/g" modules.d/netflow.yml

COPY filebeat.yml ./

CMD filebeat -e -c ./filebeat.yml --path.home /usr/share/filebeat --path.config /usr/share/filebeat
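The two `sed` edits in this Dockerfile can be exercised outside the image. The sketch below runs them against a minimal stand-in for `modules.d/netflow.yml` (the real module file has more settings; only the two lines the Dockerfile rewrites are reproduced here):

```shell
# Stand-in for the Filebeat netflow module config.
cat > netflow.yml <<'EOF'
- module: netflow
  log:
    enabled: false
    var:
      netflow_host: localhost
      netflow_port: 2055
EOF

# Same substitutions as the RUN steps above: enable the module and
# bind the netflow listener to all interfaces instead of loopback.
sed -i "s/enabled: false/enabled: true/g" netflow.yml
sed -i "s/netflow_host: localhost/netflow_host: 0.0.0.0/g" netflow.yml
cat netflow.yml
```

The `0.0.0.0` bind matters because the UDP traffic arrives through Docker's published port `2055/udp`, not on the container's loopback interface.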
2 changes: 1 addition & 1 deletion filebeat/filebeat.yml
@@ -14,7 +14,7 @@ setup.template.settings:
index.number_of_shards: 1
setup.kibana:
output.logstash:
-  hosts: ["localhost:5044"]
+  hosts: ["logstash:5044"]
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
36 changes: 36 additions & 0 deletions fletch/Dockerfile
@@ -0,0 +1,36 @@
# Use an official Python runtime as a parent image
FROM python:3.6-slim

WORKDIR /opt/tattle-tale

RUN mkdir -pv bin etc lib/logstash var/downloads var/tmp

# Install/update system packages
RUN apt-get update ; apt-get -y install libsmi-dev gcc curl cron
RUN pip install elasticsearch-curator
COPY curator/curator.yml curator/delete_old_indices.yml etc/

# Install any needed packages specified in requirements.txt
COPY requirements.txt ./
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt

# Install our bits & pieces
COPY *.py *.sh bin/
RUN chmod +x bin/*

# RUN curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.13.4-amd64.deb
# RUN dpkg -i filebeat-7.13.4-amd64.deb ; rm filebeat-7.13.4-amd64.deb

# Setup cron to run the update scripts
COPY cron/cron.daily/* /etc/cron.daily/
RUN chmod +x /etc/cron.daily/*

COPY cron/cron.hourly/* /etc/cron.hourly/
RUN chmod +x /etc/cron.hourly/*

RUN mkdir /etc/cron.minutely

COPY cron/crontab /etc/crontab
RUN crontab /etc/crontab

CMD /opt/tattle-tale/bin/entrypoint.sh
@@ -3,4 +3,4 @@
# Script to delete elasticsearch netflow-* indices older than 45 days
# Put this file in /etc/cron.daily/ and make sure it's executable (chmod 755 delete_old_indices.sh)

-/bin/curator /etc/curator/delete_old_indices.yml --config /etc/curator/curator.yml
+/usr/local/bin/curator /opt/tattle-tale/etc/delete_old_indices.yml --config /opt/tattle-tale/etc/curator.yml
@@ -3,4 +3,4 @@
# Run the script to pull down the shadowserver files and then create dictionary files for each
# The logstash filters then use this to filter out events

-/opt/tattle_tale/tattle_shadow.py
+/opt/tattle-tale/bin/tattle_shadow.py
4 changes: 4 additions & 0 deletions fletch/cron/cron.hourly/tattle-tale-status.sh
@@ -0,0 +1,4 @@
#!/bin/sh

datestring=$(date)
echo "TattleTale status at $datestring: RUNNING"
16 changes: 16 additions & 0 deletions fletch/cron/crontab
@@ -0,0 +1,16 @@
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * command to be executed
25 * * * * /etc/cron.hourly/tattle-tale-status.sh > /proc/1/fd/1 2>/proc/1/fd/2
30 6 * * * /etc/cron.daily/delete_old_indices.sh > /proc/1/fd/1 2>/proc/1/fd/2
35 6 * * * /etc/cron.daily/tattle-tale-shadow.sh > /proc/1/fd/1 2>/proc/1/fd/2
# 45 6 * * 7 /etc/cron.weekly/blah.sh > /proc/1/fd/1 2>/proc/1/fd/2
# 50 6 1 * * /etc/cron.monthly/blah.sh > /proc/1/fd/1 2>/proc/1/fd/2
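Each job above redirects to `/proc/1/fd/1` because, inside a container, PID 1's stdout is the stream `docker logs` shows, so cron job output lands in the container log rather than being mailed or dropped. The same `/proc/<pid>/fd/1` mechanism works for any process you own; this sketch targets the current shell's own stdout via `$$` instead of PID 1:

```shell
# Writing to /proc/$$/fd/1 inside the inner shell hits that shell's
# stdout, which the command substitution captures, mimicking how the
# cron jobs' output reaches the container's log stream via PID 1.
msg=$(sh -c 'echo "cron output would appear in docker logs" > "/proc/$$/fd/1"')
echo "$msg"
```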
2 changes: 1 addition & 1 deletion curator/curator.yml → fletch/curator/curator.yml
@@ -1,6 +1,6 @@
client:
hosts:
- 127.0.0.1
- elasticsearch
port: 9200
url_prefix:
use_ssl: False
File renamed without changes.
19 changes: 19 additions & 0 deletions fletch/entrypoint.sh
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

set -eu

echo "TattleTale status at $(date): STARTING..."

echo "Fetching current ShadowServer report..."
/opt/tattle-tale/bin/tattle_shadow.py
echo "Completed processing ShadowServer report (return code $?)."

# TODO: Fetch IP report from DIS
echo "Cron schedule:"
crontab -l

echo "TattleTale status at $(date): STARTED"
echo "Starting cron..."
cron -f -L 8

echo "exited $0"
26 changes: 26 additions & 0 deletions fletch/requirements.txt
@@ -0,0 +1,26 @@
#
# Output generated via "pipdeptree -f > requirements.txt"
#
# Example setup using python3 venv:
#
# python3 -m venv ./venv
# source venv/bin/activate
# pip install -r requirements.txt
#

pipdeptree==2.0.0
pip==21.1.3
requests==2.26.0
certifi==2021.5.30
charset-normalizer==2.0.3
idna==3.2
urllib3==1.26.6
snimpy==1.0.0
cffi==1.14.6
pycparser==2.20
pysnmp==4.4.12
pyasn1==0.4.8
pycryptodomex==3.10.1
pysmi==0.3.4
ply==3.11
setuptools==44.0.0
3 changes: 3 additions & 0 deletions tattle_shadow.py → fletch/tattle_shadow.py
@@ -130,6 +130,9 @@ def copy_files(source_dir, dest_dir):
session = requests.Session()
credentials = {'user': cfg.shadow_user, 'password': cfg.shadow_pass, 'login':'Login'}
response = session.post(cfg.shadow_url, data=credentials)
print(f"Got response downloading ShadowServer report: {response.text} (status code {response.status_code})")
if response.status_code != 200:
    sys.exit(1)
yester_day_month = find_yesterday()
urls_files = find_links(yester_day_month[0], yester_day_month[1], response.text)

File renamed without changes.