- Introduction
- Guiding principles
- Contributing
- Features
- Considerations
- Setup
- Usage / Scripts reference
- Accessing internal services
- Core Services
- Default internal services
- Future / TODO
This project is an attempt to create a turn-key "cluster" that is suitable for small servers with limited RAM. The only OS-level dependencies it needs are docker and docker-compose, and I use it on an AWS Lightsail host with 1GB of RAM.

I wrote this after attempting to get "lightweight" kubernetes environments like K3s or KIND going, and finding that the resource overhead was simply too high for my requirements (512MB minimum, 1GB recommended).
- Every application runs in a docker container
- All configuration and data is stored under a single directory structure, to make backups simple
- Portable across different cloud providers, minimal host prerequisites
- Provides the software that I typically want available to me when experimenting with personal projects (eg: npm registry, postgres instance)
I welcome feedback and improvements, especially around any security concerns. However, please note that this project is pretty particular to my personal preferences and needs - I hope it might be of use to others, but I might not accept PRs that don't align with my requirements.
Please feel free to fork and customize to your heart's desire though :)
- Public ingress on port 80 using haproxy
- Internal/private service ingress on a configured port and IP address using haproxy
  - I recommend using wireguard or similar to bind this to a private IP address
- Private Docker registry
- Private NPM registry
- Metrics collected by prometheus, with grafana for dashboards
- Letsencrypt certbot for automatic SSL provisioning and renewal
- "Watchdog" script suitable for running on demand, or as a cron job, to reconcile running applications with the configuration on the filesystem
- Helper scripts for updating application configuration with new docker tags to deploy
- Letsencrypt will ban domains that make invalid requests to its production environment, so it's worthwhile testing this part of things using their staging environment before running against production
- This is held together with string - a collection of bash scripts that may or may not be portable. It has been tested on Fedora 34 and AWS Linux 2
- Whilst every effort is made to limit RAM consumption, you may still want to add swap to avoid the OOM killer. Influxdb in particular can consume a fair chunk of RAM. Instructions here: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/
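For reference, adding a swap file on most Linux hosts looks something like the following (a sketch; adjust the size to your needs):

```bash
# Create and enable a 1GB swap file (run as root)
fallocate -l 1G /swapfile        # allocate the file
chmod 600 /swapfile              # restrict permissions, required by swapon
mkswap /swapfile                 # format it as swap
swapon /swapfile                 # enable it immediately
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab  # persist across reboots
```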
- Clone the repo to somewhere on the host, or otherwise place the contents on the server
- Install docker/docker-compose (./init/install-dependencies.sh, or manually)
- Bootstrap the data/configuration structure using ./init/create-empty-configuration-structure.sh /path/you/want/config-and-data-to-be-stored
- Configure the generated files
  - Update config.sh with the correct IP addresses, ports etc
  - Add, remove, or customise applications in applications-internal/applications-public
- Run ./start.sh
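Putting that together, a first-time setup might look like this (a sketch; the repository URL and paths are placeholders for your own):

```bash
# Clone the project (placeholder URL) and install docker/docker-compose
git clone https://example.com/your-fork-of-this-repo.git cluster && cd cluster
./init/install-dependencies.sh

# Bootstrap the configuration/data directory, then edit the generated files
./init/create-empty-configuration-structure.sh /srv/cluster-data
$EDITOR config.sh                      # set IP addresses, ports, etc.
$EDITOR applications-internal/*.yaml   # adjust or remove default services

# Bring everything up
./start.sh
```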
There are a number of bash scripts in this project. I give a brief overview below, but you should probably read through them before attempting to use this in production.
- Contains the path to the configuration directory. Generated by ./init/create-empty-configuration-structure.sh
- Generates the haproxy configuration
- Creates the docker networks
- Starts the services defined in docker-compose.yaml
- Starts applications using applications/watchdog-up.sh
- Stops the services defined in docker-compose.yaml
- Stops applications using applications/watchdog-down.sh
- Cleans up unused networks
- Called automatically by start.sh and the SSL certificate scripts
- Re-generates the haproxy.cfg files, and sends a signal to the containers to reload their config
- Can be called manually if changes have been made to the template or application yaml files
- Suitable for calling as a cron job, or manually after making changes to the application yaml configuration files; attempts to reconcile running containers with the configuration state
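For the cron case, an entry along these lines would work (a sketch; the install path /opt/cluster is a placeholder):

```
# Run the watchdog every 5 minutes to reconcile containers with configuration
*/5 * * * * /opt/cluster/applications/watchdog-up.sh >> /var/log/watchdog.log 2>&1
```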
TODO: write documentation
There are two main options for configuring this securely:

- Bind to a private IP address and access via VPN
- Bind to localhost / a firewalled port and access via SSH tunnel

Note: these services are only exposed over http, but I might add support for exposing them over https as well, since docker in particular gets a bit naggy about "insecure" registries. It is assumed that you will only make them accessible via secure channels, like the examples below.
Prerequisites: wireguard-tools installed, and either Linux kernel >= 5.6 or wireguard installed as a kernel module. If you're not using Linux on both ends then you'll need to consult your platform's documentation.
- On both host and client, generate public/private key pairs (you'll want to delete these files at the end):

```bash
wg genkey | tee privatekey | wg pubkey > publickey
```
- Create the server config, /etc/wireguard/wg0.conf:

```
[Interface]
Address = 10.12.0.1/24
PrivateKey = <SERVER_PRIVATE_KEY>
ListenPort = 51820

[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
AllowedIPs = 10.12.0.2/32 # the client's tunnel address
```
- Open UDP port 51820 on your server's firewall
- Bring the interface up on the server:

```bash
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0 # optional: bring the interface up at boot
```
- Create the client config, /etc/wireguard/wg0.conf:

```
[Interface]
Address = 10.12.0.2/24
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
Endpoint = <SERVER_PUBLIC_IP>:51820
AllowedIPs = 10.12.0.0/24 # route the whole tunnel subnet via the server
PersistentKeepalive = 25
```
- Bring the interface up on the client and test:

```bash
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0 # optional: bring the interface up at boot
ping 10.12.0.1
```
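If the ping fails, wg show on either end will tell you whether a handshake has actually completed:

```bash
sudo wg show wg0
```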
Assumptions: ingress configured to 127.0.0.1:8080

Create the tunnel using ssh, eg:

```bash
ssh [email protected] -L 127.0.0.1:8080:127.0.0.1:8080
```

Then the services are available via port 8080, with the domains configured in the haproxy-internal/applications.js file. The easiest way to make this available in your browser is to add entries to your hosts file, or use something like dnsmasq. Eg: /etc/hosts

```
127.0.0.1 grafana.internal.example.com
127.0.0.1 npm.internal.example.com
127.0.0.1 docker.internal.example.com
```
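If you'd rather not maintain individual hosts entries, a single dnsmasq rule can cover every internal domain (a sketch, assuming your internal services all share the internal.example.com suffix):

```
# /etc/dnsmasq.d/internal.conf - resolve all internal.example.com names to the tunnel
address=/internal.example.com/127.0.0.1
```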
There are a number of "core" services managed by docker-compose
. This is distinct from application services
that you want to deploy and make available.
There are two instances of haproxy in use, one for public ingress, and one for private/internal ingress.
The configuration is generated from the docker-compose yaml files in the applications-public
/ applications-internal
directories. Specifically:
- A custom top-level property x-external-host-names is used to know which vhosts to proxy to that application.
- A service named application is expected to exist, and the hostname of this is used to know which container to proxy to.
- A custom top-level property x-container-port is used to know which port to proxy to, defaulting to 80.

If there are no external host names declared, or no application service is found, the file is skipped and not included in the proxy. A sketch of what one of these files might look like is shown below.
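For illustration, here is a minimal, hypothetical application file; the image, hostname, and domain are placeholders, and only x-external-host-names, x-container-port, and the application service name carry meaning for the generator:

```yaml
# applications-internal/whoami.yaml (hypothetical example)
x-external-host-names:
  - whoami.internal.example.com
x-container-port: 8000

services:
  application:
    image: traefik/whoami   # placeholder image
    hostname: whoami        # the generator proxies to this hostname
```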
Any customizations you need to make should be made to the template rather than the generated configuration, to avoid them being overwritten.
The public proxy exposes stats using a unix socket, mounted to haproxy-public-stats and read from by telegraf.
For the public ingress, SSL certificates are read from haproxy-public-ssl, noting that haproxy requires the public and private portions to be stored in the same file. The originals are managed by certbot and stored in letsencrypt/etc.

This folder is where the concatenated SSL certificate + private keys are stored, and loaded by the public-facing haproxy instance.
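The concatenation itself is just the certificate chain followed by the key, along these lines (a sketch; example.com and the exact letsencrypt paths are placeholders for your own layout):

```bash
# Combine certbot's chain and key into the single pem format haproxy expects
cat letsencrypt/etc/live/example.com/fullchain.pem \
    letsencrypt/etc/live/example.com/privkey.pem \
    > haproxy-public-ssl/example.com.pem
```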
Due to an annoying "feature" of haproxy where it will refuse to start if this directory is empty, which it likely is when first bootstrapping your server, there is an invalid.pem file which contains a self-signed certificate for the domain invalid. This avoids the chicken-and-egg situation where you would otherwise need to first start with SSL disabled, then restart after certificates have been populated. As per RFC 6761 section 6.4, invalid. is guaranteed to never exist, and once you have your own certificates in this folder, it is safe to delete this placeholder certificate.
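If you ever need to regenerate that placeholder, something like the following would do it (a sketch; the key size and lifetime are arbitrary):

```bash
# Self-signed certificate for the reserved "invalid." domain, concatenated for haproxy
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=invalid." -keyout key.pem -out cert.pem
cat cert.pem key.pem > haproxy-public-ssl/invalid.pem
rm key.pem cert.pem
```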
The configuration template defines some default internal services, including:

- Prometheus / Node Exporter / Postgres Exporter / cadvisor (metrics collection)
- Grafana (dashboards / monitoring)
- Private NPM registry (verdaccio)
- Private Docker registry (registry:2)
- Postgres

You'll need to modify the external hostnames in the yaml files to suit your environment. To disable a service, simply delete its yaml file from applications-internal.
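To find every hostname that needs updating, something like this works (a sketch):

```bash
# List all the external host names declared across the default services
grep -rn "x-external-host-names" -A 2 applications-internal/ applications-public/
```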
At first start you will need to configure grafana with a connection to the prometheus datasource, and create / import some dashboards.

Datasource configuration:

- Type: prometheus
- URL: http://monitoring_prometheus:9090
- Authentication: disabled by default
I recommend importing these dashboards to get started:
- System: https://grafana.com/grafana/dashboards/1860-node-exporter-full/
- Docker: https://grafana.com/grafana/dashboards/16527-docker-monitoring/
- Postgres: https://grafana.com/grafana/dashboards/14114-postgres-overview/
- HAProxy: https://grafana.com/grafana/dashboards/12693-haproxy-2-full/
You'll need to create a user for the exporter to use, and then configure its credentials in prometheus.yaml:
```sql
create user postgres_exporter with login password '<password>';
grant pg_monitor to postgres_exporter;
grant connect on database postgres to postgres_exporter;
```
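One way to run that against the bundled postgres instance (a sketch; the container name is a placeholder for whatever your compose project names it):

```bash
# Pipe the grants into psql inside the running postgres container
docker exec -i postgres-container psql -U postgres <<'SQL'
create user postgres_exporter with login password '<password>';
grant pg_monitor to postgres_exporter;
grant connect on database postgres to postgres_exporter;
SQL
```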
TODO: write documentation
TODO: write documentation
- Get rid of the certificate concatenation; haproxy no longer requires this since version 2.2 🥳
- Find a way to allow issuing of SSL certs for private/internal services?
  - Would probably have to go the DNS TXT record route, but AFAIK there is no standardised API for this that can reasonably be expected to work across providers 😢
- Rework the data directory structure to be split by configuration / data, eg: /data/conf/ and /data/data/