The challenge most of us face with remotely accessing our home networks is that our routers usually have a dynamically-allocated IP address on the public (WAN) interface.
+From time to time the IP address that your ISP assigns changes and it's difficult to keep up. Fortunately, there is a solution: Dynamic DNS. The section below shows you how to set up an easy-to-remember domain name that follows your public IP address no matter when it changes.
+Secondly, how do you get into your home network? Your router has a firewall that is designed to keep the rest of the internet out of your network to protect you. The solution to that is a Virtual Private Network (VPN) or "tunnel".
+There are two parts to a Dynamic DNS service:
+The first part is fairly simple and there are quite a few Dynamic DNS service providers including:
+++You can find more service providers by Googling "Dynamic DNS service".
+
Some router vendors also provide their own built-in Dynamic DNS capabilities for registered customers so it's a good idea to check your router's capabilities before you plough ahead.
+The "something" on your side of the network propagating WAN IP address changes can be either:
+If you have the choice, your router is to be preferred. That's because your router is usually the only device in your network that actually knows when its WAN IP address changes. A Dynamic DNS client running on your router will propagate changes immediately and will only transmit updates when necessary. More importantly, it will persist through network interruptions or Dynamic DNS service provider outages until it receives an acknowledgement that the update has been accepted.
+Nevertheless, your router may not support the Dynamic DNS service provider you wish to use, or may come with constraints that you find unsatisfactory so any behind-the-router technique is always a viable option, providing you understand its limitations.
+A behind-the-router technique usually relies on sending updates according to a schedule. An example is a cron
job that runs every five minutes. That means any router WAN IP address changes won't be propagated until the next scheduled update. In the event of network interruptions or service provider outages, it may take close to ten minutes before everything is back in sync. Moreover, given that WAN IP address changes are infrequent events, most scheduled updates will be sending information unnecessarily.
The recommended and easiest solution is to install the DuckDNS Docker container from the menu. It includes the cron service, and its logs are handled by Docker.
+For configuration see Containers/Duck DNS.
+Note
This is a recently added container; please don't hesitate to report any faults to Discord or as GitHub issues.
+Info
+This method will soon be deprecated in favor of the DuckDNS container.
+IOTstack provides a solution for DuckDNS. The best approach to running it is:
+$ mkdir -p ~/.local/bin
+$ cp ~/IOTstack/duck/duck.sh ~/.local/bin
+
++The reason for recommending that you make a copy of
+duck.sh
is because the "original" is under Git control. If you change the "original", Git will keep telling you that the file has changed and it may block incoming updates from GitHub.
Then edit ~/.local/bin/duck.sh
to add your DuckDNS domain name(s) and token:
DOMAINS="YOURS.duckdns.org"
+DUCKDNS_TOKEN="YOUR_DUCKDNS_TOKEN"
+
For example:
+DOMAINS="downunda.duckdns.org"
+DUCKDNS_TOKEN="8a38f294-b5b6-4249-b244-936e997c6c02"
+
Note:
+The DOMAINS=
variable can be simplified to just "YOURS", with the .duckdns.org
portion implied, as in:
DOMAINS="downunda"
+
Once your credentials are in place, test the result by running:
+$ ~/.local/bin/duck.sh
+ddd, dd mmm yyyy hh:mm:ss ±zzzz - updating DuckDNS
+OK
+
The timestamp is produced by the duck.sh
script. The expected responses from the DuckDNS service are:
Check your work if you get "KO" or any other errors.
+Next, assuming dig
is installed on your Raspberry Pi (sudo apt install dnsutils
), you can test propagation by sending a directed query to a DuckDNS name server. For example, assuming the domain name you registered was downunda.duckdns.org
, you would query like this:
$ dig @ns1.duckdns.org downunda.duckdns.org +short
+
The expected result is the IP address of your router's WAN interface. It is a good idea to confirm that it is the same as you get from whatismyipaddress.com.
+A null result indicates failure so check your work.
+Remember, the Domain Name System is a distributed database. It takes time for changes to propagate. The response you get from directing a query to ns1.duckdns.org may not be the same as the response you get from any other DNS server. You often have to wait until cached records expire and a recursive query reaches the authoritative DuckDNS name-servers.
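If you want to check whether your own resolver has caught up, you can compare the authoritative answer with the answer returned by your normal DNS path (re-using the downunda.duckdns.org example; substitute your own domain name):
$ dig @ns1.duckdns.org downunda.duckdns.org +short
$ dig downunda.duckdns.org +short
When both queries return the same IP address, propagation has reached your resolver.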
+The recommended arrangement for keeping your Dynamic DNS service up-to-date is to invoke duck.sh
from cron
at five minute intervals.
If you are new to cron
, see these guides for more information about setting up and editing your crontab
:
A typical crontab
will look like this:
SHELL=/bin/bash
+HOME=/home/pi
+PATH=/home/pi/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+
+*/5 * * * * duck.sh >/dev/null 2>&1
+
The first three lines construct the runtime environment correctly and should be at the start of any crontab
.
The last line means "run duck.sh every five minutes". See crontab.guru if you want to understand the syntax of the last line.
+When launched in the background by cron
, the script supplied with IOTstack adds a random delay of up to one minute to try to reduce the "hammering effect" of a large number of users updating DuckDNS simultaneously.
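If you are curious, a delay like that is typically implemented with something along the following lines; this is only a sketch of the general idea, so check the duck.sh supplied with IOTstack for the exact mechanism:
sleep $((RANDOM % 60))   # wait a random 0-59 seconds before contacting DuckDNS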
Standard output and standard error are redirected to /dev/null
which is appropriate in this instance. When DuckDNS is working correctly (which is most of the time), the only output from the curl
command is "OK". Logging that every five minutes would add wear and tear to SD cards for no real benefit.
If you suspect DuckDNS is misbehaving, you can run the duck.sh
command from a terminal session, in which case you will see all the curl
output in the terminal window.
If you wish to keep a log of duck.sh
activity, the following will get the job done:
Make a directory to hold log files:
+$ mkdir -p ~/Logs
+
Edit the last line of the crontab
like this:
*/5 * * * * duck.sh >>~/Logs/duck.log 2>&1
+
Remember to prune the log from time to time. The generally-accepted approach is:
+$ cat /dev/null >~/Logs/duck.log
+
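If you would rather automate the pruning, an extra crontab entry along these lines (an optional suggestion, not part of the standard IOTstack setup) truncates the log on the first day of each month:
0 0 1 * * cat /dev/null >/home/pi/Logs/duck.log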
WireGuard is supplied as part of IOTstack. See WireGuard documentation.
+pimylifeup.com has an excellent tutorial on how to install PiVPN
+In point 17 and 18 they mention using noip for their dynamic DNS. Here you can use the DuckDNS address if you created one.
Don't forget that you need to open port 1194 on your firewall. Most people won't be able to VPN from inside their network, so download an OpenVPN client for your mobile phone and try to connect over mobile data. (More info.)
+Once you activate your VPN (from your phone/laptop/work computer) you will effectively be on your home network and you can access your devices as if you were on the wifi at home.
I personally use the VPN any time I'm on public WiFi; it keeps all your traffic secure.
+https://www.zerotier.com/
+Zerotier is an alternative to PiVPN that doesn't require port forwarding on your router. It does however require registering for their free tier service here.
Kevin Zhang has written a how-to guide here. Just note that the install link is outdated and should be:
+$ curl -s 'https://raw.githubusercontent.com/zerotier/ZeroTierOne/master/doc/contact%40zerotier.com.gpg' | gpg --import && \
+if z=$(curl -s 'https://install.zerotier.com/' | gpg); then echo "$z" | sudo bash; fi
+
This page explains how to use the backup and restore functionality of IOTstack.
+The backup command can be executed from IOTstack's menu, or from a cronjob.
+To ensure that all your data is saved correctly, the stack should be brought down. This is mainly due to databases potentially being in a state that could cause data loss.
+There are 2 ways to run backups:
+Backup and Restore
> Run backup
bash ./scripts/backup.sh
The command that's run from the command line can also be executed from a cronjob:
+0 2 * * * cd /home/pi/IOTstack && /bin/bash ./scripts/backup.sh
The working directory of bash must be IOTstack's directory, to ensure that the script can find the relative paths of the files it's meant to back up. In the example above, it's assumed that IOTstack is inside the pi user's home directory.
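If you would like a record of each scheduled run, you can redirect the script's output to a log file. The log path below is only a suggestion and assumes the ~/Logs directory from the DuckDNS example exists:
0 2 * * * cd /home/pi/IOTstack && /bin/bash ./scripts/backup.sh >>/home/pi/Logs/iotstack_backup.log 2>&1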
./scripts/backup.sh {TYPE=3} {USER=$(whoami)}
+
Backups:
+./scripts/backup.sh
./scripts/backup.sh 3
Either of these will run both backups.
+./scripts/backup.sh 2
This will only produce a backup in the rolling folder. It will be called 'backup_XX.tar.gz', where XX is the current day of the week (as an integer).
+sudo bash ./scripts/backup.sh 2 pi
This will only produce a backup in the rolling folder and change all the permissions to the 'pi' user.
+There are 2 ways to run a restore:
+Backup and Restore
> Restore from backup
bash ./scripts/restore.sh
Important: The restore script assumes that the IOTstack directory is fresh, as if it was just cloned. If it is not fresh, errors may occur, or your data may not correctly be restored even if no errors are apparent.
+Note: It is suggested that you test that your backups can be restored after initially setting up, and anytime you add or remove a service. Major updates to services can also break backups.
+./scripts/restore.sh {FILENAME=backup.tar.gz} {noask}
+
./backups/
directory, or a subfolder in it. That means it should be moved from ./backups/backup
to ./backups/
, or that you need to specify the backup
portion of the directory (see examples)The script checks if there are any pre and post back up hooks to execute commands. Both of these files will be included in the backup, and have also been added to the .gitignore
file, so that they will not be touched when IOTstack updates.
The prebackup hook script is executed before any compression happens and before anything is written to the temporary backup manifest file (./.tmp/backup-list_{{NAME}}.txt
). It can be used to prepare any services (such as databases that IOTstack isn't aware of) for backing up.
To use it, simply create a ./pre_backup.sh
file in IOTstack's main directory. It will be executed next time a backup runs.
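As an illustration only (the container name and dump location below are assumptions, not part of IOTstack), a pre-backup hook might dump a database that IOTstack doesn't manage into ./volumes so that the dump is swept up by the backup:
#!/usr/bin/env bash
# hypothetical ./pre_backup.sh
mkdir -p ./volumes/my_external_db
docker exec my_external_db pg_dumpall -U postgres > ./volumes/my_external_db/dump.sql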
The postbackup hook script is executed after the tarball file has been written to disk, and before the final backup log information is written to disk.
To use it, simply create a ./post_backup.sh
file in IOTstack's main directory. It will be executed after the next time a backup runs.
The post restore hook script is executed after all files have been extracted and written to disk. It can be used to apply permissions that your custom services may require.
To use it, simply create a ./post_restore.sh
file in IOTstack's main directory. It will be executed after a restore happens.
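As an illustration only, a minimal post-restore hook might re-apply the ownership a service expects on its persistent store (Mosquitto runs as user 1883 in the example service definitions elsewhere on this page):
#!/usr/bin/env bash
# hypothetical ./post_restore.sh
sudo chown -R 1883:1883 ./volumes/mosquitto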
This section explains how to backup your files with 3rd party software.
+Coming soon.
+Coming soon.
+Coming soon.
+Coming soon.
+Coming soon.
Each time you build the stack from the menu, the Docker Compose file
+docker-compose.yml
is recreated, losing any custom changes you've made. There
+are different ways of dealing with this:
docker-compose.yml
, in case you overwrite it by mistake or
+ habit from the menu.docker-compose.override.yml
. This limits you to changing values and
+ appending to lists already present in your docker-compose.yml, but it's
+ handy as changes are immediately picked up by docker-compose commands. To
+ see the resulting final config run docker-compose config
.compose-override.yml
with the menu-generated stack
+ into docker-compose.yml
. This can be used to add even complete new
+ services. See below for details.~/customStack/docker-compose.yml
. This composition can then
+ be independently managed from that folder: cd ~/customStack
and use
+ docker-compose
commands as normal. The best override is the one you don't
+ have to make.You can specify modifcations to the docker-compose.yml
file, including your own networks and custom containers/services.
Create a file called compose-override.yml
in the main directory, and place your modifications into it. These changes will be merged into the docker-compose.yml
file next time you run the build script.
The compose-override.yml
file has been added to the .gitignore
file, so it shouldn't be touched when upgrading IOTstack. It has been added to the backup script, and so will be included when you back up and restore IOTstack. Always test your backups though! New versions of IOTstack may break previous builds.
tmp
directory.compose-override.yml
exists:3
docker-compose.yml
.yaml_merge.py
script, merge both the compose-override.yml
and the temporary docker compose file together; Using the temporary file as the default values and interating through each level of the yaml structure, check to see if the compose-override.yml
has a value set.docker-compose.yml
.If you specify an override for a service, and then rebuild the docker-compose.yml
file, but deselect the service from the list, then the YAML merging will still produce that override.
For example, lets say NodeRed was selected to have have the following override specified in compose-override.yml
:
+
services:
+ nodered:
+ restart: always
+
When rebuilding from the menu, ensure that the NodeRed service remains selected, because if it is no longer included, the only values showing in the final docker-compose.yml
file for NodeRed will be the restart
key and its value. Docker Compose will error with the following message:
Service nodered has neither an image nor a build context specified. At least one must be provided.
When attempting to bring the services up with docker-compose up -d
.
Either remove the override for NodeRed in compose-override.yml
and rebuild the stack, or ensure that NodeRed is built with the stack to fix this.
Let's assume you put the following into the compose-override.yml
file:
+
services:
+ mosquitto:
+ ports:
+ - 1996:1996
+ - 9001:9001
+
Normally the mosquitto service would be built like this inside the docker-compose.yml
file:
+
version: '3.6'
+services:
+ mosquitto:
+ container_name: mosquitto
+ image: eclipse-mosquitto
+ restart: unless-stopped
+ user: "1883"
+ ports:
+ - 1883:1883
+ - 9001:9001
+ volumes:
+ - ./volumes/mosquitto/data:/mosquitto/data
+ - ./volumes/mosquitto/log:/mosquitto/log
+ - ./volumes/mosquitto/pwfile:/mosquitto/pwfile
+ - ./services/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
+ - ./services/mosquitto/filter.acl:/mosquitto/config/filter.acl
+
Take special note of the ports list.
+If you run the build script with the compose-override.yml
file in place, and open up the final docker-compose.yml
file, you will notice that the port list has been replaced with the ones you specified in the compose-override.yml
file.
+
version: '3.6'
+services:
+ mosquitto:
+ container_name: mosquitto
+ image: eclipse-mosquitto
+ restart: unless-stopped
+ user: "1883"
+ ports:
+ - 1996:1996
+ - 9001:9001
+ volumes:
+ - ./volumes/mosquitto/data:/mosquitto/data
+ - ./volumes/mosquitto/log:/mosquitto/log
+ - ./volumes/mosquitto/pwfile:/mosquitto/pwfile
+ - ./services/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
+ - ./services/mosquitto/filter.acl:/mosquitto/config/filter.acl
+
Do note that it will replace the entire list. If you were to specify:
services:
+ mosquitto:
+ ports:
+ - 1996:1996
+
Then the final output will be: +
version: '3.6'
+services:
+ mosquitto:
+ container_name: mosquitto
+ image: eclipse-mosquitto
+ restart: unless-stopped
+ user: "1883"
+ ports:
+ - 1996:1996
+ volumes:
+ - ./volumes/mosquitto/data:/mosquitto/data
+ - ./volumes/mosquitto/log:/mosquitto/log
+ - ./volumes/mosquitto/pwfile:/mosquitto/pwfile
+ - ./services/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
+ - ./services/mosquitto/filter.acl:/mosquitto/config/filter.acl
+
If you need or prefer to use *.env files for docker-compose environment variables in a separate file instead of using overrides, you can do so like this:
+services:
+ grafana:
+ env_file:
+ - ./services/grafana/grafana.env
+ environment:
+
This will remove the default environment variables set in the template, and tell docker-compose to use the variables specified in your file. It is not mandatory that the .env file be placed in the service's directory under ./services, but it is strongly suggested. Keep in mind that the PostBuild Script functionality can automatically copy your .env files into their directories after a successful build, if you need it to.
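As an illustration only (these particular variables are not taken from the IOTstack template), such a ./services/grafana/grafana.env file is simply a list of KEY=value lines:
# hypothetical ./services/grafana/grafana.env
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=changeme
GF_USERS_ALLOW_SIGN_UP=false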
Custom services can be added in a similar way to overriding default settings for standard services. Let's add a Minecraft server and an rcon server to IOTstack.
+Firstly, put the following into compose-override.yml
:
+
services:
+ mosquitto:
+ ports:
+ - 1996:1996
+ - 9001:9001
+ minecraft:
+ image: itzg/minecraft-server
+ ports:
+ - "25565:25565"
+ volumes:
+ - "./volumes/minecraft:/data"
+ environment:
+ EULA: "TRUE"
+ TYPE: "PAPER"
+ ENABLE_RCON: "true"
+ RCON_PASSWORD: "PASSWORD"
+ RCON_PORT: 28016
+ VERSION: "1.15.2"
+ REPLACE_ENV_VARIABLES: "TRUE"
+ ENV_VARIABLE_PREFIX: "CFG_"
+ CFG_DB_HOST: "http://localhost:3306"
+ CFG_DB_NAME: "IOTstack Minecraft"
+ CFG_DB_PASSWORD_FILE: "/run/secrets/db_password"
+ restart: unless-stopped
+ rcon:
+ image: itzg/rcon
+ ports:
+ - "4326:4326"
+ - "4327:4327"
+ volumes:
+ - "./volumes/rcon_data:/opt/rcon-web-admin/db"
+secrets:
+ db_password:
+ file: ./db_password
+
Then create the service directory that the new instance will use to store persistent data:
+mkdir -p ./volumes/minecraft
and
+mkdir -p ./volumes/rcon_data
Obviously you will need to give correct folder names depending on the volumes
you specify for your custom services. If your new service doesn't require persistent storage, then you can skip this step.
Then simply run the ./menu.sh
command, and rebuild the stack with whatever services you had before.
Using the Mosquitto example above, the final docker-compose.yml
file will look like:
version: '3.6'
+services:
+ mosquitto:
+ ports:
+ - 1996:1996
+ - 9001:9001
+ container_name: mosquitto
+ image: eclipse-mosquitto
+ restart: unless-stopped
+ user: '1883'
+ volumes:
+ - ./volumes/mosquitto/data:/mosquitto/data
+ - ./volumes/mosquitto/log:/mosquitto/log
+ - ./services/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
+ - ./services/mosquitto/filter.acl:/mosquitto/config/filter.acl
+ minecraft:
+ image: itzg/minecraft-server
+ ports:
+ - 25565:25565
+ volumes:
+ - ./volumes/minecraft:/data
+ environment:
+ EULA: 'TRUE'
+ TYPE: PAPER
+ ENABLE_RCON: 'true'
+ RCON_PASSWORD: PASSWORD
+ RCON_PORT: 28016
+ VERSION: 1.15.2
+ REPLACE_ENV_VARIABLES: 'TRUE'
+ ENV_VARIABLE_PREFIX: CFG_
+ CFG_DB_HOST: http://localhost:3306
+ CFG_DB_NAME: IOTstack Minecraft
+ CFG_DB_PASSWORD_FILE: /run/secrets/db_password
+ restart: unless-stopped
+ rcon:
+ image: itzg/rcon
+ ports:
+ - 4326:4326
+ - 4327:4327
+ volumes:
+ - ./volumes/rcon_data:/opt/rcon-web-admin/db
+secrets:
+ db_password:
+ file: ./db_password
+
Do note that the order of the YAML keys is not guaranteed.
Here you can find a list of the default mode and ports used by each service found in the .templates directory.
+This list can be generated by running the default_ports_md_generator.sh script.
+Service Name | +Mode | +Port(s) External:Internal |
+
---|---|---|
adguardhome | +non-host | +53:53 8089:8089 3001:3000 |
+
adminer | +non-host | +9080:8080 |
+
blynk_server | +non-host | +8180:8080 8440:8440 9443:9443 |
+
chronograf | +non-host | +8888:8888 |
+
dashmachine | +non-host | +5000:5000 |
+
deconz | +non-host | +8090:80 443:443 5901:5900 |
+
diyhue | +non-host | +8070:80 1900:1900 1982:1982 2100:2100 |
+
domoticz | +non-host | +8083:8080 6144:6144 1443:1443 |
+
dozzle | +non-host | +8889:8080 |
+
duckdns | +host | ++ |
espruinohub | +host | ++ |
gitea | +non-host | +7920:3000 2222:22 |
+
grafana | +non-host | +3000:3000 |
+
heimdall | +non-host | +8880:80 8883:443 |
+
home_assistant | +host | ++ |
homebridge | +host | ++ |
homer | +non-host | +8881:8080 |
+
influxdb | +non-host | +8086:8086 |
+
influxdb2 | +non-host | +8087:8086 |
+
kapacitor | +non-host | +9092:9092 |
+
mariadb | +non-host | +3306:3306 |
+
mosquitto | +non-host | +1883:1883 |
+
"motioneye" | +non-host | +8765:8765 8081:8081 |
+
"n8n" | +non-host | +5678:5678 |
+
nextcloud | +non-host | +9321:80 |
+
nodered | +non-host | +1880:1880 |
+
octoprint | +non-host | +9980:80 |
+
openhab | +host | ++ |
pihole | +non-host | +8089:80 53:53 67:67 |
+
plex | +host | ++ |
portainer-ce | +non-host | +8000:8000 9000:9000 |
+
portainer-agent | +non-host | +9001:9001 |
+
postgres | +non-host | +5432:5432 |
+
prometheus-cadvisor | +non-host | +8082:8080 |
+
prometheus-nodeexporter | +non-host | ++ |
prometheus | +non-host | +9090:9090 |
+
python | +non-host | ++ |
qbittorrent | +non-host | +6881:6881 15080:15080 1080:1080 |
+
ring-mqtt | +non-host | +8554:8554 55123:55123 |
+
rtl_433 | +non-host | ++ |
scrypted | +host | +10443:10443 |
+
syncthing | +host | ++ |
tasmoadmin | +non-host | +8088:80 |
+
telegraf | +non-host | +8092:8092 8094:8094 8125:8125 |
+
timescaledb | +non-host | ++ |
transmission | +non-host | +9091:9091 51413:51413 |
+
webthingsio_gateway | +host | ++ |
wireguard | +non-host | +51820:51820 |
+
zerotier | +host | ++ |
zigbee2mqtt | +non-host | +8080:8080 |
+
zigbee2mqtt_assistant | +non-host | +8880:80 |
+
When Docker starts a container, it executes its entrypoint command. Any +output produced by this command is logged by Docker. By default Docker stores +logs internally together with other data associated to the container image.
+This has the effect that when recreating or updating a container, logs shown by
+docker-compose logs
won't show anything associated with the previous
+instance. Use docker system prune
to remove old instances and free up disk
+space. Keeping logs only for the latest instance is helpful when testing, but
+may not be desirable for production.
By default there is no limit on the log size. Surprisingly, when using an SD-card this is exactly what you want. If a runaway container floods the log with output, writing will stop when the disk becomes full. Without a mechanism to prevent such excessive writing, the SD-card would keep being written to until the flash hardware program-erase cycle limit is reached, after which it is permanently broken.
+When using a quality SSD-drive, potential flash-wear isn't usually a +concern. Then you can enable log-rotation by either:
+Configuring Docker to do it for you automatically. Edit your
+ docker-compose.yml
and add a top-level x-logging and a logging: to
+ each service definition. The Docker compose reference documentation has
+ a good example.
Configuring Docker to log to the host system's journald.
+ps. if /etc/docker/daemon.json
doesn't exist, just create it.
Bash aliases for stopping and starting the stack and other common operations
+are in the file .bash_aliases
. To use them immediately and in future logins,
+run in a console:
$ source ~/IOTstack/.bash_aliases
+$ echo "source ~/IOTstack/.bash_aliases" >> ~/.profile
+
These commands no longer need to be executed from the IOTstack directory; they can be executed from any directory.
+IOTSTACK_HOME="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+alias iotstack_up="cd "$IOTSTACK_HOME" && docker-compose up -d --remove-orphans"
+alias iotstack_down="cd "$IOTSTACK_HOME" && docker-compose down --remove-orphans"
+alias iotstack_start="cd "$IOTSTACK_HOME" && docker-compose start"
+alias iotstack_stop="cd "$IOTSTACK_HOME" && docker-compose stop"
+alias iotstack_pull="cd "$IOTSTACK_HOME" && docker-compose pull"
+alias iotstack_build="cd "$IOTSTACK_HOME" && docker-compose build --pull --no-cache"
+alias iotstack_update_docker_images='f(){ iotstack_pull "$@" && iotstack_build "$@" && iotstack_up --build "$@"; }; f'
+
You can now type iotstack_up
. The aliases also accept additional parameters,
+e.g. iotstack_stop portainer
.
The iotstack_update_docker_images
alias will update docker images to newest
+released images, build and recreate containers. Do note that using this will
result in broken containers from time to time, as upstream may release faulty
+docker images. Have proper backups, or be prepared to manually pin a previous
+release build by editing docker-compose.yml
.
The menu.sh script is used to create or modify the docker-compose.yml file. This file defines how all containers added to the stack are configured.
One of the drawbacks of an SD card is that it has a limited lifespan. One way to reduce the load on the SD card is to move your log files to RAM. log2ram is a convenient tool for setting this up simply. It can be installed from the miscellaneous menu.
+This only affects logs written to /var/log, and won't have any effect on Docker +logs or logs stored inside containers.
+This a great utility to easily upload data from your PI to the cloud. The +MagPi has an +excellent explanation of the process of setting up the Dropbox API. +Dropbox-Uploader is used in the backup script.
+See Backing up and restoring IOTstack
+RTL_433 can be installed from the "Native install sections"
+This video demonstrates +how to use RTL_433
+The installer will install any dependencies. If ~/rpieasy
exists it will
+update the project to its latest, if not it will clone the project
RPIEasy can be run by sudo ~/rpieasy/RPIEasy.py
To have RPIEasy start on boot in the webui under hardware look for "RPIEasy +autostart at boot"
+RPIEasy will select its ports from the first available one in the list +(80,8080,8008). If you run Hass.io then there will be a conflict so check the +next available port
+The build script creates the ./services directory and populates it from the +template file in .templates . The script then appends the text withing each +service.yml file to the docker-compose.yml . When the stack is rebuilt the menu +does not overwrite the service folder if it already exists. Make sure to sync +any alterations you have made to the docker-compose.yml file with the +respective service.yml so that on your next build your changes pull through.
+The .gitignore file is setup such that if you do a git pull origin master
it
+does not overwrite the files you have already created. Because the build script
+does not overwrite your service directory any changes in the .templates
+directory will have no affect on the services you have already made. You will
+need to move your service folder out to get the latest version of the template.
The docker-compose instruction creates an internal network for the containers to communicate in, the ports get exposed to the PI's IP address when you want to connect from outside. It also creates a "DNS" the name being the container name. So it is important to note that when one container talks to another they talk by name. All the containers names are lowercase like nodered, influxdb...
+An easy way to find out your IP is by typing ip address
in the terminal and look next to eth0 or wlan0 for your IP. It is highly recommended that you set a static IP for your PI or at least reserve an IP on your router so that you know it
Check the docker-compose.yml to see which ports have been used
+mosquitto
http://influxdb:8086
Many containers try to use popular ports such as 80,443,8080. For example openHAB and Adminer both want to use port 8080 for their web interface. Adminer's port has been moved 9080 to accommodate this. Please check the description of the container in the README to see if there are any changes as they may not be the same as the port you are used to.
+Port mapping is done in the docker-compose.yml file. Each service should have a section that reads like this: +
ports:
+ - HOST_PORT:CONTAINER_PORT
+
ports:
+ - 9080:8080
+
Search github issues.
+Ask questions on IOTStack Discord. Or report + how you were able to fix a problem.
+There are over 40 gists about IOTstack. These address a diverse range of + topics from small convenience scripts to complete guides. These are + individual contributions that aren't reviewed.
+You can add your own keywords into the search: +https://gist.github.com/search?q=iotstack
+Breaking update
+A change done 2022-01-18 will require manual steps
+or you may get an error like:
+ERROR: Service "influxdb" uses an undefined network "iotstack_nw"
If you are trying to run IOTstack on non-Raspberry Pi hardware, you will probably get the following error from docker-compose
when you try to bring up your stack for the first time:
Error response from daemon: error gathering device information while adding custom device "/dev/ttyAMA0": no such file or directory
+
++You will get a similar message about any device which is not known to your hardware.
+
The /dev/ttyAMA0
device is the Raspberry Pi's built-in serial port so it is guaranteed to exist on any "real" Raspberry Pi. As well as being referenced by containers that can actually use the serial port, ttyAMA0
is often employed as a placeholder.
Examples:
+node-red-node-serialport
node to access the serial port. This is an example of "actual use";The Zigbee2MQTT container employs ttyAMA0
as a placeholder. This allows the container to start. Once you have worked out how your Zigbee adapter appears on your system, you will substitute your adapter's actual device path. For example:
- "/dev/serial/by-id/usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125B0028EEEEE0-if00:/dev/ttyACM0"
+
The simplest approach to solving "error gathering device information" problems is just to comment-out every device mapping that produces an error and, thereafter, treat the comments as documentation about what the container is expecting at run-time. For example, this is the devices list for Node-RED:
+ devices:
+ - "/dev/ttyAMA0:/dev/ttyAMA0"
+ - "/dev/vcio:/dev/vcio"
+ - "/dev/gpiomem:/dev/gpiomem"
+
Those are, in turn, the Raspberry Pi's:
+If none of those is available on your chosen platform (the usual situation on non-Pi hardware), commenting-out the entire block is appropriate:
+# devices:
+# - "/dev/ttyAMA0:/dev/ttyAMA0"
+# - "/dev/vcio:/dev/vcio"
+# - "/dev/gpiomem:/dev/gpiomem"
+
You interpret each line in a device map like this:
+ - "«external»:«internal»"
+
The «external» device is what the platform (operating system plus hardware) sees. The «internal» device is what the container sees. Although it is reasonably common for the two sides to be the same, this is not a requirement. It is usual to replace the «external» device with the actual device while leaving the «internal» device unchanged.
+Here is an example. On macOS, a CP2102 USB-to-Serial adapter shows up as:
+/dev/cu.SLAB_USBtoUART
+
Assume you are running the Node-RED container in macOS Docker Desktop, and that you want a flow to communicate with the CP2102. You would change the service definition like this:
+ devices:
+ - "/dev/cu.SLAB_USBtoUART:/dev/ttyAMA0"
+# - "/dev/vcio:/dev/vcio"
+# - "/dev/gpiomem:/dev/gpiomem"
+
In other words, the «external» (real world) device cu.SLAB_USBtoUART
is mapped to the «internal» (container) device ttyAMA0
. The flow running in the container is expecting to communicate with ttyAMA0
and is none-the-wiser.
sudo
to run docker commands¶You should never (repeat never) use sudo
to run docker or docker compose commands. Forcing docker to do something with sudo
almost always creates more problems than it solves. Please see What is sudo? to understand how sudo
actually works.
If docker
or docker-compose
commands seem to need elevated privileges, the most likely explanation is incorrect group membership. Please read the next section about errors involving docker.sock
. The solution (two usermod
commands) is the same.
If, however, the current user is a member of the docker
group but you still get error responses that seem to imply a need for sudo
, it implies that something fundamental is broken. Rather than resorting to sudo
, you are better advised to rebuild your system.
docker.sock
¶If you encounter permission errors that mention /var/run/docker.sock
, the most likely explanation is the current user (usually "pi") not being a member of the "docker" group.
You can check membership with the groups
command:
$ groups
+pi adm dialout cdrom sudo audio video plugdev games users input render netdev bluetooth lpadmin docker gpio i2c spi
+
In that list, you should expect to see both bluetooth
and docker
. If you do not, you can fix the problem like this:
$ sudo usermod -G docker -a $USER
+$ sudo usermod -G bluetooth -a $USER
+$ exit
+
The exit
statement is required. You must logout and login again for the two usermod
commands to take effect. An alternative is to reboot.
You should read this section if you experience any of the following problems:
+Start by shutting down your Pi and moving your SSD to one of the USB2 ports. The slower speed will often alleviate the problem.
+Tips:
+If you don't have sufficient control to issue a shutdown and/or your Pi won't shut down cleanly:
+If you run "headless" and find that the Pi responds to pings but you can't connect via SSH:
+dhcpcd
patch¶Next, verify that the dhcpcd patch is installed. There seems to be a timing component to the deadlock which is why it can be alleviated, to some extent, by switching the SSD to a USB2 port.
+If the dhcpcd
patch was not installed but you have just installed it, try returning the SSD to a USB3 port.
If problems persist even when the dhcpcd
patch is in place, you may have an SSD which isn't up to the Raspberry Pi's expectations. Try the following:
Run the following command:
+$ dmesg | grep "\] usb [[:digit:]]-"
+
In the output, identify your SSD. Example:
+[ 1.814248] usb 2-1: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
+[ 1.847688] usb 2-1: New USB device found, idVendor=f0a1, idProduct=f1b2, bcdDevice= 1.00
+[ 1.847708] usb 2-1: New USB device strings: Mfr=99, Product=88, SerialNumber=77
+[ 1.847723] usb 2-1: Product: Blazing Fast SSD
+[ 1.847736] usb 2-1: Manufacturer: Suspect Drives
+
In the above output, the second line contains the Vendor and Product codes that you need:
+idVendor=f0a1
idProduct=f1b2
Substitute the values of «idVendor» and «idProduct» into the following command template:
+sed -i.bak '1s/^/usb-storage.quirks=«idVendor»:«idProduct»:u /' "$CMDLINE"
+
This is known as a "quirks string". Given the dmesg
output above, the string would be:
sed -i.bak '1s/^/usb-storage.quirks=f0a1:f1b2:u /' "$CMDLINE"
+
Make sure that you keep the space between the :u
and /'
. You risk breaking your system if that space is not there.
Run these commands - the second line is the one you prepared in step 4 using sudo
:
$ CMDLINE="/boot/firmware/cmdline.txt" && [ -e "$CMDLINE" ] || CMDLINE="/boot/cmdline.txt"
+$ sudo sed -i.bak '1s/^/usb-storage.quirks=f0a1:f1b2:u /' "$CMDLINE"
+
The command:
+cmdline.txt
as cmdline.txt.bak
cmdline.txt
.You can confirm the result as follows:
+display the original (baseline reference):
+$ cat "$CMDLINE.bak"
+console=serial0,115200 console=tty1 root=PARTUUID=06c69364-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles
+
display the modified version:
+$ cat "$CMDLINE"
+usb-storage.quirks=f0a1:f1b2:u console=serial0,115200 console=tty1 root=PARTUUID=06c69364-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles
+
Shutdown your Pi.
+There is more information about this problem on the Raspberry Pi forum.
+If you create a mess and can't see how to recover, try proceeding like this:
+$ cd ~/IOTstack
+$ docker-compose down
+$ cd
+$ mv IOTstack IOTstack.old
+$ git clone https://github.com/SensorsIot/IOTstack.git IOTstack
+
In words:
+cd
command without any arguments changes your working directory to
+ your home directory (variously known as ~
or $HOME
or /home/pi
).Move your existing IOTstack directory out of the way. If you get a + permissions problem:
+sudo
; andsudo
command. Needing sudo
+ in this situation is an example of over-using sudo
.Check out a clean copy of IOTstack.
+Now, you have a clean slate and can start afresh by running the menu:
+$ cd ~/IOTstack
+$ ./menu.sh
+
The IOTstack.old
directory remains available as a reference for as long as
+you need it. Once you have no further use for it, you can clean it up via:
$ cd
+$ sudo rm -rf ./IOTstack.old # (1)
+
sudo
command is needed in this situation because some files and
+ folders (eg the "volumes" directory and most of its contents) are owned by
+ root.In simple terms, Docker is a software platform that simplifies the process of building, running, +managing and distributing applications. It does this by virtualizing the operating system of the +computer on which it is installed and running.
+Let’s say you have three different Python-based applications that you plan to host on a single server +(which could either be a physical or a virtual machine).
+Each of these applications makes use of a different version of Python, as well as the associated +libraries and dependencies, differ from one application to another.
+Since we cannot have different versions of Python installed on the same machine, this prevents us from +hosting all three applications on the same computer.
+Let’s look at how we could solve this problem without making use of Docker. In such a scenario, we +could solve this problem either by having three physical machines, or a single physical machine, which +is powerful enough to host and run three virtual machines on it.
+Both the options would allow us to install different versions of Python on each of these machines, +along with their associated dependencies.
+The machine on which Docker is installed and running is usually referred to as a Docker Host or Host in +simple terms. So, whenever you plan to deploy an application on the host, it would create a logical +entity on it to host that application. In Docker terminology, we call this logical entity a Container or +Docker Container to be more precise.
+Whereas the kernel of the host’s operating system is shared across all the containers that are running +on it.
+This allows each container to be isolated from the other present on the same host. Thus it supports +multiple containers with different application requirements and dependencies to run on the same host, +as long as they have the same operating system requirements.
+Docker Images and Docker Containers are the two essential things that you will come across daily while +working with Docker.
+In simple terms, a Docker Image is a template that contains the application, and all the dependencies +required to run that application on Docker.
+On the other hand, as stated earlier, a Docker Container is a logical entity. In more precise terms, +it is a running instance of the Docker Image.
+Docker Compose provides a way to orchestrate multiple containers that work together. Docker compose +is a simple yet powerful tool that is used to run multiple containers as a single service. +For example, suppose you have an application which requires Mqtt as a communication service between IOT devices +and OpenHAB instance as a Smarthome application service. In this case by docker-compose, you can create one +single file (docker-compose.yml) which will create both the containers as a single service without starting +each separately. It wires up the networks (literally), mounts all volumes and exposes the ports.
+The IOTstack with the templates and menu is a generator for that docker-compose service descriptor.
+use yaml files to configure application services (docker-compose.yaml) +can start all the services with a single command ( docker-compose up ) +can stop all the service with a single command ( docker-compose down )
+The containers are automagically connected when we run the stack with docker-compose up. +The containers using same logical network (by default) where the instances can access each other with the instance +logical name. Means if there is an instance called mosquitto and an openhab, when openHAB instance need +to access mqtt on that case the domain name of mosquitto will be resolved as the runnuning instance of mosquitto.
+The containers are enclosed processes which state are lost with the restart of container. To be able to +persist states volumes (images or directories) can be used to share data with the host. +Which means if you need to persist some database, configuration or any state you have to bind volumes where the +running service inside the container will write files to that binded volume. +In order to understand what a Docker volume is, we first need to be clear about how the filesystem normally works +in Docker. Docker images are stored as series of read-only layers. When we start a container, Docker takes +the read-only image and adds a read-write layer on top. If the running container modifies an existing file, +the file is copied out of the underlying read-only layer and into the top-most read-write layer where the +changes are applied. The version in the read-write layer hides the underlying file, but does not +destroy it -- it still exists in the underlying layer. When a Docker container is deleted, +relaunching the image will start a fresh container without any of the changes made in the previously +running container -- those changes are lost, thats the reason that configs, databases are not persisted,
+Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. +While bind mounts are dependent on the directory structure of the host machine, volumes are completely +managed by Docker. In IOTstack project uses the volumes directory in general to bind these container volumes.
+When containers running a we would like to delegate some services to the outside world, for example +OpenHAB web frontend have to be accessible for users. There are several ways to achive that. One is +mounting the port to the most machine, this called port binding. On that case service will have a dedicated +port which can be accessed, one drawback is one host port can be used one serice only. Another way is reverse proxy. +The term reverse proxy (or Load Balancer in some terminology) is normally applied to a service that sits in front +of one or more servers (in our case containers), accepting requests from clients for resources located on the +server(s). From the client point of view, the reverse proxy appears to be the web server and so is +totally transparent to the remote user. Which means several service can share same port the server +will route the request by the URL (virtual domain or context path). For example, there is grafana and openHAB +instances, where the opeanhab.domain.tld request will be routed to openHAB instance 8181 port while +grafana.domain.tld to grafana instance 3000 port. On that case the proxy have to be mapped for host port 80 and/or +444 on host machine, the proxy server will access the containers via the docker virtual network.
+Source materials used:
+https://takacsmark.com/docker-compose-tutorial-beginners-by-example-basics/ +https://www.freecodecamp.org/news/docker-simplified-96639a35ff36/ +https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/ +https://blog.container-solutions.com/understanding-volumes-docker
+ + + + + + + + + + + + + +Many first-time users of IOTstack get into difficulty by misusing the sudo
command. The problem is best understood by example. In the following, you would expect ~
(tilde) to expand to /home/pi
. It does:
$ echo ~/IOTstack
+/home/pi/IOTstack
+
The command below sends the same echo
command to bash
for execution. This is what happens when you type the name of a shell script. You get a new instance of bash
to run the script:
$ bash -c 'echo ~/IOTstack'
+/home/pi/IOTstack
+
Same answer. Again, this is what you expect. But now try it with sudo
on the front:
$ sudo bash -c 'echo ~/IOTstack'
+/root/IOTstack
+
Different answer. It is different because sudo
means "become root, and then run the command". The process of becoming root changes the home directory, and that changes the definition of ~
.
Any script designed for working with IOTstack assumes ~
(or the equivalent $HOME
variable) expands to /home/pi
. That assumption is invalidated if the script is run by sudo
.
Of necessity, any script designed for working with IOTstack will have to invoke sudo
inside the script when it is required. You do not need to second-guess the script's designer.
Please try to minimise your use of sudo
when you are working with IOTstack. Here are some rules of thumb:
Is what you are about to run a script? If yes, check whether the script already contains sudo
commands. Using menu.sh
as the example:
$ grep -c 'sudo' ~/IOTstack/menu.sh
+28
+
There are numerous uses of sudo
within menu.sh
. That means the designer thought about when sudo
was needed.
Did the command you just executed work without sudo
? Note the emphasis on the past tense. If yes, then your work is done. If no, and the error suggests elevated privileges are necessary, then re-execute the last command like this:
$ sudo !!
+
It takes time, patience and practice to learn when sudo
is actually needed. Over-using sudo
out of habit, or because you were following a bad example you found on the web, is a very good way to find that you have created so many problems for yourself that will need to reinstall your IOTstack. Please err on the side of caution!
To edit sudo functionality and permissions use: sudo visudo
For instance, to allow sudo usage without prompting for a password: +
# Allow members of group sudo to execute any command without password prompt
+%sudo ALL=(ALL:ALL) NOPASSWD:ALL
+
For more information: man sudoers
IOTstack is not a system. It is a set of conventions for assembling arbitrary collections of containers into something that has a reasonable chance of working out-of-the-box. The three most important conventions are:
+If a container needs information to persist across restarts (and most containers do) then the container's persistent store will be found at:
+~/IOTstack/volumes/«container»
+
Most service definitions examples found on the web have a scattergun approach to this problem. IOTstack imposes order on this chaos.
+To the maximum extent possible, network port conflicts have been sorted out in advance.
+Sometimes this is not possible. For example, Pi-hole and AdGuardHome both offer Domain Name System services. The DNS relies on port 53. You can't have two containers claiming port 53 so the only way to avoid this is to pick either Pi-hole or AdGuardHome. +3. Where multiple containers are needed to implement a single user-facing service, the IOTstack service definition will include everything needed. A good example is NextCloud which relies on MariaDB. IOTstack implements MariaDB as a private instance which is only available to NextCloud. This strategy ensures that you are able to run your own separate MariaDB container without any risk of interference with your NextCloud service.
+IOTstack makes the following assumptions:
+Your hardware is capable of running Debian or one of its derivatives. Examples that are known to work include:
+a Raspberry Pi (typically a 3B+ or 4B)
+++The Raspberry Pi Zero W2 has been tested with IOTstack. It works but the 512MB RAM means you should not try to run too many containers concurrently.
+
Orange Pi Win/Plus see also issue 375
+Your host or guest system is running a reasonably-recent version of Debian or an operating system which is downstream of Debian in the Linux family tree, such as Raspberry Pi OS (aka "Raspbian") or Ubuntu.
+IOTstack is known to work in 32-bit mode but not all containers have images on DockerHub that support 320bit mode. If you are setting up a new system from scratch, you should choose a 64-bit option.
+IOTstack was known to work with Buster but it has not been tested recently. Bullseye is known to work but if you are setting up a new system from scratch, you should choose Bookworm.
+Please don't waste your own time trying Linux distributions from outside the Debian family tree. They are unlikely to work.
+You are logged-in as the default user (ie not root). In most cases, this is the user with ID=1000 and is what you get by default on either a Raspberry Pi OS or Debian installation.
+This assumption is not really an IOTstack requirement as such. However, many containers assume UID=1000 exists and you are less likely to encounter issues if this assumption holds.
+Please don't read these assumptions as saying that IOTstack will not run on other hardware, other operating systems, or as a different user. It is just that IOTstack gets most of its testing under these conditions. The further you get from these implicit assumptions, the more your mileage may vary.
+You have two choices:
+This method assumes an existing system rather than a green-fields installation. The script uses the principle of least interference. It only installs the bare minimum of prerequisites and, with the exception of adding some boot time options to your Raspberry Pi (but not any other kind of hardware), makes no attempt to tailor your system.
+To use this method:
+Install curl
:
$ sudo apt install -y curl
+
Run the following command:
+$ curl -fsSL https://raw.githubusercontent.com/SensorsIot/IOTstack/master/install.sh | bash
+
The install.sh
script is designed to be run multiple times. If the script discovers a problem, it will explain how to fix that problem and, assuming you follow the instructions, you can safely re-run the script. You can repeat this process until the script completes normally.
Compared with the add-on method, PiBuilder is far more comprehensive. PiBuilder:
+In addition to cloning IOTstack (this repository), PiBuilder also clones:
+Performs extra tailoring intended to deliver a rock-solid platform for IOTstack.
+PiBuilder does, however, assume a green fields system rather than an existing installation. Although the PiBuilder scripts will probably work on an existing system, that scenario has never been tested so it's entirely at your own risk.
+PiBuilder actually has two specific use-cases:
+You can skip this section if you used PiBuilder to construct your system. That's because PiBuilder installs all necessary patches automatically.
+If you used the add-on method, you should consider applying these patches by hand. Unless you know that a patch is not required, assume that it is needed.
+Run the following commands:
+$ sudo bash -c '[ $(egrep -c "^allowinterfaces eth\*,wlan\*" /etc/dhcpcd.conf) -eq 0 ] && echo "allowinterfaces eth*,wlan*" >> /etc/dhcpcd.conf'
+
This patch prevents the dhcpcd
daemon from trying to allocate IP addresses to Docker's docker0
and veth
interfaces. Docker assigns the IP addresses itself and dhcpcd
trying to get in on the act can lead to a deadlock condition which can freeze your Pi.
See Issue 219 and Issue 253 for more information.
+This patch is ONLY for Raspbian Buster. Do NOT install this patch if you are running Raspbian Bullseye or Bookworm.
+check your OS release
+Run the following command:
+$ grep "PRETTY_NAME" /etc/os-release
+PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
+
If you see the word "buster", proceed to step 2. Otherwise, skip this patch.
+if you are indeed running "buster"
+Without this patch on Buster, Docker images will fail if:
+To install the patch:
+$ sudo apt-key adv --keyserver hkps://keyserver.ubuntu.com:443 --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
+$ echo "deb http://httpredir.debian.org/debian buster-backports main contrib non-free" | sudo tee -a "/etc/apt/sources.list.d/debian-backports.list"
+$ sudo apt update
+$ sudo apt install libseccomp2 -t buster-backports
+
Kernel control groups need to be enabled in order to monitor container specific
+usage. This makes commands like docker stats
fully work. Also needed for full
+monitoring of docker resource usage by the telegraf container.
Enable by running (takes effect after reboot):
+$ CMDLINE="/boot/firmware/cmdline.txt" && [ -e "$CMDLINE" ] || CMDLINE="/boot/cmdline.txt"
+$ echo $(cat "$CMDLINE") cgroup_memory=1 cgroup_enable=memory | sudo tee "$CMDLINE"
+$ sudo reboot
+
The menu is used to construct your docker-compose.yml
file. That file is read by docker-compose
which issues the instructions necessary for starting your stack.
The menu is a great way to get started quickly but it is only an aid. It is a good idea to learn the various docker
and docker-compose
commands so you can use them outside the menu. It is also a good idea to study the docker-compose.yml
generated by the menu to see how everything is put together. You will gain a lot of flexibility if you learn how to add containers by hand.
In essence, the menu is a concatenation tool which appends service definitions that exist inside the hidden ~/IOTstack/.templates
folder to your docker-compose.yml
.
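For example, you can inspect the fragment the menu would append for a given service (Node-RED here; adjust the name to suit, and note that the exact layout inside .templates can differ between menu versions):
$ cat ~/IOTstack/.templates/nodered/service.yml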
Once you understand what the menu does (and, more importantly, what it doesn't do), you will realise that the real power of IOTstack lies not in its menu system but resides in its conventions.
+To create your first docker-compose.yml
:
$ cd ~/IOTstack
+$ ./menu.sh
+Select "Build Stack"
+
Follow the on-screen prompts and select the containers you need.
+++The best advice we can give is "start small". Limit yourself to the core containers you actually need (eg Mosquitto, Node-RED, InfluxDB, Grafana, Portainer). You can always add more containers later. Some users have gone overboard with their initial selections and have run into what seem to be Raspberry Pi OS limitations.
+
Key point:
+The process finishes by asking you to bring up the stack:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
The first time you run up
the stack docker will download all the images from DockerHub. How long this takes will depend on how many containers you selected and the speed of your internet connection.
Some containers also need to be built locally. Node-RED is an example. Depending on the Node-RED nodes you select, building the image can also take a very long time. This is especially true if you select the SQLite node.
+Be patient (and, if you selected the SQLite node, ignore the huge number of warnings).
+The commands in this menu execute shell scripts in the root of the project.
+The old and new menus differ in the options they offer. You should come back and explore them once your stack is built and running.
+Handy rules:
+docker
commands can be executed from anywhere, butdocker-compose
commands need to be executed from within ~/IOTstack
To start the stack:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
Once the stack has been brought up, it will stay up until you take it down. This includes shutdowns and reboots of your Raspberry Pi. If you do not want the stack to start automatically after a reboot, you need to stop the stack before you issue the reboot command.
+If you get docker logging error like:
+Cannot create container for service [service name here]: unknown log opt 'max-file' for journald log driver
+
Run the command:
+$ sudo nano /etc/docker/daemon.json
+
change:
+"log-driver": "journald",
+
to:
+"log-driver": "json-file",
+
Logging limits were added to prevent Docker using up lots of RAM if log2ram is enabled, or SD cards being filled with log data and degraded from unnecessary IO. See Docker Logging configurations
+You can also turn logging off or set it to use another option for any service by using the IOTstack docker-compose-override.yml
file mentioned at IOTstack/Custom.
Another approach is to change daemon.json
to be like this:
{
+ "log-driver": "local",
+ "log-opts": {
+ "max-size": "1m"
+ }
+}
+
The local
driver is specifically designed to prevent disk exhaustion. Limiting log size to one megabyte also helps, particularly if you only have a limited amount of storage.
If you are familiar with system logging where it is best practice to retain logs spanning days or weeks, you may feel that one megabyte is unreasonably small. However, before you rush to increase the limit, consider that each container is the equivalent of a small computer dedicated to a single task. By their very nature, containers tend to either work as expected or fail outright. That, in turn, means that it is usually only recent container logs showing failures as they happen that are actually useful for diagnosing problems.
+To start a particular container:
+$ cd ~/IOTstack
+$ docker-compose up -d «container»
+
Stopping aka "downing" the stack stops and deletes all containers, and removes the internal network:
+$ cd ~/IOTstack
+$ docker-compose down
+
To stop the stack without removing containers, run:
+$ cd ~/IOTstack
+$ docker-compose stop
+
stop
can also be used to stop individual containers, like this:
$ cd ~/IOTstack
+$ docker-compose stop «container»
+
This puts the container in a kind of suspended animation. You can resume the container with
+$ cd ~/IOTstack
+$ docker-compose start «container»
+
You can also down
a container:
$ cd ~/IOTstack
+$ docker-compose down «container»
+
If the down
command returns an error suggesting that you can't use it to down a container, it actually means that you have an obsolete version of docker-compose
. You should upgrade your system. The workaround is to use the old syntax:
$ cd ~/IOTstack
+$ docker-compose rm --force --stop -v «container»
+
To reactivate a container which has been stopped and removed:
+$ cd ~/IOTstack
+$ docker-compose up -d «container»
+
You can check the status of containers with:
+$ docker ps
+
or
+$ cd ~/IOTstack
+$ docker-compose ps
+
You can inspect the logs of most containers like this:
+$ docker logs «container»
+
for example:
+$ docker logs nodered
+
You can also follow a container's log as new entries are added by using the -f
flag:
$ docker logs -f nodered
+
Terminate with a Control+C. Note that restarting a container will also terminate a followed log.
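+The docker logs command also accepts flags to limit how much history is displayed, which can be handy when a log is large. For example:
+$ docker logs --tail 50 nodered
+$ docker logs --since 10m nodered
+
+The first command shows only the last 50 lines; the second shows only entries from the last ten minutes.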
+You can restart a container in several ways:
+$ cd ~/IOTstack
+$ docker-compose restart «container»
+
This kind of restart is the least-powerful form of restart. A good way to think of it is "the container is only restarted, it is not rebuilt".
+If you change a docker-compose.yml
setting for a container and/or an environment variable file referenced by docker-compose.yml
then a restart
is usually not enough to bring the change into effect. You need to make docker-compose
notice the change:
$ cd ~/IOTstack
+$ docker-compose up -d «container»
+
This type of "restart" rebuilds the container.
+Alternatively, to force a container to rebuild (without changing either docker-compose.yml
or an environment variable file):
$ cd ~/IOTstack
+$ docker-compose up -d --force-recreate «container»
+
See also updating images built from Dockerfiles if you need to force docker-compose
to notice a change to a Dockerfile.
Docker allows a container's designer to map folders inside a container to a folder on your disk (SD, SSD, HD). This is done with the "volumes" key in docker-compose.yml
. Consider the following snippet for Node-RED:
volumes:
+ - ./volumes/nodered/data:/data
+
You read this as two paths, separated by a colon:
+the external path: ./volumes/nodered/data
+the internal path: /data
In this context, the leading "." means "the folder containing docker-compose.yml
", so the external path is actually:
~/IOTstack/volumes/nodered/data
This type of volume is a bind-mount, where the container's internal path is directly linked to the external path. All file-system operations, reads and writes, are mapped directly to the files and folders at the external path.
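+One way to convince yourself that the mapping works is to create a file on one side and look for it on the other. A quick test along these lines (using Node-RED, while the container is running) should work:
+$ docker exec nodered touch /data/hello.txt
+$ ls -l ~/IOTstack/volumes/nodered/data/hello.txt
+$ rm ~/IOTstack/volumes/nodered/data/hello.txt   # may need sudo, depending on file ownership
+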
+If you need a "clean slate" for a container, you can delete its volumes. Using InfluxDB as an example:
+$ cd ~/IOTstack
+$ docker-compose rm --force --stop -v influxdb
+$ sudo rm -rf ./volumes/influxdb
+$ docker-compose up -d influxdb
+
When docker-compose
tries to bring up InfluxDB, it will notice this volume mapping in docker-compose.yml
:
volumes:
+ - ./volumes/influxdb/data:/var/lib/influxdb
+
and check to see whether ./volumes/influxdb/data
is present. Finding it not there, it does the equivalent of:
$ sudo mkdir -p ./volumes/influxdb/data
+
When InfluxDB starts, it sees that the folder on the right-hand side of the volumes mapping (/var/lib/influxdb
) is empty and initialises new databases.
This is how most containers behave. There are exceptions so it's always a good idea to keep a backup.
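+One simple, if crude, way of taking such a backup is to archive the entire volumes folder while the stack is down; a sketch (the archive name is just an example):
+$ cd ~/IOTstack
+$ docker-compose down
+$ sudo tar -czf ~/iotstack-volumes-$(date +%Y-%m-%d).tar.gz ./volumes
+$ docker-compose up -d
+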
+Breaking update
+Recent changes will require manual steps
+or you may get an error like:
+ERROR: Service "influxdb" uses an undefined network "iotstack_nw"
You should keep your Raspberry Pi up-to-date. Despite the word "container" suggesting that containers are fully self-contained, they sometimes depend on operating system components ("WireGuard" is an example).
+$ sudo apt update
+$ sudo apt upgrade -y
+
Although the menu will generally do this for you, it does not hurt to keep your local copy of the IOTstack repository in sync with the master version on GitHub.
+$ cd ~/IOTstack
+$ git pull
+
There are two kinds of images used in IOTstack:
+Those built using Dockerfiles (special cases)
+++A Dockerfile is a set of instructions designed to customise an image before it is instantiated to become a running container.
+
The easiest way to work out which type of image you are looking at is to inspect the container's service definition in your docker-compose.yml
file. If the service definition contains the:
image:
keyword then the image is not built using a Dockerfile.build:
keyword then the image is built using a Dockerfile.If new versions of this type of image become available on DockerHub, your local IOTstack copies can be updated by a pull
command:
$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune
+
The pull
downloads any new images. It does this without disrupting the running stack.
The up -d
notices any newly-downloaded images, builds new containers, and swaps old-for-new. There is barely any downtime for affected containers.
Containers built using Dockerfiles have a two-step process: first, a base image is downloaded from DockerHub; then, the Dockerfile is run against that base image to produce a local image.
+Node-RED is a good example of a container built from a Dockerfile. The Dockerfile defines some (or possibly all) of your add-on nodes, such as those needed for InfluxDB or Tasmota.
+There are two separate update situations that you need to consider:
+Node-RED also provides a good example of why your Dockerfile might change: if you decide to add or remove add-on nodes.
+Note:
+When your Dockerfile changes, you need to rebuild like this:
+$ cd ~/IOTstack
+$ docker-compose up --build -d «container»
+$ docker system prune
+
This only rebuilds the local image and, even then, only if docker-compose
senses a material change to the Dockerfile.
If you are trying to force the inclusion of a later version of an add-on node, you need to treat it like a DockerHub update.
+Key point:
+Note:
+You can also use this type of build if you get an error after modifying Node-RED's environment:
+$ cd ~/IOTstack
+$ docker-compose up --build -d nodered
+
When a newer version of the base image appears on DockerHub, you need to rebuild like this:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull «container»
+$ docker-compose up -d «container»
+$ docker system prune
+$ docker system prune
+
This causes DockerHub to be checked for the later version of the base image, downloading it as needed.
+Then, the Dockerfile is run to produce a new local image. The Dockerfile run happens even if a new base image was not downloaded in the previous step.
+As your system evolves and new images come down from DockerHub, you may find that more disk space is being occupied than you expected. Try running:
+$ docker system prune
+
This recovers anything no longer in use. Sometimes multiple prune
commands are needed (eg the first removes an old local image, the second removes the old base image).
If you add a container via menu.sh
and later remove it (either manually or via menu.sh
), the associated image(s) will probably persist. You can check which images are installed via:
$ docker images
+
+REPOSITORY TAG IMAGE ID CREATED SIZE
+influxdb latest 1361b14bf545 5 days ago 264MB
+grafana/grafana latest b9dfd6bb8484 13 days ago 149MB
+iotstack_nodered latest 21d5a6b7b57b 2 weeks ago 540MB
+portainer/portainer-ce latest 5526251cc61f 5 weeks ago 163MB
+eclipse-mosquitto latest 4af162db6b4c 6 weeks ago 8.65MB
+nodered/node-red latest fa3bc6f20464 2 months ago 376MB
+portainer/portainer latest dbf28ba50432 2 months ago 62.5MB
+
Both "Portainer CE" and "Portainer" are in that list. Assuming "Portainer" is no longer in use, it can be removed by using either its repository name or its Image ID. In other words, the following two commands are synonyms:
+$ docker rmi portainer/portainer
+$ docker rmi dbf28ba50432
+
In general, you can use the repository name to remove an image but the Image ID is sometimes needed. The most common situation where you are likely to need the Image ID is after an image has been updated on DockerHub and pulled down to your Raspberry Pi. You will find two images with the same name. One will be tagged "latest" (the running version) while the other will be tagged "<none>" (the prior version). You use the Image ID to resolve the ambiguity.
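+You can also list just the "<none>"-tagged (dangling) images, which is a convenient way to find candidates for removal:
+$ docker images --filter "dangling=true"
+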
+See container image updates to understand how to tell the difference between images that are used "as is" from DockerHub versus those that are built from local Dockerfiles.
+Note:
+To pin an image to a specific version:
+If the image comes straight from DockerHub, you apply the pin in docker-compose.yml
. For example, to pin Grafana to version 7.5.7, you change:
grafana:
+ container_name: grafana
+ image: grafana/grafana:latest
+ …
+
to:
+ grafana:
+ container_name: grafana
+ image: grafana/grafana:7.5.7
+ …
+
To apply the change, "up" the container:
+$ cd ~/IOTstack
+$ docker-compose up -d grafana
+
If the image is built using a local Dockerfile, you apply the pin in the Dockerfile. For example, to pin Mosquitto to version 1.6.15, edit ~/IOTstack/.templates/mosquitto/Dockerfile
to change:
# Download base image
+FROM eclipse-mosquitto:latest
+…
+
to:
+# Download base image
+FROM eclipse-mosquitto:1.6.15
+…
+
To apply the change, "up" the container and pass the --build
flag:
$ cd ~/IOTstack
+$ docker-compose up -d --build mosquitto
+
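+If you want to confirm which Mosquitto base images are present locally after pinning and rebuilding, listing them by repository name is one way to check; both the pinned tag and any previously-downloaded "latest" tag may appear until you prune:
+$ docker images eclipse-mosquitto
+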
AdGuard Home and PiHole perform similar functions. They use the same ports so you can not run both at the same time. You must choose one or the other.
+When you first install AdGuard Home:
+Use a web browser to connect to it using port 3001. For example:
+http://raspberrypi.local:3001
+
Click "Getting Started".
+Change the port number for the Admin Web Interface to be "8089". Leave the other settings on the page at their defaults and click "Next".
+After the initial setup, you connect to AdGuard Home via port 8089:
+http://raspberrypi.local:8089
+
Port 8089 is the default administrative user interface for AdGuard Home running under IOTstack.
+Port 8089 is not active until you have completed the Quick Start procedure. You must start by connecting to port 3001.
+Because of AdGuard Home limitations, you must take special precautions if you decide to change to a different port number:
+The internal and external ports must be the same; and
+You must convince AdGuard Home that it is a first-time installation:
+$ cd ~/IOTstack
+$ docker-compose stop adguardhome
+$ docker-compose rm -f adguardhome
+$ sudo rm -rf ./volumes/adguardhome
+$ docker-compose up -d adguardhome
+
Repeat the Quick Start procedure, this time substituting the new Admin Web Interface port where you see "8089".
+Port 3001 (external, 3000 internal) is only used during Quick Start procedure. Once port 8089 becomes active, port 3001 ceases to be active.
+In other words, you need to keep port 3001 reserved even though it is only ever used to set up port 8089.
If you want to run AdGuard Home as your DHCP server, you need to put the container into "host mode". You need to edit the AdGuard Home service definition in docker-compose.yml
to:
add the line:
+network_mode: host
+
remove the ports:
directive and all of the port mappings.
Note:
This is a nice tool for managing databases. The web interface has moved to port 9080 because openHAB and Adminer were using the same port. If you have a port conflict, edit docker-compose.yml and, under the adminer service, change the ports line to read:
ports:
+ - 9080:8080
+
This document discusses an IOTstack-specific version of Blynk-Server. It is built on top of an Ubuntu base image using a Dockerfile.
+Acknowledgement:
+~/IOTstack
+├── .templates
+│ └── blynk_server
+│ ├── Dockerfile ❶
+│ ├── docker-entrypoint.sh ❷
+│ ├── iotstack_defaults ❸
+│ │ ├── mail.properties
+│ │ └── server.properties
+│ └── service.yml ❹
+├── services
+│ └── blynk_server
+│ └── service.yml ❺
+├── docker-compose.yml ❻
+└── volumes
+ └── blynk_server ❼
+ ├── config ❽
+ │ ├── mail.properties
+ │ └── server.properties
+ └── data
+
blynk_server
container.Everything in ❽:
+Periodically, the source code is updated and a new version is released. You can check for the latest version at the releases page.
+When you select Blynk Server in the IOTstack menu, the template service definition is copied into the Compose file.
+++Under old menu, it is also copied to the working service definition and then not really used.
+
On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
docker-compose
reads the Compose file. When it arrives at the blynk_server
fragment, it finds:
blynk_server:
+ build:
+ context: ./.templates/blynk_server/.
+ args:
+ - BLYNK_SERVER_VERSION=0.41.16
+
The build
statement tells docker-compose
to look for:
~/IOTstack/.templates/blynk_server/Dockerfile
+
The BLYNK_SERVER_VERSION
argument is passed into the build process. This implicitly pins each build to the version number in the Compose file (eg 0.41.16). If you need to update to a later version, you edit the Compose file to change the version number and rebuild the local image (see the update procedure below).
++The Dockerfile is in the
+.templates
directory because it is intended to be a common build for all IOTstack users. This is different to the arrangement for Node-RED where the Dockerfile is in theservices
directory because it is how each individual IOTstack user's version of Node-RED is customised.
The Dockerfile begins with:
+FROM ubuntu
+
The FROM
statement tells the build process to pull down the base image from DockerHub.
++It is a base image in the sense that it never actually runs as a container on your Raspberry Pi.
+
The remaining instructions in the Dockerfile customise the base image to produce a local image. The customisations are:
+The local image is instantiated to become your running container.
+When you run the docker images
command after Blynk Server has been built, you may see two rows that are relevant:
$ docker images
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_blynk_server latest 3cd6445f8a7e 3 hours ago 652MB
+ubuntu latest 897590a6c564 7 days ago 49.8MB
+
ubuntu
is the base image; andiotstack_blynk_server
is the local image.You may see the same pattern in Portainer, which reports the base image as "unused". You should not remove the base image, even though it appears to be unused.
+++Whether you see one or two rows depends on the version of
+docker-compose
you are using and how your version ofdocker-compose
builds local images.
You can inspect Blynk Server's log by:
+$ docker logs blynk_server
+
The first time you launch the blynk_server
container, the following structure will be created in the persistent storage area:
~/IOTstack/volumes/blynk_server
+├── [drwxr-xr-x pi ] config
+│ ├── [-rw-r--r-- pi ] mail.properties
+│ └── [-rw-r--r-- pi ] server.properties
+└── [drwxr-xr-x root ] data
+
The two .properties
files can be used to alter Blynk Server's configuration. When you make changes to these files, you activate them by restarting the container:
$ cd ~/IOTstack
+$ docker-compose restart blynk_server
+
Erasing Blynk Server's persistent storage area triggers self-healing and restores known defaults:
+$ cd ~/IOTstack
+$ docker-compose down blynk_server
+$ sudo rm -rf ./volumes/blynk_server
+$ docker-compose up -d blynk_server
+
You can also remove individual configuration files and then trigger self-healing. For example, if you decide to edit server.properties
and make a mess, you can restore the original default version like this:
$ cd ~/IOTstack
+$ rm volumes/blynk_server/config/server.properties
+$ docker-compose restart blynk_server
+
See also if downing a container doesn't work
+To find out when a new version has been released, you need to visit the Blynk-Server releases page at GitHub.
+At the time of writing, version 0.41.16 was the most up-to-date. Suppose that version 0.41.17 has been released and that you decide to upgrade:
Edit your Compose file to change the version number:
+ blynk_server:
+ build:
+ context: ./.templates/blynk_server/.
+ args:
+ - BLYNK_SERVER_VERSION=0.41.17
+
Note:
+You then have two options:
+If you only want to reconstruct the local image:
+$ cd ~/IOTstack
+$ docker-compose up --build -d blynk_server
+$ docker system prune -f
+
If you want to update the Ubuntu base image at the same time:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull blynk_server
+$ docker-compose up -d blynk_server
+$ docker system prune -f
+$ docker system prune -f
+
The second prune
will only be needed if there is an old base image and that, in turn, depends on the version of docker-compose
you are using and how your version of docker-compose
builds local images.
See the References for documentation links.
+To connect to the administrative interface, navigate to:
+https://<your pis IP>:9444/admin
+
You may encounter browser security warnings which you will have to acknowledge in order to be able to connect to the page. The default credentials are:
+admin@blynk.cc
admin
Restart the container using either Portainer or the command line:
+$ cd ~/IOTstack
+$ docker-compose restart blynk_server
+
Optional step, useful for getting the auth token emailed to you. +(To be added once confirmed working....)
+Enter Node-Red.....
+node-red-contrib-blynk-ws
from Manage Palette.Configure the Blynk node for the first time:
+URL: wss://youripaddress:9444/websockets
+
There is more information here.
+4. Enter your auth token from before and save/exit.
+5. When you deploy the flow, notice that the app shows a connected message, as does the Blynk node.
+6. Press the button on the app and you will notice that the payload is sent to the debug node.
+If you selected Kapacitor in the menu and want Chronograf to be able to interact with it, you need to edit docker-compose.yml
to un-comment the lines which are commented-out in the following:
chronograf:
+ …
+ environment:
+ …
+ # - KAPACITOR_URL=http://kapacitor:9092
+ depends_on:
+ …
+ # - kapacitor
+
If the Chronograf container is already running when you make this change, run:
$ cd ~/IOTstack
+$ docker-compose up -d chronograf
+
You can update the container via:
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune
+
In words:
+docker-compose pull
downloads any newer images;docker-compose up -d
causes any newly-downloaded images to be instantiated as containers (replacing the old containers); andprune
gets rid of the outdated images.If you need to pin to a particular version:
+docker-compose.yml
.Find the line:
+image: chronograf:latest
+
Replace latest
with the version you wish to pin to. For example, to pin to version 1.9.0:
image: chronograf:1.9.0
+
Save the file and tell docker-compose
to bring up the container:
$ cd ~/IOTstack
+$ docker-compose up -d chronograf
+$ docker system prune
+
The web UI can be found on "your_ip":5000
.
The default credentials are:
+* User: admin
+* Password: admin
DashMachine is a web application bookmark dashboard. It allows you to have all your application bookmarks available in one place, grouped and organized how you want to see them.
+Within the context of IOTstack, DashMachine can help you organize your deployed services.
+ + + + + + + + + + + + + +If you use "old menu", you may get an error message similar to the following on first launch:
+parsing ~/IOTstack/docker-compose.yml: error while interpolating services.deconz.devices.[]: required variable DECONZ_DEVICE_PATH is missing a value: eg echo DECONZ_DEVICE_PATH=/dev/serial0 >>~/IOTstack/.env
+
The message is telling you that you need to define the path to your deCONZ device. Common examples are:
+/dev/serial0
/dev/ttyUSB0
/dev/ttyACM0
Once you have identified the appropriate device path, you can define it like this:
+$ echo DECONZ_DEVICE_PATH=/dev/serial0 >>~/IOTstack/.env
+
This example uses /dev/serial0
. Substitute your actual device path if it is different.
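+If you are not sure which device path applies, listing the usual candidates while the gateway is plugged in can help; something like:
+$ ls -l /dev/serial* /dev/ttyUSB* /dev/ttyACM* 2>/dev/null
+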
New menu offers a sub-menu (place the cursor on deconz
and press the right arrow) where you can select the appropriate device path.
Before running docker-compose up -d
, make sure your Linux user is part of the dialout group, which allows the user access to serial devices (i.e. Conbee/Conbee II/RaspBee). If you are not certain, simply add your user to the dialout group by running the following command (username "pi" being used as an example):
$ sudo usermod -a -G dialout pi
+
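+You can confirm that the change has taken effect by checking the user's group membership (username "pi" as in the example above); "dialout" should appear in the list. Note that you may need to log out and back in before the change applies to your current session:
+$ groups pi
+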
Your Conbee/Conbee II/RaspBee gateway must be plugged in when the deCONZ Docker container is being brought up. If your gateway is not detected, or no lights can be paired, try moving the device to another USB port. A reboot may help too.
+Use a 0.5-1m USB extension cable with the ConBee (II) to avoid WiFi and Bluetooth noise/interference from your Raspberry Pi (recommended by the manufacturer and often the solution to poor performance).
+The Phoscon UI is available using port 8090 (http://your.local.ip.address:8090/)
+The Zigbee mesh can be viewed using VNC on port 5901. The default VNC password is "changeme".
+Install node-red-contrib-deconz via the "Manage palette" menu in Node-RED (if not already installed) and follow these 2 simple steps (also shown in the video below):
+Step 1: In the Phoscon UI, Go to Settings > Gateway > Advanced and click "Authenticate app".
Step 2: In Node-RED, open a deCONZ node, select "Add new deconz-server", insert your IP address and port 8090 and click "Get settings". Click "Add", "Done" and "Deploy". Your device list will not be updated before deploying.
+ + + + + + + + + + + + + + +diyHue is a utility to control the lights in your home
+Before you start diyHue you will need to get your IP and MAC addresses. Run ip addr
in the terminal
Enter these values into the ./services/diyhue/diyhue.env
file
The default username and password it Hue
and Hue
respectively
The web interface is available on port 8070
+ + + + + + + + + + + + + +There is no IOTstack documentation for Domoticz.
+This is a standing invitation to anyone who is familiar with this container to submit a Pull Request to provide some documentation.
+TZ=${TZ:-Etc/UTC}
If TZ
is defined in ~/IOTstack/.env
then the value there is applied, otherwise the default of Etc/UTC
is used. You can initialise .env
like this:
$ cd ~/IOTstack
+$ [ $(grep -c "^TZ=" .env) -eq 0 ] && echo "TZ=$(cat /etc/timezone)" >>.env
+
LOG_PATH=/opt/domoticz/userdata/domoticz.log
This is disabled by default. If you enable it, Domoticz will write a log to that internal path. The path corresponds with the external path:
+~/IOTstack/volumes/domoticz/domoticz.log
+
Note that this log is persistent. In other words, it will survive container restarts. This means you are responsible for pruning it from time to time. The Unix tradition for pruning logs is:
+$ cd ~/IOTstack/volumes/domoticz/
+$ cat /dev/null | sudo tee domoticz.log
+
If, instead, you decide to delete the log file, you should stop the container first:
+$ cd ~/IOTstack
+$ docker-compose down domoticz
+$ sudo rm ./volumes/domoticz/domoticz.log
+$ docker-compose up -d domoticz
+
EXTRA_CMD_ARG=
This is disabled by default. It can be enabled and used to override the default parameters and pass command-line parameters of your choosing to Domoticz.
+The service definition includes an x-devices:
clause. The x-
prefix has the same effect as commenting-out the entire clause. If you wish to map an external device into the container:
x-
prefix.Recreate the container:
+$ cd ~/IOTstack
+$ docker-compose up -d domoticz
+
lscr.io/linuxserver/domoticz:latest
image. The current service definition uses the domoticz/domoticz:stable
image.The location of the persistent store has changed, as has its relationship to the internal path:
+service definition | +persistent store | +internal path | +
---|---|---|
older | +~/IOTstack/volumes/domoticz/data | +config | +
current | +~/IOTstack/volumes/domoticz | +/opt/domoticz/userdata | +
If you have have been using the older service definition and wish to upgrade to the current service definition, you can try migrating like this:
+$ cd ~/IOTstack/volumes
+$ sudo mv domoticz domoticz.old
+$ sudo cp -a domoticz.old/data domoticz
+
The web interface is available at "your_ip":8889
Dozzle is a small, lightweight application with a web-based interface for monitoring Docker logs. It doesn't store any log files; it is for live monitoring of your container logs only.
+ + + + + + + + + + + + + +Duckdns is a free public DNS service that provides you with a domain name you +can update to match your dynamic IP-address.
+This container automates the process to keep the duckdns.org domain updated +when your IP-address changes.
+First, register an account, add your subdomain and get your token from +http://www.duckdns.org/
+Either edit ~/IOTstack/docker-compose.yml
or create a file
+~/IOTstack/docker-compose.override.yml
. Place your Duckdns token and
+subdomain name (without .duckdns.org) there:
version: '3.6'
+services:
+ duckdns:
+ environment:
+ TOKEN: your-duckdns-token
+ SUBDOMAINS: subdomain
+
Observe that at least the initial update is successful:
+$ cd ~/IOTstack
+$ docker-compose up -d duckdns
+$ docker-compose logs -f duckdns
+...SNIP...
+duckdns | Sat May 21 11:01:00 UTC 2022: Your IP was updated
+...SNIP...
+(ctrl-c to stop following the log)
+
If there is a problem, check that the resulting effective configuration of +'duckdns:' looks OK: +
$ cd ~/IOTstack && docker-compose config
+
Example public/private IPs and domains
+flowchart
+I([Internet])
+G("Router\npublic IP: 52.85.51.71\nsubdomain.duckdns.org")
+R(Raspberry pi\nprivate IP: 192.168.0.100\nprivate_subdomain.duckdns.org)
+I --- |ISP| G --- |LAN| R
+As a public DNS server, Duckdns is not meant to be used for private IPs. It's +recommended that for resolving internal LAN IPs you use the Pi +Hole container or run a dedicated DNS server.
+That said, it's possible to update a Duckdns subdomain to your private LAN IP. +This may be convenient if you have devices that don't support mDNS (.local) or +don't want to run Pi-hole. This is especially useful if you can't assign a +static IP to your RPi. No changes to your DNS resolver settings are needed.
+First, as for the public subdomain, add the domain name to your Duckdns account
+by logging in from their homepage. Then add a PRIVATE_SUBDOMAINS
variable
+indicating this subdomain:
version: '3.6'
+services:
+ duckdns:
+ environment:
+ TOKEN: ...
+ SUBDOMAINS: ...
+ PRIVATE_SUBDOMAINS: private_subdomain
+
ESPHome is a system to control your microcontrollers by simple yet powerful configuration files and control them remotely through Home Automation systems.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 |
|
Notes:
+x-
prefix on the x-ports
clause has the same effect as commenting-out lines 10 and 11. It serves the twin purposes of documenting the fact that the ESPHome container uses port 6052 and minimising the risk of port number collisions.If you select ESPHome in the IOTstack menu, as well as adding the service definition to your compose file, the menu:
+/etc/udev/rules.d
.~/IOTstack/.env
for the presence of the ESPHOME_USERNAME
and initialises it to the value esphome
if it is not found.~/IOTstack/.env
for the presence of the ESPHOME_PASSWORD
and initialises it to a random value if it is not found.If you prefer to avoid the menu, you can install ESPHome like this:
+Be in the correct directory:
+$ cd ~/IOTstack
+
If you are on the "master" branch, add the service definition like this:
+$ sed -e "s/^/ /" ./.templates/esphome/service.yml >>docker-compose.yml
+
Alternatively, if you are on the "old-menu" branch, do this:
+$ cat ./.templates/esphome/service.yml >>docker-compose.yml
+
Replace «username»
and «password»
in the following commands with values of your choice and then run the commands:
$ echo "ESPHOME_USERNAME=«username»” >>.env
+$ echo "ESPHOME_PASSWORD=«password»" >>.env
+
This initialises the required environment variables. Although the username defaults to esphome
, there is no default for the password. If you forget to set a password, docker-compose
will remind you when you try to start the container:
error while interpolating services.esphome.environment.[]: \
+ required variable ESPHOME_PASSWORD is missing a value: \
+ eg echo ESPHOME_PASSWORD=ChangeMe >>~/IOTstack/.env
+
The values of the username and password variables are applied each time you start the container. In other words, if you decide to change these credentials, all you need to do is edit the .env
file and “up” the container.
Copy the UDEV rules file into place and ensure it has the correct permissions:
+$ sudo cp ./.templates/esphome/88-tty-iotstack-esphome.rules /etc/udev/rules.d/
+$ sudo chmod 644 /etc/udev/rules.d/88-tty-iotstack-esphome.rules
+
ESPHome provides a number of methods for provisioning an ESP device. These instructions focus on the situation where the device is connected to your Raspberry Pi via a USB cable.
+To start the container:
+$ cd ~/IOTstack
+$ docker-compose up -d esphome
+
Tip:
+You can always retrieve your ESPHome login credentials from the .env
file. For example:
$ grep "^ESPHOME_" .env
+ESPHOME_USERNAME=esphome
+ESPHOME_PASSWORD=8AxXG5ZVsO4UGTMt
+
Connect your ESP device to one of your Raspberry Pi’s USB ports. You need to connect the device while the ESPHome container is running so that the UDEV rules file can propagate the device (typically /dev/ttyUSBn
) into the container.
So long as the container is running, you can freely connect and disconnect ESP devices to your Raspberry Pi’s USB ports, and the container will keep “in sync”.
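+If you want to confirm that the Raspberry Pi itself has recognised the adapter, the kernel log usually shows the device being attached; for example:
+$ sudo dmesg | grep -iE "ttyUSB|ttyACM"
+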
+Launch your browser. For maximum flexibility, ESPHome recommends browsers that support WebSerial, like Google Chrome or Microsoft Edge.
+Connect to your Raspberry Pi on port 6052 (reference point 🄰 in the following screen shot):
+ +You can use your Raspberry Pi’s:
+raspberrypi.local
);Enter your ESPHome credentials at 🄱 and click Login.
+Click either of the + New Device buttons 🄲:
+ +Read the dialog and then click Continue 🄳:
+ +Give the configuration a name at 🄴:
+ +In the fields at 🄵, enter the Network Name (SSID) and password (PSK) of the WiFi network that you want your ESP devices to connect to when they power up.
+++The WiFi fields are only displayed the very first time you set up a device. Thereafter, ESPHome assumes all your devices will use the same WiFi network.
+
Click “Next” 🄶.
+Select the appropriate SoC (System on a Chip) type for your device. Here, I am using a generic ESP32 at 🄷:
+ +Clicking on the appropriate line proceeds to the next step.
+You can either make a note of the encryption key or, as is explained in the dialog, defer that until you actually need it for Home Assistant. Click “Install” 🄸.
+ +The primary reason for running ESPHome as a container in IOTstack is so you can program ESP devices attached to your Raspberry Pi. You need to tell ESPHome what you are doing by selecting “Plug into the computer running ESPHome Dashboard” 🄹:
+ +If all has gone well, your device will appear in the list. Select it 🄺:
+ +If, instead, you see the window below, it likely means you did not connect your ESP device while the ESPHome container was running:
+ +Try disconnecting and reconnecting your ESP device, and waiting for the panel 🄺 to refresh. If that does not cure the problem then it likely means the UDEV rules are not matching on your particular device for some reason. You may need to consider privileged mode.
+The container will begin the process of compiling the firmware and uploading it to your device. The first time you do this takes significantly longer than second-or-subsequent builds, mainly because the container downloads almost 2GB of data.
+ +The time to compile depends on the speed of your Raspberry Pi hardware (ie a Raspberry Pi 5 will be significantly faster than a model 4, than a model 3). Be patient!
+When the progress log 🄻 implies the process has been completed, you can click Stop 🄼 to dismiss the window.
+Assuming normal completion, your ESP device should show as “Online” 🄽. You can edit or explore the configuration using the “Edit” and “⋮” buttons.
+ +If ESPHome misbehaves or your early experiments leave a lot of clutter behind, and you decide it would be best to start over with a clean installation, run the commands below:
+$ cd ~/IOTstack
+$ docker-compose down esphome
+$ sudo rm -rf ./volumes/esphome
+$ docker-compose up -d esphome
+
Notes:
+sudo rm
. Double-check the command before you press enter.The sudo rm
may seem to take longer than you expect. Do not be concerned. ESPHome downloads a lot of data which it stores at the hidden path:
/IOTstack/volumes/esphome/config/.esphome
+
A base install has more than 13,000 files and over 3,000 directories. Even on a solid state disk, deleting that many directory entries takes time!
+The service definition contains the following lines:
+14 +15 |
|
Those lines assume the presence of a rules file at:
+/etc/udev/rules.d/88-tty-iotstack-esphome.rules
+
That file is copied into place automatically if you use the IOTstack menu to select ESPHome. It should also have been copied if you installed ESPHome manually.
+What the rules file does is to wait for you to connect any USB device which maps to a major device number of 188. That includes most (hopefully all) USB-to-serial adapters that are found on ESP dev boards, or equivalent standalone adapters such as those made by Future Technology Devices International (FTDI) and Silicon Laboratories Incorporated where you typically connect jumper wires to the GPIO pins which implement the ESP's primary serial interface.
+Whenever you connect such a device to your Raspberry Pi, the rules file instructs the ESPHome container to add a matching node. Similarly, when you remove such a device, the rules file instructs the ESPHome container to delete the matching node. The container gains the ability to access the USB device (the ESP) via the device_cgroup_rules
clause.
You can check whether a USB device is known to the container by running:
+$ docker exec esphome ls /dev
+
The mechanism is not 100% robust. In particular, it will lose synchronisation if the system is rebooted, or the container is started when a USB device is already mounted. Worst case should be the need to unplug then re-plug the device, after which the container should catch up.
+The UDEV rules "fire" irrespective of whether or not the ESPHome container is actually running. All that happens if the container is not running is an error message in the system log. However, if you decide to remove the ESPHome container, you should remove the rules file by hand:
+$ sudo rm /etc/udev/rules.d/88-tty-iotstack-esphome.rules
+
The UDEV rules approach uses the principle of least privilege but it relies upon an assumption about how ESP devices represent themselves when connected to a Raspberry Pi.
+If you encounter difficulties, you can consider trying this instead:
+Edit the service definition so that it looks like this:
+14 +15 +16 |
|
The x-
prefix has the effect of commenting-out lines 14 and 15, making it easy to restore them later.
Start the container:
+$ cd ~/IOTstack
+$ docker-compose up -d esphome
+
The privileged
flag gives the container unrestricted access to all of /dev
. The container runs as root so this is the same as granting any process running inside the ESPHome container full and unrestricted access to all corners of your hardware platform, including your mass storage devices (SD, HD, SSD). You should use privileged mode sparingly and in full knowledge that it is entirely at your own risk!
You can keep ESPHome up-to-date with routine “pull” commands:
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune -f
+
If a pull
downloads a more-recent image for ESPHome, the subsequent up
will (logically) disconnect any connected ESP device from the container.
The same will happen if you “down” and “up” the ESPHome container, or reboot the Raspberry Pi, while an ESP device is physically connected to the Raspberry Pi.
+++In every case, the device will still be known to the Raspberry Pi, just not the ESPHome container. In a logical sense, the container is “out of sync” with the host system.
+
If this happens, disconnect and reconnect the device. The UDEV rule will “fire” and propagate the device back into the running container.
+ + + + + + + + + + + + + +This is a testing container
+I tried it however the container keeps restarting docker logs espruinohub
I get "BLE Broken?" but could just be i dont have any BLE devices nearby
web interface is on "{your_Pis_IP}:1888"
+see EspruinoHub#status--websocket-mqtt--espruino-web-ide for other details.
+there were no recommendations for persistent data volumes. so docker-compose down
may destroy all you configurations so use docker-compose stop
in stead
Please check existing issues if you encounter a problem, and then open a new issue if your problem has not been reported.
+ + + + + + + + + + + + + +When you have logged into Grafana (default user/pass: admin/admin), you have +to add a data source to be used for the graphs.
+Select Data Sources
-> Add data source
-> InfluxDB
.
Set options:
+http://influxdb:8086
telegraf
nodered
nodered
Grafana documentation contains a list of +settings. +Settings are described in terms of how they appear in ".ini" files.
+Grafana configuration is usually done in grafana.ini, but when used via +docker as the IOTstack does, it should be configured using environment +variables.
+Edit docker-compose.yml
and find grafana:
and under it
+environment:
this is where you can place the ini-options, but formatted as:
+
- GF_<SectionName>_<KeyName>=<value>
+
Alternatively, you can edit ~/IOTstack/services/grafana/grafana.env
+instead and add the lines directly there, but without the leading dash:
+GF_<SectionName>_<KeyName>=<value>
+For any changes to take effect you need to recreate the Grafana container:
+$ docker-compose up -d grafana
+
Change the right hand side to your own +timezone:
+ - TZ=Etc/UTC
+
To allow anonymous logins add:
+ - GF_AUTH_ANONYMOUS_ENABLED=true
+
If you do not change anything then, when you bring up the stack and use a browser to connect to your Raspberry Pi on port 3000, Grafana will:
+Thereafter, you will login as "admin" with whatever password you chose. You can change the administrator's password as often as you like via the web UI (profile button, change password tab).
+This default operation can be changed by configuration options. They will have +any effect only if Grafana has just been added to the stack, but has never +been launched. Thus, if the folder ~/IOTstack/volumes/grafana exists, Grafana +has already been started, and adding and changing these options will not +have any effect.
+To customize, editing the file as describe above, add the following lines under
+the environment:
clause. For example, to set the administrative username to be "maestro" with password "123456":
- GF_SECURITY_ADMIN_USER=maestro
+ - GF_SECURITY_ADMIN_PASSWORD=123456
+
If you change the default password, Grafana will not force you to change the +password on first login but you will still be able to change it via the web UI.
+As a summary, the environment variables only take effect if you set them up before Grafana is launched for the first time:
+GF_SECURITY_ADMIN_USER
has a default value of "admin". You can explicitly set it to "admin" or some other value. Whatever option you choose then that's the account name of Grafana's administrative user. But choosing any value other than "admin" is probably a bad idea.GF_SECURITY_ADMIN_PASSWORD
has a default value of "admin". You can explicitly set it to "admin" or some other value. If its value is "admin" then you will be forced to change it the first time you login to Grafana. If its value is something other than "admin" then that will be the password until you change it via the web UI.To set an options with a space, you must enclose the whole value in quotes:
+ - "GF_AUTH_ANONYMOUS_ORG_NAME=Main Org."
+
Assuming Grafana is started, run:
+$ docker exec grafana grafana cli admin reset-admin-password «NEWPASSWORD»
+
where «NEWPASSWORD»
is the value of your choice.
Note:
+GF_SECURITY_ADMIN_USER
to be something other than "admin", the password change will be applied to that username. In other words, in the docker exec
command above, the two references to "admin" are referring to the administrator's account, not the username of the administrator's account. Run the command "as is". Do not replace "admin" with the username of the administrator's account."I made a bit of a mess with Grafana. First time user. Steep learning curve. False starts, many. Mistakes, unavoidable. Been there, done that. But now I really need to start from a clean slate. And, yes, I understand there is no undo for this."
+Begin by stopping Grafana:
+$ cd ~/IOTstack
+$ docker-compose down grafana
+
++see also if downing a container doesn't work
+
You have two options:
+Destroy your settings and dashboards but retain any plugins you may have installed:
+$ sudo rm ~/IOTstack/volumes/grafana/data/grafana.db
+
Nuke everything (triple-check this command before you hit return):
+$ sudo rm -rf ~/IOTstack/volumes/grafana/data
+
This is where you should edit docker-compose.yml or +~/IOTstack/services/grafana/grafana.env to correct any problems (such as +choosing an administrative username other than "admin").
+When you are ready, bring Grafana back up again:
+$ cd ~/IOTstack
+$ docker-compose up -d grafana
+
Grafana will automatically recreate everything it needs. You will be able to login as "admin/admin" (or the credentials you set using GF_SECURITY_ADMIN_USER
and GF_SECURITY_ADMIN_PASSWORD
).
The web UI can be found on:
+"your_ip":8882
"your_ip":8883
From the Heimdall website:
+++Heimdall Application Dashboard is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like. There are no iframes here, no apps within apps, no abstraction of APIs. if you think something should work a certain way, it probably does.
+
Within the context of IOTstack, the Heimdall Application Dashboard can help you organize your deployed services.
+ + + + + + + + + + + + + +Home Assistant is a home automation platform. It is able to track and control all devices at your home and offer a platform for automating control.
+There are two versions of Home Assistant:
+Each version:
+Home Assistant Container runs as a single Docker container, and doesn't support all the features that Supervised Home Assistant does (such as add-ons). Supervised Home Assistant runs as a collection of Docker containers under its own orchestration.
+The only method supported by IOTstack is Home Assistant Container.
+++To understand why, see about Supervised Home Assistant.
+
If Home Assistant Container will not do what you want then, basically, you will need two Raspberry Pis:
+Home Assistant (Container) can be found in the Build Stack
menu. Selecting it in this menu results in a service definition being added to:
~/IOTstack/docker-compose.yml
+
The normal IOTstack commands apply to Home Assistant Container such as:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
In order to be able to use BT & BLE devices from HA integrations, make sure that Bluetooth is enabled:
+$ hciconfig
+hci0: Type: Primary Bus: UART
+ BD Address: DC:89:FB:A6:32:4B ACL MTU: 1021:8 SCO MTU: 64:1
+ UP RUNNING
+ RX bytes:2003 acl:0 sco:0 events:159 errors:0
+ TX bytes:11583 acl:0 sco:0 commands:159 errors:0
+
The "UP" in the third line of output indicates that Bluetooth is enabled. If Bluetooth is not enabled, check:
+$ grep "^AutoEnable" /etc/bluetooth/main.conf
+AutoEnable=true
+
If AutoEnable
is either missing or not set to true
, then:
Use sudo
to and your favouring text editor to open:
/etc/bluetooth/main.conf
+
Find AutoEnable
and make it true
.
++If
+AutoEnable
is missing, it needs to be added to the[Policy]
section.
Reboot your Raspberry Pi.
+See also: Scribles: Auto Power On Bluetooth Adapter on Boot-up.
+Although the Home Assistant documentation does not mention this, it is possible that you may also need to make the following changes to the Home Assistant service definition in your docker-compose.yml
:
Add the following mapping to the volumes:
clause:
- /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket
+
Add the following devices:
clause:
devices:
+ - "/dev/serial1:/dev/ttyAMA0"
+ - "/dev/vcio:/dev/vcio"
+ - "/dev/gpiomem:/dev/gpiomem"
+
Notes:
+/dev/ttyAMA0
meant "the serial interface" on Raspberry Pis. Subsequently, it came to mean "the Bluetooth interface" where Bluetooth support was present. Now, /dev/serial1
is used to mean "the Raspberry Pi's Bluetooth interface". The example above maps that to the internal device /dev/ttyAMA0
because that is probably what the container expects. There are no guarantees and you may need to experiment with internal device names.Some HA integrations (e.g google assistant) require your HA API to be +accessible via https with a valid certificate. You can configure HA to do this: +docs / +guide +or use a reverse proxy container, as described below.
+The linuxserver Secure Web Access Gateway container +(swag) (Docker hub +docs) will automatically generate a +SSL-certificate, update the SSL certificate before it expires and act as a +reverse proxy.
+http://raspberrypi.local:8123/
(assuming
+your RPi hostname is raspberrypi)Add swag to ~/IOTstack/docker-compose.yml beneath the services:
-line:
swag:
+ image: ghcr.io/linuxserver/swag
+ cap_add:
+ - NET_ADMIN
+ environment:
+ - PUID=1000
+ - PGID=1000
+ - TZ=${TZ:-Etc/UTC}
+ - URL=<yourdomain>.duckdns.org
+ - SUBDOMAINS=wildcard
+ - VALIDATION=duckdns
+ - DUCKDNSTOKEN=<token>
+ - CERTPROVIDER=zerossl
+ - EMAIL=<e-mail> # required when using zerossl
+ volumes:
+ - ./volumes/swag/config:/config
+ ports:
+ - 443:443
+ restart: unless-stopped
+
Replace the bracketed values. Do NOT use any "-characters to enclose the values.
+Start the swag container, this creates the file to be edited in the next step:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
Check it starts up OK: docker-compose logs -f swag
. It will take a minute or two before it finally logs "Server ready".
Enable reverse proxy for raspberrypi.local
. homassistant.*
is already by default. and fix homeassistant container name ("upstream_app"):
$ cd ~/IOTstack
+$ sed -e 's/server_name/server_name *.local/' \
+ volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf.sample \
+ > volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf
+
Forward to correct IP when target is a container running in "network_mode: + host" (like Home Assistant does):
++
cd ~/IOTstack
+cat << 'EOF' | sudo tee volumes/swag/config/custom-cont-init.d/add-host.docker.internal.sh
+#!/bin/sh
+DOCKER_GW=$(ip route | awk 'NR==1 {print $3}')
+
+sed -i -e "s/upstream_app .*/upstream_app ${DOCKER_GW};/" \
+ /config/nginx/proxy-confs/homeassistant.subdomain.conf
+EOF
+sudo chmod u+x volumes/swag/config/custom-cont-init.d/add-host.docker.internal.sh
+
(This needs to be copy-pasted/entered as-is, ignore any "> "-prefixes printed +by bash)
+(optional) Add reverse proxy password protection if you don't want to rely + on the HA login for security, doesn't affect API-access:
+$ cd ~/IOTstack
+$ sed -i -e 's/#auth_basic/auth_basic/' \
+ volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf
+$ docker-compose exec swag htpasswd -c /config/nginx/.htpasswd anyusername
+
Add use_x_forwarded_for
and trusted_proxies
to your homeassistant http
+ config. The configuration
+ file is at volumes/home_assistant/configuration.yaml
For a default install
+ the resulting http-section should be:
http:
+ use_x_forwarded_for: true
+ trusted_proxies:
+ - 192.168.0.0/16
+ - 172.16.0.0/12
+ - 10.77.0.0/16
+
Refresh the stack: cd ~/IOTstack && docker-compose stop && docker-compose
+ up -d
(again may take 1-3 minutes for swag to start if it recreates
+ certificates)
http://raspberrypi.local:8123/
(assuming your RPi hostname is
+ raspberrypi)Test the reverse proxy https is working correctly:
+ https://raspberrypi.local/
(browser will issue a warning about wrong
+ certificate domain, as the certificate is issued for you duckdns-domain, we
+ are just testing)
Or from the command line in the RPi:
+$ curl --resolve homeassistant.<yourdomain>.duckdns.org:443:127.0.0.1 \
+ https://homeassistant.<yourdomain>.duckdns.org/
+
(output should end in if (!window.latestJS) { }</script></body></html>
)
And finally test your router forwards correctly by accessing it from
+ outside your LAN(e.g. using a mobile phone):
+ https://homeassistant.<yourdomain>.duckdns.org/
Now the certificate
+ should work without any warnings.
IOTstack used to offer a menu entry leading to a convenience script that could install Supervised Home Assistant. That script stopped working when Home Assistant changed their approach. The script's author made it clear that script's future was bleak so the affordance was removed from IOTstack.
+For a time, you could manually install Supervised Home Assistant using their installation instructions for advanced users. Once you got HA working, you could install IOTstack, and the two would (mostly) happily coexist.
+The direction being taken by the Home Assistant folks is to supply a ready-to-run image for your Raspberry Pi. They still support the installation instructions for advanced users but the requirements are very specific. In particular:
+++Debian Linux Debian 11 aka Bullseye (no derivatives)
+
Raspberry Pi OS is a Debian derivative and it is becoming increasingly clear that the "no derivatives" part of that requirement must be taken literally and seriously. Recent examples of significant incompatibilities include:
+grub
(GRand Unified Bootloader). The Raspberry Pi does not use grub
but the change is actually about forcing Control Groups version 1 when the Raspberry Pi uses version 2.systemd-resolved
. This is a DNS resolver which claims port 53. That means you can't run your own DNS service like PiHole, AdGuardHome or BIND9 as an IOTstack container. Because of the self-updating nature of Supervised Home Assistant, your Raspberry Pi might be happily running Supervised Home Assistant plus IOTstack one day, and suddenly start misbehaving the next day, simply because Supervised Home Assistant assumed it was in total control of your Raspberry Pi.
+If you want Supervised Home Assistant to work, reliably, it really needs to be its own dedicated appliance. If you want IOTstack to work, reliably, it really needs to be kept well away from Supervised Home Assistant. If you want both Supervised Home Assistant and IOTstack, you really need two Raspberry Pis.
+ + + + + + + + + + + + + +Homebridge documentation has a comprehensive configuration guide which you are encouraged to read.
+Homebridge is configured using environment variables. In IOTstack:
+docker-compose.yml
.If you are running old menu (old-menu branch), environment variables are at the path:
+~/IOTstack/services/homebridge/homebridge.env
+
In either case, you apply changes by editing the relevant file (docker-compose.yml
or homebridge.env
) and then:
$ cd ~/IOTstack
+$ docker-compose up -d homebridge
+
"avahi", "multicast DNS", "Rendezvous", "Bonjour" and "ZeroConf" are synonyms.
+Current Homebridge images disable avahi services by default. The Homebridge container runs in "host mode" which means it can participate in multicast traffic flows. If you have a plugin that requires avahi, it can enabled by setting the environment variable:
+ENABLE_AVAHI=1
+
The web UI for Homebridge can be found on "your_ip":8581
. You can change the port by adjusting the environment variable:
HOMEBRIDGE_CONFIG_UI_PORT=8581
+
The web UI can be found on "your_ip":8881
From the Homer README:
+++A dead simple static HOMepage for your servER to keep your services on hand, from a simple
+yaml
configuration file.
You can find an example of the config.yml
file here.
Within the context of IOTstack, Homer can help you organize your deployed services.
+ + + + + + + + + + + + + +InfluxDB is a time series database. What that means is time is the primary key of each table.
+Another feature of InfluxDB is the separation of attributes into:
+InfluxDB has configurable aggregation and retention policies allowing measurement resolution reduction, storing all added data points for recent data and only aggregated values for older data.
+Note:
+influxdb:1.8
image. Substituting the :latest
tag will get you InfluxDB version 2 and will create a mess.All InfluxDB settings can be applied using environment variables. Environment variables override any settings in the InfluxDB configuration file:
+Under "new menu" (master branch), environment variables are stored inline in
+~IOTstack/docker-compose.yml
+
Under "old menu", environment variables are stored in:
+~/IOTstack/services/influxdb/influxdb.env
+
Whenever you change an environment variable, you activate it like this:
+$ cd ~/IOTstack
+$ docker-compose up -d influxdb
+
The default service definition provided with IOTstack exposes the following environment variables:
+TZ=Etc/UTC
set this to your local timezone. Do not use quote marks!INFLUXDB_HTTP_FLUX_ENABLED=false
set this true
if you wish to use Flux queries rather than InfluxQL:
++At the time of writing, Grafana queries use InfluxQL.
+
INFLUXDB_REPORTING_DISABLED=false
InfluxDB activates phone-home reporting by default. This variable disables it for IOTstack. You can activate it if you want your InfluxDB instance to send reports to the InfluxDB developers.
INFLUXDB_MONITOR_STORE_ENABLED=FALSE
disables automatic creation of the _internal
database. This database stores metrics about InfluxDB itself. The database is incredibly busy. Side-effects of enabling this feature include increased wear and tear on SD cards and, occasionally, driving CPU utilisation through the roof and generally making your IOTstack unstable.
++To state the problem in a nutshell: do you want Influx self-metrics, or do you want a usable IOTstack? You really can't have both. See also issue 19543.
+
Authentication variables:
+INFLUXDB_HTTP_AUTH_ENABLED=false
INFLUX_USERNAME=dba
INFLUX_PASSWORD=supremo
Misunderstanding the purpose and scope of these variables is a common mistake made by new users. Please do not guess! Please read Authentication before you enable or change any of these variables. In particular, dba
and supremo
are not defaults for database access.
UDP data acquisition variables:
+INFLUXDB_UDP_ENABLED=false
INFLUXDB_UDP_BIND_ADDRESS=0.0.0.0:8086
INFLUXDB_UDP_DATABASE=udp
Read UDP support before making any decisions on these variables.
+influxdb.conf
¶A lot of InfluxDB documentation and help material on the web refers to the influxdb.conf
configuration file. Such instructions are only appropriate when InfluxDB is installed natively.
When InfluxDB runs in a container, changing influxdb.conf
is neither necessary nor recommended. Anything that you can do with influxdb.conf
can be done with environment variables.
However, if you believe that you have a use case that absolutely demands the use of influxdb.conf
then you can set it up like this:
Execute the following commands:
+$ cd ~/IOTstack
+$ docker cp influxdb:/etc/influxdb/influxdb.conf .
+
Edit docker-compose.yml
, find the influxdb
service definition, and add the following line to the volumes:
directive:
- ./volumes/influxdb/config:/etc/influxdb
+
Execute the following commands:
+$ docker-compose up -d influxdb
+$ sudo mv influxdb.conf ./volumes/influxdb/config/
+$ docker-compose restart influxdb
+
At this point, you can start making changes to:
+~/IOTstack/volumes/influxdb/config/influxdb.conf
+
You can apply changes by sending a restart
to the container (as above). However, from time to time you may find that your settings disappear or revert to defaults. Make sure you keep good backups.
By default, InfluxDB runs in non-host mode and respects the following port-mapping directive in its service definition:
+ports:
+ - "8086:8086"
+
If you are connecting from:
+another container (eg Node-RED or Grafana) that is also running in non-host mode, use:
+http://influxdb:8086
+
In this context, 8086
is the internal (right hand side) port number.
either the Raspberry Pi itself or from another container running in host mode, use:
+http://localhost:8086
+
In this context, 8086
is the external (left hand side) port number.
a different host, you use either the IP address of the Raspberry Pi or its fully-qualified domain name. Examples:
+http://192.168.1.10:8086
+http://raspberrypi.local:8086
+http://iot-hub.mydomain.com:8086
+
In this context, 8086
is the external (left hand side) port number.
You can open the influx
CLI interactive shell by:
$ docker exec -it influxdb influx
+Connected to http://localhost:8086 version 1.8.10
+InfluxDB shell version: 1.8.10
+>
+
The command prompt in the CLI is >
. While in the CLI you can type commands such as:
> help
+> create database MYTESTDATABASE
+> show databases
+> USE MYTESTDATABASE
+> show measurements
+> show series
+> select * from «someMeasurement» where «someCriterion»
+
You may also wish to set retention policies on your databases. This is an example of creating a database named "mydb" where any data older than 52 weeks is deleted:
+> create database mydb
+
+> show retention policies on mydb
+name duration shardGroupDuration replicaN default
+---- -------- ------------------ -------- -------
+autogen 0s 168h0m0s 1 true
+
+> alter retention policy "autogen" on "mydb" duration 52w shard duration 1w replication 1 default
+
+> show retention policies on mydb
+name duration shardGroupDuration replicaN default
+---- -------- ------------------ -------- -------
+autogen 8736h0m0s 168h0m0s 1 true
+
To exit the CLI, either press Control+d or type:
+> exit
+$
+
Consider adding the following alias to your .bashrc
:
alias influx='docker exec -it influxdb influx -precision=rfc3339'
+
With that alias installed, typing influx
and pressing return, gets you straight into the influx CLI. The -precision
argument tells the influx CLI to display dates in human-readable form. Omitting that argument displays dates as integer nanoseconds since 1970-01-01.
Note:
+This tutorial also assumes that you do not have any existing databases so it starts by creating two. One database will be provided with access controls but the other will be left alone so that the behaviour can be compared.
+However, you need to understand that enabling authentication in InfluxDB is all-or-nothing. If you have any existing InfluxDB databases, you will need to: define users, grant those users appropriate access to each existing database, and then add those credentials to every process that connects to InfluxDB.
+If you do not do this, your existing Node-Red flows, Grafana dashboards and other processes that write to or query your databases will stop working as soon as you activate authentication below.
+Create two databases named "mydatabase1" and "mydatabase2":
+$ influx
+> CREATE DATABASE "mydatabase1"
+> CREATE DATABASE "mydatabase2"
+
++Typing
+influx
didn't work? See useful alias above.
Define an administrative user. In this example, that user is "dba" (database administrator) with the password "supremo":
+> CREATE USER "dba" WITH PASSWORD 'supremo' WITH ALL PRIVILEGES
+
Define some garden-variety users:
+> CREATE USER "nodered_user" WITH PASSWORD 'nodered_user_pw'
+> CREATE USER "grafana_user" WITH PASSWORD 'grafana_user_pw'
+
You can define any usernames you like. The reason for using "nodered_" and "grafana_" prefixes in these examples is because those are common candidates in an IOTstack environment. The reason for the "_user" suffixes is to make it clear that a username is separate and distinct from a container name.
+The user "dba" already has access to everything but, for all other users, you need to state which database(s) the user can access, and whether that access is:
+> GRANT WRITE ON "mydatabase1" TO "nodered_user"
+> GRANT READ ON "mydatabase1" TO "grafana_user"
+
Once you have finished defining users and assigning access rights, drop out of the influx CLI:
+> exit
+$
+
Make sure you read the warning above, then edit the InfluxDB environment variables to enable this key:
+- INFLUXDB_HTTP_AUTH_ENABLED=true
+
Put the change into effect by "upping" the container:
+$ cd ~/IOTstack
+$ docker-compose up -d influxdb
+
+Recreating influxdb ... done
+
The up
causes docker-compose
to notice that the environment has changed, and to rebuild the container with the new settings.
Note: You should always wait for 30 seconds after a rebuild for InfluxDB to become available. Any time you see a message like this:
+Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp 127.0.0.1:8086: connect: connection refused
+Please check your connection settings and ensure 'influxd' is running.
+
it simply means that you did not wait long enough. Be patient!
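If you would rather not guess at the timing, you can poll the /ping endpoint until the engine responds (a sketch; press control+C to abandon the loop if something is genuinely wrong):
+$ until curl -sf -o /dev/null http://localhost:8086/ping ; do sleep 1 ; done
+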
+Start the influx CLI:
+$ influx
+
Unless you have also set up the INFLUX_USERNAME
and INFLUX_PASSWORD
environment variables (described later under Authentication Hints), your session will not be authenticated as any user so you will not be able to access either database:
> USE mydatabase1
+ERR: unable to parse authentication credentials
+DB does not exist!
+> USE mydatabase2
+ERR: unable to parse authentication credentials
+DB does not exist!
+
Authenticate as "nodered_user" and try again:
+> AUTH
+username: nodered_user
+password:
+> USE mydatabase1
+Using database mydatabase1
+> USE mydatabase2
+ERR: Database mydatabase2 doesn't exist. Run SHOW DATABASES for a list of existing databases.
+DB does not exist!
+
The "nodered_user" can access "mydatabase1" but not "mydatabase2". You will get similar behaviour for the "grafana_user" (try it).
+Authenticate as the "dba" and try again:
+> AUTH
+username: dba
+password:
+> USE mydatabase1
+Using database mydatabase1
+> USE mydatabase2
+Using database mydatabase2
+
The super-user can access both databases.
+To get a list of users:
+> SHOW USERS
+user admin
+---- -----
+dba true
+nodered_user false
+grafana_user false
+
To find out what privileges a user has on a database:
+> SHOW GRANTS FOR "nodered_user"
+database privilege
+-------- ---------
+mydatabase1 WRITE
+
To test grants, you can try things like this:
+AUTH
+username: nodered_user
+password:
+> USE "mydatabase1"
+Using database mydatabase1
+> INSERT example somefield=123
+
"nodered_user" has WRITE access to "mydatabase1".
+> SELECT * FROM example
+ERR: error authorizing query: nodered_user not authorized to execute statement 'SELECT * FROM example', requires READ on mydatabase1
+
"nodered_user" does not have READ access to "mydatabase1".
+Authenticate as "grafana_user" and try the query again:
+> AUTH
+username: grafana_user
+password:
+> SELECT * FROM example
+name: example
+time somefield
+---- ---------
+2020-09-19T01:41:09.6390883Z 123
+
"grafana_user" has READ access to "mydatabase1". Try an insertion as "grafana_user":
+> INSERT example somefield=456
+ERR: {"error":"\"grafana_user\" user is not authorized to write to database \"mydatabase1\""}
+
"grafana_user" does not have WRITE access to "mydatabase1".
+Change the privileges for "nodered_user" to ALL then try both an insertion and a query. Note that changing privileges requires first authenticating as "dba":
+> AUTH
+username: dba
+password:
+> GRANT ALL ON "mydatabase1" TO "nodered_user"
+> AUTH
+username: nodered_user
+password:
+> INSERT example somefield=456
+> SELECT * FROM example
+name: example
+time somefield
+---- ---------
+2020-09-19T01:41:09.6390883Z 123
+2020-09-19T01:42:36.85766382Z 456
+
"nodered_user" has both READ and WRITE access to "mydatabase1".
+Some inferences to draw from the above:
+INFLUXDB_HTTP_AUTH_ENABLED=true
is how authentication is activated and enforced. If it is false, all enforcement goes away (a handy thing to know if you lose passwords or need to recover from a mess).INFLUXDB_HTTP_AUTH_ENABLED
suggests, it applies to access via HTTP. This includes the influx CLI and processes like Node-Red and Grafana.Always keep in mind that the InfluxDB log is your friend:
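Because enforcement happens at the HTTP API, you can also test credentials from the command line with curl (a sketch; substitute your own host, port, usernames and passwords):
+$ curl -G "http://localhost:8086/query" -u grafana_user:grafana_user_pw --data-urlencode "q=SHOW DATABASES"
+$ curl -i -XPOST "http://localhost:8086/write?db=mydatabase1" -u nodered_user:nodered_user_pw --data-binary "example somefield=789"
+
Requests with missing or incorrect credentials are rejected with an authorisation error, mirroring the behaviour you saw in the influx CLI.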
+$ docker logs influxdb
+
After you enable authentication, there are a couple of ways of speeding-up your daily activities. You can pass the dba username and password on the end of the influx alias:
+$ influx -database mydatabase1 -username dba -password supremo
+
but this is probably sub-optimal because of the temptation to hard-code your dba password into scripts. An alternative is to enable these environment variables:
+- INFLUX_USERNAME=dba
+- INFLUX_PASSWORD=supremo
+
and then "up" the container as explained above to apply the changes.
+Misunderstandings about the scope and purpose of INFLUX_USERNAME
and INFLUX_PASSWORD
are quite common, so make sure you realise that the variables:
only supply values for the -username and -password parameters on the influx CLI command; and
play no other part in InfluxDB authentication.
and INFLUX_PASSWORD
added to the environment, the following two commands are identical:
$ influx -database mydatabase1 -username dba -password supremo
+$ influx -database mydatabase1
+
The INFLUX_USERNAME
and INFLUX_PASSWORD
variables also work if you start a shell into the InfluxDB container and then invoke the influx CLI from there:
$ docker exec -it influxdb bash
+# influx
+>
+
That is all the INFLUX_USERNAME
and INFLUX_PASSWORD
variables do.
To undo the steps in this tutorial, first set INFLUXDB_HTTP_AUTH_ENABLED=false
and then "up" influxdb. Then:
$ influx
+> DROP USER "dba"
+> DROP USER "nodered_user"
+> DROP USER "grafana_user"
+> DROP DATABASE "mydatabase1"
+> DROP DATABASE "mydatabase2"
+> exit
+
Assumptions:
+This tutorial uses the following aliases:
+influx
- explained earlier - see useful alias.DPS
which is the equivalent of:
$ docker ps --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}"
+
The focus is: what containers are running?
+DNET
which is the equivalent of:
$ docker ps --format "table {{.Names}}\t{{.Ports}}"
+
The focus is: what ports are containers using?
++Any container where no ports are listed is either exposing no ports or running in host mode.
+
Although both DPS
& DNET
invoke docker ps
, the formatting means the output usually fits on your screen without line wrapping.
All three aliases are installed by IOTstackAliases.
+$ DNET
+NAMES PORTS
+influxdb 0.0.0.0:8086->8086/tcp
+
Interpretation: Docker is listening on TCP port 8086, and is routing the traffic to the same port on the influxdb container. There is no mention of UDP.
+This tutorial uses the database name of "udp".
+$ influx
+> create database udp
+> exit
+$
+
Edit docker-compose.yml
to define a UDP port mapping (the second line in the ports
grouping below):
influxdb:
+ …
+ ports:
+ - "8086:8086"
+ - "8086:8086/udp"
+ …
+
Edit your docker-compose.yml
and change the InfluxDB environment variables to glue it all together:
environment:
+ - INFLUXDB_UDP_DATABASE=udp
+ - INFLUXDB_UDP_ENABLED=true
+ - INFLUXDB_UDP_BIND_ADDRESS=0.0.0.0:8086
+
+In this context, the IP address "0.0.0.0" means "listen on all of this host's interfaces" (analogous to the way "255.255.255.255" means "all hosts").
+$ cd ~/IOTstack
+$ docker-compose up -d influxdb
+
+Recreating influxdb ... done
+
The up
causes docker-compose
to notice that the environment has changed, and to rebuild the container with the new settings.
$ DNET
+NAMES PORTS
+influxdb 0.0.0.0:8086->8086/tcp, 0.0.0.0:8086->8086/udp
+
Interpretation: In addition to the TCP port, Docker is now listening on UDP port 8086, and is routing the traffic to the same port on the influxdb container.
+Check the log:
+$ docker logs influxdb
+
If you see a line like this:
+ts=2020-09-18T03:09:26.154478Z lvl=info msg="Started listening on UDP" log_id=0PJnqbK0000 service=udp addr=0.0.0.0:8086
+
then everything is probably working correctly. If you see anything that looks like an error message then you will need to follow your nose.
+Although the how-to is beyond the scope of this tutorial, you will need a process that can send "line format" payloads to InfluxDB using UDP port 8086.
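As a minimal sanity check from the Raspberry Pi itself (assuming you are using bash, which supports /dev/udp redirection, and kept the port mapping above), you can hand-craft a single line-protocol point:
+$ echo "udptest value=42" > /dev/udp/127.0.0.1/8086
+
The point should then show up as a udptest measurement in the "udp" database.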
+Once that is set up, you can inspect the results like this:
+$ influx -database udp
+> show measurements
+
If data is being received, you will get at least one measurement name. An empty list implies no data is being received.
+If you get at least one measurement name then you can inspect the data using:
+> select * from «measurement»
+
where «measurement»
is one of the names in the show measurements
list.
SSD drives have pretty good controllers that spread out writes, so this isn't really a concern for them. But if you store data on an SD card, flash wear may cause the card to fail prematurely. Flash memory has a limited number of erase-write cycles per physical block, and these blocks may be multiple megabytes. You can use sudo lsblk -D
to see how big the erase granularity is on your card. The goal is to avoid writing lots of small changes targeting the same physical blocks. Here are some tips to mitigate SD-card wear:
If your client supports a flush_interval option, use it, since batching combines multiple measurements into one write. You can also disable the query log and the HTTP request log:
+ - INFLUXDB_DATA_QUERY_LOG_ENABLED=false
+ - INFLUXDB_HTTP_LOG_ENABLED=false
+
This is especially important if you plan on having Grafana or Chronograf displaying up-to-date data on a dashboard, making queries all the time.
Sometimes you need to start the container without starting InfluxDB in order to access its maintenance tools, usually when InfluxDB crashes on startup.
+Add a new line below influxdb:
to your docker-compose.yml:
influxdb:
+ …
+ entrypoint: sleep infinity
+
Recreate the container using the new entrypoint:
+$ docker-compose up -d influxdb
+Recreating influxdb ... done
+
Now the container should start and you can get a shell to poke around and try the influx_inspect
command:
$ docker exec -it influxdb bash
+# influx_inspect
+Usage: influx_inspect [[command] [arguments]]
+
Once you have finished poking around, you should undo the change by removing the custom entrypoint and running up -d again. This returns the container to normal behaviour, where you can then test whether your fixes worked.
The container is pretty bare-bones by default. It is OK to install additional tools. Start by running:
+# apt update
+
and then use apt install
to add whatever you need. Packages you add will persist until the next time the container is re-created.
If you need to see the actual packets being sent to Influx for insertion into your database, you can set it up like this:
+$ docker exec influxdb bash -c 'apt update && apt install tcpdump -y'
+
That adds tcpdump
to the running container and, as noted above, that will persist until you re-create the container.
To capture traffic:
+$ docker exec influxdb tcpdump -i eth0 -s 0 -n -c 100 -w /var/lib/influxdb/capture.pcap dst port 8086
+
Breaking that down:
+-i eth0
is the container's internal virtual Ethernet network interface (attached to the internal bridged network)-s 0
means "capture entire packets"-n
means "do not try to resolve IP addresses to domain names-c 100
is optional and means "capture 100 packets then stop". If you omit this option, tcpdump
will capture packets until you press control+C.-w /var/lib/influxdb/capture.pcap
is the internal path to the file where captured packets are written. You can, of course, substitute any filename you like for capture.pcap
.dst port 8086
captures all packets where the destination port field is 8086, which is the InfluxDB internal port number.The internal path:
+/var/lib/influxdb/capture.pcap
+
maps to the external path:
+~/IOTstack/volumes/influxdb/data/capture.pcap
+
You can copy that file to another system where you have a tool like WireShark installed. WireShark will open the file and you can inspect packets and verify that the information being sent to InfluxDB is what you expect.
+Do not forget to clean-up any packet capture files:
+$ cd ~/IOTstack/volumes/influxdb/data
+$ sudo rm capture.pcap
+
Your Raspberry Pi is running full 64-bit Raspberry Pi OS Debian GNU/Linux 11 (bullseye).
It is not sufficient merely to enable the 64-bit kernel via /boot/config.txt; user-mode needs to be 64-bit capable as well. You must start from a full 64-bit image (see the quick check after this list).
+Grafana is your principle mechanism for creating dashboards based on data stored in InfluxDB 1.8.
+Node-RED, InfluxDB 1.8 and Grafana are all running in non-host mode on the same Docker instance, and that it is your intention to deploy InfluxDB 2 in non-host mode as well.
+InfluxDB 1.8 and InfluxDB 2 are both database management systems (DBMS), sometimes referred to as "engines", optimised for storage and retrieval of time-series data. InfluxDB 1.8 uses the term database to mean a collection of measurements. InfluxDB 2 uses the term bucket to mean the same thing.
+When an InfluxDB 1.8 database is migrated, it becomes an InfluxDB 2 bucket. You will see this change in terminology in various places, such as the InfluxDB-out node in Node-RED. When that node is set to:
+Version 1.x, the user interface has a "Database" field which travels with the connection. For example:
+This implies that you need one connection per database.
+Version 2.0, the user interface has a "Bucket" field which is independent of the connection. For example:
+This implies that you need one connection per engine. It is a subtle but important difference.
+The InfluxDB 2 service definition is added to your compose file by the IOTstack menu.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 |
|
As an alternative to using the menu, you can copy and paste the service definition into your compose file from the template at:
+~/IOTstack/.templates/influxdb2/service.yml
+
Edit the service definition in your compose file to change the following variables:
+TZ=
«country»/«city»
DOCKER_INFLUXDB_INIT_USERNAME=
«username»
This name becomes the administrative user. It is associated with your «password» and «token».
+DOCKER_INFLUXDB_INIT_PASSWORD=
«password»
Your «username» and «password» form your login credentials when you administer InfluxDB 2 using its web-based graphical user interface. The strength of your password is up to you.
+DOCKER_INFLUXDB_INIT_ORG=
«organisation»
An organisation name is required.
+DOCKER_INFLUXDB_INIT_BUCKET=
«bucket»
A default bucket name is required. The name does not matter because you won't actually be using it so you can accept the default of "mybucket". You can delete the unused bucket later if you want to be tidy.
+DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=
«token»
Although you can let InfluxDB 2 generate your access token for you, it will keep things simple if you generate your own. Here are some possible approaches:
+use a universally-unique ID:
+$ uuidgen
+4fef85b4-2f56-480f-b143-fa5cb6e8f18a
+
use GnuPG to generate a random string:
+$ gpg --gen-random -a 0 25
+bYS3EsnnY0AlRxJ2uk44Hzwm7GMKYu5unw==
+
use a password-generator of your choosing.
+Note:
+InfluxDB 2 operates in three distinct modes which are controlled by the DOCKER_INFLUXDB_INIT_MODE
environment variable. The table below summarises the variables and volumes mappings that need to be active in each mode.
If you have only just included the template service definition in your compose file and performed the required edits, then you can follow the initialisation process below.
+However, if you want to re-initialise the container, go to re-initialising InfluxDB 2.
+To initialise InfluxDB 2:
+Be in the correct directory (assumed throughout):
+$ cd ~/IOTstack
+
Start the InfluxDB 2 container:
+$ docker-compose up -d influxdb2
+
InfluxDB 2 will notice the following environment variable:
+DOCKER_INFLUXDB_INIT_MODE=setup
+
This instructs the container to initialise the database engine structures based on a combination of defaults and the values you provide via the other environment variables.
+Confirm that the InfluxDB 2 container is not in a restart loop and isn't reporting errors by using commands like:
+$ docker ps
+$ docker logs influxdb2
+
If you don't need to migrate any data from InfluxDB 1.8 you can go straight to running InfluxDB 2, otherwise follow the data-migration procedure instructions below.
+Successful migration depends on the following assumptions being true:
+The InfluxDB 1.8 container is running, and is based on the IOTstack service definition (or reasonable facsimile) at:
+~/IOTstack/.templates/influxdb/service.yml
+
To migrate your InfluxDB 1.8 data:
+Be in the correct directory (assumed throughout):
+$ cd ~/IOTstack
+
InfluxDB 1.8 runs as root and its persistent store is owned by root but not all files and folders in the persistent store are group or world readable. InfluxDB 2 runs as user ID 1000 (user "influxdb" inside the container). Because of this, you need to give InfluxDB 2 permission to read the InfluxDB 1.8 persistent store.
+It is not a good idea to interfere with a persistent store while a container is running so best practice is to stop InfluxDB 1.8 for long enough to make a copy of its persistent store:
+$ sudo rm -rf ./volumes/influxdb.migrate
+$ docker-compose down influxdb
+$ sudo cp -a ./volumes/influxdb ./volumes/influxdb.migrate
+$ docker-compose up -d influxdb
+$ sudo chown -R 1000:1000 ./volumes/influxdb.migrate/data
+
++see also if downing a container doesn't work
+
In words: the first command removes any copy left over from a previous attempt; the remaining commands stop InfluxDB 1.8, take a copy of its persistent store, start InfluxDB 1.8 again, and give ownership of the copy to user ID 1000. Always be extremely careful with any sudo rm
command. Check your work before you press return.Edit your compose file as per the "upgrade" column of Table 1. The changes you need to make are:
+Change the initialisation mode from setup
to upgrade
:
before editing:
+- DOCKER_INFLUXDB_INIT_MODE=setup
after editing:
+- DOCKER_INFLUXDB_INIT_MODE=upgrade
Activate the volume mapping to give InfluxDB 2 read-only access to the copy of the InfluxDB 1.8 persistent store that you made in step 2:
+before editing:
(line 20 of the service definition: the migration volume mapping, commented out)
after editing:
(line 20 of the service definition: the migration volume mapping, active)
Save your work but do not execute any docker-compose
commands.
InfluxDB 2 creates a "bolt" (lock) file to prevent accidental data-migrations. That file needs to be removed:
+$ rm ./volumes/influxdb2/data/influxd.bolt
+
The InfluxDB 2 container is still running. The following command causes the container to be recreated with the edits you made in step 3:
+$ docker-compose up -d influxdb2
+
InfluxDB 2 will notice the following environment variable:
+DOCKER_INFLUXDB_INIT_MODE=upgrade
+
This, combined with the absence of the "bolt" file, starts the migration process. You need to wait until the migration is complete. The simplest way to do that is to watch the size of the persistent store for InfluxDB 2 until it stops increasing. Experience suggests that the InfluxDB 2 persistent store will usually be a bit larger than InfluxDB 1.8. For example:
+reference size for an InfluxDB 1.8 installation:
+$ sudo du -sh ./volumes/influxdb
+633M ./volumes/influxdb
+
final size after migration to InfluxDB 2:
+$ sudo du -sh ./volumes/influxdb2
+721M ./volumes/influxdb2
+
Data migration is complete once the folder size stops changing.
+Proceed to running InfluxDB 2 below.
+The container now needs to be instructed to run in normal mode.
+Be in the correct directory (assumed throughout):
+$ cd ~/IOTstack
+
Edit your compose file as per the "(omitted)" column of Table 1. The changes are:
+Deactivate all DOCKER_INFLUXDB_INIT_
environment variables. After editing, the relevant lines should look like:
(lines 7-13 of the service definition, with every DOCKER_INFLUXDB_INIT_ variable deactivated)
Deactivate the volume mapping if it is active. After editing, the line should look like:
(line 20 of the service definition, with the migration volume mapping deactivated)
Save your work.
+The InfluxDB 2 container is still running. The following command causes the container to be recreated with the edits you have just made:
+$ docker-compose up -d influxdb2
+
The absence of an active DOCKER_INFLUXDB_INIT_MODE
variable places InfluxDB 2 into normal run mode.
If you have just performed a data migration, you can remove the copy of the InfluxDB 1.8 persistent store:
+$ sudo rm -rf ./volumes/influxdb.migrate
+
++always be extremely careful with any
+sudo rm
command. Always check your work before you press return.
If you need to start over from a clean slate:
+Be in the correct directory (assumed throughout):
+$ cd ~/IOTstack
+
Terminate the InfluxDB 2 container:
+$ docker-compose down influxdb2
+
++see also if downing a container doesn't work
+
Remove the persistent store:
+$ sudo rm -rf ./volumes/influxdb2
+
++always be extremely careful with any
+sudo rm
command. Always check your work before you press return.
Edit your compose file as per the "setup" column of Table 1. After editing, the relevant lines should look like this:
(lines 7-13 of the service definition, with the DOCKER_INFLUXDB_INIT_ variables active and DOCKER_INFLUXDB_INIT_MODE=setup)
Go to initialising InfluxDB 2.
+Launch a browser and connect it to port 8087 on your Raspberry Pi. For example:
+http://raspberrypi.local:8087
+
You can also use the IP address or domain name of your Raspberry Pi. In this context, 8087 is the external port number from the left hand side of the port mapping in the service definition:
(lines 14-15 of the service definition: the ports directive mapping "8087:8086")
Sign in to the InfluxDB 2 instance using your «username» and «password».
+Click on "Explore" in the left-hand tool strip. That is marked [A] in the screen shot. In the area marked [B] you should be able to see a list of the buckets that were migrated from InfluxDB 1.8 databases.
+In the screen shot, I clicked on other fields to create a query:
+You can explore your own tables using similar techniques.
+Grafana does not (yet) seem to have the ability to let you build Flux queries via point-and-click like you can with InfluxQL queries. Until Grafana gains that ability, it's probably a good idea to learn how to build Flux queries in InfluxDB, so you can copy-and-paste the Flux statements into Grafana.
+Once you have constructed a query in the "Query Builder", click the "Script Editor" button [H] to switch to the editor view.
+For this example, the query text is:
+from(bucket: "power/autogen")
+ |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
+ |> filter(fn: (r) => r["_measurement"] == "hiking2")
+ |> filter(fn: (r) => r["_field"] == "voltage")
+ |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
+ |> yield(name: "mean")
+
Two important things to note here are the bucket name ("power/autogen") and the measurement name ("hiking2"); you will need both when you point Node-RED and Grafana at InfluxDB 2 below.
+ +Assume you have an existing flow (eg a fairly standard 3-node flow) which is logging to an InfluxDB 1.8 database. Your goal is to modify the flow to log the same data to the recently-migrated InfluxDB 2 bucket.
+Start Node-RED if it is not running:
+$ cd ~/IOTstack
+$ docker-compose up -d nodered
+
Use a web browser to connect to your Node-RED instance.
+Drag a new InfluxDB-out node onto the canvas:
+Double-click the InfluxDB-out node to open it:
+ +Click the pencil icon [B] adjacent to the Server field:
+Set the URL [F] to point to your InfluxDB 2 instance:
+http://influxdb2:8086
+
++In this context, "influxdb2" is the container name and 8086 is the container's internal port. Node-RED communicates with InfluxDB 2 across the internal bridged network (see assumptions).
+
Paste your «token» into the Token field [G].
+Set the Organisation field [I] to your «organisation».
+Set the Bucket [J] to the correct value. You can get that from either:
+In this example, the bucket name is "power/autogen".
+Set the Measurement [K] to the measurement name. You can get that from either:
+In this example, the measurement name is "hiking2".
+Click Done [L].
+Connect the outlet of the Change node to the inlet of the InfluxDB-out node.
+Go back to the InfluxDB 2 Data Explorer and click the refresh button "I". If everything has gone according to plan, you should see recent observations added to your graph.
+++You may need to wait until your sensor has sent new data.
+
Start Grafana if it is not running:
+$ cd ~/IOTstack
+$ docker-compose up -d grafana
+
Use a web browser to connect to your Grafana instance and login as an administrator.
+Configure as follows:
+ +Change the Query Language popup menu [B] to "Flux".
+++Ignore the advice about Flux support being in beta.
+
Change the URL [C] to point to your InfluxDB 2 instance:
+http://influxdb2:8086
+
++In this context, "influxdb2" is the container name and 8086 is the container's internal port. Grafana communicates with InfluxDB 2 across the internal bridged network (see assumptions).
+
Turn off all the switches in the "Auth" group [D].
+Paste your «token» into the Token field [F].
+++ignore the fact that the prompt text says "password" - you need the token!
+
Set the Default Bucket [G] to the bucket (database) you want to query. You can get that from either:
+In this example, the value is "power/autogen".
+Click Save & Test [H].
+In the side-by-side screen shots below, observations before the straight-line (missing data) segment were imported from InfluxDB 1.8 while observations after the straight-line segment were inserted by the new InfluxDB-out node in Node-RED.
+ +Forgot your token:
+$ docker exec influxdb2 influx auth ls
+
Create a new user, password and token:
+$ docker exec influxdb2 influx user create --name «username» --password «password»
+$ docker exec influxdb2 influx auth create --user «username» --all-access
+
List available buckets:
+$ docker exec influxdb2 influx bucket ls
+
Delete the default «bucket»:
+$ docker exec influxdb2 influx bucket delete --org «organisation» --name «bucket»
+
From the fact that both InfluxDB 1.8 and InfluxDB 2 can run in parallel, with Node-RED feeding the same data to both, it should be self-evident that you can repeat the data-migration as often as necessary, simply by starting from re-initialising InfluxDB 2.
+This implies that you can concentrate on one database at a time, adjusting Node-RED so that it writes each row of sensor data to both the InfluxDB 1.8 database and corresponding InfluxDB 2 bucket.
+Having the data going to both engines means you can take your time adjusting your Grafana dashboards to be based on Flux queries. You can either retrofit InfluxDB 2 bucket sources and Flux queries to existing dashboards, or build parallel dashboards from the ground up.
+ + + + + + + + + + + + + +You can update the container via:
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune
+
In words:
+docker-compose pull
downloads any newer images;docker-compose up -d
causes any newly-downloaded images to be instantiated as containers (replacing the old containers); andprune
gets rid of the outdated images.If you need to pin to a particular version:
+docker-compose.yml
.Find the line:
+yaml
+ image: kapacitor:1.5
Replace 1.5
with the version you wish to pin to. For example, to pin to version 1.5.9:
yaml
+ image: kapacitor:1.5.9
Note:
+latest
tag. At the time of writing, there was no linux/arm/v7
architecture support. Save the file and tell docker-compose
to bring up the container:
$ cd ~/IOTstack
+$ docker-compose up -d kapacitor
+$ docker system prune
+
The mjpg-streamer
container lets you pass a video stream from a local camera to a motioneye
container. The mjpg-streamer
and motioneye
containers can be running on the same or different hosts.
Each mjpg-streamer
container can process a stream from an official Raspberry Pi "ribbon cable" camera, or from a third-party USB-connected camera, such as those from Logitech.
Using mjpg-streamer
to handle your video streams gives you a consistent approach to supporting multiple cameras and camera types. You do not need to care about distinctions between "ribbon" or USB cameras, nor which hosts are involved.
++This section is only relevant if you are trying to use a camera that connects to your Raspberry Pi via a ribbon cable.
+
Beginning with Raspberry Pi OS Bullseye, the Raspberry Pi Foundation introduced the LibCamera subsystem and withdrew support for the earlier raspistill
and raspivid
mechanisms which then became known as the legacy camera system.
The introduction of the LibCamera subsystem triggered quite a few articles (and videos) on the topic, of which this is one example:
+ +Although the LibCamera subsystem works quite well with "native" applications, it has never been clear whether it supports passing camera streams to Docker containers. At the time of writing (2023-10-23), this author has never been able to find any examples which demonstrate that such support exists.
+It is important to understand that:
+mjpg-streamer
container depends on the legacy camera system; andIn other words, if you want to use the mjpg-streamer
container to process a stream from a Raspberry Pi Ribbon Camera, you have to forgo using the LibCamera subsystem.
If you have a Raspberry Pi Ribbon Camera, prepare your system like this:
+Check the version of your system by running:
+$ grep "VERSION_CODENAME" /etc/os-release
+
The answer should be one of "buster", "bullseye" or "bookworm".
+Configure camera support:
+if your system is running Buster, run this command:
+$ sudo raspi-config nonint do_camera 0
+
Buster pre-dates LibCamera so this is the same as enabling the legacy camera system. In this context, 0
means "enable" and 1
means "disable".
if your system is running Bullseye or Bookworm, run these commands:
+$ sudo raspi-config nonint do_camera 1
+$ sudo raspi-config nonint do_legacy 0
+
The first command is protective and turns off the LibCamera subsystem, while the second command enables the legacy camera system.
+++When executed from the command line, both the
+do_camera
anddo_legacy
commands are supported in the Bookworm version ofraspi-config
. However, neither command is available whenraspi-config
is invoked as a GUI in a Bookworm system. This likely implies that the commands have been deprecated and will be removed, in which case this documentation will break.
Reboot your system:
+$ sudo reboot
+
Make a note that your ribbon camera will be accessible on /dev/video0
.
The simplest approach is:
+Run:
+$ ls -l /dev/v4l/by-id
+
This is an example of the response with a LogiTech "C920 PRO FHD Webcam 1080P" camera connected:
+lrwxrwxrwx 1 root root 12 Oct 23 15:42 usb-046d_HD_Pro_Webcam_C920-video-index0 -> ../../video1
+lrwxrwxrwx 1 root root 12 Oct 23 15:42 usb-046d_HD_Pro_Webcam_C920-video-index1 -> ../../video2
+
In general, the device at index0
is where your camera will be accessible, as in:
/dev/v4l/by-id/usb-046d_HD_Pro_Webcam_C920-video-index0
+
If you don't get a sensible response to the ls
command then try disconnecting and reconnecting your camera, and rebooting your system.
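If you are unsure which resolutions and frame-rates your camera offers (useful when you reach the MJPG_STREAMER_SIZE variable below), the v4l2-ctl tool can list them. It comes from the v4l-utils package, which you may need to install first. For example, using the Logitech device path shown above:
+$ sudo apt install -y v4l-utils
+$ v4l2-ctl --list-formats-ext -d /dev/v4l/by-id/usb-046d_HD_Pro_Webcam_C920-video-index0
+
Substitute whatever device path applies to your camera.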
variable | +default | +remark | +
---|---|---|
MJPG_STREAMER_USERNAME |
+container ID | +changes each time the container is recreated | +
MJPG_STREAMER_PASSWORD |
+random UUID | +changes each time the container restarts | +
MJPG_STREAMER_SIZE |
+640x480 |
+should be one of your camera's natural resolutions | +
MJPG_STREAMER_FPS |
+5 |
+frames per second | +
variable | +default | +remark | +
---|---|---|
MJPG_STREAMER_EXTERNAL_DEVICE |
+/dev/video0 |
+must be set to your video device | +
To initialise your environment, begin by using a text editor (eg vim
, nano
) to edit ~/IOTstack/.env
(which may or may not already exist):
If your .env
file does not already define your time-zone, take the opportunity to set it. For example:
TZ=Australia/Sydney
+
The access credentials default to random values which change each time the container starts. This is reasonably secure but is unlikely to be useful in practice, so you need to invent some credentials of your own. Example:
+MJPG_STREAMER_USERNAME=streamer
+MJPG_STREAMER_PASSWORD=oNfDG-d1kgzC
+
Define the external device path to your camera. Two examples have been given above:
+a ribbon camera:
+MJPG_STREAMER_EXTERNAL_DEVICE=/dev/video0
+
a Logitech C920 USB camera:
+MJPG_STREAMER_EXTERNAL_DEVICE=/dev/v4l/by-id/usb-046d_HD_Pro_Webcam_C920-video-index
+
If you know your camera supports higher resolutions, you can also set the size. Examples:
+the ribbon camera can support:
+MJPG_STREAMER_SIZE=1152x648
+
the Logitech C920 can support:
+MJPG_STREAMER_SIZE=1920x1080
+
If the mjpg-streamer
and motioneye
containers are going to be running on:
the same host, you can consider increasing the frame rate:
+MJPG_STREAMER_FPS=30
+
Even though we are setting up a web camera, the traffic will never leave the host and will not traverse your Ethernet or WiFi networks.
+different hosts, you should probably leave the rate at 5 frames per second until you understand the impact on network traffic.
+Save your work.
+Tip:
+It is still a good idea to define TZ
in your .env
file. Most IOTstack containers now use the TZ=${TZ:-Etc/UTC}
syntax so a single entry in your .env
sets the timezone for all of your containers.
However, if you prefer to keep most of your environment variables inline in your docker-compose.yml
rather than in .env
, you can do that. Example:
environment:
+ - TZ=${TZ:-Etc/UTC}
+ - MJPG_STREAMER_USERNAME=streamer
+ - MJPG_STREAMER_PASSWORD=oNfDG-d1kgzC
+ - MJPG_STREAMER_SIZE=1152x648
+ - MJPG_STREAMER_FPS=5
+
Similarly for the camera device mapping:
+devices:
+ - "/dev/v4l/by-id/usb-046d_HD_Pro_Webcam_C920-video-index:/dev/video0"
+
If you're wondering about the syntax used for environment variables:
+ - MJPG_STREAMER_USERNAME=${MJPG_STREAMER_USERNAME:-}
+
it means that .env
will be checked for the presence of MJPG_STREAMER_USERNAME=value
. If the key is found, its value will be used. If the key is not found, the value will be set to a null string. Then, inside the container, a null string is used as the trigger to apply the defaults listed in the table above.
In the case of the camera device mapping, this syntax:
+ - "${MJPG_STREAMER_EXTERNAL_DEVICE:-/dev/video0}:/dev/video0"
+
means that .env
will be checked for the presence of MJPG_STREAMER_EXTERNAL_DEVICE=path
. If the key is found, the path will be used. If the key is not found, the path will be set to /dev/video0
on the assumption that a camera is present and the device exists.
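If you want to confirm how docker-compose has resolved these variables, you can render the fully-substituted compose file without starting anything:
+$ cd ~/IOTstack
+$ docker-compose config
+
The output shows each service definition with every ${…} expression replaced by its effective value.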
Regardless of whether a device path comes from .env
, or is defined inline, or defaults to /dev/video0
, if the device does not actually exist then docker-compose
will refuse to start the container with the following error:
Error response from daemon: error gathering device information while adding custom device "«path»": no such file or directory
+
Start the container like this:
+$ cd ~/IOTstack
+$ docker-compose up -d mjpg-streamer
+
The first time you do this triggers a fairly long process. First, a basic operating system image is downloaded from DockerHub, then a Dockerfile is run to add the streamer software and construct a local image, after which the local image is instantiated as your running container. Subsequent launches use the local image so the container starts immediately. See also container maintenance.
+Once the container is running, make sure it is behaving normally and has not gone into a restart loop:
+$ docker ps -a --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}"
+
++The
+docker ps
command produces a lot of output which generally results in line-wrapping and can be hard to read. The--format
argument reduces this clutter by focusing on the interesting columns. If you have IOTstackAliases installed, you can useDPS
instead of copy/pasting the above command.
If the container is restarting, you will see evidence of that in the STATUS column. If that happens, re-check the values set in the .env
file and "up" the container again. The container's log (see below) may also be helpful.
Check the container's log:
+$ docker logs mjpg-streamer
+ i: Using V4L2 device.: /dev/video0
+ i: Desired Resolution: 1152 x 648
+ i: Frames Per Second.: 5
+ i: Format............: JPEG
+ i: TV-Norm...........: DEFAULT
+ o: www-folder-path......: /usr/local/share/mjpg-streamer/www/
+ o: HTTP TCP port........: 80
+ o: HTTP Listen Address..: (null)
+ o: username:password....: streamer:oNfDG-d1kgzC
+ o: commands.............: enabled
+
Many of the values you set earlier using environment variables show up here so viewing the log is a good way of making sure everything is being passed to the container.
+Note:
+/dev/video0
in the first line of output is the internal device path (inside the container). This is not the same as the external device path associated with MJPG_STREAMER_EXTERNAL_DEVICE
. The container doesn't know about the external device path so it has no way to display it.If the motioneye
and mjpg-streamer
containers are running on:
the same host, the URL should be:
+http://mjpg-streamer:80/?action=stream
+
Here:
+mjpg-streamer
is the name of the container. Technically, it is a host name (rather than a domain name); andport 80 is the internal port that the streamer process running inside the container is listening to. It comes from the right hand side of the port mapping in the service definition:
+ports:
+- "8980:80"
+
different hosts, the URL should be in this form:
+http://«name-or-ip»:8980/?action=stream
+
Here:
+«name-or-ip»
is the domain name or IP address of the host on which the mjpg-streamer
container is running. Examples:
http://raspberrypi.local:8980/?action=stream
+http://my-spy.domain.com:8980/?action=stream
+http://192.168.200.200:8980/?action=stream
+
port 8980 is the external port that the host where the mjpg-streamer
container is running is listening on behalf of the container. It comes from the left hand side of the port mapping in the service definition:
ports:
+- "8980:80"
+
Enter the Username ("streamer" in this example).
+Because it is built from a local Dockerfile, the mjpg-streamer
does not get updated in response to a normal "pull". If you want to rebuild the container, proceed like this:
$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull mjpg-streamer
+$ docker-compose up -d mjpg-streamer
+$ docker system prune -f
+
If you have IOTstackAliases installed, the above is:
+$ REBUILD mjpg-streamer
+$ UP mjpg-streamer
+$ PRUNE
+
MariaDB is a fork of MySQL. This is an unofficial image provided by linuxserver.io because there is no official image for arm.
+The port is 3306. It exists inside the docker network so you can connect via mariadb:3306
for internal connections. For external connections use <your Pi's IP>:3306
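For example, from another machine that has the MariaDB (or MySQL) command-line client installed, and assuming your Raspberry Pi's IP address is 192.168.1.10:
+$ mysql -h 192.168.1.10 -P 3306 -u mariadbuser -p
+
The client prompts for the password you set in MYSQL_PASSWORD.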
Before starting the stack, edit the docker-compose.yml
file and check your environment variables. In particular:
environment:
+ - TZ=Etc/UTC
+ - MYSQL_ROOT_PASSWORD=
+ - MYSQL_DATABASE=default
+ - MYSQL_USER=mariadbuser
+ - MYSQL_PASSWORD=
+
If you are running old-menu, you will have to set both passwords. Under new-menu, the menu may have allocated random passwords for you but you can change them if you like.
+You only get the opportunity to change the MYSQL_
prefixed environment variables before you bring up the container for the first time. If you decide to change these values after initialisation, you will either have to:
Erase the persistent storage area and start again. There are three steps:
+Stop the container and remove the persistent storage area:
+$ cd ~/IOTstack
+$ docker-compose down mariadb
+$ sudo rm -rf ./volumes/mariadb
+
++see also if downing a container doesn't work
+
Edit docker-compose.yml
and change the variables.
Bring up the container:
+$ docker-compose up -d mariadb
+
Open a terminal window within the container (see below) and change the values by hand.
+++The how-to is beyond the scope of this documentation. Google is your friend!
+
You can open a terminal session within the mariadb container via:
+$ docker exec -it mariadb bash
+
To connect to the database: mysql -uroot -p
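Once connected as root, you can create additional databases and users as needed. A sketch (the database name, username and password here are only examples):
+> CREATE DATABASE sensors;
+> CREATE USER 'appuser'@'%' IDENTIFIED BY 'appuser_pw';
+> GRANT ALL PRIVILEGES ON sensors.* TO 'appuser'@'%';
+> FLUSH PRIVILEGES;
+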
To close the terminal session, either:
+A script , or "agent", to assess the health of the MariaDB container has been added to the local image via the Dockerfile. In other words, the script is specific to IOTstack.
+The agent is invoked 30 seconds after the container starts, and every 30 seconds thereafter. The agent:
+Runs the command:
+mysqladmin ping -h localhost
+
If that command succeeds, the agent compares the response returned by the command with the expected response:
+mysqld is alive
+
If the command returned the expected response, the agent tests the responsiveness of the TCP port the mysqld
daemon should be listening on (see customising health-check).
If all of those steps succeed, the agent concludes that MariaDB is functioning properly and returns "healthy".
+Portainer's Containers display contains a Status column which shows health-check results for all containers that support the feature.
+You can also use the docker ps
command to monitor health-check results. The following command narrows the focus to mariadb:
$ docker ps --format "table {{.Names}}\t{{.Status}}" --filter name=mariadb
+
Possible reply patterns are:
+The container is starting and has not yet run the health-check agent:
+NAMES STATUS
+mariadb Up 5 seconds (health: starting)
+
The container has been running for at least 30 seconds and the health-check agent has returned a positive result within the last 30 seconds:
+NAMES STATUS
+mariadb Up 33 seconds (healthy)
+
The container has been running for more than 90 seconds but has failed the last three successive health-check tests:
+NAMES STATUS
+mariadb Up About a minute (unhealthy)
+
You can customise the operation of the health-check agent by editing the mariadb
service definition in your Compose file:
By default, the mysqld
daemon listens to internal port 3306. If you need change that port, you also need to inform the health-check agent via an environment variable. For example, suppose you changed the internal port to 12345:
environment:
+ - MYSQL_TCP_PORT=12345
+
Notes:
+MYSQL_TCP_PORT
variable is defined by MariaDB, not IOTstack, so changing this variable affects more than just the health-check agent.If you are running "old menu", this change should be made in the file:
+~/IOTstack/services/mariadb/mariadb.env
+
The mysqladmin ping
command relies on the root password supplied via the MYSQL_ROOT_PASSWORD
environment variable in the Compose file. The command will not succeed if the root password is not correct, and the agent will return "unhealthy".
If the health-check agent misbehaves in your environment, or if you simply don't want it to be active, you can disable all health-checking for the container by adding the following lines to its service definition:
+ healthcheck:
+ disable: true
+
Note:
+The mere presence of a healthcheck:
clause in the mariadb
service definition overrides the supplied agent. In other words, the following can't be used to re-enable the supplied agent:
healthcheck:
+ disable: false
+
You must remove the entire healthcheck:
clause.
To update the mariadb
container:
$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull mariadb
+$ docker-compose up -d mariadb
+$ docker system prune
+$ docker system prune
+
The first "prune" removes the old local image, the second removes the old base image.
+ + + + + + + + + + + + + +This document discusses an IOTstack-specific version of Mosquitto built on top of Eclipse/Mosquitto using a Dockerfile.
+++If you want the documentation for the original implementation of Mosquitto (just "as it comes" from DockerHub) please see Mosquitto.md on the old-menu branch.
+
~/IOTstack
+├── .templates
+│ └── mosquitto
+│ ├── service.yml ❶
+│ ├── Dockerfile ❷
+│ ├── docker-entrypoint.sh ❸
+│ └── iotstack_defaults ❹
+│ ├── config
+│ │ ├── filter.acl
+│ │ └── mosquitto.conf
+│ └── pwfile
+│ └── pwfile
+├── services
+│ └── mosquitto
+│ └── service.yml ❺
+├── docker-compose.yml ❻
+└── volumes
+ └── mosquitto ❼
+ ├── config
+ │ ├── filter.acl
+ │ └── mosquitto.conf
+ ├── data
+ │ └── mosquitto.db
+ ├── log
+ └── pwfile
+ └── pwfile
+
The persistent storage area:
+sudo
to make changes in this area.The source code for Mosquitto lives at GitHub eclipse/mosquitto.
+Periodically, the source code is recompiled and the resulting image is pushed to eclipse-mosquitto on DockerHub.
+When you select Mosquitto in the IOTstack menu, the template service definition is copied into the Compose file.
+++Under old menu, it is also copied to the working service definition and then not really used.
+
On a first install of IOTstack, you run the menu, choose Mosquitto as one of your containers, and are told to do this:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
++See also the Migration considerations (below).
+
docker-compose
reads the Compose file. When it arrives at the mosquitto
fragment, it finds:
mosquitto:
+ container_name: mosquitto
+ build:
+ context: ./.templates/mosquitto/.
+ args:
+ - MOSQUITTO_BASE=eclipse-mosquitto:latest
+ …
+
Note:
+Earlier versions of the Mosquitto service definition looked like this:
+ mosquitto:
+ container_name: mosquitto
+ build: ./.templates/mosquitto/.
+ …
+
The single-line build
produces exactly the same result as the four-line build
, save that the single-line form does not support pinning Mosquitto to a specific version.
The ./.templates/mosquitto/.
path associated with the build
tells docker-compose
to look for:
~/IOTstack/.templates/mosquitto/Dockerfile
+
++The Dockerfile is in the
+.templates
directory because it is intended to be a common build for all IOTstack users. This is different to the arrangement for Node-RED where the Dockerfile is in theservices
directory because it is how each individual IOTstack user's version of Node-RED is customised.
The Dockerfile begins with:
+ARG MOSQUITTO_BASE=eclipse-mosquitto:latest
+FROM $MOSQUITTO_BASE
+
The FROM
statement tells the build process to pull down the base image from DockerHub.
++It is a base image in the sense that it never actually runs as a container on your Raspberry Pi.
+
The remaining instructions in the Dockerfile customise the base image to produce a local image. The customisations are:
+Add the rsync
and tzdata
packages.
rsync
helps the container perform self-repair; whiletzdata
enables Mosquitto to respect the "TZ" environment variable.Add a standard set of configuration defaults appropriate for IOTstack.
+Replace docker-entrypoint.sh
with a version which:
rsync
to perform self-repair if configuration files go missing; and~/IOTstack/volumes/mosquitto
.The local image is instantiated to become your running container.
+When you run the docker images
command after Mosquitto has been built, you may see two rows for Mosquitto:
$ docker images
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_mosquitto latest cf0bfe1a34d6 4 weeks ago 11.6MB
+eclipse-mosquitto latest 46ad1893f049 4 weeks ago 8.31MB
+
eclipse-mosquitto
is the base image; andiotstack_mosquitto
is the local image.You may see the same pattern in Portainer, which reports the base image as "unused". You should not remove the base image, even though it appears to be unused.
+++Whether you see one or two rows depends on the version of
+docker-compose
you are using and how your version ofdocker-compose
builds local images.
Under the original IOTstack implementation of Mosquitto (just "as it comes" from DockerHub), the service definition expected the configuration files to be at:
+~/IOTstack/services/mosquitto/mosquitto.conf
+~/IOTstack/services/mosquitto/filter.acl
+
Under this implementation of Mosquitto, the configuration files have moved to:
+~/IOTstack/volumes/mosquitto/config/mosquitto.conf
+~/IOTstack/volumes/mosquitto/config/filter.acl
+
++The change of location is one of the things that allows self-repair to work properly.
+
The default versions of each configuration file are the same. Only the locations have changed. If you did not alter either file when you were running the original IOTstack implementation of Mosquitto, there will be no change in Mosquitto's behaviour when it is built from a Dockerfile.
+However, if you did alter either or both configuration files, then you should compare the old and new versions and decide whether you wish to retain your old settings. For example:
+$ cd ~/IOTstack
+$ diff ./services/mosquitto/mosquitto.conf ./volumes/mosquitto/config/mosquitto.conf
+
++You can also use the
+-y
option on thediff
command to see a side-by-side comparison of the two files.
Using mosquitto.conf
as the example, assume you wish to use your existing file instead of the default:
To move your existing file into the new location:
+$ cd ~/IOTstack
+$ sudo mv ./services/mosquitto/mosquitto.conf ./volumes/mosquitto/config/mosquitto.conf
+
++The move overwrites the default. At this point, the moved file will probably be owned by user "pi" but that does not matter.
+
Mosquitto will always enforce correct ownership (1883:1883) on any restart but it will not overwrite permissions. If in doubt, use mode 644 as your default for permissions:
+$ sudo chmod 644 ./services/mosquitto/mosquitto.conf
+
Restart Mosquitto:
+$ docker-compose restart mosquitto
+
Check your work:
+$ ls -l ./volumes/mosquitto/config/mosquitto.conf
+-rw-r--r-- 1 1883 1883 ssss mmm dd hh:mm ./volumes/mosquitto/config/mosquitto.conf
+
If necessary, repeat these steps with filter.acl
.
Mosquitto logging is controlled by mosquitto.conf
. This is the default configuration:
#log_dest file /mosquitto/log/mosquitto.log
+log_dest stdout
+log_timestamp_format %Y-%m-%dT%H:%M:%S
+# Reduce size and SD-card flash wear, safe to remove if using a SSD
+connection_messages false
+
When log_dest
is set to stdout
, you inspect Mosquitto's logs like this:
$ docker logs mosquitto
+
Logs written to stdout
are stored and persisted to disk as managed by Docker.
+They are kept over reboots, but are lost when your Mosquitto container is
+removed or updated.
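If you want to watch the log in real time, add the follow flag and press control+C when you have seen enough:
+$ docker logs -f mosquitto
+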
The alternative, which may be more appropriate if you are running on an SSD or HD, is to change mosquitto.conf
to be like this:
log_dest file /mosquitto/log/mosquitto.log
+#log_dest stdout
+log_timestamp_format %Y-%m-%dT%H:%M:%S
+
and then restart Mosquitto:
+$ cd ~/IOTstack
+$ docker-compose restart mosquitto
+
The path /mosquitto/log/mosquitto.log
is an internal path. When this style of logging is active, you inspect Mosquitto's logs using the external path like this:
$ sudo tail ~/IOTstack/volumes/mosquitto/log/mosquitto.log
+
++You need to use
+sudo
because the log is owned by userID 1883 and Mosquitto creates it without "world" read permission.
Logs written to mosquitto.log
persist until you take action to prune the file.
Mosquitto security is controlled by mosquitto.conf
. These are the relevant directives:
#password_file /mosquitto/pwfile/pwfile
+allow_anonymous true
+
Mosquitto security can be in four different states, which are summarised in the following table:
+password_file |
+allow_anonymous |
+security enforcement | +remark | +
---|---|---|---|
disabled | +true | +open access | +default | +
disabled | +false | +all access denied | +not really useful | +
enabled | +true | +credentials optional | ++ |
enabled | +false | +credentials required | ++ |
The password file for Mosquitto is part of a mapped volume:
+/mosquitto/pwfile/pwfile
~/IOTstack/volumes/mosquitto/pwfile/pwfile
A common problem with the previous version of Mosquitto for IOTstack occurred when the password_file
directive was enabled but the pwfile
was not present. Mosquitto went into a restart loop.
The Mosquitto container performs self-repair each time the container is brought up or restarts. If pwfile
is missing, an empty file is created as a placeholder. This prevents the restart loop. What happens next depends on allow_anonymous
:
If true
then:
pwfile
is empty so there is nothing to match on).If false
then all MQTT requests will be rejected.
To create a username and password, use the following as a template.
+$ docker exec mosquitto mosquitto_passwd -b /mosquitto/pwfile/pwfile «username» «password»
+
Replace «username» and «password» with appropriate values, then execute the command. For example, to create the username "hello" with password "world":
+$ docker exec mosquitto mosquitto_passwd -b /mosquitto/pwfile/pwfile hello world
+
Note:
+There are two ways to verify that the password file exists and has the expected content:
+View the file using its external path:
+$ sudo cat ~/IOTstack/volumes/mosquitto/pwfile/pwfile
+
+++
sudo
is needed because the file is neither owned nor readable bypi
.
View the file using its internal path:
+$ docker exec mosquitto cat /mosquitto/pwfile/pwfile
+
Each credential starts with the username and occupies one line in the file:
+hello:$7$101$ZFOHHVJLp2bcgX+h$MdHsc4rfOAhmGG+65NpIEJkxY0beNeFUyfjNAGx1ILDmI498o4cVOaD9vDmXqlGUH9g6AgHki8RPDEgjWZMkDA==
+
To remove an entry from the password file:
+$ docker exec mosquitto mosquitto_passwd -D /mosquitto/pwfile/pwfile «username»
+
There are several ways to reset the password file. Your options are:
+Remove the password file and restart Mosquitto:
+$ cd ~/IOTstack
+$ sudo rm ./volumes/mosquitto/pwfile/pwfile
+$ docker-compose restart mosquitto
+
The result is an empty password file.
+Clear all existing passwords while adding a new password:
+$ docker exec mosquitto mosquitto_passwd -c -b /mosquitto/pwfile/pwfile «username» «password»
+
The result is a password file with a single entry.
+Clear all existing passwords in favour of a single dummy password which is then removed:
+$ docker exec mosquitto mosquitto_passwd -c -b /mosquitto/pwfile/pwfile dummy dummy
+$ docker exec mosquitto mosquitto_passwd -D /mosquitto/pwfile/pwfile dummy
+
The result is an empty password file.
+Use sudo
and your favourite text editor to open the following file:
~/IOTstack/volumes/mosquitto/config/mosquitto.conf
+
Remove the comment indicator from the following line:
+#password_file /mosquitto/pwfile/pwfile
+
so that it becomes:
+password_file /mosquitto/pwfile/pwfile
+
Set allow_anonymous
as required:
allow_anonymous true
+
If true
then:
If false
then:
Save the modified configuration file and restart Mosquitto:
+$ cd ~/IOTstack
+$ docker-compose restart mosquitto
+
password_file
is enabled.allow_anonymous
is false
.If you do not have the Mosquitto clients installed on your Raspberry Pi (ie $ which mosquitto_pub
does not return a path), install them using:
$ sudo apt install -y mosquitto-clients
+
Test without providing credentials:
+$ mosquitto_pub -h 127.0.0.1 -p 1883 -t "/password/test" -m "up up and away"
+Connection Refused: not authorised.
+Error: The connection was refused.
+
Note:
+Test with credentials
+$ mosquitto_pub -h 127.0.0.1 -p 1883 -t "/password/test" -m "up up and away" -u hello -P world
+$
+
Note:
+Prove round-trip connectivity will succeed when credentials are provided. First, set up a subscriber as a background process. This mimics the role of a process like Node-Red:
+$ mosquitto_sub -v -h 127.0.0.1 -p 1883 -t "/password/test" -F "%I %t %p" -u hello -P world &
+[1] 25996
+
Repeat the earlier test:
+$ mosquitto_pub -h 127.0.0.1 -p 1883 -t "/password/test" -m "up up and away" -u hello -P world
+2021-02-16T14:40:51+1100 /password/test up up and away
+
Note:
+mosquitto_sub
running in the background.When you have finished testing you can kill the background process (press return twice after you enter the kill
command):
$ kill %1
+$
+[1]+ Terminated mosquitto_sub -v -h 127.0.0.1 -p 1883 -t "/password/test" -F "%I %t %p" -u hello -P world
+
A script, or "agent", to assess the health of the Mosquitto container has been added to the local image via the Dockerfile. In other words, the script is specific to IOTstack.
+The agent is invoked 30 seconds after the container starts, and every 30 seconds thereafter. The agent:
+Publishes a retained MQTT message to the broker running in the same container. The message payload is the current date and time, and the default topic string is:
+iotstack/mosquitto/healthcheck
+
Subscribes to the same broker for the same topic for a single message event.
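If you want to see roughly what the agent does, you can approximate it by hand from the host. This is only a sketch, not the actual agent script baked into the local image, and you will need to add -u and -P options if you have enabled authentication:
+$ docker exec mosquitto mosquitto_pub -h localhost -p 1883 -t "iotstack/mosquitto/healthcheck" -m "$(date)" -r
+$ docker exec mosquitto mosquitto_sub -h localhost -p 1883 -t "iotstack/mosquitto/healthcheck" -C 1 -W 5
+
The second command should print the retained timestamp and exit; if it times out instead, the broker is not answering.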
+Portainer's Containers display contains a Status column which shows health-check results for all containers that support the feature.
+You can also use the docker ps
command to monitor health-check results. The following command narrows the focus to mosquitto:
$ docker ps --format "table {{.Names}}\t{{.Status}}" --filter name=mosquitto
+
Possible reply patterns are:
+The container is starting and has not yet run the health-check agent:
+NAMES STATUS
+mosquitto Up 3 seconds (health: starting)
+
The container has been running for at least 30 seconds and the health-check agent has returned a positive result within the last 30 seconds:
+NAMES STATUS
+mosquitto Up 34 seconds (healthy)
+
The container has been running for more than 90 seconds but has failed the last three successive health-check tests:
+NAMES STATUS
+mosquitto Up About a minute (unhealthy)
+
You can also subscribe to the same topic that the health-check agent is using to view the retained messages as they are published:
+$ mosquitto_sub -v -h localhost -p 1883 -t "iotstack/mosquitto/healthcheck" -F "%I %t %p"
+
Notes:
+localhost
with the IP address or domain name of the host where your Mosquitto container is running.-p 1883
is the external port. You will need to adjust this if you are using a different external port for your MQTT service.-u «user»
and -P «password»
parameters to this command.You can customise the operation of the health-check agent by editing the mosquitto
service definition in your Compose file:
By default, the mosquitto broker listens to internal port 1883. If you need change that port, you also need to inform the health-check agent via an environment variable. For example, suppose you changed the internal port to 12345:
+ environment:
+ - HEALTHCHECK_PORT=12345
+
If the default topic string used by the health-check agent causes a name-space collision, you can override it. For example, you could use a Universally-Unique Identifier (UUID):
+ environment:
+ - HEALTHCHECK_TOPIC=4DAA361F-288C-45D5-9540-F1275BDCAF02
+
Note:
+mosquitto_sub
command shown at monitoring health-check.If you have enabled authentication for your Mosquitto broker service, you will need to provide appropriate credentials for your health-check agent:
+ environment:
+ - HEALTHCHECK_USER=healthyUser
+ - HEALTHCHECK_PASSWORD=healthyUserPassword
+
If the health-check agent misbehaves in your environment, or if you simply don't want it to be active, you can disable all health-checking for the container by adding the following lines to its service definition:
+ healthcheck:
+ disable: true
+
Notes:
+HEALTHCHECK_
environment variables that may already be in place.Conversely, the mere presence of a healthcheck:
clause in the mosquitto
service definition overrides the supplied agent. In other words, the following can't be used to re-enable the supplied agent:
healthcheck:
+ disable: false
+
You must remove the entire healthcheck:
clause.
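Whichever of these customisations you apply, remember that changes to the service definition only take effect when the container is re-created; a restart alone is not sufficient:
+$ cd ~/IOTstack
+$ docker-compose up -d mosquitto
+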
You can update most containers like this:
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune
+
In words:
+docker-compose pull
downloads any newer images;docker-compose up -d
causes any newly-downloaded images to be instantiated as containers (replacing the old containers); andprune
gets rid of the outdated images.This strategy doesn't work when a Dockerfile is used to build a local image on top of a base image downloaded from DockerHub. The local image is what is running so there is no way for the pull
to sense when a newer version becomes available.
The only way to know when an update to Mosquitto is available is to check the eclipse-mosquitto tags page on DockerHub.
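Before you rebuild, it can help to know which base version your current local image was built from. The image label examined later on this page can be queried directly (this assumes jq is installed and that your local image is tagged iotstack_mosquitto:latest):
+$ docker image inspect iotstack_mosquitto:latest | jq -r '.[0].Config.Labels["com.github.SensorsIot.IOTstack.Dockerfile.build-args"]'
+
Compare the result with the tags on DockerHub to decide whether a rebuild is worthwhile.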
+Once a new version appears on DockerHub, you can upgrade Mosquitto like this:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull mosquitto
+$ docker-compose up -d mosquitto
+$ docker system prune
+$ docker system prune
+
Breaking it down into parts:
+build
causes the named container to be rebuilt;--no-cache
tells the Dockerfile process that it must not take any shortcuts. It really must rebuild the local image;--pull
tells the Dockerfile process to actually check with DockerHub to see if there is a later version of the base image and, if so, to download it before starting the build;mosquitto
is the named container argument required by the build
command.Your existing Mosquitto container continues to run while the rebuild proceeds. Once the freshly-built local image is ready, the up
tells docker-compose
to do a new-for-old swap. There is barely any downtime for your MQTT broker service.
The prune
is the simplest way of cleaning up. The first call removes the old local image. The second call cleans up the old base image. Whether an old base image exists depends on the version of docker-compose
you are using and how your version of docker-compose
builds local images.
If an update to Mosquitto introduces a breaking change, you can revert to an earlier known-good version by pinning to that version. Here's how:
+Use your favourite text editor to open:
+~/IOTstack/docker-compose.yml
+
Find the Mosquitto service definition. If your service definition contains this line:
+build: ./.templates/mosquitto/.
+
then replace that line with the following four lines:
+build:
+ context: ./.templates/mosquitto/.
+ args:
+ - MOSQUITTO_BASE=eclipse-mosquitto:latest
+
Notes:
+build
directive is now the default for Mosquitto so those lines may already be present in your compose file.Replace latest
with the version you wish to pin to. For example, to pin to version 2.0.13:
- MOSQUITTO_BASE=eclipse-mosquitto:2.0.13
+
Save the file and tell docker-compose
to rebuild the local image:
$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull mosquitto
+$ docker-compose up -d mosquitto
+$ docker system prune
+
The new local image is built, then the new container is instantiated based on that image. The prune
deletes the old local image.
Images built in this way will always be tagged with "latest", as in:
+$ docker images iotstack_mosquitto
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_mosquitto latest 8c0543149b9b About a minute ago 16.2MB
+
You may find it useful to assign an explicit tag to help you remember the version number used for the build. For example:
+$ docker tag iotstack_mosquitto:latest iotstack_mosquitto:2.0.13
+$ docker images iotstack_mosquitto
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_mosquitto 2.0.13 8c0543149b9b About a minute ago 16.2MB
+iotstack_mosquitto latest 8c0543149b9b About a minute ago 16.2MB
+
You can also query the image metadata to discover version information:
+$ docker image inspect iotstack_mosquitto:latest | jq .[0].Config.Labels
+{
+ "com.github.SensorsIot.IOTstack.Dockerfile.based-on": "https://github.com/eclipse/mosquitto",
+ "com.github.SensorsIot.IOTstack.Dockerfile.build-args": "eclipse-mosquitto:2.0.13",
+ "description": "Eclipse Mosquitto MQTT Broker",
+ "maintainer": "Roger Light <roger@atchoo.org>"
+}
+
Earlier versions of the IOTstack service definition for Mosquitto included two port mappings:
+ports:
+ - "1883:1883"
+ - "9001:9001"
+
Issue 67 explored the topic of port 9001 and showed that the port mapping served no purpose in the default configuration.
+On that basis, the mapping for port 9001 was removed from service.yml
.
If you have a use-case that needs port 9001, you can re-enable support by:
+Inserting the port mapping under the mosquitto
definition in docker-compose.yml
:
- "9001:9001"
+
Inserting the additional listener in mosquitto.conf
:
listener 1883
+listener 9001
+
You need both lines. If you omit 1883 then Mosquitto will stop listening to port 1883 and will only listen to port 9001.
+Restarting the container:
+$ cd ~/IOTstack
+$ docker-compose restart mosquitto
+
Please consider raising an issue to document your use-case. If you think your use-case has general application then please also consider creating a pull request to make the changes permanent.
+ + + + + + + + + + + + + +MotionEye is a web frontend for the Motion project.
+This is the default service definition:
+motioneye:
+ image: dontobi/motioneye.rpi:latest
+ container_name: "motioneye"
+ restart: unless-stopped
+ ports:
+ - "8765:8765"
+ - "8766:8081"
+ environment:
+ - TZ=${TZ:-Etc/UTC}
+ volumes:
+ - ./volumes/motioneye/etc_motioneye:/etc/motioneye
+ - ./volumes/motioneye/var_lib_motioneye:/var/lib/motioneye
+
MotionEye's administrative interface is available on port 8765. For example:
+http://raspberrypi.local:8765
+
The default username is admin
(all lower case) with no password.
The first camera you define in the administrative interface is assigned to internal port 8081. The default service definition maps that to port 8766:
+- "8766:8081"
+
You can access the stream with a web browser on port 8766. For example:
+http://raspberrypi.local:8766
+
Each subsequent camera you define in the administrative interface will be assigned the next internal port number in sequence (8082 for the second camera, 8083 for the third, and so on).
+Each camera you define after the first will need its own port mapping in the service definition in your compose file. For example:
+- "8767:8082"
+- "8768:8083"
+- …
+
Key points:
+By default local camera data is stored at the internal path:
+/var/lib/motioneye/«camera_name»
+
That maps to the external path:
+~/IOTstack/volumes/motioneye/var_lib_motioneye/«camera_name»
+
Tips:
+«camera_name»
can be unreliable. After defining a camera, it is a good idea to double-check the actual path in the "Root Directory" field of the "File Storage" section in the administrative interface.Although it depends on your exact settings, MotionEye's video storage can represent a significant proportion of your backup files. If you want to constrain your backup files to reasonable sizes, consider excluding the video storage from your routine backups by changing where MotionEye videos are kept. This is one approach:
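If you are wondering how much space your recordings are consuming, a quick check from the host is shown below (sudo is needed because the folder is owned by root):
+$ sudo du -sh ~/IOTstack/volumes/motioneye/var_lib_motioneye
+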
+Be in the appropriate directory:
+$ cd ~/IOTstack
+
Terminate the motioneye container:
+$ docker-compose down motioneye
+
++see also if downing a container doesn't work
+
Move the video storage folder:
+$ sudo mv ./volumes/motioneye/var_lib_motioneye ~/motioneye-videos
+
Open your docker-compose.yml
in a text editor. Find this line in your motioneye
service definition:
- ./volumes/motioneye/var_lib_motioneye:/var/lib/motioneye
+
and change it to be:
+- /home/pi/motioneye-videos:/var/lib/motioneye
+
then save the edited compose file.
+Start the container again:
+$ docker-compose up -d motioneye
+
This change places video storage outside of the usual ~/IOTstack/volumes
path, where IOTstack backup scripts will not see it.
An alternative approach is to omit the volume mapping for /var/lib/motioneye
 entirely. Clips will still be recorded inside the container and you will be able to play and download the footage using the administrative interface. However, any saved clips will disappear each time the container is re-created (not just restarted). Clips stored inside the container also will not form part of any backup.
If you choose this method, make sure you configure MotionEye to discard old footage using the "Preserve Movies" field of the "Movies" section in the administrative interface. This is a per-camera setting so remember to do it for all your cameras. If you do not do this, you are still at risk of running your Pi out of disk space, and it's a difficult problem to diagnose.
If you have connected to a remote MotionEye, note that the directory is on that remote device and has nothing to do with this container.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 |
|
There are two containers, one for the cloud service itself, and the other for the database. Both containers share the same persistent storage area in the volumes subdirectory so they are treated as a unit. This will not interfere with any other MariaDB containers you might wish to run.
+Key points:
+Under old-menu, you are responsible for setting passwords. The passwords are "internal use only" and it is unlikely that you will need them unless you plan to go ferreting-about in the database using SQL. The rules are:
+«user_password»
must be the same.«root_password»
should be different from «user_password»
.Under new-menu, the menu can generate random passwords for you. You can either use that feature or roll your own using the old-menu approach by replacing:
+%randomMySqlPassword%
(the «user_password»
)%randomPassword%
(the «root_password»
)The passwords need to be set before you bring up the Nextcloud service for the first time. However, the following initialisation steps assume you might not have done that and always start from a clean slate.
+Be in the correct directory:
+$ cd ~/IOTstack
+
If the stack is running, take it down:
+$ docker-compose down
+
++see also if downing a container doesn't work
+
Erase the persistent storage area for Nextcloud (double-check the command before you hit return):
+$ sudo rm -rf ./volumes/nextcloud
+
This is done to force re-initialisation. In particular, it gives you assurance that the passwords in your docker-compose.yml
are the ones that are actually in effect.
Bring up the stack:
+$ docker-compose up -d
+
Check for errors:
+Repeat the following command two or three times at 10-second intervals:
+$ docker ps
+
You are looking for evidence that the nextcloud
and nextcloud_db
containers are up, stable, and not restarting. If you see any evidence of restarts, try to figure out why using:
$ docker logs nextcloud
+
On a computer that is not the device running Nextcloud, launch a browser and point to the device running Nextcloud using your chosen connection method. Examples:
+http://192.168.203.200:9321
+http://myrpi.mydomain.com:9321
+http://myrpi.local:9321
+http://myrpi:9321
+
The expected result is:
+ +Create an administrator account and then click "Install" and wait for the loading to complete.
+Eventually, the dashboard will appear. Then the dashboard will be obscured by the "Nextcloud Hub" floating window which you can dismiss:
+ +Congratulations. Your IOTstack implementation of Nextcloud is ready to roll:
+ +++If you are reading this because you are staring at an "access through untrusted domain" message then you have come to the right place.
+
Let's assume the following:
your Raspberry Pi has the fixed IP address 192.168.203.200, and you have used raspi-config to give it the name "myrpi".
+Let's also assume you have a local Domain Name System server where your Raspberry Pi:
+Rolling all that together, you would expect your Nextcloud service to be reachable at any of the following URLs:
+http://192.168.203.200:9321
http://myrpi.local:9321
http://myrpi.mydomain.com:9321
http://nextcloud.mydomain.com:9321
To tell Nextcloud that all of those URLs are valid, you need to use sudo
and your favourite text editor to edit this file:
~/IOTstack/volumes/nextcloud/html/config/config.php
+
Hint:
+It is a good idea to make a backup of any file before you edit it. For example:
+$ cd ~/IOTstack/volumes/nextcloud/html/config/
+$ sudo cp config.php config.php.bak
+
Search for "trusted_domains". To tell Nextcloud to trust all of the URLs above, edit the array structure like this:
+ 'trusted_domains' =>
+ array (
+ 0 => '192.168.203.200:9321',
+ 1 => 'myrpi.local:9321',
+ 2 => 'myrpi.mydomain.com:9321',
+ 3 => 'nextcloud.mydomain.com:9321',
+ ),
+
++Note: all the trailing commas are intentional!
+
Once you have finished editing the file, save your work then restart Nextcloud:
+$ cd ~/IOTstack
+$ docker-compose restart nextcloud
+
Use docker ps
to check that the container has restarted properly and hasn't gone into a restart loop.
See also:
+ +++The information in this section may be out of date. Recent tests suggest it is no longer necessary to add a
+hostname
clause to yourdocker-compose.yml
to silence warnings when using DNS aliases to reach your NextCloud service. This section is being left here so you will know what to do if you encounter the problem.
The examples above include using a DNS alias (a CNAME record) for your Nextcloud service. If you decide to do that, you may see this warning in the log:
+Could not reliably determine the server's fully qualified domain name
+
You can silence the warning by editing the Nextcloud service definition in docker-compose.yml
to add your fully-qualified DNS alias using a hostname
directive. For example:
hostname: nextcloud.mydomain.com
+
Nextcloud traffic is not encrypted. Do not expose it to the web by opening a port on your home router. Instead, use a VPN like Wireguard to provide secure access to your home network, and let your remote clients access Nextcloud over the VPN tunnel.
+The IOTstack service definition for NextCloud reserves port 9343 for HTTPS access but leaves it as an exercise for the reader to figure out how to make it work. You may get some guidance here.
+A script , or "agent", to assess the health of the MariaDB container has been added to the local image via the Dockerfile. In other words, the script is specific to IOTstack.
+Because it is an instance of MariaDB, Nextcloud_DB inherits the health-check agent. See the IOTstack MariaDB documentation for more information.
+To update the nextcloud
container:
$ cd ~/IOTstack
+$ docker-compose pull nextcloud
+$ docker-compose up -d nextcloud
+$ docker system prune
+
To update the nextcloud_db
container:
$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull nextcloud_db
+$ docker-compose up -d nextcloud_db
+$ docker system prune
+
++You may need to run the
+prune
command twice if you are using a 1.x version ofdocker-compose
.
Nextcloud is currently excluded from the IOTstack-supplied backup scripts due to its potential size.
+++Paraphraser/IOTstackBackup includes backup and restore for NextCloud.
+
If you want to take a backup, something like the following will get the job done:
+$ cd ~/IOTstack
+$ BACKUP_TAR_GZ=$PWD/backups/$(date +"%Y-%m-%d_%H%M").$HOSTNAME.nextcloud-backup.tar.gz
+$ touch "$BACKUP_TAR_GZ"
+$ docker-compose down nextcloud nextcloud_db
+$ sudo tar -czf "$BACKUP_TAR_GZ" -C "./volumes/nextcloud" .
+$ docker-compose up -d nextcloud
+
Notes:
+up
of the NextCloud container implies the up
of the Nextcloud_DB container.To restore, you first need to identify the name of the backup file by looking in the backups
directory. Then:
$ cd ~/IOTstack
+$ RESTORE_TAR_GZ=$PWD/backups/2021-06-12_1321.sec-dev.nextcloud-backup.tar.gz
+$ docker-compose down nextcloud nextcloud_db
+$ sudo rm -rf ./volumes/nextcloud/*
+$ sudo tar -x --same-owner -z -f "$RESTORE_TAR_GZ" -C "./volumes/nextcloud"
+$ docker-compose up -d nextcloud
+
If you are running from an SD card, it would be a good idea to mount an external drive to store the data. Something like:
+ +The external drive will have to be an ext4 formatted drive because smb, fat32 and NTFS can't handle Linux file permissions. If the permissions aren't set to "www-data" then the container won't be able to write to the disk.
+Finally, a warning:
+A walkthrough of a network model may help you to understand how Nextcloud and its database communicate. To help set the scene, the following model shows a Raspberry Pi with Docker running four containers:
+nextcloud
and nextcloud_db
- both added when you select "NextCloud"mariadb
- optional container added when you select "MariaDB"wireguard
- optional container added when you select "WireGuard"The first thing to understand is that the nextcloud_db
and mariadb
containers are both instances of MariaDB. They are instantiated from the same image but they have completely separate existences. They have different persistent storage areas (ie databases) and they do not share data.
The second thing to understand is how the networks inside the "Docker" rectangle shown in the model are created. The networks
section of your compose file defines the networks:
networks:
+
+ default:
+ driver: bridge
+ ipam:
+ driver: default
+
+ nextcloud:
+ driver: bridge
+ internal: true
+ ipam:
+ driver: default
+
At run time, the lower-case representation of the directory containing the compose file (ie "iotstack") is prepended to the network names, resulting in:
+default
⟹ iotstack_default
nextcloud
⟹ iotstack_nextcloud
Each network is assigned a /16 IPv4 subnet. Unless you override it, the subnet ranges are chosen at random. This model assumes:
+iotstack_default
is assigned 172.18.0.0/16iotstack_nextcloud
is assigned 172.19.0.0/16The logical router on each network takes the .0.1
address.
++The reason why two octets are devoted to the host address is because a /16 network prefix implies a 16-bit host portion. Each octet describes 8 bits.
+
As each container is brought up, the network(s) it joins are governed by the following rules:
+networks:
clause in the container's service definition then the container joins the network(s) listed in the body of the clause; otherwisedefault
network.Assuming that the mariadb
and wireguard
containers do not have networks:
clauses, the result of applying those rules is shown in the following table.
Each container is assigned an IPv4 address on each network it joins. In general, the addresses are assigned in the order in which the containers start.
+No container can easily predict either the network prefix of the networks it joins or the IP address of any other container. However, Docker provides a mechanism for any container to reach any other container with which it shares a network by using the destination container's name.
+In this model there are two MariaDB instances, one named nextcloud_db
and the other named mariadb
. How does the nextcloud
container know which name to use? Simple. It's passed in an environment variable:
environment:
+ - MYSQL_HOST=nextcloud_db
+
At runtime, the nextcloud
container references nextcloud_db:3306
. Docker resolves nextcloud_db
to 172.19.0.2 so the traffic traverses the 172.19/16 internal bridged network and arrives at the nextcloud_db
container.
The nextcloud
container could reach the mariadb
container via mariadb:3306
. There's no ambiguity because Docker resolves mariadb
to 172.18.0.2, which is a different subnet and an entirely different internal bridged network.
++There would still be no ambiguity even if all containers attached to the
+iotstack_default
network because each container name still resolves to a distinct IP address.
In terms of external ports, only mariadb
exposes port 3306. Any external process trying to reach 192.168.203.60:3306 will always be port-forwarded to the mariadb
container. The iotstack_nextcloud
network is declared "internal" which means it is unreachable from beyond the Raspberry Pi. Any port-mappings associated with that network are ignored.
~/IOTstack
+├── .templates
+│ └── nodered
+│ └── service.yml ❶
+├── services
+│ └── nodered
+│ ├── Dockerfile ❷
+│ └── service.yml ❸
+├── docker-compose.yml ❹
+└── volumes
+ └── nodered ❺
+ ├── data ❻
+ └── ssh ❼
+
The source code for Node-RED lives at GitHub node-red/node-red-docker.
+Periodically, the source code is recompiled and pushed to nodered/node-red on DockerHub. See Node-RED and node.js
versions for an explanation of the versioning tags associated with images on DockerHub.
When you select Node-RED in the IOTstack menu, the template service definition ❶ is copied into the Compose file ❹.
+++Under old menu, it is also copied to the working service definition ❸ and then not really used.
+
You choose add-on nodes from a supplementary menu. We recommend accepting the default nodes, and adding others that you think you are likely to need. Node-RED will not build if you do not select at least one add-on node.
+Key points:
+Choosing add-on nodes in the menu causes the Dockerfile ❷ to be created.
+On a first install of IOTstack, you are told to do this:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
docker-compose
reads the Compose file ❹. When it arrives at the nodered
service definition, it finds :
1 +2 +3 +4 +5 +6 +7 |
|
Note:
+Prior to July 2022, IOTstack used the following one-line syntax for the build
directive:
3 |
|
The older syntax meant all local customisations (version-pinning and adding extra packages) needed manual edits to the Dockerfile ❷. Those edits would be overwritten each time the menu was re-run to alter the selected add-on nodes. The newer multi-line syntax avoids that problem.
+See also updating to July 2022 syntax.
+In either case, the path ./services/nodered/.
tells docker-compose
to look for ❷:
~/IOTstack/services/nodered/Dockerfile
+
which contains instructions to download a base image from DockerHub and then apply local customisations such as the add-on nodes you chose in the IOTstack menu. The result is a local image which is instantiated to become your running container.
+Notes:
+npm audit fix
. You should ignore all such messages. There is no need to take any action.If SQLite is in your list of nodes, be aware that it needs to be compiled from its source code. It takes a long time, outputs an astonishing number of warnings and, from time to time, will look as if it has gotten stuck. Be patient.
+++Acknowledgement: Successful installation of the SQLite node is thanks to @fragolinux.
+
When you run the docker images
command after Node-RED has been built, you will see something like this:
$ docker images
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack-nodered latest 9feeb87019cd 11 days ago 945MB
+
The image name iotstack-nodered
is the concatenation of two components:
docker-compose
project name. This is the all-lower-case representation of the name of the folder containing docker-compose.yml
. In a default clone of IOTstack, the folder name is IOTstack
so the project name is iotstack
.nodered
.When you install Node-RED for the first time, the entire process of downloading a base image from Dockerhub, building a local image by running your local Dockerfile ❷, and then instantiating that local image as your running container, is all completely automatic.
+However, after that first build, your local image is essentially frozen and it needs special action on your part to keep it up-to-date. See maintaining Node-RED and, in particular:
+After you install Node-RED, you should set an encryption key. Completing this step will silence the warning you will see when you run:
+$ docker logs nodered
+…
+---------------------------------------------------------------------
+Your flow credentials file is encrypted using a system-generated key.
+
+If the system-generated key is lost for any reason, your credentials
+file will not be recoverable, you will have to delete it and re-enter
+your credentials.
+
+You should set your own key using the 'credentialSecret' option in
+your settings file. Node-RED will then re-encrypt your credentials
+file using your chosen key the next time you deploy a change.
+---------------------------------------------------------------------
+…
+
Setting an encryption key also means that any credentials you create will be portable, in the sense that you can backup Node-RED on one machine and restore it on another.
+The encryption key can be any string. For example, if you have UUID support installed (sudo apt install -y uuid-runtime
), you could generate a UUID as your key:
$ uuidgen
+2deb50d4-38f5-4ab3-a97e-d59741802e2d
+
Once you have defined your encryption key, use sudo
and your favourite text editor to open this file:
~/IOTstack/volumes/nodered/data/settings.js
+
Search for credentialSecret
:
//credentialSecret: "a-secret-key",
+
Un-comment the line and replace a-secret-key
with your chosen key. Do not remove the comma at the end of the line. The result should look something like this:
credentialSecret: "2deb50d4-38f5-4ab3-a97e-d59741802e2d",
+
Save the file and then restart Node-RED:
+$ cd ~/IOTstack
+$ docker-compose restart nodered
+
To secure Node-RED you need a password hash. Run the following command, replacing PASSWORD
with your own password:
$ docker exec nodered node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" PASSWORD
+
You will get an answer that looks something like this:
+$2a$08$gTdx7SkckJVCw1U98o4r0O7b8P.gd5/LAPlZI6geg5LRg4AUKuDhS
+
Copy that text to your clipboard, then follow the instructions at Node-RED User Guide - Securing Node-RED - Username & Password-based authentication.
+Node-RED can run in two modes. By default, it runs in "non-host mode" but you can also move the container to "host mode" by editing the Node-RED service definition in your Compose file to:
+Add the following directive:
+network_mode: host
+
Remove the ports
directive and the mapping of port 1880.
Most examples on the web assume Node-RED and other services in the MING (Mosquitto, InfluxDB, Node-RED, Grafana) stack have been installed natively, rather than in Docker containers. Those examples typically include the loopback address + port syntax, like this:
+127.0.0.1:1883
+
The loopback address will not work when Node-RED is in non-host mode. This is because each container behaves like a self-contained computer. The loopback address means "this container". It does not mean "this Raspberry Pi".
+You refer to other containers by their container name. For example, a flow subscribing to an MQTT feed provided by the mosquitto container uses:
+mosquitto:1883
+
Similarly, if a flow writes to an InfluxDB database maintained by the influxdb container, the flow uses:
+influxdb:8086
+
Behind the scenes, Docker maintains a table, similar to an /etc/hosts
file, mapping container names to the IP addresses on the internal bridged network that are assigned, dynamically, by Docker, when it spins up each container.
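You can satisfy yourself that container-name resolution is working by running a quick test from inside the Node-RED container (this assumes the mosquitto container is part of your stack and that the ping applet is present in the image, which is normally the case for Alpine-based images):
+$ docker exec nodered ping -c 2 mosquitto
+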
When Node-RED is running in host mode, by contrast, you use loopback+port syntax, such as the following to communicate with Mosquitto:
+127.0.0.1:1883
+
What actually occurs is that Docker is listening to external port 1883 on behalf of Mosquitto. It receives the packet and routes it (layer three) to the internal bridged network, performing network address translation (NAT) along the way to map the external port to the internal port. Then the packet is delivered to Mosquitto. The reverse happens when Mosquitto replies. It works but is less efficient than when all containers are in non-host mode.
+When the container is running in non-host mode, there are several ways in which it can refer to the host on which the container is running:
+The problem with the first two is that they tie your flows to the specific host.
+The third method is portable, meaning a flow can conceptually refer to "this" host and be independent of the actual host on which the container is running.
+Method 1
+The default gateway on the Docker bridge network is usually "172.17.0.1". You can confirm the IP address by running:
+$ docker network inspect bridge | jq .[0].IPAM.Config[0].Gateway
+"172.17.0.1"
+
++If
+jq
is not installed on your system, you can install it by runningsudo apt install -y jq
.
If you use this method, your flows can refer to "this" host using the IP address "172.17.0.1".
+Method 2
+Alternatively, you can add the following lines to your Node-RED service definition:
+extra_hosts:
+ - "host.docker.internal:host-gateway"
+
If you use this method, your flows can refer to "this" host using the domain name "host.docker.internal".
+Generally the second method is recommended for IOTstack. That is because your flows will continue to work even if the 172.17.0.1 IP address changes. However, it does come with the disadvantage that, if you publish a flow containing this domain name, the flow will not work unless the recipient also adds the extra_hosts
clause.
To communicate with your Raspberry Pi's GPIO you need to do the following:
+Install dependencies:
+$ sudo apt update
+$ sudo apt install pigpio python-pigpio python3-pigpio
+
Notes:
+pigpio
and python3-pigpio
are usually installed by default in standard releases of Raspberry Pi OS.pigpio
is actually required.Install the node-red-node-pi-gpiod
node. See component management. It allows you to connect to multiple Pis from the same Node-RED service.
Note:
+node-red-node-pi-gpiod
from the list of add-on nodes added to your Dockerfile by the IOTstack menu, it will be installed already. You can confirm this by examining your Node-RED Dockerfile ❷.Configure the pigpdiod
daemon:
copy the following text to the clipboard:
+1 +2 +3 +4 +5 +6 +7 +8 +9 |
|
++Acknowledgement: some of the above from joan2937/pigpio issue 554
+
execute the following commands:
+$ sudo systemctl stop pigpiod
+$ sudo systemctl revert pigpiod
+$ sudo systemctl edit pigpiod
+
follow the on-screen instructions and paste the contents of the clipboard into the blank area between the lines. The final result should be (lines 4…12 are the pasted material):
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 |
|
Save your work by pressing:
+Check your work by running:
+$ sudo systemctl cat pigpiod
+
The expected result is:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 |
|
Lines 12…20 should be those you copied to the clipboard at the start of this step. If you do not see the expected result, go back and start from the beginning of this step.
+Activate the daemon:
+$ sudo systemctl enable pigpiod
+$ sudo systemctl start pigpiod
+
Reboot.
+Check that the daemon is running:
+$ sudo systemctl status pigpiod
+
Once you have configured pigpiod
correctly and it has come up after a reboot, you should not need to worry about it again.
pigpiod
provides open access to your Raspberry Pi's GPIO via port 8888. Consult the man
pages if you want to make it more secure. Once you have decided what to do, start over from the beginning of this step, and add your parameters to the line:
6 |
|
Drag a pi gpio
node onto the canvas. Configure it according to your needs.
The Host
field should be set to one of:
172.17.0.1
; orhost.docker.internal
See also Bridge network - default gateway.
+Don't try to use 127.0.0.1 because that is the loopback address of the Node-RED container.
+Node-RED running in a container can communicate with serial devices attached to your Raspberry Pi's USB ports. However, it does not work "out of the box". You need to set it up.
+Let's make an assumption. A device connected to one of your Raspberry Pi's USB ports presents itself as:
+/dev/ttyUSB0
+
You have three basic options:
+You can map the device into the container using that name:
+devices:
+ - "/dev/ttyUSB0:/dev/ttyUSB0"
+
This is simple and effective but it suffers from a few problems:
+docker-compose
will not start your container if the device is not present when you bring up your stack.You can deal with the last problem by using the device's "by-id" path. There's an example of this in the Zigbee2MQTT documentation.
+Options 2 and 3 (below) deal with the first two problems in the sense that:
+docker-compose
will always start the container, irrespective of whether devices are actually attached to your USB ports.Options 2 and 3 (below) can't provide a workaround for devices being given different names via enumeration but you can still deal with that by using the device's "by-id" path (as explained above).
+You can map a class of devices:
+modify the volumes
clause to add a read-only mapping for /dev
:
volumes:
+ - /dev:/dev:ro
+
++The "read-only" flag (
+:ro
) prevents the container from doing dangerous things like destroying your Raspberry Pi's SD or SSD. Please don't omit that flag!
discover the major number for your device:
+$ ls -l /dev/ttyUSB0
+crw-rw---- 1 root dialout 188, 0 Feb 18 15:30 /dev/ttyUSB0
+
In the above, the 188, 0
string means the major number for ttyUSB0 is "188" and "0" the minor number.
add two device CGroup rules:
+device_cgroup_rules:
+ - 'c 1:* rw' # access to devices like /dev/null
+ - 'c 188:* rmw' # change numbers to your device
+
In the above:
+"188" is the major number for ttyUSB0 and you should substitute accordingly if your device has a different major number.
+the "*" is a wildcard for the minor number.
+Use the "privileged" flag by adding the following to your Node-RED service definition:
+privileged: true
+
Please make sure you read the following references BEFORE you select this option:
+ +At the time of writing (Feb 2023), it was not possible to add node-red-node-serialport
to the list of nodes in your Dockerfile. Attempting to do so crashed the Node-RED container with a segmentation fault. The workaround is to build the node from source by adding an extra line at the end of your Dockerfile:
RUN npm install node-red-node-serialport --build-from-source
+
Historically, /dev/ttyAMA0
referred to the Raspberry Pi's serial port. The situation became less straightforward once Pis gained Bluetooth capabilities:
On Pis without Bluetooth hardware:
+/dev/ttyAMA0
means the serial port; and/dev/serial0
is a symlink to /dev/ttyAMA0
On Pis with Bluetooth capabilities:
+/dev/ttyS0
means the serial port; and/dev/serial0
is a symlink to /dev/ttyS0
In addition, whether /dev/ttyS0
(and, therefore, /dev/serial0
) are present at runtime depends on adding the following line to config.txt
:
enable_uart=1
+
And, if that isn't sufficiently confusing, the location of config.txt
depends on the OS version:
/boot/config.txt
/boot/firmware/config.txt
Rolling all that together, if you want access to the hardware serial port from Node-RED, you need to:
+enable_uart=1
to config.txt
.Add a device-mapping to Node-RED's service definition:
+devices:
+ - /dev/serial0:/dev/«internalDevice»
+
where «internalDevice»
is whatever device the add-on node you're using is expecting, such as ttyAMA0
.
Recreate the Node-RED container by running:
+$ cd ~/IOTstack
+$ docker-compose up -d nodered
+
If you enable the node-red-contrib-generic-ble
add on node, you will also need to make the following changes:
If you are running Bookworm, you will need to use sudo
to edit this file:
/boot/firmware/config.txt
+
You need to add this line to the end of the file:
+dtparam=krnbt=off
+
You then need to reboot. This adds the Bluetooth device to /dev
.
Find the the Node-RED service definition in your docker-compose.yml
:
Add the following mapping to the volumes:
clause:
- /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket
+
Add the following devices:
clause:
devices:
+ - "/dev/serial1:/dev/serial1"
+ - "/dev/vcio:/dev/vcio"
+ - "/dev/gpiomem:/dev/gpiomem"
+
Recreate the Node-RED container:
+$ cd ~/IOTstack
+$ docker-compose up -d nodered
+
Notes:
+Historically, /dev/ttyAMA0
meant the serial interface. Subsequently, it came to mean the Bluetooth interface but only where Bluetooth hardware was present, otherwise it still meant the serial interface.
On Bookworm and later, if it is present, /dev/ttyAMA1
means the Bluetooth Interface.
On Bullseye and later, /dev/serial1
is a symbolic link pointing to whichever of /dev/ttyAMA0
or /dev/ttyAMA1
means the Bluetooth interface. This means that /dev/serial1
is the most reliable way of referring to the Bluetooth Interface. That's why it appears in the devices:
clause above.
Containers run in a sandboxed environment. A process running inside a container can't see the Raspberry Pi's file system. Neither can a process running outside a container access files inside the container.
+This presents a problem if you want write to a file outside a container, then read from it inside the container, or vice-versa.
+IOTstack containers have been set up with shared volume mappings. Each volume mapping associates a specific directory in the Raspberry Pi file system with a specific directory inside the container. If you write to files in a shared directory (or one of its sub-directories), both the host and the container can see the same sub-directories and files.
+Key point:
+The Node-RED service definition in the Compose file includes the following:
+volumes:
+ - ./volumes/nodered/data:/data
+
That decomposes into:
+./volumes/nodered/data
/data
The leading "." on the external path implies "the folder containing the Compose file so it actually means:
+~/IOTstack/volumes/nodered/data
/data
If you write to the internal path from inside the Node-RED container, the Raspberry Pi will see the results at the external path, and vice versa. Example:
+$ docker exec -it nodered bash
+# echo "The time now is $(date)" >/data/example.txt
+# cat /data/example.txt
+The time now is Thu Apr 1 11:25:56 AEDT 2021
+# exit
+$ cat ~/IOTstack/volumes/nodered/data/example.txt
+The time now is Thu Apr 1 11:25:56 AEDT 2021
+$ sudo rm ~/IOTstack/volumes/nodered/data/example.txt
+
In words:
+Open a shell into the Node-RED container. Two things happen:
+sudo
for anything.Use the echo
command to create a small file which embeds the current timestamp. The path is in the /data
directory which is mapped to the Raspberry Pi's file system.
exit
command and press return, or press Control+D.sudo
to do that because the persistent storage area at the external path is owned by root, and you are running as user "pi".You can do the same thing from within a Node-RED flow.
+ +The flow comprises:
+An Inject node, wired to a Template node.
+A Template node, wired to both a Debug node and a File node. The template field is set to:
+The time at the moment is {{payload}} seconds since 1/1/1970 UTC !
+
{{payload}}
with the seconds value supplied by the Inject node.A Debug node.
+A File node. The "Filename" field of the node is set to write to the path:
+/data/flow-example.txt
+
/data
is an internal path within the Node-RED container.Deploying the flow and clicking on the Inject node results in the debug message shown on the right hand side of the screen shot. The embedded terminal window shows that the same information is accessible from outside the container.
+You can reverse this process. Any file you place within the path ~/IOTstack/volumes/nodered/data
can be read by a "File in" node.
A reasonably common requirement in a Node-RED flow is the ability to execute a command on the host system. The standard tool for this is an "exec" node.
+An "exec" node works as expected when Node-RED is running as a native service but not when Node-RED is running in a container. That's because the command spawned by the "exec" node runs inside the container.
+To help you understand the difference, consider this command:
+$ grep "^PRETTY_NAME=" /etc/os-release
+
When you run that command on a Raspberry Pi outside container-space, the answer will be something like:
+PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
+
If you run the same command inside a Node-RED container, the output will reflect the operating system upon which the container is based, such as:
+PRETTY_NAME="Alpine Linux v3.16"
+
The same thing will happen if a Node-RED "exec" node executes that grep
command when Node-RED is running in a container. It will see the "Alpine Linux" answer.
Docker doesn't provide any mechanism for a container to execute an arbitrary command outside of its container. A workaround is to utilise SSH. This remainder of this section explains how to set up the SSH scaffolding so that "exec" nodes running in a Node-RED container can invoke arbitrary commands outside container-space.
+Be able to use a Node-RED "exec" node to perform the equivalent of:
+$ ssh host.docker.internal «COMMAND»
+
where «COMMAND»
is any command known to the target host.
This section uses host.docker.internal
throughout. That name comes from method 2 of bridge network - default gateway but, in principle, you can refer to the host using any mechanism described in referring to the host.
These instructions are specific to IOTstack but the underlying concepts should apply to any installation of Node-RED in a Docker container.
+These instructions make frequent use of the ability to run commands "inside" the Node-RED container. For example, suppose you want to execute:
+$ grep "^PRETTY_NAME=" /etc/os-release
+
You have several options:
+You can do it from the normal Raspberry Pi command line using a Docker command. The basic syntax is:
+$ docker exec {-it} «containerName» «command and parameters»
+
The actual command you would need would be:
+$ docker exec nodered grep "^PRETTY_NAME=" /etc/os-release
+
Note:
+-it
flags are optional. They mean "interactive" and "allocate pseudo-TTY". Their presence tells Docker that the command may need user interaction, such as entering a password or typing "yes" to a question.You can open a shell into the container, run as many commands as you like inside the container, and then exit. For example:
+$ docker exec -it nodered bash
+# grep "^PRETTY_NAME=" /etc/os-release
+# whoami
+# exit
+$
+
In words:
+bash
shell inside the Node-RED container. You need to be able to interact with the shell to type commands so the -it
flag is required.bash
running inside the container. It also signals that you are running as the root user inside the container.grep
, whoami
and any other commands.exit
command (or Control+D).Run the command from Portainer by selecting the container, then clicking the ">_ console" link. This is identical to opening a shell.
+Create a key-pair for Node-RED. This is done by executing the ssh-keygen
command inside the container:
$ docker exec -it nodered ssh-keygen -q -t ed25519 -C "Node-RED container key-pair" -N ""
+
Notes:
+ssh-keygen
displays an "Overwrite (y/n)?" message, it implies that a key-pair already exists. You will need to decide what to do:Node-RED's public key needs to be copied to the "pi" user account on the host where you want a Node-RED "exec" node to be able to execute commands. At the same time, the Node-RED container needs to learn the host's public key. The ssh-copy-id
command does both steps. The command is:
$ docker exec -it nodered ssh-copy-id pi@host.docker.internal
+
The output will be something similar to the following:
+/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_ed25519.pub"
+The authenticity of host 'host.docker.internal (172.17.0.1)' can't be established.
+ED25519 key fingerprint is SHA256:gHMlhvArbUPJ807vh5qNEuyRCeNUQQTKEkmDS6qKY6c.
+This key is not known by any other names
+Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+
Respond to the prompt by typing "yes" and pressing return.
+The output continues:
+/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+expr: warning: '^ERROR: ': using '^' as the first character
+of a basic regular expression is not portable; it is ignored
+/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
+pi@host.docker.internal's password:
+
Enter the password you use to login as "pi" on the host and press return.
+Normal completion looks similar to this:
+Number of key(s) added: 1
+
+Now try logging into the machine, with: "ssh 'pi@host.docker.internal'"
+and check to make sure that only the key(s) you wanted were added.
+
If you do not see an indication that a key has been added, you may need to retrace your steps.
+The output above recommends a test. The test needs to be run inside the Node-RED container so the syntax is:
+$ docker exec -it nodered ssh pi@host.docker.internal ls -1 /home/pi/IOTstack
+
You should not be prompted for a password. If you are, you may need to retrace your steps.
+If everything works as expected, you should see a list of the files in your IOTstack folder.
+Assuming success, think about what just happened? You told SSH inside the Node-RED container to run the ls
command outside the container on your Raspberry Pi. You broke through the containerisation.
Six files are relevant to Node-RED's ability to execute commands outside of container-space:
+in /etc/ssh
:
ssh_host_ed25519_key
is the Raspberry Pi's private host keyssh_host_ed25519_key.pub
is the Raspberry Pi's public host key
Those keys were created when your Raspberry Pi was initialised. They are unique to the host.
+Unless you take precautions, those keys will change whenever your Raspberry Pi is rebuilt from scratch and that will prevent a Node-RED "exec" node from being able to invoke SSH to call out of the container.
+You can recover by re-running ssh-copy-id
.
in ~/IOTstack/volumes/nodered/ssh
:
id_ed25519
is the Node-RED container's private key id_ed25519.pub
is the Node-RED container's public key
Those keys were created when you generated the SSH key-pair for Node-RED.
+They are unique to Node-RED but will follow the container in backups and will work on the same machine, or other machines, if you restore the backup.
+It does not matter if the Node-RED container is rebuilt or if a new version of Node-RED comes down from DockerHub. These keys will remain valid until lost or overwritten.
+If you lose or destroy these keys, that will prevent a Node-RED "exec" node from being able to invoke SSH to call out of the container.
+You can recover by generating new keys and then re-running ssh-copy-id
.
known_hosts
The known_hosts
file contains a copy of the Raspberry Pi's public host key. It was put there by ssh-copy-id
.
If you lose this file or it gets overwritten, invoking SSH inside the container will still work but it will re-prompt for authorisation to connect. You will see the prompt if you run commands via docker exec -it
but not when invoking SSH from an "exec" node.
Note that authorising the connection at the command line ("Are you sure you want to continue connecting?") will auto-repair the known_hosts
file.
in ~/.ssh/
:
authorized_keys
That file contains a copy of the Node-RED container's public key. It was put there by ssh-copy-id
.
Pay attention to the path. It implies that there is one authorized_keys
file per user, per target host.
If you lose this file or it gets overwritten, SSH will still work but will ask for the password for "pi". This works when you are running commands from docker exec -it
but not when invoking SSH from an "exec" node.
Note that providing the correct password at the command line will auto-repair the authorized_keys
file.
SSH running inside the Node-RED container uses the Node-RED container's private key to provide assurance to SSH running outside the container that it (the Node-RED container) is who it claims to be.
+SSH running outside container-space verifies that assurance by using its copy of the Node-RED container's public key in authorized_keys
.
SSH running outside container-space uses the Raspberry Pi's private host key to provide assurance to SSH running inside the Node-RED container that it (the RPi) is who it claims to be.
+SSH running inside the Node-RED container verifies that assurance by using its copy of the Raspberry Pi's public host key stored in known_hosts
.
You don't have to do this step but it will simplify your exec node commands and reduce your maintenance problems if you do.
+At this point, SSH commands can be executed from inside the container using this syntax:
+# ssh pi@host.docker.internal «COMMAND»
+
A config
file is needed to achieve the task goal of the simpler syntax:
# ssh host.docker.internal «COMMAND»
+
The goal is to set up this file:
+-rw-r--r-- 1 root root ~/IOTstack/volumes/nodered/ssh/config
+
The file needs the ownership and permissions shown. There are several ways of going about this and you are free to choose the one that works for you. The method described here creates the file first, then sets correct ownership and permissions, and then moves the file into place.
+Start in a directory where you can create a file without needing sudo
. The IOTstack folder is just as good as anywhere else:
$ cd ~/IOTstack
+$ touch config
+
Select the following text, copy it to the clipboard.
+host host.docker.internal
+ user pi
+ IdentitiesOnly yes
+ IdentityFile /root/.ssh/id_ed25519
+
Open ~/IOTstack/config
in your favourite text editor and paste the contents of the clipboard. Save the file. Change the config file's ownership and permissions, and move it into the correct directory:
$ chmod 644 config
+$ sudo chown root:root config
+$ sudo mv config ./volumes/nodered/ssh
+
The previous test used this syntax:
+$ docker exec nodered ssh pi@host.docker.internal ls -1 /home/pi/IOTstack
+
Now that the config file is in place, the syntax changes to:
+$ docker exec nodered ssh host.docker.internal ls -1 /home/pi/IOTstack
+
The result should be the same as the earlier test.
+In the Node-RED GUI:
+Open the first "exec" node and:
+set the "command" field to:
+grep "^PRETTY_NAME=" /etc/os-release
+
ssh host.docker.internal grep "^PRETTY_NAME=" /etc/os-release
+
Click the Deploy button.
+Inspect the result in the debug panel. You should see payload differences similar to the following:
+PRETTY_NAME="Alpine Linux v3.16""
+PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
+
The first line is the result of running the command inside the Node-RED container. The second line is the result of running the same command outside the Node-RED container on the Raspberry Pi.
+Use these commands to:
+$ cd ~/IOTstack
+$ docker-compose up -d nodered
+
The first time you execute this command, the base image of Node-RED is downloaded from DockerHub, and then the Dockerfile is run to produce a local image. The local image is then instantiated to become the running container.
+To stop the running container:
+$ cd ~/IOTstack
+$ docker-compose down nodered
+
++see also if downing a container doesn't work
+
Alternatively, you can stop the entire stack:
+$ cd ~/IOTstack
+$ docker-compose down
+
The restart
command sends a signal to the processes running within the container. The container itself does not stop.
$ cd ~/IOTstack
+$ docker-compose restart nodered
+
You need to rebuild the local image if you do any of the following:
+DOCKERHUB_TAG
or EXTRA_PACKAGES
) in your Compose file.To rebuild your local image:
+$ cd ~/IOTstack
+$ docker-compose up --build -d nodered
+$ docker system prune -f
+
Think of these commands as "re-running the Dockerfile". The only time a base image will be downloaded from DockerHub is when a base* image with a tag matching the value of DOCKERHUB_TAG
can't be found on your Raspberry Pi.
Your existing Node-RED container continues to run while the rebuild proceeds. Once the freshly-built local image is ready, the up
tells docker-compose
to do a new-for-old swap. There is barely any downtime for your Node-RED service.
IOTstack provides a convenience script which can help you work out if a new version of Node-RED is available. You can run it like this:
+$ ~/IOTstack/scripts/nodered_version_check.sh
+
The script is not infallible. It works by comparing the version number in the Node-RED image on your system with a version number stored on GitHub.
+GitHub is always updated before a new image appears on DockerHub. Sometimes there is a delay of weeks between the two events. For that reason, the script should be viewed more like a meteorological forecast than hard fact.
+The script assumes that your local image builds as iotstack-nodered:latest
. If you use different tags, you can pass that information to the script. Example:
$ ~/IOTstack/scripts/nodered_version_check.sh iotstack-nodered:3.0.2
+
The only way to know, for certain, when an update to Node-RED is available is to check the nodered/node-red tags page on DockerHub.
+Once a new version appears on DockerHub, you can upgrade Node-RED like this:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull nodered
+$ docker-compose up -d nodered
+$ docker system prune -f
+
Breaking it down into parts:
+build
causes the named container to be rebuilt;--no-cache
tells the Dockerfile process that it must not take any shortcuts. It really must rebuild the local image;--pull
tells the Dockerfile process to actually check with DockerHub to see if there is a later version of the base image and, if so, to download it before starting the build;nodered
is the named container argument required by the build
command.Your existing Node-RED container continues to run while the rebuild proceeds. Once the freshly-built local image is ready, the up
tells docker-compose
to do a new-for-old swap. There is barely any downtime for your Node-RED service.
The prune
is the simplest way of cleaning up old images. Sometimes you need to run this twice, the first time to clean up the old local image, the second time for the old base image. Whether an old base image exists depends on the version of docker-compose
you are using and how your version of docker-compose
builds local images.
node.js
versions¶You can use the npm version
command to check which versions of Node-RED and node.js
are running in your container:
$ docker exec nodered npm version
+{
+ 'node-red-docker': '2.2.2',
+ npm: '6.14.15',
+ ares: '1.18.1',
+ brotli: '1.0.9',
+ cldr: '37.0',
+ http_parser: '2.9.4',
+ icu: '67.1',
+ llhttp: '2.1.4',
+ modules: '72',
+ napi: '8',
+ nghttp2: '1.41.0',
+ node: '12.22.8',
+ openssl: '1.1.1m',
+ tz: '2019c',
+ unicode: '13.0',
+ uv: '1.40.0',
+ v8: '7.8.279.23-node.56',
+ zlib: '1.2.11'
+}
+
In the above:
+'node-red-docker': '2.2.2'
indicates that version 2.2.2 of Node-RED is running. This is the version number you see at the bottom of the main menu when you click on the "hamburger" icon ("≡") at the top, right of the Node-Red window in your browser.node: '12.22.8'
indicates that version 12.x of node.js
is installed.IOTstack uses a service definition for Node-RED that includes these lines:
+3 +4 +5 +6 |
|
++If you do not see this structure in your Compose file, refer to updating to July 2022 syntax.
+
The value of the DOCKERHUB_TAG
gives you the ability to control, from your Compose file, which versions of Node-RED and node.js
run within your Node-RED container.
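The fragment below is a sketch of that build clause (lines 3 through 6 of the default service definition), assuming the standard July 2022 IOTstack template; the exact context path in your own Compose file may differ:
    build:
      context: ./services/nodered/.
      args:
        - DOCKERHUB_TAG=latest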
The allowable values of DOCKERHUB_TAG
can be found on the DockerHub Node-RED tags page. The table below contains examples of tags that were available on DockerHub at the time of writing (2022-07-06):
tag | Node-RED version | node.js version
---|---|---
latest | 2.2.2 | 14.x
latest-14 | 2.2.2 | 14.x 📌
2.2.2 | 2.2.2 📌 | 14.x
2.2.2-14 | 2.2.2 📌 | 14.x 📌
Interpreting the tag:
+The sub-string to the left of the hyphen determines the version of Node-RED:
+The sub-string to the right of the hyphen determines the version of node.js
:
node.js
version 14.x and pins your container to that specific version of node.js
.node.js
can change any time you follow the process to upgrade Node-RED.In short:
+IOTstack defaults to "latest". Although this appears to cede control to the maintainers of the DockerHub images, in practice it is no different to any other container where you pull its image directly from DockerHub using the latest
tag (irrespective of whether latest
is explicit or implied by omission).
The DOCKERHUB_TAG
argument for Node-RED merely gives you the ability to pin to specific versions of Node-RED from within your Compose file, in the same way as you can use tags on image
directives for other containers.
For example, suppose you wanted to pin to Node-RED version 2.2.2 with node.js
version 12:
Edit your Compose file so that the DOCKERHUB_TAG
looks like this:
- DOCKERHUB_TAG=2.2.2-12
+
Run the re-building the local Node-RED image commands.
+Changing a pinned version and rebuilding may result in a new base image being downloaded from DockerHub.
+You can install components by adjusting the Node-RED Dockerfile. This can be done by:
+Using the IOTstack menu limits your choice of components to those presented in the menu. Editing the Dockerfile with a text editor is more flexible but carries the risk that your changes could be lost if you subsequently use the menu method.
+To apply changes made to your Dockerfile, run the re-building the local Node-RED image commands.
+You can add, remove or update components in Manage Palette. Node-RED will remind you to restart Node-RED and that is something you have to do by hand:
+$ cd ~/IOTstack
+$ docker-compose restart nodered
+
Note:
+Some users have reported misbehaviour from Node-RED if they do too many iterations of:
+It is better to make all the changes you intend to make, and only then restart Node-RED.
+npm
¶You can also run npm
inside the container to install any component that could be installed by npm
in a non-container environment. This is the basic syntax:
$ cd ~/IOTstack
+$ docker exec -w /data nodered npm «command» «arguments…»
+$ docker-compose restart nodered
+
Examples:
+To add the "find my iphone" node:
+$ docker exec -w /data nodered npm install find-my-iphone-node
+$ docker-compose restart nodered
+
To remove the "find my iphone" node:
+$ docker exec -w /data nodered npm uninstall find-my-iphone-node
+$ docker-compose restart nodered
+
Note:
+-w /data
on each command. Any formula you find on the web will not include this. You have to remember to do it yourself!--save
flag on the npm
command. That flag is not needed (it is ignored because the behaviour it used to control has been the default since NPM version 5. Node-RED containers have been using NPM version 6 for some time.You can use this approach if you need to force the installation of a specific version (which you don't appear to be able to do in Manage Palette). For example, to install version 4.0.0 of the "moment" node:
+$ docker exec -w /data nodered npm install node-red-contrib-moment@4.0.0
+$ docker-compose restart nodered
+
In terms of outcome, there is no real difference between the various methods. However, some nodes (eg "node-red-contrib-generic-ble" and "node-red-node-sqlite") must be installed by Dockerfile. The only way of finding out if a component must be installed via Dockerfile is to try Manage Palette and find that it doesn't work.
+Aside from the exception cases that require Dockerfile or where you need to force a specific version, it is quicker to install nodes via Manage Palette and applying updates is a bit easier too. But it's really up to you.
+If you're wondering about "backup", nodes installed via:
+npm install
– explicitly backed up when the ~/IOTstack/volumes
directory is backed-up.Basically, if you're running IOTstack backups then your add-on nodes will be backed-up.
+Components that are installed via Dockerfile wind up at the internal path:
+/usr/src/node-red
+
Components installed via Manage Palette or docker exec -w /data
wind up at the internal path:
/data
+
which is the same as the external path:
+~/IOTstack/volumes/nodered/data
+
Because there are two places, this invites the question of what happens if a given component is installed in both? The answer is that components installed in /data
take precedence.
Or, to put it more simply: in any contest between methods, Dockerfile comes last.
+Sometimes, even when you are 100% certain that you didn't do it, a component will turn up in both places. There is probably some logical reason for this but I don't know what it is.
+The problem this creates is that a later version of a component installed via Dockerfile will be blocked by the presence of an older version of that component installed by a different method.
+The nodered_list_installed_nodes.sh
script helps discover when this situation exists. For example:
$ nodered_list_installed_nodes.sh
+
+Fetching list of candidates installed via Dockerfile
+
+Components built into the image (via Dockerfile)
+ ACTIVE: node-red-admin
+ ACTIVE: node-red-configurable-ping
+ ACTIVE: node-red-contrib-boolean-logic
+ ACTIVE: node-red-contrib-generic-ble
+ ACTIVE: node-red-contrib-influxdb
+ ACTIVE: node-red-dashboard
+ BLOCKED: node-red-node-email
+ ACTIVE: node-red-node-pi-gpiod
+ ACTIVE: node-red-node-rbe
+ ACTIVE: node-red-node-sqlite
+ ACTIVE: node-red-node-tail
+
+Fetching list of candidates installed via Manage Palette or npm
+
+Components in persistent store at
+ /home/pi/IOTstack/volumes/nodered/data/node_modules
+ node-red-contrib-boolean-logic-ultimate
+ node-red-contrib-chartjs
+ node-red-node-email
+ node-red-contrib-md5
+ node-red-contrib-moment
+ node-red-contrib-pushsafer
+
Notice how the node-red-node-email
instance installed in the Dockerfile is being blocked. To fix this problem:
$ cd ~/IOTstack
+$ docker exec -w /data nodered npm uninstall node-red-node-email
+$ docker-compose restart nodered
+
As well as providing the Node-RED service, the nodered container is an excellent testbed. Installing the DNS tools, Mosquitto clients and tcpdump will help you to figure out what is going on inside container-space.
+There are two ways to add extra packages. The first method is to add them to the running container. For example, to add the Mosquitto clients:
+$ docker exec nodered apk add --no-cache mosquitto-clients
+
++The "apk" implies that the Node-RED container is based on Alpine Linux. Keep that in mind when you search for instructions on installing packages.
+
Packages installed this way will persist until the container is re-created (eg a down
and up
of the stack, or a reboot of your Raspberry Pi). This is a good choice if you only want to run a quick experiment.
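For example, with the Mosquitto clients added you can publish a test message from inside the Node-RED container. This assumes your stack also includes the Mosquitto broker under its default container name of mosquitto; adjust the host and topic to suit:
$ docker exec nodered mosquitto_pub -h mosquitto -t "test/topic" -m "hello"
You can watch the message arrive with an MQTT-in node in a flow, or by running mosquitto_sub in another terminal.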
The second method adds the packages to your local image every time you rebuild. Because the packages are in the local image, they are always in the running container. For example, to include the Mosquitto clients in every build:
+Edit your Compose file to include the package on the EXTRA_PACKAGES
argument:
- EXTRA_PACKAGES=mosquitto-clients
+
++If you do not see the
+EXTRA_PACKAGES
argument in your Compose file, refer to updating to July 2022 syntax.
Rebuild your local image by running the re-building the local Node-RED image commands.
+You can specify multiple packages on the same line. For example:
+- EXTRA_PACKAGES=mosquitto-clients bind-tools tcpdump
+
Notes:
+The primary benefit of the new syntax is that you no longer risk the IOTstack menu overwriting any custom changes you may have made to your Node-RED Dockerfile.
+If you install a clean copy of IOTstack, run the menu, enable Node-RED and select one or more add-on nodes then both your Compose file and Dockerfile will use the latest syntax automatically.
+If you have an older version of IOTstack installed, the syntax used in your Compose file and Dockerfile will depend on when you last ran the menu and manipulated Node-RED.
+To avoid any uncertainties, you can use a text editor to update your existing Compose file and Dockerfile to adopt the latest syntax.
+Step 1: Implement the new syntactic scaffolding:
+The first three lines of the old syntax are:
+1 +2 +3 |
|
Replace line 3 (the one-line build:
directive) with the following lines:
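As a sketch, and assuming the standard IOTstack template (your own service definition may differ in detail), the change replaces the one-line build: directive with a multi-line build clause:
  # old syntax (one-line build directive at line 3)
  nodered:
    container_name: nodered
    build: ./services/nodered/.

  # new (July 2022) syntax (lines 3 through 7)
  nodered:
    container_name: nodered
    build:
      context: ./services/nodered/.
      args:
        - DOCKERHUB_TAG=latest
        - EXTRA_PACKAGES=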
Step 2: Pin to the desired version (optional):
+If your existing Dockerfile pins to a specific version, edit the value of DOCKERHUB_TAG
(line 6 of your updated Compose file) to use the tag from your Dockerfile. For example, if your existing Dockerfile begins with:
FROM nodered/node-red:latest-12
+
then line 6 of your Compose file should be:
- DOCKERHUB_TAG=latest-12
Note:
+latest-12
in March 2021. The default for July 2022 syntax is latest
. At the time of writing, that is the same as latest-14
, which is what is recommended by Node-RED. If any of your flows has a dependence on node.js
version 12 (or if you do not want to take the risk), use latest-12
.Step 3: Define extra packages (optional):
+If your existing Dockerfile includes extra packages, edit the value of EXTRA_PACKAGES
(line 7 of your updated Compose file) to list the same packages. For example, if your existing Dockerfile includes:
RUN apk update && apk add --no-cache eudev-dev mosquitto-clients bind-tools tcpdump
+
then everything after eudev-dev
should appear on line 7 of your Compose file:
- EXTRA_PACKAGES=mosquitto-clients bind-tools tcpdump
Notes:
You do not need to include eudev-dev (it is specified in the updated Dockerfile).
The first four lines of your existing Dockerfile will have a structure similar to this:
+1 +2 +3 +4 |
|
++The actual text will depend on whether you have modified the tag in the first line or added extra packages to the third line.
+
Replace the first four lines of your Dockerfile with the following lines:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 |
|
All remaining lines of your original Dockerfile should be left as-is.
+Run the re-building the local Node-RED image commands.
+The first part of IOTstack's default service definition for Node-RED is shown at IOTstack first run. Although it is not immediately obvious, this results in a container which is based on the Alpine Linux distribution. You can confirm this by running:
+$ docker exec nodered grep "PRETTY_NAME" /etc/os-release
+PRETTY_NAME="Alpine Linux v3.20"
+
Historically, Node-RED has been distributed on DockerHub as two distinct sets of images: one set based on Alpine Linux, the other based on Debian.
+In general, Node-RED images have tracked Alpine releases more consistently than they have Debian. For example, at the time of writing (July 2024):
Image Tag | Distro | Image OS | Current
---|---|---|---
latest | Alpine | v3.20 | v3.20
latest-debian | Debian | 11 (bullseye) | 12 (bookworm)
In addition, Node-RED images based on Alpine have offered a greater range of options when it comes to the embedded version of Node.js. At the time of writing:
+latest-18
, latest-20
and latest-22
, implying a choice of Node.js versions 18, 20 and 22, with version 20 being the default; whilelatest-debian
which comes with Node.js version 20.Naturally, this situation could change at any time! This information is only here to make the point that, historically, Node-RED images based on Debian have lagged behind Alpine and have only supported a single version of Node.js. This is also the main reason why IOTstack defaults to Alpine images.
+However, there may be circumstances where you decide it is appropriate to run a Node-RED image based on Debian. The purpose of this section is not to explore scenarios nor weigh the pros and cons, merely to explain how to adapt your Node-RED service definition to accomplish it. Proceed as follows:
+Make a copy of your existing Dockerfile:
+$ cd ~/IOTstack/services/nodered
+$ cp Dockerfile Debian.Dockerfile
+
The reason for making a copy is to preserve your existing (Alpine-aware) Dockerfile so you can easily switch back if you break something.
+Open Debian.Dockerfile
in a text editor and make the following changes:
Find the line:
+4 |
|
Replace that line with:
+4 |
|
Find the line:
+15 |
|
Replace that line with:
+15 |
|
apk
is the Alpine package manager whereas apt
is the Debian package manager.
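The exact content of those lines depends on the version of the Dockerfile you started from, but the nature of the edit is an Alpine-style package installation becoming its Debian equivalent, along these lines (a sketch only, not the literal file contents):
# assumed Alpine form (apk)
RUN apk update && apk add --no-cache ${EXTRA_PACKAGES}
# assumed Debian form (apt)
RUN apt-get update && apt-get install -y ${EXTRA_PACKAGES}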
Save your work.
+Make a copy of your existing compose file:
+$ cd ~/IOTstack
+$ cp docker-compose.yml docker-compose.yml.bak
+
The reason for making a copy is to preserve your existing (Alpine-aware) service definition so you can easily switch back if you break something.
+Open docker-compose.yml
in a text editor and make the following changes:
Change the Node-RED build
clause so that it looks like this:
3 +4 +5 +6 +7 +8 |
|
There are two key edits:
+dockerfile
line (as line 5).DOCKERHUB_TAG
argument from latest
to latest-debian
(line 7).If you have any EXTRA_PACKAGES
specified, you will need to allow for any package-name differences between Alpine and Debian. For example, suppose you are using this list of extra packages with Alpine:
8 |
|
The mosquitto-clients
, tcpdump
and tree
packages have the same names in the apk
(Alpine) package manager as they do in apt
(Debian) whereas bind-tools
is named dnsutils
in the Debian repositories. Thus the extra packages list for a Debian build would need to be:
8 |
|
Save your work.
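Taken together, and assuming the standard IOTstack layout, the edited build clause should end up looking something like this (your EXTRA_PACKAGES list, if any, will differ):
    build:
      context: ./services/nodered/.
      dockerfile: Debian.Dockerfile
      args:
        - DOCKERHUB_TAG=latest-debian
        - EXTRA_PACKAGES=mosquitto-clients tcpdump tree dnsutils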
+Rebuild Node-RED:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull nodered
+
If the build process reports any errors, go back and check your work.
+Start the new container:
+$ docker-compose up -d nodered
+
Check that the new container is running properly and hasn't gone into a restart loop:
+$ docker ps -a --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Size}}" --filter name=nodered
+NAMES CREATED STATUS SIZE
+nodered 32 seconds ago Up 31 seconds (healthy) 0B (virtual 945MB)
+
Providing the STATUS column reports "healthy" after roughly 30 seconds of runtime, it is usually safe to assume that the container is behaving normally.
+Verify the base Linux distribution being used by the container:
+$ docker exec nodered grep "PRETTY_NAME" /etc/os-release
+PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
+
Check your Node-RED and Node.js versions:
+$ docker exec nodered npm version --json | jq -r '[.["node-red-docker"],.["node"]] | @tsv'
+4.0.2 20.15.0
+
Interpretation: the container is running Node-RED version 4.0.2 on Node.js version 20.15.0 (the two values reported by the command above).
+The actual version numbers you see in the last two steps will depend (obviously) on whatever the good folks who maintain Node-RED thought was appropriate at the time they released whatever latest-debian
variant is present on DockerHub at the moment when you perform the migration.
Please keep in mind that none of this affects the IOTstack menu. Re-running the menu is likely to revert your Node-RED service definition to be based on Alpine images.
+ + + + + + + + + + + + + +The first time you try to bring up the OctoPrint container, you should expect to see the following error:
+parsing ~/IOTstack/docker-compose.yml: error while interpolating services.octoprint.devices.[]: required variable OCTOPRINT_DEVICE_PATH is missing a value: eg echo OCTOPRINT_DEVICE_PATH=/dev/serial0 >>~/IOTstack/.env
+
The message is telling you that you need to define the device path to your 3D Printer.
+You need to work out how your printer presents itself and define the external device accordingly.
+/dev/ttyUSBn
¶Using "ttyUSBn" will "work" but, because of the inherent variability in the name, this approach is not recommended.
+The "n" in the "ttyUSBn" can vary depending on which USB devices are attached to your Raspberry Pi and the order in which they are attached. The "n" may also change as you add and remove devices.
+If the OctoPrint container is up when the device number changes, the container will crash, and it will either go into a restart loop if you try to bring it up when the expected device is not "there", or will try to communicate with a device that isn't your 3D printer.
+Suppose you choose this method and your 3D Printer mounts as /dev/ttyUSB0
, you would define your printer like this:
$ echo OCTOPRINT_DEVICE_PATH=/dev/ttyUSB0 >>~/IOTstack/.env
+
/dev/serial/by-id/xxxxxxxx
¶The "xxxxxxxx" is (usually) unique to your 3D printer. To find it, connect your printer to your Raspberry Pi, then run the command:
+$ ls -1 /dev/serial/by-id
+
You will get an answer like this:
+usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_3b14eaa48a154d5e87032d59459d5206-if00-port0
+
Suppose you choose this method and your 3D Printer mounts as shown above. You would define your printer like this:
+$ echo OCTOPRINT_DEVICE_PATH=/dev/serial/by-id/usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_3b14eaa48a154d5e87032d59459d5206-if00-port0 >>~/IOTstack/.env
+
Note:
+/dev/humanReadableName
¶Suppose your 3D printer is a MasterDisaster5000Pro, and that you would like to be able to set up the device to use a human-readable name like:
+/dev/MasterDisaster5000Pro
+
Start by disconnecting your 3D printer from your Raspberry Pi. Next, run this command:
+$ tail -f /var/log/messages
+
Connect your 3D printer and observe the log output. You are interested in messages that look like this:
+mmm dd hh:mm:ss mypi kernel: [423839.626522] cp210x 1-1.1.3:1.0: device disconnected
+mmm dd hh:mm:ss mypi kernel: [431265.973308] usb 1-1.1.3: new full-speed USB device number 10 using dwc_otg
+mmm dd hh:mm:ss mypi kernel: [431266.109418] usb 1-1.1.3: New USB device found, idVendor=dead, idProduct=beef, bcdDevice= 1.00
+mmm dd hh:mm:ss mypi kernel: [431266.109439] usb 1-1.1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
+mmm dd hh:mm:ss mypi kernel: [431266.109456] usb 1-1.1.3: Product: CP2102N USB to UART Bridge Controller
+mmm dd hh:mm:ss mypi kernel: [431266.109471] usb 1-1.1.3: Manufacturer: Silicon Labs
+mmm dd hh:mm:ss mypi kernel: [431266.109486] usb 1-1.1.3: SerialNumber: cafe80facefeed
+mmm dd hh:mm:ss mypi kernel: [431266.110657] cp210x 1-1.1.3:1.0: cp210x converter detected
+mmm dd hh:mm:ss mypi kernel: [431266.119225] usb 1-1.1.3: cp210x converter now attached to ttyUSB0
+
and, in particular, these two lines:
+… New USB device found, idVendor=dead, idProduct=beef, bcdDevice= 1.00
+… SerialNumber: cafe80facefeed
+
Terminate the tail
command by pressing Control+C.
Use this line as a template:
+SUBSYSTEM=="tty", ATTRS{idVendor}=="«idVendor»", ATTRS{idProduct}=="«idProduct»", ATTRS{serial}=="«SerialNumber»", SYMLINK+="«sensibleName»"
+
Replace the «delimited» values with those you see in the log output. For example, given the above log output, and the desire to associate your 3D printer with the human-readable name of "MasterDisaster5000Pro", the result would be:
+SUBSYSTEM=="tty", ATTRS{idVendor}=="dead", ATTRS{idProduct}=="beef", ATTRS{serial}=="cafe80facefeed", SYMLINK+="MasterDisaster5000Pro"
+
Next, ensure the required file exists by executing the following command:
+$ sudo touch /etc/udev/rules.d/99-usb-serial.rules
+
++If the file does not exist already, the
+touch
command creates an empty file, owned by root, with mode 644 (rw-r--r--) permissions (all of which are correct).
Use sudo
and your favourite text editor to edit /etc/udev/rules.d/99-usb-serial.rules
and insert the "SUBSYSTEM==" line you prepared earlier into that file, then save the file.
++Rules files are read on demand so there is no
+start
orreload
command to execute.
Check your work by disconnecting, then re-connecting your 3D printer, and then run:
+$ ls /dev
+
You should expect to see the human-readable name you chose in the list of devices.
+You would then define your printer like this:
+$ echo OCTOPRINT_DEVICE_PATH=/dev/MasterDisaster5000Pro >>~/IOTstack/.env
+
Notes:
+99-usb-serial.rules
file that you install on all of your Raspberry Pis. Then, you can attach a named device to any of your Raspberry Pis and it will always get the same name./dev/video0:/dev/video0
mapping¶By default, video camera support is disabled. This is because it is unsafe to assume a camera is present on /dev/video0
.
++See the Webcams topic of the Octoprint Community Forum for help configuring other kinds of cameras.
+
The OctoPrint docker image includes an MJPG streamer. You do not need to run another container with a streamer unless you want to.
+To activate a Raspberry Pi camera attached via ribbon cable:
+/dev/video0
.Edit docker-compose.yml
and uncomment all of the commented-out lines in the following:
environment:
+ # - ENABLE_MJPG_STREAMER=true
+ # - MJPG_STREAMER_INPUT=-r 640x480 -f 10 -y
+ # - CAMERA_DEV=/dev/video0
+
+devices:
+ # - /dev/video0:/dev/video0
+
Note:
+CAMERA_DEV
environment variable corresponds with the right hand side (ie after the colon) of the device mapping. There should be no reason to change either.The "640x480" MJPG_STREAMER_INPUT
settings will probably result in your camera feed being "letterboxed" but they will get you started. A full list of options is at mjpg-streamer-configuration-options.
The typical specs for a baseline Raspberry Pi camera are:
+For that type of camera, the following is probably more appropriate:
+ - MJPG_STREAMER_INPUT=-r 1152x648 -f 10
+
The resolution of 1152x648 is 60% of 1080p 1920x1080 and does not cause letterboxing. The resolution and rate of 10 frames per second won't over-tax your communications links, and the camera is MJPEG-capable so it does not need the -y
option.
To start a print session:
+Bring up the container:
+$ cd ~/IOTstack
+$ docker-compose up -d octoprint
+
If you try to start the OctoPrint container before your 3D printer has been switched on and the USB interface has registered with the Raspberry Pi, the container will go into a restart loop.
+Use a browser to point to port 9980 on your Raspberry Pi. For example:
+http://raspberrypi.local:9980
+
This will launch the "Setup Wizard".
+Click the "Next" button until you reach the "Access Control" screen:
+At the "Online Connectivity Check" screen:
+At the "Configure Anonymous Usage Tracking" and "Configure plugin blacklist processing" screens:
+At the "Set up your printer profile" screen:
+At the "Server Commands" screen:
+Enter the following in the "Restart OctoPrint" field:
+s6-svc -r /var/run/s6/services/octoprint
+
Click "Next".
+At the "Webcam & Timelapse Recordings" screen, and assuming you are configuring a PiCamera:
+Enter the following in the "Stream URL" field:
+/webcam/?action=stream
+
Click the "Test" button to confirm that the camera is working, then click "Close".
+Enter the following in the "Snapshot URL" field:
+http://localhost:8080/?action=snapshot
+
Click the "Test" button to confirm that the camera is working, then click "Close".
+Enter the following in the "Path to FFMPEG" field:
+/usr/bin/ffmpeg
+
The expected result is the message "The path is valid".
+Click "Next".
+Click "Finish" then click the button to reload the user interface.
+Use a browser to point to port 9980 on your Raspberry Pi. For example:
+http://raspberrypi.local:9980
+
Supply your user credentials and login.
+OctoPrint will display numerous messages in popup windows. These generally fall into two categories:
+In general, you can ignore messages about updates. You will get all updates automatically the next time the octoprint-docker container is rebuilt and pushed to DockerHub.
+You can, if you wish, allow an update to proceed. It might be appropriate to do that if you want to test an update. Just be aware that:
+You can restart the OctoPrint service in two ways:
+Whichever method you choose will result in a refresh of the OctoPrint user interface and you will need to follow the prompts to reload your browser page.
+Run the following commands:
+$ cd ~/IOTstack
+$ docker-compose restart octoprint
+
From the "System" icon in the OctoPrint toolbar (looks like a power button symbol):
+Note:
+If you do not see the "System" icon in the toolbar, fix it line this:
+Enter the following into the "Restart OctoPrint" field:
+s6-svc -r /var/run/s6/services/octoprint
+
Click "Save".
+Unless you intend to leave your printer switched on 24 hours a day, you will also need to be careful when you switch off the printer:
+Terminate the container:
+$ cd ~/IOTstack
+$ docker-compose stop octoprint
+$ docker-compose rm -f octoprint
+
Turn the 3D printer off.
+If you turn the printer off without terminating the container, you will crash the container.
+You can view the video feed independently of the OctoPrint web interface like this:
+http://raspberrypi.local:9980/webcam/?action=stream
+
OctoPrint assumes it is running "natively" rather than in a container. From a data-communications perspective, OctoPrint (the process running inside the OctoPrint container) sees itself as running on a computer attached to the internal Docker network. When you connect to OctoPrint's web interface from a client device attached to an external network, OctoPrint sees that your source IP address is not on the internal Docker network and it issues a security warning.
+To silence the warning:
+Terminate the container if it is running:
+$ cd ~/IOTstack
+$ docker-compose stop octoprint
+$ docker-compose rm -f octoprint
+
use sudo
and your favourite text editor to open the following file:
~/IOTstack/volumes/octoprint/octoprint/config.yaml
+
Implement the following pattern:
+server:
+ …
+ ipCheck:
+ enabled: true
+ trustedSubnets:
+ - 203.0.132.0/24
+
Notes:
+server:
, ipCheck:
and enabled:
directives may already be in place but the trustedSubnets:
directive may not be. Add it, and then add your local subnet(s) where you see the "192.168.1.0/24" example.Save the file.
+Bring up the container:
+$ cd ~/IOTstack
+$ docker-compose up -d octoprint
+
You can check for updates like this:
+$ cd ~/IOTstack
+$ docker-compose pull octoprint
+$ docker-compose up -d octoprint
+$ docker system prune
+
You can view a list of usernames like this:
+$ docker exec octoprint octoprint --basedir /octoprint/octoprint user list
+
To reset a user's password:
+Use the following line as a template and replace «username»
and «password»
with appropriate values:
$ docker exec octoprint octoprint --basedir /octoprint/octoprint user password --password «password» «username»
+
Execute the edited command. For example, to set the password for user "me" to "verySecure":
+$ docker exec octoprint octoprint --basedir /octoprint/octoprint user password --password verySecure me
+
Restart OctoPrint:
+$ cd ~/IOTstack
+$ docker-compose restart octoprint
+
Note:
+OctoPrint supports more than one username. To explore the further:
+$ docker exec octoprint octoprint --basedir /octoprint/octoprint user --help
+
If the OctoPrint container seems to be misbehaving, you can get a "clean slate" by:
+$ cd ~/IOTstack
+$ docker-compose stop octoprint
+$ docker-compose rm -f octoprint
+$ sudo rm -rf ./volumes/octoprint
+$ docker-compose up -d octoprint
+
The OctoPrint container is well-behaved and will re-initialise its persistent storage area correctly. OctoPrint will adopt "first run" behaviour and display the Setup Wizard.
+ + + + + + + + + + + + + + + + +openHAB runs in "host mode" so there are no port mappings. The default port bindings on IOTstack are:
+If you want to change either of the first two:
+Edit the openhab
fragment in docker-compose.yml
:
- OPENHAB_HTTP_PORT=4050
+ - OPENHAB_HTTPS_PORT=4051
+
Recreate the openHAB container:
+$ cd ~/IOTstack
+$ docker-compose up -d openhab
+
There do not appear to be any environment variables to control ports 8101 or 5007 so, if other containers you need to run also depend on those ports, you will have to figure out some way of resolving the conflict.
+Note:
+The original IOTstack documentation included:
+++openHAB has been added without Amazon Dashbutton binding.
+
but it is not clear if this is still the case.
+Amazon Dashbuttons have been discontinued so this may no longer be relevant.
+pgAdmin4 is a graphical user interface to PostgreSQL.
+The service definition includes the following lines:
+ image: gpongelli/pgadmin4-arm:latest-armv7
+ platform: linux/arm/v7
+# image: gpongelli/pgadmin4-arm:latest-armv8
+
The ARMv7 image is enabled by default. This will run on both 32-bit (ARMv7) and 64-bit (ARMv8) systems. The platform
clause silences warnings from docker-compose that arise when you try to run an ARMv7 image on ARMv8 architecture.
If you are running on a full 64-bit system, you should edit your service definition so that it looks like this:
+# image: gpongelli/pgadmin4-arm:latest-armv7
+# platform: linux/arm/v7
+ image: gpongelli/pgadmin4-arm:latest-armv8
+
The service definition includes the TZ
environment variable. It defaults to Etc/UTC
. You can either edit the environment variable directly in your compose file, or provide your own substitute by editing ~/IOTstack/.env
. Example:
$ cat ~/IOTstack/.env
+TZ=Australia/Sydney
+
These instructions assume you have selected the postgresql
container from the IOTstack menu, and that that container is running.
Complete the following steps:
+Use your web browser to connect to pgAdmin4 on port 5050
. For example:
http://raspberrypi.local:5050
The pgAdmin4 service takes a while to start so please be patient if you have only just launched the container. Once your browser is able to connect to pgAdmin4 successfully, the home screeen will be displayed, overlaid with a prompt to enter a master password:
+ +Enter a master password.
+Click "Add New Server". This displays the server registration sheet:
+ +Give the server a name. The name is not important. It just needs to be meaningful to you.
+Click the "Connection" tab:
+ +Enter the name of the PostgreSQL container (ie "postgres").
+POSTGRES_DB
environment variable as it applies to the PostgreSQL container.POSTGRES_USER
environment variable as it applies to the PostgreSQL container.POSTGRES_PASSWORD
environment variable as it applies to the PostgreSQL container.Keep in mind that the values of the environment variables you set in steps 9, 10 and 11 only apply the first time you launch the PostgreSQL container. If you change any of these in PostgreSQL, you will have to make matching changes in pgAdmin4.
+ + + + + + + + + + + + + +Pi-hole is a fantastic utility to reduce ads.
+In conjunction with controls in Pi-hole's web GUI, environment variables govern much of Pi-hole's behaviour.
+If you are running new menu (master branch), environment variables are inline in your compose file. If you are running old menu, the variables will be in:
+~/IOTstack/services/pihole/pihole.env
+
++There is nothing about old menu which requires the variables to be stored in the
+pihole.env
file. You can migrate everything todocker-compose.yml
if you wish.
Pi-hole's authoritative list of environment variables can be found here. Although many of Pi-hole's options can be set through its web GUI, there are two key advantages to using environment variables:
+By default, Pi-hole does not have an administrator password. That is because the default service definition provided by IOTstack contains the following environment variable with no value on its right hand side:
+- WEBPASSWORD=
+
Each time the Pi-hole container is launched, it checks for the presence or absence of the WEBPASSWORD
environment variable, then reacts like this:
If WEBPASSWORD
is defined but does not have a value:
This is the default situation for IOTstack.
+If WEBPASSWORD
is defined and has a value, that value will become the admin password. For example, to change your admin password to be "IOtSt4ckP1Hol3":
Edit your compose file so that Pi-hole's service definition contains:
+- WEBPASSWORD=IOtSt4ckP1Hol3
+
Run:
+$ cd ~/IOTstack
+$ docker-compose up -d pihole
+
docker-compose will notice the change to the environment variable and re-create the container. The container will see that WEBPASSWORD
has a value and will change the admin password to "IOtSt4ckP1Hol3".
You will be prompted for a password whenever you connect to Pi-hole's web interface.
+If WEBPASSWORD
is undefined (absent from your compose file), Pi-hole behaves like this:
If this is the first time Pi-hole has been launched, a random password is generated.
+Pi-hole senses "first launch" if it has to initialise its persistent storage area. See also getting a clean slate. You can discover the password by running:
+$ docker logs pihole | grep random
+
Remember, docker logs are cleared each time a container is terminated or re-created so you need to run that command before the log disappears!
+Otherwise, whatever password was set on the previous launch will be re-used.
+pihole -a -p
¶Some Pi-hole documentation on the web recommends using the following command to change Pi-hole's admin password:
+$ docker exec pihole pihole -a -p «yourPasswordHere»
+
That command works but its effect will always be overridden by WEBPASSWORD
. For example, suppose your service definition contains:
- WEBPASSWORD=myFirstPassword
+
When you start the container, the admin password will be "myFirstPassword". If you run:
+$ docker exec pihole pihole -a -p mySecondPassword
+
then "mySecondPassword" will become the admin password until the next time the container is re-created by docker-compose, at which point the password will be reset to "myFirstPassword".
+Given this behaviour, we recommend that you ignore the pihole -a -p
command.
You can control the amount of information Pi-hole retains about your DNS queries using the "Privacy Settings" tab of the "Settings" group. The default is "Show & record everything".
+If you choose any option except "Anonymous mode", then Pi-hole divides the logging store into two parts:
+In the "System" tab of the "Settings" group is a Flush logs (last 24 hours) button. Clicking that button erases all log entries which are more recent than 24 hours. The button does not erase entries which are older than 24 hours.
+Retention of log entries older than 24 hours is controlled by the following environment variable:
+- FTLCONF_MAXDBDAYS=365
+
The default (which applies if the variable is omitted) is to retain log entries for 365 days.
+Depending on your DNS activity, the database where the log entries are stored can become quite large. Setting this variable to a shorter period will help you control the amount of storage Pi-hole consumes on disk and in your backups.
+Tip:
+Adding this variable to an existing service definition, or changing the number of days to be less than the previous setting will not reduce the size of the logging database. Although Pi-hole will implement the change, the SQLite database where the logs are written retains the released storage for subsequent re-use. If you want to reclaim that space, run the following command:
+$ sqlite3 ~/IOTstack/volumes/pihole/etc-pihole/pihole-FTL.db "vacuum;"
+
The command should not need sudo
because pi
is the owner by default. There is no need to terminate Pi-hole before running this command (SQLite handles any contention).
You can control which public DNS servers are used by PiHole when it needs to refer queries to the Internet. You do this by enabling or disabling checkboxes in the "Upstream DNS Servers" panel of the "DNS" tab in the "Settings" group.
+The default is to use the two Google IPv4 DNS servers which correspond with 8.8.8.8 and 8.8.4.4, respectively.
+An alternative to toggling checkboxes in the Pi-hole GUI is to use an environment variable:
+- PIHOLE_DNS_=8.8.8.8;8.8.4.4
+
++The variable does end with an underscore!
+
This variable takes a semi-colon-separated list of DNS servers. You can discover the IP address associated with a checkbox by hovering your mouse pointer over the checkbox and waiting for a tool-tip to appear:
+ +First, understand that there are two basic types of DNS query:
+forward queries:
+reverse queries:
+Pi-hole has its own built-in DNS server which can answer both kinds of queries. The implementation is useful but doesn't offer all the features of a full-blown DNS server like BIND9. If you decide to implement a more capable DNS server to work alongside Pi-hole, you will need to understand the following Pi-hole environment variables:
+REV_SERVER=
If you configure Pi-hole's built-in DNS server to be authoritative for your local domain name, REV_SERVER=false
is appropriate, in which case none of the variables discussed below has any effect.
Setting REV_SERVER=true
allows Pi-hole to forward queries that it can't answer to a local upstream DNS server, typically running inside your network.
REV_SERVER_DOMAIN=yourdomain.com
(where "yourdomain.com" is an example)
The Pi-hole documentation says:
+++"If conditional forwarding is enabled, set the domain of the local network router".
+
The words "if conditional forwarding is enabled" mean "when REV_SERVER=true
".
However, this option really has little-to-nothing to do with the "domain of the local network router". Your router may have an IP address that reverse-resolves to a local domain name (eg gateway.mydomain.com) but this is something most routers are unaware of, even if you have configured your router's DHCP server to inform clients that they should assume a default domain of "yourdomain.com".
+This variable actually tells Pi-hole the name of your local domain. In other words, it tells Pi-hole to consider the possibility that an unqualified name like "fred" could be the fully-qualified domain name "fred.yourdomain.com".
+REV_SERVER_TARGET=192.168.1.5
(where 192.168.1.5 is an example):
The Pi-hole documentation says:
+++"If conditional forwarding is enabled, set the IP of the local network router".
+
This option tells Pi-hole where to direct forward queries that it can't answer. In other words, Pi-hole will send a forward query for fred.yourdomain.com to 192.168.1.5.
+It may be appropriate to set REV_SERVER_TARGET
to the IP address of your router (eg 192.168.1.1) but, unless your router is running as a DNS server (not impossible but uncommon), the router will likely just relay any queries to your ISP's DNS servers (or other well-known DNS servers like 8.8.8.8 or 1.1.1.1 if you have configured those). Those external DNS servers are unlikely to be able to resolve queries for names in your private domain, and won't be able to do anything sensible with reverse queries if your home network uses RFC1918 addressing (which most do: 192.168.x.x being the most common example).
Forwarding doesn't guarantee that 192.168.1.5 will be able to answer the query. The DNS server at 192.168.1.5 may well relay the query to yet another server. In other words, this environment variable does no more than set the next hop.
+If you are planning on using this option, the target needs to be a DNS server that is authoritative for your local domain and that, pretty much, is going to be a local upstream DNS server inside your home network like another Raspberry Pi running BIND9.
+REV_SERVER_CIDR=192.168.1.0/24
(where 192.168.1.0/24 is an example)
The Pi-hole documentation says:
+++"If conditional forwarding is enabled, set the reverse DNS zone (e.g. 192.168.0.0/24)".
+
This is correct but it lacks detail.
+The string "192.168.1.0/24" defines your local subnet using Classless Inter-Domain Routing (CIDR) notation. Most home subnets use a subnet-mask of 255.255.255.0. If you write that out in binary, it is 24 1-bits followed by 8 0-bits, as in:
+ 255 . 255 . 255 . 0
+11111111 11111111 11111111 00000000
+
Those 24 one-bits are where the /24
comes from in 192.168.1.0/24
. When you perform a bitwise logical AND between that subnet mask and 192.168.1.0, the ".0" is removed (conceptually), as in:
192.168.1.0 AND 255.255.255.0 = 192.168.1
+
What it means is:
+When you set REV_SERVER_CIDR=192.168.1.0/24
you are telling Pi-hole that reverse queries for the host range 192.168.1.1 through 192.168.1.254 should be sent to the REV_SERVER_TARGET=192.168.1.5
.
Note: in order for Web GUI settings to have any effects, you need to configure +the RPi or other machines to use it. This is described in the next topics.
+Point your browser to:
+http://«your_ip»:8089/admin
+
where «your_ip» can be:
+Login to the Pi-hole web interface: http://raspberrypi.local:8089/admin
:
raspberrypi.home.arpa
and the RPi's IP Address, e.g. 192.168.1.10
.Now you can use raspberrypi.home.arpa
as the domain name for the Raspberry Pi
+in your whole local network. You can also add domain names for your other
+devices, provided they too have static IPs.
why .home.arpa?
+Instead of .home.arpa
- which is the real standard, but a mouthful - you
+can use .internal
. Using .local
would technically work, but it should
+be reserved for mDNS use only.
The Raspberry Pi itself does not have to use the Pi-hole container for its own DNS services. Some chicken-and-egg situations can exist if, for example, the Pi-hole container is down when another process (eg apt
or docker-compose
) needs to do something that depends on DNS services being available.
Nevertheless, if you configure Pi-hole to be local DNS resolver, then you will probably want to configure your Raspberry Pi to use the Pi-hole container in the first instance, and then fall back to a public DNS server if the container is down. As a beginner, this is probably what you want regardless. Do this by running the commands:
+$ echo "name_servers=127.0.0.1" | sudo tee -a /etc/resolvconf.conf
+$ echo "name_servers_append=8.8.8.8" | sudo tee -a /etc/resolvconf.conf
+$ echo "resolv_conf_local_only=NO" | sudo tee -a /etc/resolvconf.conf
+$ sudo resolvconf -u
+
This results in a configuration that will continue working, even if the Pi-hole +container isn't running.
+name_servers=127.0.0.1
instructs the Raspberry Pi to direct DNS queries to the loopback address. Port 53 is implied. If the Pi-hole container is running in:
name_servers_append=8.8.8.8
instructs the Raspberry Pi to fail-over to 8.8.8.8 if Pi-hole does not respond. You can replace 8.8.8.8
(a Google service) with:
1.1.1.1
(Cloudflare).You need slightly different syntax if you want to add multiple fallback servers. For example, suppose your fallback hosts are a local server (eg 192.168.1.2) running BIND9 and 8.8.8.8. The command would be:
+$ echo 'name_servers_append="192.168.1.2 8.8.8.8"' | sudo tee -a /etc/resolvconf.conf
+
resolv_conf_local_only=NO
is needed so that 127.0.0.1 and 8.8.8.8 can coexist.
resolvconf -u
command instructs Raspberry Pi OS to rebuild the active resolver configuration. In principle, that means parsing /etc/resolvconf.conf
to derive /etc/resolv.conf
. This command can sometimes return the error "Too few arguments". You should ignore that error.flowchart LR
+ RERECONF["/etc/resolvconf.conf"] --- UP([resolvconf -u])
+ DHCP[DHCP provided DNS-server] --- UP
+ UP -- "generates" --> RECONF["/etc/resolv.conf"]
+ classDef command fill:#9996,stroke-width:0px
+ class UP command
+If you wish to prevent the Raspberry Pi from including the address(es) of DNS servers learned from DHCP, you can instruct the DHCP client running on the Raspberry Pi to ignore the information coming from the DHCP server:
+$ echo 'nooption domain_name_servers' | sudo tee -a /etc/dhcpcd.conf
+$ sudo service dhcpcd reload
+$ sudo resolvconf -u
+
If you have followed the steps in Adding local domain names to define names for your local hosts, you can inform the Raspberry Pi of that fact like this:
+$ echo 'search_domains=home.arpa' | sudo tee -a /etc/resolvconf.conf
+$ sudo resolvconf -u
+
That will add the following line to /etc/resolv.conf
:
search home.arpa
+
Then, when you refer to a host by a short name (eg "fred") the Raspberry Pi will also consider "fred.home.arpa" when trying to discover the IP address.
Docker provides a special IP, 127.0.0.11, which listens to DNS queries and resolves them according to the host RPi's resolv.conf. Containers usually rely on this to perform DNS lookups. This is nice as it won't present any surprises: DNS lookups on both the host and in the containers will yield the same results.
+It's possible to make DNS queries directly cross-container, and even +supported in some rare use-cases.
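For example, you can compare what the host and a container resolve. This is only a quick check; it assumes the Node-RED container is running and that dig is installed on the host:
$ dig +short raspberrypi.home.arpa
$ docker exec nodered nslookup raspberrypi.home.arpa
Both should report the same IP address.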
+To use the Pi-hole in your LAN, you need to assign the Raspberry Pi a fixed IP-address and configure this IP as your DNS server.
+If you want clients on your network to use Pi-hole for their DNS, the Raspberry Pi running Pi-hole must have a fixed IP address. It does not have to be a static IP address (in the sense of being hard-coded into the Raspberry Pi). The Raspberry Pi can still obtain its IP address from DHCP at boot time, providing your DHCP server (usually your home router) always returns the same IP address. This is usually referred to as a static binding and associates the Raspberry Pi's MAC address with a fixed IP address.
+Keep in mind that many Raspberry Pis have both Ethernet and WiFi interfaces. It is generally prudent to establish static bindings for both network interfaces in your DHCP server.
+You can use the following command to discover the MAC addresses for your Raspberry Pi's Ethernet and WiFi interfaces:
+$ for I in eth0 wlan0 ; do ip link show $I ; done
+2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
+ link/ether dc:a6:32:4c:89:f9 brd ff:ff:ff:ff:ff:ff
+3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+ link/ether e5:4f:01:41:88:b2 brd ff:ff:ff:ff:ff:ff
+
In the above:
+If a physical interface does not exist, the command returns "Device does not exist" for that interface. If you prefer, you can also substitute the ifconfig
command for ip link show
. It's just a little more wordy.
In order for Pi-hole to block ads or resolve anything, clients need to be told to use it as their DNS server. You can either:
+Option 1 (whole-of-network) is the simplest approach. Assuming your Raspberry Pi has the static IP 192.168.1.10
:
Go to your network's DHCP server. In most home networks, this will be your Wireless Access Point/WLAN Router:
+192.168.1.10
All local clients have to be rebooted. Without this they will continue to use the old DNS setting from an old DHCP lease for quite some time.
+Option 2 (case-by-case) generally involves finding the IP configuration options for each host and setting the DNS server manually. Manual changes are usually effective immediately without needing a reboot.
+Setting up a combination of Pi-hole (for ad-blocking services), and/or a local upstream DNS resolver (eg BIND9) to be authoritative for a local domain and reverse-resolution for your local IP addresses, and decisions about where each DNS server forwards queries it can't answer (eg your ISP's DNS servers, or Google's 8.8.8.8, or Cloudflare's 1.1.1.1) is a complex topic and depends on your specific needs.
+The same applies to setting up a DHCP server (eg DHCPD) which is capable of distinguishing between the various clients on your network (ie by MAC address) to make case-by-case decisions as to where each client should obtain its DNS services.
+If you need help, try asking questions on the IOTstack Discord channel.
+Make these assumptions:
+You have followed the instructions above to add these lines to /etc/resolvconf.conf
:
name_servers=127.0.0.1
+name_servers_append=8.8.8.8
+resolv_conf_local_only=NO
+
The Raspberry Pi running Pi-hole has the IP address 192.168.1.10 which it obtains as a static assignment from your DHCP server.
+The result of the configuration appears in /etc/resolv.conf
:
$ cat /etc/resolv.conf
+# Generated by resolvconf
+nameserver 127.0.0.1
+nameserver 192.168.1.10
+nameserver 8.8.8.8
+
Interpretation:
+nameserver 127.0.0.1
is present because of name_servers=127.0.0.1
nameserver 192.168.1.10
is present because it was learned from DHCPnameserver 8.8.8.8
is present because of name_servers_append=8.8.8.8
The fact that the Raspberry Pi is effectively represented twice (once as 127.0.0.1, and again as 192.168.1.10) does not matter. If the Pi-hole container stops running, the Raspberry Pi will bypass 192.168.1.10 and fail over to 8.8.8.8, failing back to 127.0.0.1 when the Pi-hole container starts again.
+Install dig:
+$ sudo apt install dnsutils
+
Test that Pi-hole is correctly configured (should respond 192.168.1.10):
+$ dig raspberrypi.home.arpa @192.168.1.10
+
To test on another machine if your network's DNS configuration is correct, and +an ESP will resolve its DNS queries correctly, restart the other machine to +ensure DNS changes are updated and then use:
+$ dig raspberrypi.home.arpa
+
This should produce the same result as the previous command.
+If this fails to resolve the IP, check that the server in the response is
+192.168.1.10
. If it's 127.0.0.xx
check /etc/resolv.conf
begins with
+nameserver 192.168.1.10
. If not, check the machine is configured to use DHCP
+and revisit Pi-hole as DNS.
If you want to avoid hardcoding your Raspberry Pi IP to your ESPhome devices, +you need a DNS server that will do the resolving. This can be done using the +Pi-hole container as described above.
+*.local
won't work for ESPhome¶There is a special case for resolving *.local
addresses. If you do a ping raspberrypi.local
on your desktop Linux or the Raspberry Pi, it will first try using mDNS/bonjour to resolve the IP address raspberrypi.local. If this fails it will then ask the DNS server. ESPhome devices can't use mDNS to resolve an IP address. You need a proper DNS server to respond to queries made by an ESP. As such, dig raspberrypi.local
will fail, simulating ESPhome device behavior. This is as intended, and you should use raspberrypi.home.arpa as the address on your ESP-device.
If Pi-hole misbehaves, you can always try starting from a clean slate by erasing Pi-hole's persistent storage area. Erasing the persistent storage area causes PiHole to re-initialise its data structures on the next launch. You will lose:
+Also note that your administrative password will reset.
+The recommended approach is:
+Run the following commands:
+$ cd ~/IOTstack
+$ docker-compose down pihole
+$ sudo rm -rf ./volumes/pihole
+$ docker-compose up -d pihole
+
++see also if downing a container doesn't work
+
Login to Pi-hole's web GUI and navigate to Settings » Teleporter.
+If you run Pi-hole using Docker Desktop for macOS, all client activity will be logged against the IP address of the default gateway on the internal bridged network.
+It appears that Docker Desktop for macOS interposes an additional level of Network Address Translation (NAT) between clients and the Pi-hole service. This does not affect Pi-hole's ability to block ads. It just makes the GUI reports a little less useful.
+It is not known whether this is peculiar to Docker Desktop for macOS or also affects other variants of Docker Desktop.
+This problem does not affect Pi-hole running in a container on a Raspberry Pi.
+ + + + + + + + + + + + + +The web UI can be found on "your_ip":32400/web
Create a directory in you home directory called mnt
with a subdirectory HDD
. Follow the instruction above to mount your external drive to /home/pi/mnt/HDD
in you fstab
edit your docker-compose.yml file under plex and uncomment the volumes for tv series and movies (modify the path to point to your media locations). Run docker-compose up -d
to rebuild plex with the new volumes
The portainer agent is a great way to add a second docker instance to an existing portainer instance. This allows you to manage multiple docker environments from one portainer instance.
+When you want to add the agent to an existing portainer instance.
+Add endpoint
ip-of-agent-instance:9001
"#yourip" means any of the following:
+192.168.1.10
)iot-hub.local
)iot-hub.mydomain.com
) Portainer CE (Community Edition) is an application for managing Docker. It is a successor to Portainer. According to the Portainer CE documentation
+++Portainer 1.24.x will continue as a separate code branch, released as portainer/portainer:latest, and will receive ongoing security updates until at least 1st Sept 2021. No new features will be added beyond what was available in 1.24.1.
+
From that it should be clear that Portainer is deprecated and that Portainer CE is the way forward.
+Run the menu:
+$ cd ~/IOTstack
+$ ./menu.sh
+
Choose "Build Stack", select "Portainer-ce", press [TAB] then "\<Ok>" and follow through to the end of the menu process, typically choosing "Do not overwrite" for any existing services. When the menu finishes:
+$ docker-compose up -d
+
Ignore any message like this:
+++WARNING: Found orphan containers (portainer) for this project …
+
In your web browser navigate to #yourip:9000/
:
From there, you can click on the "Local" group and take a look around. One of the things Portainer CE can help you do is find unused containers but beware of reading too much into this because, sometimes, an "unused" container is actually the base for another container (eg Node-RED).
+There are 'Quick actions' to view logs and other stats. This can all be done from terminal commands but Portainer CE makes it easier.
+If you click on a "Published Port" in the "Containers" list, your browser may return an error saying something like "can't connect to server" associated with an IP address of "0.0.0.0".
+To fix that problem, proceed as shown below:
+ +iot-hub.local
)iot-hub.mydomain.com
)192.168.1.10
)++To remove the Public IP address, repeat the above steps but clear the "Public IP" field in step 3.
+
The reason why you have to tell Portainer CE which Public IP address to use is because an instance of Portainer CE does not necessarily have to be running on the same Raspberry Pi as the Docker containers it is managing.
+Keep in mind that clicking on a "Published Port" does not guarantee that your browser can open a connection. For example:
+++All things considered, you will get more consistent behaviour if you simply bookmark the URLs you want to use for your IOTstack services.
+
Notes:
+If you forget the password you created for Portainer CE, you can recover by doing the following:
+$ cd ~/IOTstack
+$ docker-compose stop portainer-ce
+$ sudo rm -r ./volumes/portainer-ce
+$ docker-compose start portainer-ce
+
Then, follow the steps in:
+ + + + + + + + + + + + + + +PostgreSQL is an SQL server, for those that need an SQL database.
+The database is available on port 5432
The service definition includes the following environment variables:
+TZ
your timezone. Defaults to Etc/UTC
POSTGRES_USER
. Initial username. Defaults to postuser
.POSTGRES_PASSWORD
. Initial password associated with initial username. Defaults to IOtSt4ckpostgresDbPw
(postpassword
for old menu).POSTGRES_DB
. Initial database. Defaults to postdb
.You can either edit the environment variables directly or provide your own substitutes by editing ~/IOTstack/.env
. Example:
$ cat ~/IOTstack/.env
+TZ=Australia/Sydney
+POSTGRES_PASSWORD=oneTwoThree
+
When the container is brought up:
+TZ
will have the value Australia/Sydney
(from .env
)POSTGRES_PASSWORD
will have the value oneTwoThree
(from .env
)POSTGRES_USER
will have the value postuser
(the default); andPOSTGRES_DB
will have the value postdb
(the default).The TZ
variable takes effect every time the container is brought up. The other environment variables only work the first time the container is brought up.
It is highly recommended to select your own password before you launch the container for the first time. See also Getting a clean slate.
+You can interact with the PostgreSQL Relational Database Management System running in the container via its psql
command. You can invoke psql
like this:
$ docker exec -it postgres bash -c 'PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB $POSTGRES_USER'
+
++Because of the single quotes (') surrounding everything after the
+-c
, expansion of the environment variables is deferred until the command is executed inside the container.
You can use any of the following methods to exit psql: type \q (or quit, or exit) at the prompt, or press control+d.
:
Once you have logged into psql
you can reset the password like this:
# ALTER USER «user» WITH PASSWORD '«password»';
+
Replace:
+«user»
with the username (eg the default username is postuser
)«password»
with your new password.Notes:
+ALTER
command does not update the value of the POSTGRES_PASSWORD
environment variable. You need to do that by hand.Whenever you make a change to a running container's environment variables, the changes will not take effect until you re-create the container by running:
+$ cd ~/IOTstack
+$ docker-compose up -d postgres
+
If you need to start over, proceed like this:
+$ cd ~/IOTstack
+$ docker-compose down postgres
+$ sudo rm -rf ./volumes/postgres
+$ docker-compose up -d postgres
+
++ + + + + + + + + + + + + +see also if downing a container doesn't work
+
GitHub:
+ +DockerHub:
+ +Issue 620 pointed out there was an error in the default configuration file. That has been fixed. To adopt it, please do the following:
+If Prometheus and/or any of its associated containers are running, take them down:
+$ cd ~/IOTstack
+$ docker-compose down prometheus prometheus-cadvisor prometheus-nodeexporter
+
++see also if downing a container doesn't work
+
Move the existing active configuration out of the way:
+$ cd ~/IOTstack/volumes/prometheus/data/config
+$ mv config.yml config.yml.old
+
Make sure that the service definitions in your docker-compose.yml
are up-to-date by comparing them with the template versions:
~/IOTstack/.templates/prometheus/service.yml
~/IOTstack/.templates/prometheus-cadvisor/service.yml
~/IOTstack/.templates/prometheus-nodeexporter/service.yml
Your service definitions and those in the templates do not need to be identical, but you should be able to explain any differences.
+Rebuild your Prometheus container by following the instructions in Upgrading Prometheus. Rebuilding will import the updated default configuration into the container's image.
+Start the service:
+$ cd ~/IOTstack
+$ docker-compose up -d prometheus
+
Starting prometheus
should start prometheus-cadvisor
and prometheus-nodeexporter
automatically. Because the old configuration has been moved out of the way, the container will supply a new version as a default.
Compare the configurations:
+$ cd ~/IOTstack/volumes/prometheus/data/config
+$ diff -y config.yml.old config.yml
+global: global:
+ scrape_interval: 10s scrape_interval: 10s
+ evaluation_interval: 10s evaluation_interval: 10s
+
+scrape_configs: scrape_configs:
+ - job_name: "iotstack" - job_name: "iotstack"
+ static_configs: static_configs:
+ - targets: - targets:
+ - localhost:9090 - localhost:9090
+ - cadvisor:8080 | - prometheus-cadvisor:8080
+ - nodeexporter:9100 | - prometheus-nodeexporter:9100
+
In the output above, the vertical bars (|
) in the last two lines indicate that those lines have changed. The "old" version is on the left, "new" on the right.
If you have made other alterations to your config then you should see other change indicators including <
, |
and >
. If so, you should hand-merge your own changes from config.yml.old
into config.yml
and then restart the container:
$ cd ~/IOTstack
+$ docker-compose restart prometheus
+
Prometheus is a collection of three containers: Prometheus itself, plus cAdvisor and Node Exporter.
+The default configuration for Prometheus supplied with IOTstack scrapes information from all three containers.
+When you select Prometheus in the IOTstack menu, you must also select prometheus-cadvisor and prometheus-nodeexporter.
+If you do not select all three containers, Prometheus will not start.
+When you select Prometheus in the IOTstack menu, the service definition includes the three containers:
+~/IOTstack
+├── .templates
+│ └── prometheus
+│ ├── service.yml ❶
+│ ├── Dockerfile ❷
+│ ├── docker-entrypoint.sh ❸
+│ └── iotstack_defaults ❹
+│ └── config.yml
+├── services
+│ └── prometheus
+│ └── service.yml ❺
+├── docker-compose.yml ❻
+└── volumes
+ └── prometheus ❼
+ └── data
+ ├── config ❽
+ │ ├── config.yml
+ │ └── prometheus.yml
+ └── data
+
The source code for Prometheus lives at GitHub prometheus/prometheus.
+Periodically, the source code is recompiled and the resulting image is pushed to prom/prometheus on DockerHub.
+When you select Prometheus in the IOTstack menu, the template service definition is copied into the Compose file.
+++Under old menu, it is also copied to the working service definition and then not really used.
+
On a first install of IOTstack, you run the menu, choose Prometheus as one of your containers, and are told to do this:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
docker-compose
reads the Compose file. When it arrives at the prometheus
fragment, it finds:
prometheus:
+ container_name: prometheus
+ build: ./.templates/prometheus/.
+
The build
statement tells docker-compose
to look for:
~/IOTstack/.templates/prometheus/Dockerfile
+
++The Dockerfile is in the
+.templates
directory because it is intended to be a common build for all IOTstack users. This is different to the arrangement for Node-RED where the Dockerfile is in theservices
directory because it is how each individual IOTstack user's version of Node-RED is customised.
The Dockerfile begins with:
+FROM prom/prometheus:latest
+
++If you need to pin to a particular version of Prometheus, the Dockerfile is the place to do it. See Prometheus version pinning.
+
The FROM
statement tells the build process to pull down the base image from DockerHub.
++It is a base image in the sense that it never actually runs as a container on your Raspberry Pi.
+
The remaining instructions in the Dockerfile customise the base image to produce a local image. The customisations are:
+Add docker-entrypoint.sh
which:
/prometheus/config/
exists;~/IOTstack/volumes/prometheus/data/config
.The local image is instantiated to become your running container.
+When you run the docker images
command after Prometheus has been built, you may see two rows for Prometheus:
$ docker images
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_prometheus latest 1815f63da5f0 23 minutes ago 169MB
+prom/prometheus latest 3f9575991a6c 3 days ago 169MB
+
prom/prometheus
is the base image; andiotstack_prometheus
is the local image.You may see the same pattern in Portainer, which reports the base image as "unused". You should not remove the base image, even though it appears to be unused.
+++Whether you see one or two rows depends on the version of
+docker-compose
you are using and how your version ofdocker-compose
builds local images.
The CAdvisor and Node Exporter are included in the Prometheus service definition as dependent containers. What that means is that each time you start Prometheus, docker-compose
ensures that CAdvisor and Node Exporter are already running, and keeps them running.
The default configuration for Prometheus assumes CAdvisor and Node Exporter are running and starts scraping information from those targets as soon as it launches.
+The configuration directory for the IOTstack implementation of Prometheus is at the path:
+~/IOTstack/volumes/prometheus/data/config
+
That directory contains two files:
+config.yml
; andprometheus.yml
.If you delete either file, Prometheus will replace it with a default the next time the container starts. This "self-repair" function is intended to provide reasonable assurance that Prometheus will at least start instead of going into a restart loop.
+Unless you decide to change it, the config
folder and its contents are owned by "pi:pi". This means you can edit the files in the configuration directory without needing the sudo
command. Ownership is enforced each time the container restarts.
The file named config.yml
is the active configuration. This is the file you should edit if you want to make changes. The default structure of the file is:
global:
+ scrape_interval: 10s
+ evaluation_interval: 10s
+
+scrape_configs:
+ - job_name: "iotstack"
+ static_configs:
+ - targets:
+ - localhost:9090
+ - cadvisor:8080
+ - nodeexporter:9100
+
To cause a running instance of Prometheus to notice a change to this file:
+$ cd ~/IOTstack
+$ docker-compose restart prometheus
+$ docker logs prometheus
+
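+For example, to scrape an additional exporter (a purely hypothetical container named myapp publishing metrics on port 8000), you would append another job under the existing scrape_configs: block of config.yml and then restart as shown above:
+  - job_name: "myapp"            # hypothetical additional scrape job
+    scrape_interval: 30s
+    static_configs:
+      - targets:
+        - myapp:8000             # hypothetical container name and metrics port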
Note:
+docker-compose
). For this reason, you should always check the Prometheus log after any configuration change.The file named prometheus.yml
is a reference configuration. It is a copy of the original configuration file that ships inside the Prometheus container at the path:
/etc/prometheus/prometheus.yml
+
Editing prometheus.yml
has no effect. It is provided as a convenience to help you follow examples on the web. If you want to make the contents of prometheus.yml
the active configuration, you need to do this:
$ cd ~/IOTstack/volumes/prometheus/data/config
+$ cp prometheus.yml config.yml
+$ cd ~/IOTstack
+$ docker-compose restart prometheus
+$ docker logs prometheus
+
The IOTstack implementation of Prometheus supports two environment variables:
+environment:
+ - IOTSTACK_UID=1000
+ - IOTSTACK_GID=1000
+
Those variables control ownership of the Configuration directory and its contents. Those environment variables are present in the standard IOTstack service definition for Prometheus and have the effect of assigning ownership to "pi:pi".
+If you delete those environment variables from your Compose file, the Configuration directory will be owned by "nobody:nobody"; otherwise the directory and its contents will be owned by whatever values you pass for those variables.
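+If you run IOTstack under an account other than "pi", the values to use are simply that account's numeric IDs, which you can discover with:
+$ id -u
+$ id -g
+Whatever numbers those commands print are the values to assign to IOTSTACK_UID and IOTSTACK_GID respectively.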
+Under the original IOTstack implementation of Prometheus (just "as it comes" from DockerHub), the service definition expected the configuration file to be at:
+~/IOTstack/services/prometheus/config.yml
+
Under this implementation of Prometheus, the configuration file has moved to:
+~/IOTstack/volumes/prometheus/data/config/config.yml
+
++The change of location is one of the things that allows self-repair to work properly.
+
Some of the assumptions behind the default configuration file have changed. In particular, instead of the entire scrape_configs
block being commented-out, it is active and defines localhost
, cadvisor
and nodeexporter
as targets.
You should compare the old and new versions and decide which settings need to be migrated into the new configuration file.
+If you change the configuration file, restart Prometheus and then check the log for errors:
+$ docker-compose restart prometheus
+$ docker logs prometheus
+
Note:
+You can update cadvisor
and nodeexporter
like this:
$ cd ~/IOTstack
+$ docker-compose pull cadvisor nodeexporter
+$ docker-compose up -d
+$ docker system prune
+
In words:
+docker-compose pull
downloads any newer images;docker-compose up -d
causes any newly-downloaded images to be instantiated as containers (replacing the old containers); andprune
gets rid of the outdated images.This "simple pull" strategy doesn't work when a Dockerfile is used to build a local image on top of a base image downloaded from DockerHub. The local image is what is running so there is no way for the pull
to sense when a newer version becomes available.
The only way to know when an update to Prometheus is available is to check the prom/prometheus tags page on DockerHub.
+Once a new version appears on DockerHub, you can upgrade Prometheus like this:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull prometheus
+$ docker-compose up -d prometheus
+$ docker system prune
+$ docker system prune
+
Breaking it down into parts:
+build
causes the named container to be rebuilt;--no-cache
tells the Dockerfile process that it must not take any shortcuts. It really must rebuild the local image;--pull
tells the Dockerfile process to actually check with DockerHub to see if there is a later version of the base image and, if so, to download it before starting the build;prometheus
is the named container argument required by the build
command.Your existing Prometheus container continues to run while the rebuild proceeds. Once the freshly-built local image is ready, the up
tells docker-compose
to do a new-for-old swap. There is barely any downtime for your service.
The prune
is the simplest way of cleaning up. The first call removes the old local image. The second call cleans up the old base image.
++Whether an old base image exists depends on the version of
+docker-compose
you are using and how your version ofdocker-compose
builds local images.
If you need to pin Prometheus to a particular version:
+Use your favourite text editor to open the following file:
+~/IOTstack/.templates/prometheus/Dockerfile
+
Find the line:
+FROM prom/prometheus:latest
+
Replace latest
with the version you wish to pin to. For example, to pin to version 2.30.2:
FROM prom/prometheus:2.30.2
+
Save the file and tell docker-compose
to rebuild the local image:
$ cd ~/IOTstack
+$ docker-compose up -d --build prometheus
+$ docker system prune
+
The new local image is built, then the new container is instantiated based on that image. The prune
deletes the old local image.
Note:
+git pull
. Nothing will change until you decide to remove the pin.When you select Python in the menu:
+The following folder and file structure is created:
+$ tree ~/IOTstack/services/python
+/home/pi/IOTstack/services/python
+├── app
+│ └── app.py
+├── docker-entrypoint.sh
+└── Dockerfile
+
Note:
+service.yml
is also copied into the python
directory but is then not used.This service definition is added to your docker-compose.yml
:
python:
+ container_name: python
+ build: ./services/python/.
+ restart: unless-stopped
+ environment:
+ - TZ=Etc/UTC
+ - IOTSTACK_UID=1000
+ - IOTSTACK_GID=1000
+# ports:
+# - "external:internal"
+ volumes:
+ - ./volumes/python/app:/usr/src/app
+
The service definition contains a number of customisation points:
+restart: unless-stopped
assumes your Python script will run in an infinite loop. If your script is intended to run once and terminate, you should remove this directive.TZ=Etc/UTC
should be set to your local time-zone. Never use quote marks on the right hand side of a TZ=
variable.If you are running as a different user ID, you may want to change both IOTSTACK_UID
and IOTSTACK_GID
to appropriate values.
Notes:
+The only thing these variables affect is the ownership of:
+~/IOTstack/volumes/python/app
+
and its contents. If you want everything to be owned by root, set both of these variables to zero (eg IOTSTACK_UID=0
).
If your Python script listens to data-communications traffic, you can set up the port mappings by uncommenting the ports:
directive.
If your Python container is already running when you make a change to its service definition, you can apply it via:
+$ cd ~/IOTstack
+$ docker-compose up -d python
+
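+As an illustration of the ports: customisation point mentioned above, a hypothetical script listening on port 5000 inside the container could be exposed on host port 5050 by uncommenting and editing the mapping like this:
+    ports:
+      - "5050:5000"       # hypothetical example: host port 5050, container port 5000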
After running the menu, you are told to run the commands:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
This is what happens:
+docker-compose.yml
.When it finds the service definition for Python, it encounters:
+build: ./services/python/.
+
The leading period means "the directory containing docker-compose.yml
while the trailing period means "Dockerfile", so the path expands to:
~/IOTstack/services/python/Dockerfile
+
The Dockerfile
is processed. It downloads the base image for Python from Dockerhub and then makes changes including:
copying the contents of the following directory into the image as a set of defaults:
+/home/pi/IOTstack/services/python/app
+
copying the following file into the image:
+/home/pi/IOTstack/services/python/docker-entrypoint.sh
+
The docker-entrypoint.sh
script runs each time the container launches and performs initialisation and "self repair" functions.
The output of the Dockerfile run is a new local image tagged with the name iotstack_python
.
The iotstack_python
image is instantiated to become the running container.
When the container starts, the docker-entrypoint.sh
script runs and initialises the container's persistent storage area:
$ tree -pu ~/IOTstack/volumes
+/home/pi/IOTstack/volumes
+└── [drwxr-xr-x root ] python
+ └── [drwxr-xr-x pi ] app
+ └── [-rwxr-xr-x pi ] app.py
+
Note:
+python
folder is owned by "root" but the app
directory and its contents are owned by "pi".The initial app.py
Python script is a "hello world" placeholder. It runs as an infinite loop emitting messages every 10 seconds until terminated. You can see what it is doing by running:
$ docker logs -f python
+The world is born. Hello World.
+The world is re-born. Hello World.
+The world is re-born. Hello World.
+…
+
Pressing control+c terminates the log display but does not terminate the running container.
+To stop the container from running, either:
+take down your whole stack:
+$ cd ~/IOTstack
+$ docker-compose down
+
terminate the python container
+$ cd ~/IOTstack
+$ docker-compose down python
+
++see also if downing a container doesn't work
+
To bring up the container again after you have stopped it, either:
+bring up your whole stack:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
bring up the python container
+$ cd ~/IOTstack
+$ docker-compose up -d python
+
Each time you launch the Python container after the first launch:
+iotstack_python
) is instantiated to become the running container.docker-entrypoint.sh
script runs and performs "self-repair" by replacing any files that have gone missing from the persistent storage area. Self-repair does not overwrite existing files! app.py
Python script is run.If the container misbehaves, the log is your friend:
+$ docker logs python
+
It is critical that you understand that all of your project development should occur within the folder:
+~/IOTstack/volumes/python/app
+
So long as you are performing some sort of routine backup (either with a supplied script or a third party solution like Paraphraser/IOTstackBackup), your work will be protected.
+Start by editing the file:
+~/IOTstack/volumes/python/app/app.py
+
If you need other supporting scripts or data files, also add those to the directory:
+~/IOTstack/volumes/python/app
+
Any time you change something in the app
folder, tell the running python container to notice the change by:
$ cd ~/IOTstack
+$ docker-compose restart python
+
Consider this line in the service definition:
+- ./volumes/python/app:/usr/src/app
+
The leading period means "the directory containing docker-compose.yml
" so it the same as:
- ~/IOTstack/volumes/python/app:/usr/src/app
+
Then, you split the line at the ":", resulting in:
+~/IOTstack/volumes/python/app
/usr/src/app
What it means is that the folder ~/IOTstack/volumes/python/app on the Raspberry Pi is mounted inside the container at /usr/src/app, so anything your script stores under /usr/src/app persists outside the container.
+If your script writes into any other directory inside the container, the data will be lost when the container re-launches.
+If you make a mess of things and need to start from a clean slate, erase the persistent storage area:
+$ cd ~/IOTstack
+$ docker-compose down python
+$ sudo rm -rf ./volumes/python
+$ docker-compose up -d python
+
++see also if downing a container doesn't work
+
The container will re-initialise the persistent storage area from its defaults.
+As you develop your project, you may find that you need to add supporting packages. For this example, we will assume you want to add "Flask" and "beautifulsoup4".
+If you were developing a project outside of container-space, you would simply run:
+$ pip3 install -U Flask beautifulsoup4
+
You can do the same thing with the running container:
+$ docker exec python pip3 install -U Flask beautifulsoup4
+
and that will work — until the container is re-launched, at which point the added packages will disappear.
+To make Flask and beautifulsoup4 a permanent part of your container:
+Change your working directory:
+$ cd ~/IOTstack/services/python/app
+
Use your favourite text editor to create the file requirements.txt
in that directory. Each package you want to add should be on a line by itself:
Flask
+beautifulsoup4
+
Tell Docker to rebuild the local Python image:
+$ cd ~/IOTstack
+$ docker-compose build --force-rm python
+$ docker-compose up -d --force-recreate python
+$ docker system prune -f
+
Note:
+Confirm that the packages have been added:
+$ docker exec python pip3 freeze | grep -e "Flask" -e "beautifulsoup4"
+beautifulsoup4==4.10.0
+Flask==2.0.1
+
Continue your development work by returning to getting started.
+Note:
+The first time you following the process described above to create requirements.txt
, a copy will appear at:
~/IOTstack/volumes/python/app/requirements.txt
+
This copy is the result of the "self-repair" code that runs each time the container starts noticing that requirements.txt
is missing and making a copy from the defaults stored inside the image.
If you make more changes to the master version of requirements.txt
in the services directory and rebuild the local image, the copy in the volumes directory will not be kept in-sync. That's because the "self-repair" code never overwrites existing files.
If you want to bring the copy of requirements.txt
in the volumes directory up-to-date:
$ cd ~/IOTstack
+$ rm ./volumes/python/app/requirements.txt
+$ docker-compose restart python
+
The requirements.txt
file will be recreated and it will be a copy of the version in the services directory as of the last image rebuild.
Suppose the Python script you have been developing reaches a major milestone and you decide to "freeze dry" your work up to that point so that it becomes the default when you ask for a clean slate. Proceed like this:
+If you have added any packages by following the steps in adding packages, run the following command:
+$ docker exec python bash -c 'pip3 freeze >requirements.txt'
+
That generates a requirements.txt
representing the state of play inside the running container. Because it is running inside the container, the requirements.txt
created by that command appears outside the container at:
~/IOTstack/volumes/python/app/requirements.txt
+
Make your work the default:
+$ cd ~/IOTstack
+$ cp -r ./volumes/python/app/* ./services/python/app
+
The cp
command copies:
requirements.txt
(from step 1); andKey point:
+./services/python/app
will become part of the new local image.Terminate the Python container and erase its persistent storage area:
+$ cd ~/IOTstack
+$ docker-compose down python
+$ sudo rm -rf ./volumes/python
+
Note:
+If erasing the persistent storage area feels too risky, just move it out of the way:
+$ cd ~/IOTstack/volumes
+$ sudo mv python python.off
+
Rebuild the local image:
+$ cd ~/IOTstack
+$ docker-compose build --force-rm python
+$ docker-compose up -d --force-recreate python
+
On its first launch, the new container will re-populate the persistent storage area but, this time, it will be your Python script and any other supporting files, rather than the original "hello world" script.
+Clean up by removing the old local image:
+$ docker system prune -f
+
Suppose your project has reached the stage where you wish to put it into production as a service under its own name. Make two further assumptions:
+./services/python/app
correctly captures your project.Proceed like this:
+Stop the development project:
+$ cd ~/IOTstack
+$ docker-compose down python
+
Remove the existing local image:
+$ docker rmi iotstack_python
+
Rename the python
services directory to the name of your project:
$ cd ~/IOTstack/services
+$ mv python wishbone
+
Edit the python
service definition in docker-compose.yml
and replace references to python
with the name of your project. In the following, the original is on the left, the edited version on the right, and the lines that need to change are indicated with a "|":
python: | wishbone:
+ container_name: python | container_name: wishbone
+ build: ./services/python/. | build: ./services/wishbone/.
+ restart: unless-stopped restart: unless-stopped
+ environment: environment:
+ - TZ=Etc/UTC - TZ=Etc/UTC
+ - IOTSTACK_UID=1000 - IOTSTACK_UID=1000
+ - IOTSTACK_GID=1000 - IOTSTACK_GID=1000
+ # ports: # ports:
+ # - "external:internal" # - "external:internal"
+ volumes: volumes:
+ - ./volumes/python/app:/usr/src/app | - ./volumes/wishbone/app:/usr/src/app
+
Note:
+python
service definition and then perform the required "wishbone" edits on the copy, the python
definition will still be active so docker-compose
may try to bring up both services. You will eliminate the risk of confusing yourself if you follow these instructions "as written" by not leaving the python
service definition in place.Start the renamed service:
+$ cd ~/IOTstack
+$ docker-compose up -d wishbone
+
Remember:
+After you have done this, the persistent storage area will be at the path:
+~/IOTstack/volumes/wishbone/app
+
To make sure you are running from the most-recent base image of Python from Dockerhub:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull python
+$ docker-compose up -d python
+$ docker system prune -f
+$ docker system prune -f
+
In words:
+The old base image can't be removed until the old local image has been removed, which is why the prune
command needs to be run twice.
Note:
+python
, just substitute the new name where you see python
in the two dockerc-compose
commands.Requirements, you will need to have a SDR dongle for you to be able to use RTL. I've tested this with a RTL2838
+Make sure you can see your receiver by running lsusb
$ lsusb
+Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
+Bus 001 Device 004: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T
+Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
+Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+
Before starting the container, please install RTL_433 from the native installs menu. This will set up your environment with the correct variables and programs. It is also advised to run RTL_433 to verify that it is working correctly on your system.
+The container is designed to send all detected messages over mqtt
+Edit the IOTstack/services/rtl_433/rtl_433.env file with your relevant settings for your mqtt server: +
MQTT_ADDRESS=mosquitto
+MQTT_PORT=1883
+#MQTT_USER=myuser
+#MQTT_PASSWORD=mypassword
+MQTT_TOPIC=RTL_433
+
The container starts with the command rtl_433 -F mqtt:.... Currently it does not filter any packets; you will need to do this in Node-RED.
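+One way to confirm that messages are arriving (an illustrative check, assuming the mosquitto command-line clients are installed on the host and the defaults above are in use) is to subscribe to the topic:
+$ mosquitto_sub -v -h localhost -p 1883 -t "RTL_433/#"
+Each transmission decoded by rtl_433 should then appear under the RTL_433 topic.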
Be in the correct directory (assumed throughout):
+$ cd ~/IOTstack
+
Run the IOTstack menu and choose ring-mqtt
. An alternative to running the menu is to append the service definition template to your compose file like this:
$ sed -e "s/^/ /" ./.templates/ring-mqtt/service.yml >>docker-compose.yml
+
++The
+sed
command is required because service definition templates are left-shifted by two spaces.
This step is optional. Use a text editor to open your docker-compose.yml
file:
ring-mqtt
service definition;TZ
environment variable to your time-zone;Bring up the container:
+$ docker-compose up -d ring-mqtt
+
This pulls the image from DockerHub, instantiates the container, and initialises its persistent storage.
+Use sudo
and a text editor to open the configuration file at the path. For example:
$ sudo vi ./volumes/ring-mqtt/data/config.json
+
At the time of writing, the default configuration file looked like this:
+(the 15-line default config.json is not reproduced here; its line 2 holds the "mqtt_url" setting, which defaults to localhost)
From the perspective of any process running in a Docker container, localhost
means "this container" rather than "this Raspberry Pi". You need to edit line 2 to point to your MQTT broker:
If the ring-mqtt
container and your mosquitto
container are running on the same Raspberry Pi:
+change line 2 so that "mqtt_url" refers to the mosquitto container by name (eg mqtt://mosquitto:1883)
Otherwise, replace localhost
with the IP address or domain name of the host where your MQTT broker is running. For example:
+change line 2 so that "mqtt_url" uses that IP address or domain name in place of localhost
If your MQTT broker is protected by a username and password, refer to the Ring-MQTT Wiki for the correct syntax.
+Save your work then restart the container:
+$ docker-compose restart ring-mqtt
+
Launch your browser (eg Chrome, Firefox, Safari) and open the following URL:
+http://«ip-or-name»:55123
+
where «ip-or-name»
is the IP address or domain name of the Raspberry Pi running your ring-mqtt container. Examples:
http://192.168.1.100:55123
http://iot-hub.my.domain.com:55123
http://iot-hub.local:55123
You should see the following screen:
+ +Follow the instructions on the screen to generate your refresh token.
+Check the logs:
+$ docker logs ring-mqtt
+
Unless you see errors being reported, your ring-mqtt
container should be ready.
The default service definition includes two environment variables:
+environment:
+- TZ=Etc/UTC
+- DEBUG=ring-*
+
TZ=
should be set to your local time zone (explained above).DEBUG=ring-*
("all debugging options enabled") is the default for ring-mqtt
when running in a container. It is included as a placeholder if you want to tailor debugging output. Refer to the Ring-MQTT Wiki.Whenever you change an environment variable, run:
+$ cd ~/IOTstack
+$ docker-compose up -d ring-mqtt
+
The "up" causes docker-compose to notice the configuration change and re-create the container.
+Consult the Ring-MQTT Wiki.
+Periodically:
+$ cd ~/IOTstack
+$ docker-compose pull ring-mqtt
+
If a new image comes down from DockerHub:
+$ docker-compose up -d ring-mqtt
+$ docker system prune -f
+
The "up" instantiates the newly-downloaded image as the running container. The "prune" cleans up the older image.
+ + + + + + + + + + + + + +Before starting the container for the first time, run the following commands:
+$ cd ~/IOTstack
+$ echo "SCRYPTED_WEBHOOK_UPDATE_AUTHORIZATION=$(cat /proc/sys/kernel/random/uuid | md5sum | head -c 24)" >>.env
+
This generates a random token and places it in ~/IOTstack/.env
.
Notes:
+Start Scrypted:
+$ cd ~/IOTstack
+$ docker-compose up -d scrypted
+
Note:
+Use the following URL as a template:
+https://«host-or-ip»:10443
+
Replace «host-or-ip»
with the domain name or IP address of your Raspberry Pi. Examples:
https://raspberrypi.my.domain.com:10443
https://raspberrypi.local:10443
https://192.168.1.10:10443
Note:
+http
protocol. You must use https
.Paste the URL into a browser window. The container uses a self-signed certificate so you will need to accept that using your browser's mechanisms.
+If you see the message:
+required variable SCRYPTED_WEBHOOK_UPDATE_AUTHORIZATION is missing a value: see instructions for generating a token
+
it means that you did not complete step 2 before starting the container. Go back and perform step 2.
+If you need to start over from scratch:
+$ cd ~/IOTstack
+$ docker-compose down scrypted
+$ sudo rm -rf ./volumes/scrypted
+$ docker-compose up -d scrypted
+
++see also if downing a container doesn't work
+
The Scrypted container runs in host mode, which means it binds directly to the Raspberry Pi's ports. The service definition includes:
+x-ports:
+- "10443:10443"
+
The effect of the x-
prefix is to comment-out that port mapping. It is included as an aide-memoire to help you remember the port number.
The service definition also includes the following environment variable:
+- SCRYPTED_WEBHOOK_UPDATE=http://localhost:10444/v1/update
+
The container does not bind to port 10444 so the purpose of this is not clear. The port number should be treated as reserved.
+ + + + + + + + + + + + + + + + +Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.
+Forget about using propietary solutions and take control of your data. Syncthing is an open source solution for synchronizing your data in a p2p way.
+Official Syncthing docker image - Not the one used here
+The web UI can be found on yourip:8384
Configuration data is available under /config
containers directroy and mapped to ./volumes/syncthing/config
.
The /app
directory is inside the container, on the host you will use ./volumes/syncthing/data.
+The default share is named Sync. Other added folders will also appear under data.
Have a look at ~/IOTStack/.templates/syncthing/service.yml
or linuxserve docker documentation, by the way, used ports are;
ports:
+ - 8384:8384 # Web UI
+ - 22000:22000/tcp # TCP file transfers
+ - 22000:22000/udp # QUIC file transfers
+ - 21027:21027/udp # Receive local discovery broadcasts
+
This document discusses an IOTstack-specific version of Telegraf built on top of influxdata/influxdata-docker/telegraf using a Dockerfile.
+The purpose of the Dockerfile is to:
+~/IOTstack
+├── .templates
+│ └── telegraf
+│ ├── Dockerfile ❶
+│ ├── entrypoint.sh ❷
+│ ├── iotstack_defaults
+│ │ ├── additions ❸
+│ │ └── auto_include ❹
+│ └── service.yml ❺
+├── services
+│ └── telegraf
+│ └── service.yml ❻
+├── docker-compose.yml
+└── volumes
+ └── telegraf ❼
+ ├── additions ❽
+ ├── telegraf-reference.conf ➒
+ └── telegraf.conf ➓
+
telegraf
container script of the same name, extended to handle container self-repair.telegraf.conf
. See Automatic includes to telegraf.conf.telegraf
container.Everything in the persistent storage area ❼:
+When you select Telegraf in the IOTstack menu, the template service definition is copied into the Compose file.
+++Under old menu, it is also copied to the working service definition and then not really used.
+
On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
++See also the Migration considerations (below).
+
docker-compose
reads the Compose file. When it arrives at the telegraf
fragment, it finds:
telegraf:
+ container_name: telegraf
+ build: ./.templates/telegraf/.
+ …
+
The build
statement tells docker-compose
to look for:
~/IOTstack/.templates/telegraf/Dockerfile
+
++The Dockerfile is in the
+.templates
directory because it is intended to be a common build for all IOTstack users. This is different to the arrangement for Node-RED where the Dockerfile is in theservices
directory because it is how each individual IOTstack user's version of Node-RED is customised.
The Dockerfile begins with:
+FROM telegraf:latest
+
++If you need to pin to a particular version of Telegraf, the Dockerfile is the place to do it. See Telegraf version pinning.
+
The FROM
statement tells the build process to pull down the base image from DockerHub.
++It is a base image in the sense that it never actually runs as a container on your Raspberry Pi.
+
The remaining instructions in the Dockerfile customise the base image to produce a local image. The customisations are:
+rsync
package. This helps the container perform self-repair.Replace entrypoint.sh
with a version which:
rsync
to perform self-repair if telegraf.conf
goes missing; and~/IOTstack/volumes/telegraf
.The local image is instantiated to become your running container.
+When you run the docker images
command after Telegraf has been built, you may see two rows for Telegraf:
$ docker images
+REPOSITORY TAG IMAGE ID CREATED SIZE
+iotstack_telegraf latest 59861b7fe9ed 2 hours ago 292MB
+telegraf latest a721ac170fad 3 days ago 273MB
+
telegraf
is the base image; andiotstack_telegraf
is the local image.You may see the same pattern in Portainer, which reports the base image as "unused". You should not remove the base image, even though it appears to be unused.
+++Whether you see one or two rows depends on the version of
+docker-compose
you are using and how your version ofdocker-compose
builds local images.
Under the original IOTstack implementation of Telegraf (just "as it comes" from DockerHub), the service definition expected telegraf.conf
to be at:
~/IOTstack/services/telegraf/telegraf.conf
+
Under this implementation of Telegraf, the configuration file has moved to:
+~/IOTstack/volumes/telegraf/telegraf.conf
+
++The change of location is one of the things that allows self-repair to work properly.
+
With one exception, all prior and current versions of the default configuration file are identical in terms of their semantics.
+++In other words, once you strip away comments and blank lines, and remove any "active" configuration options that simply repeat their default setting, you get the same subset of "active" configuration options. The default configuration file supplied with gcgarner/IOTstack is available here if you wish to refer to it.
+
The exception is [[inputs.mqtt_consumer]]
which is now provided as an optional addition. If your existing Telegraf configuration depends on that input, you will need to apply it. See applying optional additions.
You can inspect Telegraf's log by:
+$ docker logs telegraf
+
These logs are ephemeral and will disappear when your Telegraf container is rebuilt.
+The following log message can be misleading:
+W! [outputs.influxdb] When writing to [http://influxdb:8086]: database "telegraf" creation failed: Post "http://influxdb:8086/query": dial tcp 172.30.0.9:8086: connect: connection refused
+
If InfluxDB is not running when Telegraf starts, the depends_on:
clause in Telegraf's service definition tells Docker to start InfluxDB (and Mosquitto) before starting Telegraf. Although it can launch the InfluxDB container first, Docker has no way of knowing when the influxd
process running inside the InfluxDB container will start listening to port 8086.
What this error message usually means is that Telegraf has tried to communicate with InfluxDB before the latter is ready to accept connections. Telegraf typically retries after a short delay and is then able to communicate with InfluxDB.
+The first time you launch the Telegraf container, the following structure will be created in the persistent storage area:
+~/IOTstack/volumes/telegraf
+├── [drwxr-xr-x root ] additions
+│ └── [-rw-r--r-- root ] inputs.mqtt_consumer.conf
+├── [-rw-r--r-- root ] telegraf.conf
+└── [-r--r--r-- root ] telegraf-reference.conf
+
The file:
+telegraf-reference.conf
:
telegraf.conf
:
telegraf-reference.conf
, leaving only the "active" configuration options, and then adding options necessary for IOTstack.telegraf-reference.conf
.inputs.mqtt_consumer.conf
– see Applying optional additions below.
The intention of this structure is that you:
+telegraf-reference.conf
to find the configuration option you need;telegraf.conf
.When you make a change to telegraf.conf
, you activate it by restarting the container:
$ cd ~/IOTstack
+$ docker-compose restart telegraf
+
inputs.docker.conf
instructs Telegraf to collect metrics from Docker. Requires kernel control
+ groups to be enabled to collect memory usage data. If not done during initial installation,
+ enable by running (reboot required):
$ CMDLINE="/boot/firmware/cmdline.txt" && [ -e "$CMDLINE" ] || CMDLINE="/boot/cmdline.txt"
+$ echo $(cat "$CMDLINE") cgroup_memory=1 cgroup_enable=memory | sudo tee "$CMDLINE"
+
inputs.cpu_temp.conf
collects cpu temperature.
The additions folder (see Significant directories and files) is a mechanism for additional IOTstack-ready configuration options to be provided for Telegraf.
+Currently there is one addition:
+inputs.mqtt_consumer.conf
which formed part of the gcgarner/IOTstack telegraf configuration and instructs Telegraf to subscribe to a metric feed from the Mosquitto broker. This assumes, of course, that something is publishing those metrics.Using inputs.mqtt_consumer.conf
as the example, applying that addition to
+your Telegraf configuration file involves:
$ cd ~/IOTstack/volumes/telegraf
+$ grep -v "^#" additions/inputs.mqtt_consumer.conf | sudo tee -a telegraf.conf >/dev/null
+$ cd ~/IOTstack
+$ docker-compose restart telegraf
+
The grep
strips comment lines and the sudo tee
is a safe way of appending the result to telegraf.conf
. The restart
causes Telegraf to notice the change.
Erasing Telegraf's persistent storage area triggers self-healing and restores known defaults:
+$ cd ~/IOTstack
+$ docker-compose down telegraf
+$ sudo rm -rf ./volumes/telegraf
+$ docker-compose up -d telegraf
+
Notes:
+You can also remove individual files within the persistent storage area and then trigger self-healing. For example, if you decide to edit telegraf-reference.conf
and make a mess, you can restore the original version like this:
$ cd ~/IOTstack
+$ sudo rm ./volumes/telegraf/telegraf-reference.conf
+$ docker-compose restart telegraf
+
See also if downing a container doesn't work
+To reset the InfluxDB database that Telegraf writes into, proceed like this:
+$ cd ~/IOTstack
+$ docker-compose down telegraf
+$ docker exec -it influxdb influx -precision=rfc3339
+> drop database telegraf
+> exit
+$ docker-compose up -d telegraf
+
In words:
+telegraf
database, and then exit the CLI.You can update most containers like this:
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune
+
In words:
+docker-compose pull
downloads any newer images;docker-compose up -d
causes any newly-downloaded images to be instantiated as containers (replacing the old containers); andprune
gets rid of the outdated images.This strategy doesn't work when a Dockerfile is used to build a local image on top of a base image downloaded from DockerHub. The local image is what is running so there is no way for the pull
to sense when a newer version becomes available.
The only way to know when an update to Telegraf is available is to check the Telegraf tags page on DockerHub.
+Once a new version appears on DockerHub, you can upgrade Telegraf like this:
+$ cd ~/IOTstack
+$ docker-compose build --no-cache --pull telegraf
+$ docker-compose up -d telegraf
+$ docker system prune
+$ docker system prune
+
Breaking it down into parts:
+build
causes the named container to be rebuilt;--no-cache
tells the Dockerfile process that it must not take any shortcuts. It really must rebuild the local image;--pull
tells the Dockerfile process to actually check with DockerHub to see if there is a later version of the base image and, if so, to download it before starting the build;telegraf
is the named container argument required by the build
command.Your existing Telegraf container continues to run while the rebuild proceeds. Once the freshly-built local image is ready, the up
tells docker-compose
to do a new-for-old swap. There is barely any downtime for your service.
The prune
is the simplest way of cleaning up. The first call removes the old local image. The second call cleans up the old base image. Whether an old base image exists depends on the version of docker-compose
you are using and how your version of docker-compose
builds local images.
If you need to pin Telegraf to a particular version:
+Use your favourite text editor to open the following file:
+~/IOTstack/.templates/telegraf/Dockerfile
+
Find the line:
+FROM telegraf:latest
+
Replace latest
with the version you wish to pin to. For example, to pin to version 1.19.3:
FROM telegraf:1.19.3
+
Save the file and tell docker-compose
to rebuild the local image:
$ cd ~/IOTstack
+$ docker-compose up -d --build telegraf
+$ docker system prune
+
The new local image is built, then the new container is instantiated based on that image. The prune
deletes the old local image.
Note:
+git pull
. Nothing will change until you decide to remove the pin.In order to avoid port conflict with PostgreSQL, the public database port is +mapped to 5433 using Docker.
+Cross-container access from other containers still works as previously:
+timescaledb:5432
.
WireGuard is a fast, modern, secure Virtual Private Network (VPN) tunnel. It can securely connect you to your home network, allowing you to access your home network's local services from anywhere. It can also secure your traffic when using public internet connections.
+Reference:
+Assumptions:
+You increase your chances of a trouble-free installation by performing the installation steps in the following order.
+To be able to run WireGuard successfully, your Raspberry Pi needs to be fully up-to-date. If you want to understand why, see the read only flag.
+$ sudo apt update
+$ sudo apt upgrade -y
+
Before you can use WireGuard (or any VPN solution), you need a mechanism for your remote clients to reach your home router. You have two choices:
+This is the service definition template that IOTstack uses for WireGuard:
+(the 23-line template is not reproduced here; the full text is in ~/IOTstack/.templates/wireguard/service.yml)
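+As a rough sketch only (an approximation based on the linuxserver/wireguard image that the IOTstack template wraps; the authoritative text is the template file named above, and details such as the image reference and volume paths may differ), the service definition has this general shape:
+wireguard:
+  container_name: wireguard
+  image: lscr.io/linuxserver/wireguard     # assumed image reference
+  restart: unless-stopped
+  cap_add:
+    - NET_ADMIN
+    - SYS_MODULE
+  environment:
+    - PUID=1000
+    - PGID=1000
+    - TZ=Etc/UTC
+    - SERVERURL=your.dynamic.dns.name      # your registered Dynamic DNS domain
+    - SERVERPORT=51820                     # the «public» port
+    - PEERS=laptop,phone                   # one name per remote client
+    - PEERDNS=auto
+  volumes:
+    - ./volumes/wireguard:/config          # assumed persistent-storage mapping
+    - /lib/modules:/lib/modules:ro
+  ports:
+    - "51820:51820/udp"                    # «external»:«internal»
+  sysctls:
+    - net.ipv4.conf.all.src_valid_mark=1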
Unfortunately, that service definition will not work "as is". It needs to be configured.
+Key points:
+environment:
section from SERVERURL=
down to PEERDNS=
(inclusive) affects WireGuard's generated configurations (the QR codes). In other words, any time you change any of those values, any existing QR codes will stop working.With most containers, you can continue to tweak environment variables and settings without upsetting the container's basic behaviour. WireGuard is a little different. You really need to think, carefully, about how you want to configure the service before you start. If you change your mind later, you generally have to start from a clean slate.
+SERVERURL=
should be set to the domain name you have registered with a Dynamic DNS service provider. Example:
- SERVERURL=downunda.duckdns.org
+
PEERS=
should be a comma-separated list of your client devices (all the phones, tablets, laptops, desktops you want to use remotely to get back into your home network). Example:
- PEERS=jillMacbook,jackChromebook,alexNokiaG10
+
Notes:
+You have several options for how your remote peers resolve DNS requests:
+PEERDNS=auto
DNS queries made on connected WireGuard clients should work as if they were made on the host. If you configure PiHole into the host's resolveconf.conf
, Wireguard clients will also automatically use it.
Details:
+auto
instructs the WireGuard service running within the WireGuard container to use a DNS-service, coredns, also running in the Wireguard container. Coredns by default directs queries to 127.0.0.11, which Docker intercepts and forwards to whichever resolvers are specified in the Raspberry Pi's /etc/resolv.conf
.PEERDNS=auto
with custom-cont-init
This configuration instructs WireGuard to forward DNS queries from remote peers to any host daemon or container which is listening on port 53. This is the option you will want to choose if you are running an ad-blocking DNS server (eg PiHole or AdGuardHome) in a container on the same host as WireGuard, and you want your remote clients to obtain DNS resolution via the ad-blocker, but don't want your Raspberry Pi host to use it.
+++Acknowledgement: thanks to @ukkopahis for developing this option.
+
To activate this feature:
+PEERDNS=auto
.Start the WireGuard container by executing:
+$ cd ~/IOTstack
+$ docker-compose up -d wireguard
+
This ensures that the ~/IOTstack/volumes/wireguard
folder structure is created and remote client configurations are (re)generated properly.
Run the following commands:
+$ cd ~/IOTstack
+$ sudo cp ./.templates/wireguard/use-container-dns.sh ./volumes/wireguard/custom-cont-init.d/
+$ docker-compose restart wireguard
+
The presence of use-container-dns.sh
causes WireGuard to redirect incoming DNS queries to the default gateway on the internal bridged network. That, in turn, results in the queries being forwarded to any other container that is listening for DNS traffic on port 53. It does not matter if that other container is PiHole, AdGuardHome, bind9 or any other kind of DNS server.
Do note, however, that this configuration creates a dependency between WireGuard and the container providing DNS resolution. You may wish to make that explicit in your docker-compose.yml
by adding these lines to your WireGuard service definition:
depends_on:
+ - pihole
+
++Substitute
+adguardhome
orbind9
forpihole
, as appropriate.
Once activated, this feature will remain active until you decide to deactivate it. If you ever wish to deactivate it, run the following commands:
+$ cd ~/IOTstack
+$ sudo rm ./volumes/wireguard/custom-cont-init.d/use-container-dns.sh
+$ docker-compose restart wireguard
+
PEERDNS=«ip address»
A third possibility is if you have a local upstream DNS server. You can specify the IP address of that server so that remote peers receive DNS resolution from that host. For example:
+- PEERDNS=192.168.203.65
+
Do note that changes to PEERDNS
will not be updated to existing clients, and as such you may want to use PEERDNS=auto
unless you have a very specific requirement.
The WireGuard service definition template follows the convention of using UDP port "51820" in three places. You can leave it like that and it will just work. There is no reason to change the defaults unless you want to.
+To understand what each port number does, it is better to think of them like this:
+environment:
+- SERVERPORT=«public»
+ports:
+- "«external»:«internal»/udp"
+
These definitions are going to be used throughout this documentation:
+The «public» port is the port number that your remote WireGuard clients (phone, laptop etc) will try to reach. This is the port number that your router needs to expose to the outside world.
+The «external» port is the port number that Docker, running on your Raspberry Pi, will be listening on. Your router needs to forward WireGuard incoming traffic to the «external» port on your Raspberry Pi.
+The «internal» port is the port number that WireGuard (the server process) will be listening on inside the WireGuard container. Docker handles forwarding between the «external» and «internal» port.
+Rule #1:
+Rule #2:
+Rule #3:
+See Understanding WireGuard's port numbers if you want more information on how the various port numbers are used.
+There are two approaches:
+docker-compose.yml
with the default WireGuard service definition template, and then edit docker-compose.yml
.compose-override.yml
file, then run the menu and have it perform the substitutions for you.Of the two, the first is generally the simpler and means you don't have to re-run the menu whenever you want to change WireGuard's configuration.
+docker-compose.yml
¶Run the menu:
+$ cd ~/IOTstack
+$ ./menu.sh
+
Choose the "Build Stack" option.
+docker-compose.yml
in your favourite text editor.compose-override.yml
¶The Custom services and overriding default settings for IOTstack page describes how to use an override file to allow the menu to incorporate your custom configurations into the final docker-compose.yml
file.
You will need to create the compose-override.yml
before running the menu to build your stack. If you have already built your stack, you'll have to rebuild it after creating compose-override.yml
.
Use your favourite text editor to create (or open) the override file. The file is expected to be at the path:
+~/IOTstack/compose-override.yml
+
Define overrides to implement the decisions you took in Decide what to configure. For example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 |
|
Key points:
+services:
directive at the start.Save your work.
+Run the menu:
+$ cd ~/IOTstack
+$ ./menu.sh
+
Choose the "Build Stack" option.
+Check your work by running:
+$ cat docker-compose.yml
+
and verify that the wireguard
service definition is as you expect.
To start WireGuard, bring up your stack:
+$ cd ~/IOTstack
+$ docker-compose up -d
+
Confirm that WireGuard has started properly by running:
+$ docker ps --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}" --filter name=wireguard
+
Repeat the command a few times with a short delay in between. You are looking for signs that the WireGuard container is restarting. If the container seems to be restarting then this command is your friend:
+$ docker logs wireguard
+
See also discussion of the read-only flag.
+Confirm that WireGuard has generated the expected configurations. For example, given the following setting in docker-compose.yml
:
- PEERS=jillMacbook,jackChromebook,alexNokiaG10
+
you would expect a result something like this:
+$ tree ./volumes/wireguard/config
+./volumes/wireguard/config
+├── coredns
+│ └── Corefile
+├── peer_alexNokiaG10
+│ ├── peer_alexNokiaG10.conf
+│ ├── peer_alexNokiaG10.png
+│ ├── presharedkey-peer_alexNokiaG10
+│ ├── privatekey-peer_alexNokiaG10
+│ └── publickey-peer_alexNokiaG10
+├── peer_jackChromebook
+│ ├── peer_jackChromebook.conf
+│ ├── peer_jackChromebook.png
+│ ├── presharedkey-peer_jackChromebook
+│ ├── privatekey-peer_jackChromebook
+│ └── publickey-peer_jackChromebook
+├── peer_jillMacbook
+│ ├── peer_jillMacbook.conf
+│ ├── peer_jillMacbook.png
+│ ├── presharedkey-peer_jillMacbook
+│ ├── privatekey-peer_jillMacbook
+│ └── publickey-peer_jillMacbook
+├── server
+│ ├── privatekey-server
+│ └── publickey-server
+├── templates
+│ ├── peer.conf
+│ └── server.conf
+└── wg0.conf
+
Notice how each element in the PEERS=
list is represented by a sub-directory prefixed with peer_
. You should expect the same pattern for your peers.
The first time you launch WireGuard, it generates cryptographically protected configurations for your remote clients and encapsulates those configurations in QR codes. You can see the QR codes by running:
+$ docker logs wireguard
+
WireGuard's log is ephemeral, which means it resets each time the container is re-created. In other words, you can't rely on going back to the log to obtain your QR codes if you lose them.
+WireGuard also records the QR codes as .png
files. In fact, the QR codes shown by docker logs wireguard
are just side-effects of the .png
files as they are created.
If your Raspberry Pi has a GUI (such as a screen attached to an HDMI port or a VNC connection), you can always retrieve the QR codes by opening the .png
files in the GUI.
If, however, your Raspberry Pi is running headless, you will need to copy the .png
files to a system that is capable of displaying them, such as a Mac or PC. You can use SCP to do that.
++See ssh tutorial if you need help setting up SSH (of which SCP is a part).
+
For example, to copy all PNG files from your Raspberry Pi to a target system:
+$ find ~/IOTstack/volumes/wireguard/config -name "*.png" -exec scp {} user@hostorip:. \;
+
Note:
+hostorip
is the host name, fully-qualified domain name, multicast domain name or IP address of the GUI-capable target computer; anduser
is a valid username on the target computer.If you want to work in the other direction (ie from the GUI-capable system), you can try:
+$ scp pi@hostorip:IOTstack/volumes/wireguard/peer_jill-macbook/peer_jill-macbook.png .
+
In this case:
+hostorip
is the host name, fully-qualified domain name, multicast domain name or IP address of the Raspberry Pi that is running WireGuard.Keep in mind that each QR code contains everything needed for any device to access your home network via WireGuard. Treat your .png
files as "sensitive documents".
A typical home network will have a firewall that effectively blocks all incoming attempts from the Internet to open a new connection with a device on your network.
+To use a VPN from outside of your home network (which is precisely the point of running the service!), you need to configure your router to allow incoming WireGuard traffic to reach the Raspberry Pi running WireGuard. These instructions assume you have the privileges to do that.
+If you have not used your router's administrative interface before, the default login credentials may be physically printed on the device or in its instruction manual.
+++If you have never changed the default login credentials, you should take the time to do that.
+
Routers have wildly different user interfaces but the concepts will be the same. This section describes the basic technique but if you are unsure how to do this on your particular router model, the best idea would be to search the web for:
+A typical configuration process goes something like this:
+The NAT component you are looking for probably has a name like "Port Redirection", "Port Forwarding", "NAT Forwarding" or "NAT Virtual Server".
+The configuration screen will contain at least the following fields:
Field | Value
---|---
Interface | router's WAN interface
Private IP | x.x.x.x
Private Port | «external»
Protocol | UDP
Public Port | «public»
Service Name | WireGuard
The fields in the above list are in alphabetical order. They will almost certainly be in a different order in your router and may also have different names:
+Private Port (or Internal Port) needs to be the value you chose for «external» in the WireGuard service definition (51820 if you didn't change it).
+++Yes, this does sound counterintuitive but it's a matter of perspective. From the router's perspective, the port is on the private or internal part of your home network. From Docker's perspective, the port is «external» to container-space.
+
Protocol will usually default to "TCP" but you must change it to "UDP".
+This is a massive topic and one which is well beyond the scope of this guide. You really will have to work it out for yourself. Start by Googling:
+You will find the list of client software at WireGuard Installation.
For portable devices (eg iOS and Android) it usually boils down to:
- installing the WireGuard app from the relevant app store;
- using the app to scan the QR code generated for that device; and
- activating the tunnel.
+Here's a concrete example configuration using three different port numbers:
+environment:
+- SERVERURL=downunda.duckdns.org
+- SERVERPORT=51620
+ports:
+- "51720:51820/udp"
+
In other words:
- «public» is 51620 (the SERVERPORT= value the remote clients will aim at);
- «external» is 51720 (the left-hand side of the ports mapping, the port Docker listens on); and
- «internal» is 51820 (the right-hand side of the ports mapping, the port the WireGuard server process listens on).
You also need to make a few assumptions:
- downunda.duckdns.org resolves to the WAN IP address of your home router;
- the Raspberry Pi running WireGuard has the LAN IP address 192.168.203.60; and
- the WireGuard container has the IP address 172.18.0.6 on Docker's internal bridged network.
+Here's a reference model to help explain what occurs:
+ +The remote WireGuard client:
+SERVERURL=
and SERVERPORT=
environment variables in docker-compose.yml
.You configure a NAT port-forwarding rule in your router which accepts incoming traffic on the «public» UDP port (51620) and uses Network Address Translation to change the destination IP address to the Raspberry Pi and destination port to the «external» UDP port (51720). In other words, each incoming packet is readdressed to 192.168.203.60:51720.
+Docker is listening to the Raspberry Pi's «external» UDP port 51720. Docker uses Network Address Translation to change the destination IP address to the WireGuard container and destination port to the «internal» UDP port (51820). In other words, each incoming packet is readdressed to 172.18.0.6:51820.
+The packet is then routed to the internal bridged network, and delivered to the WireGuard server process running in the container which is listening on the «internal» UDP port (51820).
+A reciprocal process occurs when the WireGuard server process sends packets back to the remote WireGuard client.
+The following table summarises the transformations as the client and server exchange information:
+ +Even if you use port 51820 everywhere (the default), all this Network Address Translation still occurs. Keep this in mind if you are trying to debug WireGuard because you may actually find it simpler to understand what is going on if you use different numbers for the «public» and «external» ports.
This model is a slight simplification because the remote client may also be operating behind a router performing Network Address Translation. It is just easier to understand the basic concepts if you assume the remote client has a publicly-routable IP address.
+If tcpdump
is not installed on your Raspberry Pi, you can install it by:
$ sudo apt install tcpdump
+
After that, you can capture traffic between your router and your Raspberry Pi by:
+$ sudo tcpdump -i eth0 -n udp port «external»
+
Press control+c to terminate the capture.
+First, you need to add tcpdump
to the container. You only need to do this once per debugging session. The package will remain in place until the next time you re-create the container.
$ docker exec wireguard bash -c 'apt update ; apt install -y tcpdump'
+
To monitor traffic:
+$ docker exec -t wireguard tcpdump -i eth0 -n udp port «internal»
+
Press control+c to terminate the capture.
+$ PORT=«external»; sudo nmap -sU -p $PORT 127.0.0.1 | grep "$PORT/udp"
+
There will be a short delay. The expected answer is either:
+«external»/udp open|filtered unknown
= Docker is listening«external»/udp closed unknown
= Docker is not listeningSuccess implies that the container is also listening.
+$ PORT=«public»; sudo nmap -sU -p $PORT downunda.duckdns.org | grep "$PORT/udp"
+
There will be a short delay. The expected answer is either:
+«public»/udp open|filtered unknown
= router is listening«public»/udp closed unknown
= router is not listeningNote:
+tcpdump
telling you whether your router is forwarding traffic to your Raspberry Pi.The :ro
at the end of the following line in WireGuard's service definition means "read only":
- /lib/modules:/lib/modules:ro
+
If that flag is omitted then WireGuard may try to update the /lib/modules
path in your operating system. To be clear, /lib/modules
is both outside the WireGuard container and outside the normal persistent storage area in the ./volumes
directory.
The basic idea of containers is that processes are contained, include all their own dependencies, can be added and removed cleanly, and don't change the underlying operating system.
+Writing into /lib/modules
is not needed on a Raspberry Pi, providing that Raspberry Pi OS is up-to-date. That is why the first step in the installation procedure tells you to bring the system up-to-date.
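If you are curious whether your operating system already provides WireGuard kernel support (in which case the container has no reason to write to /lib/modules), one quick check is to ask for the module's details. This is just a sanity check, not part of the installation:

$ modinfo wireguard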
If WireGuard refuses to install and you have good reason to suspect that WireGuard may be trying to write to /lib/modules
then you can consider removing the :ro
flag and re-trying. Just be aware that WireGuard will likely be modifying your operating system.
To update the WireGuard container:
+$ cd ~/IOTstack
+$ docker-compose pull wireguard
+
If a new image comes down, then:
+$ docker-compose up -d wireguard
+$ docker system prune
+
WireGuard's designers have redefined the structure they expect in the persistent storage area. Before the change, a single volume-mapping got the job done:
+volumes:
+- ./volumes/wireguard:/config
+
After the change, three mappings are required:
+volumes:
+- ./volumes/wireguard/config:/config
+- ./volumes/wireguard/custom-cont-init.d:/custom-cont-init.d
+- ./volumes/wireguard/custom-services.d:/custom-services.d
+
In essence, inside the container:
+custom-cont-init.d
and custom-services.d
directories were subdirectories of /config
;custom-cont-init.d
and custom-services.d
are top-level directories alongside /config
.The new custom-cont-init.d
and custom-services.d
directories also need to be owned by root. Previously, they could be owned by "pi".
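If you want to confirm the structure and ownership after migrating, a simple listing will show both (the path assumes the default ~/IOTstack location):

$ ls -ld ~/IOTstack/volumes/wireguard/*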
IOTstack users implementing WireGuard for the first time will get the correct structure. Existing users need to migrate. The process is a little messy so IOTstack provides a script to automate the restructure:
+$ cd ~/IOTstack
+$ docker-compose down wireguard
+$ ./scripts/2022-10-01-wireguard-restructure.sh
+
++see also if downing a container doesn't work
+
In words:
+The script:
+./volumes/wireguard
to ./volumes/wireguard.bak
; then./volumes/wireguard
structure using ./volumes/wireguard.bak
for its source material.docker-compose.yml
to adopt the new service definition.Your WireGuard client configurations (QR codes) are not affected by the migration.
+Once the migration is complete and you have adopted the new service definition, you can start WireGuard again:
+$ docker-compose up -d wireguard
+
You should test that your remote clients can still connect. Assuming a successful migration, you can safely delete the backup directory:
+$ sudo rm -rf ./volumes/wireguard.bak
+
++Always be careful when using
+sudo
in conjunction with recursive remove. Double-check everything before pressing return.
If WireGuard misbehaves, you can start over from a clean slate. You may also need to do this if you change any of the following environment variables:
+- SERVERURL=
+- SERVERPORT=
+- PEERS=
+- PEERDNS=
+
The procedure is:
+If WireGuard is running, terminate it:
+$ cd ~/IOTstack
+$ docker-compose down wireguard
+
++see also if downing a container doesn't work
+
Erase the persistent storage area (essential):
+$ sudo rm -rf ./volumes/wireguard
+
++Be very careful with that command and double-check your work before you hit return.
+
Erasing the persistent storage area:
+PEERDNS=auto
with custom-cont-init
.Start WireGuard:
+$ docker-compose up -d wireguard
+
This will generate new client configurations and QR codes for your devices.
+Remember to re-activate PEERDNS=auto
with custom-cont-init
if you need it.
WordPress is a web content-management system.
+You need to perform two steps before WordPress can be launched:
+ +Note:
+Be in the correct directory:
+$ cd ~/IOTstack
+
Launch the menu
+$ ./menu.sh
+
Choose "Build Stack".
+When IOTstack is cloned from GitHub, the default for your local copy of the repository is to be on the "master" branch. Master-branch templates are left-shifted by two spaces with respect to how they need to appear in docker-compose.yml
. The following sed
command prepends two spaces to the start of each line:
$ sed -e "s/^/ /" ./.templates/wordpress/service.yml >>docker-compose.yml
+
Templates on the "old-menu" branch already have proper alignment, so cat
can be used:
$ cat ./.templates/wordpress/service.yml >>docker-compose.yml
+
The password-generation steps in the next section assume uuidgen
is available on your system. The following command installs uuidgen
if it is not present already:
$ [ -z "$(which uuidgen)" ] && sudo apt update && sudo apt install -y uuid-runtime
+
WordPress relies on MariaDB, and MariaDB requires both a user password and a root password. You can generate the passwords like this:
+$ echo "WORDPRESS_DB_PASSWORD=$(uuidgen)" >>~/IOTstack/.env
+$ echo "WORDPRESS_ROOT_PASSWORD=$(uuidgen)" >>~/IOTstack/.env
+
Key points:
+You will not need to know either of these passwords in order to use WordPress.
+++These passwords govern access to the WordPress database (the
+wordpress_db
container). WordPress (thewordpress
container) has a separate system of credentials. You set up an administrator account the first time you login to WordPress.
You will not need to know either password in order to use the mysql
command line interface to inspect the WordPress database. See accessing the MariaDB command line interface.
Once WordPress has been launched for the first time, changing either password in .env
will break your installation.
WordPress (running inside the container) needs to know the domain name of the host on which the container is running. You can satisfy the requirement like this:
+$ echo "WORDPRESS_HOSTNAME=$HOSTNAME.local" >>~/IOTstack/.env
+
The above assumes the host is advertising a multicast domain name. This is a safe assumption for Raspberry Pis but may not necessarily be applicable in other situations. If your host is associated with a fully-qualified domain name (A record or CNAME), you can use that instead. For example:
+$ echo "WORDPRESS_HOSTNAME=iot-hub.my.domain.com" >>~/IOTstack/.env
+
You can confirm that the passwords and hostname have been added to .env
like this:
$ grep "^WORDPRESS" ~/IOTstack/.env
+WORDPRESS_DB_PASSWORD=41dcbe76-9c39-4c7f-bd65-2f0421bccbeb
+WORDPRESS_ROOT_PASSWORD=ee749d72-f1a5-4bc0-b182-21e8284f9fd2
+WORDPRESS_HOSTNAME=raspberrypi.local
+
If you prefer to keep your environment values inline in your docker-compose.yml
rather than in the .env
file then you can achieve the same result by editing the service definitions as follows:
wordpress
:
environment:
+ WORDPRESS_DB_PASSWORD: «yourUserPasswordHere»
+ hostname: «hostname».«domain»
+
wordpress_db
:
environment:
+ MYSQL_ROOT_PASSWORD: «yourRootPasswordHere»
+ MYSQL_PASSWORD: «yourUserPasswordHere»
+
$ cd ~/IOTstack
+$ docker-compose up -d wordpress
+
This starts both WordPress and its database.
+ +Use a URL in the following form, where «host»
should be the value you chose at set hostname.
http://«host»:8084
+
Examples:
+http://raspberrypi.local:8084
http://iot-hub.my.domain.com:8084
You will be prompted to:
+After that, you should refer to the WordPress documentation.
+ +The MariaDB instance associated with WordPress is private to WordPress. It is included along with the WordPress service definition. You do not have to select MariaDB in the IOTstack menu.
+++ +There is nothing stopping you from also selecting MariaDB in the IOTstack menu. Multiple instances of MariaDB will coexist quite happily but they are separate and distinct Relational Database Manager Systems (RDBMS).
+
If you need inspect or manipulate the WordPress database, begin by opening a shell into the WordPress MariaDB container:
+$ docker exec -it wordpress_db bash
+
While you are in the shell, you can use the MYSQL_ROOT_PASSWORD
environment variable to reference the root password. For example:
# mysql -p$MYSQL_ROOT_PASSWORD
+Welcome to the MariaDB monitor. Commands end with ; or \g.
+Your MariaDB connection id is 169
+Server version: 10.11.6-MariaDB-log Alpine Linux
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]>
+
Note:
+-p
and $MYSQL_ROOT_PASSWORD
. If you insert a space, mysql
will prompt you to enter the password interactively.Once you have opened a session using mysql
, you can execute MySQL commands. For example:
MariaDB [(none)]> show databases;
++--------------------+
+| Database |
++--------------------+
+| information_schema |
+| mysql |
+| performance_schema |
+| sys |
+| wordpress |
++--------------------+
+5 rows in set (0.010 sec)
+
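If you prefer a one-shot query from the host rather than an interactive session, something like this should also work (a sketch; it lists the tables in the wordpress database):

$ docker exec wordpress_db bash -c 'mysql -p$MYSQL_ROOT_PASSWORD -e "show tables" wordpress'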
To exit mysql
, either press control+d or use the exit
command:
MariaDB [(none)]> exit
+Bye
+
+#
+
Similarly, control+d or exit
will terminate the container's bash
shell and return you to the host's command line.
nextcloud
¶Both the wordpress
and wordpress_db
service definitions connect to the nextcloud
network.
++Please note the emphasis on "network".
+
The nextcloud
network is an internal private network created by docker-compose
to facilitate data-communications between a user-facing service (like WordPress) and an associated database back-end (like MariaDB).
The NextCloud container was the first to use the private-network strategy so the "nextcloud" name is an accident of history. In an ideal world, the network would be renamed to something which more accurately reflected its purpose, like "databases". Unfortunately, the IOTstack menu lacks the facilities needed to update existing deployments so the most likely result of any attempt at renaming would be to break existing stacks.
+At runtime, the nextcloud
network has the name iotstack_nextcloud
, and exists alongside the iotstack_default
network which is shared by other IOTstack containers.
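You can see both networks on a running system. For example (the iotstack_ prefix assumes your IOTstack clone is in the default ~/IOTstack folder):

$ docker network ls --filter name=iotstack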
The material point is that, even though WordPress has nothing to do with NextCloud, the references to the nextcloud
network are not mistakes. They are intentional.
If you start the WordPress container and then decide that you need to change its environment variables, you must first erase the container's persistent store:
+$ cd ~/IOTstack
+$ docker-compose down wordpress wordpress_db
+$ sudo rm -rf ./volumes/wordpress
+
Notes:
+wordpress
and wordpress_db
containers need to be taken down before the persistent store can be removed safely. sudo rm
command. Double-check before pressing the return key!Once the persistent store has been erased, you can change the environment variables.
+When you are ready, start WordPress again:
+$ cd ~/IOTstack
+$ docker-compose up -d wordpress
+
Note:
+wordpress_db
container does not need to be brought up explicitly. It is started automatically as a by-product of starting wordpress
.x2go is an "alternative" to using VNC for a remote connection. It uses X11 forwarding over ssh to provide a desktop environment
+Reason for using: +I have a Pi 4 and I didn't buy a micro hdmi cable. You can use VNC however you are limited to a 800x600 window.
+Install with sudo apt install x2goserver
x2go cant connect to the native Raspbian Desktop so you will need to install another with sudo tasksel
I chose Xfce because it is light weight.
+Install the x2go client from their website
+Now I have a full-screen client
+ +ZeroTier and WireGuard are not mutually exclusive. You can run both if you wish. The purpose of this document is to try to offer some general guidance about the two solutions.
+Assume your goal is to give yourself access to your home network when you are on the road. This is something you can do with both WireGuard and ZeroTier.
+Providing you follow IOTstack's WireGuard documentation faithfully, WireGuard is a bit easier to get going than ZeroTier.
+Although it helps to have some feeling for TCP/IP fundamentals, you definitely don't need to be a comms guru.
+Using WireGuard to access your home network when you are on the road involves:
+A routable IP address on the WAN side of your home router.
+++The IP address on the WAN side of your home router is allocated by your ISP. It can be fixed or dynamic. If you have not explicitly signed up for a fixed IP address service then your address is probably dynamic and can change each time you reboot your router, or if your ISP "bounces" your connection.
+
If your WAN IP address is dynamic then you need a mechanism for making it discoverable using a Dynamic Domain Name System (DDNS) service such as DuckDNS or NoIP.com.
+++That's a separate registration and setup process.
+
A WireGuard server running in a Docker container on your Raspberry Pi. Ideally, you give some thought to the clients you will need so that the QR codes can be generated the first time you bring up the container.
+A WireGuard client running in each remote device. Each client needs to be configured with a QR code or configuration file created in the previous step.
+A port-forwarding rule in your home router so that traffic originated by remote WireGuard clients can be relayed to the WireGuard server running on your Raspberry Pi.
+Implementing ZeroTier is not actually any more difficult to get going than WireGuard. ZeroTier's apparent complexity arises from the way it inherently supports many network topologies. Getting it set up to meet your requirements takes planning.
+You still don't need to be a comms guru but it will help if you've had some experience making TCP/IP do what you want.
+Using ZeroTier to access your home network when you are on the road involves:
+Registering for a ZeroTier account (free and paid levels).
+Either (or both) of the following:
+A ZeroTier client running in each remote device.
+Every ZeroTier client (home and remote) needs to be provided with your ZeroTier network identifier. You also need to authorise each client to join your ZeroTier network. Together, these are the equivalent of WireGuard's QR code.
+Depending on what you want to achieve, you may need to configure one or more static routes in the ZeroTier Cloud and in your home router.
+The things you don't need to worry about include:
+Now that you have some appreciation for the comparative level of difficulty in setting up each service, let's focus on WireGuard's key problem.
+WireGuard depends on the IP address on the WAN side of your home router being routable. What that means is that the IP address has to be known to the routing tables of the core routers that drive the Internet.
+You will probably have seen quite a few of the addresses in the following table:
[Table 1: Reserved IP Address Ranges]
Nothing in that list is routable. That list is also far from complete (see wikipedia). The average IOTstack user has probably encountered at least:
[Figure 1: Router WAN port using CGNAT range]
Consider Figure 1. On the left is a cloud representing your home network where you probably use a subnet in the 192.168/16 range. The 192.168/16 range is not routable so, to exchange packets with the Internet, your home router needs to perform Network Address Translation (NAT).
+Assume a computer on your home network has the IP address 192.168.1.100 and wants to communicate with a service on the Internet. What the NAT service running in your home router does is:
+The NAT service running in your router builds tables that keep track of everything needed to make this work but, and this is a critical point, NAT can only build those tables when devices on your home LAN originate the traffic. If a packet addressed to your WAN IP arrives unexpectedly and NAT can't figure out what to do from its tables, the packet gets dropped.
+A remote WireGuard client trying to originate a connection with the WireGuard server running in your IOTstack is an example of an "unexpected packet". The reason it doesn't get dropped is because of the port-forwarding rule you set up in your router. That rule essentially fools NAT into believing that the WireGuard server originated the traffic.
+If the IP address your ISP assigns to your router's WAN interface is routable then your traffic will follow the green line in Figure 1. It will transit your ISP's network, be forwarded to the Internet, and reply packets will come back the same way.
+However, if the WAN IP address is not routable then your traffic will follow the red line in Figure 1. What happens next is another round of Network Address translation. Using the same address examples above:
+The system at the other end sees 200.1.2.3 as the source address so that's what it uses in reply packets.
+Both NAT engines "A" and "B" are building tables to make this work but, again, it is all in response to outbound traffic. If your remote WireGuard client tries to originate a connection with your WireGuard server by addressing the packet to "B", it's unexpected and gets dropped.
+Unlike the situation with your home router where you can add a port-forwarding rule to fool NAT into believing your WireGuard server originated the traffic, you don't control your ISP's NAT router so it's a problem you can't fix.
+Your remote WireGuard client can't bypass your ISP's NAT router by addressing the packet to "A" because that address is not routable, so nothing on the Internet has any idea of where to send it, so the packet gets dropped.
+Due to the shortage of IPv4 addresses, it is increasingly common for ISPs to apply their own NAT service after yours. Generally, ISPs use the 100.64/10 range so, if you connect to your home router's user interface and see something like the IP address circled in Figure 2, you can be sure that you are the victim of "CGNAT".
[Figure 2: Router WAN port using CGNAT range]
While seeing a router WAN address that is not routable proves that your ISP is performing an additional Network Address Translation step, seeing an IP address that should be routable does not necessarily prove the opposite. The only way to be certain is to compare the IP address your router shows for its WAN interface with the IP address you see in a service like whatsmyip.com. If they are not the same, your ISP is likely applying its own NAT service.
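One way to check your apparent public address from the Raspberry Pi's command line is to ask an external service. This sketch uses the third-party ifconfig.me service and assumes curl is installed:

$ curl https://ifconfig.me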
If WireGuard won't work and you suspect your ISP is applying its own NAT service, you have the following options:
- ask your ISP whether a routable WAN IP address is available (some ISPs offer one as a paid extra); or
- use ZeroTier instead, because it does not depend on unsolicited inbound traffic reaching your router.
+You can use both WireGuard and ZeroTier to set up secure site-to-site routing such as between your home and the homes of your friends and relatives.
+If you want to use WireGuard:
+If you want to use ZeroTier:
+ZeroTier is a Virtual Private Network (VPN) solution that creates secure data-communications paths between devices at different locations. You can use ZeroTier to:
+This documentation covers two DockerHub images and two IOTstack templates:
+zyclonite:zerotier
This image implements a standard ZeroTier client. It is what you get if you choose "ZeroTier-client" from the IOTstack menu. Its function is identical to the clients you install on Android, iOS, macOS and Windows.
+zyclonite:zerotier-router
This is an enhanced version of the ZeroTier client. It is what you get if you choose "ZeroTier-router" from the IOTstack menu. In addition to connecting your Raspberry Pi to your ZeroTier network, it can also forward packets between remote clients and devices attached to your home LAN. It is reasonably close to WireGuard in its general behaviour.
+ZeroTier:
+ +zyclonite/zerotier:
+ +ZeroTier offers both free and paid accounts. A free account offers enough for the average home user.
+Go to the Zerotier downloads page. If you wait a little while, a popup window will appear with a "Start here" link which triggers a wizard to guide you through the registration and setup process. At the end, you will have an account plus an initial ZeroTier Network ID.
+++Tip: Make a note of your ZeroTier network ID - you will need it!
+
You should take the time to work through the configuration page for your newly-created ZeroTier network. At the very least:
+Scroll down until you see the "IPv4 Auto-Assign" area. By default, ZeroTier will have done the following:
+If the range selected by ZeroTier does not begin with "10.x", consider changing the selection to something in that range. This documentation uses 10.244.*.*
throughout and it may be easier to follow if you do something similar.
++Tip: avoid
+10.13.*.*
if you are also running WireGuard.
The logic behind this recommendation is that you can use 10.x.x.x for ZeroTier and 192.168.x.x for your home networks, leaving 172.x.x.x for Docker. That should make it easier to understand what is going on when you examine routing tables.
+Nevertheless, nothing about ZeroTier depends on you using a 10.x network. If you have good reasons for selecting from a different range, do so. It's your network!
+You should install ZeroTier client software on at least one mobile device (laptop, iDevice) that is going to connect remotely. You don't need to go to a remote location or fake "remoteness" by connecting through a cellular system. You can do all this while the device is connected to your home network.
+Connecting a client to your ZeroTier network is a three-step process:
+Install the client software on the device. The Zerotier downloads page has clients for every occasion: Android, iOS, macOS, Unix and Windows.
+Launch the client and enter your ZeroTier Network ID:
+on macOS, launching the app adds a menu to the right hand side of your menu bar. From that menu, choose "Join New Network…", enter your network ID into the dialog box and click "Join".
+on iOS, launching the app for the first time presents you with a privacy policy which you need to accept, followed by a mostly-blank screen:
+Android and Windows – follow your nose.
+In a web browser:
+Each time you authorise a client, ZeroTier assigns an IP address from the range you selected in the "IPv4 Auto-Assign" area. Most of the time this is exactly what you want but, occasionally, you may want to override ZeroTier's choice. The simplest approach is:
+Type a new IP address into the text field to the right of the + ;
+++your choice needs to be from the range you selected in the "IPv4 Auto-Assign" area
+
Click the + to accept the address; then
+ZeroTier IP addresses are like fixed assignments from a DHCP server. They persist. The same client will always get the same IP address each time it connects.
+Key point:
+Do not install ZeroTier on your Raspberry Pi by following the Linux instructions on the Zerotier downloads page. Those instructions lead to a "native" installation. We are about to do all that with a Docker container.
+You can install ZeroTier clients on other systems but you should hold off on doing that for now because, ultimately, it may not be needed. Whether you need ZeroTier client software on any device will depend on the decisions you make as you follow these instructions.
+To help you choose between the ZeroTier-client and ZeroTier-router containers, it is useful to study a network topology that does not include routing.
[Topology 1: Remote client accesses client on home network]
Four devices are shown:
+B is some other device (another Pi, Linux box, Mac, PC).
+++The key thing to note is that B is not running ZeroTier client software.
+
C is your local router, likely an off-the-shelf device running a custom OS.
+++Again, assume C is not running ZeroTier client software.
+
G is the remote client you set up above.
+Table 1 summarises what you can and can't do from the remote client G:
[Table 1: Reachability using only ZeroTier clients]
G can't reach B or C, directly, because those devices are not running ZeroTier client software.
+G can reach B and C, indirectly, by first connecting to A. An example would be G opening an SSH session on A then, within that session, opening another SSH session on B or C.
+It should be apparent that you can also solve this problem by installing ZeroTier client software on B. It would then have its own interface in the 10.244.0.0/16 network that forms the ZeroTier Cloud and be reachable directly from G. The no† entries would then become yes, with the caveat that G would reach B via its interface in the 10.244.0.0/16 network.
+The same would be true for your router C, providing it was capable of running ZeroTier client software.
+Lessons to learn:
+ZeroTier clients are incredibly easy to set up. It's always:
+After that, it's full peer-to-peer interworking.
+The problem with this approach is that it does not scale if you are only signed up for a free ZeroTier account. Free accounts are limited to 25 clients. After that you need a paid account.
+Now that you understand what the ZeroTier-client will and won't do, if you want to install the ZeroTier client on your Raspberry Pi via IOTstack, proceed like this:
+Bring up the container:
+$ cd ~/IOTstack
+$ docker-compose up -d zerotier-client
+
Tell the container to join your ZeroTier network by replacing «NetworkID» with your ZeroTier Network ID:
+$ docker exec zerotier zerotier-cli join «NetworkID»
+
You only need to do this once. The information is kept in the container's persistent storage area. Thereafter, the client will rejoin the same network each time the container comes up.
+Go to ZeroTier Central and authorise the device.
+Job done! There are no environment variables to set. It just works.
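If you want to confirm that the container has joined your network, you can list its networks; the status column changes to OK once the client has been authorised:

$ docker exec zerotier zerotier-cli listnetworks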
+This topology is a good starting point for using ZeroTier to replicate a WireGuard service running on your Raspberry Pi. Remember, you don't have to make an either/or choice between ZeroTier and WireGuard. You can run both containers side-by-side.
[Topology 2: Remote client accesses home network]
With this structure in place, all hosts in Topology 2 can reach each other directly. All the cells in Table 1 are yes. Full peer-to-peer networking!
+The ZeroTier-router container is just the ZeroTier-client container with some iptables
rules. However, you can't run both containers at the same time. If ZeroTier-client is installed:
Terminate the container if it is running:
+$ cd ~/IOTstack
+$ docker-compose down zerotier-client
+
++See also if downing a container doesn't work
+
Remove the existing service definition, either by:
+docker-compose.yml
to remove the service definition.The ZeroTier-router can re-use the ZeroTier-client configuration (and vice-versa) so you should not erase the persistent storage area at:
+~/IOTstack/volumes/zerotier-one/
+
Keeping the configuration also means you won't need to authorise the ZeroTier-router client when it first launches.
+To install Zerotier-router:
+Run the IOTstack menu and choose "Zerotier-router".
+Use a text editor to open your docker-compose.yml
. Find the ZeroTier service definition and the environment variables it contains:
[service definition lines 5-12: the environment: clause, including the network-IDs variable on line 9 and the local-interfaces variable on line 10]
You should:
+Uncomment line 9 and replace "yourNetworkID" with your ZeroTier Network ID. This variable only has an effect the first time ZeroTier is launched. It is an alternative to executing the following command after the container has come up the first time:
+$ docker exec zerotier zerotier-cli join «NetworkID»
+
The reason for the plural variable name ("IDS") is because it supports joining multiple networks on first launch. Network IDs are space-separated, like this:
+9 |
|
If necessary, change line 10 to represent your active local interfaces. Examples:
+if your Raspberry Pi only connects to WiFi, you would use:
+10 |
|
if both Ethernet and WiFi are active, use:
+10 |
|
Launch the container:
+$ cd ~/IOTstack
+$ docker-compose up -d zerotier-router
+
If the Raspberry Pi running the service has not previously been authorised in ZeroTier Central, authorise it. Make a note of the IP address assigned to the device in ZeroTier Central. In Topology 2 it is 10.244.0.1.
+You also need to set up some static routes:
+In ZeroTier Central …
+Please start by reading Managed Routes.
+Once you understand how to construct a valid less-specific route, go to ZeroTier Central and find the "Managed Routes" area. Under "Add Routes" are text-entry fields. Enter the values into the fields:
+Destination: 192.168.202.0/23 (via) 10.244.0.1
+
Click Submit.
+With reference to Topology 2:
+This route teaches ZeroTier clients that the 10.244.0.0/16 network offers a path to the less-specific range (192.168.202.0/23) encompassing the home subnet (192.168.203.0/24).
+Remote clients can then reach devices on your home network. When a packet arrives on A, it is passed through NAT so devices on your home network "think" the packet has come from A. That means they can reply. However, this only works for connections that are initiated by remote clients like G. Devices on your home network like B and C can't initiate connections with remote clients because they don't know where to send the traffic. That's the purpose of the next static route.
+In your home router C …
+Add a static route to the ZeroTier Cloud pointing to the IP address of your Raspberry Pi on your home network. In Topology 2, this is:
+10.244.0.0/16 via 192.168.203.50
+
++You need to figure out how to add this route in your router's user interface.
+
Here's an example of what actually happens once this route is in place. Suppose B wants to communicate with G. B is not a ZeroTier client so it doesn't know that A offers a path to G. The IP stack running on B sends the packet to the default gateway C (your router). Because of the static route, C sends the packet to A. Once the packet arrives on A, it is forwarded via the ZeroTier Cloud to G.
The process of a packet going into a router and coming back out on the same interface is sometimes referred to as "one-armed routing". It may seem inefficient but C also sends B what is called an "ICMP Redirect" message. This teaches B that it can reach G via A so, in practice, not every B-to-G packet needs to transit C.
+The ZeroTier Cloud does not offer a path to the Internet. It is not a VPN solution which will allow you to pretend to be in another location. Every ZeroTier client still needs its own viable path to the Internet.
[Topology 3: Remote client tunnels to Internet via Home Network]
In terms of traffic flows, what this means in a practical sense is:
+This is the routing table you would expect to see on G:
+1 +2 +3 +4 +5 |
|
Executing a traceroute
to 8.8.8.8 (Google DNS) shows:
$ traceroute 8.8.8.8
+traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
+ 1 172.20.10.1 (172.20.10.1) 4.706 ms 4.572 ms 4.398 ms
+ 2 10.111.9.189 (10.111.9.189) 49.599 ms 49.807 ms 49.626 ms
+…
+11 dns.google (8.8.8.8) 32.710 ms 32.047 ms
+
You can see that the first hop is via 172.20.10.1. This means the traffic is not flowing over the ZeroTier Cloud (10.244.0.0/16). The traffic is reaching 8.8.8.8 via the default route through the phone's connection to the carrier's network (172.20.10.0/28).
+ZeroTier supports an option for forcing all of a client's traffic to pass over the ZeroTier Cloud. The client's traffic is then end-to-end encrypted, at least until it reaches your home. Traffic destined for the Internet will then pass back out through your home router. From the perspective of the Internet, your remote client will appear to be at your home.
+Enabling this feature is a two-step process:
+In ZeroTier Central, find the "Managed Routes" area and add:
+Destination: 0.0.0.0/0 (via) 10.244.0.1
+
This is setting up a "default route". 10.244.0.1 is the IP address of A in the ZeroTier network.
+Each remote client (and only remote clients) needs to be instructed to accept the default route from the ZeroTier Cloud:
+iOS clients:
+Once the client has been configured like this, the "Enable Default Route" setting will stick. Subsequent connections will follow the managed default route.
+If you wish to turn the setting off again, you need to repeat the same series of steps, turning "Enable Default Route" off at Step 4.
+Linux clients: execute the command:
+$ docker exec zerotier zerotier-cli set «yourNetworkID» allowDefault=1
+
See change option for an explanation of the output and how to turn the option off.
+macOS clients: open the ZeroTier menu, then the sub-menu for the Network ID, then enable "Allow Default Router [sic] Override".
+Once allowDefault
is enabled on a client, the routing table changes:
[routing table on G with allowDefault enabled: seven entries; lines 2 and 5 are discussed below]
Close inspection will show you that two entries have been added to the routing table:
+Line | +Route | +Destination | +Mask | +Address Range | +
---|---|---|---|---|
2 | +0.0.0.0/1 | +10.244.0.1 | +128.0.0.0 | +0.0.0.0…127.255.255.255 | +
5 | +128.0.0.0/1 | +10.244.0.1 | +128.0.0.0 | +128.0.0.0…255.255.255.255 | +
Taken together, these have the same effect as a standard default route (0.0.0.0/0) but, because they are more-specific than the standard default route being offered by the cellular network, the path via ZeroTier Cloud will be preferred.
+You can test this with a traceroute
:
$ traceroute 8.8.8.8
+traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
+ 1 10.244.0.1 (10.244.0.1) 98.239 ms 98.121 ms 98.042 ms
+ 2 192.168.203.1 (192.168.203.1) 98.038 ms 97.943 ms 97.603 ms
+…
+ 7 dns.google (8.8.8.8) 104.748 ms 106.669 ms 106.356 ms
+
This time, the first hop is via the ZeroTier Cloud to A (10.244.0.1), then out through the local router C (192.168.203.1).
[Topology 4: Site-to-Site with ZeroTier-router]
In this topology, everything can reach everything within your catenet. The installation process for F is the same as it was for A. See Installing ZeroTier-router.
+In ZeroTier Central you need one "less-specific" Managed Route pointing to each site where there is a ZeroTier router.
+At each site, the local router needs two static routes, both via the IP address of the local host running the ZeroTier-router container:
+If the second route does not make sense, think of it like this:
+In essence, both these static routes are "set and forget". They assume catenet growth is a possibility, and that it is preferable to set up schemes that will be robust and not need constant tweaking.
+The diagram above for Topology 4 does not include a default route in ZeroTier Central. If you implement Topology 4 according to the diagram:
+If you want remote clients like G to use full tunnelling, you can follow the same approach as for Topology 3. You simply need to decide which site should used by G to reach the Internet. Having made your decision, define an appropriate default route in ZeroTier Central. For example, if G should reach the Internet via:
+the left-hand site, the default route should point to the ZeroTier-router running on A:
+Destination: 0.0.0.0/0 (via) 10.244.0.1
+
the right-hand site, the default route should point to the ZeroTier-router running on F:
+Destination: 0.0.0.0/0 (via) 10.244.0.2
+
Once you implement the default route, everything else is the same as for Topology 3.
If your home network is a single subnet with a /24 prefix (a subnet mask of 255.255.255.0), you need to follow two rules when constructing the "destination" field of a Managed Route in ZeroTier Central:
1. Reduce the prefix length from /24 to /23; and
2. If the third octet of your subnet is an odd number, subtract one from it so that it becomes even.
+Examples:
[Table 2: Constructing Managed Routes for Subnets - examples]
If your home network has multiple subnets and/or you do not use /24 prefixes then you should either read through the next section or consult one of the many IP address calculators that are available on the Internet. One example:
+ +This is a slightly contrived example but it will help you to understand why you need Managed Routes and how to construct them correctly in ZeroTier Central.
+Assume we are talking about Topology 1 and that this is the routing table for host A:
+1 +2 +3 |
|
Suppose A wants to send a packet to B. The IP stack starts searching the routing table. For each row:
+The destination IP address for B (192.168.203.60) is ANDed with the subnet mask (255.255.255.0). Given the last row in the routing table above:
+candidate = destinationIP AND Genmask
+ = 192.168.203.60 AND 255.255.255.0
+ = 192.168.203.0
+
The candidate (192.168.203.0) is compared with the value in the Destination column (192.168.203.0). If the two values are the same, the route is considered to be a match:
+match = compareEqual(candidate,Destination)
+ = compareEqual(192.168.203.0, 192.168.203.0)
+ = true
+
The result is a match so the packet is handed to Layer 2 for transmission via the eth0
interface.
Now suppose A wants to send a packet to 8.8.8.8 (Google DNS). The last row of the routing table will evaluate as follows:
+candidate = destinationIP AND Genmask
+ = 8.8.8.8 AND 255.255.255.0
+ = 8.8.8.0
+ match = compareEqual(candidate,Destination)
+ = compareEqual(8.8.8.0, 192.168.203.0)
+ = false
+
The result is no-match so the routing algorithm continues to search the table. Eventually it will arrive at the 0.0.0.0 entry which is known as the "default route":
+candidate = destinationIP AND Genmask
+ = 8.8.8.8 AND 0.0.0.0
+ = 0.0.0.0
+ match = compareEqual(candidate,Destination)
+ = compareEqual(0.0.0.0, 0.0.0.0)
+ = true
+
The result of comparing anything with the default route is always a match. Because the "Gateway" column is non-zero, the IP address of 192.168.203.1 (C) is used as the "next hop". The IP stack searches the routing table again. This new search for 192.168.203.1 will match on the bottom row so the packet will be handed to Layer 2 for transmission out of the eth0
interface aimed at C (the local router, otherwise known as the "default gateway"). In turn, the local router forwards the packet to the ISP and, eventually, it winds up at 8.8.8.8.
Let's bring ZeroTier into the mix.
+The local subnet shown in Topology 1 is 192.168.203.0/24 so it seems to make sense to use that same subnet in a Managed Route. Assume you configured that in ZeroTier Central:
+192.168.203.0/24 via 10.144.0.1
+
When the ZeroTier client on (A) adds that route to its routing table, you get something like this:
+1 +2 +3 +4 +5 |
|
++To all network gurus following along: please remember this is a contrived example.
+
Study the last two lines. You should be able to see that both lines will match when the IP stack searches this table whenever A needs to send a packet to B. This results in a tie.
+What normally happens is a tie-breaker algorithm kicks in. Schemes of route metrics, route weights, hop counts, round-trip times or interface priorities are used to pick a winner. Unfortunately, those schemes are all "implementation defined". Although the algorithms usually converge on a good answer, sometimes Murphy's Law kicks in. Routing problems are notoriously difficult to diagnose and can manifest in a variety of ways, ranging from sub-optimal routing, where the only symptom may be sluggishness, to forwarding loops, which can render your network mostly useless.
+Prevention is always better than cure so it is preferable to side-step the entire problem by taking advantage of the fact that IP routing will always match on a more-specific route before a less-specific route, and employ slightly less-specific Managed Routes in ZeroTier Central.
+What do "more-" and "less-" mean when we're talking about searching a routing table? The terms refer to the length of the network prefix. In "/X" notation, a larger value of X is more-specific than a smaller value of X:
+To ensure that the IP stack will always make the correct decision, the Managed Route you configure in ZeroTier Central should always be slightly less-specific than the actual subnet it covers. Given 192.168.203.0/24, your first attempt at constructing a less-specific route might be:
+192.168.203.0/23 via 10.144.0.1
+
Sadly, that won't work. Why? Because the 192.168.203.0/23 subnet does not actually exist. That may surprise you but it's true. It has to do with the requirement that subnet masks use contiguous one-bits. It's easier to understand if you study the binary:
[Table 3: Invalid vs Valid Managed Route]
The left hand side of Table 3 shows a network prefix of 192.168.203.0/23 along with what that /23 expands to as a subnet mask of 255.255.254.0. The last row is the result of ANDing the first two rows. Notice the right-most 1-bit in the third octet (circled). That bit hasn't made it to the last row and that's a problem.
+What's going on here is that the right-most 1-bit in the third octet is not actually part of the network portion of the IP address; it's part of the host portion. For a network prefix to be valid, all the bits in the host portion must be zero. To put it another way, the IP address 192.168.203.0/23 is host .1.0 (ordinal 256) in subnet 192.168.202.0/23.
+Read that last sentence again because "in subnet 192.168.202.0/23" is the clue.
+The right hand side of Table 3 starts with network prefix 192.168.202.0/23 and ANDs it with its subnet mask. This time the host portion is all-zero. That means it's a valid subnet and, accordingly, can be the subject of a Managed Route.
+Table 3 tells us something else about a /23 prefix. It tells us that whatever value appears in that third octet, the right-most 1-bit must always be zero. That's another way of saying that a /23 subnet is only valid if the third octet is an even number.
+At this point, you should understand the reason for the two rules in TL;DR above, and have a better idea of what you are doing if you need to use a subnet calculator.
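If you would rather not do the binary arithmetic by hand, a command-line subnet calculator can do it for you. For example (assuming you install the optional ipcalc package):

$ sudo apt install -y ipcalc
$ ipcalc 192.168.203.0/23

The "Network" line in the output shows the subnet the prefix really belongs to (192.168.202.0/23 in this case).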
+If you intend to set up multiple sites and route between them using ZeroTier, you need to be aware of some of the consequences that flow from how you need to configure Managed Routes.
+First, it should be obvious that you can't have two sites with the same network prefix. You and a friend can't both be using 192.168.1.0/24 at home.
+The second is that the set of less-specific prefixes in Managed Routes can't overlap either. If you are using the 192.168.0.0/24 subnet at home while your friend is using 192.168.1.0/24 at her home, both of your less-specific Managed Routes will be the same: 192.168.0.0/23. If you set up two Managed Routes to 192.168.0.0/23 with different "via" addresses, all the routers will think there's a single site that can be reached by multiple routes. That's a recipe for a mess.
+Putting both of the above together, any network plan for multiple sites should assume a gap of two between subnets. For example, if you are using the subnet 192.168.0.0/24 then your friend should be using 192.168.2.0/24. Your Managed Route will be 192.168.0.0/23, and your friend's Managed Route will be 192.168.2.0/23.
+None of this stops either you or your friend from using both of the /24 subnets that aggregate naturally under your respective /23 prefixes. For example, the single Managed Route 192.168.0.0/23 naturally aggregates two subnets:
+Similarly, if you are using more than two subnets, such as:
+then you would slide your ZeroTier Managed Route prefix another bit to the left and use:
+192.168.0.0/22 via 10.144.0.1
+
Notice what happens as you slide the prefix left. Things change in powers of 2:
+The direct consequence of that for Managed Routes is:
+Understanding how adjacent subnets can be aggregated easily by changing the prefix length should also bring with it the realisation that it is unwise to use a scattergun approach when allocating the third octet among your home subnets. Consider this scheme:
+You would need three /23 Managed Routes in ZeroTier Central. In addition, you would prevent anyone else in your private ZeroTier catenet from using 192.168.1.0/24, 192.168.101.0/24 and 192.168.201.0/24. It would be preferable to use a single /22 as shown in the example above.
+Sure, that third octet can range from 0..255 but it's still a finite resource which is best used wisely, particularly once you start to contemplate using ZeroTier to span multiple sites.
+The default service definition for ZeroTier-router contains the following lines:
+13 +14 +15 |
|
Line 13 tells ZeroTier to run in Docker's "host mode". This means the processes running inside the container bind to the Raspberry Pi's network ports.
+++Processes running inside non-host-mode containers bind to the container's ports, and then use Network Address Translation (NAT) to reach the Raspberry Pi's ports.
+
The x-
prefix on line 14 has the effect of commenting-out the entire clause. In other words, the single x-
has exactly the same meaning as:
[lines 14-15 with a comment marker at the start of each line]
The x-ports
clause is included to document the fact that ZeroTier uses the Raspberry Pi's port 9993.
++Documenting the ports in use for host-mode containers helps IOTstack's maintainers avoid port conflicts when adding new containers.
+
You should not remove the x-
prefix. If docker-compose complains about the x-ports
clause, the message is actually telling you that your copy of docker-compose is obsolete and that you should upgrade.
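Because the container runs in host mode, you can confirm from the Raspberry Pi itself that ZeroTier is listening on UDP port 9993 (a quick check using the ss utility included with Raspberry Pi OS):

$ sudo ss -ulpn | grep 9993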
If you have a DNS server running somewhere in your catenet, you can ask ZeroTier to propagate that information to your ZeroTier clients. It works the same way as a DHCP server can be configured to provide the IP addresses of DNS servers when handing out leases.
+It is a two-step process:
+In ZeroTier Central, find the "DNS" area, complete the (optional) "Search Domain" and (required) "Server Address" fields, then click Submit.
+Examples. In Topology 4, suppose the DNS server (eg PiHole or BIND9) is host:
+Each client needs to be instructed to accept the DNS configuration:
+Linux clients: execute the command:
+$ docker exec zerotier zerotier-cli set «yourNetworkID» allowDNS=1
+
See change option for an explanation of the output and how to turn the option off.
+macOS clients: open the ZeroTier menu, then the sub-menu for the Network ID, then enable "Allow DNS Configuration".
+Notes:
+There are reports of allowDNS
being unreliable on Linux clients. If you have trouble on Linux, try disabling allowDNS
and add the DNS server(s) to:
/etc/resolvconf.conf
+
The ZeroTier Cloud relays multicast traffic. That means that multicast DNS (mDNS) names are propagated between ZeroTier clients and you can use those names in connection requests.
+In terms of Topology 4, A, F and G can all reach each other using mDNS names. For example:
+pi@a:~$ ssh pi@f.local
+
However, even if B and C were advertising mDNS names over 192.168.203.0/24, they would be unreachable from D, E, F and G using those mDNS names because B and C are not ZeroTier clients. The same applies to reaching D and E from A, B, C or G using mDNS names.
+As your network infrastructure becomes more complex, you may find that you occasionally run into address-range conflicts that force you to consider renumbering.
+ZeroTier Central is where you define the subnet used by the ZeroTier Cloud (eg 10.244.0.0/16), while your home router is generally where you define the subnets used on your home networks.
+Docker typically allocates its internal subnets from 172.16/12 but it can sometimes venture into 192.168/16. Docker tries to stay clear of anything that is in use but it doesn't always have full visibility into every corner of your private catenet.
+The IOTstack menu adds the following to your compose file:
+networks:
+
+ default:
+ driver: bridge
+ ipam:
+ driver: default
+
+ nextcloud:
+ driver: bridge
+ internal: true
+ ipam:
+ driver: default
+
That structure tells docker-compose that it should construct two networks:
+iotstack_default
iotstack_nextcloud
but leaves it up to docker-compose to work out the details. If you need more control, you can tell docker-compose to use specific subnets by adding two lines to each network definition:
+networks:
+
+ default:
+ driver: bridge
+ ipam:
+ driver: default
+ config:
+ - subnet: 172.30.0.0/22
+
+ nextcloud:
+ driver: bridge
+ internal: true
+ ipam:
+ driver: default
+ config:
+ - subnet: 172.30.4.0/22
+
A /22 is sufficient for 1,021 containers. That may seem like overkill but it doesn't really affect anything. Nevertheless, no part of those subnet prefixes is any kind of "magic number". You should feel free to use whatever subnet definitions are appropriate to your needs.
+Note:
+The 172.30.0.0/22
and 172.30.4.0/22
subnets (or whatever alternative ranges you choose) are private to the host where IOTstack is installed. That means you can re-use these same subnets on multiple hosts (Raspberry Pis or other supported platforms), irrespective of whether those hosts are at the same site (like A and B) or distributed across multiple sites (like A and F).
++The only time you would need to consider adjusting the subnet ranges is if you happened to be running two or more instances of IOTstack on the same host, simultaneously.
+
Everything in this documentation assumes you are using RFC1918 private ranges throughout your catenet. ZeroTier Cloud makes the same assumption.
+If some parts of your private catenet are using public addressing (either officially allocated to you or "misappropriated" like the 28/7 network), you may need to enable assignment of Global addressing:
+Linux clients: execute the command:
+$ docker exec zerotier zerotier-cli set «yourNetworkID» allowGlobal=1
+
See change option for an explanation of the output and how to turn the option off.
+macOS clients: open the ZeroTier menu, then the sub-menu for the Network ID, then enable "Allow Assignment of Global IPs".
+The "Allow Managed Addresses" command (aka allowManaged
option) is enabled by default. It gives ZeroTier permission to propagate IP addresses and route assignments. It is not a good idea to turn it off. If you turn it off accidentally, you can re-enable it either in the GUI or via:
$ docker exec zerotier zerotier-cli set «yourNetworkID» allowManaged=1
+
See change option for an explanation of the output.
+The commands in this section are given using this syntax:
+$ zerotier-cli command {argument …}
+
When ZeroTier client software is running in a container, you can execute commands:
+directly using docker exec
:
$ docker exec zerotier zerotier-cli command {argument …}
+
or by first opening a shell into the container:
+$ docker exec -it zerotier /bin/ash
+# zerotier-cli command {argument …}
+# exit
+$
+
On macOS you can run the commands from a Terminal window with sudo
:
$ sudo zerotier-cli command {argument …}
+
Windows, presumably, has similar functionality.
+To check the ZeroTier networks the client has joined:
+$ zerotier-cli listnetworks
+200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
+200 listnetworks 900726788b1df8e2 My_Great_Network 33:b0:c6:2e:ad:2d OK PRIVATE feth4026 10.244.0.1/16
+
To join a new ZeroTier network:
+$ zerotier-cli join «NewNetworkID»
+
To leave an existing ZeroTier network:
+$ zerotier-cli leave «ExistingNetworkID»
+
To check the status of a device running ZeroTier client:
+$ zerotier-cli info
+200 info 340afcaa2a 1.10.1 ONLINE
+
To check the status of peers in your ZeroTier Networks:
+$ zerotier-cli peers
+200 peers
+<ztaddr> <ver> <role> <lat> <link> <lastTX> <lastRX> <path>
+7492fd0dc5 1.10.1 LEAF 2 DIRECT 5407 5407 17.203.229.120/47647
+f14094b92a 1.10.1 LEAF 227 DIRECT 1976 1976 34.209.49.222/54643
+C88262CD64 1.10.1 LEAF 2 DIRECT 5411 5408 192.168.1.70/64408
+…
+
Tip:
+In the <link>
column, DIRECT
means ZeroTier has been able to arrange for this client (where you are running the command) and that peer to communicate directly. In other words, the traffic is not being relayed through ZeroTier's servers. Seeing RELAY
in this field is not necessarily a bad thing but, to quote from the ZeroTier documentation:
++If you see the peer you're trying to contact in the RELAY state, that means packets are bouncing through our root servers because a direct connection between peers cannot be established. Side effects of RELAYING are increased latency and possible packet loss. See "Router Configuration Tips" above for how to resolve this.
+
At the time of writing, these options are defined:
| option | Let ZeroTier … |
|---|---|
| allowDefault | … modify the system's default route |
| allowDNS | … modify the system's DNS settings |
| allowGlobal | … manage IP addresses and Route assignments outside the RFC1918 ranges |
| allowManaged | … manage IP addresses and Route assignments |
To check an option:
+$ zerotier-cli get «yourNetworkID» «option»
+
The result is either "0" (false) or "1" (true). Example:
+$ zerotier-cli get 900726788b1df8e2 allowDNS
+0
+
To enable an option:
+$ zerotier-cli set «yourNetworkID» «option»=1
+
To disable an option:
+$ zerotier-cli set «yourNetworkID» «option»=0
+
The response to changing an option is a large amount of JSON output. The updated state of the options is near the start. In practice, you can limit the output to just the options with a grep
:
$ zerotier-cli set 900726788b1df8e2 allowDNS=0 | grep allow
+ "allowDNS": false,
+ "allowDefault": false,
+ "allowGlobal": false,
+ "allowManaged": true,
+
Both ZeroTier-client and ZeroTier-router use the same persistent storage area. Should you choose to do so, you can freely switch back and forth between the -client and -router containers without worrying about the persistent storage area.
+The contents of ZeroTier's persistent storage uniquely identify the client to the ZeroTier Cloud. Unlike WireGuard, it is neither safe nor prudent to copy ZeroTier's persistent storage from one Raspberry Pi to another.
+An exception to this would be where you actually intend to move a ZeroTier client's identity to a different machine. That will work, providing your migration procedure never results in the same ZeroTier identity being in use on two machines at the same time.
+You can erase ZeroTier's persistent storage area like this:
+$ cd ~/IOTstack
+$ docker-compose down {zerotier-client | zerotier-router}
+$ sudo rm -rf ./volumes/zerotier-one
+
Tips:
1. Always double-check sudo commands before hitting Enter.
2. Erasing persistent storage destroys the client's authorisation (cryptographic credentials). If you start the container again, it will construct a new identity and you will need to re-authorise the client in ZeroTier Central. You should also delete the obsolete client authorisation.
+ZeroTier (either -client or -router) can be kept up-to-date with routine "pulls":
+$ cd ~/IOTstack
+$ docker-compose pull
+$ docker-compose up -d
+$ docker system prune -f
+
On iOS, you must decide whether to select "Custom DNS" when you define the VPN. If you want to change your mind, you need to delete the connection and start over.
+++Providing you don't delete the Zerotier app, the client's identity remains unchanged so you won't need to re-authorise the client in ZeroTier Central.
+
An example of when you might want to enable Custom DNS is if you want your remote clients to use PiHole for name services. If PiHole is running on the same Raspberry Pi as your Zerotier instance, you should use the IP address associated with the Raspberry Pi's interface to the ZeroTier Cloud (ie 10.244.0.1 in the example topologies).
"compose file" means the file at the path:
+~/IOTstack/docker-compose.yml
+
Run the IOTstack menu and choose both "Mosquitto" and "Zigbee2MQTT". That adds the service definitions for both of those containers to your compose file.
+Prepare your Zigbee adapter by flashing its firmware.
+The default environment variables assume:
+This is a good basis for getting started. If it sounds like it will meet your needs, you will not need to make any changes. Otherwise, review the environment variables and make appropriate changes to the service definition in your compose file.
+$ cd ~/IOTstack
+$ docker-compose up -d
+
Confirm that the Zigbee2MQTT container appears to be working correctly: check the container status, inspect the log for errors and, ideally, verify that Zigbee2MQTT messages are reaching the Mosquitto broker (each of those checks is described later on this page). Then:
+Connect to the web front end and start adding your Zigbee devices.
Zigbee adapters usually need to be "flashed" before they can be used by Zigbee2MQTT. To prepare your adapter:
+Note:
+This section covers adapters that connect to your Raspberry Pi via USB.
+++See connect to a remote adapter for information on connecting to adapters via TCP.
+
Many USB Zigbee adapters mount as /dev/ttyACM0
but this is not true for all adapters. In addition, if you have multiple devices connected to your Raspberry Pi that contend for a given device name, there are no guarantees that your Zigbee adapter will always be assigned the same name each time the device list is enumerated.
For those reasons, it is better to take the time to identify your Zigbee adapter in a manner that will be predictable, unique and reliable:
+Run the following command (the option is the digit "1"):
+$ ls -1 /dev/serial/by-id
+
The possible response patterns are:
+An error message:
+ls: cannot access '/dev/serial/by-id': No such file or directory
+
A list of one or more lines where your Zigbee adapter is not present. Example:
+usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_f068b8e7e82d4b119c0ee71fa1143ea0-if00-port0
+
The actual response (error, or a list of devices) does not matter. You are simply establishing a baseline.
+Connect your prepared Zigbee adapter to a USB port on your Raspberry Pi.
+Repeat the same ls
command from step 2. The response pattern should be different from step 2. The list should now contain your Zigbee adapter. Example:
usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_f068b8e7e82d4b119c0ee71fa1143ea0-if00-port0
+usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125A00183F06C5-if00
+
The second line indicates a CC2531 adapter is attached to the Raspberry Pi.
+If the response pattern does not change, it means the Raspberry Pi is unable to see your adapter. The two most common reasons are:
+Your adapter does not mount as a serial device. Try repeating steps 2 through 4 with the command:
+$ ls -1 /dev
+
to see if you can discover how your adapter attaches to your Raspberry Pi.
+++One example is the Electrolama zig-a-zig-ah which attaches as
+/dev/ttyUSB0
.
Use the output from the ls
command in step 4 to form the absolute path to your Zigbee adapter. Example:
/dev/serial/by-id/usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125A00183F06C5-if00
+
Check your work like this (the option is the lower-case letter "l"):
+$ ls -l /dev/serial/by-id/usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125A00183F06C5-if00
+lrwxrwxrwx 1 root root 13 Mar 31 19:49 /dev/serial/by-id/usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125A00183F06C5-if00 -> ../../ttyACM0
+
What the output is telling you is that the by-id path is a symbolic link to /dev/ttyACM0
. Although this may always be true on your Raspberry Pi, the only part that is actually guaranteed to be true is the by-id path, which is why you should use it.
Once you have identified the path to your adapter, you communicate that information to docker-compose like this:
+$ echo ZIGBEE2MQTT_DEVICE_PATH=/dev/serial/by-id/usb-Texas_Instruments_TI_CC2531_USB_CDC___0X00125A00183F06C5-if00 >>~/IOTstack/.env
+
Note:
+if you forget to do this step, docker-compose will display the following error message:
+parsing ~/IOTstack/docker-compose.yml: error while interpolating services.zigbee2mqtt.devices.[]: required variable ZIGBEE2MQTT_DEVICE_PATH is missing a value: eg echo ZIGBEE2MQTT_DEVICE_PATH=/dev/ttyACM0 >>~/IOTstack/.env
+
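If you want to satisfy yourself that the variable is being picked up before you start the container, you can check the .env entry and ask docker-compose to render the interpolated compose file. This is only a sanity check, not a required step:
$ cd ~/IOTstack
$ grep ZIGBEE2MQTT_DEVICE_PATH .env
$ docker-compose config | grep -A 1 "devices:"
The devices entry in the rendered output should show your by-id path rather than an unresolved variable.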
Continue from bring up your stack.
+Any value that can be set in a Zigbee2MQTT configuration file can also be set using an environment variable.
+++The Zigbee2MQTT documentation explains the syntax.
+
Note:
+Whenever you change the value of an environment variable, you also need to tell docker-compose
to apply the change:
$ cd ~/IOTstack
+$ docker-compose up -d zigbee2mqtt
+
The default service definition provided with IOTstack includes the following environment variables:
+ZIGBEE2MQTT_CONFIG_MQTT_SERVER=mqtt://mosquitto:1883
Typical values for this are:
+mqtt://mosquitto:1883
This is the default value supplied with the IOTstack template. It assumes that both Zigbee2MQTT and the Mosquitto broker are running in non-host mode containers on the same Raspberry Pi.
+mqtt://localhost:1883
This would be appropriate if you were to run Zigbee2MQTT in host mode and the Mosquitto broker was running on the same Raspberry Pi.
+mqtt://«host-or-ip»:1883
If the Mosquitto broker is running on a different computer, replace «host-or-ip»
with the IP address or domain name of that other computer. You should also remove or comment-out the following lines from the service definition:
depends_on:
+ - mosquitto
+
The depends_on
clause ensures that the Mosquitto container starts alongside the Zigbee2MQTT container. That would not be appropriate if Mosquitto was running on a separate computer.
ZIGBEE2MQTT_CONFIG_FRONTEND=true
This variable activates the Zigbee2MQTT web interface on port 8080. If you want to change the port number where you access the Zigbee2MQTT web interface, see connecting to the web GUI.
+ZIGBEE2MQTT_CONFIG_ADVANCED_LOG_SYMLINK_CURRENT=true
Defining this variable causes Zigbee2MQTT to create a symlink pointing to the current log folder at the path:
+~/IOTstack/volumes/zigbee2mqtt/data/log/current
+
See Checking the log for more information about why this is useful.
+- DEBUG=zigbee-herdsman*
Enabling this variable turns on extended debugging inside the container.
+Zigbee2MQTT creates a default configuration file at the path:
+~/IOTstack/volumes/zigbee2mqtt/data/configuration.yaml
+
Although you can edit the configuration file, the approach recommended for IOTstack is to use environment variables.
If you decide to edit the configuration file, you will need to use sudo to edit the file.
After you have finished making changes, you need to inform the running container by:
+$ cd ~/IOTstack
+$ docker-compose restart zigbee2mqtt
+
Check the log for errors.
Note:
If the … MQTT_SERVER environment variable discussed above is removed or left undefined, the container will go into a restart loop. This happens because the Zigbee2MQTT container defaults to trying to reach the Mosquitto broker at localhost:1883 instead of mosquitto:1883. That usually fails.
To check the container status:
$ docker ps | grep -e mosquitto -e zigbee2mqtt
+NAMES CREATED STATUS
+zigbee2mqtt 33 seconds ago Up 30 seconds
+mosquitto 33 seconds ago Up 31 seconds (healthy)
+
++The above output is filtered down to the relevant columns
+
You are looking for evidence that the container is restarting (ie the "Status" column only ever shows a low number of seconds when compared with the "Created" column).
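If you prefer, you can ask docker to print just those columns rather than filtering the full docker ps output by eye. The --format placeholders here are standard docker ps placeholders:
$ docker ps --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}" --filter name=zigbee2mqtt --filter name=mosquitto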
+You can't use docker logs zigbee2mqtt
to inspect the Zigbee2MQTT container's logs. That's because Zigbee2MQTT writes its logging information to the path:
~/IOTstack/volumes/zigbee2mqtt/data/log/yyyy-mm-dd.hh-mm-ss/log.txt
+
where yyyy-mm-dd.hh-mm-ss
is the date and time the container was last started. This means that you have to identify the folder with the latest timestamp before you can inspect the log contained within it.
Fortunately, Zigbee2MQTT offers a shortcut. If the … LOG_SYMLINK_CURRENT
environment variable is true
then the path to the current log will be:
~/IOTstack/volumes/zigbee2mqtt/data/log/current/log.txt
+
You can use commands like cat
and tail
to examine the current log. Example:
$ cat ~/IOTstack/volumes/zigbee2mqtt/data/log/current/log.txt
+
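If you want to watch the log in real time (for example, while pairing a device), following the same path with tail also works:
$ tail -f ~/IOTstack/volumes/zigbee2mqtt/data/log/current/log.txt
Terminate it with a Control+C.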
To perform this check, you will need to have the Mosquitto clients installed:
+$ sudo apt install -y mosquitto-clients
+
The Mosquitto clients package includes two command-line tools:
+mosquitto_pub
for publishing MQTT messages to the broker; andmosquitto_sub
for subscribing to MQTT messages distributed by the broker.
++In IOTstack, the "broker" is usually the Mosquitto container.
+
Assuming the Mosquitto clients are installed, you can run the following command:
+$ mosquitto_sub -v -h "localhost" -t "zigbee2mqtt/#" -F "%I %t %p"
+
One of two things will happen:
+Terminate the mosquitto_sub
command with a Control+C.
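If no Zigbee2MQTT messages ever appear, you can confirm the broker itself is healthy by publishing a test message from a second terminal while the subscription above is still running. The topic and payload here are arbitrary examples:
$ mosquitto_pub -h localhost -t "zigbee2mqtt/healthcheck" -m "hello"
The message should be echoed by the mosquitto_sub session. If it is, the broker is working and the problem is more likely between Zigbee2MQTT and the broker, or with the Zigbee adapter itself.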
Open a browser, and point it to port 8080 on your Raspberry Pi. For example:
+http://raspberrypi.local:8080
+
You should see the Zigbee2MQTT interface.
+Notes:
+The availability of the Zigbee2MQTT UI is governed by an environment variable. If you do not see the UI, check that … FRONTEND
is defined.
In the URL above, port 8080 is an external port which is exposed via the following port mapping in the Zigbee2MQTT service definition:
+ports:
+ - "8080:8080"
+
If you want to reach the Zigbee2MQTT UI via a different port, you should edit the left hand side of that mapping. For example, if you wanted to use port 10080 you would write:
+ports:
+ - "10080:8080"
+
Do not change the internal port number on the right hand side of the mapping. To apply changes to the port mapping:
+$ cd ~/IOTstack
+$ docker-compose up -d zigbee2mqtt
+
To open a shell inside the Zigbee2MQTT container, run:
+$ docker exec -it zigbee2mqtt ash
+
+++
ash
is not a typo!
To close the shell and leave the container, either type "exit" and press return, or press Control+D.
+When you become aware of a new version of Zigbee2MQTT on DockerHub, do the following:
+$ cd ~/IOTstack
+$ docker-compose pull zigbee2mqtt
+$ docker-compose up -d zigbee2mqtt
+$ docker system prune
+
In words:
+pull
compares the version on your Raspberry Pi with the latest version on DockerHub, and downloads any later version.up
instantiates a new container based on the new image and performs a new-for-old swap. There is barely any downtime.prune
cleans up the older image.You can omit the zigbee2mqtt
arguments from the pull
and up
commands, in which case docker-compose
makes an attempt to pull any available updates for all non-Dockerfile-based images, and then instantiates any new images it has downloaded.
This information is for existing users of the Zigbee2MQTT container.
+The default IOTstack service definition for Zigbee2MQTT has changed:
+If you were using the Zigbee2MQTT container in IOTstack before April 2022, you should use your favourite text editor to update your compose file to conform with the new service definition.
+++You could run the menu, then de-select and re-select Zigbee2MQTT. That will have the effect of applying the updated service definition but it also risks overwriting any other customisations you may have in place. That is why editing your compose file is the recommended approach.
+
The updated service definition is the one supplied in the IOTstack template at ~/IOTstack/.templates/zigbee2mqtt/service.yml; refer to that file for the full listing as you work through the changes below.
The changes you should make to your existing Zigbee2MQTT service definition are:
+Replace the build
directive:
build: ./.templates/zigbee2mqtt/.
+
with this image
directive:
image: koenkk/zigbee2mqtt:latest
+
This causes IOTstack to use Zigbee2MQTT images "as is" from DockerHub.
+Add these environment variables:
+ - ZIGBEE2MQTT_CONFIG_MQTT_SERVER=mqtt://mosquitto:1883
+ - ZIGBEE2MQTT_CONFIG_FRONTEND=true
+ - ZIGBEE2MQTT_CONFIG_ADVANCED_LOG_SYMLINK_CURRENT=true
+
The first two have the same effect as the changes previously made via the Dockerfile. The last variable makes it easier for you to find and view the current log.
+See environment variables for more detail.
+Add the dependency clause:
+depends_on:
+ - mosquitto
+
This ensures the Mosquitto container is brought up alongside Zigbee2MQTT. The Zigbee2MQTT container goes into a restart loop if Mosquitto is not reachable so this change enforces that business rule. See … MQTT_SERVER
for the situation where this might not be appropriate.
Environment variables in your compose file override corresponding values set in the configuration file at:
+~/IOTstack/volumes/zigbee2mqtt/data/configuration.yaml
+
If you have customised your existing Zigbee2MQTT configuration file, you should review your settings for potential conflicts with the environment variables introduced by the changes to the IOTstack service definition. You can resolve any conflicts either by:
+The second approach is recommended because it minimises the risk that Zigbee2MQTT will go into a restart loop if the configuration file is not present when the container starts.
+As the Zigbee2MQTT documentation explains, any option that can be set in a configuration file can also be set using an environment variable, so you may want to take the opportunity to implement all your settings as environment variables.
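One simple way to review your settings side-by-side is to list the configuration file next to the environment variables in your compose file. The paths here are the same ones used elsewhere on this page:
$ cat ~/IOTstack/volumes/zigbee2mqtt/data/configuration.yaml
$ grep ZIGBEE2MQTT_CONFIG_ ~/IOTstack/docker-compose.yml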
This service provides a web frontend which displays Zigbee2Mqtt service messages and is able to control Zigbee2Mqtt over MQTT. For this service a working MQTT server is required and has to be configured.
+Z2MA_SETTINGS__MQTTSERVER=mosquitto
- The MQTT service instance which is used by Zigbee2Mqtt instance. Here, "mosquitto" is the name of the container.Z2MA_SETTINGS__MQTTUSERNAME=name
- Used if your MQTT service has authentication enabled. Optional.Z2MA_SETTINGS__MQTTPASSWORD=password
- Used if your MQTT service has authentication enabled. Optional.TZ=Etc/UTC
- Set to your timezone. Optional but recommended.The Zigbee2Mqtt Assistant UI is available using port 8880. For example:
+http://your.local.ip.address:8880/
This page explains how to have a service generate a random password during build time. This will require that your service have a working options menu.
+Keep in mind that updating strings in a service's yaml config isn't limited to passwords.
+Many services often set a password on their initial spin up and store it internally. That means if if the password is changed by the menu afterwards, it may not be reflected in the service. By default the password specified in the documentation should be used, unless the user specifically selected to use a randomly generated one. In the future, the feature to specify a password manually may be added in, much like how ports can be customised.
+Inside the service's service.yml
file, a special string can be added in for the build script to find and replace. Commonly the string is %randomPassword%
, but technically any string can be used. The same string can be used multiple times for the same password to be used multiple times, and/or multiple difference strings can be used for multiple passwords.
+
mariadb:
+ image: linuxserver/mariadb
+ container_name: mariadb
+ environment:
+ - MYSQL_ROOT_PASSWORD=%randomAdminPassword%
+ - MYSQL_DATABASE=default
+ - MYSQL_USER=mariadbuser
+ - MYSQL_PASSWORD=%randomPassword%
+
These strings will be updated during the Prebuild Hook stage when building. The code to make this happen is shown below.
+This code can basically be copy-pasted into your service's build.py
file. You are welcome to expand upon it if required. It will probably be refactored into a utils function in the future to adhere to DRY (Don't Repeat Yourself) practices.
+
def preBuild():
+ # Multi-service load. Most services only include a single service. The exception being NextCloud where the database information needs to match between NextCloud and MariaDB (as defined in NextCloud's 'service.yml' file, not IOTstack's MariaDB).
+ with open((r'%s/' % serviceTemplate) + servicesFileName) as objServiceFile:
+ serviceYamlTemplate = yaml.load(objServiceFile)
+
+ oldBuildCache = {}
+ try:
+ with open(r'%s' % buildCache) as objBuildCache: # Load previous build, if it exists
+ oldBuildCache = yaml.load(objBuildCache)
+ except:
+ pass
+
+ buildCacheServices = {}
+ if "services" in oldBuildCache: # If a previous build does exist, load it so that we can reuse the password from it if required.
+ buildCacheServices = oldBuildCache["services"]
+
+ if not os.path.exists(serviceService): # Create the service directory for the service
+ os.makedirs(serviceService, exist_ok=True)
+
+ # Check if buildSettings file exists (from previous build), or create one if it doesn't (in the else block).
+ if os.path.exists(buildSettings):
+ # Password randomisation
+ with open(r'%s' % buildSettings) as objBuildSettingsFile:
+ piHoleYamlBuildOptions = yaml.load(objBuildSettingsFile)
+ if (
+ piHoleYamlBuildOptions["databasePasswordOption"] == "Randomise database password for this build"
+ or piHoleYamlBuildOptions["databasePasswordOption"] == "Randomise database password every build"
+            or piHoleYamlBuildOptions["databasePasswordOption"] == "Use default password for this build"
+ ):
+
+          if piHoleYamlBuildOptions["databasePasswordOption"] == "Use default password for this build":
+ newAdminPassword = "######" # Update to what's specified in your documentation
+ newPassword = "######" # Update to what's specified in your documentation
+ else:
+ # Generate our passwords
+ newAdminPassword = generateRandomString()
+ newPassword = generateRandomString()
+
+ # Here we loop through each service included in the current service's `service.yml` file and update the password strings.
+ for (index, serviceName) in enumerate(serviceYamlTemplate):
+ dockerComposeServicesYaml[serviceName] = serviceYamlTemplate[serviceName]
+ if "environment" in serviceYamlTemplate[serviceName]:
+ for (envIndex, envName) in enumerate(serviceYamlTemplate[serviceName]["environment"]):
+ envName = envName.replace("%randomPassword%", newPassword)
+ envName = envName.replace("%randomAdminPassword%", newAdminPassword)
+ dockerComposeServicesYaml[serviceName]["environment"][envIndex] = envName
+
+ # If the user had selected to only update the password once, ensure the build options file is updated.
+ if (piHoleYamlBuildOptions["databasePasswordOption"] == "Randomise database password for this build"):
+ piHoleYamlBuildOptions["databasePasswordOption"] = "Do nothing"
+ with open(buildSettings, 'w') as outputFile:
+ yaml.dump(piHoleYamlBuildOptions, outputFile)
+ else: # Do nothing - don't change password
+ for (index, serviceName) in enumerate(buildCacheServices):
+ if serviceName in buildCacheServices: # Load service from cache if exists (to maintain password)
+ dockerComposeServicesYaml[serviceName] = buildCacheServices[serviceName]
+ else:
+ dockerComposeServicesYaml[serviceName] = serviceYamlTemplate[serviceName]
+
+ # Build options file didn't exist, so create one, and also use default password (default action).
+ else:
+ print("PiHole Warning: Build settings file not found, using default password")
+ time.sleep(1)
+ newAdminPassword = "######" # Update to what's specified in your documentation
+ newPassword = "######" # Update to what's specified in your documentation
+ for (index, serviceName) in enumerate(serviceYamlTemplate):
+ dockerComposeServicesYaml[serviceName] = serviceYamlTemplate[serviceName]
+ if "environment" in serviceYamlTemplate[serviceName]:
+ for (envIndex, envName) in enumerate(serviceYamlTemplate[serviceName]["environment"]):
+ envName = envName.replace("%randomPassword%", newPassword)
+ envName = envName.replace("%randomAdminPassword%", newAdminPassword)
+ dockerComposeServicesYaml[serviceName]["environment"][envIndex] = envName
+ piHoleYamlBuildOptions = {
+ "version": "1",
+ "application": "IOTstack",
+ "service": "PiHole",
+ "comment": "PiHole Build Options"
+ }
+
+ piHoleYamlBuildOptions["databasePasswordOption"] = "Do nothing"
+ with open(buildSettings, 'w') as outputFile:
+ yaml.dump(piHoleYamlBuildOptions, outputFile)
+
+ return True
+
While not needed, since the default action is to create a random password, it is a good idea to allow the user to choose what to do. This can be achieved by giving them access to a password menu. This code can be placed in your service's build.py
file; it will show a new menu option, allowing users to select it and be taken to a password settings screen.
Remember that you need to have an already working menu, and to place this code into it.
+import signal
+
+...
+
+def setPasswordOptions():
+ global needsRender
+ global hasRebuiltAddons
+ passwordOptionsMenuFilePath = "./.templates/{currentService}/passwords.py".format(currentService=currentServiceName)
+ with open(passwordOptionsMenuFilePath, "rb") as pythonDynamicImportFile:
+ code = compile(pythonDynamicImportFile.read(), passwordOptionsMenuFilePath, "exec")
+ execGlobals = {
+ "currentServiceName": currentServiceName,
+ "renderMode": renderMode
+ }
+ execLocals = {}
+ screenActive = False
+ exec(code, execGlobals, execLocals)
+ signal.signal(signal.SIGWINCH, onResize)
+ screenActive = True
+ needsRender = 1
+
+...
+
+def createMenu():
+ global yourServicesBuildOptions
+ global serviceService
+
+ yourServicesBuildOptions = []
+ yourServicesBuildOptions.append([
+ "Your Service Password Options",
+ setPasswordOptions
+ ])
+
+ yourServicesBuildOptions.append(["Go back", goBack])
+
The code for the Password settings is lengthy, but it's pasted here for convenience +
#!/usr/bin/env python3
+
+import signal
+
+def main():
+ from blessed import Terminal
+ from deps.chars import specialChars, commonTopBorder, commonBottomBorder, commonEmptyLine
+ from deps.consts import servicesDirectory, templatesDirectory, buildSettingsFileName
+ import time
+ import subprocess
+    import ruamel.yaml
+ import os
+
+ global signal
+ global currentServiceName
+ global menuSelectionInProgress
+ global mainMenuList
+ global currentMenuItemIndex
+ global renderMode
+ global paginationSize
+ global paginationStartIndex
+ global hideHelpText
+
+ yaml = ruamel.yaml.YAML()
+ yaml.preserve_quotes = True
+
+ try: # If not already set, then set it.
+ hideHelpText = hideHelpText
+ except:
+ hideHelpText = False
+
+ term = Terminal()
+ hotzoneLocation = [((term.height // 16) + 6), 0]
+ paginationToggle = [10, term.height - 25]
+ paginationStartIndex = 0
+ paginationSize = paginationToggle[0]
+
+ serviceService = servicesDirectory + currentServiceName
+ serviceTemplate = templatesDirectory + currentServiceName
+ buildSettings = serviceService + buildSettingsFileName
+
+ def goBack():
+ global menuSelectionInProgress
+ global needsRender
+ menuSelectionInProgress = False
+ needsRender = 1
+ return True
+
+ mainMenuList = []
+
+ hotzoneLocation = [((term.height // 16) + 6), 0]
+
+ menuSelectionInProgress = True
+ currentMenuItemIndex = 0
+ menuNavigateDirection = 0
+
+ # Render Modes:
+ # 0 = No render needed
+ # 1 = Full render
+ # 2 = Hotzone only
+ needsRender = 1
+
+ def onResize(sig, action):
+ global mainMenuList
+ global currentMenuItemIndex
+ mainRender(1, mainMenuList, currentMenuItemIndex)
+
+ def generateLineText(text, textLength=None, paddingBefore=0, lineLength=64):
+ result = ""
+ for i in range(paddingBefore):
+ result += " "
+
+ textPrintableCharactersLength = textLength
+
+ if (textPrintableCharactersLength) == None:
+ textPrintableCharactersLength = len(text)
+
+ result += text
+ remainingSpace = lineLength - textPrintableCharactersLength
+
+ for i in range(remainingSpace):
+ result += " "
+
+ return result
+
+ def renderHotZone(term, renderType, menu, selection, hotzoneLocation, paddingBefore = 4):
+ global paginationSize
+ selectedTextLength = len("-> ")
+
+ print(term.move(hotzoneLocation[0], hotzoneLocation[1]))
+
+ if paginationStartIndex >= 1:
+ print(term.center("{b} {uaf} {uaf}{uaf}{uaf} {ual} {b}".format(
+ b=specialChars[renderMode]["borderVertical"],
+ uaf=specialChars[renderMode]["upArrowFull"],
+ ual=specialChars[renderMode]["upArrowLine"]
+ )))
+ else:
+ print(term.center(commonEmptyLine(renderMode)))
+
+ for (index, menuItem) in enumerate(menu): # Menu loop
+ if index >= paginationStartIndex and index < paginationStartIndex + paginationSize:
+ lineText = generateLineText(menuItem[0], paddingBefore=paddingBefore)
+
+ # Menu highlight logic
+ if index == selection:
+ formattedLineText = '-> {t.blue_on_green}{title}{t.normal} <-'.format(t=term, title=menuItem[0])
+ paddedLineText = generateLineText(formattedLineText, textLength=len(menuItem[0]) + selectedTextLength, paddingBefore=paddingBefore - selectedTextLength)
+ toPrint = paddedLineText
+ else:
+ toPrint = '{title}{t.normal}'.format(t=term, title=lineText)
+ # #####
+
+ # Menu check render logic
+ if menuItem[1]["checked"]:
+ toPrint = " (X) " + toPrint
+ else:
+ toPrint = " ( ) " + toPrint
+
+ toPrint = "{bv} {toPrint} {bv}".format(bv=specialChars[renderMode]["borderVertical"], toPrint=toPrint) # Generate border
+ toPrint = term.center(toPrint) # Center Text (All lines should have the same amount of printable characters)
+ # #####
+ print(toPrint)
+
+ if paginationStartIndex + paginationSize < len(menu):
+ print(term.center("{b} {daf} {daf}{daf}{daf} {dal} {b}".format(
+ b=specialChars[renderMode]["borderVertical"],
+ daf=specialChars[renderMode]["downArrowFull"],
+ dal=specialChars[renderMode]["downArrowLine"]
+ )))
+ else:
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center(commonEmptyLine(renderMode)))
+
+
+ def mainRender(needsRender, menu, selection):
+ global paginationStartIndex
+ global paginationSize
+ term = Terminal()
+
+ if selection >= paginationStartIndex + paginationSize:
+ paginationStartIndex = selection - (paginationSize - 1) + 1
+ needsRender = 1
+
+ if selection <= paginationStartIndex - 1:
+ paginationStartIndex = selection
+ needsRender = 1
+
+ if needsRender == 1:
+ print(term.clear())
+ print(term.move_y(term.height // 16))
+ print(term.black_on_cornsilk4(term.center('IOTstack YourServices Password Options')))
+ print("")
+ print(term.center(commonTopBorder(renderMode)))
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center("{bv} Select Password Option {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center(commonEmptyLine(renderMode)))
+
+ if needsRender >= 1:
+ renderHotZone(term, needsRender, menu, selection, hotzoneLocation)
+
+ if needsRender == 1:
+ print(term.center(commonEmptyLine(renderMode)))
+ if not hideHelpText:
+ if term.height < 32:
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center("{bv} Not enough vertical room to render controls help text {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center(commonEmptyLine(renderMode)))
+ else:
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center("{bv} Controls: {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center("{bv} [Space] to select option {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center("{bv} [Up] and [Down] to move selection cursor {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center("{bv} [H] Show/hide this text {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center("{bv} [Enter] to build and save option {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center("{bv} [Escape] to cancel changes {bv}".format(bv=specialChars[renderMode]["borderVertical"])))
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center(commonEmptyLine(renderMode)))
+ print(term.center(commonBottomBorder(renderMode)))
+
+ def runSelection(selection):
+ import types
+ if len(mainMenuList[selection]) > 1 and isinstance(mainMenuList[selection][1], types.FunctionType):
+ mainMenuList[selection][1]()
+ else:
+ print(term.green_reverse('IOTstack Error: No function assigned to menu item: "{}"'.format(mainMenuList[selection][0])))
+
+ def isMenuItemSelectable(menu, index):
+ if len(menu) > index:
+ if len(menu[index]) > 1:
+ if "skip" in menu[index][1] and menu[index][1]["skip"] == True:
+ return False
+ return True
+
+ def loadOptionsMenu():
+ global mainMenuList
+ mainMenuList.append(["Use default password for this build", { "checked": True }])
+ mainMenuList.append(["Randomise database password for this build", { "checked": False }])
+ mainMenuList.append(["Randomise database password every build", { "checked": False }])
+ mainMenuList.append(["Do nothing", { "checked": False }])
+
+ def checkMenuItem(selection):
+ global mainMenuList
+ for (index, menuItem) in enumerate(mainMenuList):
+ mainMenuList[index][1]["checked"] = False
+
+ mainMenuList[selection][1]["checked"] = True
+
+ def saveOptions():
+ try:
+ if not os.path.exists(serviceService):
+ os.makedirs(serviceService, exist_ok=True)
+
+ if os.path.exists(buildSettings):
+ with open(r'%s' % buildSettings) as objBuildSettingsFile:
+ yourServicesYamlBuildOptions = yaml.load(objBuildSettingsFile)
+ else:
+ yourServices = {
+ "version": "1",
+ "application": "IOTstack",
+ "service": "Your Service",
+ "comment": "Your Service Build Options"
+ }
+
+ yourServices["databasePasswordOption"] = ""
+
+ for (index, menuOption) in enumerate(mainMenuList):
+ if menuOption[1]["checked"]:
+ yourServices["databasePasswordOption"] = menuOption[0]
+ break
+
+ with open(buildSettings, 'w') as outputFile:
+ yaml.dump(yourServices, outputFile)
+
+ except Exception as err:
+ print("Error saving Your Services Password options", currentServiceName)
+ print(err)
+ return False
+ global hasRebuiltHardwareSelection
+ hasRebuiltHardwareSelection = True
+ return True
+
+ def loadOptions():
+ try:
+ if not os.path.exists(serviceService):
+ os.makedirs(serviceService, exist_ok=True)
+
+ if os.path.exists(buildSettings):
+ with open(r'%s' % buildSettings) as objBuildSettingsFile:
+ yourServicesYamlBuildOptions = yaml.load(objBuildSettingsFile)
+
+ for (index, menuOption) in enumerate(mainMenuList):
+ if menuOption[0] == yourServicesYamlBuildOptions["databasePasswordOption"]:
+ checkMenuItem(index)
+ break
+
+ except Exception as err:
+ print("Error loading Your Services Password options", currentServiceName)
+ print(err)
+ return False
+ return True
+
+
+ if __name__ == 'builtins':
+ global signal
+ term = Terminal()
+ signal.signal(signal.SIGWINCH, onResize)
+ loadOptionsMenu()
+ loadOptions()
+ with term.fullscreen():
+ menuNavigateDirection = 0
+ mainRender(needsRender, mainMenuList, currentMenuItemIndex)
+ menuSelectionInProgress = True
+ with term.cbreak():
+ while menuSelectionInProgress:
+ menuNavigateDirection = 0
+
+ if not needsRender == 0: # Only rerender when changed to prevent flickering
+ mainRender(needsRender, mainMenuList, currentMenuItemIndex)
+ needsRender = 0
+
+ key = term.inkey(esc_delay=0.05)
+ if key.is_sequence:
+ if key.name == 'KEY_TAB':
+ if paginationSize == paginationToggle[0]:
+ paginationSize = paginationToggle[1]
+ else:
+ paginationSize = paginationToggle[0]
+ mainRender(1, mainMenuList, currentMenuItemIndex)
+ if key.name == 'KEY_DOWN':
+ menuNavigateDirection += 1
+ if key.name == 'KEY_UP':
+ menuNavigateDirection -= 1
+ if key.name == 'KEY_ENTER':
+ if saveOptions():
+ return True
+ else:
+ print("Something went wrong. Try saving the list again.")
+ if key.name == 'KEY_ESCAPE':
+ menuSelectionInProgress = False
+ return True
+ elif key:
+ if key == ' ': # Space pressed
+ checkMenuItem(currentMenuItemIndex) # Update checked list
+ needsRender = 2
+ elif key == 'h': # H pressed
+ if hideHelpText:
+ hideHelpText = False
+ else:
+ hideHelpText = True
+ mainRender(1, mainMenuList, currentMenuItemIndex)
+
+ if menuNavigateDirection != 0: # If a direction was pressed, find next selectable item
+ currentMenuItemIndex += menuNavigateDirection
+ currentMenuItemIndex = currentMenuItemIndex % len(mainMenuList)
+ needsRender = 2
+
+ while not isMenuItemSelectable(mainMenuList, currentMenuItemIndex):
+ currentMenuItemIndex += menuNavigateDirection
+ currentMenuItemIndex = currentMenuItemIndex % len(mainMenuList)
+ return True
+
+ return True
+
+originalSignalHandler = signal.getsignal(signal.SIGINT)
+main()
+signal.signal(signal.SIGWINCH, originalSignalHandler)
+
This page explains how the build stack system works for developers.
+A service only requires 2 files:
* service.yml - Contains data for docker-compose
* build.py - Contains logic that the menu system uses.
Inside the service.yml
is where the service data for docker-compose is housed, for example:
+
adminer:
+ container_name: adminer
+ image: adminer
+ restart: unless-stopped
+ ports:
+ - "9080:8080"
+
The adminer service must be placed into a folder called adminer inside the ./.templates directory.
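Using the adminer example above, the template folder would be expected to contain at least those two files. A quick check (illustrative only):
$ ls ~/IOTstack/.templates/adminer
build.py  service.yml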
+At the very least, the build.py
requires the following code:
+
#!/usr/bin/env python3
+
+issues = {} # Returned issues dict
+buildHooks = {} # Options, and others hooks
+haltOnErrors = True
+
+# Main wrapper function. Required to make local vars work correctly
+def main():
+ global currentServiceName # Name of the current service
+
+ # This lets the menu know whether to put " >> Options " or not
+ # This function is REQUIRED.
+ def checkForOptionsHook():
+ try:
+ buildHooks["options"] = callable(runOptionsMenu)
+ except:
+ buildHooks["options"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForPreBuildHook():
+ try:
+ buildHooks["preBuildHook"] = callable(preBuild)
+ except:
+ buildHooks["preBuildHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForPostBuildHook():
+ try:
+ buildHooks["postBuildHook"] = callable(postBuild)
+ except:
+ buildHooks["postBuildHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForRunChecksHook():
+ try:
+ buildHooks["runChecksHook"] = callable(runChecks)
+ except:
+ buildHooks["runChecksHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # Entrypoint for execution
+ if haltOnErrors:
+ eval(toRun)()
+ else:
+ try:
+ eval(toRun)()
+ except:
+ pass
+
+# This check isn't required, but placed here for debugging purposes
+global currentServiceName # Name of the current service
+if currentServiceName == 'adminer': # Make sure you update this.
+ main()
+else:
+ print("Error. '{}' Tried to run 'adminer' config".format(currentServiceName))
+
If Python isn't your thing, here's a code blob you can copy and paste. Just be sure to update the lines where the comments start with ---
+
#!/usr/bin/env python3
+
+issues = {} # Returned issues dict
+buildHooks = {} # Options, and others hooks
+haltOnErrors = True
+
+# Main wrapper function. Required to make local vars work correctly
+def main():
+ import subprocess
+ global dockerComposeServicesYaml # The loaded memory YAML of all checked services
+ global toRun # Switch for which function to run when executed
+ global buildHooks # Where to place the options menu result
+ global currentServiceName # Name of the current service
+ global issues # Returned issues dict
+ global haltOnErrors # Turn on to allow erroring
+
+ from deps.consts import servicesDirectory, templatesDirectory, volumesDirectory, servicesFileName
+
+ # runtime vars
+ serviceVolume = volumesDirectory + currentServiceName # Unused in example
+ serviceService = servicesDirectory + currentServiceName # Unused in example
+ serviceTemplate = templatesDirectory + currentServiceName
+
+ # This lets the menu know whether to put " >> Options " or not
+ # This function is REQUIRED.
+ def checkForOptionsHook():
+ try:
+ buildHooks["options"] = callable(runOptionsMenu)
+ except:
+ buildHooks["options"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForPreBuildHook():
+ try:
+ buildHooks["preBuildHook"] = callable(preBuild)
+ except:
+ buildHooks["preBuildHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForPostBuildHook():
+ try:
+ buildHooks["postBuildHook"] = callable(postBuild)
+ except:
+ buildHooks["postBuildHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # This function is REQUIRED.
+ def checkForRunChecksHook():
+ try:
+ buildHooks["runChecksHook"] = callable(runChecks)
+ except:
+ buildHooks["runChecksHook"] = False
+ return buildHooks
+ return buildHooks
+
+ # This service will not check anything unless this is set
+ # This function is optional, and will run each time the menu is rendered
+ def runChecks():
+ checkForIssues()
+ return []
+
+ # This function is optional, and will run after the docker-compose.yml file is written to disk.
+ def postBuild():
+ return True
+
+ # This function is optional, and will run just before the build docker-compose.yml code.
+ def preBuild():
+ execComm = "bash {currentServiceTemplate}/build.sh".format(currentServiceTemplate=serviceTemplate) # --- You may want to change this
+ print("[Wireguard]: ", execComm) # --- Ensure to update the service name with yours
+ subprocess.call(execComm, shell=True) # This is where the magic happens
+ return True
+
+ # #####################################
+ # Supporting functions below
+ # #####################################
+
+ def checkForIssues():
+ return True
+
+ if haltOnErrors:
+ eval(toRun)()
+ else:
+ try:
+ eval(toRun)()
+ except:
+ pass
+
+# This check isn't required, but placed here for debugging purposes
+global currentServiceName # Name of the current service
+if currentServiceName == 'wireguard': # --- Ensure to update the service name with yours
+ main()
+else:
+ print("Error. '{}' Tried to run 'wireguard' config".format(currentServiceName)) # --- Ensure to update the service name with yours
+
How to set up and use git for IOTstack development.
+$ git clone git@github.com:<username>/IOTstack.git
+$ cd IOTstack
+$ git config user.name <username>
+$ git config user.email <1234>+<username>@users.noreply.github.com
+
$ git remote add upstream https://github.com/SensorsIot/IOTstack.git
+
$ git config fetch.prune true
+$ git config remote.pushDefault origin
+$ git config --add remote.origin.fetch "^refs/heads/gh-pages"
+$ git config --add remote.upstream.fetch "^refs/heads/gh-pages"
+$ git config branch.master.mergeoptions "--no-ff"
+$ git config fetch.parallel 0
+$ git fetch --all
+
flowchart LR
+ upstream["upstream (SensorsIOT)"] -- "1. git fetch + git checkout -b"
+ --> local[local branch]
+ local -- "2. git commit" --> local
+ local -- "3. git push" --> origin["origin (your fork)"]
+ origin -- "3. create github pull-request" --> upstream
+Please see Contributing for instructions on how to write commit +messages.
+$ git fetch upstream
+$ git checkout -b <your-descriptive-branch-name> upstream/master
+...coding and testing...
+$ git add <your new or changed file>
+Check everything has been added:
+$ git status
+$ git commit
+$ git push
+
$ git config alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
+
When your pull-requests have been merged, their branches aren't needed anymore. +Remove them to reduce clutter and distractions. The master branch is never +deleted.
+$ git fetch --all
+$ git checkout master
+$ git branch -r --merged upstream/master | \
+ grep -v origin/master$ | grep origin | sed 's/origin\///' | \
+ xargs -I 'B' git push --delete origin B
+$ git branch --merged upstream/master | grep -v " master$" | \
+ xargs -I'B' git branch -d B
+
This is handy for easily testing out other persons' suggested changes. The +branches are of course fetch-only, and you can't push your own commits to them.
+$ git config --add remote.upstream.fetch +refs/pull/*/head:refs/remotes/upstream/pr-*
+$ git fetch upstream
+
Note: Everything below requires this.
+Branches that include the latest upstream/master, but are not merged to +your current branch, are potentially mergeable pull-requests. This is useful +for identifying which pull-requests you should be able to merge without +conflict.
+$ git fetch upstream
+$ git branch -r --contains upstream/master --no-merged upstream/master
+
In git, the only way to know if a branch can be merged without a conflict, is
+by actually doing the merge. An alias to (re-)create a branch named
+merge-test
and do merges into it:
$ git config alias.test-pull-request-merge $'!f() { : git merge && \
+ OPENPULLS=$(curl -s \'https://api.github.com/repos/SensorsIot/IOTstack/pulls?base=master&per_page=100\' | \
+ grep "^.....number" | sed -E \'s/.* ([0-9]+),/ upstream\\/pr-\\1/\') && \
+ git fetch upstream && git checkout -B merge-test upstream/master && \
+ git branch -r --contains upstream/master --no-merged upstream/master | \
+ grep upstream/pr- | sort - <(echo "$OPENPULLS") | \
+ { uniq -d; [[ "$1" ]] && echo "$1"; } | \
+ xargs -I B sh -c "echo Merging B && \
+ git merge --no-rerere-autoupdate --no-ff --quiet B || \
+ { echo ***FAILED TO MERGE B && exit 255; };" ;}; f'
+
Then use this alias combined with git checkout -
, returning your working copy
+back to the original branch if all merges succeeded:
$ git test-pull-request-merge && git checkout -
+
This merges all branches that are: a) currently open pull requests and b) +up-to-date, i.e. contains upstream/master and c) not merged already and d) the +optional provided argument. Note: won't ignore draft pull-requests. If it +encounters a failure, it stops immediately to let you inspect the conflict.
+Failed merge?
+If there was a merge-conflict, inspect it e.g. using git diff
, but
+don't do any real work or conflict resolution in the merge-test branch.
+When you have understood the merge-conflict and want to leave the
+merge-test branch, abort the failed merge and switch to your actual branch:
$ git diff
+$ git merge --abort
+$ git checkout <your-PR-branch-that-resulted-in-the-conflict>
+
When you intend to submit a pull-request you might want to check that it won't +conflict with any of the existing pull-requests.
+Use the alias from the previous "Test all current pull-requests..."-topic + to test merging your branch in addition to all current pull request:
+$ git test-pull-request-merge <your-pull-request-branch> && git checkout -
+
If there is a merge-conflict, see "Failed merge?" above.
+This page explains how the menu system works for developers.
Originally this script was written in bash. After a while it became obvious that bash wasn't well suited to dealing with all the different types of configuration files, and the logic that goes with configuring everything. IOTstack needs to be accessible to all levels of programmers and tinkerers, not just ones experienced with Linux and bash. For this reason, it was rewritten in Python since the language syntax is easier to understand, and is more commonly used for scripting and programming than bash. Bash is still used in IOTstack where it makes sense to use it, but the menu system itself uses Python. The code itself, while not being the most well structured or efficient, was intentionally made that way so that beginners and experienced programmers could contribute to the project. We are always open to improvements if you have suggestions.
+Each screen of the menu is its own Python script. You can find most of these in the ./scripts
directory. When you select an item from the menu, and it changes screens, it actually dynamically loads and executes that Python script. It passes data as required by placing it into the global variable space so that both the child and the parent script can access it.
with open(childPythonScriptPath, "rb") as pythonDynamicImportFile:
+ code = compile(pythonDynamicImportFile.read(), childPythonScriptPath, "exec")
+execGlobals = {
+ "globalKeyName": "globalKeyValue"
+}
+execLocals = {}
+print(execGlobals["globalKeyName"]) # Will print out 'globalKeyValue'
+exec(code, execGlobals, execLocals)
+print(execGlobals["globalKeyName"]) # Will print out 'newValue'
+
def someFunction():
+ global globalKeyName
+ print(globalKeyName) # Will print out 'globalKeyValue'
+ globalKeyName = "newValue"
+
Each menu is its own python executable. The entry point is down the bottom of the file wrapped in a main()
function to prevent variable scope creep.
The code at the bottom of the main()
function:
+
if __name__ == 'builtins':
+
Is actually where the execution path runs, all the code above it is just declared so that it can be called without ordering or scope issues.
It was obvious early on that the menu system would be slow on lower end devices, such as the Raspberry Pi, especially if it were rendering a 4k terminal screen from a desktop via SSH. To mitigate this issue, not all of the screen is redrawn when there is a change. A "Hotzone", as it's called in the code, is usually rerendered when there's a change (such as pressing up or down to change an item selection, but not when scrolling). Full screen redraws are expensive and are only used when required, for example, when scrolling the pagination, selecting or deselecting a service, expanding or collapsing the menu and so on.
+At the very beginning of the main menu screen (./scripts/main_menu.py
) the function checkRenderOptions()
is run to determine what characters can be displayed on the screen. It will try various character sets, and eventually default to ASCII if none of the fancier stuff can be rendered. This setting is passed into of the sub menus through the submenu's global variables so that they don't have to recheck when they load.
From the main screen, you will see several sections leading to various submenus. Most of these menus work in the same way as the main menu. The only exception to this rule is the Build Stack menu, which is probably the most complex part of IOTstack.
+Path: ./scripts/buildstack_menu.py
./templates
directory and check for a build.py
file inside each of them. This can be seen in the generateTemplateList()
function, which is executed before the first rendering happens../services/docker-compose.save.yml
exists. This file is used to save the configuration of the last build. This happens in the loadCurrentConfigs()
function. It is important that the service name in the compose file matches the folder name, any service that doesn't will either cause an error, or won't be loaded into the menu.prepareMenuState()
function that basically checks which items should be ticked, and check for any issues with the ticked items by running checkForIssues()
.When an item is selected, 3 things happen:
+1. Update the UI variable (menu
) with function checkMenuItem(selectionIndex)
to let the user know the current state.
+2. Update the array holding every checked item setCheckedMenuItems()
. It uses the UI variable (menu
) to know which items are set.
+3. Check for any issues with the new list of selected items by running checkForIssues()
.
During a full render sequence (this is not a hotzone render), the build stack menu checks to see if each of the services has an options menu. It does this by executing the build.py
script of each of the services and passing in checkForOptionsHook
into the toRun
global variable property to see if the script has a runOptionsMenu
function. If the service's function result is true, without error, then the options text will appear up for that menu item.
When a service is selected or deselected on the menu, the checkForIssues()
function is run. This function iterates through each of the selected menu items' folders executing the build.py
script and passing in checkForRunChecksHook
into the toRun
global variable property to see if the script has a runChecks
function. The runChecks
function is different depending on the service, since each service has its own requirements. Generally though, the runChecks
function should check for port conflicts against any of the other services that are enabled. The menu will still allow you to build the stack, even if issues are present, assuming there are no errors raised during the build process.
Pressing enter on the Build Stack menu kicks off the build process. The Build Stack menu will execute the runPrebuildHook()
function. This function iterates through each of the selected menu items' folders executing the build.py
script and passing in checkForPreBuildHook
into the toRun
global variable property to see if the script has a preBuild
function. The preBuild
function is different depending on the service, since each service has its own requirements. Some services may not even use the prebuild hook. The prebuild is very useful for setting up the services' configuration however. For example, it can be used to autogenerate a password for a particular service, or copy and modify a configuration file from the ./.templates
directory into the ./services
or ./volumes
directory.
The Build Stack menu will execute the runPostBuildHook()
function in the final step of the build process, after the docker-compose.yml
file has been written to disk. This function iterates through each of the selected menu items' folders executing the build.py
script and passing in checkForPostBuildHook
into the toRun
global variable property to see if the script has a postBuild
function. The postBuild
function is different depending on the service, since each service has its own requirements. Most services won't require this function, but it can be useful for cleaning up temporary files and so on.
The selected services' yaml configuration is already loaded into memory before the build stack process is started.
The build process then:
1. loads the ./.templates/docker-compose-base.yml file into an in-memory yaml structure;
2. loads the ./compose-override.yml file into memory and merges it in;
3. writes the result out to ./docker-compose.yml; and
4. runs postbuild.sh if it exists, with the list of services built.
The postbuild bash script allows arbitrary bash commands to be executed after the stack has been built.
+Place a file in the main directory called postbuild.sh
. When the buildstack build logic finishes, it'll execute the postbuild.sh
script, passing in each service selected from the buildstack menu as a parameter. This script is run each time the buildstack logic runs.
The postbuild.sh
file has been added to gitignore, so it won't be updated by IOTstack when IOTstack is updated. It has also been added to the backup script so that it will be backed up with your personal IOTstack backups.
The following example postbuild.sh script will print out each of the services built, and a custom message for nodered. If it was the first time the script was executed, it'll also output "Fresh Install" at the end, using a .install_tainted file as a marker.
+
#!/bin/bash
+
+for iotstackService in "$@"
+do
+ echo "$iotstackService"
+ if [ "$iotstackService" == "nodered" ]; then
+ echo "NodeRed Installed!"
+ fi
+done
+
+if [ ! -f .install_tainted ]; then
+ echo "Fresh Install!"
+ touch .install_tainted
+fi
+
The postbuild script can be used to run custom bash commands, such as moving files, or issuing commands that your services expect to be completed before running.
We welcome pull-requests.
+For larger contributions, please open an issue describing your idea. It +may provide valuable discussion and feedback. It also prevents the unfortunate +case of two persons working on the same thing. There's no need to wait for any +approval.
+Development guidelines
+Tip
+For simple changes you can straight-up just use the edit link available on +every documentation page. It's the pen-icon to the right of the top +heading. Write your changes, check the preview-tab everything looks as +expected and submit as proposed changes.
+Documentation is is written as markdown, processed using mkdocs (docs) and the Material theme (docs). The Material theme is not just styling, but provides additional syntax extensions.
+To test your local changes while writing them and before making a pull-request, +start a local mkdocs server: +
$ ~/IOTstack/scripts/development/mkdocs-serve.sh
+
In this section you can find information on how to contribute a service to IOTstack. We are generally very accepting of new services where they are useful. Keep in mind that if it is not IOTstack, selfhosted, or automation related we may not approve the PR.
+Services will grow over time, we may split up the buildstack menu into subsections or create filters to make organising all the services we provide easier to find.
+service.yml
file is correctbuild.py
file is correctPre
and Post
hooks work with no errors. service_name: Add/Fix/Change feature or bug summary
+
+Optional longer description of the commit. What is changed and why it
+is changed. Wrap at 72 characters.
+
+* You can use markdown formating as this will automatically be the
+ description of your pull-request.
+* End by adding any issues this commit fixes, one per line:
+
+Fixes #1234
+Fixes #4567
+
The first line is a short description. Keep it short, aim for 50 + characters. This is like the subject of an email. It shouldn't try to fully + or uniquely describe what the commit does. More importantly it should aim + to inform why this commit was made.
+service_name
- service or project-part being changed, e.g. influxdb,
+grafana, docs. Documentation changes should use the the name of the
+service. Use docs
if it's changes to general documentation. If all else
+fails, use the folder-name of the file you are changing. Use lowercase.
Add/Fix/Change
- what type of an change this commit is. Capitalized.
feature or bug summary
- free very short text giving an idea of why/what.
Empty line.
+A longer description of what and why. Wrapped to 72 characters.
+Use github issue linking +to automatically close issues when the pull-request of this commit is +merged.
+For tips on how to use git, see Git Setup.
+If your new service is approved and merged then congratulations! Please watch the Issues page on github over the next few days and weeks to see if any users have questions or issues with your new service.
+Links:
+ + + + + + + + + + + + + + +(may include items not yet merged)
+ + +duck/duck.sh
script.Originally this script was written in bash. After a while it became obvious that bash wasn't well suited to dealing with all the different types of configuration files, and logic that goes with configuring everything. IOTstack needs to be accessible to all levels of programmers and tinkerers, not just ones experienced with Linux and bash. For this reason, it was rewritten in Python since the language syntax is easier to understand, and is more commonly used for scripting and programming than bash. Bash is still used in IOTstack where it makes sense to use it, but the menu system itself uses Python. The code is intentionally made so that beginners and experienced programmers could contribute to the project. We are always open to improvements if you have suggestions.
+There are many features that are needing to be introduced into the new menu system. From meta tags on services for filtering, to optional nginx autoconfiguration and authentication. For this reason you may initially experience bugs (very hard to test every type of configuration!). The new menu system has been worked on and tested for 6 months and we think it's stable enough to merge into the master branch for mainstream usage. The code still needs some work to make it easier to add new services and to not require copy pasting the same code for each new service. Also to make the menu system not be needed at all (so it can be automated with bash scripts).
+There are a few changes that you need to be aware of:
+*.env
files are no longer a thing by default. Everything needed is specified in the service.yml file, you can still optionally use them though either with Custom Overrides or with the PostBuild script. Specific config files for certain services still work as they once did.old-menu
branch. It will be unmaintained except for critical updates. It will eventually be removed - but not before everyone is ready to leave it.Test that your backups are working before you switch. The old-menu
branch will become avaiable just before the new menu is merged into master to ensure it has the latest commits applied.
+These instructions explain how to migrate from gcgarner/IOTstack to SensorsIot/IOTstack.
+Migrating to SensorsIot/IOTstack was fairly easy when this repository was first forked from gcgarner/IOTstack. Unfortunately, what was a fairly simple switching procedure no longer works properly because conflicts have emerged.
+The probability of conflicts developing increases as a function of time since the fork. Conflicts were and are pretty much inevitable so a more involved procedure is needed.
+Make sure that you are, actually, on gcgarner. Don't assume!
+$ git remote -v
+origin https://github.com/gcgarner/IOTstack.git (fetch)
+origin https://github.com/gcgarner/IOTstack.git (push)
+
Do not proceed if you don't see those URLs!
+Take your stack down. This is not strictly necessary but we'll be moving the goalposts a bit so it's better to be on the safe side.
+$ cd ~/IOTstack
+$ docker-compose down
+
There are two basic approaches to switching from gcgarner/IOTstack to SensorsIot/IOTstack:
+ +You can think of the first as "working with git" while the second is "using brute force".
+The first approach will work if you haven't tried any other migration steps and/or have not made too many changes to items in your gcgarner/IOTstack that are under git control.
+If you are already stuck or you try the first approach and get a mess, or it all looks far too hard to sort out, then try the Migration by clone and merge approach.
+Make sure you are on the master branch (you probably are so this is just a precaution), and then see if Git thinks you have made any local changes:
+$ cd ~/IOTstack
+$ git checkout master
+$ git status
+
If Git reports any "modified" files, those will probably get in the way of a successful migration so it's a good idea to get those out of the way.
+For example, suppose you edited menu.sh
at some point. Git would report that as:
modified: menu.sh
+
The simplest way to deal with modified files is to rename them to move them out of the way, and then restore the original:
+Rename your customised version by adding your initials to the end of the filename. Later, you can come back and compare your customised version with the version from GitHub and see if you want to preserve any changes.
+Here I'm assuming your initials are "jqh":
+$ mv menu.sh menu.sh.jqh
+
Tell git to restore the unmodified version:
+$ git checkout -- menu.sh
+
Now, repeat the Git command that complained about the file:
+$ git status
+
+The modified file will show up as "untracked", which is OK (ignore it):
+Untracked files:
+ (use "git add <file>..." to include in what will be committed)
+
+ menu.sh.jqh
+
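+If you would rather not rename files at all, git's stash can park local changes instead; a sketch, assuming the modified file is menu.sh:
+
+$ cd ~/IOTstack
+$ git stash push -m "pre-migration customisations" menu.sh
+
+# later, to review what you stashed:
+$ git stash list
+$ git stash show -p 'stash@{0}'
+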
Make sure your local copy of gcgarner is in sync with GitHub.
+$ git pull
+
+There may or may not be an "upstream" remote set. The most likely reason for one to exist is if you used your local copy as the basis of a Pull Request.
+The next command will probably return an error, which you should ignore. It's just a precaution.
+$ git remote remove upstream
+
Change your local repository to point to SensorsIot.
+$ git remote set-url origin https://github.com/SensorsIot/IOTstack.git
+
This is where things can get a bit tricky so please read these instructions carefully before you proceed.
+When you run the next command, it will probably give you a small fright by opening a text-editor window. Don't panic - just keep reading. Now, run this command:
+$ git pull -X theirs origin master
+
The text editor window will look something like this:
+Merge branch 'master' of https://github.com/SensorsIot/IOTstack
+
+# Please enter a commit message to explain why this merge is necessary,
+# especially if it merges an updated upstream into a topic branch.
+#
+# Lines starting with '#' will be ignored, and an empty message aborts
+# the commit.
+
The first line is a pre-prepared commit message, the remainder is boilerplate instructions which you can ignore.
+Exactly which text editor opens is a function of your EDITOR
environment variable and the core.editor
set in your global Git configuration. If you:
remember changing EDITOR
and/or core.editor
then, presumably, you will know how to interact with your chosen text editor. You don't need to make any changes to this file. All you need to do is save the file and exit;
don't remember changing either EDITOR
or core.editor
then the editor will probably be the default vi
(aka vim
). You need to type ":wq" (without the quotes) and then press return. The ":" puts vi
into command mode, the "w" says "save the file" and "q" means "quit vi
". Pressing return runs the commands.
Git will display a long list of stuff. It's very tempting to ignore it but it's a good idea to take a closer look, particularly for signs of error or any lines beginning with:
+Auto-merging
+
At the time of writing, you can expect Git to mention these two files:
+Auto-merging menu.sh
+Auto-merging .templates/zigbee2mqtt/service.yml
+
Those are known issues and the merge strategy -X theirs
on the git pull
command you have just executed deals with both, correctly, by preferring the SensorsIot version.
Similar conflicts may emerge in future and those will probably be dealt with, correctly, by the same merge strategy. Nevertheless, you should still check the output very carefully for other signs of merge conflict so that you can at least be alive to the possibility that the affected files may warrant closer inspection.
+For example, suppose you saw:
+Auto-merging .templates/someRandomService/service.yml
+
If you don't use someRandomService
then you could safely ignore this on the basis that it was "probably right". However, if you did use that service and it started to misbehave after migration, you would know that the service.yml
file was a good place to start looking for explanations.
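+One way to see exactly what the merge changed in such a file is to compare it against the pre-merge state recorded in git's reflog (the path below is just the example service again):
+
+$ cd ~/IOTstack
+$ git diff 'HEAD@{1}' -- .templates/someRandomService/service.yml
+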
At this point, only the migrated master branch is present on your local copy of the repository. The next command brings you fully in-sync with GitHub:
+$ git pull
+
If you have been following the process correctly, your IOTstack will already be down.
+Move your old IOTstack folder out of the way, like this:
+$ cd ~
+$ mv IOTstack IOTstack.old
+
+Note: you should not need to use sudo for the mv command, but it is OK to use it if necessary.
+
+Clone IOTstack from SensorsIot:
+
+$ git clone https://github.com/SensorsIot/IOTstack.git ~/IOTstack
+
Explore the result:
+$ tree -aFL 1 --noreport ~/IOTstack
+/home/pi/IOTstack
+├── .bash_aliases
+├── .git/
+├── .github/
+├── .gitignore
+├── .native/
+├── .templates/
+├── .tmp/
+├── LICENSE
+├── README.md
+├── docs/
+├── duck/
+├── install.sh*
+├── menu.sh*
+├── mkdocs.yml
+└── scripts/
+
+Note: if the tree command is not installed for some reason, use ls -A1F ~/IOTstack instead.
+
+Observe what is not there:
+docker-compose.yml
backups
directoryservices
directoryvolumes
directoryFrom this, it should be self-evident that a clean checkout from GitHub is the factory for all IOTstack installations, while the contents of backups
, services
, volumes
and docker-compose.yml
represent each user's individual choices, configuration options and data.
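+Before moving anything across, you may want a quick safety copy of those user-specific items; a minimal sketch, assuming there is enough free space and using an arbitrary archive name:
+
+$ cd ~/IOTstack.old
+$ sudo tar -czf ~/iotstack-personal-backup.tar.gz docker-compose.yml services volumes
+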
Execute the following commands:
+$ mv ~/IOTstack.old/docker-compose.yml ~/IOTstack
+$ mv ~/IOTstack.old/services ~/IOTstack
+$ sudo mv ~/IOTstack.old/volumes ~/IOTstack
+
You should not need to use sudo
for the first two commands. However, if you get a permissions conflict on either, you should proceed like this:
docker-compose.yml
+$ sudo mv ~/IOTstack.old/docker-compose.yml ~/IOTstack
+$ sudo chown pi:pi ~/IOTstack/docker-compose.yml
+
services
+$ sudo mv ~/IOTstack.old/services ~/IOTstack
+$ sudo chown -R pi:pi ~/IOTstack/services
+
There is no need to migrate the backups
directory. You are better off creating it by hand:
$ mkdir ~/IOTstack/backups
+
If you have reached this point, you have migrated to SensorsIot/IOTstack where you are on the "master" branch. This implies "new menu".
+The choice of menu is entirely up to you. Differences include:
+New menu keeps environment variables inline in docker-compose.yml. Old menu keeps environment variables in "environment files" in ~/IOTstack/services. There is no "right" or "better" about either approach. It's just something to be aware of.
+The service.yml
files in ~/IOTstack/.templates
have all been left-shifted by two spaces. That means you can no longer use copy and paste to test containers - you're stuck with the extra work of re-adding the spaces. Again, this doesn't matter but you do need to be aware of it.What you give up when you choose old menu is summarised in the following. If a container appears on the right hand side but not the left then it is only available in new menu.
+old-menu master (new menu)
+├── adminer ├── adminer
+├── blynk_server ├── blynk_server
+├── dashmachine ├── dashmachine
+├── deconz ├── deconz
+├── diyhue ├── diyhue
+├── domoticz ├── domoticz
+├── dozzle ├── dozzle
+├── espruinohub ├── espruinohub
+ > ├── example_template
+├── gitea ├── gitea
+├── grafana ├── grafana
+├── heimdall ├── heimdall
+ > ├── home_assistant
+├── homebridge ├── homebridge
+├── homer ├── homer
+├── influxdb ├── influxdb
+├── mariadb ├── mariadb
+├── mosquitto ├── mosquitto
+├── motioneye ├── motioneye
+├── nextcloud ├── nextcloud
+├── nodered ├── nodered
+├── openhab ├── openhab
+├── pihole ├── pihole
+├── plex ├── plex
+├── portainer ├── portainer
+├── portainer_agent ├── portainer_agent
+├── portainer-ce ├── portainer-ce
+├── postgres ├── postgres
+├── prometheus ├── prometheus
+├── python ├── python
+├── qbittorrent ├── qbittorrent
+├── rtl_433 ├── rtl_433
+├── tasmoadmin ├── tasmoadmin
+├── telegraf ├── telegraf
+├── timescaledb ├── timescaledb
+├── transmission ├── transmission
+├── webthings_gateway ├── webthings_gateway
+├── wireguard ├── wireguard
+└── zigbee2mqtt ├── zigbee2mqtt
+ > └── zigbee2mqtt_assistant
+
You also give up the compose-override.yml
functionality. On the other hand, Docker has its own docker-compose.override.yml
which works with both menus.
If you want to switch to the old menu:
+$ git checkout old-menu
+
Any time you want to switch back to the new menu:
+$ git checkout master
+
You can switch back and forth as much as you like and as often as you like. It's no harm, no foul. The branch you are on just governs what you see when you run:
+$ ./menu.sh
+
Although you can freely change branches, it's probably not a good idea to try to mix-and-match your menus. Pick one menu and stick to it.
+Even so, nothing will change until you run your chosen menu to completion and allow it to generate a new docker-compose.yml
.
Unless you have gotten ahead of yourself and have already run the menu (old or new) then nothing will have changed in the parts of your ~/IOTstack
folder that define your IOTstack implementation. You can safely:
$ docker-compose up -d
+
There is another gist Installing Docker for IOTstack which explains how to overcome problems with outdated Docker and Docker-Compose installations.
+Depending on the age of your gcgarner installation, you may run into problems which will be cured by working through that gist.
+There are two different update sources: the IOTstack project (github.com) and Docker image registries (e.g. hub.docker.com). Both the initial stack creation and updates use both of these. Initial creation is a bit simpler, as the intermediate steps are done automatically. For a full update they need to be performed explicitly. To illustrate the steps and artifacts of the update process:
+flowchart TD
+ GIT[github.com/sensorsiot/IOTstack.git]
+ GIT --- GITPULL([$ git pull -r])
+ GITPULL --> TEMPLATES["~/IOTstack/.templates"]
+ TEMPLATES --- MENU([$ ./menu.sh -> Build stack])
+ MENU --> COMPOSE["~/IOTstack/docker-compose.yml
+ ~/IOTstack/.templates/*/Dockerfile
+ ~/IOTstack/services/*/Dockerfile"]
+ COMPOSE --- UP(["$ docker-compose up --build -d"])
+
+ HUB[hub.docker.com images and tags]
+ HUB --- PULL([$ docker-compose pull\n$ docker-compose build --pull --no-cache])
+ COMPOSE --- PULL
+ PULL --> CACHE[local Docker image cache]
+ CACHE --- UP
+
+ UP --> CONTAINER[recreated Docker containers based on the latest cached images]
+
+ classDef command fill:#9996,stroke-width:0px
+ class GITPULL,MENU,UP,PULL command
+In order to keep the graph simple, some minor details were left imprecise:
+
+$ docker-compose pull will read docker-compose.yml, in order to know what image tags to check for updates.
+$ docker-compose build --pull --no-cache will use docker-compose.yml to find which of the "build:" sources are in use - ~/IOTstack/.templates/*/Dockerfile or ~/IOTstack/services/*/Dockerfile - and will pull the Docker images referenced in these while building.
+$ docker-compose up --build -d may not require the "--build" flag, but having it won't hurt (and may help keep some corner-case problems away; docker can be a bit finicky).
+The usual way of backing up just your ~/IOTstack contents isn't sufficient for a 100% identical restore. Some containers may have local ephemeral modifications that will be lost when they're recreated. Currently running containers may be based on now outdated images. Recreating a container using an old image is tricky. The local Docker image cache can't easily be restored to the same state with old images and old tag references. The docker pull will fetch the latest images, but it's not unheard of that the latest image may break something.
+Thus, to guarantee a successful rollback to the pre-update state, you have to shut down your RPi and save a complete disk image backup of its storage using another machine.
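+A sketch of that disk-image approach, assuming the Pi's SD card shows up as /dev/sdX when attached to another Linux machine (double-check the device name before running dd):
+
+$ sudo dd if=/dev/sdX of=~/iotstack-sd-backup.img bs=4M status=progress conv=fsync
+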
+For a hobby project, not having a perfect rollback may be a risk you're willing to take. Usually image problems will have fixes/workarounds within a day.
+You should keep your Raspberry Pi up-to-date. Despite the word "container" suggesting that containers are fully self-contained, they sometimes depend on operating system components (WireGuard is an example).
+$ sudo apt update
+$ sudo apt upgrade -y
+
+When you built the stack using the menu, it created the Docker Compose file docker-compose.yml. This file, and any build instructions it uses (Dockerfiles), refer to images on hub.docker.com or other registries by image name and tag. An undefined tag defaults to :latest. When Docker is told to pull updated images, it will download the images into the local cache, based upon what is currently stored at the registry for the used names and tags.
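+To see which image names and tags are currently in your local cache, and when they were created, you can run:
+
+$ docker image ls
+$ docker image ls | grep -i influxdb    # narrowing to one service; the name is just an example
+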
+Updating the IOTstack project templates and recreating your docker-compose.yml isn't usually necessary. Doing so isn't likely to provide much benefit, and may actually break something. A full update is only recommended when there is a new feature or change you need.
Recommended update procedure
+$ docker-compose pull
+
$ docker-compose build --pull --no-cache
+
$ docker-compose up --build -d
+
+If a service fails to start after it's updated, especially if you are updating frequently, wait for a few hours and repeat the update procedure. Sometimes bad releases are published to hub.docker.com, but they are usually fixed in under half a day. Of course you are always welcome to report the problem to our Discord server. Usually someone else has encountered the same problem and reported the fix.
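+Once you are happy that the updated stack is running properly, the superseded images are still taking up space in the local cache; an optional, hedged clean-up step is:
+
+$ docker image prune -f    # removes dangling (untagged) images - note these are also what you would roll back to
+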
+Periodically, updates are made to the project which include new or updated container templates, changes to backups, or additional features. To evaluate whether a full update is really needed, see the changelog or the merged pull requests. A full update recreates all service definitions. As a drawback, this will wipe any custom changes to docker-compose.yml, may change semantics, or may even require manual migration steps.
+Breaking update
+A change done 2022-01-18 will require manual steps
+or you may get an error like:
+ERROR: Service "influxdb" uses an undefined network "iotstack_nw"
Full update steps:
+check git status --untracked-files no
for any local changes you may have
+ made to project files. For any listed changes, either:
git commit -m
+ "local customization" -- path/to/changed_file
, orgit checkout -- path/to/changed_file
Update project files from github: git pull -r origin master
cp docker-compose.yml
+ docker-compose.yml.bak
. NOTE: this is really useful, as the next step will
+ overwrite all your previous manual changes to docker-compose.yml../menu.sh
, select Build Stack,
+ for each of your selected services: de- and re-select it, press enter to
+ build, and then exit.diff
+ docker-compose.yml docker-compose.yml.bak
$ docker-compose pull
+$ docker-compose build --pull --no-cache
+$ docker-compose up --build -d
+
docker-compose restart
docker-compose logs *service-name*
diff docker-compose.yml
+ docker-compose.yml.bak
docker-compose down
rm -r docker-compose.yml services
./menu.sh
, select Build Stack, select all your
+ services, press enter to build, and then exit.docker-compose up -d
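+Putting the non-interactive parts of this procedure together, a rough sketch of a full update (the menu step in the middle still has to be done by hand):
+
+$ cd ~/IOTstack
+$ git status --untracked-files=no          # deal with any local changes first
+$ git pull -r origin master
+$ cp docker-compose.yml docker-compose.yml.bak
+$ ./menu.sh                                # Build Stack: de-select and re-select your services
+$ diff docker-compose.yml docker-compose.yml.bak
+$ docker-compose pull
+$ docker-compose build --pull --no-cache
+$ docker-compose up --build -d
+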
Warning
+If you ran git checkout -- 'git ls-files -m' as suggested in the old wiki entry, please check your duck.sh, because that command removed your domain and token.
+Git offers built-in functionality to fetch the latest changes.
+git pull origin master will fetch the latest changes from GitHub without overwriting files that you have modified yourself. If you have made a local commit, you may need to handle a merge conflict.
+This can be verified by running git status. You can ignore it if duck.sh is reported as modified.
+Should you have any modified scripts or templates, they can be reset to the latest version with git checkout -- scripts/ .templates/
+With the latest version of the project you can now use the menu to build your stack. If there is a particular container whose template you would like to update, you can select that at the overwrite option for your container. You have the choice not to overwrite, to preserve env files, or to completely overwrite any changes (passwords).
+After your stack has been rebuilt you can run docker-compose up -d to pull in the latest changes. If you have not updated your images in a while, consider also running ./scripts/update.sh to get the latest versions of the images from Docker Hub.
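+In practice, the old-menu update usually boils down to something like this sketch:
+
+$ cd ~/IOTstack
+$ git pull origin master
+$ git status                    # duck.sh showing as modified is fine
+$ ./menu.sh                     # rebuild the stack, choosing your overwrite options
+$ docker-compose up -d
+$ ./scripts/update.sh           # optional: also pull newer images from Docker Hub
+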
Networking under both new menu (master branch) and old menu (old-menu branch) has undergone a significant change. This will not affect new users of IOTstack (who will adopt it automatically). Neither will it affect existing users who do not use the menu to maintain their stacks (see adopting networking changes by hand below).
+Users who do use the menu to maintain their stacks will also be unaffected until the next menu run, at which point it will be prudent to down your stack entirely and re-select all your containers. Downing the stack causes Docker to remove all associated networks as well as the containers.
+These changes mean that networking is identical under both old and new menus. To summarise the changes:
+Only two internal networks are defined – as follows:
+iotstack_default
at runtime.iotstack_nextcloud
at runtime.If you are using docker-compose v2.0.0 or later then the iotstack_nextcloud
network will only be instantiated if you select NextCloud as one of your services. Earlier versions of docker-compose instantiate all networks even if no service uses them (which is why you get those warnings at "up" time).
The only service definitions which now have networks:
directives are:
All other containers will join the "default" network, automatically, without needing any networks:
directives.
If you maintain your docker-compose.yml
by hand, you can adopt the networking changes by doing the following:
Remove all networks:
directives wherever they appear in your docker-compose.yml
. That includes:
networks:
directives in all service definitions; andnetworks:
specifications at the end of the file.Append the contents of the following file to your docker-compose.yml
:
~/IOTstack/.templates/docker-compose-base.yml
+
For example:
+$ cat ~/IOTstack/.templates/docker-compose-base.yml >>~/IOTstack/docker-compose.yml
+
The docker-compose-base.yml
file is named env.yml
in the old-menu branch.
If you run the NextCloud service then:
+Add these lines to the NextCloud service definition:
+networks:
+ - default
+ - nextcloud
+
Add these lines to the NextCloud_DB service definition:
+networks:
+ - nextcloud
+
Bring up your stack.
+