Remotely is a tiny alternative to enterprise configuration management tools like Ansible and Puppet. I’ve designed it for my personal use case: managing a small personal server with some self-hosted services. Remotely is focused on simplicity and the Unix philosophy instead of scalability.
To configure a server you “just write a shell script”; Remotely simply provides a few extra functions that make it really easy to upload files and run remote commands.
#!/bin/bash
source remotely.sh # Load Remotely
remotely_go # Establish an SSH connection to $REMOTELY_HOST
upload /etc/nginx # Rsync files from local directory ./files/etc/nginx to remote /etc/nginx
remotely apt-get install -y nginx # Run a remote command
Now you know why it’s named “Remotely” – not to sound like a hip startup (because it’s not), but because running a command on the remote machine is as simple as prefixing it with remotely.
Key features:
- “Playbooks” are just shell scripts.
- It’s a bash library with less than 200 lines of code. Not a framework.
- rsync for uploading files.
- m4 for templating.
- Fast, agentless, and with “ssh pipelining” (control sockets).
This README is the documentation for Remotely, but is written more like a blog post. If you prefer to learn by example, check out my personal server configuration or the server configuration I manage for a club at my university, both of which are managed entirely with Remotely.
I’ll quickly cover the basics of using Remotely to set up shadowsocks-libev, a high-performance proxy server which is often used to bypass the Great Firewall of China, hide network traffic from IT at work, etc.
Let’s start writing a Bash script that will set up shadowsocks-libev on a server of our choice.
#!/bin/bash
Remotely will not work in shells other than Bash. Next up, let’s set up an environment variable to tell Remotely how to connect to our server:
export REMOTELY_HOST=root@example.com
You should connect to the target server as root, then de-escalate privileges on a per-command basis as necessary. If your server has a special SSH configuration (such as a custom port), you can set REMOTELY_SSH_OPTIONS='-p 2222', for example. Now let’s load Remotely:
source remotely.sh
remotely_go
This implies how you “install” Remotely: just copy remotely.sh next to your script. You could also add this repository as a Git submodule and then source remotely/remotely.sh.
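For example, to vendor Remotely as a submodule (a sketch; the URL is a placeholder for wherever you actually clone Remotely from):
git submodule add https://github.com/markasoftware/remotely remotely # placeholder URL
source remotely/remotely.sh # at the top of your script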
remotely_go establishes an SSH connection, which will be reused throughout the rest of the script to maximize performance. remotely_go also processes file templates, which I’ll talk about later.
remotely apt-get install -y shadowsocks-libev
Here’s where the magic starts! Any command preceded by remotely will be run on the remote host.
remotely systemctl start shadowsocks-libev
remotely systemctl enable shadowsocks-libev
At this point, the script would work. However, you’ll almost always want to configure Shadowsocks before using it, to set a password if for no other reason. Let’s create a file at files/etc/shadowsocks-libev/config.json, relative to our script:
{ "server":"0.0.0.0", "server_port":8388, "local_port":1080, "password":"top sekrit", "local_address":"17.82.99.102", "timeout":60, "method":"chacha20-ietf-poly1305" }
Now, let’s upload this configuration file by inserting the following line before the systemctl calls:
upload /etc/shadowsocks-libev/config.json
The upload command uses rsync to upload the given file from the local files/ directory to the same spot on the remote machine. Additionally, we should change systemctl start to systemctl restart so that if we change the configuration and re-run the script, the new configuration takes effect.
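Putting it all together, the complete script now looks like this (with a placeholder hostname):
#!/bin/bash
export REMOTELY_HOST=root@example.com # placeholder; use your server
source remotely.sh
remotely_go
remotely apt-get install -y shadowsocks-libev
upload /etc/shadowsocks-libev/config.json
remotely systemctl restart shadowsocks-libev
remotely systemctl enable shadowsocks-libev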
There’s one more important problem with this example: we hardcoded password and local_address into the configuration file. If we wanted to use our script on multiple servers, we’d likely want different values for these options. There’s also a security aspect: if you want to host your script on GitHub, you’d better redact them. Let’s create a new file, which you can call anything, but which I’ll call vars.sh:
export REMOTELY_HOST=root@example.com
export SS_PASSWORD='top sekrit'
export SS_LOCAL_ADDR=17.82.99.102
We won’t commit vars.sh to version control, and we will always source vars.sh before executing our main script. Now, how do we get those variables into config.json? Simple: rename config.json to config.json.m4, and then use some macros:
{ "server":"0.0.0.0", "server_port":8388, "local_port":1080, "password":"m4_getenv_req(SS_PASSWORD)", "local_address":"m4_getenv_req(SS_LOCAL_ADDR)", "timeout":60, "method":"chacha20-ietf-poly1305" }
The m4_getenv_req macro is defined by Remotely. It looks for an environment variable with the given name, and if it’s not found, signals an error. When remotely_go runs, it looks at all .m4 files in the files/ tree, processes the m4 macros in them, and puts the output into a temporary folder, with the .m4 part of the name removed. That’s all! m4_getenv and m4_getenv_req are the macros you’ll probably use most often, but you can use any m4 macros you want (m4 is Turing-complete). The m4 manual is an excellent place to start learning about m4.
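For instance, here’s a sketch that falls back to a default cipher when SS_METHOD is unset. It assumes m4_getenv expands to the empty string for an unset variable and that the standard builtins (here, ifelse) carry the same m4_ prefix as the getenv macros:
"method":"m4_ifelse(m4_getenv(SS_METHOD),,chacha20-ietf-poly1305,m4_getenv(SS_METHOD))"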
Any options given to upload after the name of the file are passed to rsync. For instance, upload /home/good-boi -og --chown good-boi:good-boi will upload the folder with ownership set to good-boi instead of root.
Remotely is convenient when the commands you’re running are inherently idempotent. For example, running apt-get install on a package that’s already installed is no big deal; it will exit as soon as it discovers the package is installed and does not signal any error. Certain more complex tasks are not so convenient to automate with shell scripting alone. For instance, on my personal server, I run Navidrome, a music server. Navidrome is not in the Debian repositories, so I need to download a .tar.gz, extract its contents, and then move the executable to /usr/local/bin. It’s easy to make this work in Bash, but it probably won’t be super fast when executed the second time; if you just use curl and tar, then your script will re-download the release and re-extract it, even if it’s already installed! You could check explicitly whether Navidrome was downloaded or extracted previously, but then your code gets messy and hard to test. Instead, you can create a Makefile, say in files/build/navidrome/Makefile:
navidrome_dir := navidrome-$(NAVIDROME_VERSION)
navidrome_tar := navidrome-$(NAVIDROME_VERSION).tar.gz
navidrome_url := https://github.com/deluan/navidrome/releases/download/v$(NAVIDROME_VERSION)/navidrome_$(NAVIDROME_VERSION)_Linux_x86_64.tar.gz

# Copy the Navidrome executable to the PATH
/usr/local/bin/navidrome: $(navidrome_dir)/navidrome
	install $< $@

# Extract the Navidrome tarball
$(navidrome_dir)/navidrome: $(navidrome_tar)
	mkdir -p $(navidrome_dir)
	tar xaf $(navidrome_tar) -C $(navidrome_dir)
	touch $@ # modification time

# Download the Navidrome tarball
$(navidrome_tar):
	curl -Lo $@ '$(navidrome_url)'
Then, in my script, I simply upload this Makefile and run remotely make -C /build/navidrome NAVIDROME_VERSION=0.14.0, which leaves the artifacts in /build/navidrome to speed up the next run.
remotely_go has no effect if run multiple times. Thus, one Remotely script can source another, and it will re-use the same ssh connection and file tree. If you don’t desire this, call the subscript in a new process, using bash or by executing the script directly.
The way I structure my own scripts is that I have a whole bunch of self-contained files which can be executed directly, named go-shadowsocks.sh to install shadowsocks, go-networking.sh to set up Wireguard and iptables, etc. Each of these sources remotely.sh and calls remotely_go. Then, I have a go.sh which sources each of the sub-files. This setup allows me to quickly update the configuration for small parts of my server at a time, while also allowing me to easily re-run the whole thing.
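A go.sh in this style can be as simple as the following sketch (file names are from my setup above; vars.sh is the optional variables file described earlier):
#!/bin/bash
source vars.sh # secrets and per-server variables
source go-networking.sh # Wireguard and iptables
source go-shadowsocks.sh # shadowsocks proxy
# and so on, one source line per self-contained sub-script
Because remotely_go is idempotent, all the sub-scripts share one SSH connection when sourced this way.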
To re-use something across many scripts, put it into a Bash function in a file that you can source from elsewhere.
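For example, a hypothetical helpers.sh might define a function for deploying systemd units:
# helpers.sh (hypothetical): deploy a unit file from files/ and (re)start it
deploy_unit() {
  upload "/etc/systemd/system/$1.service"
  remotely systemctl daemon-reload
  remotely systemctl restart "$1"
}
Any script that has called remotely_go can then source helpers.sh and run, say, deploy_unit navidrome.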
By default, ssh handles word splitting in a way that you probably don’t want: all its command-line arguments are joined with spaces, then sent to the remote shell, where they’re re-parsed. A command like ssh root@example.com cat "'my file" "name'" will be sent to the server as the string cat 'my file name', and thus will print the content of the file named “my file name”. On the other hand, executing cat "'my file" "name'" locally would concatenate the file named “'my file” with the file named “name'”. This behavior is justified because ssh is meant to be shell-agnostic, but most modern servers use Bash or similar, which makes this behavior cumbersome today. To remedy the situation, the remotely function adds an extra level of quotes around each argument. Thus, remotely cat "'my file" "name'" runs an ssh command formatted like ssh root@example.com "\"cat\" \"'my file\" \"name'\"", and the string that makes it to Bash on the other end is "cat" "'my file" "name'", exactly as you intended.
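A quick demonstration of the difference:
remotely touch '/tmp/a file' # creates exactly one file, named "a file"
ssh root@example.com touch '/tmp/a file' # creates /tmp/a, plus a file named "file" in the home directory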
If you need to access remote shell features, like output redirection, you can disable this escaping by using remotely_no_escape.
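For example, to redirect output on the remote machine rather than locally, quote the redirection operator so your local shell passes it through literally:
remotely_no_escape echo hello '>' /tmp/greeting # the remote shell sees: echo hello > /tmp/greeting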
I do actively use Remotely to configure my main private VPS, which I use to host markasoftware.com and a number of private self-hosted services. You can find the full configuration at github.com/markasoftware/swirl. The services I manage with Remotely include:
- Syncthing (file sync)
- Quassel (IRC bouncer)
- Navidrome (music server)
- Transmission (bittorrent client)
- Shadowsocks (proxy)
- Wireguard (VPN; restricts access to Syncthing, Quassel, Navidrome, Transmission, etc)
- Nginx (web server)
- Certbot (for Letsencrypt SSL certificates)
- Iptables (firewall)
- Netdata (server monitoring)
So you can get a pretty good idea of how to use Remotely effectively from my repository.
I’m pretty happy with Remotely overall, but pain points do exist; some pieces of software don’t like to be configured from the command line, or the commands you must use are not really idempotent (e.g., they throw an error if run twice, or worse, perform some unintended action). For example, to create the PostgreSQL user and database for Quassel, I had to use:
remotely su - postgres -c "psql -c \"CREATE USER \\\"quassel-custom\\\" WITH PASSWORD '$QUASSEL_POSTGRES_PASSWORD'\"" || true
remotely su - postgres -c 'createdb --owner quassel-custom quassel-custom' || true
Ew! I needed to call psql, use multiple layers of escaped quotes, and use || true to ignore errors in case the user or database already exist! Further, this code actually includes a subtle bug: if $QUASSEL_POSTGRES_PASSWORD includes an apostrophe, bad things will happen. A dedicated Postgres library for Remotely could abstract this away.
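A slightly more robust pattern (a sketch, not what my configuration actually does; the apostrophe caveat still applies) is to test for existence instead of swallowing all errors:
# only create the user if it doesn't already exist; psql's output comes back over ssh
remotely su - postgres -c "psql -tAc \"SELECT 1 FROM pg_roles WHERE rolname = 'quassel-custom'\"" | grep -q 1 || \
  remotely su - postgres -c "psql -c \"CREATE USER \\\"quassel-custom\\\" WITH PASSWORD '$QUASSEL_POSTGRES_PASSWORD'\""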
Letsencrypt poses a more substantial problem. While Certbot’s --nginx plugin is super useful when setting up a server manually, scripting the interaction between certbot and nginx has always been a nightmare for me. As far as I can tell, there’s a necessary tradeoff between simplicity in the configuration script and achieving 100% uptime when it comes to setting up Certbot and Nginx. I took the simpler option.
In /etc/letsencrypt/renewal-hooks/pre/nginx:
systemctl stop nginx
And /etc/letsencrypt/renewal-hooks/post/nginx:
systemctl start nginx
With these hooks in place, I can simply run certbot in standalone mode. Provisioning the certificates is as simple as:
remotely certbot certonly --non-interactive --agree-tos --standalone \
--cert-name my-cert -m "$LETSENCRYPT_EMAIL" -d "$LETSENCRYPT_DOMAINS"
The nginx configuration can be blissfully unaware of how Certbot manages renewals. Simply hardcode the paths to the SSL certificates.
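With --cert-name my-cert as above, Certbot places the live certificates at a predictable path, so the relevant nginx directives are just (a sketch; example.com stands in for your domain):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/my-cert/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-cert/privkey.pem;
}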
Remotely is just a library that makes it easy to do tasks involving a remote server from a shell script. Thus, there’s no reason to use it only for configuration. I also use it to write backup scripts, and have included a handful of features to make backups fun!
- Automatically creates new backup directories named after the current date/time
- Uses rsync’s excellent --link-dest option to perform sorta-incremental backups. Files unchanged from one backup to the next will be hardlinked. When a file is partially changed, parts of it that haven’t changed since the last backup will just be copied from the last backup. It’s incredible how close we can get to a full incremental backup solution using a single option on a binary that’s included in many Linux distros.
A super simple backup script, which I use to periodically back up all the files in my public-html folder, looks like this:
#!/bin/bash
source remotely.sh
remotely_backup web-server
backup /home/public-html/ -l
Instead of remotely_go, I use remotely_backup, which creates a new backup directory named after the current date/time, inside $BACKUP_DIR/web-server. The backup function is just like upload, except instead of uploading from ./files to the remote machine, it downloads from the remote machine into the current backup directory. The -l is just an rsync option to preserve symlinks.
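Under the hood, the backup call above boils down to an rsync invocation roughly like this sketch (the real flags and logic live in remotely.sh; $PREVIOUS_BACKUP_DIR is a hypothetical stand-in for the previous run’s directory):
# hypothetical expansion of `backup /home/public-html/ -l`
rsync -rl --link-dest="$PREVIOUS_BACKUP_DIR/home/public-html/" \
  "$REMOTELY_HOST:/home/public-html/" "$NEW_BACKUP_DIR/home/public-html/"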
A more involved example is a script I use to back up a MediaWiki installation. MediaWiki backups involve three parts: an SQL dump of the database, an XML dump of god knows what, and then a backup of remaining files (e.g., images).
#!/bin/bash
source remotely.sh
remotely_backup wiki
# While sending passwords through environment variables is more or less secure in 2021, MySQL
# has deprecated it anyway. If this line breaks in the future, you know why!
echo "Doing mysqldump..."
remotely_no_escape "MYSQL_PWD=$WIKI_DB_PASSWORD" mysqldump "$WIKI_DB_NAME" -u "$WIKI_DB_USER" '|' gzip > "$NEW_BACKUP_DIR/my.sql.gz"
echo "Doing dumpBackup.php..."
remotely_no_escape php /var/www/html/wiki/maintenance/dumpBackup.php --full --quiet '|' gzip > "$NEW_BACKUP_DIR/dump.xml.gz"
echo "Backing up remaining files..."
backup /var/www/html/wiki/
This script is admittedly getting a bit ugly, but it packs a lot of punch for 5 lines of code! The first remotely_no_escape command generates the SQL dump, compresses it on the remote host, then saves the compressed backup locally. We have to use remotely_no_escape instead of plain remotely because remotely does fancy SSH argument escaping (described above) which would prevent us from using the pipe or setting the environment variable MYSQL_PWD.
Next, notice that the pipe is quoted, but the output redirection to my.sql.gz is not. That’s because the pipe is being passed to the remote shell, but the output redirection is being executed locally. $NEW_BACKUP_DIR is set by Remotely, and is the location where the current backup is being saved.