With UrBackup you can schedule backups of specific directories rather than whole systems, as you've seen in the two previous guides. But first you need to deploy an UrBackup server and install the corresponding UrBackup clients in the target systems, which, in this case, will be your K3s node VMs.
The UrBackup server could be deployed in your K3s Kubernetes cluster (there's a Docker image available of the UrBackup server), but, for safety, the backup server should usually live on a completely different system. Of course, in this guide series' scenario there's only one physical system, so I'll show you how to set up the UrBackup server on a small Debian 11 VM.
You already have a suitable VM template from which you can clone a new Debian 11 VM. This VM template is the one named `debiantpl`, which you prepared in the G020, G021, G022 and G023 guides. The resulting VM will require some adjustments, in the same way you had to reconfigure the VMs that became K3s nodes. Since I've already explained how to do all those changes in earlier guides, in the following subsections I'll just indicate what to change and why, while also pointing you to the proper sections of previous guides.
Clone a new VM from the `debiantpl` VM template, as explained in the G024 guide. The cloning parameters for generating this guide's VM were set as follows:
- `VM ID`: `6161`. As you did with the K3s node VMs, give your new VM an ID correlated to the IPs it's going to have. So, for this guide, this new VM will have the ID `6161`, which corresponds to the IPs you'll see used throughout this guide.
- `Name`: `bkpserver`. The VM name should be something meaningful, like `bkpserver`. Be aware that later you'll also set this name as the VM's `hostname` within Debian.
- `Mode`: `Linked Clone`, so this VM is linked to the VM template instead of duplicating it.
After the new VM is cloned, DON'T start it. You'll need to take a look at its assigned hardware capabilities first. If you have the same kind of limited hardware as the one used in this guide series, and also a K3s Kubernetes cluster already running on the system, you must be careful with how much RAM and CPU you assign to any new VM. So, in this guide, the new VM will have the same hardware setup as the VM template, except in the memory department, where it'll have an absolute maximum of 1 GiB.
In the capture below, you can see how the `Hardware` tab looks for this guide's new VM.
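If you prefer working from the Proxmox VE host's command line instead of the web console, you can review and adjust the VM's hardware with the `qm` tool. This is only a sketch with the values used in this guide; the web console procedure described above remains the reference.

```
# On the Proxmox VE host, as root
qm config 6161              # review the cloned VM's current hardware
qm set 6161 --memory 1024   # cap the VM's memory at 1 GiB (value in MiB)
```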
On the other hand, don't forget to modify the `Notes` section in the `Summary` tab of this VM. For instance, you could write something like the following.
```
# Backup Server VM
Template created: 2022-07-05
OS: **Debian 11 Bullseye**
Root login disabled: yes
Sysctl configuration: yes
Transparent hugepages disabled: yes
SWAP disabled: no
SSH access: yes
TFA enabled: yes
QEMU guest agent working: yes
Fail2Ban working: yes
NUT (UPS) client working: yes
Utilities apt packages installed: yes
Backup server software: UrBackup
```
A final detail is to set the `Start at boot` parameter, found in the `Options` tab, to `Yes`, so the VM is booted up by Proxmox VE after the host system itself has started.
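Again, just as an alternative sketch, the same option can be set from the Proxmox VE host's shell with `qm`.

```
# On the Proxmox VE host, as root
qm set 6161 --onboot 1   # equivalent to setting "Start at boot" to Yes
```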
To facilitate your remote access to the VM, set up a static IP for its main network device (the `net0` one) in your router or gateway, as you've done for the other VMs. Remember that you can see the MAC of the network device in the `Hardware` tab of your new VM.
In this guide, the VM will have the IP `192.168.1.61`; notice how the `61` portion corresponds with the ID of this VM.
Boot up your `bkpserver` VM, then connect to it through a remote SSH shell and log in as `mgrsys`.
BEWARE!
At this point, to access your `bkpserver` VM you'll have to use the same credentials as in the VM template.
The very first thing you'll want to do is to change the hostname of this VM to match its name on Proxmox VE. So, to set the string `bkpserver` as the new hostname, proceed as already explained in this section of the G024 guide.
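As a quick reminder of what that procedure boils down to, the sketch below changes the hostname; it assumes the template's name `debiantpl` is the string currently present in `/etc/hosts`, so double-check that file before and after running it.

```
$ sudo hostnamectl set-hostname bkpserver
$ sudo sed -i 's/debiantpl/bkpserver/g' /etc/hosts
$ hostnamectl    # verify that the static hostname is now bkpserver
```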
The LVM filesystem structure of this VM retains the same names as the VM template, and this could be confusing. In particular, it's the VG (volume group) name that should be changed to something related to this VM, such as `bkpserver-vg`. But this is not a trivial change, mainly because this VM has a SWAP partition active. So, carefully follow the instructions next.
- First, temporarily disable the active SWAP of this VM with the `swapoff` command below.

  ```
  $ sudo swapoff -a
  ```

  Verify that the command has been successful by checking the `/proc/swaps` file.

  ```
  $ cat /proc/swaps
  Filename                                Type            Size    Used    Priority
  ```

  If there's nothing listed, like above, you're good to go.
- Now, follow closely the instructions specified in this section of the G024 guide, although bearing in mind that the new VG name is `bkpserver-vg` in this guide. A minimal sketch of the whole rename sequence is shown right after this list.
Finally, you'll need to edit the file
/etc/initramfs-tools/conf.d/resume
and again find and change the stringdebiantpl
for the corresponding one,bkpserver
in this guide. After the change, the line should look as below.RESUME=/dev/mapper/bkpserver--vg-swap_1
BEWARE!
Don't make a `.orig` backup of this file, or at least don't leave it in the very same directory where the original is. Debian reads all files within that directory!
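For reference, here's a minimal sketch of the VG rename sequence mentioned in the second step, assuming the template's VG is named `debiantpl-vg`; if anything differs in your system, stick to the G024 instructions.

```
# Rename the volume group inherited from the template
$ sudo vgrename debiantpl-vg bkpserver-vg

# Point /etc/fstab at the new VG name (device-mapper paths use double dashes)
$ sudo sed -i 's/debiantpl--vg/bkpserver--vg/g' /etc/fstab

# After also fixing the resume file (third step), regenerate initramfs and GRUB
$ sudo update-initramfs -u
$ sudo update-grub

# The swap you disabled earlier will come back automatically on the next reboot,
# once /etc/fstab and the resume file point at the new VG name
```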
The second network device, or virtual Ethernet card, in the new VM is disabled by default. You have to enable it and assign it a proper IP, so later it can communicate directly with the secondary virtual NICs of your K3s node VMs. Do this by following the instructions in this particular section of the G024 guide, but configure an IP within the same range (`10.0.0.x`) that is not already assigned. In this guide, to keep the relationship with the `6161` VM ID, the IP for this secondary NIC will be `10.0.0.61`.
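As a reference, below is a minimal sketch of the resulting ifupdown configuration; the interface name `ens19` is an assumption (check yours with `ip a`), and the G024 section remains the authoritative procedure.

```
# /etc/network/interfaces.d/ens19
# The interface name ens19 is an assumption; check yours with: ip a
auto ens19
iface ens19 inet static
    address 10.0.0.61/24
```

Then bring the interface up with `sudo ifup ens19`, or simply reboot the VM.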
You should change the password of the `mgrsys` user, since it's still the same one it had in the VM template. To do so, execute the `passwd` command as `mgrsys`; it'll ask you for the old and the new password of the account.

```
$ passwd
Changing password for mgrsys.
Current password:
New password:
Retype new password:
passwd: password updated successfully
```
As you've done with `mgrsys`' password, now you must change its TOTP to make it unique for this `bkpserver` VM. Just execute the `google-authenticator` command, and it will overwrite the current content of the `.google_authenticator` file in the `$HOME` directory of your current user. In this guide, for this VM, the command will be as below.

```
$ google-authenticator -t -d -f -r 3 -R 30 -w 3 -Q UTF8 -i bkpserver.deimos.cloud -l mgrsys@bkpserver
```
BEWARE!
Export and save all the codes, and even the `.google_authenticator` file itself, in a password manager or by any other secure method.
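For instance, you could copy the file to your workstation over SSH before storing it somewhere safe; the destination filename below is just an example.

```
# Run this from your own workstation, not from the VM
$ scp mgrsys@192.168.1.61:.google_authenticator ./bkpserver.google_authenticator
```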
To connect through SSH to the VM, you're still using the key pair originally meant just for the VM template. The proper thing to do is to replace it with a new pair meant only for this `bkpserver` VM. You already did this change for the K3s node VMs, a procedure detailed in a concrete section within the G025 guide. Follow those instructions, but set the comment (the `-C` option of the `ssh-keygen` command) of the new key pair to a meaningful string like `bkpserver.deimos.cloud@mgrsys`.
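The sketch below assumes an RSA key pair regenerated in the VM's own `$HOME/.ssh`, matching the file paths referenced at the end of this guide; follow the G025 section for the exact procedure and for downloading the private key to your client system.

```
# In the bkpserver VM, as mgrsys
$ ssh-keygen -t rsa -b 4096 -C "bkpserver.deimos.cloud@mgrsys" -f ~/.ssh/id_rsa

# Authorize the new public key; remember to remove the old template key
# from the authorized_keys file afterwards
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```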
The main purpose of this new VM is to store backups, therefore you'll need to attach a new virtual storage drive (a hard disk in Proxmox VE terms) to the `bkpserver` VM.
Get into your Proxmox VE web console and do the following.
- Go to the `Hardware` tab of your `bkpserver` VM, and there add a new `hard disk` with a configuration like in the snapshot below.

  The parameters above are configured for a big external storage drive.

  - `Storage`: the partition enabled as `hddusb_bkpdata`, specifically reserved for storing data backups, which is found in the external USB storage drive attached to the Proxmox VE physical host.
  - `Disk size`: I've put 250 GiB here as an example, but you can input anything you want. Just be careful not to go over the real capacity of the storage you've selected.
  - `Discard`: enabled, since this option is supported by the underlying filesystem.
  - `SSD emulation`: disabled, because the underlying physical storage device is a regular HDD drive.
  - `IO thread`: enabled, to improve the I/O performance of this storage.
- After adding the new hard disk, it should appear listed almost immediately in the `Hardware` tab among the VM's other devices.
- Next, open a shell as `mgrsys` in the `bkpserver` VM. Then, check with `fdisk` that the new storage is available.

  ```
  $ sudo fdisk -l
  Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
  Disk model: QEMU HARDDISK
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: dos
  Disk identifier: 0x76bd2712

  Device     Boot   Start      End  Sectors  Size Id Type
  /dev/sda1  *       2048   999423   997376  487M 83 Linux
  /dev/sda2        1001470 20969471 19968002  9.5G  5 Extended
  /dev/sda5        1001472 20969471 19968000  9.5G 8e Linux LVM

  Disk /dev/mapper/bkpserver--vg-root: 8.54 GiB, 9172942848 bytes, 17915904 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes

  Disk /dev/mapper/bkpserver--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes

  Disk /dev/sdb: 250 GiB, 268435456000 bytes, 524288000 sectors
  Disk model: QEMU HARDDISK
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  ```

  The newly attached storage appears at the bottom of the `fdisk` output as the `/dev/sdb` disk.
Btrfs, whose name comes from "B-tree file system" (although it's often read as "better FS"), is a filesystem with specific advanced capabilities that UrBackup can take advantage of to optimize its backups. Debian 11 supports btrfs, but you need to install the package with the tools for handling this filesystem.

```
$ sudo apt install -y btrfs-progs
```
With the tools available, now you can turn your `/dev/sdb` drive into a btrfs filesystem.
- Start by adding `/dev/sdb` to a new labelled multidevice btrfs configuration with the corresponding `mkfs.btrfs` command.

  ```
  $ sudo mkfs.btrfs -d single -L bkpdata-hdd /dev/sdb
  btrfs-progs v5.10.1
  See http://btrfs.wiki.kernel.org for more information.

  Label:              bkpdata-hdd
  UUID:               b4520c18-3d89-451e-8b30-f804c4d274f7
  Node size:          16384
  Sector size:        4096
  Filesystem size:    250.00GiB
  Block group profiles:
    Data:             single            8.00MiB
    Metadata:         DUP               1.00GiB
    System:           DUP               8.00MiB
  SSD detected:       no
  Incompat features:  extref, skinny-metadata
  Runtime features:
  Checksum:           crc32c
  Number of devices:  1
  Devices:
     ID        SIZE  PATH
      1   250.00GiB  /dev/sdb
  ```

  Let's analyze the command's output above.

  - The command has created a btrfs volume labeled `bkpdata-hdd`.
  - This volume is multidevice, although currently it only has one storage drive.
  - The volume works in `single` mode, meaning that:
    - The metadata will be mirrored in all the devices in the volume.
    - The data is allocated in "linear" fashion all along the devices in the volume.

  BEWARE!
  Never build a multidevice volume with drives of different capabilities. Don't put SSD devices in the same volume as HDD ones, for instance. Always make sure they all are of the same kind and have the same I/O capabilities.

  With the btrfs volume configured this way, when you're running out of space in it, you can add another storage device to it. Check out how in the official btrfs wiki.
- You need a mount point for the btrfs volume, so create one under the usual `/mnt` folder.

  ```
  $ sudo mkdir /mnt/bkpdata-hdd
  ```
- Mount the btrfs volume on the mount point created before. To do the mounting, you can invoke in the `mount` command ANY of the devices used in the volume. In this case, there's only `/dev/sdb`.

  ```
  $ sudo mount /dev/sdb /mnt/bkpdata-hdd
  ```
- To make that mount permanent, you'll have to edit the `/etc/fstab` file and add the corresponding line there. First, make a backup of the `fstab` file.

  ```
  $ sudo cp /etc/fstab /etc/fstab.bkp
  ```

  Then append the lines below to the `fstab` file.

  ```
  # Backup storage
  /dev/sdb  /mnt/bkpdata-hdd  btrfs  defaults,nofail  0  0
  ```
- Reboot the VM.

  ```
  $ sudo reboot
  ```
- Log back into a shell in the VM, then check with `df` that the volume is mounted.

  ```
  $ df -h
  Filesystem                      Size  Used Avail Use% Mounted on
  udev                            471M     0  471M   0% /dev
  tmpfs                            98M  520K   98M   1% /run
  /dev/mapper/bkpserver--vg-root  8.4G  1.2G  6.8G  15% /
  tmpfs                           489M     0  489M   0% /dev/shm
  tmpfs                           5.0M     0  5.0M   0% /run/lock
  /dev/sdb                        250G  3.8M  248G   1% /mnt/bkpdata-hdd
  /dev/sda1                       470M   48M  398M  11% /boot
  tmpfs                            98M     0   98M   0% /run/user/1000
  ```

  See above how the `/dev/sdb` device appears in the list of filesystems. A couple of optional btrfs checks are sketched right after this list.
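These optional checks come with the `btrfs-progs` package you installed earlier; the last command is only useful if you'd rather reference the volume by UUID in `/etc/fstab` instead of by its `/dev/sdb` device name.

```
# Show the btrfs volume and how its space is being used
$ sudo btrfs filesystem show /mnt/bkpdata-hdd
$ sudo btrfs filesystem usage /mnt/bkpdata-hdd

# Print the volume's UUID, usable as "UUID=..." in the /etc/fstab line
$ sudo blkid /dev/sdb
```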
This VM is based on a VM template that could be out of date at this point, so use `apt` to update the Debian system in this VM.

```
$ sudo apt update
$ sudo apt upgrade
```

If the upgrade affects many packages, or critical ones, you should reboot the VM after applying it.

```
$ sudo reboot
```
With the VM ready, you can proceed with the deployment of the UrBackup server on it.
To install the UrBackup server in your Debian 11 VM, you just need to install one `.deb` package. The problem is acquiring the most recent one, since the official UrBackup page doesn't link to the packages at all. At the time of writing this, the latest non-beta version of the UrBackup server is `2.5.25`, and you can find it in this web folder. So, knowing this particularity, let's get on with the installation procedure.
- In a shell as `mgrsys`, use `wget` to download into the `bkpserver` VM the package for UrBackup version `2.5.25`.

  ```
  $ wget https://beta.urbackup.org/Server/2.5.25/urbackup-server_2.5.25_amd64.deb
  ```
- Install the downloaded `urbackup-server_2.5.25_amd64.deb` package with `dpkg`.

  ```
  $ sudo dpkg -i urbackup-server_2.5.25_amd64.deb
  ```

  The output of this `dpkg` command may warn you of unsatisfied dependencies that have left the package installation unfinished. This is something you'll correct in the next step with `apt`.
- Run the following `apt` command to properly finalize the installation of the UrBackup package.

  ```
  $ sudo apt -f install
  ```

  A few seconds later, this `apt` command will launch a text-based window asking you where you want UrBackup to store its backups.

  Change the suggested path to the one where the btrfs volume is mounted, in this guide `/mnt/bkpdata-hdd`.

  With the path properly adjusted, press enter and the installation will be completed.
- The UrBackup server is now installed in your VM, so you can browse to its web console, which is reachable through an HTTP connection on port `55414`. So, for this guide's VM, the whole URL would be `http://192.168.1.61:55414/`. The first page you'll see is the `Status` tab of your new UrBackup server.

  Notice that you've gotten here without going through any kind of login. This security hole is one of several things you'll adjust in the following sections.

  On the other hand, go to the shell on the VM and list the contents of the `/mnt/bkpdata-hdd` folder.

  ```
  $ ls -al /mnt/bkpdata-hdd/
  total 20
  drwxr-xr-x 1 urbackup urbackup   50 Jul  6 19:19 .
  drwxr-xr-x 3 root     root     4096 Jul  6 17:10 ..
  drwxr-x--- 1 urbackup urbackup    0 Jul  6 19:11 clients
  drwxr-x--- 1 urbackup urbackup    0 Jul  6 19:19 urbackup_tmp_files
  ```

  You'll find that UrBackup has already made use of this storage, and with its own `urbackup` user no less, which confirms that the backup path configuration in UrBackup is correct.
- There's a test you should do to confirm that your UrBackup server will be able to use the btrfs features of the chosen backup storage. Execute the following `urbackup_snapshot_helper` command in your `bkpserver` VM.

  ```
  $ urbackup_snapshot_helper test
  Testing for btrfs...
  Create subvolume '/mnt/bkpdata-hdd/testA54hj5luZtlorr494/A'
  Create a snapshot of '/mnt/bkpdata-hdd/testA54hj5luZtlorr494/A' in '/mnt/bkpdata-hdd/testA54hj5luZtlorr494/B'
  Delete subvolume (commit): '/mnt/bkpdata-hdd/testA54hj5luZtlorr494/A'
  Delete subvolume (commit): '/mnt/bkpdata-hdd/testA54hj5luZtlorr494/B'
  BTRFS TEST OK
  ```

  Notice two things:

  - The `urbackup_snapshot_helper` command doesn't require `sudo` to be executed.
  - At the end of its output, it informs you of the test result which, in this case, is the expected `OK`.
- Unless you want to keep it in the server, you can now remove the `urbackup-server_2.5.25_amd64.deb` package file from the system.

  ```
  $ rm urbackup-server_2.5.25_amd64.deb
  ```
- Know that the UrBackup server is installed as a service that you can manage with `systemctl` commands.

  ```
  $ sudo systemctl status urbackupsrv.service
  ● urbackupsrv.service - LSB: Server for doing backups
       Loaded: loaded (/etc/init.d/urbackupsrv; generated)
       Active: active (running) since Tue 2022-07-12 13:19:25 CEST; 1h 9min ago
         Docs: man:systemd-sysv-generator(8)
      Process: 667 ExecStart=/etc/init.d/urbackupsrv start (code=exited, status=0/SUCCESS)
        Tasks: 24 (limit: 1129)
       Memory: 122.0M
          CPU: 7.290s
       CGroup: /system.slice/urbackupsrv.service
               └─674 /usr/bin/urbackupsrv run --config /etc/default/urbackupsrv --daemon --pidfile /var/run/urbackupsrv.pid

  Jul 12 13:19:25 bkpserver systemd[1]: Starting LSB: Server for doing backups...
  Jul 12 13:19:25 bkpserver systemd[1]: Started LSB: Server for doing backups.
  ```
Your UrBackup server at this point allows anyone to access and manage the backups, something you don't really want at all. Let's add an administrator user then.
- In the UrBackup server's web interface, browse to the `Settings` tab. In the resulting page, click on `Users`.

  Then click on the `Create user` button.
- You'll get to the following simple form.

  There are a couple of things to realize in this form.

  - It only allows you to create an administrator user, since the `Rights for` unfoldable list is greyed out.
  - You can change the default `admin` username to something else.
- Fill in the form, then press `Create`.
- The creation is immediate and sends you back to the updated `Users` view.

  The web interface warns you about the newly added user, and you can see your new administrator listed in the user list.
- Refresh the page in your browser and you'll get to see the web interface's login page.

  Notice that it only asks you for the password.
BEWARE!
The web interface doesn't have a logout or disconnect button. To force the logout, you can manually refresh the page in your browser to get back to the login page.
Remember to enable a domain name for your UrBackup server in your network, either through hosts files or by configuring it in your local network router. In this guide, I'll assign the domain `bkpserver.deimos.cloud` to the `192.168.1.61` IP, to make reaching the web interface easier to remember.
UrBackup server doesn't come with SSL/TLS support so, to secure the connections to the web interface, you have to put a reverse proxy in front of it. I'll show you how to do this with an nginx server, slightly adapting what is explained in this guide.
- Open a shell as `mgrsys` in your `bkpserver` VM, then install nginx with `apt`.

  ```
  $ sudo apt install -y nginx
  ```
- With the `openssl` command, create a self-signed SSL certificate for encrypting the HTTPS connections.

  ```
  $ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
  $ sudo openssl req -x509 -nodes -days 2562 -newkey rsa:2048 -subj "/O=Urb Security/OU=Urb/CN=Urb.local/CN=Urb" -keyout /etc/ssl/certs/urb-cert.key -out /etc/ssl/certs/urb-cert.crt
  ```

  The first `openssl` command will take a while to finish, while the second one should be much faster.
- Create an empty file at the path `/etc/nginx/sites-available/urbackup.conf`.

  ```
  $ sudo touch /etc/nginx/sites-available/urbackup.conf
  ```
- Edit the new `urbackup.conf` file so it has the following content.

  ```
  # Make UrBackup webinterface accessible via SSL
  server {
      # Define your listen https port
      listen 443 ssl;

      # (optionally)
      # server_name urbackup.yourdomain;

      # SSL configuration
      ssl on;
      include snippets/ssl-params.conf;
      ssl_certificate /etc/ssl/certs/urb-cert.crt;
      ssl_certificate_key /etc/ssl/certs/urb-cert.key;
      # SSL configuration

      # Set the root directory and index files
      root /usr/share/urbackup/www;
      index index.htm;

      # Proxy the "x" file to the running UrBackup FastCGI server
      location /x {
          include fastcgi_params;
          fastcgi_pass 127.0.0.1:55413;
      }

      # If clients come here using HTTP, redirect them to HTTPS
      error_page 497 https://$host:$server_port$request_uri;

      # Disable logs
      access_log off;
      error_log off;
  }
  ```

  With this configuration, nginx will redirect calls on the default HTTPS port `443` to the UrBackup server, which is listening on its localhost FastCGI port `55413`.
- You need to enable the UrBackup configuration in nginx.

  ```
  $ sudo ln -s /etc/nginx/sites-available/urbackup.conf /etc/nginx/sites-enabled/urbackup.conf
  ```
- Disable the `default` configuration that nginx has enabled in its standard installation.

  ```
  $ sudo rm /etc/nginx/sites-enabled/default
  ```
- Create the file `/etc/nginx/snippets/ssl-params.conf`.

  ```
  $ sudo touch /etc/nginx/snippets/ssl-params.conf
  ```
- Edit the new `ssl-params.conf` file so it has the following configuration lines.

  ```
  # from https://cipherli.st/
  # and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html

  ssl_protocols TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
  ssl_ecdh_curve secp384r1;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;
  ssl_stapling on;
  ssl_stapling_verify on;
  resolver 8.8.8.8 8.8.4.4 valid=300s;
  resolver_timeout 5s;

  # Disable preloading HSTS for now. You can use the commented out header line that includes
  # the "preload" directive if you understand the implications.
  #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
  add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;

  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # Disallow nginx version in HTTP headers
  server_tokens off;
  ```
This configuration file not only tells nginx how to configure the SSL connections, but also other details like what DNS resolvers to use (the Google ones in this case).
- At last, restart the nginx service to make it refresh its configuration.

  ```
  $ sudo systemctl restart nginx.service
  ```
- Try to browse through HTTPS to your UrBackup server. For the configuration proposed in this guide, the correct HTTPS URL is `https://bkpserver.deimos.cloud` or `https://192.168.1.61`. If something doesn't work as expected, the quick checks sketched right after this list can help you pinpoint the problem.
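Here are those quick checks, just as a sketch; the `-k` flag makes `curl` accept the self-signed certificate.

```
# Validate the nginx configuration (ideally run it before restarting the service)
$ sudo nginx -t

# Inspect the self-signed certificate generated earlier
$ sudo openssl x509 -in /etc/ssl/certs/urb-cert.crt -noout -subject -dates

# Request the web interface over HTTPS; -k accepts the self-signed certificate
$ curl -kI https://192.168.1.61/
```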
As you did for your K3s node VMs in the G025 guide, you have to apply some firewall rules on Proxmox VE to increase the protection of your UrBackup server. In particular, you want this VM reachable only through the ports `22` (for SSH) and `443` (for HTTPS) on its `net0` network device.
- At the `Datacenter` level, go to `Firewall` > `Alias`. There, add a new alias for the `bkpserver` main IP (the one for the first network device, named `net0` in Proxmox VE).

  See above how I named the alias `bkpserver_net0`, after the VM it's related to.
Browse to the
Firewall
>IPSet
tab, where you should create a new ipset that only includes thebkpserver_net0
alias created before.Give this ipset a name related to the VM, like
bkpserver_net0_ips
. -
Now go to the
Firewall
>Security Group
, where you should create a security group with a name such asbkpserver_net0_in
, and containing just the following rules.bkpserver_net0_in
:- Rule 1: Type
in
, ActionACCEPT
, Protocoltcp
, Sourcelocal_network_ips
, Dest. port22
, CommentSSH standard port open for entire local network
. - Rule 2: Type
in
, ActionACCEPT
, Protocoltcp
, Sourcelocal_network_ips
, Dest. port443
, CommentHTTPS standard port open for entire local network
.
- Rule 1: Type
  In the PVE web console, your new security group should look like in the snapshot below.

  BEWARE!
  Don't forget to enable the rules when you create them, so revise the `On` column and check the ones you may have left disabled.
- Next, you have to browse to the Firewall page of the `bkpserver` VM. Here you must press `Insert: Security Group` to apply to this VM the security group you've just created.

  In the form that appears, be sure to choose the security group related to this VM (`bkpserver_net0_in` in this guide). Also specify correctly the network interface to which it'll be applied (`net0`), and don't forget to leave the rule enabled!

  The security group rule should now appear in the VM firewall as shown below.
- Go to the `Firewall` > `IPset` section of the VM, where you have to add an IP set for the IP filter you'll enable later on the `net0` network device of this VM. Remember that the IP set's name must begin with the string `ipfilter-`, followed by the network device's name (`net0` here); otherwise, the IP filter won't work.

  As shown next, this IP set must contain only the alias of this VM's main network device IP (`bkpserver_net0` in this case).
- To enable the firewall on this VM, click on the `Firewall` > `Options` tab. There you'll have to adjust the options as shown in the following capture.

  The changed options are highlighted in red.

  - The `NDP` option is disabled because it's only useful for IPv6 networking, which is not active in your VM.
  - The `IP filter` is enabled, which helps to avoid IP spoofing.
    - Remember that enabling the option is not enough. You need to specify the concrete IPs allowed on the network interface on which you want to apply this security measure, something you've just done in the previous step.
  - The `log_level_in` and `log_level_out` options are set to `info`, enabling the logging of the firewall on the VM. This allows you to see, in the `Firewall > Log` view of the VM, any incoming or outgoing traffic that gets dropped or rejected by the firewall.
- As a final verification, try browsing to your UrBackup server's web interface on the HTTPS URL, but also on the HTTP one. Only the HTTPS one should work, while trying to connect with unsecured HTTP should return a timeout or similar error. On the other hand, also try to connect with your preferred SSH client. If any of these checks fails, go over this procedure again to find what you might have missed! Some example commands for these checks are sketched right after this list.
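A sketch of those checks, run from another machine in your local network; the `-m 5` flag just caps how long `curl` waits before giving up.

```
# Should succeed: port 443 is opened by the security group
$ curl -kI https://192.168.1.61/

# Should time out: port 55414 is not opened to the local network
$ curl -m 5 http://192.168.1.61:55414/

# Should still work: port 22 is opened by the security group
$ ssh mgrsys@192.168.1.61
```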
Like any other software, the UrBackup server comes with a default configuration that requires some retouching to better fit your circumstances. In particular, next you'll adjust just a couple of general options.
To give the UrBackup clients the capability of accessing and restoring the backups stored in the server, you need to specify the exact server URL they have to reach to do so.
- Browse to the `Settings` tab of your UrBackup server's web interface. By default, this page will put you on the `General` > `Server` options view. There, you'll see the empty parameter `Server URL for client file/backup access/browsing`.
- By default, that `Server URL` parameter is empty. You have to specify here the secondary network device IP (`10.0.0.61` in this guide) plus the UrBackup server port (`55414`), all of it preceded by the `http` protocol. So, the URL in this guide is `http://10.0.0.61:55414`, as shown in the snapshot below.

  Press `Save` to apply the change, which should show you a success message at the bottom of the page.
- Now click on the `General` > `Internet/Active clients` tab. On this page, there's a `Server URL clients connect to` field where you also have to specify the same IP and port as before, although keeping the `urbackup` protocol string already set there.

  Notice that the success message from the previous change hasn't disappeared when you've switched to this page, which very likely is a bug in this version of the UrBackup server.

  Type in the `Server URL` field the correct string, which for this guide is `urbackup://10.0.0.61:55414`, as you can see in the capture next.

  Press the `Save` button to apply the change, which should show you the same success message as before at the bottom of the page.
By default, the UrBackup server will automatically execute a full image backup of any client it's connected to. Since you already have full images done by Proxmox VE, you don't need to do the same thing again with UrBackup.
On the other hand, this procedure will fail with your K3s node VMs, because the tool UrBackup uses in the clients to make the images is incompatible with the ext2 filesystem used in the boot partition of all your Debian VMs.
With all this in mind, the best thing to do in your particular scenario is to disable, in your UrBackup server, the full image backups altogether.
- Return to the `Settings` > `General` > `Server` options view of your UrBackup server's web interface. There, you'll see the option `Do not do image backups` unchecked.
- Check the `Do not do image backups` option and then press the `Save` button.

  Again, due to a bug in the UrBackup server's web interface, you'll still see the success message from the previous change. So, after pressing `Save`, the same success warning will just stay there.
Like other services, the UrBackup server has a log file in the `/var/log` directory. Its full path is `/var/log/urbackup.log`.
This log is rotated, and its default rotation configuration is set in the file `/etc/logrotate.d/urbackupsrv`.
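A couple of quick ways to keep an eye on that log, just as a reference.

```
# Follow the UrBackup server log live
$ sudo tail -f /var/log/urbackup.log

# Or check the service's messages through systemd's journal
$ sudo journalctl -u urbackupsrv.service --since "1 hour ago"
```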
You may consider scheduling a job in Proxmox VE to back up this VM, as you did in the previous G039 guide with the K3s node VMs. If you want to do this, please bear in mind the following details.
- Remember that the backup job copies and compresses all the storage drives attached to the VM. This is important since the drive where UrBackup stores its backups is not only big, but also uses the btrfs filesystem, which may not agree well with the Proxmox VE backup procedure. If you want to leave that drive out of the job, see the sketch right after this list.
- Be careful with the storage space you have for backups within Proxmox VE, because the images of this VM will eat up that space faster than the backups of other VMs.
- Don't include the UrBackup server VM in the same backup job as other VMs. You want it apart from the others so you can schedule it at a different and more convenient time.
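If you do back up this VM with Proxmox VE but want to leave the big btrfs data disk out of the job, you can clear the `Backup` flag on that particular disk, either in the disk's options within the VM's `Hardware` tab or with the `qm` CLI. The sketch below assumes the data disk sits on the `scsi1` slot and guesses its volume name, so check the real values with `qm config` first.

```
# On the Proxmox VE host, as root: find the data disk's slot and volume name
qm config 6161 | grep bkpdata

# Redefine that disk keeping its volume string, but with the backup flag off
qm set 6161 --scsi1 hddusb_bkpdata:vm-6161-disk-0,backup=0
```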
$HOME
$HOME/.ssh
/etc
/etc/initramfs-tools/conf.d
/etc/nginx
/etc/nginx/sites-available
/etc/nginx/sites-enabled
/etc/nginx/snippets
/etc/ssl/certs
/mnt
/mnt/bkpdata-hdd
/proc
/var/log
$HOME/.google_authenticator
$HOME/.ssh/authorized_keys
$HOME/.ssh/id_rsa
$HOME/.ssh/id_rsa.pub
/etc/fstab
/etc/initramfs-tools/conf.d/resume
/etc/nginx/sites-available/urbackup.conf
/etc/nginx/sites-enabled/default
/etc/nginx/sites-enabled/urbackup.conf
/etc/nginx/snippets/ssl-params.conf
/etc/ssl/certs/urb-cert.crt
/etc/ssl/certs/urb-cert.key
/proc/swaps
/var/log/urbackup.log
- Increase the disk size on Debian 11 with btrfs filesystem
- BTRFS on Wikipedia
- Man page. btrfs-filesystem
- BTRFS. Main page
- BTRFS. Getting started
- BTRFS. Using Btrfs with Multiple Devices
- BTRFS support in Debian
- Btrfs file system stability status update
- Fixing Btrfs Filesystem Full Problems
- How to delete btrfs snapshot?
- UrBackup
- Administration Manual for UrBackup Server 2.4.x
- Server 2.5.21/Client 2.5.15
- Latest versions of UrBackup ARE HERE
- Linux image backups with UrBackup 2.5.y
- SSL on the web interface
- UrBackup_simple_make_web_via_ssl_https
- Connect clients with a HTTPS CONNECT web proxy
- Backup server for the raspberry pi
<< Previous (G039. Backups 03) | +Table Of Contents+ | Next (G041. Backups 05. UrBackup 02) >>