We run PostgreSQL 9.2 because our Vireo installation will not work with newer versions of PostgreSQL. We install PostgreSQL on an EC2 instance that we manage ourselves.
Find the available PostgreSQL 9 packages with the following.
yum list available | grep postgresql
Install the PostgreSQL 9 version that appears in your results.
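On an Amazon Linux / RHEL-family instance, the result of that search can be installed roughly as follows. This is a sketch: the exact package and service names (`postgresql92` here) depend on which repositories are enabled on your instance, so substitute what the search actually returned.

```shell
# Install the PostgreSQL 9.2 client and server packages (names vary by repo)
sudo yum -y install postgresql92 postgresql92-server

# Initialize the data directory, start the server, and enable it at boot
sudo service postgresql92 initdb
sudo service postgresql92 start
sudo chkconfig postgresql92 on
```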
Find the postgresql.conf file and make sure the installation is listening on all interfaces. You will need to change the commented line
#listen_addresses = 'localhost'
to
listen_addresses = '*'
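That edit can be made with a one-line sed command. The sketch below demonstrates it on a scratch copy of the file; on the server, point sed at the real postgresql.conf (its path varies by install method, typically under the data directory).

```shell
# Demonstrate the edit on a scratch file standing in for postgresql.conf
conf=$(mktemp)
echo "#listen_addresses = 'localhost'" > "$conf"

# Uncomment the setting and listen on all interfaces
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" "$conf"

cat "$conf"   # prints: listen_addresses = '*'
rm -f "$conf"
```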
Restart the PostgreSQL server after making these changes. Then add the corresponding application server IP address to the pg_hba.conf file.
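For reference, the pg_hba.conf entry for the application server might look like the following. The IP address and auth method here are placeholders; substitute your application server's actual address.

```
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   10.0.0.25/32    md5
```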
Download and install the most recent release of pg_back on your EC2 instance.
Edit (or create) the pg_back configuration file and add the following settings to it:
exclude_dbs = postgres, template0
format = plain
checksum_algorithm = sha512
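Taken together, the configuration file might look like this. Note that `backup_directory` is an assumption on my part, added so the output lands in the directory created below; check your pg_back version's sample configuration for the exact option that sets the output directory.

```
# pg_back configuration
backup_directory = /var/backups/postgresql    # assumed option name; verify
exclude_dbs = postgres, template0
format = plain
checksum_algorithm = sha512
```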
Create the backup directory and give the postgres user ownership of it:
sudo mkdir -p /var/backups/postgresql
sudo chown -R postgres:postgres /var/backups/postgresql
Run the following command to add the Google Cloud CLI yum repository:
sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-cli]
name=Google Cloud CLI
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
Install google-cloud-cli
sudo yum -y install google-cloud-cli
Create a service account and generate a key for it. You will need this key to complete the setup of your service account.
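If you create the service account from the command line, the steps might look like the sketch below. The account name `pg-backup` is a placeholder of mine; the project ID `pul-gcdc` comes from this document. Run this from a machine where you are already authenticated to the project.

```shell
# Create a service account for backup syncing (name is a placeholder)
gcloud iam service-accounts create pg-backup \
    --project=pul-gcdc --display-name="PostgreSQL backup sync"

# Generate a JSON key for it; this is the file used as --key-file below
gcloud iam service-accounts keys create postgres_user.json \
    --iam-account=pg-backup@pul-gcdc.iam.gserviceaccount.com
```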
Using the key above, activate the account as the postgres user. In the example below, the --key-file is named postgres_user.json. Run the following commands:
sudo su - postgres
mkdir {logs,.config}
vi ~/.config/postgres_user.json # enter file content
gcloud auth activate-service-account --key-file .config/postgres_user.json --project=pul-gcdc
Create a Google Cloud Storage bucket.
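A sketch of creating the bucket and letting the service account write to it; the bucket name, location, and service-account e-mail are all placeholders (bucket names are globally unique, so choose your own):

```shell
# Create the bucket in the pul-gcdc project
gsutil mb -p pul-gcdc -l us-east1 gs://google-bucket-name

# Grant the backup service account permission to write objects
gsutil iam ch \
    serviceAccount:pg-backup@pul-gcdc.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://google-bucket-name
```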
Run the pg_back program to perform an initial backup
pg_back
When it completes, verify that you can sync the contents to Google Cloud with
/usr/bin/gsutil rsync -ru /var/backups/postgresql gs://google-bucket-name
Create cron jobs in the postgres user's crontab to run the backup and the sync daily:
# backup
25 4 * * * /usr/bin/pg_back
#
# m h dom mon dow command
25 5 * * * /usr/bin/gsutil rsync -ru /var/backups/postgresql gs://google-bucket-name > ~/logs/gsutil.log 2>&1
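As a sketch, the two lines above can be appended to the postgres user's crontab non-interactively (running `crontab -e` as postgres and pasting them works just as well):

```shell
# As the postgres user, append the two jobs to any existing crontab
# (crontab -l fails harmlessly if no crontab exists yet)
( crontab -l 2>/dev/null
  echo '25 4 * * * /usr/bin/pg_back'
  echo '25 5 * * * /usr/bin/gsutil rsync -ru /var/backups/postgresql gs://google-bucket-name > ~/logs/gsutil.log 2>&1'
) | crontab -
```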
The following files are created:

- pg_globals_{YYYY-MM-DDT00:00:00-00}.sql: definitions of roles and tablespaces, dumped with pg_dumpall -g. This file is restored with psql.
- ident_file_{YYYY-MM-DDT00:00:00-00}.out: the full contents of the pg_ident.conf file, usually located in the data directory.
- hba_file_{YYYY-MM-DDT00:00:00-00}.out: the full contents of the pg_hba.conf file, usually located in the data directory.
- {thesis-prod-db}_{YYYY-MM-DDT00:00:00-00}.createdb.sql: an SQL file containing the definition of the database and parameters set at the database or "role in database" level. It is mostly useful when using a version of pg_dump older than 11. It is restored with psql.
- {thesis-prod-db}_{YYYY-MM-DDT00:00:00-00}.sql: the plain SQL dump of the database, which must be restored with psql.
When checksums are computed, each file described above is accompanied by a text file of the same name with a suffix naming the checksum algorithm.
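Those companion files can be verified with the standard coreutils checksum tools. The sketch below shows the mechanism on a made-up file; the filenames are illustrative, not pg_back's actual output.

```shell
# Sketch: verify a backup file against its .sha512 companion
workdir=$(mktemp -d)
cd "$workdir"

# Pretend this is a dump produced by pg_back
echo "-- dump contents" > thesis-prod-db.sql
sha512sum thesis-prod-db.sql > thesis-prod-db.sql.sha512

# Reads the checksum file and reports "thesis-prod-db.sql: OK" on a match
sha512sum -c thesis-prod-db.sql.sha512
```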
To sum up, when restoring:

- Create the roles and tablespaces by executing pg_globals_{YYYY-MM-DDT00:00:00-00}.sql with psql.
- Create the database with {thesis-prod-db}_{YYYY-MM-DDT00:00:00-00}.createdb.sql if necessary.
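As a sketch, a full restore from the files above might run as follows (as the postgres user, with the real timestamps substituted for the placeholders):

```shell
# 1. Recreate roles and tablespaces
psql -f pg_globals_YYYY-MM-DDT00:00:00-00.sql postgres

# 2. Recreate the database and its settings, if needed
psql -f thesis-prod-db_YYYY-MM-DDT00:00:00-00.createdb.sql postgres

# 3. Load the dump into the restored database
psql -d thesis-prod-db -f thesis-prod-db_YYYY-MM-DDT00:00:00-00.sql
```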