export and import #3
Does clickhouse have a way to do import/export? That's what is blocking me from implementing this.
It's a good question. I read https://clickhouse.tech/docs/en/operations/backup/ and the best officially supported option seems to be https://clickhouse.tech/docs/en/operations/utilities/clickhouse-copier/. It's not perfect (not atomic) but should be a good start :)
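For reference, clickhouse-copier is driven by a task description stored in ZooKeeper; its invocation, per the linked docs, looks roughly like this (the task path and base dir here are illustrative):

```sh
# clickhouse-copier needs a running ZooKeeper to coordinate; the copy job
# itself is an XML task description uploaded under --task-path.
clickhouse-copier --daemon \
    --config zookeeper.xml \
    --task-path /clickhouse/copier/task1 \
    --base-dir /var/lib/clickhouse-copier
```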
We don't customize the image. Is that process available in the official clickhouse image?
The binary clickhouse-copier is available in the official image, however I ended up using this repo: https://github.com/AlexAkulov/clickhouse-backup which was mentioned on the official doc: https://clickhouse.tech/docs/en/operations/backup/

It was fairly simple:

```sh
dokku clickhouse:enter {{NAME}}
wget https://github.com/AlexAkulov/clickhouse-backup/releases/download/v0.6.4/clickhouse-backup.tar.gz
tar -zxvf clickhouse-backup.tar.gz
cd clickhouse-backup
CLICKHOUSE_DATA_PATH=/var/lib/clickhouse CLICKHOUSE_USERNAME={{username}} CLICKHOUSE_PASSWORD={{password}} ./clickhouse-backup create test
```

After that, it's simply a matter of zipping up the result. I dunno if this can be integrated or not into this repo. I agree it is less straightforward than a simple pg_dump.
If the binary is indeed available, then it should be possible to do backups in the same way we do them for everything else. Is there a way to import the backup?
I haven't looked too much into clickhouse-copier, as they mentioned that it is mostly used to replicate data from one db to another, not to generate a file. I guess we could do something like "create another db, copy data there, disconnect, zip the mounted files from the new db, drop the new db", and the opposite for import? But it seems more complex than strictly necessary, which is why I used clickhouse-backup (not copier) instead.
Could we use that docker image from the linked repo against the service? And what does the restore process look like?
Yes, but my docker knowledge is very limited, and I wasn't sure how to resolve the hostname externally (without exposing the service through an ambassador). Restoring should simply be
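(The command is cut off above; a plausible completion, using clickhouse-backup's restore subcommand on the `test` backup created earlier in the thread, would be:)

```sh
# Hypothetical reconstruction: restore the 'test' backup created above.
./clickhouse-backup restore test
```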
You could start the container as a linked container?
Yes, that would probably work and be the most efficient way of doing the backup.
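A rough sketch of that approach, assuming the service container follows the usual dokku naming (`dokku.clickhouse.my-service`) and that the linked repo's published image is used; all names here are illustrative:

```sh
# Sketch: link the backup tool to the running service and share its data
# volume, since clickhouse-backup reads /var/lib/clickhouse directly.
docker run --rm \
    --link dokku.clickhouse.my-service:clickhouse \
    --volumes-from dokku.clickhouse.my-service \
    -e CLICKHOUSE_HOST=clickhouse \
    -e CLICKHOUSE_USERNAME={{username}} \
    -e CLICKHOUSE_PASSWORD={{password}} \
    alexakulov/clickhouse-backup create test
```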
The 2021 roadmap for ClickHouse contains a "backup" feature. The issue which tracks this feature is ClickHouse/ClickHouse#13953. Once that story is done, there will be official backup and restore commands which could be used for "export" and "import".
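For reference, the native statements that eventually shipped look roughly like this when run through clickhouse-client; the `backups` disk name is an illustrative destination that has to be declared in the server configuration (a config sketch appears further down):

```sh
# Native SQL-level backup/restore in recent ClickHouse releases; 'backups'
# must be a disk declared in the server's storage configuration.
clickhouse-client --query "BACKUP DATABASE default TO Disk('backups', 'b1.zip')"
clickhouse-client --query "RESTORE DATABASE default FROM Disk('backups', 'b1.zip')"
```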
Okay, seems like the backup was implemented. Does anyone have good ideas as to how this might be done on our end? It seems like it has to back up to a directory, but we don't currently mount any backup directories, afaik...
My assumption is the same. A folder has to be mounted to expose backups. I will run some tests tomorrow.
We could backup to a file and then maybe cat the contents out?
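Sketching that idea with the native BACKUP statement (the disk name, paths, and service name are hypothetical, and the disk has to be whitelisted in the server config):

```sh
# Hypothetical sketch: write a single-archive backup inside the container,
# then stream it out, matching the stdout-style export other plugins use.
dokku clickhouse:enter my-service clickhouse-client \
    --query "BACKUP DATABASE default TO Disk('backups', 'export.zip')"
dokku clickhouse:enter my-service cat /backups/export.zip > export.zip
```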
The type of a disk under
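For reference, exposing a backup destination to the native BACKUP command means declaring a disk and whitelisting it; a minimal sketch, with an illustrative disk name and path:

```sh
# Minimal sketch of a backup disk declaration for the native BACKUP command.
cat > /etc/clickhouse-server/config.d/backup_disk.xml <<'EOF'
<clickhouse>
    <storage_configuration>
        <disks>
            <backups>
                <type>local</type>
                <path>/backups/</path>
            </backups>
        </disks>
    </storage_configuration>
    <backups>
        <allowed_disk>backups</allowed_disk>
        <allowed_path>/backups/</allowed_path>
    </backups>
</clickhouse>
EOF
```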
That might make our existing backup/restore code more difficult to integrate with. I'd rather not deviate from that if possible, as it makes it easier to copy/paste code across plugins.
I had a closer look at https://github.com/AlexAkulov/clickhouse-backup. The functionality which uses
Backups were implemented in the official image, so does that mean we could implement that here?
**Backing up**

For anyone who might want to back up ClickHouse to S3, here is how I do it.

```sh
COMMANDS=$(cat <<'EOF'
echo -e '✅ Setting environment variables'
export CLICKHOUSE_USERNAME=<replace-clickhouse-username>
export CLICKHOUSE_PASSWORD=<replace-clickhouse-password>
export REMOTE_STORAGE=s3
export S3_ACCESS_KEY=<replace-access-key>
export S3_SECRET_KEY=<replace-secret-key>
export S3_BUCKET=<replace-bucket-name>
export S3_ENDPOINT=<replace-endpoint>
export S3_REGION=us-east-1

echo "✅ Download clickhouse-backup binaries"
wget https://github.com/Altinity/clickhouse-backup/releases/download/v2.6.1/clickhouse-backup-linux-amd64.tar.gz
tar -zxvf clickhouse-backup-linux-amd64.tar.gz
install -o root -g root -m 0755 build/linux/amd64/clickhouse-backup /usr/local/bin

echo "✅ Creating and uploading backup"
BACKUP_NAME=clickhouse-backup-$(date -u +%Y-%m-%dT%H-%M-%S)
clickhouse-backup create $BACKUP_NAME --rbac --configs
clickhouse-backup upload $BACKUP_NAME

echo "✅ Exit shell. Goodbye!"
exit
EOF
)

dokku clickhouse:enter <clickhouse-instance-name> bash -c "$COMMANDS"
```

Note: clickhouse-backup must be run inside the container.

**Restoring**

To restore, simply repeat the process but run the following toward the end:

```sh
clickhouse-backup restore_remote <replace-with-remote-backup-name>
```

You can set up a cron job to run this once per day. Learn more about Altinity Backup for ClickHouse.
Description of problem

Most dokku plugins offer an `:export` and `:import` feature that can be used to back up and restore to a new server if required. This plugin does not offer any similar options; it would be a really nice improvement.

It seems we have access to the data dir, but it's probably unsafe to rsync its data while other processes are writing to it.
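For context, the `:export`/`:import` interface in other dokku datastore plugins is a stream over stdout/stdin; a clickhouse equivalent would presumably be used like this (hypothetical, mirroring e.g. dokku-postgres):

```sh
# Hypothetical usage, matching the convention of other dokku datastore plugins:
dokku clickhouse:export my-service > my-service.backup
dokku clickhouse:import my-service < my-service.backup
```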