Point In Time Recovery #551

Open · Zvirovyi wants to merge 7 commits into main

Conversation


@Zvirovyi Zvirovyi commented Nov 20, 2024

Important!

This PR relies on the canonical/charmed-mysql-snap#56.

Overview

MySQL stores binary transaction logs (binlogs). This PR adds a service job that uploads these logs to the S3 bucket, plus the ability to use them later for point-in-time recovery via a new restore-to-time parameter of the restore action. The parameter accepts a MySQL timestamp or the keyword latest (to replay all transaction logs).

Also, a new blocked application status is introduced, Another cluster S3 repository, to signal to the user that the configured S3 repository is claimed by another cluster; while it is set, the binlog-collecting job is disabled and creating new backups is restricted (these are the only workload limitations). This is crucial to keep the stored binary logs safe from other clusters. The check uses @@GLOBAL.group_replication_group_name.
After a restore, the cluster's group replication is reinitialized, so it effectively becomes a new cluster. In this case, the message is changed to Move restored cluster to another S3 repository to indicate the event more clearly to the user.
Both block messages disappear when the S3 configuration is removed or changed to an empty repository.
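
For illustration, the ownership check could look roughly like the sketch below. The marker object in the bucket and the helper names are assumptions made for this sketch, not the exact code in this PR; only the use of @@GLOBAL.group_replication_group_name comes from the description above.

# Minimal sketch of the repository-ownership check, assuming boto3 and a
# "group-replication-id" marker object stored in the bucket (an assumption).
import boto3


def s3_repository_claimed_by_another_cluster(
    local_group_name: str, bucket: str, prefix: str
) -> bool:
    """Return True when the repository already belongs to a different cluster.

    local_group_name is the value of @@GLOBAL.group_replication_group_name on this cluster.
    """
    s3 = boto3.client("s3")
    key = f"{prefix}/group-replication-id"
    try:
        stored = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode().strip()
    except s3.exceptions.NoSuchKey:
        # Unclaimed repository: record this cluster's UUID and allow backups/binlogs.
        s3.put_object(Bucket=bucket, Key=key, Body=local_group_name.encode())
        return False
    return stored != local_group_name

When such a check returns True, the charm would set the Another cluster S3 repository block, disable the binlog-collecting job and refuse new backups, as described above.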

Usage example

  1. Deploy mysql + s3-integrator and integrate them.
  2. Create a full backup.
  3. Create test data:

     create database zvirovyi;
     use zvirovyi;
     create table asd(message varchar(255) primary key);
     select current_timestamp; # 2024-11-20 17:10:01
     insert into asd values ('hello');
     select current_timestamp; # 2024-11-20 17:10:12
     insert into asd values ('world');
     flush binary logs;

  4. Wait several minutes for the binlogs to be uploaded.
  5. Restore: juju run mysql/leader restore backup-id=2024-11-20T17:08:24Z restore-to-time="2024-11-20 17:10:01"

     use zvirovyi;
     select * from asd; # empty set returned

  6. Observe the application block message.
  7. Restore: juju run mysql/leader restore backup-id=2024-11-20T17:08:24Z restore-to-time="latest"

     use zvirovyi;
     select * from asd; # hello, world returned

Key notes

@Zvirovyi Zvirovyi marked this pull request as ready for review December 8, 2024 09:12
Author
Author

Zvirovyi commented Dec 8, 2024

Tests are WIP!
Update: integration tests are done, and I will fix and add unit tests after the PR review.

Comment on lines 563 to +571

if not success:
    logger.error(f"Restore failed: {error_message}")
    event.fail(error_message)

    if recoverable:
        self._clean_data_dir_and_start_mysqld()
    else:
        self.charm.app_peer_data.update({
            "s3-block-message": MOVE_RESTORED_CLUSTER_TO_ANOTHER_S3_REPOSITORY_ERROR,
            "binlogs-collecting": "",
Contributor

question: if the restore fails and is not recoverable, does this instruct the user to change the s3 repository?

if so, is that the safest & most relevant action for the user to take?

Author

I think it's the most convenient way to report a restore error: fail the event and set a blocked unit status at the same time, while keeping s3-block-message as the application status.
It's important to mention that while the unit is blocked, the application status can't transition to the s3-block-message because of MySQLOperatorCharm._on_update_status -> if not self.cluster_initialized -> if self._mysql.cluster_metadata_exists(self.get_unit_address(unit)), which will fail like:

unit-mysql-0: 01:50:14 WARNING unit.mysql/0.juju-log Failed to check if cluster metadata exists from_instance='10.88.216.67'
unit-mysql-0: 01:50:14 INFO unit.mysql/0.juju-log skip status update when not initialized

Summarizing, if this particular check fails, the application status will be the following:

(screenshot of juju status output)

Also, right below your code reference, there is the following code block:

self.charm.app_peer_data.update({
    "s3-block-message": MOVE_RESTORED_CLUSTER_TO_ANOTHER_S3_REPOSITORY_ERROR,
    "binlogs-collecting": "",
})

^ it applies the same logic I'm talking about if any of the later restore steps fail, such as _clean_data_dir_and_start_mysqld, _pitr_restore or _post_restore.

Contributor

I don't think I understand. To clarify, my question was referring to lines 569-571.

> while keeping s3-block-message as application status

if the restore fails and is not recoverable, what should the user do?

> while the unit is blocked, application status can't transit to the s3-block-message

what happens if the user removes that unit?

Author

> I don't think I understand. To clarify, my question was referring to lines 569-571

Yes, I'm also referring to lines 569-571. I'm just pointing to lines 578-581 to note that the same logic (as when _restore fails and is not recoverable) will be applied to the later steps like _pitr_restore and _post_restore, assuming those operations are not recoverable.

> if the restore fails and is not recoverable, what should the user do?

When it's not recoverable, the user will receive a failed event notification and see a single blocked unit. You can see this in the screenshot attached to the previous message.

The best course of action for the user in this situation would be to:

  • investigate the underlying issue in the debug-log and resolve it (for example, the user can re-create the cluster; see the documentation section "Migrate a cluster")
  • run the same restore again / run another restore

Maybe it would be a good idea to put a notice about possible restore failures directly in the documentation.

> what happens if the user removes that unit?

  1. Before restore:
    unit = active / idle
    application = active
  2. Restore failed:
    unit = blocked / idle
    application = active
  3. User removes the unit:
    application = active
  4. User adds a unit:
    unit = active / idle
    application = S3 repository claimed by another cluster
  5. Restore succeeds:
    unit = active / idle
    application = Move restored cluster to another S3 repository

    not self.charm.unit.is_leader() or "binlogs-collecting" not in self.charm.app_peer_data
):
    logger.debug("Stopping binlogs collector")
    selected_snap.stop([CHARMED_MYSQL_BINLOGS_COLLECTOR_SERVICE])
Contributor

should the service be disabled/enabled instead of stopped/started so that it will restart (if service enabled) if the unit's machine restarts?

Author

I agree, will do this.
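
For reference, a minimal sketch of that change, assuming the snap CLI is invoked directly (the PR may use a snap helper library instead; only the constant's name comes from the diff above, its value here is an assumption):

import subprocess

# Name taken from the diff above; the value is an assumption for this sketch.
CHARMED_MYSQL_BINLOGS_COLLECTOR_SERVICE = "charmed-mysql.binlogs-collector"


def start_binlogs_collector() -> None:
    # "snap start --enable" also enables the service, so it comes back after a machine reboot.
    subprocess.run(
        ["snap", "start", "--enable", CHARMED_MYSQL_BINLOGS_COLLECTOR_SERVICE], check=True
    )


def stop_binlogs_collector() -> None:
    # "snap stop --disable" stops the service and keeps it from restarting on reboot.
    subprocess.run(
        ["snap", "stop", "--disable", CHARMED_MYSQL_BINLOGS_COLLECTOR_SERVICE], check=True
    )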


Returns: whether this operation was successful.
"""
if not self._s3_integrator_relation_exists:
Contributor

it might be useful to delete the config if the s3 integrator relation is removed—but deferring to @paulomach

Comment on lines +840 to +842

if is_running and (
    not self.charm.unit.is_leader() or "binlogs-collecting" not in self.charm.app_peer_data
):
Contributor

will there be any issues if multiple units are running the service at once?

if so, as the code is currently written, I think there might be some cases where a leader switchover could cause multiple units to run the service at once. If the peer relation data changes during a switchover, there may be a delay of a few Juju events (which can be seconds to minutes, depending on deferred events) during which two units are running the service; and if the peer relation data doesn't change during a leader switchover, I think two units could be running the service for an extended period of time.

Author

There are no problems with several units running this service simultaneously - it's safe.

Contributor

@shayancanonical shayancanonical left a comment

Since we are only collecting binlogs from the leader unit, we need to add handling for when Juju elects a new leader -> what should happen here? Should the new leader unit start collecting binlogs instead?

Furthermore, while thinking of the above use case, we should also handle the scaling scenario -> what if the leader unit is scaled down?

Also, I would really prefer it if we could add an integration test for the above scenario (where the leader unit is scaled down, after which the PITR is performed) once we determine how to handle it.
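
A rough shape for such a test, assuming the pytest-operator/python-libjuju conventions commonly used for charm integration tests; the backup-id is a placeholder and the helper is illustrative, not code from this PR:

import pytest


async def _leader_unit(app):
    """Return the application's current leader unit."""
    for unit in app.units:
        if await unit.is_leader_from_status():
            return unit
    raise AssertionError("no leader unit found")


@pytest.mark.abort_on_fail
async def test_pitr_after_leader_scale_down(ops_test):
    mysql_app = ops_test.model.applications["mysql"]

    # Scale down the current leader so another unit has to take over binlog collection.
    old_leader = await _leader_unit(mysql_app)
    await mysql_app.destroy_units(old_leader.name)
    await ops_test.model.wait_for_idle(apps=["mysql"], status="active", timeout=1800)

    # Run PITR from the new leader and expect the action to complete.
    new_leader = await _leader_unit(mysql_app)
    action = await new_leader.run_action(
        "restore", **{"backup-id": "<backup-id>", "restore-to-time": "latest"}
    )
    result = await action.wait()
    assert result.status == "completed"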

event.fail(error_message)
self.charm.unit.status = BlockedStatus(error_message)
return

# Run post-restore operations
Contributor

suggest moving logic (from lines 585 to 600) into the _restore() method as that is where all the restore logic is located. the _on_restore action handler is just coordinating the restore

this will make it easier to trace the code since all the restore logic is in one method (_restore())

)
verif = True
if ca_chain := s3_parameters.get("tls-ca-chain"):
    ca_file = tempfile.NamedTemporaryFile()
Contributor

suggestion: use a context manager with the NamedTemporaryFile. This removes the need to manually close the file in a finally block.
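
A sketch of the suggested context-manager form; s3_parameters is stubbed here as a stand-in for the values the charm reads from the s3-integrator relation, and the surrounding code in the PR may differ:

import tempfile

# Stand-in for the parameters provided by the s3-integrator relation.
s3_parameters = {"tls-ca-chain": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"}

verif = True
if ca_chain := s3_parameters.get("tls-ca-chain"):
    # The temporary file is closed and deleted automatically when the with-block exits,
    # so no manual cleanup in a finally block is needed.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".pem") as ca_file:
        ca_file.write(ca_chain)
        ca_file.flush()
        verif = ca_file.name
        # ... make the S3 call here, while the certificate file still exists ...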
