Clone Plugin Background and Internals
Worklogs:
- WL#9209: InnoDB: Clone local replica
- WL#9210: InnoDB: Clone remote replica
- WL#9211: InnoDB: Clone Replication Coordinates
- WL#11636: InnoDB: Clone Remote provisioning
User-facing terms used throughout the worklogs, code, and this Wiki:
- Donor: the MySQL server instance that is being cloned
- Client: the MySQL server instance that is cloning from a donor
- Local clone: both the donor and the client are located on the same machine
- Remote clone: the donor and the client are on different network-connected machines
- In-place clone: the client instance discards its current data, replacing it with the cloned data
Bugs reported to MySQL:
- Bug#109920: SEs cannot add to clone data size estimates
- Bug#109921: Cannot clone with synchronization across SEs
- Bug#109922: SEs do not coordinate clone rollbacks on instance startup
- Bug#109926: Clone plugin does not support transactional SEs other than InnoDB
- Locator: a non-persistent ID of the data snapshot being cloned or applied, which is specific to each storage engine. It may optionally contain the completed application state, which is used for remote clone resumes after intermittent network errors.
- Task: the work of sending clone data (on the donor) or applying it (on the client) that is performed by a single thread. A clone operation may have multiple parallel threads.
- Task ID: an ID associated with a task. The main clone thread has the constant ID zero, and the additional spawned worker threads have unique non-zero IDs. The main thread's ID 0 may temporarily become non-unique when a clone resumes after a network error: the old, still-connected main thread and the newly connecting main thread both carry ID 0.
`Ha_clone_type`: sometimes a single flag and sometimes a bitset (`Ha_clone_flagset`) of:
- `HA_CLONE_BLOCKING`, `HA_CLONE_REDO`, `HA_CLONE_PAGE`, `HA_CLONE_HYBRID`: supposed to indicate, in an InnoDB-centric way, how much the clone operation may block in a storage engine. Only one of these is supposed to be set. In practice, as of 8.0.28, `HA_CLONE_REDO` and `HA_CLONE_PAGE` are unused, and the clone type is always set to `HA_CLONE_HYBRID` on the donor and `HA_CLONE_BLOCKING` on the client.
- `HA_CLONE_MULTI_TASK`: supposed to indicate that the storage engine can clone using multiple concurrent threads. In practice, as of 8.0.28, the code assumes it is always set and ignores it otherwise.
- `HA_CLONE_RESTART`: supposed to indicate that the storage engine supports remote clone resume after a network error. In practice, as of 8.0.28, the code assumes it is always set and ignores it otherwise.
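The flag handling above can be sketched with stand-in types. Note the enum values and the `Ha_clone_flagset` width here are illustrative, not copied from the server headers:

```cpp
#include <bitset>

// Illustrative stand-ins for the server's clone flag values; the real
// definitions live in the MySQL source tree.
enum Ha_clone_flag {
  HA_CLONE_BLOCKING,
  HA_CLONE_REDO,
  HA_CLONE_PAGE,
  HA_CLONE_HYBRID,
  HA_CLONE_MULTI_TASK,
  HA_CLONE_RESTART,
  HA_CLONE_FLAG_MAX
};

using Ha_clone_flagset = std::bitset<HA_CLONE_FLAG_MAX>;

// What a Clone_capability_t implementation effectively has to report as of
// 8.0.28: the caller assumes multi-task and restart support, and the clone
// type in use is HA_CLONE_HYBRID (donor) or HA_CLONE_BLOCKING (client).
void example_clone_get_capability(Ha_clone_flagset &flags) {
  flags.reset();
  flags.set(HA_CLONE_HYBRID);
  flags.set(HA_CLONE_MULTI_TASK);
  flags.set(HA_CLONE_RESTART);
}
```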
`Ha_clone_mode`: for both donor and client, indicates the type of the operation being started. One of:
- `HA_CLONE_MODE_START`: start a new clone session.
- `HA_CLONE_MODE_RESTART`: restart an existing clone session after a recoverable error (e.g. an intermittent network failure).
- `HA_CLONE_MODE_ADD_TASK`: a new thread has been started for the ongoing clone session and is joining it.
- `HA_CLONE_MODE_VERSION`: used only on the client; instructs it to prepare a version locator that is used for clone version negotiation with the donor.
A storage engine wishing to support clone must implement the handlerton API defined in the `Clone_interface_t` struct. Here we discuss the Oracle MySQL API, without the MyRocks clone extensions. TODO: link to them
- `using Clone_capability_t = void (*)(Ha_clone_flagset &flags)`: return the bitset of clone capabilities this storage engine supports. As of 8.0.28, all SEs must support everything; failing to provide a capability asserts in a debug build and is ignored otherwise.
- `using Clone_begin_t = int (*)(handlerton *hton, THD *thd, const uchar *&loc, uint &loc_len, uint &task_id, Ha_clone_type type, Ha_clone_mode mode)`: start, resume, or attach to the clone session.
- `using Clone_copy_t = int (*)(handlerton *hton, THD *thd, const uchar *loc, uint loc_len, uint task_id, Ha_clone_cbk *cbk)`: send the clone data through the provided callbacks.
- `using Clone_ack_t = int (*)(handlerton *hton, THD *thd, const uchar *loc, uint loc_len, uint task_id, int in_err, Ha_clone_cbk *cbk)`: acknowledge three types of events coming from the client: 1) completed client-side application of a particular clone stage, moving to the next one; 2) successful application of memory buffer data (as opposed to file data); 3) clone application errors.
- `using Clone_end_t = int (*)(handlerton *hton, THD *thd, const uchar *loc, uint loc_len, uint task_id, int in_err)`: finish the clone session. If the session had multiple threads attached, it is called for each one.
- `using Clone_apply_begin_t = int (*)(handlerton *hton, THD *thd, const uchar *&loc, uint &loc_len, uint &task_id, Ha_clone_mode mode, const char *data_dir)`: start the clone session, attach to a clone session, or get the version negotiation locator.
- `using Clone_apply_t = int (*)(handlerton *hton, THD *thd, const uchar *loc, uint loc_len, uint task_id, int in_err, Ha_clone_cbk *cbk)`: apply the next received chunk of data from the donor.
- `using Clone_apply_end_t = int (*)(handlerton *hton, THD *thd, const uchar *loc, uint loc_len, uint task_id, int in_err)`: finish the clone session. If the session had multiple threads attached, it is called for each one.
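Wiring these callbacks into an engine can be sketched as follows. The struct below is a cut-down stand-in for the server's `handlerton`/`Clone_interface_t` (only three of the eight callbacks, with forward-declared opaque types), and the function bodies are no-op placeholders:

```cpp
using uchar = unsigned char;
using uint = unsigned int;

// Opaque stand-ins; the real types come from the server headers.
struct handlerton;
struct THD;
struct Ha_clone_cbk;
enum Ha_clone_type { HA_CLONE_HYBRID };
enum Ha_clone_mode { HA_CLONE_MODE_START };

// Cut-down stand-in for Clone_interface_t: the function pointers a storage
// engine fills in to participate in clone.
struct Clone_interface_t {
  int (*clone_begin)(handlerton *, THD *, const uchar *&, uint &, uint &,
                     Ha_clone_type, Ha_clone_mode);
  int (*clone_copy)(handlerton *, THD *, const uchar *, uint, uint,
                    Ha_clone_cbk *);
  int (*clone_end)(handlerton *, THD *, const uchar *, uint, uint, int);
};

// No-op placeholder implementations (return 0 for success).
static int my_clone_begin(handlerton *, THD *, const uchar *&, uint &,
                          uint &, Ha_clone_type, Ha_clone_mode) { return 0; }
static int my_clone_copy(handlerton *, THD *, const uchar *, uint, uint,
                         Ha_clone_cbk *) { return 0; }
static int my_clone_end(handlerton *, THD *, const uchar *, uint, uint,
                        int) { return 0; }

// An engine fills the interface struct at plugin init time.
Clone_interface_t make_clone_interface() {
  Clone_interface_t clone = {};
  clone.clone_begin = my_clone_begin;
  clone.clone_copy = my_clone_copy;
  clone.clone_end = my_clone_end;
  return clone;
}
```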
Cloning handles replication coordinates as follows:
- Positional: each transaction commit updates the transaction system header in InnoDB; the clone enables ordered commit (if not already enabled) and waits for any unordered transactions to commit. At the end of InnoDB redo log copy, XA operations are blocked, making the binary log position consistent with the redo log.
- GTID: not discussed in depth here. It is copied consistently with the last committed transaction, without any locking.
After the clone finishes, both positional and GTID replication coordinates are available in the `performance_schema.clone_status` table. The clone plugin on the client side uses this table to set up replication, including setting `gtid_executed`. Note that the `clone_status` table does not have information about storage engine log positions.
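For example, on the client after a clone completes, the coordinates can be read back with a query like (column names as in MySQL 8.0):

```sql
-- Replication coordinates recorded by the last clone operation.
SELECT binlog_file, binlog_position, gtid_executed
FROM performance_schema.clone_status;
```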
This is far from a complete description of InnoDB clone internals; we discuss only enough to cover cross-engine synchronization.
InnoDB clone concepts:
- InnoDB Snapshot. At any time multiple snapshots can be active.
- InnoDB Clone. At any time multiple clones can be active and attached to a single snapshot, but this is not a currently used feature.
The clone operation in InnoDB proceeds in stages:
- file copy: the tablespace files are copied. While it is in progress, a separate thread tracks all the flushed page IDs.
- page copy: the flushed pages from the previous stage are copied. While it is in progress, the redo log from the last checkpoint LSN is being archived. The last archived LSN is the LSN of the cloned instance. A check at MTR commit guards against overwriting unarchived logs (`log0write.c::log_writer_wait_on_archiver`).
- redo log copy: the archived redo log is copied.
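The stage progression above can be summarized as a simple state machine. This is an illustrative model of the ordering only, not InnoDB's actual stage representation:

```cpp
// Illustrative model of the InnoDB clone stage order; InnoDB's real state
// tracking lives in its snapshot code.
enum class CloneStage { FileCopy, PageCopy, RedoCopy, Done };

// While FileCopy runs, flushed page IDs are tracked; while PageCopy runs,
// redo from the last checkpoint LSN is archived; RedoCopy ships the archive.
CloneStage next_stage(CloneStage s) {
  switch (s) {
    case CloneStage::FileCopy: return CloneStage::PageCopy;
    case CloneStage::PageCopy: return CloneStage::RedoCopy;
    case CloneStage::RedoCopy: return CloneStage::Done;
    case CloneStage::Done:     return CloneStage::Done;
  }
  return CloneStage::Done;  // unreachable with a valid stage value
}
```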
Support for concurrent DDL during clone seems to consist mostly of filesystem-level operation tracking.