Remove KL for changefeed schedule on different cluster (#19190)
kathancox authored Dec 13, 2024
1 parent 343730a commit cc608ab
Showing 12 changed files with 22 additions and 16 deletions.

src/current/_includes/v24.2/known-limitations/pcr-scheduled-changefeeds.md
This file was deleted.

src/current/_includes/v24.3/known-limitations/pcr-scheduled-changefeeds.md
This file was deleted.

1 change: 0 additions & 1 deletion src/current/v24.2/create-and-configure-changefeeds.md
@@ -186,7 +186,6 @@ For more information, see [`EXPERIMENTAL CHANGEFEED FOR`]({% link {{ page.versio
 ## Known limitations
 
 {% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %}
 - {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
 - {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
4 changes: 4 additions & 0 deletions src/current/v24.2/create-schedule-for-changefeed.md
@@ -58,6 +58,10 @@ Option | Value | Description
 `on_execution_failure` | `retry` / `reschedule` / `pause` | Determine how the schedule handles an error. <br><br>`retry`: Retry the changefeed immediately.<br><br>`reschedule`: Reschedule the changefeed based on the `RECURRING` expression.<br><br>`pause`: Pause the schedule. This requires that you [resume the schedule]({% link {{ page.version.version }}/resume-schedules.md %}) manually.<br><br>**Default:** `reschedule`
 `on_previous_running` | `start` / `skip` / `wait` | Control whether the changefeed schedule should start a changefeed if the previous scheduled changefeed is still running.<br><br>`start`: Start the new changefeed anyway, even if the previous one is running.<br><br>`skip`: Skip the new changefeed and run the next changefeed based on the `RECURRING` expression.<br><br>`wait`: Wait for the previous changefeed to complete.<br><br>**Default:** `wait`
 
+{{site.data.alerts.callout_info}}
+{% include_cached new-in.html version="v24.2" %} To avoid multiple clusters running the same schedule concurrently, changefeed schedules will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) when [restored]({% link {{ page.version.version }}/restore.md %}) onto a different cluster or after [physical cluster replication]({% link {{ page.version.version }}/cutover-replication.md %}) has completed.
+{{site.data.alerts.end}}
+
 ## Examples
 
 Before running any of the examples in this section, it is necessary to enable the `kv.rangefeed.enabled` cluster setting. If you are working on a CockroachDB {{ site.data.products.standard }} or {{ site.data.products.basic }} cluster, this cluster setting is enabled by default.
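As an illustrative aside on the two schedule options in the hunk above, a minimal scheduled changefeed that sets both might look like the following sketch. The schedule label, sink URI, and watched table are placeholders, not values from this commit, and `initial_scan = 'only'` reflects that scheduled changefeeds perform an initial scan per run rather than running continuously.

```sql
-- A sketch of a daily scheduled changefeed; label, sink URI, and table
-- names are hypothetical.
CREATE SCHEDULE daily_users_export
  FOR CHANGEFEED INTO 'external://cloud-sink'
  WITH format = 'csv', initial_scan = 'only'
  AS SELECT * FROM movr.users
  RECURRING '@daily'
  WITH SCHEDULE OPTIONS
    on_execution_failure = 'retry',  -- retry the changefeed immediately on error
    on_previous_running = 'skip';    -- skip this run if the previous run is still going
```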
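The examples paragraph above also depends on the `kv.rangefeed.enabled` cluster setting; on a self-hosted cluster it can be enabled from a SQL shell:

```sql
-- Enable rangefeeds so changefeeds can run (already enabled by default
-- on CockroachDB Standard and Basic clusters).
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- Confirm the setting took effect.
SHOW CLUSTER SETTING kv.rangefeed.enabled;
```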
10 changes: 7 additions & 3 deletions src/current/v24.2/cutover-replication.md
@@ -167,17 +167,21 @@ During a replication stream, jobs running on the primary cluster will replicate
 [Backup schedules]({% link {{ page.version.version }}/manage-a-backup-schedule.md %}) will pause after cutover on the promoted cluster. Take the following steps to resume jobs:
 
 1. Verify that there are no other schedules running backups to the same [collection of backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections), i.e., the schedule that was running on the original primary cluster.
-1. Resume the backup schedule on the promoted cluster.
+1. [Resume]({% link {{ page.version.version }}/resume-schedules.md %}) the backup schedule on the promoted cluster.
 
 {{site.data.alerts.callout_info}}
-If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you must pause the schedule manually on the promoted cluster and then take the outlined steps.
+If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) the schedule manually on the promoted cluster and then take the outlined steps.
 {{site.data.alerts.end}}
 
 ### Changefeeds
 
 [Changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) will fail on the promoted cluster immediately after cutover to avoid two clusters running the same changefeed to one sink. We recommend that you recreate changefeeds on the promoted cluster.
 
-[Scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{% include_cached new-in.html version="v24.2" %} To avoid multiple clusters running the same schedule concurrently, [changefeed schedules]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) after physical cluster replication has completed.
 
+{{site.data.alerts.callout_info}}
+If your changefeed schedule was created on a cluster in v24.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{{site.data.alerts.end}}
+
 ## Cut back to the primary cluster
 
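A sketch of the backup-schedule steps above, once you have verified that no other schedule writes to the same backup collection; the schedule ID is hypothetical:

```sql
-- List schedules and note the ID of the paused backup schedule.
SHOW SCHEDULES;

-- Resume it by ID (placeholder value).
RESUME SCHEDULE 802024887734812673;
```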
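And a sketch of keeping the promoted cluster from emitting to the same sink, per the changefeed-schedule guidance in the hunk above; the schedule ID is again a placeholder:

```sql
-- Keep the schedule around but inert on this cluster.
PAUSE SCHEDULE 802024887734812674;

-- Or cancel it entirely if it should only ever run on the other cluster.
DROP SCHEDULE 802024887734812674;
```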
2 changes: 0 additions & 2 deletions src/current/v24.2/known-limitations.md
@@ -454,7 +454,6 @@ Accessing the DB Console for a secure cluster now requires login information (i.
 #### Physical cluster replication
 
 {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %}
 
 #### `RESTORE` limitations
@@ -478,7 +477,6 @@ As a workaround, take a cluster backup instead, as the `system.comments` table i
 Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:
 
 {% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 {% include {{ page.version.version }}/known-limitations/cdc-queries.md %}
 - {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
 - {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
1 change: 0 additions & 1 deletion src/current/v24.2/physical-cluster-replication-overview.md
@@ -35,7 +35,6 @@ You can use PCR in a disaster recovery plan to:
 ## Known limitations
 
 {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %}
 
 ## Performance
1 change: 0 additions & 1 deletion src/current/v24.3/create-and-configure-changefeeds.md
@@ -164,7 +164,6 @@ For more information, see [`EXPERIMENTAL CHANGEFEED FOR`]({% link {{ page.versio
 ## Known limitations
 
 {% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %}
 - {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
 - {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
4 changes: 4 additions & 0 deletions src/current/v24.3/create-schedule-for-changefeed.md
@@ -58,6 +58,10 @@ Option | Value | Description
 `on_execution_failure` | `retry` / `reschedule` / `pause` | Determine how the schedule handles an error. <br><br>`retry`: Retry the changefeed immediately.<br><br>`reschedule`: Reschedule the changefeed based on the `RECURRING` expression.<br><br>`pause`: Pause the schedule. This requires that you [resume the schedule]({% link {{ page.version.version }}/resume-schedules.md %}) manually.<br><br>**Default:** `reschedule`
 `on_previous_running` | `start` / `skip` / `wait` | Control whether the changefeed schedule should start a changefeed if the previous scheduled changefeed is still running.<br><br>`start`: Start the new changefeed anyway, even if the previous one is running.<br><br>`skip`: Skip the new changefeed and run the next changefeed based on the `RECURRING` expression.<br><br>`wait`: Wait for the previous changefeed to complete.<br><br>**Default:** `wait`
 
+{{site.data.alerts.callout_info}}
+To avoid multiple clusters running the same schedule concurrently, changefeed schedules will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) when [restored]({% link {{ page.version.version }}/restore.md %}) onto a different cluster or after [physical cluster replication]({% link {{ page.version.version }}/failover-replication.md %}) has completed.
+{{site.data.alerts.end}}
+
 ## Examples
 
 Before running any of the examples in this section, it is necessary to enable the `kv.rangefeed.enabled` cluster setting. If you are working on a CockroachDB {{ site.data.products.standard }} or {{ site.data.products.basic }} cluster, this cluster setting is enabled by default.
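Given the pause-on-restore behavior described in the callout added above, one way to surface schedules left paused after a restore or failover is a sketch like the following; it assumes `schedule_status` is among the columns and status strings reported by `SHOW SCHEDULES`:

```sql
-- Find schedules sitting paused after a restore or failover.
WITH s AS (SHOW SCHEDULES)
SELECT id, label, schedule_status
FROM s
WHERE schedule_status = 'PAUSED';
```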
10 changes: 7 additions & 3 deletions src/current/v24.3/failover-replication.md
@@ -167,17 +167,21 @@ During a replication stream, jobs running on the primary cluster will replicate
 [Backup schedules]({% link {{ page.version.version }}/manage-a-backup-schedule.md %}) will pause after failover on the promoted cluster. Take the following steps to resume jobs:
 
 1. Verify that there are no other schedules running backups to the same [collection of backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections), i.e., the schedule that was running on the original primary cluster.
-1. Resume the backup schedule on the promoted cluster.
+1. [Resume]({% link {{ page.version.version }}/resume-schedules.md %}) the backup schedule on the promoted cluster.
 
 {{site.data.alerts.callout_info}}
-If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you must pause the schedule manually on the promoted cluster and then take the outlined steps.
+If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) the schedule manually on the promoted cluster and then take the outlined steps.
 {{site.data.alerts.end}}
 
 ### Changefeeds
 
 [Changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) will fail on the promoted cluster immediately after failover to avoid two clusters running the same changefeed to one sink. We recommend that you recreate changefeeds on the promoted cluster.
 
-[Scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+To avoid multiple clusters running the same schedule concurrently, [changefeed schedules]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) after physical cluster replication has completed.
 
+{{site.data.alerts.callout_info}}
+If your changefeed schedule was created on a cluster in v24.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{{site.data.alerts.end}}
+
 ## Fail back to the primary cluster
 
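As a sketch of canceling a replicated changefeed schedule wholesale on the promoted cluster, per the hunk above; the label is a placeholder, and `PAUSE SCHEDULES` accepts the same select-clause form:

```sql
-- Drop every schedule with a matching (placeholder) label so this
-- cluster never emits to the changefeed sink.
DROP SCHEDULES
  WITH s AS (SHOW SCHEDULES)
  SELECT id FROM s WHERE label = 'daily_users_export';
```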
2 changes: 0 additions & 2 deletions src/current/v24.3/known-limitations.md
@@ -465,7 +465,6 @@ Accessing the DB Console for a secure cluster now requires login information (i.
 #### Physical cluster replication
 
 {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/failover-stop-application.md %}
 
 #### `RESTORE` limitations
@@ -489,7 +488,6 @@ As a workaround, take a cluster backup instead, as the `system.comments` table i
 Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:
 
 {% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 {% include {{ page.version.version }}/known-limitations/cdc-queries.md %}
 - {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
 - {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
1 change: 0 additions & 1 deletion src/current/v24.3/physical-cluster-replication-overview.md
@@ -39,7 +39,6 @@ You can use PCR in a disaster recovery plan to:
 ## Known limitations
 
 {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
 - {% include {{ page.version.version }}/known-limitations/failover-stop-application.md %}
 
 ## Performance
