diff --git a/src/current/_includes/v24.2/known-limitations/pcr-scheduled-changefeeds.md b/src/current/_includes/v24.2/known-limitations/pcr-scheduled-changefeeds.md
deleted file mode 100644
index 31fbf83187c..00000000000
--- a/src/current/_includes/v24.2/known-limitations/pcr-scheduled-changefeeds.md
+++ /dev/null
@@ -1 +0,0 @@
-After the [cutover process]({% link {{ page.version.version }}/cutover-replication.md %}) for [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-overview.md %}), [scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink. [#123776](https://github.com/cockroachdb/cockroach/issues/123776)
\ No newline at end of file
diff --git a/src/current/_includes/v24.3/known-limitations/pcr-scheduled-changefeeds.md b/src/current/_includes/v24.3/known-limitations/pcr-scheduled-changefeeds.md
deleted file mode 100644
index 3d6b8aa8628..00000000000
--- a/src/current/_includes/v24.3/known-limitations/pcr-scheduled-changefeeds.md
+++ /dev/null
@@ -1 +0,0 @@
-After the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) for [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-overview.md %}), [scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink. [#123776](https://github.com/cockroachdb/cockroach/issues/123776)
\ No newline at end of file
diff --git a/src/current/v24.2/create-and-configure-changefeeds.md b/src/current/v24.2/create-and-configure-changefeeds.md
index 69c3a1b8e47..c2815931249 100644
--- a/src/current/v24.2/create-and-configure-changefeeds.md
+++ b/src/current/v24.2/create-and-configure-changefeeds.md
@@ -186,7 +186,6 @@ For more information, see [`EXPERIMENTAL CHANGEFEED FOR`]({% link {{ page.versio
## Known limitations
{% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %}
- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
diff --git a/src/current/v24.2/create-schedule-for-changefeed.md b/src/current/v24.2/create-schedule-for-changefeed.md
index 6902e2cc135..6c3730172dc 100644
--- a/src/current/v24.2/create-schedule-for-changefeed.md
+++ b/src/current/v24.2/create-schedule-for-changefeed.md
@@ -58,6 +58,10 @@ Option | Value | Description
`on_execution_failure` | `retry` / `reschedule` / `pause` | Determine how the schedule handles an error. <br><br>`retry`: Retry the changefeed immediately. <br>`reschedule`: Reschedule the changefeed based on the `RECURRING` expression. <br>`pause`: Pause the schedule. This requires that you [resume the schedule]({% link {{ page.version.version }}/resume-schedules.md %}) manually. <br><br>**Default:** `reschedule`
`on_previous_running` | `start` / `skip` / `wait` | Control whether the changefeed schedule should start a changefeed if the previous scheduled changefeed is still running. <br><br>`start`: Start the new changefeed anyway, even if the previous one is running. <br>`skip`: Skip the new changefeed and run the next changefeed based on the `RECURRING` expression. <br>`wait`: Wait for the previous changefeed to complete. <br><br>**Default:** `wait`
+{{site.data.alerts.callout_info}}
+{% include_cached new-in.html version="v24.2" %} To avoid multiple clusters running the same schedule concurrently, changefeed schedules will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) when [restored]({% link {{ page.version.version }}/restore.md %}) onto a different cluster or after [physical cluster replication cutover]({% link {{ page.version.version }}/cutover-replication.md %}) has completed.
+{{site.data.alerts.end}}
+
## Examples
Before running any of the examples in this section, it is necessary to enable the `kv.rangefeed.enabled` cluster setting. If you are working on a CockroachDB {{ site.data.products.standard }} or {{ site.data.products.basic }} cluster, this cluster setting is enabled by default.
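For illustration, a minimal scheduled changefeed that sets both of the schedule options above might look like the following sketch. The sink URI, schedule label, and table are placeholders; scheduled changefeeds run as exports, so the changefeed uses `initial_scan = 'only'`:

~~~ sql
-- Enable rangefeeds if needed (enabled by default on Standard and Basic clusters).
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- Placeholder label, table, and sink URI; adjust for your environment.
CREATE SCHEDULE daily_users_export
  FOR CHANGEFEED TABLE movr.users
  INTO 'external://changefeed-sink'
  WITH format = 'csv', initial_scan = 'only'
  RECURRING '@daily'
  WITH SCHEDULE OPTIONS on_execution_failure = 'pause', on_previous_running = 'skip';
~~~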
diff --git a/src/current/v24.2/cutover-replication.md b/src/current/v24.2/cutover-replication.md
index 08a6b44e058..a70e693bf17 100644
--- a/src/current/v24.2/cutover-replication.md
+++ b/src/current/v24.2/cutover-replication.md
@@ -167,17 +167,21 @@ During a replication stream, jobs running on the primary cluster will replicate
[Backup schedules]({% link {{ page.version.version }}/manage-a-backup-schedule.md %}) will pause after cutover on the promoted cluster. Take the following steps to resume jobs:
1. Verify that there are no other schedules running backups to the same [collection of backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections), i.e., the schedule that was running on the original primary cluster.
-1. Resume the backup schedule on the promoted cluster.
+1. [Resume]({% link {{ page.version.version }}/resume-schedules.md %}) the backup schedule on the promoted cluster.
{{site.data.alerts.callout_info}}
-If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you must pause the schedule manually on the promoted cluster and then take the outlined steps.
+If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) the schedule manually on the promoted cluster and then take the outlined steps.
{{site.data.alerts.end}}
### Changefeeds
[Changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) will fail on the promoted cluster immediately after cutover to avoid two clusters running the same changefeed to one sink. We recommend that you recreate changefeeds on the promoted cluster.
-[Scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{% include_cached new-in.html version="v24.2" %} To avoid multiple clusters running the same schedule concurrently, [changefeed schedules]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) on the promoted cluster after cutover has completed.
+
+{{site.data.alerts.callout_info}}
+If your changefeed schedule was created on a cluster in v24.1 or earlier, it will **not** pause automatically on the promoted cluster after cutover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) or [cancel]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule manually on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{{site.data.alerts.end}}
## Cut back to the primary cluster
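After cutover, you can inspect schedule state on the promoted cluster and then resume or cancel each paused schedule. A sketch (the schedule ID is illustrative):

~~~ sql
-- List schedules and their current status on the promoted cluster.
SHOW SCHEDULES;

-- Resume a paused schedule once you have confirmed the original
-- primary cluster is no longer running it (ID is illustrative).
RESUME SCHEDULE 831280616536064001;

-- Or cancel the schedule if it should not run on this cluster.
DROP SCHEDULE 831280616536064001;
~~~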
diff --git a/src/current/v24.2/known-limitations.md b/src/current/v24.2/known-limitations.md
index d7dccc618a5..b6c00e689f2 100644
--- a/src/current/v24.2/known-limitations.md
+++ b/src/current/v24.2/known-limitations.md
@@ -454,7 +454,6 @@ Accessing the DB Console for a secure cluster now requires login information (i.
#### Physical cluster replication
{% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %}
#### `RESTORE` limitations
@@ -478,7 +477,6 @@ As a workaround, take a cluster backup instead, as the `system.comments` table i
Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:
{% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
{% include {{ page.version.version }}/known-limitations/cdc-queries.md %}
- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
diff --git a/src/current/v24.2/physical-cluster-replication-overview.md b/src/current/v24.2/physical-cluster-replication-overview.md
index c0b699faf1d..41259a7b312 100644
--- a/src/current/v24.2/physical-cluster-replication-overview.md
+++ b/src/current/v24.2/physical-cluster-replication-overview.md
@@ -35,7 +35,6 @@ You can use PCR in a disaster recovery plan to:
## Known limitations
{% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %}
## Performance
diff --git a/src/current/v24.3/create-and-configure-changefeeds.md b/src/current/v24.3/create-and-configure-changefeeds.md
index 112f2a32410..7ea21b2ece6 100644
--- a/src/current/v24.3/create-and-configure-changefeeds.md
+++ b/src/current/v24.3/create-and-configure-changefeeds.md
@@ -164,7 +164,6 @@ For more information, see [`EXPERIMENTAL CHANGEFEED FOR`]({% link {{ page.versio
## Known limitations
{% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %}
- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
diff --git a/src/current/v24.3/create-schedule-for-changefeed.md b/src/current/v24.3/create-schedule-for-changefeed.md
index 6902e2cc135..5822569b0dc 100644
--- a/src/current/v24.3/create-schedule-for-changefeed.md
+++ b/src/current/v24.3/create-schedule-for-changefeed.md
@@ -58,6 +58,10 @@ Option | Value | Description
`on_execution_failure` | `retry` / `reschedule` / `pause` | Determine how the schedule handles an error. <br><br>`retry`: Retry the changefeed immediately. <br>`reschedule`: Reschedule the changefeed based on the `RECURRING` expression. <br>`pause`: Pause the schedule. This requires that you [resume the schedule]({% link {{ page.version.version }}/resume-schedules.md %}) manually. <br><br>**Default:** `reschedule`
`on_previous_running` | `start` / `skip` / `wait` | Control whether the changefeed schedule should start a changefeed if the previous scheduled changefeed is still running. <br><br>`start`: Start the new changefeed anyway, even if the previous one is running. <br>`skip`: Skip the new changefeed and run the next changefeed based on the `RECURRING` expression. <br>`wait`: Wait for the previous changefeed to complete. <br><br>**Default:** `wait`
+{{site.data.alerts.callout_info}}
+To avoid multiple clusters running the same schedule concurrently, changefeed schedules will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) when [restored]({% link {{ page.version.version }}/restore.md %}) onto a different cluster or after [physical cluster replication failover]({% link {{ page.version.version }}/failover-replication.md %}) has completed.
+{{site.data.alerts.end}}
+
## Examples
Before running any of the examples in this section, it is necessary to enable the `kv.rangefeed.enabled` cluster setting. If you are working on a CockroachDB {{ site.data.products.standard }} or {{ site.data.products.basic }} cluster, this cluster setting is enabled by default.
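To verify this pause behavior after a restore or failover, you can query schedule status on the new cluster. A sketch, assuming the `schedule_status` column reported by `SHOW SCHEDULES` on your version:

~~~ sql
-- Find schedules that paused after the restore or failover completed.
SELECT id, label, schedule_status
FROM [SHOW SCHEDULES]
WHERE schedule_status = 'PAUSED';
~~~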
diff --git a/src/current/v24.3/failover-replication.md b/src/current/v24.3/failover-replication.md
index acce31d45c8..394d234fe29 100644
--- a/src/current/v24.3/failover-replication.md
+++ b/src/current/v24.3/failover-replication.md
@@ -167,17 +167,21 @@ During a replication stream, jobs running on the primary cluster will replicate
[Backup schedules]({% link {{ page.version.version }}/manage-a-backup-schedule.md %}) will pause after failover on the promoted cluster. Take the following steps to resume jobs:
1. Verify that there are no other schedules running backups to the same [collection of backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections), i.e., the schedule that was running on the original primary cluster.
-1. Resume the backup schedule on the promoted cluster.
+1. [Resume]({% link {{ page.version.version }}/resume-schedules.md %}) the backup schedule on the promoted cluster.
{{site.data.alerts.callout_info}}
-If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you must pause the schedule manually on the promoted cluster and then take the outlined steps.
+If your backup schedule was created on a cluster in v23.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) the schedule manually on the promoted cluster and then take the outlined steps.
{{site.data.alerts.end}}
### Changefeeds
[Changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) will fail on the promoted cluster immediately after failover to avoid two clusters running the same changefeed to one sink. We recommend that you recreate changefeeds on the promoted cluster.
-[Scheduled changefeeds]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will continue on the promoted cluster. You will need to manage [pausing]({% link {{ page.version.version }}/pause-schedules.md %}) or [canceling]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+To avoid multiple clusters running the same schedule concurrently, [changefeed schedules]({% link {{ page.version.version }}/create-schedule-for-changefeed.md %}) will [pause]({% link {{ page.version.version }}/pause-schedules.md %}) on the promoted cluster after failover has completed.
+
+{{site.data.alerts.callout_info}}
+If your changefeed schedule was created on a cluster in v24.1 or earlier, it will **not** pause automatically on the promoted cluster after failover. In this case, you must [pause]({% link {{ page.version.version }}/pause-schedules.md %}) or [cancel]({% link {{ page.version.version }}/drop-schedules.md %}) the schedule manually on the promoted standby cluster to avoid two clusters running the same changefeed to one sink.
+{{site.data.alerts.end}}
## Fail back to the primary cluster
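Before failing back, make sure a schedule is not left running on both clusters; for example, pause it on whichever cluster should not own it. A sketch (the schedule ID is illustrative):

~~~ sql
-- Pause the schedule on the cluster that should not run it during failback.
PAUSE SCHEDULE 831280616536064001;
~~~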
diff --git a/src/current/v24.3/known-limitations.md b/src/current/v24.3/known-limitations.md
index 792ba38c91f..184ff766a49 100644
--- a/src/current/v24.3/known-limitations.md
+++ b/src/current/v24.3/known-limitations.md
@@ -465,7 +465,6 @@ Accessing the DB Console for a secure cluster now requires login information (i.
#### Physical cluster replication
{% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/failover-stop-application.md %}
#### `RESTORE` limitations
@@ -489,7 +488,6 @@ As a workaround, take a cluster backup instead, as the `system.comments` table i
Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:
{% include {{ page.version.version }}/known-limitations/cdc.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
{% include {{ page.version.version }}/known-limitations/cdc-queries.md %}
- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %}
- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %}
diff --git a/src/current/v24.3/physical-cluster-replication-overview.md b/src/current/v24.3/physical-cluster-replication-overview.md
index 8ad79e3d6d0..d0e4f831a4f 100644
--- a/src/current/v24.3/physical-cluster-replication-overview.md
+++ b/src/current/v24.3/physical-cluster-replication-overview.md
@@ -39,7 +39,6 @@ You can use PCR in a disaster recovery plan to:
## Known limitations
{% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %}
-- {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %}
- {% include {{ page.version.version }}/known-limitations/failover-stop-application.md %}
## Performance