diff --git a/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md b/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md
index f7e5786bc70..6a9889d3033 100644
--- a/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md
+++ b/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md
@@ -12,13 +12,17 @@ date: 2021-11-29
is_featured: true
---
+import Latest from '/snippets/_release-stages-from-versionless.md'
+
+
+
:::tip February 2024 Update
It's been a few years since dbt-core turned 1.0! Since then, we've committed to releasing zero breaking changes whenever possible and it's become much easier to upgrade dbt Core versions.
In 2024, we're taking this promise further by:
- Stabilizing interfaces for everyone — adapter maintainers, metadata consumers, and (of course) people writing dbt code everywhere — as discussed in [our November 2023 roadmap update](https://github.com/dbt-labs/dbt-core/blob/main/docs/roadmap/2023-11-dbt-tng.md).
-- Introducing **Versionless** in dbt Cloud. No more manual upgrades and no more need for _a second sandbox project_ just to try out new features in development. For more details, refer to [Upgrade Core version in Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud).
+- Introducing **Latest** release track in dbt Cloud. No more manual upgrades and no need for _a second sandbox project_ just to try out new features in development. For more details, refer to [Upgrade Core version in Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud).
We're leaving the rest of this post as is, so we can all remember how it used to be. Enjoy a stroll down memory lane.
diff --git a/website/blog/2024-04-22-extended-attributes.md b/website/blog/2024-04-22-extended-attributes.md
index 18d4ff0b64c..9013af81d47 100644
--- a/website/blog/2024-04-22-extended-attributes.md
+++ b/website/blog/2024-04-22-extended-attributes.md
@@ -12,6 +12,10 @@ date: 2024-04-22
is_featured: true
---
+import Latest from '/snippets/_release-stages-from-versionless.md'
+
+
+
dbt Cloud now includes a suite of new features that enable configuring precise and unique connections to data platforms at the environment and user level. These enable more sophisticated setups, like connecting a project to multiple warehouse accounts, first-class support for [staging environments](/docs/deploy/deploy-environments#staging-environment), and user-level [overrides for specific dbt versions](/docs/dbt-versions/upgrade-dbt-version-in-cloud#override-dbt-version). This gives dbt Cloud developers the features they need to tackle more complex tasks, like Write-Audit-Publish (WAP) workflows and safely testing dbt version upgrades. While you still configure a default connection at the project level and per-developer, you now have tools to get more advanced in a secure way. Soon, dbt Cloud will take this even further allowing multiple connections to be set globally and reused with _global connections_.
@@ -80,7 +84,7 @@ All you need to do is configure an environment as staging and enable the **Defer
## Upgrading on a curve
-Lastly, let’s consider a more specialized use case. Imagine we have a "tiger team" (consisting of a lone analytics engineer named Dave) tasked with upgrading from dbt version 1.6 to the new **Versionless** setting, to take advantage of added stability and feature access. We want to keep the rest of the data team being productive in dbt 1.6 for the time being, while enabling Dave to upgrade and do his work in the new versionless mode.
+Lastly, let’s consider a more specialized use case. Imagine we have a "tiger team" (consisting of a lone analytics engineer named Dave) tasked with upgrading from dbt version 1.6 to the new **Latest release track**, to take advantage of new features and performance improvements. We want to keep the rest of the data team productive in dbt 1.6 for the time being, while enabling Dave to upgrade and do his work with the Latest (and greatest) dbt.
### Development environment
diff --git a/website/blog/2024-05-22-latest-dbt-stability-improvement-innovation.md b/website/blog/2024-05-22-latest-dbt-stability-improvement-innovation.md
index 078dab198fa..f2c25f3da8c 100644
--- a/website/blog/2024-05-22-latest-dbt-stability-improvement-innovation.md
+++ b/website/blog/2024-05-22-latest-dbt-stability-improvement-innovation.md
@@ -1,5 +1,5 @@
---
-title: "How we're making sure you can confidently go \"Versionless\" in dbt Cloud"
+title: "How we're making sure you can confidently switch to the \"Latest\" release track in dbt Cloud"
description: "Over the past 6 months, we've laid a stable foundation for continuously improving dbt."
slug: latest-dbt-stability
@@ -12,23 +12,27 @@ date: 2024-05-02
is_featured: true
---
+import Latest from '/snippets/_release-stages-from-versionless.md'
+
+
+
As long as dbt Cloud has existed, it has required users to select a version of dbt Core to use under the hood in their jobs and environments. This made sense in the earliest days, when dbt Core minor versions often included breaking changes. It provided a clear way for everyone to know which version of the underlying runtime they were getting.
However, this came at a cost. While bumping a project's dbt version *appeared* as simple as selecting from a dropdown, there was real effort required to test the compatibility of the new version against existing projects, package dependencies, and adapters. On the other hand, putting this off meant foregoing access to new features and bug fixes in dbt.
-But no more. Today, we're ready to announce the general availability of a new option in dbt Cloud: [**"Versionless."**](https://docs.getdbt.com/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)
+But no more. Today, we're ready to announce the general availability of a new option in dbt Cloud: [**the "Latest" release track.**](/docs/dbt-versions/cloud-release-tracks)
For customers, this means less maintenance overhead, faster access to bug fixes and features, and more time to focus on what matters most: building trusted data products. This will be our stable foundation for improvement and innovation in dbt Cloud.
-But we wanted to go a step beyond just making this option available to you. In this blog post, we aim to shed a little light on the extensive work we've done to ensure that using "Versionless" is a stable, reliable experience for the thousands of customers who rely daily on dbt Cloud.
+But we wanted to go a step beyond just making this option available to you. In this blog post, we aim to shed a little light on the extensive work we've done to ensure that using the "Latest" release track is a stable and reliable experience for the thousands of customers who rely daily on dbt Cloud.
## How we safely deploy dbt upgrades to Cloud
We've put in place a rigorous, best-in-class suite of tests and control mechanisms to ensure that all changes to dbt under the hood are fully vetted before they're deployed to customers of dbt Cloud.
-This pipeline has in fact been in place since January! It's how we've already been shipping continuous changes to the hundreds of customers who've selected "Versionless" while it's been in Beta and Preview. In that time, this process has enabled us to prevent multiple regressions before they were rolled out to any customers.
+This pipeline has in fact been in place since January! It's how we've already been shipping continuous changes to the hundreds of customers who've selected the "Latest" release track while it's been in Beta and Preview. In that time, this process has enabled us to prevent multiple regressions before they were rolled out to any customers.
We're very confident in the robustness of this process. **We also know that we'll need to continue building trust with time.** We're sharing details about this work in the spirit of transparency and to build that trust.
@@ -82,9 +86,9 @@ All incidents are retrospected to make sure we not only identify and fix the roo
:::
-The outcome of this process is that, when you select "Versionless" in dbt Cloud, the time between an improvement being made to dbt Core and you *safely* getting access to it in your projects is a matter of days — rather than months of waiting for the next dbt Core release, on top of any additional time it may have taken to actually carry out the upgrade.
+The outcome of this process is that, when you select the "Latest" release track in dbt Cloud, the time between an improvement being made to dbt Core and you *safely* getting access to it in your projects is a matter of days — rather than months of waiting for the next dbt Core release, on top of any additional time it may have taken to actually carry out the upgrade.
-We’re pleased to say that since the beta launch of “Versionless” in dbt Cloud in March, **we have not had any functional regressions reach customers**, while we’ve also been shipping multiple improvements to dbt functionality every day. This is a foundation that we aim to build on for the foreseeable future.
+We’re pleased to say that, at the time of writing (May 2, 2024), since the beta launch of the "Latest" release track in dbt Cloud in March, **we have not had any functional regressions reach customers**, while we’ve also been shipping multiple improvements to dbt functionality every day. This is a foundation that we aim to build on for the foreseeable future.
## Stability as a feature
@@ -98,7 +102,7 @@ The adapter interface — i.e. how dbt Core actually connects to a third-party d
To solve that, we've released a new set of interfaces that are entirely independent of the `dbt-core` library: [`dbt-adapters==1.0.0`](https://github.com/dbt-labs/dbt-adapters). From now on, any changes to `dbt-adapters` will be backward and forward-compatible. This also decouples adapter maintenance from the regular release cadence of dbt Core — meaning maintainers get full control over when they ship implementations of new adapter-powered features.
-Note that adapters running in dbt Cloud **must** be [migrated to the new decoupled architecture](https://github.com/dbt-labs/dbt-adapters/discussions/87) as a baseline in order to support the new "Versionless" option.
+Note that adapters running in dbt Cloud **must** be [migrated to the new decoupled architecture](https://github.com/dbt-labs/dbt-adapters/discussions/87) as a baseline in order to support the new "Latest" release track.
### Managing behavior changes: stability as a feature
@@ -118,7 +122,7 @@ We’ve now [formalized our development best practices](https://github.com/dbt-l
In conclusion, we’re putting a lot of new muscle behind our commitments to dbt Cloud customers, the dbt Community, and the broader ecosystem:
-- **Continuous updates**: "Versionless" dbt Cloud simplifies the update process, ensuring you always have the latest features and bug fixes without the maintenance overhead.
+- **Continuous updates**: The "Latest" release track in dbt Cloud simplifies the update process, ensuring you always have the latest features and bug fixes without the maintenance overhead.
- **A rigorous new testing and deployment process**: Our new testing pipeline ensures that every update is carefully vetted against documented interfaces, Cloud-supported adapters, and popular packages before it reaches you. This process minimizes the risk of regressions — and has now been successful at entirely preventing them for hundreds of customers over multiple months.
- **A commitment to stability**: We’ve reworked our approaches to adapter interfaces, behaviour change management, and metadata artifacts to give you more stability and control.
diff --git a/website/blog/2024-06-12-putting-your-dag-on-the-internet.md b/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
index 535cfc34d6e..a8c3bebb61f 100644
--- a/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
+++ b/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
@@ -12,6 +12,10 @@ date: 2024-06-14
is_featured: true
---
+import Latest from '/snippets/_release-stages-from-versionless.md'
+
+
+
**New in dbt: allow Snowflake Python models to access the internet**
With dbt 1.8, dbt released support for Snowflake’s [external access integrations](https://docs.snowflake.com/en/developer-guide/external-network-access/external-network-access-overview) further enabling the use of dbt + AI to enrich your data. This allows querying of external APIs within dbt Python models, a functionality that was required for dbt Cloud customer, [EQT AB](https://eqtgroup.com/). Learn about why they needed it and how they helped build the feature and get it shipped!
@@ -114,6 +118,6 @@ Traditionally dbt is the T in ELT (dbt overview [here](https://docs.getdbt.com/t
In order to get this functionality shipped quickly, EQT opened a pull request, Snowflake helped with some problems we had with CI and a member of dbt Labs helped write the tests and merge the code in!
-dbt now features this functionality in dbt 1.8+ or the “Versionless” option of dbt Cloud (dbt overview [here](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)).
+dbt now features this functionality in dbt 1.8+ and the "Latest" release track in dbt Cloud (dbt overview [here](/docs/dbt-versions/cloud-release-tracks)).
dbt Labs staff and community members would love to chat more about it in the [#db-snowflake](https://getdbt.slack.com/archives/CJN7XRF1B) slack channel.
diff --git a/website/dbt-versions.js b/website/dbt-versions.js
index f84184a486c..13ce565d354 100644
--- a/website/dbt-versions.js
+++ b/website/dbt-versions.js
@@ -16,7 +16,7 @@
exports.versions = [
{
version: "1.10",
- customDisplay: "Cloud (Versionless)",
+ customDisplay: "Cloud (Latest)",
},
{
version: "1.9",
diff --git a/website/docs/best-practices/how-we-style/2-how-we-style-our-sql.md b/website/docs/best-practices/how-we-style/2-how-we-style-our-sql.md
index 8c61e63b888..35e025faf3f 100644
--- a/website/docs/best-practices/how-we-style/2-how-we-style-our-sql.md
+++ b/website/docs/best-practices/how-we-style/2-how-we-style-our-sql.md
@@ -8,8 +8,8 @@ id: 2-how-we-style-our-sql
- ☁️ Use [SQLFluff](https://sqlfluff.com/) to maintain these style rules automatically.
- Customize `.sqlfluff` configuration files to your needs.
- Refer to our [SQLFluff config file](https://github.com/dbt-labs/jaffle-shop-template/blob/main/.sqlfluff) for the rules we use in our own projects.
-
- - Exclude files and directories by using a standard `.sqlfluffignore` file. Learn more about the syntax in the [.sqlfluffignore syntax docs](https://docs.sqlfluff.com/en/stable/configuration.html#id2).
+ - Exclude files and directories by using a standard `.sqlfluffignore` file. Learn more about the syntax in the [.sqlfluffignore syntax docs](https://docs.sqlfluff.com/en/stable/configuration/index.html).
+ - Excluding unnecessary folders and files (such as `target/`, `dbt_packages/`, and `macros/`) can speed up linting, improve run times, and help you avoid irrelevant logs.
- 👻 Use Jinja comments (`{# #}`) for comments that should not be included in the compiled SQL.
- ⏭️ Use trailing commas.
- 4️⃣ Indents should be four spaces.
diff --git a/website/docs/docs/build/exposures.md b/website/docs/docs/build/exposures.md
index 1a85d5fb415..16dfd0e5f73 100644
--- a/website/docs/docs/build/exposures.md
+++ b/website/docs/docs/build/exposures.md
@@ -69,7 +69,7 @@ dbt test -s +exposure:weekly_jaffle_report
```
-When we generate the dbt Explorer site, you'll see the exposure appear:
+When we generate the [dbt Explorer site](/docs/collaborate/explore-projects), you'll see the exposure appear:
diff --git a/website/docs/docs/build/hooks-operations.md b/website/docs/docs/build/hooks-operations.md
index 6cec2a673c0..842d3fb99a3 100644
--- a/website/docs/docs/build/hooks-operations.md
+++ b/website/docs/docs/build/hooks-operations.md
@@ -40,8 +40,6 @@ Hooks are snippets of SQL that are executed at different times:
Hooks are a more-advanced capability that enable you to run custom SQL, and leverage database-specific actions, beyond what dbt makes available out-of-the-box with standard materializations and configurations.
-
-
If (and only if) you can't leverage the [`grants` resource-config](/reference/resource-configs/grants), you can use `post-hook` to perform more advanced workflows:
* Need to apply `grants` in a more complex way, which the dbt Core `grants` config doesn't (yet) support.
diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md
index 0ee047a4611..55c7dc92367 100644
--- a/website/docs/docs/build/incremental-microbatch.md
+++ b/website/docs/docs/build/incremental-microbatch.md
@@ -8,7 +8,7 @@ id: "incremental-microbatch"
:::info Microbatch
-The new `microbatch` strategy is available in beta for [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) and dbt Core v1.9.
+The new `microbatch` strategy is available in beta for [dbt Cloud "Latest"](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9.
If you use a custom microbatch macro, set a [distinct behavior flag](/reference/global-configs/behavior-changes#custom-microbatch-strategy) in your `dbt_project.yml` to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the [microbatch strategy](#how-microbatch-compares-to-other-incremental-strategies).
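+
+For reference, a minimal sketch of what setting that flag in `dbt_project.yml` can look like (confirm the exact flag name against the linked behavior change docs):
+
+```yaml
+flags:
+  require_batched_execution_for_custom_microbatch_strategy: true
+```
+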
@@ -36,7 +36,7 @@ Each "batch" corresponds to a single bounded time period (by default, a single d
This is a powerful abstraction that makes it possible for dbt to run batches [separately](#backfills), concurrently, and [retry](#retry) them independently.
-### Example
+## Example
A `sessions` model aggregates and enriches data that comes from two other models:
- `page_views` is a large, time-series table. It contains many rows, new records almost always arrive after existing ones, and existing records rarely update. It uses the `page_view_start` column as its `event_time`.
@@ -175,7 +175,7 @@ It does not matter whether the table already contains data for that day. Given t
-### Relevant configs
+## Relevant configs
Several configurations are relevant to microbatch models, and some are required:
@@ -188,9 +188,50 @@ Several configurations are relevant to microbatch models, and some are required:
+### Required configs for specific adapters
+Some adapters require additional configurations for the microbatch strategy. This is because each adapter implements the microbatch strategy differently.
+
+The following table lists the required configurations for the specific adapters, in addition to the standard microbatch configs:
+
+| Adapter | `unique_key` config | `partition_by` config |
+|----------|------------------|--------------------|
+| [`dbt-postgres`](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ Required | N/A |
+| [`dbt-spark`](/reference/resource-configs/spark-configs#incremental-models) | N/A | ✅ Required |
+| [`dbt-bigquery`](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | N/A | ✅ Required |
+
+For example, if you're using `dbt-postgres`, configure `unique_key` as follows:
+
+
+
+```sql
+{# unique_key is required for dbt-postgres with the microbatch strategy #}
+{{ config(
+ materialized='incremental',
+ incremental_strategy='microbatch',
+ unique_key='sales_id',
+ event_time='transaction_date',
+ begin='2023-01-01',
+ batch_size='day'
+) }}
+
+select
+ sales_id,
+ transaction_date,
+ customer_id,
+ product_id,
+ total_amount
+from {{ source('sales', 'transactions') }}
+
+```
+
+ In this example, `unique_key` is required because `dbt-postgres` microbatch uses the `merge` strategy, which needs a `unique_key` to identify which rows in the data warehouse need to get merged. Without a `unique_key`, dbt won't be able to match rows between the incoming batch and the existing table.
+
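+Similarly, `dbt-spark` and `dbt-bigquery` require a `partition_by` config (see the table above), typically partitioning on the `event_time` column at a granularity that matches your `batch_size`. As a rough sketch for `dbt-bigquery` (the model and column names here are hypothetical), these configs can also be set from the model's properties YAML:
+
+```yaml
+models:
+  - name: sessions
+    config:
+      materialized: incremental
+      incremental_strategy: microbatch
+      event_time: session_start
+      begin: "2023-01-01"
+      batch_size: day
+      partition_by:
+        field: session_start
+        data_type: timestamp
+        granularity: day  # keep the partition granularity aligned with batch_size
+```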
+
+
+### Full refresh
+
As a best practice, we recommend configuring `full_refresh: False` on microbatch models so that they ignore invocations with the `--full-refresh` flag. If you need to reprocess historical data, do so with a targeted backfill that specifies explicit start and end dates.
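+
+For example, a minimal sketch of this config in the model's properties YAML (the model name is hypothetical):
+
+```yaml
+models:
+  - name: sessions
+    config:
+      full_refresh: false  # ignore --full-refresh invocations for this microbatch model
+```
+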
-### Usage
+## Usage
**You must write your model query to process (read and return) exactly one "batch" of data**. This is a simplifying assumption and a powerful one:
- You don’t need to think about `is_incremental` filtering
@@ -207,7 +248,7 @@ During standard incremental runs, dbt will process batches according to the curr
**Note:** If there’s an upstream model that configures `event_time`, but you *don’t* want the reference to it to be filtered, you can specify `ref('upstream_model').render()` to opt-out of auto-filtering. This isn't generally recommended — most models that configure `event_time` are fairly large, and if the reference is not filtered, each batch will perform a full scan of this input table.
-### Backfills
+## Backfills
Whether to fix erroneous source data or retroactively apply a change in business logic, you may need to reprocess a large amount of historical data.
@@ -222,13 +263,13 @@ dbt run --event-time-start "2024-09-01" --event-time-end "2024-09-04"
-### Retry
+## Retry
If one or more of your batches fail, you can use `dbt retry` to reprocess _only_ the failed batches.
![Partial retry](https://github.com/user-attachments/assets/f94c4797-dcc7-4875-9623-639f70c97b8f)
-### Timezones
+## Timezones
For now, dbt assumes that all values supplied are in UTC:
diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md
index 48e46caeec2..5499c61a8e4 100644
--- a/website/docs/docs/build/metricflow-time-spine.md
+++ b/website/docs/docs/build/metricflow-time-spine.md
@@ -7,7 +7,7 @@ tags: [Metrics, Semantic Layer]
---
-
+
It's common in analytics engineering to have a date dimension or "time spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarters, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain.
@@ -108,7 +108,7 @@ models:
- It needs to reference a column defined under the `columns` key, in this case, `date_hour` and `date_day`, respectively.
- It sets the granularity at the column-level using the `granularity` key, in this case, `hour` and `day`, respectively.
- MetricFlow will use the `standard_granularity_column` as the join key when joining the time spine table to another source table.
-- [The `custom_granularities` field](#custom-calendar), (available in Versionless and dbt v1.9 and higher) lets you specify non-standard time periods like `fiscal_year` or `retail_month` that your organization may use.
+- [The `custom_granularities` field](#custom-calendar), (available in dbt Cloud Latest and dbt Core v1.9 and higher) lets you specify non-standard time periods like `fiscal_year` or `retail_month` that your organization may use.
For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.
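+
+As a rough sketch, adding a custom granularity to a daily time spine might look like this (the column and granularity names are illustrative; check the property names against the example linked above):
+
+```yaml
+models:
+  - name: time_spine
+    time_spine:
+      standard_granularity_column: date_day
+      custom_granularities:
+        - name: fiscal_quarter
+          column_name: fiscal_quarter_column  # optional if the column shares the granularity's name
+    columns:
+      - name: date_day
+        granularity: day
+      - name: fiscal_quarter_column
+```
+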
@@ -310,9 +310,7 @@ You only need to include the `date_day` column in the table. MetricFlow can hand
-The ability to configure custom calendars, such as a fiscal calendar, is available in [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) or dbt Core [v1.9 and higher](/docs/dbt-versions/core).
-
-To access this feature, [upgrade to Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) or your dbt Core version to v1.9 or higher.
+The ability to configure custom calendars, such as a fiscal calendar, is available now in [the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks), and it will be available in [dbt Core v1.9+](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9).
diff --git a/website/docs/docs/build/metrics-overview.md b/website/docs/docs/build/metrics-overview.md
index f1afa1f37b3..57cdd929acb 100644
--- a/website/docs/docs/build/metrics-overview.md
+++ b/website/docs/docs/build/metrics-overview.md
@@ -95,7 +95,8 @@ import SLCourses from '/snippets/_sl-course.md';
Default time granularity for metrics is useful if your time dimension has a very fine grain, like second or hour, but you typically query metrics rolled up at a coarser grain.
-To set the default time granularity for metrics, you need to be on dbt Cloud Versionless or dbt v1.9 and higher.
+Default time granularity for metrics is available now in [the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks), and it will be available in [dbt Core v1.9+](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9).
+
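+As a rough sketch, once available, a default granularity on a metric might look like this (the metric and measure names are hypothetical; confirm the property name against the metric spec for your version):
+
+```yaml
+metrics:
+  - name: monthly_revenue
+    label: Monthly revenue
+    type: simple
+    type_params:
+      measure: revenue
+    time_granularity: month  # roll this metric up to month by default
+```
+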
diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md
index 49cd7e00b1c..9ba4ceeaff5 100644
--- a/website/docs/docs/build/packages.md
+++ b/website/docs/docs/build/packages.md
@@ -162,7 +162,7 @@ Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's creat
#### SSH Key Method (Command Line only)
If you're using the Command Line, private packages can be cloned via SSH and an SSH key.
-When you use SSH keys to authenticate to your git remote server, you don’t need to supply your username and password each time. Read more about SSH keys, how to generate them, and how to add them to your git provider here: [Github](https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh) and [GitLab](https://docs.gitlab.com/ee/ssh/).
+When you use SSH keys to authenticate to your git remote server, you don’t need to supply your username and password each time. Read more about SSH keys, how to generate them, and how to add them to your git provider here: [Github](https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh) and [GitLab](https://docs.gitlab.com/ee/user/ssh.html).
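+
+For example, a `packages.yml` entry for a private package cloned over SSH might look like this (the repository URL and revision are placeholders):
+
+```yaml
+packages:
+  - git: "git@github.com:your-org/your-private-dbt-package.git"
+    revision: 1.0.0
+```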
diff --git a/website/docs/docs/build/snapshots.md b/website/docs/docs/build/snapshots.md
index 9a020c7c940..f72f1eb75de 100644
--- a/website/docs/docs/build/snapshots.md
+++ b/website/docs/docs/build/snapshots.md
@@ -10,8 +10,7 @@ id: "snapshots"
* [Snapshot properties](/reference/snapshot-properties)
* [`snapshot` command](/reference/commands/snapshot)
-
-### What are snapshots?
+## What are snapshots?
Analysts often need to "look back in time" at previous data states in their mutable tables. While some source data systems are built in a way that makes accessing historical data possible, this is not always the case. dbt provides a mechanism, **snapshots**, which records changes to a mutable table over time.
Snapshots implement [type-2 Slowly Changing Dimensions](https://en.wikipedia.org/wiki/Slowly_changing_dimension#Type_2:_add_new_row) over mutable source tables. These Slowly Changing Dimensions (or SCDs) identify how a row in a table changes over time. Imagine you have an `orders` table where the `status` field can be overwritten as the order is processed.
@@ -39,7 +38,8 @@ This order is now in the "shipped" state, but we've lost the information about w
- To configure snapshots in versions 1.8 and earlier, refer to [Configure snapshots in versions 1.8 and earlier](#configure-snapshots-in-versions-18-and-earlier). These versions use an older syntax where snapshots are defined within a snapshot block in a `.sql` file, typically located in your `snapshots` directory.
-- Note that defining multiple resources in a single file can significantly slow down parsing and compilation. For faster and more efficient management, consider the updated snapshot YAML syntax, [available in Versionless](/docs/dbt-versions/versionless-cloud) or [dbt Core v1.9 and later](/docs/dbt-versions/core).
+- Note that defining multiple resources in a single file can significantly slow down parsing and compilation. For faster and more efficient management, consider the updated snapshot YAML syntax, [available now in the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks) or [dbt Core v1.9 and later](/docs/dbt-versions/core).
+ - For more information on how to migrate from the legacy snapshot configurations to the updated snapshot YAML syntax, refer to [Snapshot configuration migration](/reference/snapshot-configs#snapshot-configuration-migration).
@@ -63,9 +63,9 @@ snapshots:
[unique_key](/reference/resource-configs/unique_key): column_name_or_expression
[check_cols](/reference/resource-configs/check_cols): [column_name] | all
[updated_at](/reference/resource-configs/updated_at): column_name
- [invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes): true | false
[snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names): dictionary
[dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current): string
+ [hard_deletes](/reference/resource-configs/hard-deletes): ignore | invalidate | new_record
```
@@ -81,9 +81,9 @@ The following table outlines the configurations available for snapshots:
| [unique_key](/reference/resource-configs/unique_key) | A column(s) (string or array) or expression for the record | Yes | `id` or `[order_id, product_id]` |
| [check_cols](/reference/resource-configs/check_cols) | If using the `check` strategy, then the columns to check | Only if using the `check` strategy | ["status"] |
| [updated_at](/reference/resource-configs/updated_at) | If using the `timestamp` strategy, the timestamp column to compare | Only if using the `timestamp` strategy | updated_at |
-| [invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes) | Find hard deleted records in source and set `dbt_valid_to` to current time if the record no longer exists | No | True |
| [dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current) | Set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.| No | string |
| [snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names) | Customize the names of the snapshot meta fields | No | dictionary |
+| [hard_deletes](/reference/resource-configs/hard-deletes) | Specify how to handle deleted rows from the source. Supported options are `ignore` (default), `invalidate` (replaces the legacy `invalidate_hard_deletes=true`), and `new_record`.| No | string |
- In versions prior to v1.9, the `target_schema` (required) and `target_database` (optional) configurations defined a single schema or database to build a snapshot across users and environment. This created problems when testing or developing a snapshot, as there was no clear separation between development and production environments. In v1.9, `target_schema` became optional, allowing snapshots to be environment-aware. By default, without `target_schema` or `target_database` defined, snapshots now use the `generate_schema_name` or `generate_database_name` macros to determine where to build. Developers can still set a custom location with [`schema`](/reference/resource-configs/schema) and [`database`](/reference/resource-configs/database) configs, consistent with other resource types.
@@ -172,7 +172,7 @@ This strategy handles column additions and deletions better than the `check` str
-By default, `dbt_valid_to` is `NULL` for current records. However, if you set the [`dbt_valid_to_current` configuration](/reference/resource-configs/dbt_valid_to_current) (available in Versionless and 1.9 and higher), `dbt_valid_to` will be set to your specified value (such as `9999-12-31`) for current records.
+By default, `dbt_valid_to` is `NULL` for current records. However, if you set the [`dbt_valid_to_current` configuration](/reference/resource-configs/dbt_valid_to_current) (available in dbt Core v1.9+), `dbt_valid_to` will be set to your specified value (such as `9999-12-31`) for current records.
This allows for straightforward date range filtering.
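+
+For example, a minimal sketch of setting this on a snapshot (the snapshot name is hypothetical):
+
+```yaml
+snapshots:
+  - name: orders_snapshot
+    relation: source('jaffle_shop', 'orders')
+    config:
+      unique_key: id
+      strategy: timestamp
+      updated_at: updated_at
+      dbt_valid_to_current: "to_date('9999-12-31')"
+```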
@@ -210,15 +210,19 @@ Snapshots can't be rebuilt. Because of this, it's a good idea to put snapshots i
### How snapshots work
When you run the [`dbt snapshot` command](/reference/commands/snapshot):
-* **On the first run:** dbt will create the initial snapshot table — this will be the result set of your `select` statement, with additional columns including `dbt_valid_from` and `dbt_valid_to`. All records will have a `dbt_valid_to = null` or the value specified in [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) (available in Versionless and 1.9 and higher) if configured.
+* **On the first run:** dbt will create the initial snapshot table — this will be the result set of your `select` statement, with additional columns including `dbt_valid_from` and `dbt_valid_to`. All records will have a `dbt_valid_to = null` or the value specified in [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) (available in dbt Core 1.9+) if configured.
* **On subsequent runs:** dbt will check which records have changed or if any new records have been created:
- The `dbt_valid_to` column will be updated for any existing records that have changed.
- - The updated record and any new records will be inserted into the snapshot table. These records will now have `dbt_valid_to = null` or the value configured in `dbt_valid_to_current` (available in Versionless and 1.9 and higher).
+ - The updated record and any new records will be inserted into the snapshot table. These records will now have `dbt_valid_to = null` or the value configured in `dbt_valid_to_current` (available in dbt Core v1.9+).
+
+
#### Note
- These column names can be customized to your team or organizational conventions using the [snapshot_meta_column_names](#snapshot-meta-fields) config.
- Use the `dbt_valid_to_current` config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date such as `9999-12-31`). By default, this value is `NULL`. When set, dbt will use this specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.
-
+- Use the [`hard_deletes`](/reference/resource-configs/hard-deletes) config to track hard deletes by adding a new record when rows become "deleted" in the source. Supported options are `ignore`, `invalidate`, and `new_record`.
+
+
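+For example, a minimal sketch of renaming the meta fields with `snapshot_meta_column_names` (the custom column names are just examples):
+
+```yaml
+snapshots:
+  - name: orders_snapshot
+    relation: source('jaffle_shop', 'orders')
+    config:
+      unique_key: id
+      strategy: timestamp
+      updated_at: updated_at
+      snapshot_meta_column_names:
+        dbt_valid_from: start_date
+        dbt_valid_to: end_date
+        dbt_scd_id: scd_id
+        dbt_updated_at: modified_date
+```
+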
Snapshots can be referenced in downstream models the same way as referencing models — by using the [ref](/reference/dbt-jinja-functions/ref) function.
## Detecting row changes
@@ -294,7 +298,7 @@ The `check` snapshot strategy can be configured to track changes to _all_ column
:::
-**Example Usage**
+**Example usage**
@@ -344,15 +348,64 @@ snapshots:
### Hard deletes (opt-in)
+
+
+In dbt v1.9 and higher, the [`hard_deletes`](/reference/resource-configs/hard-deletes) config replaces the `invalidate_hard_deletes` config to give you more control over how to handle deleted rows from the source. The `hard_deletes` config is not a separate strategy but an additional opt-in feature that can be used with any snapshot strategy.
+
+The `hard_deletes` config has three options:
+
+| Option | Description |
+| --------- | ----------- |
+| `ignore` (default) | No action for deleted records. |
+| `invalidate` | Behaves the same as the existing `invalidate_hard_deletes=true`, where deleted records are invalidated by setting `dbt_valid_to`. |
+| `new_record` | Tracks deleted records as new rows using the `dbt_is_deleted` [meta field](#snapshot-meta-fields) when records are deleted.|
+
+import HardDeletes from '/snippets/_hard-deletes.md';
+
+
+
+#### Example usage
+
+
+
+```yaml
+snapshots:
+ - name: orders_snapshot_hard_delete
+ relation: source('jaffle_shop', 'orders')
+ config:
+ schema: snapshots
+ unique_key: id
+ strategy: timestamp
+ updated_at: updated_at
+ hard_deletes: new_record # options are: 'ignore', 'invalidate', or 'new_record'
+```
+
+
+
+In this example, the `hard_deletes: new_record` config will add a new row for deleted records with the `dbt_is_deleted` column set to `True`.
+Any restored records are added as new rows with the `dbt_is_deleted` field set to `False`.
+
+The resulting table will look like this:
+
+| id | status | updated_at | dbt_valid_from | dbt_valid_to | dbt_is_deleted |
+| -- | ------ | ---------- | -------------- | ------------ | -------------- |
+| 1 | pending | 2024-01-01 10:47 | 2024-01-01 10:47 | 2024-01-01 11:05 | False |
+| 1 | shipped | 2024-01-01 11:05 | 2024-01-01 11:05 | 2024-01-01 11:20 | False |
+| 1 | deleted | 2024-01-01 11:20 | 2024-01-01 11:20 | 2024-01-01 12:00 | True |
+| 1 | restored | 2024-01-01 12:00 | 2024-01-01 12:00 | | False |
+
+
+
+
+
Rows that are deleted from the source query are not invalidated by default. With the config option `invalidate_hard_deletes`, dbt can track rows that no longer exist. This is done by left joining the snapshot table with the source table, and filtering the rows that are still valid at that point, but no longer can be found in the source table. `dbt_valid_to` will be set to the current snapshot time.
This configuration is not a different strategy as described above, but is an additional opt-in feature. It is not enabled by default since it alters the previous behavior.
For this configuration to work with the `timestamp` strategy, the configured `updated_at` column must be of timestamp type. Otherwise, queries will fail due to mixing data types.
-**Example Usage**
+Note that in v1.9 and higher, the [`hard_deletes`](/reference/resource-configs/hard-deletes) config replaces the `invalidate_hard_deletes` config for better control over how to handle deleted rows from the source.
-
+#### Example usage
@@ -378,33 +431,16 @@ For this configuration to work with the `timestamp` strategy, the configured `up
-
-
-
-
-```yaml
-snapshots:
- - name: orders_snapshot_hard_delete
- relation: source('jaffle_shop', 'orders')
- config:
- schema: snapshots
- unique_key: id
- strategy: timestamp
- updated_at: updated_at
- invalidate_hard_deletes: true
-```
-
-
-
-
-
## Snapshot meta-fields
Snapshot tables will be created as a clone of your source dataset, plus some additional meta-fields*.
-Starting in 1.9 or with [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless):
-- These column names can be customized to your team or organizational conventions using the [`snapshot_meta_column_names`](/reference/resource-configs/snapshot_meta_column_names) config.
+In dbt Core v1.9+ (or available sooner in [the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks)):
+- These column names can be customized to your team or organizational conventions using the [`snapshot_meta_column_names`](/reference/resource-configs/snapshot_meta_column_names) config.
- Use the [`dbt_valid_to_current` config](/reference/resource-configs/dbt_valid_to_current) to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date such as `9999-12-31`). By default, this value is `NULL`. When set, dbt will use this specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.
+- Use the [`hard_deletes`](/reference/resource-configs/hard-deletes) config to track deleted records as new rows with the `dbt_is_deleted` meta field when `hard_deletes='new_record'` is configured.
+
| Field | Meaning | Usage |
| -------------- | ------- | ----- |
@@ -412,6 +448,7 @@ Starting in 1.9 or with [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-v
| dbt_valid_to | The timestamp when this row became invalidated.
For current records, this is `NULL` by default or the value specified in `dbt_valid_to_current`. | The most recent snapshot record will have `dbt_valid_to` set to `NULL` or the specified value. |
| dbt_scd_id | A unique key generated for each snapshotted record. | This is used internally by dbt |
| dbt_updated_at | The updated_at timestamp of the source record when this snapshot row was inserted. | This is used internally by dbt |
+| dbt_is_deleted | A boolean value indicating if the record has been deleted. `True` if deleted, `False` otherwise. | Added when `hard_deletes='new_record'` is configured. This is used internally by dbt |
*The timestamps used for each column are subtly different depending on the strategy you use:
@@ -445,6 +482,15 @@ Snapshot results (note that `11:30` is not used anywhere):
| 1 | pending | 2024-01-01 10:47 | 2024-01-01 10:47 | 2024-01-01 11:05 | 2024-01-01 10:47 |
| 1 | shipped | 2024-01-01 11:05 | 2024-01-01 11:05 | | 2024-01-01 11:05 |
+Snapshot results with `hard_deletes='new_record'`:
+
+| id | status | updated_at | dbt_valid_from | dbt_valid_to | dbt_updated_at | dbt_is_deleted |
+|----|---------|------------------|------------------|------------------|------------------|----------------|
+| 1 | pending | 2024-01-01 10:47 | 2024-01-01 10:47 | 2024-01-01 11:05 | 2024-01-01 10:47 | False |
+| 1 | shipped | 2024-01-01 11:05 | 2024-01-01 11:05 | 2024-01-01 11:20 | 2024-01-01 11:05 | False |
+| 1 | deleted | 2024-01-01 11:20 | 2024-01-01 11:20 | | 2024-01-01 11:20 | True |
+
+
@@ -479,6 +525,14 @@ Snapshot results:
| 1 | pending | 2024-01-01 11:00 | 2024-01-01 11:30 | 2024-01-01 11:00 |
| 1 | shipped | 2024-01-01 11:30 | | 2024-01-01 11:30 |
+Snapshot results with `hard_deletes='new_record'`:
+
+| id | status | dbt_valid_from | dbt_valid_to | dbt_updated_at | dbt_is_deleted |
+|----|---------|------------------|------------------|------------------|----------------|
+| 1 | pending | 2024-01-01 11:00 | 2024-01-01 11:30 | 2024-01-01 11:00 | False |
+| 1 | shipped | 2024-01-01 11:30 | 2024-01-01 11:40 | 2024-01-01 11:30 | False |
+| 1 | deleted | 2024-01-01 11:40 | | 2024-01-01 11:40 | True |
+
## Configure snapshots in versions 1.8 and earlier
@@ -487,7 +541,7 @@ Snapshot results:
For information about configuring snapshots in dbt versions 1.8 and earlier, select **1.8** from the documentation version picker, and it will appear in this section.
-To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use a more ergonomic snapshot configuration syntax that also speeds up parsing and compilation.
+To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use an updated snapshot configuration syntax that optimizes performance.
@@ -495,7 +549,8 @@ To configure snapshots in versions 1.9 and later, refer to [Configuring snapshot
- In dbt versions 1.8 and earlier, snapshots are `select` statements, defined within a snapshot block in a `.sql` file (typically in your `snapshots` directory). You'll also need to configure your snapshot to tell dbt how to detect record changes.
- The earlier dbt versions use an older syntax that allows for defining multiple resources in a single file. This syntax can significantly slow down parsing and compilation.
-- For faster and more efficient management, consider[ upgrading to Versionless](/docs/dbt-versions/versionless-cloud) or the [latest version of dbt Core](/docs/dbt-versions/core), which introduces an updated snapshot configuration syntax that optimizes performance.
+- For faster and more efficient management, consider [choosing the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks) or the [latest version of dbt Core](/docs/dbt-versions/core), which introduces an updated snapshot configuration syntax that optimizes performance.
+ - For more information on how to migrate from the legacy snapshot configurations to the updated snapshot YAML syntax, refer to [Snapshot configuration migration](/reference/snapshot-configs#snapshot-configuration-migration).
The following example shows how to configure a snapshot:
diff --git a/website/docs/docs/build/unit-tests.md b/website/docs/docs/build/unit-tests.md
index 1d7143d7476..69d89ad30e6 100644
--- a/website/docs/docs/build/unit-tests.md
+++ b/website/docs/docs/build/unit-tests.md
@@ -10,13 +10,13 @@ keywords:
:::note
-This functionality is only supported in dbt Core v1.8+ or accounts that have opted for a ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud experience.
+Unit testing functionality is available in [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks) or dbt Core v1.8+.
:::
Historically, dbt's test coverage was confined to [“data” tests](/docs/build/data-tests), assessing the quality of input data or resulting datasets' structure. However, these tests could only be executed _after_ building a model.
-With dbt Core v1.8 and dbt Cloud environments that have gone versionless by selecting the **Versionless** option, we have introduced an additional type of test to dbt - unit tests. In software programming, unit tests validate small portions of your functional code, and they work much the same way here. Unit tests allow you to validate your SQL modeling logic on a small set of static inputs _before_ you materialize your full model in production. Unit tests enable test-driven development, benefiting developer efficiency and code reliability.
+Starting in dbt Core v1.8, we have introduced an additional type of test to dbt - unit tests. In software programming, unit tests validate small portions of your functional code, and they work much the same way here. Unit tests allow you to validate your SQL modeling logic on a small set of static inputs _before_ you materialize your full model in production. Unit tests enable test-driven development, benefiting developer efficiency and code reliability.
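+
+As a quick sketch of the format (the model, input, and column names here are hypothetical):
+
+```yaml
+unit_tests:
+  - name: test_is_valid_email_address
+    description: "Check that the is_valid_email_address logic handles known edge cases."
+    model: dim_customers
+    given:
+      - input: ref('stg_customers')
+        rows:
+          - {customer_id: 1, email: cool@example.com}
+          - {customer_id: 2, email: bad_email}
+    expect:
+      rows:
+        - {customer_id: 1, is_valid_email_address: true}
+        - {customer_id: 2, is_valid_email_address: false}
+```
+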
## Before you begin
@@ -24,11 +24,14 @@ With dbt Core v1.8 and dbt Cloud environments that have gone versionless by sele
- We currently only support adding unit tests to models in your _current_ project.
- We currently _don't_ support unit testing models that use the [`materialized view`](/docs/build/materializations#materialized-view) materialization.
- We currently _don't_ support unit testing models that use recursive SQL.
-- You must specify all fields in a BigQuery STRUCT in a unit test. You cannot use only a subset of fields in a STRUCT.
- If your model has multiple versions, by default the unit test will run on *all* versions of your model. Read [unit testing versioned models](/reference/resource-properties/unit-testing-versions) for more information.
-- Unit tests must be defined in a YML file in your `models/` directory.
+- Unit tests must be defined in a YML file in your [`models/` directory](/reference/project-configs/model-paths).
- Table names must be [aliased](/docs/build/custom-aliases) in order to unit test `join` logic.
-- Redshift customers need to be aware of a [limitation when building unit tests](/reference/resource-configs/redshift-configs#unit-test-limitations) that requires a workaround.
+- Include all [`ref`](/reference/dbt-jinja-functions/ref) or [`source`](/reference/dbt-jinja-functions/source) model references in the unit test configuration as `input`s to avoid "node not found" errors during compilation.
+
+#### Adapter-specific caveats
+- You must specify all fields in a BigQuery `STRUCT` in a unit test. You cannot use only a subset of fields in a `STRUCT`.
+- Redshift customers need to be aware of a [limitation when building unit tests](/reference/resource-configs/redshift-configs#unit-test-limitations) that requires a workaround.
Read the [reference doc](/reference/resource-properties/unit-tests) for more details about formatting your unit tests.
diff --git a/website/docs/docs/cloud-integrations/configure-auto-exposures.md b/website/docs/docs/cloud-integrations/configure-auto-exposures.md
index 746bef62e44..5500e02067e 100644
--- a/website/docs/docs/cloud-integrations/configure-auto-exposures.md
+++ b/website/docs/docs/cloud-integrations/configure-auto-exposures.md
@@ -20,7 +20,7 @@ Auto-exposures help data teams optimize their efficiency and ensure data quality
To access the features, you should meet the following:
-1. Your environment and jobs are on [Versionless](/docs/dbt-versions/versionless-cloud) dbt.
+1. Your environment and jobs are on a supported dbt [release track](/docs/dbt-versions/cloud-release-tracks).
2. You have a dbt Cloud account on the [Enterprise plan](https://www.getdbt.com/pricing/).
3. You have set up a [production](/docs/deploy/deploy-environments#set-as-production-environment) deployment environment for each project you want to explore, with at least one successful job run.
4. You have [admin permissions](/docs/cloud/manage-access/enterprise-permissions) in dbt Cloud to edit project settings or production environment settings.
diff --git a/website/docs/docs/cloud/account-integrations.md b/website/docs/docs/cloud/account-integrations.md
new file mode 100644
index 00000000000..e5ff42cb900
--- /dev/null
+++ b/website/docs/docs/cloud/account-integrations.md
@@ -0,0 +1,103 @@
+---
+title: "Account integrations in dbt Cloud"
+sidebar_label: "Account integrations"
+description: "Learn how to configure account integrations for your dbt Cloud account."
+---
+
+The following sections describe the different **Account integrations** available from your dbt Cloud account under the account **Settings** section.
+
+
+
+## Git integrations
+
+Connect your dbt Cloud account to your Git provider to enable dbt Cloud users to authenticate with their personal accounts. dbt Cloud will perform Git actions on behalf of each authenticated user, against the repositories they can access according to their Git provider permissions.
+
+To configure a Git account integration:
+1. Navigate to **Account settings** in the side menu.
+2. Under the **Settings** section, click on **Integrations**.
+3. Click on the Git provider from the list and select the **Pencil** icon to the right of the provider.
+4. dbt Cloud [natively connects](/docs/cloud/git/git-configuration-in-dbt-cloud) to the following Git providers:
+
+ - [GitHub](/docs/cloud/git/connect-github)
+ - [GitLab](/docs/cloud/git/connect-gitlab)
+ - [Azure DevOps](/docs/cloud/git/connect-azure-devops)
+
+You can connect your dbt Cloud account to additional Git providers by importing a git repository from any valid git URL. Refer to [Import a git repository](/docs/cloud/git/import-a-project-by-git-url) for more information.
+
+
+
+## OAuth integrations
+
+Connect your dbt Cloud account to OAuth providers that are integrated with dbt Cloud.
+
+To configure an OAuth account integration:
+1. Navigate to **Account settings** in the side menu.
+2. Under the **Settings** section, click on **Integrations**.
+3. Under **OAuth**, click on **Link** to connect your Slack account.
+4. For custom OAuth providers, under **Custom OAuth integrations**, click on **Add integration** and select the OAuth provider from the list. Fill in the required fields and click **Save**.
+
+
+
+## AI integrations
+
+Once AI features have been [enabled](/docs/cloud/enable-dbt-copilot#enable-dbt-copilot), you can use dbt Labs' AI integration or bring your own provider to support AI-powered dbt Cloud features like [dbt Copilot](/docs/cloud/dbt-copilot) and [Ask dbt](/docs/cloud-integrations/snowflake-native-app) (both available on [dbt Cloud Enterprise plans](https://www.getdbt.com/pricing)).
+
+dbt Cloud supports AI integrations for dbt Labs-managed OpenAI keys, self-managed OpenAI keys, or self-managed Azure OpenAI keys.
+
+Note that if you bring your own provider, you will incur API charges from that provider for features used in dbt Cloud.
+
+:::info
+dbt Cloud's AI is optimized for OpenAI's gpt-4o. Using other models can affect performance and accuracy, and functionality with other models isn't guaranteed.
+:::
+
+To configure the AI integration in your dbt Cloud account, a dbt Cloud admin can perform the following steps:
+1. Navigate to **Account settings** in the side menu.
+2. Select **Integrations** and scroll to the **AI** section.
+3. Click on the **Pencil** icon to the right of **OpenAI** to configure the AI integration.
+
+4. Configure the AI integration for **dbt Labs OpenAI**, **OpenAI**, or **Azure OpenAI**.
+
+
+
+
+ 1. Select the toggle for **dbt Labs** to use dbt Labs' managed OpenAI key.
+ 2. Click **Save**.
+
+
+
+
+
+
+ 1. Select the toggle for **OpenAI** to use your own OpenAI key.
+ 2. Enter the API key.
+ 3. Click **Save**.
+
+
+
+
+
+ To learn about deploying your own OpenAI model on Azure, refer to [Deploy models on Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-openai). Configure credentials for your Azure OpenAI deployment in dbt Cloud in either of the following ways:
+ - [From a Target URI](#from-a-target-uri)
+ - [Manually providing the credentials](#manually-providing-the-credentials)
+
+ #### From a Target URI
+
+ 1. Locate your Azure OpenAI deployment URI in your Azure Deployment details page.
+ 2. In the dbt Cloud **Azure OpenAI** section, select the tab **From Target URI**.
+ 3. Paste the URI into the **Target URI** field.
+ 4. Enter your Azure OpenAI API key.
+ 5. Verify the **Endpoint**, **API Version**, and **Deployment Name** are correct.
+ 6. Click **Save**.
+
+
+ #### Manually providing the credentials
+
+ 1. Locate your Azure OpenAI configuration in your Azure Deployment details page.
+ 2. In the dbt Cloud **Azure OpenAI** section, select the tab **Manual Input**.
+ 3. Enter your Azure OpenAI API key.
+ 4. Enter the **Endpoint**, **API Version**, and **Deployment Name**.
+ 5. Click **Save**.
+
+
+
+
diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md
index 8a058cbb90f..8a34401cd08 100644
--- a/website/docs/docs/cloud/cloud-cli-installation.md
+++ b/website/docs/docs/cloud/cloud-cli-installation.md
@@ -21,8 +21,6 @@ dbt commands are run against dbt Cloud's infrastructure and benefit from:
## Prerequisites
The dbt Cloud CLI is available in all [deployment regions](/docs/cloud/about-cloud/access-regions-ip-addresses) and for both multi-tenant and single-tenant accounts.
-- You are on dbt version 1.5 or higher. Alternatively, set it to [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to automatically stay up to date.
-
## Install dbt Cloud CLI
You can install the dbt Cloud CLI on the command line by using one of these methods.
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-amazon-athena.md b/website/docs/docs/cloud/connect-data-platform/connect-amazon-athena.md
index f1009f61274..e3645500b9e 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-amazon-athena.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-amazon-athena.md
@@ -7,7 +7,7 @@ sidebar_label: "Connect Amazon Athena"
# Connect Amazon Athena
-Your environment(s) must be on ["Versionless"](/docs/dbt-versions/versionless-cloud) to use the Amazon Athena connection.
+Your environment(s) must be on a supported [release track](/docs/dbt-versions/cloud-release-tracks) to use the Amazon Athena connection.
Connect dbt Cloud to Amazon's Athena interactive query service to build your dbt project. The following are the required and optional fields for configuring the Athena connection:
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
index 7e4bc7a9288..6b749ced186 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
@@ -5,6 +5,14 @@ description: "Configure Snowflake connection."
sidebar_label: "Connect Snowflake"
---
+:::note
+
+dbt Cloud connections and credentials inherit the permissions of the accounts configured. You can customize roles and associated permissions in Snowflake to fit your company's requirements and fine-tune access to database objects in your account.
+
+Refer to [Snowflake permissions](/reference/database-permissions/snowflake-permissions) for more information about customizing roles in Snowflake.
+
+:::
+
The following fields are required when creating a Snowflake connection
| Field | Description | Examples |
@@ -14,9 +22,6 @@ The following fields are required when creating a Snowflake connection
| Database | The logical database to connect to and run queries against. | `analytics` |
| Warehouse | The virtual warehouse to use for running queries. | `transforming` |
-
-**Note:** A crucial part of working with dbt atop Snowflake is ensuring that users (in development environments) and/or service accounts (in deployment to production environments) have the correct permissions to take actions on Snowflake! Here is documentation of some [example permissions to configure Snowflake access](/reference/database-permissions/snowflake-permissions).
-
## Authentication methods
This section describes the different authentication methods for connecting dbt Cloud to Snowflake. Configure Deployment environment (Production, Staging, General) credentials globally in the [**Connections**](/docs/deploy/deploy-environments#deployment-connection) area of **Account settings**. Individual users configure their development credentials in the [**Credentials**](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud#get-started-with-the-cloud-ide) area of their user profile.
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-teradata.md b/website/docs/docs/cloud/connect-data-platform/connect-teradata.md
index cf41814078b..8663a181645 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-teradata.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-teradata.md
@@ -7,7 +7,7 @@ sidebar_label: "Connect Teradata"
# Connect Teradata
-Your environment(s) must be on ["Versionless"](/docs/dbt-versions/versionless-cloud) to use the Teradata connection.
+Your environment(s) must be on a supported [release track](/docs/dbt-versions/cloud-release-tracks) to use the Teradata connection.
| Field | Description | Type | Required? | Example |
| ----------------------------- | --------------------------------------------------------------------------------------------- | -------------- | --------- | ------- |
diff --git a/website/docs/docs/cloud/enable-dbt-copilot.md b/website/docs/docs/cloud/enable-dbt-copilot.md
index 67a11fed3fc..2b954d1db5d 100644
--- a/website/docs/docs/cloud/enable-dbt-copilot.md
+++ b/website/docs/docs/cloud/enable-dbt-copilot.md
@@ -12,7 +12,7 @@ This page explains how to enable the dbt Copilot engine in dbt Cloud, leveraging
- Available in the dbt Cloud IDE only.
- Must have an active [dbt Cloud Enterprise account](https://www.getdbt.com/pricing).
-- Development environment has been upgraded to ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless).
+- Development environment is on a supported [release track](/docs/dbt-versions/cloud-release-tracks) to receive ongoing updates.
- By default, dbt Copilot deployments use a central OpenAI API key managed by dbt Labs. Alternatively, you can [provide your own OpenAI API key](#bringing-your-own-openai-api-key-byok).
- Accept and sign legal agreements. Reach out to your Account team to begin this process.
@@ -34,18 +34,13 @@ Note: To disable (only after enabled), repeat steps 1 to 3, toggle off in step 4
-### Bringing your own OpenAI API key (BYOK)
+## Bringing your own OpenAI API key (BYOK)
Once AI features have been enabled, you can provide your organization's OpenAI API key. dbt Cloud will then leverage your OpenAI account and terms to power dbt Copilot. This will incur billing charges to your organization from OpenAI for requests made by dbt Copilot.
-Note that Azure OpenAI is not currently supported, but will be in the future.
+Configure AI keys using:
+- [dbt Labs-managed OpenAI API key](/docs/cloud/account-integrations?ai-integration=dbtlabs#ai-integrations)
+- Your own [OpenAI API key](/docs/cloud/account-integrations?ai-integration=openai#ai-integrations)
+- [Azure OpenAI](/docs/cloud/account-integrations?ai-integration=azure#ai-integrations)
-A dbt Cloud admin can provide their API key by following these steps:
-
-1. Navigate to **Account settings** in the side menu.
-
-2. Find the **Settings** section and click on **Integrations**.
-
-3. Scroll to **AI** and select the toggle for **OpenAI**
-
-4. Enter your API key and click **Save**.
+For configuration details, see [Account integrations](/docs/cloud/account-integrations#ai-integrations).
diff --git a/website/docs/docs/cloud/git/connect-azure-devops.md b/website/docs/docs/cloud/git/connect-azure-devops.md
index f6c0ee634fc..f3bb07a12d0 100644
--- a/website/docs/docs/cloud/git/connect-azure-devops.md
+++ b/website/docs/docs/cloud/git/connect-azure-devops.md
@@ -4,6 +4,8 @@ id: "connect-azure-devops"
pagination_next: "docs/cloud/git/setup-azure"
---
+# Connect to Azure DevOps
+
diff --git a/website/docs/docs/cloud/git/import-a-project-by-git-url.md b/website/docs/docs/cloud/git/import-a-project-by-git-url.md
index 5cd3553b07f..2b499b39cb7 100644
--- a/website/docs/docs/cloud/git/import-a-project-by-git-url.md
+++ b/website/docs/docs/cloud/git/import-a-project-by-git-url.md
@@ -49,7 +49,7 @@ If you use GitLab, you can import your repo directly using [dbt Cloud's GitLab A
- To add a deploy key to a GitLab account, navigate to the [SSH keys](https://gitlab.com/profile/keys) tab in the User Settings page of your GitLab account.
- Next, paste in the deploy key generated by dbt Cloud for your repository.
- After saving this SSH key, dbt Cloud will be able to read and write files in your GitLab repository.
-- Refer to [Adding a read only deploy key in GitLab](https://docs.gitlab.com/ee/ssh/#per-repository-deploy-keys)
+- Refer to [Adding a read only deploy key in GitLab](https://docs.gitlab.com/ee/user/project/deploy_keys/)
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md b/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md
index de935627765..81463cf9ee5 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md
@@ -61,6 +61,13 @@ Depending on your Microsoft Entra ID settings, your App Registration page might
### Azure <-> dbt Cloud User and Group mapping
+:::important
+
+Azure has a [limitation](https://learn.microsoft.com/en-us/entra/identity/hybrid/connect/how-to-connect-fed-group-claims#important-caveats-for-this-functionality) on the number of groups it will emit via the SSO token (capped at 150). If a user belongs to more than 150 groups, the token includes none of them, and the user appears to belong to no groups. To prevent this, configure [group assignments](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/assign-user-or-group-access-portal?pivots=portal) with the dbt Cloud app in Azure and set a [group claim](https://learn.microsoft.com/en-us/entra/identity/hybrid/connect/how-to-connect-fed-group-claims#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration) so Azure emits only the relevant groups.
+
+:::
+
+
The Azure users and groups you will create in the following steps are mapped to groups created in dbt Cloud based on the group name. Reference the docs on [enterprise permissions](enterprise-permissions) for additional information on how users, groups, and permission sets are configured in dbt Cloud.
### Adding users to an Enterprise application
diff --git a/website/docs/docs/cloud/use-visual-editor.md b/website/docs/docs/cloud/use-visual-editor.md
index b390432b227..2ab6a5b82d1 100644
--- a/website/docs/docs/cloud/use-visual-editor.md
+++ b/website/docs/docs/cloud/use-visual-editor.md
@@ -22,8 +22,7 @@ To join the private beta, [register your interest](https://docs.google.com/forms
- You have a [dbt Cloud Enterprise](https://www.getdbt.com/pricing) account
- You have a [developer license](/docs/cloud/manage-access/seats-and-users) with developer credentials set up
- You have an existing dbt Cloud project already created
-- You are [Keep on latest](/docs/dbt-versions/upgrade-dbt-version-in-cloud#keep-on-latest-version) for a versionless experience
-- Successful job run on Production or Staging [environment](/docs/dbt-cloud-environments)
+- Your development environment is on a supported [release track](/docs/dbt-versions/cloud-release-tracks) to receive ongoing updates
- Have AI-powered features toggle enabled
## Access visual editor
diff --git a/website/docs/docs/collaborate/auto-exposures.md b/website/docs/docs/collaborate/auto-exposures.md
index 28bf5bd37b1..495906cee75 100644
--- a/website/docs/docs/collaborate/auto-exposures.md
+++ b/website/docs/docs/collaborate/auto-exposures.md
@@ -14,7 +14,7 @@ As a data team, it’s critical that you have context into the downstream use ca
Auto-exposures help users understand how their models are used in downstream analytics tools to inform investments and reduce incidents — ultimately building trust and confidence in data products. It imports and auto-generates exposures based on Tableau dashboards, with user-defined curation.
## Supported plans
-Auto-exposures is available on [Versionless](/docs/dbt-versions/versionless-cloud) and [dbt Cloud Enterprise](https://www.getdbt.com/pricing/) plans. Currently, you can only connect to a single Tableau site on the same server.
+Auto-exposures is available on the [dbt Cloud Enterprise](https://www.getdbt.com/pricing/) plan. Currently, you can only connect to a single Tableau site on the same server.
:::info Tableau Server
If you're using Tableau Server, you need to [allowlist dbt Cloud's IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) for your dbt Cloud region.
diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index a4388a8696e..3780d100932 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -164,12 +164,12 @@ Under the the **Models** option, you can filter on model properties (access or m
-Trust signal icons offer a quick, at-a-glance view of data health when browsing your models in dbt Explorer. These icons keep you informed on the status of your model's health using the indicators **Healthy**, **Caution**, **Degraded**, and **Unknown**. For accurate health data, ensure the resource is up-to-date and has had a recent job run.
+Trust signal icons offer a quick, at-a-glance view of data health when browsing your resources in dbt Explorer. These icons keep you informed on the status of your resource's health using the indicators **Healthy**, **Caution**, **Degraded**, and **Unknown**. For accurate health data, ensure the resource is up-to-date and has had a recent job run. Supported resources are models, sources, and exposures.
Each trust signal icon reflects key data health components, such as test success status, missing resource descriptions, absence of builds in 30-day windows, and more.
To access trust signals:
-- Use the search function or click on **Models** or **Sources** under the **Resource** tab.
+- Use the search function or click on **Models**, **Sources**, or **Exposures** under the **Resource** tab.
- View the icons under the **Health** column.
- Hover over or click the trust signal to see detailed information.
- For sources, the trust signal also indicates the source freshness status.
diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md
index 7813e25efcb..bbda99960cd 100644
--- a/website/docs/docs/collaborate/govern/project-dependencies.md
+++ b/website/docs/docs/collaborate/govern/project-dependencies.md
@@ -18,7 +18,6 @@ This year, dbt Labs is introducing an expanded notion of `dependencies` across m
## Prerequisites
- Available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing). If you have an Enterprise account, you can unlock these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref).
-- Use a supported version of dbt (v1.6 or newer or go versionless with "[Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)") for both the upstream ("producer") project and the downstream ("consumer") project.
- Define models in an upstream ("producer") project that are configured with [`access: public`](/reference/resource-configs/access). You need at least one successful job run after defining their `access`.
- Define a deployment environment in the upstream ("producer") project [that is set to be your Production environment](/docs/deploy/deploy-environments#set-as-production-environment), and ensure it has at least one successful job run in that environment.
- If the upstream project has a Staging environment, run a job in that Staging environment to ensure the downstream cross-project ref resolves.
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index ce3e8658045..4c00558d782 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -31,7 +31,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
| `port` | 5439 | |
| `dbname` | my_db | Database name|
| `schema` | my_schema | Schema name|
-| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
+| `connect_timeout` | 30 | Number of seconds before connection times out. Default is `None`|
| `sslmode` | prefer | optional, set the sslmode to connect to the database. Default prefer, which will use 'verify-ca' to connect. For more information on `sslmode`, see Redshift note below|
| `role` | None | Optional, user identifier of the current session|
| `autocreate` | false | Optional, default false. Creates user if they do not exist |
diff --git a/website/docs/docs/dbt-versions/versionless-cloud.md b/website/docs/docs/dbt-versions/cloud-release-tracks.md
similarity index 55%
rename from website/docs/docs/dbt-versions/versionless-cloud.md
rename to website/docs/docs/dbt-versions/cloud-release-tracks.md
index 34ffc34f68a..290078da572 100644
--- a/website/docs/docs/dbt-versions/versionless-cloud.md
+++ b/website/docs/docs/dbt-versions/cloud-release-tracks.md
@@ -1,18 +1,61 @@
---
-title: "Upgrade to \"Versionless\" in dbt Cloud"
-sidebar_label: "Upgrade to \"Versionless\" "
-description: "Learn how to go versionless in dbt Cloud. You never have to perform an upgrade again. Plus, you'll be able to access new features and enhancements as soon as they become available. "
+title: "Release tracks in dbt Cloud"
+sidebar_label: "dbt Cloud Release Tracks"
+description: "Learn how to get automatic upgrades to dbt in dbt Cloud. Access new features and enhancements as soon as they become available."
---
-Since May 2024, new capabilities in dbt are delivered continuously to dbt Cloud. We call this "versionless dbt," because your projects and environments are upgraded automatically.
+Since May 2024, new capabilities in the dbt framework are delivered continuously to dbt Cloud. Your projects and environments are upgraded automatically on a cadence that you choose, depending on your dbt Cloud plan.
+
+Previously, customers would pin to a minor version of dbt Core, and receive only patch updates during that specific version's active support period. Release tracks ensure that your project stays up-to-date with the modern capabilities of dbt Cloud and recent versions of dbt Core.
This will require you to make one final update to your current jobs and environments. When that's done, you'll never have to think about managing, coordinating, or upgrading dbt versions again.
-By moving your environments and jobs to "Versionless," you can get all the functionality in the latest features before they're in dbt Core — and more! — along with access to the new features and fixes as soon as they’re released.
+By moving your environments and jobs to release tracks, you can get all the functionality in dbt Cloud as soon as it's ready. On the "Latest" release track, this includes access to features _before_ they're available in final releases of dbt Core OSS.
+
+## Which release tracks are available?
+
+- **"Latest"** (available to all plans, formerly called "Versionless"): Provides a continuous release of the latest functionality in dbt Cloud. Includes early access to new features of the dbt framework before they're available in open source releases of dbt Core.
+- **"Compatible"** (available to Team + Enterprise): Provides a monthly release aligned with the most recent open source versions of dbt Core and adapters, plus functionality exclusively available in dbt Cloud.
+- **"Extended"** (available to Enterprise): Provides a delayed release of the previous month's "Compatible" release.
+
+The first "Compatible" release will be in December 2024, after the final release of dbt Core v1.9.0. For December 2024 only, the "Extended" release is the same as "Compatible." Starting in January 2025, "Extended" will be one month behind "Compatible."
+
+To configure an environment in the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api) or [Terraform](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest) to use a release track, set `dbt_version` to the release track name (see the sketch after this list):
+- `latest` (formerly called `versionless`; the old name is still supported)
+- `compatible` (available to Team + Enterprise)
+- `extended` (available to Enterprise)
+
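+For example, here's a minimal Terraform sketch of pinning a deployment environment to a release track. It assumes the [dbtcloud provider's](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest) `dbtcloud_environment` resource and a `dbtcloud_project.analytics` resource defined elsewhere; check the provider docs for the exact argument names in your provider version.
+
+```hcl
+resource "dbtcloud_environment" "prod" {
+  project_id = dbtcloud_project.analytics.id # hypothetical project resource
+  name       = "Production"
+  type       = "deployment"
+
+  # Set a release track name instead of a pinned version:
+  # "latest" (formerly "versionless"), "compatible", or "extended"
+  dbt_version = "latest"
+}
+```
+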
+## Which release track should I choose?
+
+Choose the "Latest" release track to continuously receive new features, fixes, performance improvements — latest & greatest dbt. This is the default for all customers on dbt Cloud.
+
+Choose the "Compatible" and "Extended" release tracks if you need a less-frequent release cadence, the ability to test new dbt releases before they go live in production, and/or ongoing compatibility with the latest open source releases of dbt Core.
-## Tips for upgrading {#upgrade-tips}
+### Common architectures
-If you regularly develop your dbt project in dbt Cloud and this is your first time trying “Versionless,” dbt Labs recommends that you try upgrading your project in a development environment. [Override your dbt version in development](/docs/dbt-versions/upgrade-dbt-version-in-cloud#override-dbt-version). Then, launch the IDE or Cloud CLI and do your development work as usual. Everything should work as you expect.
+**Default** - majority of customers on all plans
+- Prioritize immediate access to fixes and features
+- Leave all environments on the "Latest" release track (default configuration)
+
+**Hybrid** - Team, Enterprise
+- Prioritize ongoing compatibility between dbt Cloud and dbt Core for development & deployment using both products in the same dbt projects
+- Configure all environments to use the "Compatible" release track
+- Understand that new features will not be available until they are first released in dbt Core OSS (several months after the "Latest" release track)
+
+**Cautious** - Enterprise, Business Critical
+- Prioritize "bake in" time for new features & fixes
+- Configure development & test environments to use the "Compatible" release track
+- Configure pre-production & production environments to use the "Extended" release track
+- Understand that new features will not be available until they are first released in dbt Core OSS + Compatible track
+
+**Virtual Private dbt or Single Tenant**
+- Changes to all release tracks roll out as part of dbt Cloud instance upgrades once per week
+
+## Upgrading from older versions
+
+### How to upgrade {#upgrade-tips}
+
+If you regularly develop your dbt project in dbt Cloud, and you're still running on a legacy version of dbt Core, dbt Labs recommends that you try upgrading your project in a development environment. [Override your dbt version in development](/docs/dbt-versions/upgrade-dbt-version-in-cloud#override-dbt-version). Then, launch the IDE or Cloud CLI and do your development work as usual. Everything should work as you expect.
If you do see something unexpected or surprising, revert back to the previous version and record the differences you observed. [Contact dbt Cloud support](/docs/dbt-support#dbt-cloud-support) with your findings for a more detailed investigation.
@@ -20,25 +63,23 @@ Next, we recommend that you try upgrading your project’s [deployment environme
If your organization has multiple dbt projects, we recommend starting your upgrade with projects that are smaller, newer, or more familiar for your team. That way, if you do encounter any issues, it'll be easier and faster to troubleshoot those before proceeding to upgrade larger or more complex projects.
-## Considerations
-
-The following is our guidance on some important considerations regarding dbt projects as part of the upgrade.
+### Considerations
-To learn more about how dbt Labs deploys stable dbt upgrades in a safe manner to dbt Cloud, we recommend that you read our blog post [How we're making sure you can confidently go "Versionless" in dbt Cloud](https://docs.getdbt.com/blog/latest-dbt-stability) for details.
+To learn more about how dbt Labs deploys stable dbt upgrades in a safe manner to dbt Cloud, we recommend that you read our blog post: [How we're making sure you can confidently switch to the "Latest" release track in dbt Cloud](https://docs.getdbt.com/blog/latest-dbt-stability).
If you're running dbt version 1.6 or older, please know that your version of dbt Core has reached [end-of-life (EOL)](/docs/dbt-versions/core#eol-version-support) and is no longer supported. We strongly recommend that you update to a newer version as soon as reasonably possible.
-dbt Labs has extended the critical support period of dbt Core v1.7 for dbt Cloud Enterprise customers.
+dbt Labs has extended the critical support period of dbt Core v1.7 for dbt Cloud Enterprise customers to January 31, 2025. At that point, we will ask all customers to select a release track to receive ongoing updates to dbt in dbt Cloud.
If you're running dbt version v1.6 or older, please know that your version of dbt Core has reached [end-of-life (EOL)](/docs/dbt-versions/core#eol-version-support) and is no longer supported. We strongly recommend that you update to a newer version as soon as reasonably possible.
-dbt Labs has extended the "Critical Support" period of dbt Core v1.7 for dbt Cloud Enterprise customers while we work through the migration with those customers to automatic upgrades. In the meantime, this means that v1.7 will continue to be accessible in dbt Cloud for Enteprise customers, jobs and environments on v1.7 for those customers will not be automatically migrated to "Versionless," and dbt Labs will continue to fix critical bugs and security issues.
+dbt Labs has extended the "Critical Support" period of dbt Core v1.7 for dbt Cloud Enterprise customers while we work through the migration with those customers to Release Tracks. In the meantime, this means that v1.7 will continue to be accessible in dbt Cloud for Enteprise customers, jobs and environments on v1.7 for those customers will not be automatically migrated to "Latest," and dbt Labs will continue to fix critical bugs and security issues.
-dbt Cloud accounts on the Developer and Team plans will be migrated to "Versionless" dbt after November 1, 2024. If you know that your project will not be compatible with the upgrade, for one of the reasons described here, or a different reason in your own testing, you should [contact dbt Cloud support](https://docs.getdbt.com/docs/dbt-support#dbt-cloud-support) to request an extension.
+dbt Cloud accounts on the Developer and Team plans will be migrated to the "Latest" release track after November 1, 2024. If you know that your project will not be compatible with the upgrade, for one of the reasons described here, or a different reason in your own testing, you should [contact dbt Cloud support](https://docs.getdbt.com/docs/dbt-support#dbt-cloud-support) to request an extension.
-If your account has been migrated to "Versionless," and you are seeing net-new failures in your scheduled dbt jobs, you should also [contact dbt Cloud support](https://docs.getdbt.com/docs/dbt-support#dbt-cloud-support) to request an extension.
+If your account has been migrated to the "Latest" release track, and you are seeing net-new failures in your scheduled dbt jobs, you should also [contact dbt Cloud support](https://docs.getdbt.com/docs/dbt-support#dbt-cloud-support) to request an extension.
@@ -61,7 +102,7 @@ You should [contact dbt Cloud support](https://docs.getdbt.com/docs/dbt-support#
-
+
For the vast majority of customers, there is no further action needed.
@@ -75,9 +116,9 @@ When we talk about _latest version_, we’re referring to the underlying runtime
If a new version of a dbt package includes a breaking change (for example, a change to one of the macros in `dbt_utils`), you don’t have to immediately use the new version. In your `packages` configuration (in `dependencies.yml` or `packages.yml`), you can still specify which versions or version ranges of packages you want dbt to install. If you're not already doing so, we strongly recommend [checking `package-lock.yml` into version control](/reference/commands/deps#predictable-package-installs) for predictable package installs in deployment environments and a clear change history whenever you install upgrades.
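+
+For example, here's a minimal `packages.yml` sketch (using `dbt_utils` as a stand-in for any package you depend on) that allows routine updates while staying below the next major version until you've tested it:
+
+```yaml
+packages:
+  - package: dbt-labs/dbt_utils
+    # Accept minor and patch releases, but hold back the next major version
+    version: [">=1.1.0", "<2.0.0"]
+```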
-If you upgrade to “Versionless” and immediately see something that breaks, please [contact support](/docs/dbt-support#dbt-cloud-support) and, in the meantime, downgrade back to v1.7.
+If you upgrade to the "Latest" release track and immediately see something that breaks, please [contact support](/docs/dbt-support#dbt-cloud-support) and, in the meantime, downgrade back to v1.7.
-If you’re already on “Versionless” and you observe a breaking change (like something worked yesterday, but today it isn't working, or works in a surprising/different way), please [contact support](/docs/dbt-support#dbt-cloud-support) immediately. Depending on your contracted support agreement, the dbt Labs team will respond within our SLA time and we would seek to roll back the change and/or roll out a fix (just as we would for any other part of dbt Cloud). This is the same whether or not the root cause of the breaking change is in the project code or in the code of a package.
+If you’re already on the "Latest" release track, and you observe a breaking change (like something worked yesterday, but today it isn't working, or works in a surprising/different way), please [contact support](/docs/dbt-support#dbt-cloud-support) immediately. Depending on your contracted support agreement, the dbt Labs team will respond within our SLA time and we would seek to roll back the change and/or roll out a fix (just as we would for any other part of dbt Cloud). This is the same whether or not the root cause of the breaking change is in the project code or in the code of a package.
If the package you’ve installed relies on _undocumented_ functionality of dbt, it doesn't have the same guarantees as functionality that we’ve documented and tested. However, we will still do our best to avoid breaking them.
diff --git a/website/docs/docs/dbt-versions/compatible-track-changelog.md b/website/docs/docs/dbt-versions/compatible-track-changelog.md
new file mode 100644
index 00000000000..8f31775e3f1
--- /dev/null
+++ b/website/docs/docs/dbt-versions/compatible-track-changelog.md
@@ -0,0 +1,27 @@
+---
+title: "dbt Cloud Compatible Track - Changelog"
+sidebar_label: "Compatible Track Changelog"
+description: "The Compatible release track updates once per month, and it includes up-to-date open source versions as of the monthly release."
+---
+
+:::info Coming soon
+
+The "Compatible" and "Extended" release tracks will be available in Preview to eligible dbt Cloud accounts in December 2024.
+
+:::
+
+Select the "Compatible" and "Extended" release tracks if you need a less-frequent release cadence, the ability to test new dbt releases before they go live in production, and/or ongoing compatibility with the latest open source releases of dbt Core.
+
+Each monthly "Compatible" release includes functionality matching up-to-date open source versions of dbt Core and adapters at the time of release.
+
+Starting in January 2025, each monthly "Extended" release will match the previous month's "Compatible" release.
+
+For more information, see [release tracks](/docs/dbt-versions/cloud-release-tracks).
+
+## December 2024
+
+Planned release: December 11-13
+
+This release will include functionality from `dbt-core==1.9.0` and the most recent versions of all adapters supported in dbt Cloud. After the Compatible release is cut, we will update this page with:
+- exact versions of open source dbt packages
+- changelog notes concerning functionality specific to dbt Cloud
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
index 31153188978..6ade3d5013f 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
@@ -9,14 +9,15 @@ displayed_sidebar: "docs"
- [dbt Core 1.9 changelog](https://github.com/dbt-labs/dbt-core/blob/1.9.latest/CHANGELOG.md)
- [dbt Core CLI Installation guide](/docs/core/installation-overview)
-- [Cloud upgrade guide](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)
+- [Cloud upgrade guide](/docs/dbt-versions/upgrade-dbt-version-in-cloud#release-tracks)
## What to know before upgrading
dbt Labs is committed to providing backward compatibility for all versions 1.x. Any behavior changes will be accompanied by a [behavior change flag](/reference/global-configs/behavior-changes#behavior-change-flags) to provide a migration window for existing projects. If you encounter an error upon upgrading, please let us know by [opening an issue](https://github.com/dbt-labs/dbt-core/issues/new).
-dbt Cloud is now [versionless](/docs/dbt-versions/versionless-cloud). If you have selected "Versionless" in dbt Cloud, you already have access to all the features, fixes, and other functionality that is included in dbt Core v1.9.
-For users of dbt Core, since v1.8 we recommend explicitly installing both `dbt-core` and `dbt-`. This may become required for a future version of dbt. For example:
+Starting in 2024, dbt Cloud provides the functionality from new versions of dbt Core via [release tracks](/docs/dbt-versions/cloud-release-tracks) with automatic upgrades. If you have selected the "Latest" release track in dbt Cloud, you already have access to all the features, fixes, and other functionality that is included in dbt Core v1.9! If you have selected the "Compatible" release track, you will have access in the next monthly "Compatible" release after the dbt Core v1.9 final release.
+
+For users of dbt Core, since v1.8, we recommend explicitly installing both `dbt-core` and `dbt-<adapter>`. This may become required for a future version of dbt. For example:
```shell
python3 -m pip install dbt-core dbt-snowflake
@@ -51,9 +52,11 @@ Starting in Core 1.9, you can use the new [microbatch strategy](/docs/build/incr
Currently microbatch is supported on these adapters with more to come:
* postgres
+ * redshift
* snowflake
* bigquery
* spark
+ * databricks
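+
+For example, a microbatch model can be configured in a properties YAML file along these lines (the model and column names here are hypothetical):
+
+```yaml
+models:
+  - name: events_daily
+    config:
+      materialized: incremental
+      incremental_strategy: microbatch
+      event_time: occurred_at # column that records when each row occurred
+      begin: "2024-01-01"     # earliest date to backfill
+      batch_size: day         # process one day of data per batch
+      lookback: 3             # also reprocess the most recent batches on each run
+```
+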
### Snapshots improvements
@@ -65,9 +68,12 @@ Beginning in dbt Core 1.9, we've streamlined snapshot configuration and added a
- Standard `schema` and `database` configs supported: Snapshots will now be consistent with other dbt resource types. You can specify where environment-aware snapshots should be stored.
- Warning for incorrect `updated_at` data type: To ensure data integrity, you'll see a warning if the `updated_at` field specified in the snapshot configuration is not the proper data type or timestamp.
- Set a custom current indicator for the value of `dbt_valid_to`: Use the [`dbt_valid_to_current` config](/reference/resource-configs/dbt_valid_to_current) to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table.
+- Use the [`hard_deletes`](/reference/resource-configs/hard-deletes) configuration to get more control over how to handle deleted rows from the source. Supported methods are `ignore` (default), `invalidate` (replaces legacy `invalidate_hard_deletes=true`), and `new_record`. Setting `hard_deletes='new_record'` allows you to track hard deletes by adding a new record when a row is deleted from the source, as shown in the example below.
Read more about [Snapshots meta fields](/docs/build/snapshots#snapshot-meta-fields).
+To learn how to safely migrate existing snapshots, refer to [Snapshot configuration migration](/reference/snapshot-configs#snapshot-configuration-migration).
+
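+Putting a few of these improvements together, here's a minimal YAML snapshot sketch (the snapshot, source, and column names are hypothetical):
+
+```yaml
+snapshots:
+  - name: orders_snapshot
+    relation: source('jaffle_shop', 'orders') # hypothetical source
+    config:
+      schema: snapshots
+      unique_key: id
+      strategy: timestamp
+      updated_at: updated_at
+      dbt_valid_to_current: "to_date('9999-12-31')" # custom "current" indicator instead of NULL
+      hard_deletes: new_record                      # add a new record when a source row is deleted
+```
+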
### `state:modified` improvements
We’ve made improvements to `state:modified` behaviors to help reduce the risk of false positives and negatives. Read more about [the `state:modified` behavior flag](#managing-changes-to-legacy-behaviors) that unlocks this improvement:
diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
index 9163047e7e0..e9e45a69153 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
@@ -15,13 +15,9 @@ displayed_sidebar: "docs"
dbt Labs is committed to providing backward compatibility for all versions 1.x, except for any changes explicitly mentioned on this page. If you encounter an error upon upgrading, please let us know by [opening an issue](https://github.com/dbt-labs/dbt-core/issues/new).
-## Versionless
+## Release tracks
-dbt Cloud is going "versionless." This means you'll automatically get early access to new features and functionality before they're available in final releases of dbt Core.
-
-Select [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) in your development, staging, and production [environments](/docs/deploy/deploy-environments) to access to everything in dbt Core v1.8+ and more.
-
-To upgrade an environment in the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api) or [Terraform](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest), set `dbt_version` to the string `versionless`.
+Starting in 2024, dbt Cloud provides the functionality from new versions of dbt Core via [release tracks](/docs/dbt-versions/cloud-release-tracks) with automatic upgrades. Select a release track in your development, staging, and production [environments](/docs/deploy/deploy-environments) to access everything in dbt Core v1.8+ and more. To upgrade an environment in the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api) or [Terraform](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest), set `dbt_version` to the string `latest`.
## New and changed features and functionality
diff --git a/website/docs/docs/dbt-versions/core-versions.md b/website/docs/docs/dbt-versions/core-versions.md
index 4a490f96bd5..2f3cec44191 100644
--- a/website/docs/docs/dbt-versions/core-versions.md
+++ b/website/docs/docs/dbt-versions/core-versions.md
@@ -8,11 +8,11 @@ pagination_prev: null
dbt Core releases follow [semantic versioning](https://semver.org/) guidelines. For more on how we use semantic versions, see [How dbt Core uses semantic versioning](#how-dbt-core-uses-semantic-versioning).
-:::tip Go versionless and stay up to date, always
+:::tip Release Tracks keep you up to date, always
_Did you know that you can always be working with the latest features and functionality?_
-With dbt Cloud, you can get early access to new functionality before it becomes available in dbt Core and without the need of managing your own version upgrades. Refer to the [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) setting for details.
+With dbt Cloud, you can get early access to new functionality before it becomes available in dbt Core, without needing to manage your own version upgrades. Refer to the ["Latest" release track](/docs/dbt-versions/cloud-release-tracks) for details.
:::
diff --git a/website/docs/docs/dbt-versions/product-lifecycles.md b/website/docs/docs/dbt-versions/product-lifecycles.md
index e8711c825c4..01a8628d3ca 100644
--- a/website/docs/docs/dbt-versions/product-lifecycles.md
+++ b/website/docs/docs/dbt-versions/product-lifecycles.md
@@ -17,7 +17,7 @@ dbt Cloud features all fall into one of the following categories:
- **Beta:** Beta features are still in development and are only available to select customers. To join a beta, there might be a signup form or dbt Labs may contact specific customers about testing. Some features can be activated by enabling [experimental features](/docs/dbt-versions/experimental-features) in your account. Beta features are incomplete and might not be entirely stable; they should be used at the customer’s risk, as breaking changes could occur. Beta features might not be fully documented, technical support is limited, and service level objectives (SLOs) might not be provided. Download the [Beta Features Terms and Conditions](/assets/beta-tc.pdf) for more details.
- **Preview:** Preview features are stable and considered functionally ready for production deployments. Some planned additions and modifications to feature behaviors could occur before they become generally available. New functionality that is not backward compatible could also be introduced. Preview features include documentation, technical support, and service level objectives (SLOs). Features in preview are provided at no extra cost, although they might become paid features when they become generally available.
-- **Generally available (GA):** Generally available features provide stable features introduced to all qualified dbt Cloud accounts. Service level agreements (SLAs) apply to GA features, including documentation and technical support. Certain GA feature availability is determined by the dbt version of the environment. To always receive the latest GA features, ensure your dbt Cloud [environments](/docs/dbt-cloud-environments) are set to ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless).
+- **Generally available (GA):** Generally available features provide stable features introduced to all qualified dbt Cloud accounts. Service level agreements (SLAs) apply to GA features, including documentation and technical support. Certain GA feature availability is determined by the dbt version of the environment. To always receive the latest GA features, ensure your dbt Cloud [environments](/docs/dbt-cloud-environments) are on a supported [Release Track](/docs/dbt-versions/cloud-release-tracks).
- **Deprecated:** Features in this state are no longer being developed or enhanced by dbt Labs. They will continue functioning as-is, and their documentation will persist until their removal date. However, they are no longer subject to technical support.
- **Removed:** Removed features are no longer available on the platform in any capacity.
diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md
index 55116db68ba..f09a637653d 100644
--- a/website/docs/docs/dbt-versions/release-notes.md
+++ b/website/docs/docs/dbt-versions/release-notes.md
@@ -18,13 +18,20 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
\* The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability.
+## December 2024
+
+- **New**: You can now use your [Azure OpenAI key](/docs/cloud/account-integrations?ai-integration=azure#ai-integrations) (available in beta) to power dbt Cloud features like [dbt Copilot](/docs/cloud/dbt-copilot) and [Ask dbt](/docs/cloud-integrations/snowflake-native-app). Additionally, you can use your own [OpenAI API key](/docs/cloud/account-integrations?ai-integration=openai#ai-integrations) or the [dbt Labs-managed OpenAI](/docs/cloud/account-integrations?ai-integration=dbtlabs#ai-integrations) key. Refer to [AI integrations](/docs/cloud/account-integrations#ai-integrations) for more information.
+- **New**: The [`hard_deletes`](/reference/resource-configs/hard-deletes) config gives you more control over how to handle deleted rows from the source. Supported options are `ignore` (default), `invalidate` (replaces the legacy `invalidate_hard_deletes=true`), and `new_record`. Note that `new_record` will create a new metadata column in the snapshot table.
+
## November 2024
+- **Enhancement**: Trust signal icons in dbt Explorer are now available for Exposures, providing a quick view of data health while browsing resources. To view trust signal icons, go to dbt Explorer and click **Exposures** under the **Resource** tab. Refer to [Trust signal for resources](/docs/collaborate/explore-projects#trust-signals-for-resources) for more info.
+- **Bug**: Identified and fixed an error with Semantic Layer queries that take longer than 10 minutes to complete.
- **Fix**: Job environment variable overrides in credentials are now respected for Exports. Previously, they were ignored.
- **Behavior change**: If you use a custom microbatch macro, set a [`require_batched_execution_for_custom_microbatch_strategy` behavior flag](/reference/global-configs/behavior-changes#custom-microbatch-strategy) in your `dbt_project.yml` to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the [microbatch strategy](/docs/build/incremental-microbatch#how-microbatch-compares-to-other-incremental-strategies).
- **Enhancement**: For users that have Advanced CI's [compare changes](/docs/deploy/advanced-ci#compare-changes) feature enabled, you can optimize performance when running comparisons by using custom dbt syntax to customize deferral usage, exclude specific large models (or groups of models with tags), and more. Refer to [Compare changes custom commands](/docs/deploy/job-commands#compare-changes-custom-commands) for examples of how to customize the comparison command.
-- **New**: SQL linting in CI jobs is now generally available in dbt Cloud. You can enable SQL linting in your CI jobs, using [SQLFluff](https://sqlfluff.com/), to automatically lint all SQL files in your project as a run step before your CI job builds. SQLFluff linting is available on [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. Refer to [SQL linting](/docs/deploy/continuous-integration#sql-linting) for more information.
-- **New**: Use the [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table. This feature is available in dbt Cloud Versionless and dbt Core v1.9 and later.
-- **New**: Use the [`event_time`](/reference/resource-configs/event-time) configuration to specify "at what time did the row occur." This configuration is required for [Incremental microbatch](/docs/build/incremental-microbatch) and can be added to ensure you're comparing overlapping times in [Advanced CI's compare changes](/docs/deploy/advanced-ci). Available in dbt Cloud Versionless and dbt Core v1.9 and higher.
+- **New**: SQL linting in CI jobs is now generally available in dbt Cloud. You can enable SQL linting in your CI jobs, using [SQLFluff](https://sqlfluff.com/), to automatically lint all SQL files in your project as a run step before your CI job builds. SQLFluff linting is available on [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. Refer to [SQL linting](/docs/deploy/continuous-integration#sql-linting) for more information.
+- **New**: Use the [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current) config to set a custom indicator for the value of `dbt_valid_to` in current snapshot records (like a future date). By default, this value is `NULL`. When configured, dbt will use the specified value instead of `NULL` for `dbt_valid_to` for current records in the snapshot table. This feature is available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) (formerly called `Versionless`) and dbt Core v1.9 and later.
+- **New**: Use the [`event_time`](/reference/resource-configs/event-time) configuration to specify "at what time did the row occur." This configuration is required for [Incremental microbatch](/docs/build/incremental-microbatch) and can be added to ensure you're comparing overlapping times in [Advanced CI's compare changes](/docs/deploy/advanced-ci). Available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) (formerly called `Versionless`) and dbt Core v1.9 and higher.
- **Fix**: This update improves [dbt Semantic Layer Tableau integration](/docs/cloud-integrations/semantic-layer/tableau) making query parsing more reliable. Some key fixes include:
- Error messages for unsupported joins between saved queries and ALL tables.
- Improved handling of queries when multiple tables are selected in a data source.
@@ -41,7 +48,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
- Iceberg table support for [Snowflake](https://docs.getdbt.com/reference/resource-configs/snowflake-configs#iceberg-table-format)
- [Athena](https://docs.getdbt.com/reference/resource-configs/athena-configs) and [Teradata](https://docs.getdbt.com/reference/resource-configs/teradata-configs) adapter support in dbt Cloud
- dbt Cloud now hosted on [Azure](https://docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses)
- - Get comfortable with [Versionless dbt Cloud](https://docs.getdbt.com/docs/dbt-versions/versionless-cloud)
+ - Get comfortable with [dbt Cloud Release Tracks](https://docs.getdbt.com/docs/dbt-versions/cloud-release-tracks) that keep your project up-to-date, automatically — on a cadence appropriate for your team
- Scalable [microbatch incremental models](https://docs.getdbt.com/docs/build/incremental-microbatch)
- Advanced CI [features](https://docs.getdbt.com/docs/deploy/advanced-ci)
- [Linting with CI jobs](https://docs.getdbt.com/docs/deploy/continuous-integration#sql-linting)
@@ -69,17 +76,17 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
- **New**: The dbt Cloud IDE supports signed commits for Git, available for Enterprise plans. You can sign your Git commits when pushing them to the repository to prevent impersonation and enhance security. Supported Git providers are GitHub and GitLab. Refer to [Git commit signing](/docs/cloud/dbt-cloud-ide/git-commit-signing.md) for more information.
- **New:** With dbt Mesh, you can now enable bidirectional dependencies across your projects. Previously, dbt enforced dependencies to only go in one direction. dbt checks for cycles across projects and raises errors if any are detected. For details, refer to [Cycle detection](/docs/collaborate/govern/project-dependencies#cycle-detection). There's also the [Intro to dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) guide to help you learn more best practices.
- **New**: The [dbt Semantic Layer Python software development kit](/docs/dbt-cloud-apis/sl-python) is now [generally available](/docs/dbt-versions/product-lifecycles). It provides users with easy access to the dbt Semantic Layer with Python and enables developers to interact with the dbt Semantic Layer APIs to query metrics/dimensions in downstream tools.
-- **Enhancement**: You can now add a description to a singular data test in dbt Cloud Versionless. Use the [`description` property](/reference/resource-properties/description) to document [singular data tests](/docs/build/data-tests#singular-data-tests). You can also use [docs block](/docs/build/documentation#using-docs-blocks) to capture your test description. The enhancement will be included in upcoming dbt Core 1.9 release.
-- **New**: Introducing the [microbatch incremental model strategy](/docs/build/incremental-microbatch) (beta), available in dbt Cloud Versionless and will soon be supported in dbt Core 1.9. The microbatch strategy allows for efficient, batch-based processing of large time-series datasets for improved performance and resiliency, especially when you're working with data that changes over time (like new records being added daily). To enable this feature in dbt Cloud, set the `DBT_EXPERIMENTAL_MICROBATCH` environment variable to `true` in your project.
+- **Enhancement**: You can now add a description to a singular data test. Use the [`description` property](/reference/resource-properties/description) to document [singular data tests](/docs/build/data-tests#singular-data-tests). You can also use a [docs block](/docs/build/documentation#using-docs-blocks) to capture your test description. The enhancement is available now in [the "Latest" release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks), and it will be included in dbt Core v1.9.
+- **New**: Introducing the [microbatch incremental model strategy](/docs/build/incremental-microbatch) (beta), available now in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and soon to be supported in dbt Core v1.9. The microbatch strategy allows for efficient, batch-based processing of large time-series datasets for improved performance and resiliency, especially when you're working with data that changes over time (like new records being added daily). To enable this feature in dbt Cloud, set the `DBT_EXPERIMENTAL_MICROBATCH` environment variable to `true` in your project.
- **New**: The dbt Semantic Layer supports custom calendar configurations in MetricFlow, available in [Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). Custom calendar configurations allow you to query data using non-standard time periods like `fiscal_year` or `retail_month`. Refer to [custom calendar](/docs/build/metricflow-time-spine#custom-calendar) to learn how to define these custom granularities in your MetricFlow timespine YAML configuration.
-- **New**: In dbt Cloud Versionless, [Snapshots](/docs/build/snapshots) have been updated to use YAML configuration files instead of SQL snapshot blocks. This new feature simplifies snapshot management and improves performance, and will soon be released in dbt Core 1.9.
- - Who does this affect? New user on Versionless can define snapshots using the new YAML specification. Users upgrading to Versionless who use snapshots can keep their existing configuration or can choose to migrate their snapshot definitions to YAML.
- - Users on dbt 1.8 and earlier: No action is needed; existing snapshots will continue to work as before. However, we recommend upgrading to Versionless to take advantage of the new snapshot features.
+- **New**: In the "Latest" release track in dbt Cloud, [Snapshots](/docs/build/snapshots) have been updated to use YAML configuration files instead of SQL snapshot blocks. This new feature simplifies snapshot management and improves performance, and will soon be released in dbt Core 1.9.
+ - Who does this affect? Users of the "Latest" release track in dbt Cloud can define snapshots using the new YAML specification. Users upgrading to "Latest" who have existing snapshot definitions can keep their existing configurations, or they can choose to migrate their snapshot definitions to YAML.
+ - Users on older versions: No action is needed; existing snapshots will continue to work as before. However, we recommend upgrading to the "Latest" release track to take advantage of the new snapshot features.
- **Behavior change:** Set [`state_modified_compare_more_unrendered_values`](/reference/global-configs/behavior-changes#source-definitions-for-state) to true to reduce false positives for `state:modified` when configs differ between `dev` and `prod` environments.
- **Behavior change:** Set the [`skip_nodes_if_on_run_start_fails`](/reference/global-configs/behavior-changes#failures-in-on-run-start-hooks) flag to `True` to skip all selected resources from running if there is a failure on an `on-run-start` hook.
-- **Enhancement**: In dbt Cloud Versionless, snapshots defined in SQL files can now use `config` defined in `schema.yml` YAML files. This update resolves the previous limitation that required snapshot properties to be defined exclusively in `dbt_project.yml` and/or a `config()` block within the SQL file. This will also be released in dbt Core 1.9.
-- **New**: In dbt Cloud Versionless, the `snapshot_meta_column_names` config allows for customizing the snapshot metadata columns. This feature allows an organization to align these automatically-generated column names with their conventions, and will be included in the upcoming dbt Core 1.9 release.
-- **Enhancement**: dbt Cloud versionless began inferring a model's `primary_key` based on configured data tests and/or constraints within `manifest.json`. The inferred `primary_key` is visible in dbt Explorer and utilized by the dbt Cloud [compare changes](/docs/deploy/run-visibility#compare-tab) feature. This will also be released in dbt Core 1.9. Read about the [order dbt infers columns can be used as primary key of a model](https://github.com/dbt-labs/dbt-core/blob/7940ad5c7858ff11ef100260a372f2f06a86e71f/core/dbt/contracts/graph/nodes.py#L534-L541).
+- **Enhancement**: In the "Latest" release track in dbt Cloud, snapshots defined in SQL files can now use `config` defined in `schema.yml` YAML files. This update resolves the previous limitation that required snapshot properties to be defined exclusively in `dbt_project.yml` and/or a `config()` block within the SQL file. This will also be released in dbt Core 1.9.
+- **New**: In the "Latest" release track in dbt Cloud, the `snapshot_meta_column_names` config allows for customizing the snapshot metadata columns. This feature allows an organization to align these automatically-generated column names with their conventions, and will be included in the upcoming dbt Core 1.9 release.
+- **Enhancement**: The "Latest" release track in dbt Cloud infers a model's `primary_key` based on configured data tests and/or constraints within `manifest.json`. The inferred `primary_key` is visible in dbt Explorer and utilized by the dbt Cloud [compare changes](/docs/deploy/run-visibility#compare-tab) feature. This will also be released in dbt Core 1.9. Read about the [order dbt infers columns can be used as primary key of a model](https://github.com/dbt-labs/dbt-core/blob/7940ad5c7858ff11ef100260a372f2f06a86e71f/core/dbt/contracts/graph/nodes.py#L534-L541).
- **New:** dbt Explorer now includes trust signal icons, which is currently available as a [Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). Trust signals offer a quick, at-a-glance view of data health when browsing your dbt models in Explorer. These icons indicate whether a model is **Healthy**, **Caution**, **Degraded**, or **Unknown**. For accurate health data, ensure the resource is up-to-date and has had a recent job run. Refer to [Trust signals](/docs/collaborate/explore-projects#trust-signals-for-resources) for more information.
- **New:** Auto exposures are now available in Preview in dbt Cloud. Auto-exposures helps users understand how their models are used in downstream analytics tools to inform investments and reduce incidents. It imports and auto-generates exposures based on Tableau dashboards, with user-defined curation. To learn more, refer to [Auto exposures](/docs/collaborate/auto-exposures).
@@ -89,14 +96,14 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
- **Fix**: MetricFlow updated `get_and_expire` to replace the unsupported `GETEX` command with a `GET` and conditional expiration, ensuring compatibility with Azure Redis 6.0.
- **Enhancement**: The [dbt Semantic Layer Python SDK](/docs/dbt-cloud-apis/sl-python) now supports `TimeGranularity` custom grain for metrics. This feature allows you to define custom time granularities for metrics, such as `fiscal_year` or `retail_month`, to query data using non-standard time periods.
- **New**: Use the dbt Copilot AI engine to generate semantic models for your models, now available in beta. dbt Copilot automatically generates documentation, tests, and now semantic models based on the data in your model. To learn more, refer to [dbt Copilot](/docs/cloud/dbt-copilot).
-- **New**: Use the new recommended syntax for [defining `foreign_key` constraints](/reference/resource-properties/constraints) using `refs`, available in dbt Cloud Versionless. This will soon be released in dbt Core v1.9. This new syntax will capture dependencies and works across different environments.
+- **New**: Use the new recommended syntax for [defining `foreign_key` constraints](/reference/resource-properties/constraints) using `refs`, available in the "Latest" release track in dbt Cloud. This will soon be released in dbt Core v1.9. This new syntax will capture dependencies and works across different environments.
- **Enhancement**: You can now run [Semantic Layer commands](/docs/build/metricflow-commands) commands in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud). The supported commands are `dbt sl list`, `dbt sl list metrics`, `dbt sl list dimension-values`, `dbt sl list saved-queries`, `dbt sl query`, `dbt sl list dimensions`, `dbt sl list entities`, and `dbt sl validate`.
- **New**: Microsoft Excel, a dbt Semantic Layer integration, is now generally available. The integration allows you to connect to Microsoft Excel to query metrics and collaborate with your team. Available for [Excel Desktop](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationId=4132ecd1-425d-982d-efb4-de94ebc83f26) or [Excel Online](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationid=4132ecd1-425d-982d-efb4-de94ebc83f26&isWac=True). For more information, refer to [Microsoft Excel](/docs/cloud-integrations/semantic-layer/excel).
- **New**: [Data health tile](/docs/collaborate/data-tile) is now generally available in dbt Explorer. Data health tiles provide a quick at-a-glance view of your data quality, highlighting potential issues in your data. You can embed these tiles in your dashboards to quickly identify and address data quality issues in your dbt project.
- **New**: dbt Explorer's Model query history feature is now in Preview for dbt Cloud Enterprise customers. Model query history allows you to view the count of consumption queries for a model based on the data warehouse's query logs. This feature gives data teams insight so they can focus their time and infrastructure spend on the most-used data products. To learn more, refer to [Model query history](/docs/collaborate/model-query-history).
- **Enhancement**: You can now use [Extended Attributes](/docs/dbt-cloud-environments#extended-attributes) and [Environment Variables](/docs/build/environment-variables) when connecting to the Semantic Layer. If you set a value directly in the Semantic Layer Credentials, it will have a higher priority than Extended Attributes. When using environment variables, the default value for the environment will be used. If you're using exports, job environment variable overrides aren't supported yet, but they will be soon.
- **New:** There are two new [environment variable defaults](/docs/build/environment-variables#dbt-cloud-context) — `DBT_CLOUD_ENVIRONMENT_NAME` and `DBT_CLOUD_ENVIRONMENT_TYPE`.
-- **New:** The [Amazon Athena warehouse connection](/docs/cloud/connect-data-platform/connect-amazon-athena) is available as a public preview for dbt Cloud accounts that have upgraded to [`versionless`](/docs/dbt-versions/versionless-cloud).
+- **New:** The [Amazon Athena warehouse connection](/docs/cloud/connect-data-platform/connect-amazon-athena) is available as a public preview for dbt Cloud accounts that have upgraded to [the "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
## August 2024
@@ -222,15 +229,15 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **New**: The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) introduces [declarative caching](/docs/use-dbt-semantic-layer/sl-cache), allowing you to cache common queries to speed up performance and reduce query compute costs. Available for dbt Cloud Team or Enterprise accounts.
--
+-
- The **Versionless** setting is now Generally Available (previously Public Preview).
+ The **Latest** Release Track is now Generally Available (previously Public Preview).
- When the new **Versionless** setting is enabled, you get a versionless experience and always get the latest features and early access to new functionality for your dbt project. dbt Labs will handle upgrades behind-the-scenes, as part of testing and redeploying the dbt Cloud application — just like other dbt Cloud capabilities and other SaaS tools that you're using. No more manual upgrades and no more need for _a second sandbox project_ just to try out new features in development.
+ On this release track, you get automatic upgrades of dbt, including early access to the latest features, fixes, and performance improvements for your dbt project. dbt Labs will handle upgrades behind-the-scenes, as part of testing and redeploying the dbt Cloud application — just like other dbt Cloud capabilities and other SaaS tools that you're using. No more manual upgrades and no more need for _a second sandbox project_ just to try out new features in development.
- To learn more about the new setting, refer to [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) for details.
+ To learn more about the new setting, refer to [Release Tracks](/docs/dbt-versions/cloud-release-tracks) for details.
-
+
@@ -246,7 +253,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
-- **Behavior change:** Introduced the `require_explicit_package_overrides_for_builtin_materializations` flag, opt-in and disabled by default. If set to `True`, dbt will only use built-in materializations defined in the root project or within dbt, rather than implementations in packages. This will become the default in May 2024 (dbt Core v1.8 and "Versionless" dbt Cloud). Read [Package override for built-in materialization](/reference/global-configs/behavior-changes#package-override-for-built-in-materialization) for more information.
+- **Behavior change:** Introduced the `require_explicit_package_overrides_for_builtin_materializations` flag, opt-in and disabled by default. If set to `True`, dbt will only use built-in materializations defined in the root project or within dbt, rather than implementations in packages. This will become the default in May 2024 (dbt Core v1.8 and dbt Cloud release tracks). Read [Package override for built-in materialization](/reference/global-configs/behavior-changes#package-override-for-built-in-materialization) for more information.
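+
+  To opt in early, you can set the flag in your project's `dbt_project.yml`. A minimal sketch:
+
+  ```yaml
+  # dbt_project.yml
+  flags:
+    require_explicit_package_overrides_for_builtin_materializations: true
+  ```
+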
**dbt Semantic Layer**
- **New**: Use Saved selections to [save your query selections](/docs/cloud-integrations/semantic-layer/gsheets#using-saved-selections) within the [Google Sheets application](/docs/cloud-integrations/semantic-layer/gsheets). They can be made private or public and refresh upon loading.
@@ -301,15 +308,15 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
--
+-
_Now available in the dbt version dropdown in dbt Cloud — starting with select customers, rolling out to wider availability through February and March._
- When the new **Versionless** setting is enabled, you always get the latest fixes and early access to new functionality for your dbt project. dbt Labs will handle upgrades behind-the-scenes, as part of testing and redeploying the dbt Cloud application — just like other dbt Cloud capabilities and other SaaS tools that you're using. No more manual upgrades and no more need for _a second sandbox project_ just to try out new features in development.
+ On this release track, you get automatic upgrades of dbt, including early access to the latest features, fixes, and performance improvements for your dbt project. dbt Labs will handle upgrades behind-the-scenes, as part of testing and redeploying the dbt Cloud application — just like other dbt Cloud capabilities and other SaaS tools that you're using. No more manual upgrades and no more need for _a second sandbox project_ just to try out new features in development.
- To learn more about the new setting, refer to [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) for details.
+ To learn more about the new setting, refer to [Release Tracks](/docs/dbt-versions/cloud-release-tracks) for details.
-
+
diff --git a/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md b/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md
index cfe27d5e9d7..52faa9385fa 100644
--- a/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md
+++ b/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md
@@ -7,17 +7,22 @@ In dbt Cloud, both [jobs](/docs/deploy/jobs) and [environments](/docs/dbt-cloud-
## Environments
-Navigate to the settings page of an environment, then click **Edit**. Click the **dbt version** dropdown bar and make your selection. You can select a previous release of dbt Core or go [**Versionless**](#versionless) (recommended). Be sure to save your changes before navigating away.
+Navigate to the settings page of an environment, then click **Edit**. Click the **dbt version** dropdown bar and make your selection. You can select a [release track](#release-tracks) to receive ongoing updates (recommended), or a legacy version of dbt Core. Be sure to save your changes before navigating away.
-### Versionless
+### Release Tracks
-By choosing to go **Versionless**, you opt for an experience that provides the latest features and early access to new functionality for your dbt project. dbt Labs will handle upgrades for you, as part of testing and redeploying the dbt Cloud SaaS application. Versionless always includes the most recent features before they're in dbt Core, and more.
+Starting in 2024, your project will be upgraded automatically on a cadence that you choose.
-You can upgrade to the **Versionless** experience no matter which version of dbt you currently have selected. As a best practice, dbt Labs recommends that you test the upgrade in development first; use the [Override dbt version](#override-dbt-version) setting to test _your_ project on the latest dbt version before upgrading your deployment environments and the default development environment for all your colleagues.
+The **Latest** track ensures you have up-to-date dbt Cloud functionality, and early access to new features of the dbt framework. The **Compatible** and **Extended** tracks are designed for customers who need a less-frequent release cadence, the ability to test new dbt releases before they go live in production, and/or ongoing compatibility with the latest open source releases of dbt Core.
-To upgrade an environment in the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api) or [Terraform](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest), set `dbt_version` to the string `versionless`.
+As a best practice, dbt Labs recommends that you test the upgrade in development first; use the [Override dbt version](#override-dbt-version) setting to test _your_ project on the latest dbt version before upgrading your deployment environments and the default development environment for all your colleagues.
+
+To upgrade an environment in the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api) or [Terraform](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest), set `dbt_version` to the name of your release track:
+- `latest` (formerly called `versionless`; the old name is still supported)
+- `compatible` (available to Team + Enterprise)
+- `extended` (available to Enterprise)
### Override dbt version
diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md
index 3da04ff6948..1128dfd7abc 100644
--- a/website/docs/docs/deploy/ci-jobs.md
+++ b/website/docs/docs/deploy/ci-jobs.md
@@ -10,7 +10,7 @@ You can set up [continuous integration](/docs/deploy/continuous-integration) (CI
- You have a dbt Cloud account.
- CI features:
- For both the [concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/).
- - [SQL linting](/docs/deploy/continuous-integration#sql-linting) is available on [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. You should have [SQLFluff configured](/docs/deploy/continuous-integration#to-configure-sqlfluff-linting) in your project.
+ - [SQL linting](/docs/deploy/continuous-integration#sql-linting) is available on [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. You should have [SQLFluff configured](/docs/deploy/continuous-integration#to-configure-sqlfluff-linting) in your project.
- [Advanced CI](/docs/deploy/advanced-ci) features:
- For the [compare changes](/docs/deploy/advanced-ci#compare-changes) feature, your dbt Cloud account must be on the [Enterprise plan](https://www.getdbt.com/pricing/) and have enabled Advanced CI features. Please ask your [dbt Cloud administrator to enable](/docs/cloud/account-settings#account-access-to-advanced-ci-features) this feature for you. After enablement, the **dbt compare** option becomes available in the CI job settings.
- Set up a [connection with your Git provider](/docs/cloud/git/git-configuration-in-dbt-cloud). This integration lets dbt Cloud run jobs on your behalf for job triggering.
diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md
index 38ce34678ce..c738e641a5b 100644
--- a/website/docs/docs/deploy/continuous-integration.md
+++ b/website/docs/docs/deploy/continuous-integration.md
@@ -58,7 +58,7 @@ CI runs don't consume run slots. This guarantees a CI check will never block a p
### SQL linting
-Available for [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and dbt Cloud Team or Enterprise accounts.
+Available on [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks) and dbt Cloud Team or Enterprise accounts.
When [enabled for your CI job](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt invokes [SQLFluff](https://sqlfluff.com/) which is a modular and configurable SQL linter that warns you of complex functions, syntax, formatting, and compilation errors. By default, it lints all the changed SQL files in your project (compared to the last deferred production state).
diff --git a/website/docs/docs/deploy/model-notifications.md b/website/docs/docs/deploy/model-notifications.md
index 6db67bcf81e..a80de15cb92 100644
--- a/website/docs/docs/deploy/model-notifications.md
+++ b/website/docs/docs/deploy/model-notifications.md
@@ -28,7 +28,7 @@ Create configuration YAML files in your project for dbt to send notifications ab
## Prerequisites
- Your dbt Cloud administrator has [enabled the appropriate account setting](#enable-access-to-model-notifications) for you.
-- Your environment(s) must be on ["Versionless"](/docs/dbt-versions/versionless-cloud).
+- Your environment(s) must be on a [release track](/docs/dbt-versions/cloud-release-tracks) instead of a legacy dbt Core version.
## Configure groups
diff --git a/website/docs/docs/use-dbt-semantic-layer/exports.md b/website/docs/docs/use-dbt-semantic-layer/exports.md
index 5d6e4c0d996..1883212fb66 100644
--- a/website/docs/docs/use-dbt-semantic-layer/exports.md
+++ b/website/docs/docs/use-dbt-semantic-layer/exports.md
@@ -176,7 +176,7 @@ If exports aren't needed, you can set the value(s) to `FALSE` (`DBT_INCLUDE_SAVE
-
+
1. Click **Deploy** in the top navigation bar and choose **Environments**.
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-cache.md b/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
index 0c6387959a3..27ffe97a951 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
@@ -22,7 +22,7 @@ While you can use caching to speed up your queries and reduce compute time, know
## Prerequisites
- dbt Cloud [Team or Enterprise](https://www.getdbt.com/) plan.
-- dbt Cloud environments that are ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless).
+- dbt Cloud environments must be on [release tracks](/docs/dbt-versions/cloud-release-tracks) and not legacy dbt Core versions.
- A successful job run and [production environment](/docs/deploy/deploy-environments#set-as-production-environment).
- For declarative caching, you need to have [exports](/docs/use-dbt-semantic-layer/exports) defined in your [saved queries](/docs/build/saved-queries) YAML configuration file.
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index 1a69be98b29..37ef5ec0412 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -666,7 +666,7 @@ In order to enable the [`dbt init` command](/reference/commands/init) to prompt
See examples:
-- [dbt-postgres](https://github.com/dbt-labs/dbt-core/blob/main/plugins/postgres/dbt/include/postgres/profile_template.yml)
+- [dbt-postgres](https://github.com/dbt-labs/dbt-postgres/blob/main/dbt/include/postgres/profile_template.yml)
- [dbt-redshift](https://github.com/dbt-labs/dbt-redshift/blob/main/dbt/include/redshift/profile_template.yml)
- [dbt-snowflake](https://github.com/dbt-labs/dbt-snowflake/blob/main/dbt/include/snowflake/profile_template.yml)
- [dbt-bigquery](https://github.com/dbt-labs/dbt-bigquery/blob/main/dbt/include/bigquery/profile_template.yml)
diff --git a/website/docs/guides/core-cloud-2.md b/website/docs/guides/core-cloud-2.md
index cee1e8029c2..ddc0e883d84 100644
--- a/website/docs/guides/core-cloud-2.md
+++ b/website/docs/guides/core-cloud-2.md
@@ -155,7 +155,7 @@ After [setting the foundations of dbt Cloud](https://docs.getdbt.com/guides/core
Once you’ve confirmed that dbt Cloud orchestration and CI/CD are working as expected, you should pause your current orchestration tool and stop or update your current CI/CD process. This is not relevant if you’re still using an external orchestrator (such as Airflow), and you’ve swapped out `dbt-core` execution for dbt Cloud execution (through the [API](/docs/dbt-cloud-apis/overview)).
Familiarize your team with dbt Cloud's [features](/docs/cloud/about-cloud/dbt-cloud-features) and optimize development and deployment processes. Some key features to consider include:
-- **Version management:** Manage [dbt versions](/docs/dbt-versions/upgrade-dbt-version-in-cloud) and ensure team collaboration with dbt Cloud's one-click feature, removing the hassle of manual updates and version discrepancies. You can go [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to always get the latest features and early access to new functionality for your dbt project.
+- **Release tracks:** Choose a [release track](/docs/dbt-versions/cloud-release-tracks) for automatic dbt version upgrades, at the cadence appropriate for your team — removing the hassle of manual updates and the risk of version discrepancies. You can also get early access to new functionality, ahead of dbt Core.
- **Development tools**: Use the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) to build, test, run, and version control your dbt projects.
- **Documentation and Source freshness:** Automate storage of [documentation](/docs/build/documentation) and track [source freshness](/docs/deploy/source-freshness) in dbt Cloud, which streamlines project maintenance.
- **Notifications and logs:** Receive immediate [notifications](/docs/deploy/monitor-jobs) for job failures, with direct links to the job details. Access comprehensive logs for all job runs to help with troubleshooting.
diff --git a/website/docs/guides/core-to-cloud-1.md b/website/docs/guides/core-to-cloud-1.md
index efed66c862a..3d6b119c178 100644
--- a/website/docs/guides/core-to-cloud-1.md
+++ b/website/docs/guides/core-to-cloud-1.md
@@ -58,8 +58,7 @@ This guide outlines the steps you need to take to move from dbt Core to dbt Clou
## Prerequisites
-- You have an existing dbt Core project connected to a Git repository and data platform supported in [dbt Cloud](/docs/cloud/connect-data-platform/about-connections).
-- A [supported version](/docs/dbt-versions/core) of dbt or select [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) of dbt.
+- You have an existing dbt Core project connected to a Git repository and data platform supported in [dbt Cloud](/docs/cloud/connect-data-platform/about-connections).
- You have a dbt Cloud account. **[Don't have one? Start your free trial today](https://www.getdbt.com/signup)**!
## Account setup
@@ -147,8 +146,8 @@ The most common data environments are production, staging, and development. The
### Initial setup steps
1. **Set up development environment** — Set up your [development](/docs/dbt-cloud-environments#create-a-development-environment) environment and [development credentials](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud#access-the-cloud-ide). You’ll need this to access your dbt project and start developing.
-2. **dbt Core version** — In your dbt Cloud environment and credentials, use the same dbt Core version you use locally. You can run `dbt --version` in the command line to find out which version of dbt Core you’re using.
- - When using dbt Core, you need to think about which version you’re using and manage your own upgrades. When using dbt Cloud, leverage ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) so you don’t have to.
+2. **dbt Core version** — In your dbt Cloud environment, select a [release track](/docs/dbt-versions/cloud-release-tracks) for ongoing dbt version upgrades. If your team plans to use both dbt Core and dbt Cloud for developing or deploying your dbt project, you can run `dbt --version` in the command line to find out which version of dbt Core you’re using.
+ - When using dbt Core, you need to think about which version you’re using and manage your own upgrades. When using dbt Cloud, leverage [release tracks](/docs/dbt-versions/cloud-release-tracks) so you don’t have to.
3. **Connect to your data platform** — When using dbt Cloud, you can [connect to your data platform](/docs/cloud/connect-data-platform/about-connections) directly in the UI.
- Each environment is roughly equivalent to an entry in your `profiles.yml` file. This means you don't need a `profiles.yml` file in your project.
@@ -210,7 +209,7 @@ To use the [dbt Cloud's job scheduler](/docs/deploy/job-scheduler), set up one e
### Initial setup steps
1. **dbt Core version** — In your environment settings, configure dbt Cloud with the same dbt Core version.
- - Once your full migration is complete, we recommend upgrading your environments to ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to always get the latest features and more. You only need to do this once.
+ - Once your full migration is complete, we recommend upgrading your environments to [release tracks](/docs/dbt-versions/cloud-release-tracks) to always get the latest features and more. You only need to do this once.
2. **Configure your jobs** — [Create jobs](/docs/deploy/deploy-jobs#create-and-schedule-jobs) for scheduled or event-driven dbt jobs. You can use cron execution, manual, pull requests, or trigger on the completion of another job.
- Note that alongside [jobs in dbt Cloud](/docs/deploy/jobs), discover other ways to schedule and run your dbt jobs with the help of other tools. Refer to [Integrate with other tools](/docs/deploy/deployment-tools) for more information.
diff --git a/website/docs/guides/core-to-cloud-3.md b/website/docs/guides/core-to-cloud-3.md
index 7d482d54471..81222471345 100644
--- a/website/docs/guides/core-to-cloud-3.md
+++ b/website/docs/guides/core-to-cloud-3.md
@@ -36,7 +36,7 @@ You may have already started your move to dbt Cloud and are looking for tips to
In dbt Cloud, you can natively connect to your data platform and test its [connection](/docs/connect-adapters) with a click of a button. This is especially useful for users who are new to dbt Cloud or are looking to streamline their connection setup. Here are some tips and caveats to consider:
### Tips
-- Manage [dbt versions](/docs/dbt-versions/upgrade-dbt-version-in-cloud) and ensure team collaboration with dbt Cloud's one-click feature, eliminating the need for manual updates and version discrepancies. You can go [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to always get the latest features and early access to new functionality for your dbt project.
+- Manage [dbt versions](/docs/dbt-versions/upgrade-dbt-version-in-cloud) and ensure team collaboration with dbt Cloud's one-click feature, eliminating the need for manual updates and version discrepancies. Select a [release track](/docs/dbt-versions/cloud-release-tracks) for ongoing updates, to always stay up to date with fixes and (optionally) get early access to new functionality for your dbt project.
- dbt Cloud supports a whole host of [cloud providers](/docs/cloud/connect-data-platform/about-connections), including Snowflake, Databricks, BigQuery, Fabric, and Redshift (to name a few).
- Use [Extended Attributes](/docs/deploy/deploy-environments#extended-attributes) to set a flexible [profiles.yml](/docs/core/connect-data-platform/profiles.yml) snippet in your dbt Cloud environment settings. It gives you more control over environments (both deployment and development) and extends how dbt Cloud connects to the data platform within a given environment.
- For example, if you have a field in your `profiles.yml` that you’d like to add to the dbt Cloud adapter user interface, you can use Extended Attributes to set it.
diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md
index be23524d096..668d3f6f1dd 100644
--- a/website/docs/guides/custom-cicd-pipelines.md
+++ b/website/docs/guides/custom-cicd-pipelines.md
@@ -506,7 +506,7 @@ Additionally, you’ll see the job in the run history of dbt Cloud. It should be
-
+
diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md
index 47ece7b29ec..9a7aa8b0ce0 100644
--- a/website/docs/guides/mesh-qs.md
+++ b/website/docs/guides/mesh-qs.md
@@ -40,7 +40,6 @@ To leverage dbt Mesh, you need the following:
- You must have a [dbt Cloud Enterprise account](https://www.getdbt.com/get-started/enterprise-contact-pricing)
- You have access to a cloud data platform, permissions to load the sample data tables, and dbt Cloud permissions to create new projects.
-- Set your development and deployment [environments](/docs/dbt-cloud-environments) to use dbt [version](/docs/dbt-versions/core) 1.6 or later. You can also opt to go ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to always get the most recent features and functionality.
- This guide uses the Jaffle Shop sample data, including `customers`, `orders`, and `payments` tables. Follow the provided instructions to load this data into your respective data platform:
- [Snowflake](https://docs.getdbt.com/guides/snowflake?step=3)
- [Databricks](https://docs.getdbt.com/guides/databricks?step=3)
diff --git a/website/docs/guides/sl-snowflake-qs.md b/website/docs/guides/sl-snowflake-qs.md
index d9de3f0e5fd..79038cd1dfc 100644
--- a/website/docs/guides/sl-snowflake-qs.md
+++ b/website/docs/guides/sl-snowflake-qs.md
@@ -106,7 +106,6 @@ Open a new tab and follow these quick steps for account setup and data loading i
-- Production and development environments must be on [dbt version 1.6 or higher](/docs/dbt-versions/upgrade-dbt-version-in-cloud). Alternatively, set your environment to [**Versionless**](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) to always get the latest updates.
- Create a [trial Snowflake account](https://signup.snowflake.com/):
- Select the Enterprise Snowflake edition with ACCOUNTADMIN access. Consider organizational questions when choosing a cloud provider, refer to Snowflake's [Introduction to Cloud Platforms](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms).
- Select a cloud provider and region. All cloud providers and regions will work so choose whichever you prefer.
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index 1eae3a13fb0..f1edd5ffc00 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -230,6 +230,26 @@ Now that you have a repository configured, you can initialize your project and s
```
- In the command line bar at the bottom, enter `dbt run` and click **Enter**. You should see a `dbt run succeeded` message.
+:::info
+If you receive an insufficient privileges error on Snowflake at this point, it may be because your Snowflake role doesn't have permission to access the raw source data, to build target tables and views, or both.
+
+To troubleshoot, use a role with sufficient privileges (like `ACCOUNTADMIN`) and run the following commands in Snowflake.
+
+**Note**: Replace `snowflake_role_name` with the role you intend to use. If you launched dbt Cloud with Snowflake Partner Connect, use `pc_dbt_role` as the role.
+
+```sql
+grant all on database raw to role snowflake_role_name;
+grant all on database analytics to role snowflake_role_name;
+
+grant all on schema raw.jaffle_shop to role snowflake_role_name;
+grant all on schema raw.stripe to role snowflake_role_name;
+
+grant all on all tables in database raw to role snowflake_role_name;
+grant all on future tables in database raw to role snowflake_role_name;
+```
+
+:::
+
## Build your first model
You have two options for working with files in the dbt Cloud IDE:
diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index 112fff63a38..7b71bf70f45 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -31,7 +31,7 @@ If you've just cloned or downloaded an existing dbt project, `dbt init` can stil
`dbt init` knows how to prompt for connection information by looking for a file named `profile_template.yml`. It will look for this file in two places:
-- **Adapter plugin:** What's the bare minumum Postgres profile? What's the type of each field, what are its defaults? This information is stored in a file called [`dbt/include/postgres/profile_template.yml`](https://github.com/dbt-labs/dbt-core/blob/main/plugins/postgres/dbt/include/postgres/profile_template.yml). If you're the maintainer of an adapter plugin, we highly recommend that you add a `profile_template.yml` to your plugin, too. Refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide for more information.
+- **Adapter plugin:** What's the bare minimum Postgres profile? What's the type of each field, and what are its defaults? This information is stored in a file called [`dbt/include/postgres/profile_template.yml`](https://github.com/dbt-labs/dbt-postgres/blob/main/dbt/include/postgres/profile_template.yml). If you're the maintainer of an adapter plugin, we highly recommend that you add a `profile_template.yml` to your plugin, too. Refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide for more information.
- **Existing project:** If you're the maintainer of an existing project, and you want to help new users get connected to your database quickly and easily, you can include your own custom `profile_template.yml` in the root of your project, alongside `dbt_project.yml`. For common connection attributes, set the values in `fixed`; leave user-specific attributes in `prompts`, but with custom hints and defaults as you'd like.
diff --git a/website/docs/reference/commands/run.md b/website/docs/reference/commands/run.md
index 26db40cb7e4..58a876f98ef 100644
--- a/website/docs/reference/commands/run.md
+++ b/website/docs/reference/commands/run.md
@@ -83,4 +83,15 @@ See [global configs](/reference/global-configs/print-output#print-color)
The `run` command supports the `--empty` flag for building schema-only dry runs. The `--empty` flag limits the refs and sources to zero rows. dbt will still execute the model SQL against the target data warehouse but will avoid expensive reads of input data. This validates dependencies and ensures your models will build properly.
-
\ No newline at end of file
+
+
+## Status codes
+
+When calling the [List Runs API](/dbt-cloud/api-v2#/operations/List%20Runs), you will get a status code for each run returned. The available run status codes are as follows:
+
+- Starting = 1
+- Running = 3
+- Success = 10
+- Error = 20
+- Canceled = 30
+- Skipped = 40
diff --git a/website/docs/reference/commands/version.md b/website/docs/reference/commands/version.md
index 3847b3cd593..4d5ce6524dd 100644
--- a/website/docs/reference/commands/version.md
+++ b/website/docs/reference/commands/version.md
@@ -13,7 +13,7 @@ The `--version` command-line flag returns information about the currently instal
## Versioning
To learn more about release versioning for dbt Core, refer to [How dbt Core uses semantic versioning](/docs/dbt-versions/core#how-dbt-core-uses-semantic-versioning).
-If using [versionless dbt Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless), then `dbt_version` uses the latest (continuous) release version. This also follows semantic versioning guidelines, using the `YYYY.MM.DD+` format. The year, month, and day represent the date the version was built (for example, `2024.10.28+996c6a8`). The suffix provides an additional unique identification for each build.
+If using a [dbt Cloud release track](/docs/dbt-versions/cloud-release-tracks), which provides ongoing updates to dbt, then `dbt_version` represents the release version of dbt in dbt Cloud. This also follows semantic versioning guidelines, using the `YYYY.MM.DD+` format. The year, month, and day represent the date the version was built (for example, `2024.10.28+996c6a8`). The suffix provides an additional unique identification for each build.
## Example usages
diff --git a/website/docs/reference/dbt-jinja-functions/model.md b/website/docs/reference/dbt-jinja-functions/model.md
index 516981e11e3..b0995ff958c 100644
--- a/website/docs/reference/dbt-jinja-functions/model.md
+++ b/website/docs/reference/dbt-jinja-functions/model.md
@@ -20,9 +20,9 @@ To view the contents of `model` for a given model:
-
+
-If you're using the CLI, use [log()](/reference/dbt-jinja-functions/log) to print the full contents:
+If you're using the command line interface (CLI), use [log()](/reference/dbt-jinja-functions/log) to print the full contents:
```jinja
{{ log(model, info=True) }}
@@ -42,6 +42,48 @@ If you're using the CLI, use [log()](/reference/dbt-jinja-functions/log) to prin
+## Batch properties for microbatch models
+
+Starting in dbt Core v1.9, the model object includes a `batch` property (`model.batch`), which provides details about the current batch when executing an [incremental microbatch](/docs/build/incremental-microbatch) model. This property is only populated during the batch execution of a microbatch model.
+
+The following table describes the properties of the `batch` object. The **Example** column shows how to access each property from the `model.batch` object.
+
+| Property | Description | Example |
+| -------- | ----------- | ------- |
+| `id` | The unique identifier for the batch within the context of the microbatch model. | `model.batch.id` |
+| `event_time_start` | The start time of the batch's [`event_time`](/reference/resource-configs/event-time) filter (inclusive). | `model.batch.event_time_start` |
+| `event_time_end` | The end time of the batch's `event_time` filter (exclusive). | `model.batch.event_time_end` |
+
+### Usage notes
+
+`model.batch` is only available during the execution of a microbatch model batch. Outside of the microbatch execution, `model.batch` is `None`, and its sub-properties aren't accessible.
+
+#### Example of safeguarding access to batch properties
+
+We recommend always checking whether `model.batch` is populated before accessing its properties. To do this, wrap the access in an `if` statement:
+
+```jinja
+{% if model.batch %}
+  {{ log(model.batch.id) }} {# Log the batch ID #}
+  {{ log(model.batch.event_time_start) }} {# Log the start time of the batch #}
+  {{ log(model.batch.event_time_end) }} {# Log the end time of the batch #}
+{% endif %}
+```
+
+In this example, the `if model.batch` statement makes sure that the code only runs during a batch execution. `log()` is used to print the `batch` properties for debugging.
+
+#### Example of logging batch details
+
+This is a practical example of how you might use `model.batch` in a microbatch model to log details such as the batch ID and its event time range:
+
+```jinja
+{% if model.batch %}
+ {{ log("Processing batch with ID: " ~ model.batch.id, info=True) }}
+ {{ log("Batch event time range: " ~ model.batch.event_time_start ~ " to " ~ model.batch.event_time_end, info=True) }}
+{% endif %}
+```
+As in the previous example, the `if model.batch` check ensures the logging only runs during a batch execution, and `info=True` also prints the messages to the console.
+
## Model structure and JSON schema
To view the structure of `models` and their definitions:
diff --git a/website/docs/reference/dbt-jinja-functions/this.md b/website/docs/reference/dbt-jinja-functions/this.md
index f9f2961b08f..7d358cb6299 100644
--- a/website/docs/reference/dbt-jinja-functions/this.md
+++ b/website/docs/reference/dbt-jinja-functions/this.md
@@ -20,8 +20,6 @@ meta:
## Examples
-
-
### Configuring incremental models
diff --git a/website/docs/reference/dbtignore.md b/website/docs/reference/dbtignore.md
index 8733fc592cd..063b455f5cc 100644
--- a/website/docs/reference/dbtignore.md
+++ b/website/docs/reference/dbtignore.md
@@ -20,6 +20,13 @@ another-non-dbt-model.py
# ignore all .py files with "codegen" in the filename
*codegen*.py
+
+# ignore everything in a directory
+path/to/folders/**
+
+# ignore everything in a specific subdirectory
+path/to/folders/subfolder/**
+
```
diff --git a/website/docs/reference/global-configs/behavior-changes.md b/website/docs/reference/global-configs/behavior-changes.md
index bccf96eb728..bda4d2b361a 100644
--- a/website/docs/reference/global-configs/behavior-changes.md
+++ b/website/docs/reference/global-configs/behavior-changes.md
@@ -64,9 +64,9 @@ flags:
-When we use dbt Cloud in the following table, we're referring to accounts that have gone "[Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless)." This table outlines which version of dbt Core contains the behavior change or the date the behavior change was added to dbt Cloud.
+This table outlines the month in which each behavior change was introduced (disabled by default) and matured (enabled by default) on the "Latest" release track in dbt Cloud, along with the corresponding dbt Core versions.
-| Flag | dbt Cloud: Intro | dbt Cloud: Maturity | dbt Core: Intro | dbt Core: Maturity |
+| Flag | dbt Cloud "Latest": Intro | dbt Cloud "Latest": Maturity | dbt Core: Intro | dbt Core: Maturity |
|-----------------------------------------------------------------|------------------|---------------------|-----------------|--------------------|
| [require_explicit_package_overrides_for_builtin_materializations](#package-override-for-built-in-materialization) | 2024.04 | 2024.06 | 1.6.14, 1.7.14 | 1.8.0 |
| [require_resource_names_without_spaces](#no-spaces-in-resource-names) | 2024.05 | TBD* | 1.8.0 | 1.10.0 |
@@ -179,7 +179,7 @@ Previously, users needed to set the `DBT_EXPERIMENTAL_MICROBATCH` environment va
### Cumulative metrics
-[Cumulative-type metrics](/docs/build/cumulative#parameters) are nested under the `cumulative_type_params` field in Versionless dbt Cloud, dbt Core v1.9 and newer. Currently, dbt will warn users if they have cumulative metrics improperly nested. To enforce the new format (resulting in an error instead of a warning), set the `require_nested_cumulative_type_params` to `True`.
+[Cumulative-type metrics](/docs/build/cumulative#parameters) are nested under the `cumulative_type_params` field in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks), dbt Core v1.9 and newer. Currently, dbt will warn users if they have cumulative metrics improperly nested. To enforce the new format (resulting in an error instead of a warning), set the `require_nested_cumulative_type_params` to `True`.
Use the following metric configured with the syntax before v1.9 as an example:
@@ -192,7 +192,7 @@ Use the following metric configured with the syntax before v1.9 as an example:
```
-If you run `dbt parse` with that syntax on Core v1.9 or Versionless dbt Cloud, you will receive a warning like:
+If you run `dbt parse` with that syntax on Core v1.9 or [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks), you will receive a warning like:
```bash
@@ -224,4 +224,4 @@ Once the metric is updated, it will work as expected:
cumulative_type_params:
window: 7 days
-```
\ No newline at end of file
+```
diff --git a/website/docs/reference/global-configs/resource-type.md b/website/docs/reference/global-configs/resource-type.md
index 431b6c049cb..9a888c73885 100644
--- a/website/docs/reference/global-configs/resource-type.md
+++ b/website/docs/reference/global-configs/resource-type.md
@@ -6,7 +6,7 @@ sidebar: "resource type"
-The `--resource-type` and `--exclude-resource-type` flags include or exclude resource types from the `dbt build`, `dbt clone`, and `dbt list` commands. In Versionless and from dbt v1.9 onwards, these flags are also supported in the `dbt test` command.
+The `--resource-type` and `--exclude-resource-type` flags include or exclude resource types from the `dbt build`, `dbt clone`, and `dbt list` commands. From dbt v1.9 onwards, these flags are also supported in the `dbt test` command.
diff --git a/website/docs/reference/global-configs/version-compatibility.md b/website/docs/reference/global-configs/version-compatibility.md
index 80841678a85..7667dcfda9c 100644
--- a/website/docs/reference/global-configs/version-compatibility.md
+++ b/website/docs/reference/global-configs/version-compatibility.md
@@ -14,7 +14,7 @@ Running with dbt=1.0.0
Found 13 models, 2 tests, 1 archives, 0 analyses, 204 macros, 2 operations....
```
-:::info Versionless
+:::info dbt Cloud release tracks
:::
diff --git a/website/docs/reference/project-configs/model-paths.md b/website/docs/reference/project-configs/model-paths.md
index f01dd29a8fd..44a40c33066 100644
--- a/website/docs/reference/project-configs/model-paths.md
+++ b/website/docs/reference/project-configs/model-paths.md
@@ -12,7 +12,7 @@ model-paths: [directorypath]
## Definition
-Optionally specify a custom list of directories where [models](/docs/build/models) and [sources](/docs/build/sources) are located.
+Optionally specify a custom list of directories where [models](/docs/build/models), [sources](/docs/build/sources), and [unit tests](/docs/build/unit-tests) are located.
## Default
By default, dbt will search for models and sources in the `models` directory. For example, `model-paths: ["models"]`.
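+
+For example, a project that keeps models in both the default `models` directory and a hypothetical `transformations` directory could list both paths in `dbt_project.yml`. A minimal sketch:
+
+```yaml
+# dbt_project.yml
+model-paths: ["models", "transformations"]
+```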
diff --git a/website/docs/reference/project-configs/on-run-start-on-run-end.md b/website/docs/reference/project-configs/on-run-start-on-run-end.md
index 74557839f11..347ce54ab63 100644
--- a/website/docs/reference/project-configs/on-run-start-on-run-end.md
+++ b/website/docs/reference/project-configs/on-run-start-on-run-end.md
@@ -27,8 +27,6 @@ A SQL statement (or list of SQL statements) to be run at the start or end of the
## Examples
-
-
### Grant privileges on all schemas that dbt uses at the end of a run
This leverages the [schemas](/reference/dbt-jinja-functions/schemas) variable that is only available in an `on-run-end` hook.
diff --git a/website/docs/reference/project-configs/require-dbt-version.md b/website/docs/reference/project-configs/require-dbt-version.md
index 97b42e036ec..f659370af4e 100644
--- a/website/docs/reference/project-configs/require-dbt-version.md
+++ b/website/docs/reference/project-configs/require-dbt-version.md
@@ -22,7 +22,7 @@ When you set this configuration, dbt sends a helpful error message for any user
If this configuration is not specified, no version check will occur.
-:::info Versionless
+:::info dbt Cloud release tracks
diff --git a/website/docs/reference/project-configs/snapshot-paths.md b/website/docs/reference/project-configs/snapshot-paths.md
index a4dd5af9434..a13697fc705 100644
--- a/website/docs/reference/project-configs/snapshot-paths.md
+++ b/website/docs/reference/project-configs/snapshot-paths.md
@@ -16,11 +16,11 @@ snapshot-paths: [directorypath]
Optionally specify a custom list of directories where [snapshots](/docs/build/snapshots) are located.
-In [Versionless](/docs/dbt-versions/versionless-cloud) and on dbt v1.9 and higher, you can co-locate your snapshots with models if they are [defined using the latest YAML syntax](/docs/build/snapshots).
+In dbt Core v1.9+, you can co-locate your snapshots with models if they are [defined using the latest YAML syntax](/docs/build/snapshots).
-Note that you cannot co-locate models and snapshots. However, in [Versionless](/docs/dbt-versions/versionless-cloud) and on dbt v1.9 and higher, you can co-locate your snapshots with models if they are [defined using the latest YAML syntax](/docs/build/snapshots).
+Note that you cannot co-locate models and snapshots. However, in dbt Core v1.9+, you can co-locate your snapshots with models if they are [defined using the latest YAML syntax](/docs/build/snapshots).
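+
+For example, with the latest YAML syntax, a snapshot can live alongside your models rather than in a separate `snapshots` directory. A minimal sketch (the file path, snapshot name, and source reference are hypothetical):
+
+```yaml
+# models/orders_snapshot.yml
+snapshots:
+  - name: orders_snapshot
+    relation: source('jaffle_shop', 'orders')
+    config:
+      unique_key: id
+      strategy: timestamp
+      updated_at: updated_at
+```
+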
## Default
diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md
index 3f36bbd0d8f..c14804ef2a7 100644
--- a/website/docs/reference/resource-configs/alias.md
+++ b/website/docs/reference/resource-configs/alias.md
@@ -100,7 +100,7 @@ models:
alias: unique_order_id_test
```
-When using `--store-failures`, this would return the name `analytics.finance.orders_order_id_unique_order_id_test` in the database.
+When using [`store_failures_as`](/reference/resource-configs/store_failures_as), this would return the name `analytics.finance.orders_order_id_unique_order_id_test` in the database.
diff --git a/website/docs/reference/resource-configs/athena-configs.md b/website/docs/reference/resource-configs/athena-configs.md
index f871ede9fab..fd5bc663ee7 100644
--- a/website/docs/reference/resource-configs/athena-configs.md
+++ b/website/docs/reference/resource-configs/athena-configs.md
@@ -109,7 +109,7 @@ lf_grants={
There are some limitations and recommendations that should be considered:
- `lf_tags` and `lf_tags_columns` configs support only attaching lf tags to corresponding resources.
-- We recommend managing LF Tags permissions somewhere outside dbt. For example, [terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lakeformation_permissions) or [aws cdk](https://docs.aws.amazon.com/cdk/api/v1/docs/aws-lakeformation-readme.html).
+- We recommend managing LF Tags permissions somewhere outside dbt. For example, [terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lakeformation_permissions) or [aws cdk](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lakeformation-readme.html).
- `data_cell_filters` management can't be automated outside dbt because the filter can't be attached to the table, which doesn't exist. Once you `enable` this config, dbt will set all filters and their permissions during every dbt run. Such an approach keeps the actual state of row-level security configuration after every dbt run and applies changes if they occur: drop, create, and update filters and their permissions.
- Any tags listed in `lf_inherited_tags` should be strictly inherited from the database level and never overridden at the table and column level.
- Currently, `dbt-athena` does not differentiate between an inherited tag association and an override it made previously.
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index 9dd39c936b6..ab5f562f57c 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -425,9 +425,10 @@ Please note that in order for policy tags to take effect, [column-level `persist
The [`incremental_strategy` config](/docs/build/incremental-strategy) controls how dbt builds incremental models. dbt uses a [merge statement](https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax) on BigQuery to refresh incremental tables.
-The `incremental_strategy` config can be set to one of two values:
- - `merge` (default)
- - `insert_overwrite`
+The `incremental_strategy` config can be set to one of the following values:
+- `merge` (default)
+- `insert_overwrite`
+- [`microbatch`](/docs/build/incremental-microbatch)
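+
+As a rough sketch, a microbatch model could be configured in a properties YAML file as follows (the model name, `event_time` column, start date, and batch size are illustrative; see the microbatch docs for platform-specific requirements such as partitioning):
+
+```yaml
+models:
+  - name: stg_events
+    config:
+      materialized: incremental
+      incremental_strategy: microbatch
+      event_time: occurred_at
+      begin: '2024-01-01'
+      batch_size: day
+```
+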
### Performance and cost
@@ -561,7 +562,7 @@ If no `partitions` configuration is provided, dbt will instead:
3. Query the destination table to find the _max_ partition in the database
When building your model SQL, you can take advantage of the introspection performed
-by dbt to filter for only _new_ data. The max partition in the destination table
+by dbt to filter for only _new_ data. The maximum value in the partitioned field in the destination table
will be available using the `_dbt_max_partition` BigQuery scripting variable. **Note:**
this is a BigQuery SQL variable, not a dbt Jinja variable, so no jinja brackets are
required to access this variable.
diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md
index 48ac0c8451c..6c57e7e2c69 100644
--- a/website/docs/reference/resource-configs/database.md
+++ b/website/docs/reference/resource-configs/database.md
@@ -49,7 +49,7 @@ This would result in the generated relation being located in the `staging` datab
-Available for versionless dbt Cloud or dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs.
+Available for dbt Cloud release tracks or dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs.
diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index c77f3494aa7..6ac3e23c113 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -51,7 +51,7 @@ We do not yet have a PySpark API to set tblproperties at table creation, so this
-dbt Core v.9 and Versionless dbt Cloud support for `table_format: iceberg`, in addition to all previous table configurations supported in 1.8.
+dbt-databricks v1.9 adds support for the `table_format: iceberg` config. Try it now on the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks). All other table configurations were also supported in 1.8.
| Option | Description | Required? | Model Support | Example |
|---------------------|-----------------------------|-------------------------------------------|-----------------|--------------------------|
@@ -76,7 +76,7 @@ dbt Core v.9 and Versionless dbt Cloud support for `table_format: iceberg`, in a
### Python submission methods
-In dbt v1.9 and higher, or in [Versionless](/docs/dbt-versions/versionless-cloud) dbt Cloud, you can use these four options for `submission_method`:
+In dbt-databricks v1.9 (try it now in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks)), you can use these four options for `submission_method`:
* `all_purpose_cluster`: Executes the python model either directly using the [command api](https://docs.databricks.com/api/workspace/commandexecution) or by uploading a notebook and creating a one-off job run
* `job_cluster`: Creates a new job cluster to execute an uploaded notebook as a one-off job run
diff --git a/website/docs/reference/resource-configs/dbt_valid_to_current.md b/website/docs/reference/resource-configs/dbt_valid_to_current.md
index 7c0e33aa5d7..2a6cf3abe6d 100644
--- a/website/docs/reference/resource-configs/dbt_valid_to_current.md
+++ b/website/docs/reference/resource-configs/dbt_valid_to_current.md
@@ -6,7 +6,7 @@ default_value: {NULL}
id: "dbt_valid_to_current"
---
-Available from dbt v1.9 or with [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud.
+Available from dbt v1.9 or with [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
diff --git a/website/docs/reference/resource-configs/event-time.md b/website/docs/reference/resource-configs/event-time.md
index d8c0c0e0472..c18c8de6397 100644
--- a/website/docs/reference/resource-configs/event-time.md
+++ b/website/docs/reference/resource-configs/event-time.md
@@ -7,7 +7,7 @@ description: "dbt uses event_time to understand when an event occurred. When def
datatype: string
---
-Available in dbt Cloud Versionless and dbt Core v1.9 and higher.
+Available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9 and higher.
diff --git a/website/docs/reference/resource-configs/hard-deletes.md b/website/docs/reference/resource-configs/hard-deletes.md
new file mode 100644
index 00000000000..50c8046f4e1
--- /dev/null
+++ b/website/docs/reference/resource-configs/hard-deletes.md
@@ -0,0 +1,111 @@
+---
+title: hard_deletes
+resource_types: [snapshots]
+description: "Use the `hard_deletes` config to control how deleted rows are tracked in your snapshot table."
+datatype: "boolean"
+default_value: {ignore}
+id: "hard-deletes"
+sidebar_label: "hard_deletes"
+---
+
+Available from dbt v1.9 or with the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
+
+
+
+
+```yaml
+snapshots:
+ - name:
+ config:
+ hard_deletes: 'ignore' | 'invalidate' | 'new_record'
+```
+
+
+
+
+```yml
+snapshots:
+ [](/reference/resource-configs/resource-path):
+ +hard_deletes: "ignore" | "invalidate" | "new_record"
+```
+
+
+
+
+
+```sql
+{{
+ config(
+ unique_key='id',
+ strategy='timestamp',
+ updated_at='updated_at',
+ hard_deletes='ignore' | 'invalidate' | 'new_record'
+ )
+}}
+```
+
+
+
+
+## Description
+
+The `hard_deletes` config gives you more control on how to handle deleted rows from the source. Supported options are `ignore` (default), `invalidate` (replaces the legacy `invalidate_hard_deletes=true`), and `new_record`. Note that `new_record` will create a new metadata column in the snapshot table.
+
+import HardDeletes from '/snippets/_hard-deletes.md';
+
+
+
+:::warning
+
+If you're updating an existing snapshot to use the `hard_deletes` config, dbt _will not_ handle migrations automatically. We recommend either using this config only for net-new snapshots, or [arranging an update](/reference/snapshot-configs#snapshot-configuration-migration) of pre-existing tables before enabling it.
+:::
+
+## Default
+
+If you don’t specify `hard_deletes`, it defaults to `ignore`. Deleted rows are not tracked and their `dbt_valid_to` column remains `NULL`.
+
+The `hard_deletes` config has three methods:
+
+| Methods | Description |
+| --------- | ----------- |
+| `ignore` (default) | No action for deleted records. |
+| `invalidate` | Behaves the same as the legacy `invalidate_hard_deletes=true`: deleted records are invalidated by setting `dbt_valid_to` to the current time. This method replaces the `invalidate_hard_deletes` config and gives you more control over how to handle deleted rows from the source. |
+| `new_record` | Tracks deleted records as new rows, flagged with the `dbt_is_deleted` meta column. |
+
+## Considerations
+- **Backward compatibility**: The `invalidate_hard_deletes` config is still supported for existing snapshots but can't be used alongside `hard_deletes`.
+- **New snapshots**: For new snapshots, we recommend using `hard_deletes` instead of `invalidate_hard_deletes`.
+- **Migration**: If you switch an existing snapshot to use `hard_deletes` without migrating your data, you may encounter inconsistent or incorrect results, such as a mix of old and new data formats.
+
+## Example
+
+
+
+```yaml
+snapshots:
+ - name: my_snapshot
+ config:
+ hard_deletes: new_record # options are: 'ignore', 'invalidate', or 'new_record'
+ strategy: timestamp
+ updated_at: updated_at
+ columns:
+ - name: dbt_valid_from
+ description: Timestamp when the record became valid.
+ - name: dbt_valid_to
+ description: Timestamp when the record stopped being valid.
+ - name: dbt_is_deleted
+ description: Indicates whether the record was deleted.
+```
+
+
+
+With the `hard_deletes: new_record` configuration, if a record is deleted and later restored, the resulting snapshot table might look like this:
+
+| id | dbt_scd_id | Status | dbt_updated_at | dbt_valid_from | dbt_valid_to | dbt_is_deleted |
+| -- | -------------------- | ----- | -------------------- | --------------------| -------------------- | ----------- |
+| 1 | 60a1f1dbdf899a4dd... | pending | 2024-10-02 ... | 2024-05-19... | 2024-05-20 ... | False |
+| 1 | b1885d098f8bcff51... | pending | 2024-10-02 ... | 2024-05-20 ... | 2024-06-03 ... | True |
+| 1 | b1885d098f8bcff53... | shipped | 2024-10-02 ... | 2024-06-03 ... | | False |
+| 2 | b1885d098f8bcff55... | active | 2024-10-02 ... | 2024-05-19 ... | | False |
+
+In this example, the `dbt_is_deleted` column is set to `True` when the record is deleted. When the record is restored, the `dbt_is_deleted` column is set to `False`.
diff --git a/website/docs/reference/resource-configs/invalidate_hard_deletes.md b/website/docs/reference/resource-configs/invalidate_hard_deletes.md
index bdaec7e33a9..67123487fa1 100644
--- a/website/docs/reference/resource-configs/invalidate_hard_deletes.md
+++ b/website/docs/reference/resource-configs/invalidate_hard_deletes.md
@@ -1,9 +1,17 @@
---
+title: invalidate_hard_deletes (legacy)
resource_types: [snapshots]
description: "Invalidate_hard_deletes - Read this in-depth guide to learn about configurations in dbt."
datatype: column_name
+sidebar_label: invalidate_hard_deletes (legacy)
---
+:::warning This is a legacy config — Use the [`hard_deletes`](/reference/resource-configs/hard-deletes) config instead.
+
+In dbt Core v1.9+ and the dbt Cloud "Latest" release track, the [`hard_deletes`](/reference/resource-configs/hard-deletes) config replaces the `invalidate_hard_deletes` config for better control over how to handle deleted rows from the source.
+
+For new snapshots, set the config to `hard_deletes='invalidate'` instead of `invalidate_hard_deletes=true` (see the sketch below). For existing snapshots, [arrange an update](/reference/snapshot-configs#snapshot-configuration-migration) of pre-existing tables before enabling this setting.
+:::
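+
+As a minimal sketch of that substitution in a snapshot YAML file (the snapshot name and column names are illustrative):
+
+```yaml
+snapshots:
+  - name: orders_snapshot
+    config:
+      strategy: timestamp
+      unique_key: id
+      updated_at: updated_at
+      # Legacy (dbt v1.8 and earlier): invalidate_hard_deletes: true
+      hard_deletes: "invalidate"
+```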
diff --git a/website/docs/reference/resource-configs/postgres-configs.md b/website/docs/reference/resource-configs/postgres-configs.md
index f2bf90a93c0..e71c6f1484d 100644
--- a/website/docs/reference/resource-configs/postgres-configs.md
+++ b/website/docs/reference/resource-configs/postgres-configs.md
@@ -11,6 +11,7 @@ In dbt-postgres, the following incremental materialization strategies are suppor
- `append` (default when `unique_key` is not defined)
- `merge`
- `delete+insert` (default when `unique_key` is defined)
+- [`microbatch`](/docs/build/incremental-microbatch)
## Performance optimizations
diff --git a/website/docs/reference/resource-configs/pre-hook-post-hook.md b/website/docs/reference/resource-configs/pre-hook-post-hook.md
index bd01a7be840..ee3c81b0fd6 100644
--- a/website/docs/reference/resource-configs/pre-hook-post-hook.md
+++ b/website/docs/reference/resource-configs/pre-hook-post-hook.md
@@ -160,8 +160,6 @@ import SQLCompilationError from '/snippets/_render-method.md';
## Examples
-
-
### [Redshift] Unload one model to S3
diff --git a/website/docs/reference/resource-configs/redshift-configs.md b/website/docs/reference/resource-configs/redshift-configs.md
index b033cd6267e..01c9bffd055 100644
--- a/website/docs/reference/resource-configs/redshift-configs.md
+++ b/website/docs/reference/resource-configs/redshift-configs.md
@@ -17,6 +17,7 @@ In dbt-redshift, the following incremental materialization strategies are suppor
- `append` (default when `unique_key` is not defined)
- `merge`
- `delete+insert` (default when `unique_key` is defined)
+- [`microbatch`](/docs/build/incremental-microbatch)
All of these strategies are inherited from dbt-postgres.
diff --git a/website/docs/reference/resource-configs/schema.md b/website/docs/reference/resource-configs/schema.md
index b239e26bd87..6f56215de61 100644
--- a/website/docs/reference/resource-configs/schema.md
+++ b/website/docs/reference/resource-configs/schema.md
@@ -50,7 +50,7 @@ This would result in the generated relation being located in the `mappings` sche
-Available for versionless dbt Cloud or dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs.
+Available in dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
diff --git a/website/docs/reference/resource-configs/snapshot_meta_column_names.md b/website/docs/reference/resource-configs/snapshot_meta_column_names.md
index 46aba7886d0..f1d29ba8bee 100644
--- a/website/docs/reference/resource-configs/snapshot_meta_column_names.md
+++ b/website/docs/reference/resource-configs/snapshot_meta_column_names.md
@@ -6,7 +6,7 @@ default_value: {"dbt_valid_from": "dbt_valid_from", "dbt_valid_to": "dbt_valid_t
id: "snapshot_meta_column_names"
---
-Starting in 1.9 or with [versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) dbt Cloud.
+Available in dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
@@ -19,6 +19,7 @@ snapshots:
dbt_valid_to:
dbt_scd_id:
dbt_updated_at:
+ dbt_is_deleted:
```
@@ -34,6 +35,7 @@ snapshots:
"dbt_valid_to": "",
"dbt_scd_id": "",
"dbt_updated_at": "",
+ "dbt_is_deleted": "",
}
)
}}
@@ -52,7 +54,7 @@ snapshots:
dbt_valid_to:
dbt_scd_id:
dbt_updated_at:
-
+ dbt_is_deleted:
```
@@ -71,6 +73,7 @@ By default, dbt snapshots use the following column names to track change history
| `dbt_valid_to` | The timestamp when this row is no longer valid. | |
| `dbt_scd_id` | A unique key generated for each snapshot row. | This is used internally by dbt. |
| `dbt_updated_at` | The `updated_at` timestamp of the source record when this snapshot row was inserted. | This is used internally by dbt. |
+| `dbt_is_deleted` | A boolean value indicating if the record has been deleted. `True` if deleted, `False` otherwise. | Added when `hard_deletes='new_record'` is configured. |
However, these column names can be customized using the `snapshot_meta_column_names` config.
@@ -92,18 +95,21 @@ snapshots:
unique_key: id
strategy: check
check_cols: all
+ hard_deletes: new_record
snapshot_meta_column_names:
dbt_valid_from: start_date
dbt_valid_to: end_date
dbt_scd_id: scd_id
dbt_updated_at: modified_date
+ dbt_is_deleted: is_deleted
```
The resulting snapshot table contains the configured meta column names:
-| id | scd_id | modified_date | start_date | end_date |
-| -- | -------------------- | -------------------- | -------------------- | -------------------- |
-| 1 | 60a1f1dbdf899a4dd... | 2024-10-02 ... | 2024-10-02 ... | 2024-10-02 ... |
-| 2 | b1885d098f8bcff51... | 2024-10-02 ... | 2024-10-02 ... | |
+| id | scd_id | modified_date | start_date | end_date | is_deleted |
+| -- | -------------------- | -------------------- | -------------------- | -------------------- | ---------- |
+| 1 | 60a1f1dbdf899a4dd... | 2024-10-02 ... | 2024-10-02 ... | 2024-10-03 ... | False |
+| 1 | 60a1f1dbdf899a4dd... | 2024-10-03 ... | 2024-10-03 ... | | True |
+| 2 | b1885d098f8bcff51... | 2024-10-02 ... | 2024-10-02 ... | | False |
diff --git a/website/docs/reference/resource-configs/snowflake-configs.md b/website/docs/reference/resource-configs/snowflake-configs.md
index 7bef180e3d3..d576b195b65 100644
--- a/website/docs/reference/resource-configs/snowflake-configs.md
+++ b/website/docs/reference/resource-configs/snowflake-configs.md
@@ -38,11 +38,11 @@ flags:
The following configurations are supported.
For more information, check out the Snowflake reference for [`CREATE ICEBERG TABLE` (Snowflake as the catalog)](https://docs.snowflake.com/en/sql-reference/sql/create-iceberg-table-snowflake).
-| Field | Type | Required | Description | Sample input | Note |
-| --------------------- | ------ | -------- | -------------------------------------------------------------------------------------------------------------------------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Table Format | String | Yes | Configures the objects table format. | `iceberg` | `iceberg` is the only accepted value. |
+| Field | Type | Required | Description | Sample input | Note |
+| ------ | ----- | -------- | ------------- | ------------ | ------ |
+| Table Format | String | Yes | Configures the object's table format. | `iceberg` | `iceberg` is the only accepted value. |
| External volume | String | Yes(*) | Specifies the identifier (name) of the external volume where Snowflake writes the Iceberg table's metadata and data files. | `my_s3_bucket` | *You don't need to specify this if the account, database, or schema already has an associated external volume. [More info](https://docs.snowflake.com/en/sql-reference/sql/create-iceberg-table-snowflake#:~:text=Snowflake%20Table%20Structures.-,external_volume) |
-| Base location Subpath | String | No | An optional suffix to add to the `base_location` path that dbt automatically specifies. | `jaffle_marketing_folder` | We recommend that you do not specify this. Modifying this parameter results in a new Iceberg table. See [Base Location](#base-location) for more info. |
+| Base location Subpath | String | No | An optional suffix to add to the `base_location` path that dbt automatically specifies. | `jaffle_marketing_folder` | We recommend that you do not specify this. Modifying this parameter results in a new Iceberg table. See [Base Location](#base-location) for more info. |
### Example configuration
@@ -470,8 +470,15 @@ In this example, you can set up a query tag to be applied to every query with th
The [`incremental_strategy` config](/docs/build/incremental-strategy) controls how dbt builds incremental models. By default, dbt will use a [merge statement](https://docs.snowflake.net/manuals/sql-reference/sql/merge.html) on Snowflake to refresh incremental tables.
+Snowflake supports the following incremental strategies:
+- `merge` (default)
+- `append`
+- `delete+insert`
+- [`microbatch`](/docs/build/incremental-microbatch) (see the sketch below)
+
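+For the `microbatch` strategy, a minimal model sketch might look like the following (the `event_time` column, `begin` date, and upstream `stg_sessions` model are illustrative assumptions, not part of this page):
+
+```sql
+{{
+    config(
+        materialized='incremental',
+        incremental_strategy='microbatch',
+        event_time='session_started_at',  -- column used to split rows into time-based batches
+        begin='2024-01-01',               -- earliest date to backfill from
+        batch_size='day'
+    )
+}}
+
+select * from {{ ref('stg_sessions') }}
+```
+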
Snowflake's `merge` statement fails with a "nondeterministic merge" error if the `unique_key` specified in your model config is not actually unique. If you encounter this error, you can instruct dbt to use a two-step incremental approach by setting the `incremental_strategy` config for your model to `delete+insert`.
+
## Configuring table clustering
dbt supports [table clustering](https://docs.snowflake.net/manuals/user-guide/tables-clustering-keys.html) on Snowflake. To control clustering for a table or incremental model, use the `cluster_by` config. When this configuration is applied, dbt will do two things:
@@ -701,4 +708,4 @@ flags:
```
-
\ No newline at end of file
+
diff --git a/website/docs/reference/resource-configs/spark-configs.md b/website/docs/reference/resource-configs/spark-configs.md
index 3b2174b8ff5..a52fd93eace 100644
--- a/website/docs/reference/resource-configs/spark-configs.md
+++ b/website/docs/reference/resource-configs/spark-configs.md
@@ -37,7 +37,8 @@ For that reason, the dbt-spark plugin leans heavily on the [`incremental_strateg
- **`append`** (default): Insert new records without updating or overwriting any existing data.
- **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the table with new data. If no `partition_by` is specified, overwrite the entire table with new data.
- **`merge`** (Delta, Iceberg and Hudi file format only): Match records based on a `unique_key`; update old records, insert new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)
-
+- **`microbatch`**: Implements the [microbatch strategy](/docs/build/incremental-microbatch) using `event_time` to define time-based ranges for filtering data.
+
Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block.
### The `append` strategy
diff --git a/website/docs/reference/resource-configs/target_database.md b/website/docs/reference/resource-configs/target_database.md
index 3c07b442107..f80dd31f214 100644
--- a/website/docs/reference/resource-configs/target_database.md
+++ b/website/docs/reference/resource-configs/target_database.md
@@ -6,7 +6,9 @@ datatype: string
:::note
-For [versionless](/docs/dbt-versions/core-upgrade/upgrading-to-v1.8#versionless) dbt Cloud accounts and dbt Core v1.9+, this functionality is no longer utilized. Use the [database](/reference/resource-configs/database) config as an alternative to define a custom database while still respecting the `generate_database_name` macro.
+In dbt Core v1.9 and later, this functionality is no longer utilized. Use the [database](/reference/resource-configs/database) config as an alternative to define a custom database while still respecting the `generate_database_name` macro.
+
+Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
:::
diff --git a/website/docs/reference/resource-configs/target_schema.md b/website/docs/reference/resource-configs/target_schema.md
index ffa95df9be7..1117e3ec42c 100644
--- a/website/docs/reference/resource-configs/target_schema.md
+++ b/website/docs/reference/resource-configs/target_schema.md
@@ -6,7 +6,9 @@ datatype: string
:::info
-For [versionless](/docs/dbt-versions/core-upgrade/upgrading-to-v1.8#versionless) dbt Cloud accounts and dbt Core v1.9+, this configuration is no longer required. Use the [schema](/reference/resource-configs/schema) config as an alternative to define a custom schema while still respecting the `generate_schema_name` macro.
+In dbt Core v1.9 and later, this configuration is no longer required. Use the [schema](/reference/resource-configs/schema) config as an alternative to define a custom schema while still respecting the `generate_schema_name` macro.
+
+Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
:::
@@ -40,7 +42,7 @@ On **BigQuery**, this is analogous to a `dataset`.
## Default
This is a required parameter, no default is provided.
-For versionless dbt Cloud accounts and dbt Core v1.9+, this is not a required parameter.
+In dbt Core v1.9+ and the dbt Cloud "Latest" release track, this is not a required parameter.
## Examples
### Build all snapshots in a schema named `snapshots`
diff --git a/website/docs/reference/resource-configs/unique_key.md b/website/docs/reference/resource-configs/unique_key.md
index 41884e175d2..77c99937295 100644
--- a/website/docs/reference/resource-configs/unique_key.md
+++ b/website/docs/reference/resource-configs/unique_key.md
@@ -52,7 +52,7 @@ snapshots:
## Description
A column name or expression that is unique for the inputs of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly.
-In Versionless and dbt v1.9 and later, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key.
+In dbt Cloud "Latest" and dbt v1.9+, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key.
:::caution
diff --git a/website/docs/reference/resource-properties/constraints.md b/website/docs/reference/resource-properties/constraints.md
index 6ba20db090f..1e418e884be 100644
--- a/website/docs/reference/resource-properties/constraints.md
+++ b/website/docs/reference/resource-properties/constraints.md
@@ -29,7 +29,7 @@ Foreign key constraints accept two additional inputs:
- `to`: A relation input, likely `ref()`, indicating the referenced table.
- `to_columns`: A list of column(s) in that table containing the corresponding primary or unique key.
-This syntax for defining foreign keys uses `ref`, meaning it will capture dependencies and works across different environments. It's available in [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) and versions of dbt Core starting with v1.9.
+This syntax for defining foreign keys uses `ref`, meaning it will capture dependencies and works across different environments. It's available in [dbt Cloud "Latest"](/docs/dbt-versions/cloud-release-tracks) and [dbt Core v1.9+](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9).
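+
+A minimal sketch of a foreign key constraint using these inputs (the model and column names are illustrative):
+
+```yaml
+models:
+  - name: orders
+    columns:
+      - name: customer_id
+        constraints:
+          - type: foreign_key
+            to: ref('customers')
+            to_columns: [id]
+```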
diff --git a/website/docs/reference/resource-properties/unit-tests.md b/website/docs/reference/resource-properties/unit-tests.md
index 08081c4c24a..7bc177a133c 100644
--- a/website/docs/reference/resource-properties/unit-tests.md
+++ b/website/docs/reference/resource-properties/unit-tests.md
@@ -7,7 +7,7 @@ datatype: test
:::note
-This functionality is only supported in dbt Core v1.8+ or dbt Cloud accounts that have gone ["Versionless"](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless).
+This functionality is available in dbt Core v1.8+ and [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks).
:::
diff --git a/website/docs/reference/snapshot-configs.md b/website/docs/reference/snapshot-configs.md
index 3445c7ecac9..018988a4934 100644
--- a/website/docs/reference/snapshot-configs.md
+++ b/website/docs/reference/snapshot-configs.md
@@ -8,30 +8,16 @@ meta:
import ConfigResource from '/snippets/_config-description-resource.md';
import ConfigGeneral from '/snippets/_config-description-general.md';
-
## Related documentation
* [Snapshots](/docs/build/snapshots)
* The `dbt snapshot` [command](/reference/commands/snapshot)
-
## Available configurations
### Snapshot-specific configurations
-
-
-import SnapshotYaml from '/snippets/_snapshot-yaml-spec.md';
-
-
-
-
-
[+](/reference/resource-configs/plus-prefix)[check_cols](/reference/resource-configs/check_cols): [] | all
[+](/reference/resource-configs/plus-prefix)[snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names): {}
- [+](/reference/resource-configs/plus-prefix)[invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes) : true | false
+ [+](/reference/resource-configs/plus-prefix)[dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current):
+ [+](/reference/resource-configs/plus-prefix)[hard_deletes](/reference/resource-configs/hard-deletes): string
```
@@ -113,7 +100,8 @@ snapshots:
[updated_at](/reference/resource-configs/updated_at):
[check_cols](/reference/resource-configs/check_cols): [] | all
[snapshot_meta_column_names](/reference/resource-configs/snapshot_meta_column_names): {}
- [invalidate_hard_deletes](/reference/resource-configs/invalidate_hard_deletes) : true | false
+ [hard_deletes](/reference/resource-configs/hard-deletes): string
+ [dbt_valid_to_current](/reference/resource-configs/dbt_valid_to_current):
```
@@ -123,11 +111,9 @@ snapshots:
-
-
-Configurations can be applied to snapshots using the [YAML syntax](/docs/build/snapshots), available in Versionless and dbt v1.9 and higher, in the `snapshot` directory file.
+import LegacySnapshotConfig from '/snippets/_legacy-snapshot-config.md';
-
+
@@ -150,11 +136,25 @@ Configurations can be applied to snapshots using the [YAML syntax](/docs/build/s
+### Snapshot configuration migration
+
+The latest snapshot configurations introduced in dbt Core v1.9 (such as [`snapshot_meta_column_names`](/reference/resource-configs/snapshot_meta_column_names), [`dbt_valid_to_current`](/reference/resource-configs/dbt_valid_to_current), and `hard_deletes`) are best suited for new snapshots. For existing snapshots, we recommend the following to avoid inconsistencies:
+
+#### For existing snapshots
+- Migrate tables — Migrate the previous snapshot to the new table schema and values:
+ - Create a backup copy of your snapshots.
+ - Use `alter` statements as needed (or a script to apply `alter` statements) to ensure table consistency (see the sketch after this list).
+- New configurations — Convert the configs one at a time, testing as you go.
+
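+For example, a migration for a snapshot adopting `hard_deletes: new_record` might look like the following sketch (the schema, table, and column type are illustrative and depend on your warehouse and adapter):
+
+```sql
+-- Back up the existing snapshot table before changing any configs
+create table analytics.snapshots.orders_snapshot_backup as
+    select * from analytics.snapshots.orders_snapshot;
+
+-- Add the metadata column used by hard_deletes: new_record and backfill
+-- existing rows so old and new records share the same shape
+alter table analytics.snapshots.orders_snapshot add column dbt_is_deleted text;
+update analytics.snapshots.orders_snapshot set dbt_is_deleted = 'False';
+```
+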
+:::warning
+If you use one of the latest configs, such as `dbt_valid_to_current`, without migrating your data, you may have mixed old and new data, leading to an incorrect downstream result.
+:::
### General configurations
+
+
```yaml
snapshots:
[](/reference/resource-configs/resource-path):
@@ -254,11 +255,7 @@ snapshots:
-
-
-Configurations can be applied to snapshots using [YAML syntax](/docs/build/snapshots), available in Versionless and dbt v1.9 and higher, in the `snapshot` directory file.
-
-
+
@@ -287,24 +284,29 @@ Snapshots can be configured in multiple ways:
-1. Defined in YAML files using a `config` [resource property](/reference/model-properties), typically in your [snapshots directory](/reference/project-configs/snapshot-paths) (available in [Versionless](/docs/dbt-versions/versionless-cloud) or and dbt Core v1.9 and higher).
+1. Defined in YAML files using a `config` [resource property](/reference/model-properties), typically in your [snapshots directory](/reference/project-configs/snapshot-paths) (available in [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks) and dbt v1.9 and higher).
2. From the `dbt_project.yml` file, under the `snapshots:` key. To apply a configuration to a snapshot, or directory of snapshots, define the resource path as nested dictionary keys.
-1. Defined in YAML files using a `config` [resource property](/reference/model-properties), typically in your [snapshots directory](/reference/project-configs/snapshot-paths) (available in [Versionless](/docs/dbt-versions/versionless-cloud) or and dbt Core v1.9 and higher).
-2. Using a `config` block within a snapshot defined in Jinja SQL
+1. Defined in a YAML file using a `config` [resource property](/reference/model-properties), typically in your [snapshots directory](/reference/project-configs/snapshot-paths) (available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt v1.9 and higher). The latest snapshot YAML syntax provides faster and more efficient management.
+2. Using a `config` block within a snapshot defined in Jinja SQL.
3. From the `dbt_project.yml` file, under the `snapshots:` key. To apply a configuration to a snapshot, or directory of snapshots, define the resource path as nested dictionary keys.
-Note that in Versionless and dbt v1.9 and later, snapshots are defined in an updated syntax using a YAML file within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). For faster and more efficient management, consider the updated snapshot YAML syntax, [available in Versionless](/docs/dbt-versions/versionless-cloud) or [dbt Core v1.9 and later](/docs/dbt-versions/core).
-
Snapshot configurations are applied hierarchically in the order above with higher taking precedence.
### Examples
-The following examples demonstrate how to configure snapshots using the `dbt_project.yml` file, a `config` block within a snapshot, and a `.yml` file.
+
+
+The following examples demonstrate how to configure snapshots using the `dbt_project.yml` file and a `.yml` file.
+
+
+
+The following examples demonstrate how to configure snapshots using the `dbt_project.yml` file, a `config` block within a snapshot (legacy method), and a `.yml` file.
+
- #### Apply configurations to all snapshots
To apply a configuration to all snapshots, including those in any installed [packages](/docs/build/packages), nest the configuration directly under the `snapshots` key:
@@ -397,7 +399,7 @@ The following examples demonstrate how to configure snapshots using the `dbt_pro
- You can also define some common configs in a snapshot's `config` block. We don't recommend this for a snapshot's required configuration, however.
+ You can also define some common configs in a snapshot's `config` block. However, we don't recommend this for a snapshot's required configuration.
diff --git a/website/docs/reference/snapshot-properties.md b/website/docs/reference/snapshot-properties.md
index d940a9f344c..11fb956a163 100644
--- a/website/docs/reference/snapshot-properties.md
+++ b/website/docs/reference/snapshot-properties.md
@@ -5,7 +5,7 @@ description: "Read this guide to learn about using source properties in dbt."
-In Versionless and dbt v1.9 and later, snapshots are defined and configured in YAML files within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). Snapshot properties are declared within these YAML files, allowing you to define both the snapshot configurations and properties in one place.
+In dbt v1.9 and later, snapshots are defined and configured in YAML files within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). Snapshot properties are declared within these YAML files, allowing you to define both the snapshot configurations and properties in one place.
@@ -15,7 +15,7 @@ Snapshots properties can be declared in `.yml` files in:
- your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)).
- your `models/` directory (as defined by the [`model-paths` config](/reference/project-configs/model-paths))
-Note, in Versionless and dbt v1.9 and later, snapshots are defined in an updated syntax using a YAML file within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). For faster and more efficient management, consider the updated snapshot YAML syntax, [available in Versionless](/docs/dbt-versions/versionless-cloud) or [dbt Core v1.9 and later](/docs/dbt-versions/core).
+Note, in dbt v1.9 and later, snapshots are defined in an updated syntax using a YAML file within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). For faster and more efficient management, consider the updated snapshot YAML syntax, available now in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and soon in [dbt Core v1.9](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9).
diff --git a/website/docs/reference/source-configs.md b/website/docs/reference/source-configs.md
index c5264e82fc7..959d4c542e9 100644
--- a/website/docs/reference/source-configs.md
+++ b/website/docs/reference/source-configs.md
@@ -255,7 +255,7 @@ sources:
-Configuring an [`event_time`](/reference/resource-configs/event-time) for a source is only available in dbt Cloud Versionless or dbt Core versions 1.9 and later.
+Configuring an [`event_time`](/reference/resource-configs/event-time) for a source is only available in [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) or dbt Core versions 1.9 and later.
diff --git a/website/package-lock.json b/website/package-lock.json
index 936f05624bb..8d573ee3426 100644
--- a/website/package-lock.json
+++ b/website/package-lock.json
@@ -5,7 +5,6 @@
"requires": true,
"packages": {
"": {
- "name": "website",
"version": "0.0.0",
"dependencies": {
"@docusaurus/core": "3.4.0",
diff --git a/website/sidebars.js b/website/sidebars.js
index b880553e58f..5d6e0582765 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -49,6 +49,7 @@ const sidebarSettings = {
items: [
"docs/cloud/about-cloud-setup",
"docs/cloud/account-settings",
+ "docs/cloud/account-integrations",
"docs/dbt-cloud-environments",
"docs/cloud/migration",
{
@@ -776,7 +777,7 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/dbt-versions/core" },
items: [
"docs/dbt-versions/core",
- "docs/dbt-versions/versionless-cloud",
+ "docs/dbt-versions/cloud-release-tracks",
"docs/dbt-versions/upgrade-dbt-version-in-cloud",
"docs/dbt-versions/product-lifecycles",
"docs/dbt-versions/experimental-features",
@@ -805,6 +806,7 @@ const sidebarSettings = {
},
items: [
"docs/dbt-versions/dbt-cloud-release-notes",
+ "docs/dbt-versions/compatible-track-changelog",
"docs/dbt-versions/2023-release-notes",
"docs/dbt-versions/2022-release-notes",
{
@@ -972,17 +974,18 @@ const sidebarSettings = {
label: "For snapshots",
items: [
"reference/snapshot-properties",
- "reference/resource-configs/snapshot_name",
"reference/snapshot-configs",
"reference/resource-configs/check_cols",
+ "reference/resource-configs/dbt_valid_to_current",
+ "reference/resource-configs/hard-deletes",
+ "reference/resource-configs/invalidate_hard_deletes",
+ "reference/resource-configs/snapshot_meta_column_names",
+ "reference/resource-configs/snapshot_name",
"reference/resource-configs/strategy",
"reference/resource-configs/target_database",
"reference/resource-configs/target_schema",
"reference/resource-configs/unique_key",
"reference/resource-configs/updated_at",
- "reference/resource-configs/invalidate_hard_deletes",
- "reference/resource-configs/snapshot_meta_column_names",
- "reference/resource-configs/dbt_valid_to_current",
],
},
{
diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md
index 6addd6a3a7a..6d202d01998 100644
--- a/website/snippets/_cloud-environments-info.md
+++ b/website/snippets/_cloud-environments-info.md
@@ -33,9 +33,7 @@ Both development and deployment environments have a section called **General Set
:::note About dbt version
-- dbt Cloud allows users to select any dbt release. At this time, **environments must use a dbt version greater than or equal to v1.0.0;** [lower versions are no longer supported](/docs/dbt-versions/upgrade-dbt-version-in-cloud).
-- If you select a current version with `(latest)` in the name, your environment will automatically install the latest stable version of the minor version selected.
-- Go **Versionless**, which removes the need for manually upgrading environment, while ensuring you are always up to date with the latest fixes and features.
+dbt Cloud allows users to select a [release track](/docs/dbt-versions/cloud-release-tracks) to receive ongoing dbt version upgrades at the cadence that makes sense for their team.
:::
### Custom branch behavior
diff --git a/website/snippets/_config-dbt-version-check.md b/website/snippets/_config-dbt-version-check.md
index d4e495bd379..6dc2e702895 100644
--- a/website/snippets/_config-dbt-version-check.md
+++ b/website/snippets/_config-dbt-version-check.md
@@ -1,5 +1,5 @@
-Starting in 2024, when you select **Versionless** in dbt Cloud, dbt will ignore the `require-dbt-version` config. Refer to [Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) for more details.
+Starting in 2024, when you select a [release track in dbt Cloud](/docs/dbt-versions/cloud-release-tracks) to receive ongoing dbt version upgrades, dbt will ignore the `require-dbt-version` config.
dbt Labs is committed to zero breaking changes for code in dbt projects, with ongoing releases to dbt Cloud and new versions of dbt Core. We also recommend these best practices:
diff --git a/website/snippets/_hard-deletes.md b/website/snippets/_hard-deletes.md
new file mode 100644
index 00000000000..59c2e3af99e
--- /dev/null
+++ b/website/snippets/_hard-deletes.md
@@ -0,0 +1,13 @@
+
+
+**Use `invalidate_hard_deletes` (v1.8 and earlier) if:**
+- Gaps in the snapshot history (missing records for deleted rows) are acceptable.
+- You want to invalidate deleted rows by setting their `dbt_valid_to` timestamp to the current time (implicit delete).
+- You are working with smaller datasets where tracking deletions as a separate state is unnecessary.
+
+**Use `hard_deletes: new_record` (v1.9 and higher) if:**
+- You want to maintain continuous snapshot history without gaps.
+- You want to explicitly track deletions by adding new rows with a `dbt_is_deleted` column (explicit delete).
+- You are working with larger datasets where explicitly tracking deleted records improves data lineage clarity.
+
+
diff --git a/website/snippets/_legacy-snapshot-config.md b/website/snippets/_legacy-snapshot-config.md
new file mode 100644
index 00000000000..a38995308e9
--- /dev/null
+++ b/website/snippets/_legacy-snapshot-config.md
@@ -0,0 +1,4 @@
+
+:::info
+Starting with [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) and dbt Core v1.9, defining snapshots in a `.sql` file using a config block is a legacy method. You can define snapshots in YAML format using the latest [snapshot-specific configurations](/docs/build/snapshots#configuring-snapshots). For new snapshots, we recommend using these latest configs. If applying them to existing snapshots, you'll need to [migrate](#snapshot-configuration-migration) them first.
+:::
diff --git a/website/snippets/_release-stages-from-versionless.md b/website/snippets/_release-stages-from-versionless.md
new file mode 100644
index 00000000000..f6fbf9153b0
--- /dev/null
+++ b/website/snippets/_release-stages-from-versionless.md
@@ -0,0 +1,5 @@
+:::note Versionless is now the "Latest" release track
+
+This blog post was updated on December 04, 2024 to rename "Versionless" to the "Latest" release track, allowing for the introduction of less-frequent release tracks. Learn more about [Release Tracks](/docs/dbt-versions/cloud-release-tracks) and how to use them.
+
+:::
diff --git a/website/snippets/_snapshot-yaml-spec.md b/website/snippets/_snapshot-yaml-spec.md
index cb1675ce5bd..f306abb21dd 100644
--- a/website/snippets/_snapshot-yaml-spec.md
+++ b/website/snippets/_snapshot-yaml-spec.md
@@ -1,6 +1,4 @@
:::info Use the latest snapshot syntax
-In [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) or [dbt Core v1.9 and later](/docs/dbt-versions/core), you can configure snapshots in YAML files using the updated syntax within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)).
-
-This syntax allows for faster, more efficient snapshot management. To use it, upgrade to Versionless or dbt v1.9 or newer.
+In [dbt Cloud "Latest"](/docs/dbt-versions/cloud-release-tracks) or [dbt Core v1.9+](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9), you can configure snapshots in YAML files using the updated syntax within your `snapshots/` directory (as defined by the [`snapshot-paths` config](/reference/project-configs/snapshot-paths)). This syntax allows for faster, more efficient snapshot management.
:::
diff --git a/website/snippets/_state-modified-compare.md b/website/snippets/_state-modified-compare.md
index c7bba1c8bdf..f89d63162ae 100644
--- a/website/snippets/_state-modified-compare.md
+++ b/website/snippets/_state-modified-compare.md
@@ -1,3 +1,3 @@
-You need to build the state directory using dbt v1.9 or higher, or [Versionless](/docs/dbt-versions/versionless-cloud) dbt Cloud, and you need to set `state_modified_compare_more_unrendered_values` to `true` within your dbt_project.yml.
+You need to build the state directory using dbt v1.9 or higher, or [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks), and you need to set `state_modified_compare_more_unrendered_values` to `true` within your dbt_project.yml.
If the state directory was built with an older dbt version or if the `state_modified_compare_more_unrendered_values` behavior change flag was either not set or set to `false`, you need to rebuild the state directory to avoid false positives during state comparison with `state:modified`.
diff --git a/website/snippets/access_url.md b/website/snippets/access_url.md
index 4fb7aa776ae..90a9238618a 100644
--- a/website/snippets/access_url.md
+++ b/website/snippets/access_url.md
@@ -1 +1 @@
-The following steps use `YOUR_AUTH0_URI` and `YOUR_AUTH0_ENTITYID`, which need to be replaced with the [appropriate Auth0 SSO URI and Auth0 Entity ID](/docs/cloud/manage-access/set-up-sso-saml-2.0#auth0-multi-tenant-uris) for your region.
+The following steps use `YOUR_AUTH0_URI` and `YOUR_AUTH0_ENTITYID`, which need to be replaced with the [appropriate Auth0 SSO URI and Auth0 Entity ID](#auth0-uris) for your region.
diff --git a/website/snippets/core-versions-table.md b/website/snippets/core-versions-table.md
index c1fa718e83e..899c3dddc28 100644
--- a/website/snippets/core-versions-table.md
+++ b/website/snippets/core-versions-table.md
@@ -14,8 +14,8 @@
| [**v1.0**](/docs/dbt-versions/core-upgrade/Older%20versions/upgrading-to-v1.0) | Dec 3, 2021 | End of Life ⚠️ |
| **v0.X** ⛔️ | (Various dates) | Deprecated ⛔️ | Deprecated ⛔️ |
-All functionality in dbt Core since the v1.7 release is available in dbt Cloud, early and continuously, by selecting ["Versionless"](https://docs.getdbt.com/docs/dbt-versions/versionless-cloud).
+All functionality in dbt Core since the v1.7 release is available in [dbt Cloud release tracks](/docs/dbt-versions/cloud-release-tracks), which provide automated upgrades at a cadence appropriate for your team.
-1 "Versionless" is now required for the Developer and Teams plans on dbt Cloud. Accounts using older dbt versions will be migrated to "Versionless."
+1 Release tracks are required for the Developer and Teams plans on dbt Cloud. Accounts using older dbt versions will be migrated to the "Latest" release track.
-For customers of dbt Cloud Enterprise, dbt v1.7 will continue to be available as an option while dbt Labs rolls out a mechanism for "extended" upgrades. In the meantime, dbt Labs strongly recommends migrating any environments that are still running on older unsupported versions to "Versionless" dbt or dbt v1.7.
+For customers of dbt Cloud Enterprise, dbt v1.7 will continue to be available as an option until dbt Labs announces that "Compatible" and "Extended" release tracks are Generally Available, planned for March 2025. (They are currently available to all eligible accounts in Preview.) In the meantime, dbt Labs strongly recommends migrating any environments that are still running on older unsupported versions to either release tracks or dbt v1.7.
diff --git a/website/snippets/hooks-to-grants.md b/website/snippets/hooks-to-grants.md
deleted file mode 100644
index d7586ec53ca..00000000000
--- a/website/snippets/hooks-to-grants.md
+++ /dev/null
@@ -1,3 +0,0 @@
-
-In older versions of dbt, the most common use of `post-hook` was to execute `grant` statements, to apply database permissions to models right after creating them. We recommend using the [`grants` resource config](/reference/resource-configs/grants) instead, in order to automatically apply grants when your dbt model runs.
-
diff --git a/website/src/components/expandable/styles.module.css b/website/src/components/expandable/styles.module.css
index fc6f258286b..4d3957228b9 100644
--- a/website/src/components/expandable/styles.module.css
+++ b/website/src/components/expandable/styles.module.css
@@ -145,4 +145,5 @@
.headerText {
display: flex;
align-items: center;
-}
\ No newline at end of file
+}
+
diff --git a/website/static/img/docs/dbt-cloud/account-integration-ai.jpg b/website/static/img/docs/dbt-cloud/account-integration-ai.jpg
new file mode 100644
index 00000000000..7dd42ee037b
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-ai.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-azure-manual.jpg b/website/static/img/docs/dbt-cloud/account-integration-azure-manual.jpg
new file mode 100644
index 00000000000..3b509d1c965
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-azure-manual.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-azure-target.jpg b/website/static/img/docs/dbt-cloud/account-integration-azure-target.jpg
new file mode 100644
index 00000000000..c8ff5dd8cf6
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-azure-target.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-dbtlabs.jpg b/website/static/img/docs/dbt-cloud/account-integration-dbtlabs.jpg
new file mode 100644
index 00000000000..a2d1386e0fa
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-dbtlabs.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-git.jpg b/website/static/img/docs/dbt-cloud/account-integration-git.jpg
new file mode 100644
index 00000000000..70a275bd039
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-git.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-oauth.jpg b/website/static/img/docs/dbt-cloud/account-integration-oauth.jpg
new file mode 100644
index 00000000000..6efb135c46f
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-oauth.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integration-openai.jpg b/website/static/img/docs/dbt-cloud/account-integration-openai.jpg
new file mode 100644
index 00000000000..f92fec5c712
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integration-openai.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/account-integrations.jpg b/website/static/img/docs/dbt-cloud/account-integrations.jpg
new file mode 100644
index 00000000000..56ff1859636
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/account-integrations.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-environment-settings.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-environment-settings.png
index 02e5073fd16..7e0d2ea747a 100644
Binary files a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-environment-settings.png and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-environment-settings.png differ
diff --git a/website/vercel.json b/website/vercel.json
index 3340a4ab684..fa90697a517 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -102,6 +102,11 @@
"destination": "/docs/dbt-versions/core-upgrade/Older%20versions/upgrading-to-v1.4",
"permanent": true
},
+ {
+ "source": "/docs/dbt-versions/versionless-cloud",
+ "destination": "/docs/dbt-versions/cloud-release-tracks",
+ "permanent": true
+ },
{
"source": "/best-practices/how-we-mesh/mesh-4-faqs",
"destination": "/best-practices/how-we-mesh/mesh-5-faqs",