diff --git a/contributing/single-sourcing-content.md b/contributing/single-sourcing-content.md
index 7a80e71728c..be6c557e4c1 100644
--- a/contributing/single-sourcing-content.md
+++ b/contributing/single-sourcing-content.md
@@ -5,7 +5,7 @@
* [Versioning entire pages](#versioning-entire-pages)
* [Versioning blocks of content](#versioning-blocks-of-content)
* [Using global variables](#using-global-variables)
-* [Reusing snippets of content](#reusing-snippets-of-content)
+* [Reusing content](#reusing-content)
## About versioning
@@ -28,13 +28,13 @@ exports.versions = [
]
```
-The **version** property is the value which shows in the nav dropdown. This value is compared to the VersionBlock component on a docs page to determine whether that section should be visible for the current active version (See the **Versioning the Sidebar** section on using the VersionBlock component).
+The **version** property is the value shown in the nav dropdown. This value is compared to the VersionBlock component on a docs page to determine whether that section should be visible for the currently active version (see the **Versioning the Sidebar** section on using the VersionBlock component).
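+
+For illustration, here's a minimal sketch of gating a block of content to a version range, assuming the component accepts `firstVersion` and `lastVersion` props (the version values here are hypothetical):
+
+```jsx
+<VersionBlock firstVersion="1.5" lastVersion="1.6">
+
+This content only renders when the selected version is between 1.5 and 1.6.
+
+</VersionBlock>
+```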
### Using end-of-life dates
The **EOLDate** property determines when a version is no longer supported. A version is supported up until 1 year after its release.
-When a documentation page is viewed, the **EOLDate** property for the active version is compared to today’s date. If the current version has reached, or is nearing the end of support, a banner will show atop the page, notifying the visitor of the end-of-life status.
+When a documentation page is viewed, the **EOLDate** property for the active version is compared to today’s date. If the current version has reached or is nearing the end of support, a banner will show atop the page, notifying the visitor of the end-of-life status.
Two different versions of the banner will show depending on the end-of-life date:
@@ -47,11 +47,11 @@ The content for these two EOLDate banners are located in the `website/src/theme/
### Versioning entire pages
-If a Docs page should not be available for the selected version, it is possible to version the entire page. This is managed in the `versionedPages` array within the `website/dbt-versions.js` file.
+If a Docs page is unavailable for the selected version, it is possible to version the entire page. This is managed in the `versionedPages` array within the `website/dbt-versions.js` file.
Two things occur when a page is not available for the selected version:
-- A banner will appear atop the page, noting this page covers a new feature which isn’t available for the selected version.
+- A banner will appear atop the page, noting this page covers a new feature that isn’t available for the selected version.
- The page is removed from the sidebar.
@@ -70,9 +70,9 @@ exports.versionedPages = [
**page** (mandatory): The path of the Docs page to version. This string must match the string for the page in the `sidebars.js` file.
-**firstVersion** (optional): Sets the first version which this page is available.
+**firstVersion** (optional): Sets the first version on which this page is available.
-**lastVersion** (optional): Sets the last version which this page is available.
+**lastVersion** (optional): Sets the last version on which this page is available.
## Versioning blocks of content
@@ -143,7 +143,7 @@ Using a global variable requires two steps:
2. Use the **Var** component to add the global variable to a page.
```jsx
-// The dbtCore property is the identifer for the variable,
+// The dbtCore property is the identifier for the variable,
// while the name property is the value shown on the page.
exports.dbtVariables = {
@@ -219,22 +219,102 @@ To use the component at the beginning of a sentence, add a non-breaking space ch
is awesome!
```
-## Reusing snippets of content
+## Reusing content
-The Snippet component allows for content to be reusable throughout the Docs. This is very similar to the existing FAQ component.
+To reuse content on different pages, you can use techniques like partial files or snippets. Partial files, a built-in Docusaurus feature, are recommended over snippets.
+
+### Partial file
+
+A partial file allows you to reuse content throughout the docs. Here are the steps you can take to create and use a partial file:
+
+1. Create a new markdown partial file in the `website/snippets` directory. The file name must begin with an underscore, like `_filename.md`.
+2. Go back to the docs file that will pull content from the partial file.
+3. Add the following import statement: `import ComponentName from '/snippets/_this-is-your-partial-file-name.md';`
+ * You must always add an import in that format. Note that `ComponentName` (the name of the partial component) can be whatever makes sense for your purpose.
+ * `.md` needs to be added to the end of the filename.
+4. To use the partial component, go to the next line and add `<ComponentName />`. This fetches the reusable content in the partial file.
+ * Note the component name can be whatever makes sense for your purpose, as long as it matches the name in the import statement.
+
+You can also use this for more advanced use cases like reusable frontmatter.
+
+#### Partial example
+
+1. To create a new partial to use throughout the site, first create a new markdown partial file within the snippets directory:
+
+```markdown
+/snippets/_partial-name.md
+```
+
+2. Add the following reusable content in the `/snippets/_partial-name.md` partial file:
+
+```markdown
+## Header 2
+
+Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam fermentum porttitor dui, id scelerisque enim scelerisque at.
+```
+
+3. Now, go back to the docs file and add the following code to fetch the reusable content added in the partial file:
+
+```markdown
+Docs content here.
+
+import SetUpPages from '/snippets/_partial-name.md';
+
+<SetUpPages />
+
+Docs content here.
+```
+
+- `import SetUpPages from '/snippets/_partial-name.md';` — The import statement that makes the partial file's content available to the docs file.
+- `<SetUpPages />` — A component that renders the content from the partial file. You can also use it to pass data into the partial using props (see "Using props to pass different content on multiple pages" below).
+
+4. The content of the partial file will then render on the docs page.
+
+
+#### Using props to pass different content on multiple pages
+
+You can add props to the component if you want to pass data into the partial file. This is useful for using the same partial component on
+multiple docs pages and displaying different values for each. For example, if you want to use a partial on multiple pages and pass in a different 'feature' for each
+docs page, you can write it as:
+
+```
+import SetUpPages from '/snippets/_available-enterprise-only.md';
+
+<SetUpPages feature="A really cool feature" />
+```
+
+Then in the `/snippets/_available-enterprise-only.md` file, you can display that feature prop with:
+
+>This feature: `{props.feature}` other content etc...
+
+This will then translate to:
+
+>This feature: A really cool feature other content etc...
+
+In this example, the `<SetUpPages feature="A really cool feature" />` component passes the value 'A really cool feature' into the partial file through the `feature` prop.
+
+### Snippets
+
+The Snippet component allows for content to be reusable throughout the Docs. This is very similar to the existing FAQ component. Using partial files, a built-in Docusaurus feature, is recommended over snippets.
Creating and using a snippet requires two steps:
1. Create a new markdown snippet file in the `website/snippets` directory.
2. Use the `<Snippet src="filename" />` component within a Docs file.
-### Snippet properties
+#### Snippet properties
**src:** Expects the file name of the snippet, which lives in the `website/snippets` directory.
-### Snippet example
+#### Snippet example
-To create a new snippet to use throughout the site, first we will create a new markdown snippet within the snippets directory:
+To create a new snippet to use throughout the site, first, we will create a new markdown snippet within the snippets directory:
```markdown
## Header 2
diff --git a/website/blog/2023-01-27-autoscaling-ci.md b/website/blog/2023-01-27-autoscaling-ci.md
deleted file mode 100644
index 2cb9f87e9d0..00000000000
--- a/website/blog/2023-01-27-autoscaling-ci.md
+++ /dev/null
@@ -1,283 +0,0 @@
----
-title: "Autoscaling CI: The intelligent Slim CI"
-description: "How to make your dbt Cloud Slim CI process more intelligent and, as a result, enable faster continuous integration workflows."
-slug: intelligent-slim-ci
-
-authors: [doug_guthrie]
-
-tags: [analytics craft]
-hide_table_of_contents: false
-
-date: 2023-01-25
-is_featured: true
----
-
-:::warning Deprecation Notice: Update 4/27/2023
-dbt Cloud is now offering concurrent CI checks and auto cancellation of stale CI jobs as a native feature of the platform. If you're interested, use [this form](https://docs.google.com/forms/d/e/1FAIpQLSfjSIMkwcwhZ7pbxT5ouuEf7dwpzUwRoGYBCjQApJ2ssps0tg/viewform) to sign up for our beta.
-:::
-
-Before I delve into what makes this particular solution "intelligent", let me back up and introduce CI, or continuous integration. CI is a software development practice that ensures we automatically test our code prior to merging into another branch. The idea being that we can mitigate the times when something bad happens in production, which is something that I'm sure we can all resonate with!
-
-
-
-
-
-The way that we tackle continuous integration in dbt Cloud is something we call [Slim CI](/docs/deploy/continuous-integration). This feature enables us to automatically run a dbt Cloud job anytime a pull request is opened against our primary branch or when a commit is added to that pull request. The real kicker though? This job will only run and test the code that's been modified within that specific pull request. Why is Slim CI important?
-
-- Ensures developers can work quickly by shortening the CI feedback loop
-- Reduce costs in your data warehouse by running only what's been modified
-
-I think we can all agree that increasing developer productivity while simultaneously reducing costs is an outcome that every company strives for.
-
-However, there are a couple things to be aware of when implementing Slim CI:
-
-- Only one Slim CI job can run at a given time. In the event multiple pull requests are opened, each one would trigger the same Slim CI job, but only one can be in a running state while the rest would be queued until prior runs complete.
-- A job will continue to run even if another commit is added to the pull request.
-
-Generally speaking, only customers with large data teams or disparate ones working in multiple projects tend to run into this limitation of slim CI. And when they do, the shortened feedback loop disappears as their pull requests start to stack up waiting for each one to finish.
-
-The reason I know this is because I’m a solutions architect at dbt Labs and I speak with both our customers and prospects frequently. I learn about their data stacks, understand the blockers and limitations they’re experiencing, and help them realize and uncover use cases that dbt can solve for. Sometimes, however, my job calls for more than just being a trusted advisor, it calls for creating custom solutions that help address a critical need that our platform doesn’t (yet!) provide. So, like a lot of features and functionality inside of dbt, the impetus for this solution came from you!
-
-Huge shoutout to my teammates [Matt Winkler](https://docs.getdbt.com/author/matt_winkler) and Steve Dowling, both of whom contributed immensely to both the code and ideation for this functionality!
-
-## The solution: Autoscaling CI
-
-As of this writing, autoscaling CI is functionality built only within a python package, [dbtc](https://dbtc.dpguthrie.com); dbtc is an unaffiliated python interface to the dbt Cloud Metadata and Administrative APIs. In addition, it provides a convenient command line interface (CLI) that exposes the same methods available within python.
-
-> A method in Python is simply a function that’s a member of a class
-
-One of those methods is called `trigger_autoscaling_ci_job`, and as you can probably imagine, it’s the method we’ll use to create a more intelligent Slim CI:
-
-**Autoscaling CI enables a team of developers to maintain their fast and iterative CI workflow that Slim CI provides. New commits to an existing pull request will cancel any in progress runs for that pull request. In addition, it can use the same Slim CI job definition to run separate pull requests in parallel.**
-
-### How it works
-
-In the event your CI job is already running, the `trigger_autoscaling_ci_job` method will do the following:
-
-- If this is an entirely new pull request, clone the Slim CI job definition and trigger the clone. It's important to note that the cloned job will be deleted by default after the run (you can change this through an argument to the function). Deleting the cloned job will also force the execution into a polling state (e.g. the function won't return a `Run` until it has encountered a completed state). The reason for this is dbt Cloud will not allow a run to execute without an associated job.
-- If a new commit is created for the pull request linked to the existing run, cancel the run and trigger again.
-- This will also check to see if your account has met or exceeded the allotted run slots. In the event you have, a cloned job will not be created and the existing job will be triggered and placed in the queue.
-
-### Setup
-
-1. The first step is to create a dbt Cloud job for Slim CI. We’ll follow the [exact steps](/docs/deploy/slim-ci-jobs) to create a normal Slim CI job except for one.
-
-
- | Do | Don’t |
- | --- | --- |
- | Defer to a production job | Trigger by pull request* |
- | Commands need to have a `state:modified+` selector | |
-
- *We’ll use your git provider to trigger the job instead of the dbt Cloud platform.
-
-2. Next, create a workflow file in your dbt project for your specific git provider (GitHub, GitLab, or Azure DevOps).
-
-
-
-
-
-In order for GitHub to know that you want to run an action, you need to have a couple specific folders in your project. Add a new folder named `.github`, and within that folder add a new one named `workflows`. Your final folder structure will look like this:
-
-```sql
-my_dbt_project
-├── .github
-│ ├── workflows
-│ │ └── autoscaling_ci.yml
-```
-
-To define the job for our action, let’s add a new file named `autoscaling_ci.yml` under the `workflows` folder. This file is how we tell the GitHub runner what to execute when the job is triggered.
-
-**Key pieces:**
-
-- `on`: This is how we're telling github to trigger this workflow. Specifically, on pull requests opened against `main` and the type matching one of: `opened`, `reopened`, `synchronize`, and `ready_for_review`. The `types` section, along with the `if` statement below, ensures that we don't trigger this workflow on draft PRs.
-- `env`: This is where we can set environment variables that can be used in any of the steps defined in our workflow. The main callout here, though, is that you should be using [secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) for sensitive variables (e.g. your dbt Cloud service token). Additionally, ensure that you're setting the `JOB_ID` to the same job ID of your Slim CI job that you've set up in dbt Cloud.
-- `runs-on: ubuntu-latest`: This defines the operating system we’re using to run the job.
-- `uses`: The operating system we defined above needs to be setup further to access the code in your repo and also setup Python correctly. These two actions are called from other repos in GitHub to provide those services. For more information on them, checkout their repos: [actions/checkout](https://github.com/actions/checkout#checkout-v3) and [actions/setup-python](https://github.com/actions/setup-python#setup-python-v3).
-
-```yaml
-name: Autoscaling dbt Cloud CI
-on:
- pull_request:
- branches:
- - main
- types:
- - opened
- - reopened
- - synchronize
- - ready_for_review
-
-jobs:
- autoscaling:
- if: github.event.pull_request.draft == false
- runs-on: ubuntu-latest
- env:
- DBT_CLOUD_SERVICE_TOKEN: ${{ secrets.DBT_CLOUD_SERVICE_TOKEN }}
- DBT_CLOUD_ACCOUNT_ID: 1
- JOB_ID: 1
- PULL_REQUEST_ID: ${{ github.event.number }}
- GIT_SHA: ${{ github.event.pull_request.head.sha }}
-
- steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-python@v4
- with:
- python-version: "3.9.x"
-
- - name: Trigger Autoscaling CI Job
- run: |
- pip install dbtc
- SO="dbt_cloud_pr_"$JOB_ID"_"$PULL_REQUEST_ID
- dbtc trigger-autoscaling-ci-job \
- --job-id=$JOB_ID \
- --payload='{"cause": "Autoscaling Slim CI!","git_sha":"'"$GIT_SHA"'","schema_override":"'"$SO"'","github_pull_request_id":'"$PULL_REQUEST_ID"'}' \
- --no-should-poll
-```
-
-In order to mimic the native Slim CI behavior within dbt Cloud, it's important to pass the appropriate payload. The payload should consist of the following:
-
-- `cause`: Put whatever you want here (this is a required field though).
-- `schema_override`: Match what dbt Cloud does natively - `dbt_cloud_pr__`
-- `git_sha`: `${{ github.event.pull_request.head.sha }}`
-- `github_pull_request_id`: `${{ github.event.number }}`
-
-
-
-
-Create a `.gitlab-ci.yml` file in your **root directory**.
-
-```sql
-my_dbt_project
-├── dbt_project.yml
-├── .gitlab-ci.yml
-```
-
-**Key pieces:**
-
-- `variables`: Ensure that you keep your `DBT_CLOUD_SERVICE_TOKEN` secret by creating a [variable](https://docs.gitlab.com/ee/ci/variables/#for-a-project). Additionally, we'll use some [predefined variables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html) that are provided by Gitlab in every CI/CD pipeline.
-- `only`: We want this to be triggered only on `merge_requests`.
-
-```yaml
-image: python:latest
-
-variables:
- DBT_CLOUD_SERVICE_TOKEN: $DBT_CLOUD_SERVICE_TOKEN
- DBT_CLOUD_ACCOUNT_ID: 1
- JOB_ID: 1
- MERGE_REQUEST_ID: $CI_MERGE_REQUEST_IID
- GIT_SHA: $CI_COMMIT_SHA
-
-before_script:
- - pip install dbtc
-
-stages:
- - deploy
-
-deploy-autoscaling-ci:
- stage: deploy
- only:
- - merge_requests
- script:
- - export DBT_CLOUD_SERVICE_TOKEN=$DBT_CLOUD_SERVICE_TOKEN
- - SO="dbt_cloud_pr_"${JOB_ID}"_"${MERGE_REQUEST_ID}
- - dbtc trigger-autoscaling-ci-job --job-id "$JOB_ID" --payload='{"cause":"Autoscaling Slim CI!","git_sha":"'"$GIT_SHA"'","schema_override":"'"$SO"'","gitlab_merge_request_id":'"$MERGE_REQUEST_ID"'}' --no-should-poll
-
-```
-
-In order to mimic the native Slim CI behavior within dbt Cloud, it's important to pass the appropriate payload. The payload should consist of the following:
-
-- `cause`: Put whatever you want here (this is a required field though).
-- `schema_override`: Match what dbt Cloud does natively - `dbt_cloud_pr__`
-- `git_sha`: `$CI_COMMIT_SHA`
-- `gitlab_merge_request_id`: `$CI_MERGE_REQUEST_IID`
-
-
-
-
-Create a `azure-pipelines.yml` file in your **root directory**.
-
-```sql
-my_dbt_project
-├── dbt_project.yml
-├── azure-pipelines.yml
-```
-
-**Key pieces:**
-
-- `pr`: A pull request trigger specifies which branches cause a pull request build to run. In this case, we'll specify our `main` branch.
-- `trigger`: Setting to `none` disables CI triggers on every commit.
-- `pool`: Specify which agent to use for this pipeline.
-- `variables`: Ensure that you keep your `DBT_CLOUD_SERVICE_TOKEN` secret by creating a [variable](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-secret-variables?view=azure-devops&tabs=yaml%2Cbash#secret-variable-in-the-ui). Additionally, we'll use some [predefined variables](https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml) that are provided in every pipeline.
-
-```yaml
-
-name: Autoscaling CI
-
-pr: [ main ]
-trigger: none
-
-variables:
- DBT_CLOUD_ACCOUNT_ID: 43786
- JOB_ID: 73797
- GIT_SHA: $(Build.SourceVersion)
- PULL_REQUEST_ID: $(System.PullRequest.PullRequestId)
-
-pool:
- vmImage: ubuntu-latest
-
-steps:
-- task: UsePythonVersion@0
- inputs:
- versionSpec: '3.9'
-- script: |
- pip install dbtc
- displayName: 'Install dependencies'
-- script: |
- SO="dbt_cloud_pr_"$(JOB_ID)"_"$(PULL_REQUEST_ID)
- dbtc trigger-autoscaling-ci-job --job-id $(JOB_ID) --payload '{"cause": "Autoscaling Slim CI!","git_sha":"'"$(GIT_SHA)"'","schema_override":"'"$SO"'","azure_pull_request_id":'"$(PULL_REQUEST_ID)"'}' --no-should-poll
- displayName: Trigger Job
- env:
- DBT_CLOUD_SERVICE_TOKEN: $(DBT_CLOUD_SERVICE_TOKEN)
-
-```
-
-In order to mimic the native Slim CI behavior within dbt Cloud, it's important to pass the appropriate payload. The payload should consist of the following:
-
-- `cause`: Put whatever you want here (this is a required field though).
-- `schema_override`: Match what dbt Cloud does natively - `dbt_cloud_pr__`
-- `git_sha`: `$(Build.SourceVersion)`
-- `azure_pull_request_id`: `$(System.PullRequest.PullRequestId)`
-
-
-
-
-
-### Benefits
-
-After adding this file to your repository, your CI jobs will no longer stack up behind one another. A job that’s now irrelevant because of a new commit will be cancelled and triggered again automatically. Some benefits that I think you'll begin to realize include:
-
-- Lower costs in your data warehouse from cancelling irrelevant (and potentially long-running) CI jobs
-- A faster, more efficient, development workflow that ensures a quick feedback loop from your CI process
-- Increased ability to open up development work that encourages more cross-team collaboration
-
-### Watch it in action
-
-The video below shows a quick demo of this functionality in action!
-
-
-
-## Get in touch
-
-If you run into any problems implementing this, please feel free to [open up an issue](https://github.com/dpguthrie/dbtc/issues/new) in the dbtc repository — I may know the maintainer and can get it fast tracked 😉! Also, I’m always looking for both contributors and ideas on how to make this package better. In the future, I’m also thinking about adding:
-
-- Internal data models (probably using [Pydantic](https://docs.pydantic.dev/)) for each of the dbt Cloud objects you can create. This will allow a user to understand the fields and data types an object requires to be created. It will also ensure that appropriate defaults are used in place of missing arguments.
-- A `query` method on the `metadata` property. Right now, the interfaces to retrieve data from the Metadata API force a user to return everything. There should be another option that allows a user to write the GraphQL query to return only the data they require.
-- Building on top of the bullet point above, each of the metadata methods should also accept a `fields` argument. This argument should limit the data returned from the API in a similar fashion that the `query` method would, but it would be in a more pythonic construct than forcing a user to write a GraphQL query.
-
-If any of that sounds interesting, or you have other ideas, feel free to reach out to me on the [dbt Community Slack](https://www.getdbt.com/community/join-the-community/) — @Doug Guthrie.
diff --git a/website/docs/docs/build/environment-variables.md b/website/docs/docs/build/environment-variables.md
index 8a8ebbba0bc..55d3fd19c6c 100644
--- a/website/docs/docs/build/environment-variables.md
+++ b/website/docs/docs/build/environment-variables.md
@@ -121,10 +121,13 @@ Environment variables can be used in many ways, and they give you the power and
Now that you can set secrets as environment variables, you can pass git tokens into your package HTTPS URLs to allow for on-the-fly cloning of private repositories. Read more about enabling [private package cloning](/docs/build/packages#private-packages).
#### Dynamically set your warehouse in your Snowflake connection
-Environment variables make it possible to dynamically change the Snowflake virtual warehouse size depending on the job. Instead of calling the warehouse name directly in your project connection, you can reference an environment variable which will get set to a specific virtual warehouse at runtime.
+Environment variables make it possible to dynamically change the Snowflake virtual warehouse size depending on the job. Instead of calling the warehouse name directly in your project connection, you can reference an environment variable which will get set to a specific virtual warehouse at runtime.
For example, suppose you'd like to run a full-refresh job in an XL warehouse, but your incremental job only needs to run in a medium-sized warehouse. Both jobs are configured in the same dbt Cloud environment. In your connection configuration, you can use an environment variable to set the warehouse name to `{{env_var('DBT_WAREHOUSE')}}`. Then in the job settings, you can set a different value for the `DBT_WAREHOUSE` environment variable depending on the job's workload.
+Currently, it's not possible to dynamically set environment variables across models within a single run. This is because each env_var can only have a single set value for the entire duration of the run.
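+
+As a sketch of the same pattern expressed as a dbt Core `profiles.yml` (dbt Cloud manages the connection in project settings instead, and the account, user, and warehouse values here are assumptions):
+
+```yaml
+my_profile:
+  target: prod
+  outputs:
+    prod:
+      type: snowflake
+      account: abc123      # placeholder
+      user: transformer    # placeholder
+      database: analytics  # placeholder
+      schema: dbt_prod     # placeholder
+      # Resolves at runtime to whatever DBT_WAREHOUSE is set to for the job;
+      # falls back to the medium warehouse if the variable is unset.
+      warehouse: "{{ env_var('DBT_WAREHOUSE', 'transforming_m') }}"
+```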
+
+**Note** — You can also use this method with Databricks SQL Warehouse.
diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md
index 5b0df69e898..0c89e842502 100644
--- a/website/docs/docs/build/jinja-macros.md
+++ b/website/docs/docs/build/jinja-macros.md
@@ -87,8 +87,8 @@ Macro files can contain one or more macros — here's an example:
```sql
-{% macro cents_to_dollars(column_name, precision=2) %}
- ({{ column_name }} / 100)::numeric(16, {{ precision }})
+{% macro cents_to_dollars(column_name, scale=2) %}
+ ({{ column_name }} / 100)::numeric(16, {{ scale }})
{% endmacro %}
```
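+
+For instance, a model might call the macro like this (a sketch; the table and column names are assumptions):
+
+```sql
+select
+  id as payment_id,
+  -- uses the default scale of 2
+  {{ cents_to_dollars('amount') }} as amount_usd,
+  -- overrides the scale for higher precision
+  {{ cents_to_dollars('amount', scale=4) }} as amount_usd_precise
+from {{ ref('raw_payments') }}
+```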
diff --git a/website/docs/docs/build/sl-getting-started.md b/website/docs/docs/build/sl-getting-started.md
index a2e176016ee..ff0e6006921 100644
--- a/website/docs/docs/build/sl-getting-started.md
+++ b/website/docs/docs/build/sl-getting-started.md
@@ -110,11 +110,11 @@ Follow these steps to test and query your metrics using MetricFlow:
1. If you haven't done so already, make sure you [install MetricFlow](#install-metricflow).
-2. Run `mf version` to see your CLI version. If you don't have the CLI installed, run `pip install --upgrade "dbt-metricflow[your_adapter_name]"`. For example, if you have a Snowflake adapter, run `pip install --upgrade "dbt-metricflow[snowflake]"`.
+2. Run `mf --help` to confirm you have MetricFlow installed, and to see the available commands. If you don't have the CLI installed, run `pip install --upgrade "dbt-metricflow[your_adapter_name]"`. For example, if you have a Snowflake adapter, run `pip install --upgrade "dbt-metricflow[snowflake]"`.
3. Save your files and run `mf validate-configs` to validate the changes before committing them.
-4. Run `mf query --metrics <metric_name> --dimensions <dimension_name>` to query the metrics and dimensions you want to see in the CLI.
+4. Run `mf query --metrics <metric_name> --group-by <dimension_name>` to query the metrics and dimensions you want to see in the CLI.
5. Verify that the metric values are what you expect. You can view the generated SQL if you enter `--explain` in the CLI.
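+
+For example, a query might look like this (the metric name `revenue` and the `metric_time` dimension are assumptions):
+
+```shell
+# Show the SQL MetricFlow generates for the revenue metric grouped by metric_time
+mf query --metrics revenue --group-by metric_time --explain
+```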
diff --git a/website/docs/docs/cloud/manage-access/about-access.md b/website/docs/docs/cloud/manage-access/about-access.md
index e1cb4f65a35..9a95d0aeb68 100644
--- a/website/docs/docs/cloud/manage-access/about-access.md
+++ b/website/docs/docs/cloud/manage-access/about-access.md
@@ -33,8 +33,8 @@ A user's license type controls the features in dbt Cloud that the user is able
to access. dbt Cloud's three license types are:
- **Developer** — User may be granted _any_ permissions.
- - **Read Only** — User has read-only permissions applied to all dbt Cloud resources regardless of the role-based permissions that the user is assigned.
- - **IT** — User has [Security Admin](/docs/cloud/manage-access/enterprise-permissions#security-admin) and [Billing Admin](docs/cloud/manage-access/enterprise-permissions#billing-admin) permissions applied regardless of the role-based permissions that the user is assigned.
+ - **Read-Only** — User has read-only permissions applied to all dbt Cloud resources regardless of the role-based permissions that the user is assigned.
+ - **IT** — User has [Security Admin](/docs/cloud/manage-access/enterprise-permissions#security-admin) and [Billing Admin](/docs/cloud/manage-access/enterprise-permissions#billing-admin) permissions applied regardless of the role-based permissions that the user is assigned.
For more information on these license types, see [Seats & Users](/docs/cloud/manage-access/seats-and-users).
@@ -153,7 +153,7 @@ Yes, see the documentation on [Manual Assignment](#manual-assignment) above for
Make sure you're not trying to edit your own user as this isn't allowed for security reasons. To edit the group membership of your own user, you'll need a different user to make those changes.
- **How do I add or remove users**?
-Each dbt Cloud plan comes with a base number of Developer and Read Only licenses. You can add or remove licenses by modifying the number of users in your account settings.
+Each dbt Cloud plan comes with a base number of Developer and Read-Only licenses. You can add or remove licenses by modifying the number of users in your account settings.
- If you're on an Enterprise plan and have the correct [permissions](/docs/cloud/manage-access/enterprise-permissions), you can add or remove developers by adjusting your developer user seat count in **Account settings** -> **Users**.
- If you're on a Team plan and have the correct [permissions](/docs/cloud/manage-access/self-service-permissions), you can add or remove developers by making two changes: adjust your developer user seat count AND your developer billing seat count in **Account settings** -> **Users** and then in **Account settings** -> **Billing**.
diff --git a/website/docs/docs/cloud/manage-access/auth0-migration.md b/website/docs/docs/cloud/manage-access/auth0-migration.md
index af430772ca4..c93713f2730 100644
--- a/website/docs/docs/cloud/manage-access/auth0-migration.md
+++ b/website/docs/docs/cloud/manage-access/auth0-migration.md
@@ -47,7 +47,7 @@ The fields that will be updated are:
- Single sign-on URL — `https://<YOUR_AUTH0_URI>/login/callback?connection={slug}`
- Audience URI (SP Entity ID) — `urn:auth0:<YOUR_AUTH0_ENTITYID>:{slug}`
-Sample steps to update (you must complete all of them to ensure uninterrupted access to dbt Cloud):
+Below are sample steps to update. You must complete all of them to ensure uninterrupted access to dbt Cloud, and you should coordinate with your identity provider admin when making these changes.
1. Replace `{slug}` with your organization’s login slug. It must be unique across all dbt Cloud instances and is usually something like your company name separated by dashes (for example, `dbt-labs`).
@@ -69,7 +69,7 @@ Here is an example of an updated SAML 2.0 setup in Okta.
Google Workspace admins updating their SSO APIs with the Auth0 URL won't have to do much if it is an existing setup. This can be done as a new project or by editing an existing SSO setup. No additional scopes are needed since this is migrating from an existing setup. All scopes were defined during the initial configuration.
-Steps to update (you must complete all of them to ensure uninterrupted access to dbt Cloud):
+Below are steps to update. You must complete all of them to ensure uninterrupted access to dbt Cloud, and you should coordinate with your identity provider admin when making these changes.
1. Open the [Google Cloud console](https://console.cloud.google.com/) and select the project with your dbt Cloud single sign-on settings. From the project page **Quick Access**, select **APIs and Services**
@@ -99,7 +99,7 @@ You must complete the domain authorization before you toggle `Enable New SSO Aut
Azure Active Directory admins will need to make a slight adjustment to the existing authentication app in the Azure AD portal. This migration does not require that the entire app be deleted or recreated; you can edit the existing app. Start by opening the Azure portal and navigating to the Active Directory overview.
-Steps to update (you must complete all of them to ensure uninterrupted access to dbt Cloud):
+Below are steps to update. You must complete all of them to ensure uninterrupted access to dbt Cloud, and you should coordinate with your identity provider admin when making these changes.
1. Click **App Registrations** on the left side menu.
diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md
index 62c193bb669..baa92b5a98f 100644
--- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md
+++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md
@@ -8,12 +8,12 @@ sidebar: "Users and licenses"
In dbt Cloud, _licenses_ are used to allocate users to your account. There are three different types of licenses in dbt Cloud:
- **Developer** — Granted access to the Deployment and [Development](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) functionality in dbt Cloud.
-- **Read-only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account.
+- **Read-Only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account.
- **IT** — Can manage users, groups, and licenses, among other permissions. Available on Enterprise and Team plans only.
The user's assigned license determines the specific capabilities they can access in dbt Cloud.
-| Functionality | Developer User | Read Only Users | IT Users* |
+| Functionality | Developer User | Read-Only Users | IT Users* |
| ------------- | -------------- | --------------- | -------- |
| Use the Developer IDE | ✅ | ❌ | ❌ |
| Use Jobs | ✅ | ❌ | ❌ |
@@ -25,7 +25,7 @@ The user's assigned license determines the specific capabilities they can access
## Licenses
-Each dbt Cloud plan comes with a base number of Developer, IT, and Read Only licenses. You can add or remove licenses by modifying the number of users in your account settings.
+Each dbt Cloud plan comes with a base number of Developer, IT, and Read-Only licenses. You can add or remove licenses by modifying the number of users in your account settings.
If you have a Developer plan account and want to add more people to your team, you'll need to upgrade to the Team plan. Refer to [dbt Pricing Plans](https://www.getdbt.com/pricing/) for more information about licenses available with each plan.
@@ -144,19 +144,19 @@ If your account is connected to an Identity Provider (IdP) for [Single Sign
On](/docs/cloud/manage-access/sso-overview), you can automatically map IdP user
groups to specific license types in dbt Cloud. To configure license mappings,
navigate to the Account Settings > Team > License Mappings page. From
-here, you can create or edit SSO mappings for both Read Only and Developer
+here, you can create or edit SSO mappings for both Read-Only and Developer
license types.
By default, all new members of a dbt Cloud account will be assigned a Developer
-license. To assign Read Only licenses to certain groups of users, create a new
-License Mapping for the Read Only license type and include a comma separated
-list of IdP group names that should receive a Read Only license at sign-in time.
+license. To assign Read-Only licenses to certain groups of users, create a new
+License Mapping for the Read-Only license type and include a comma separated
+list of IdP group names that should receive a Read-Only license at sign-in time.
Usage notes:
-- If a user's IdP groups match both a Developer and Read Only license type
+- If a user's IdP groups match both a Developer and Read-Only license type
mapping, a Developer license type will be assigned
- If a user's IdP groups do not match _any_ license type mappings, a Developer
license will be assigned
diff --git a/website/docs/docs/cloud/manage-access/enterprise-permissions.md b/website/docs/docs/cloud/manage-access/enterprise-permissions.md
index 3fb2ab93a8e..fb929bf2d59 100644
--- a/website/docs/docs/cloud/manage-access/enterprise-permissions.md
+++ b/website/docs/docs/cloud/manage-access/enterprise-permissions.md
@@ -93,7 +93,7 @@ Users with Project Creator permissions can:
- **Has permissions on:** Authorized projects, account-level settings
- **License restrictions:** must have a developer license
-Account Viewers have read only access to dbt Cloud accounts. Users with Account Viewer permissions can:
+Account Viewers have read-only access to dbt Cloud accounts. Users with Account Viewer permissions can:
- View all projects in an account
- View Account Settings
- View account-level artifacts
@@ -201,12 +201,12 @@ Analysts can perform the following actions in projects they are assigned to:
### Stakeholder
- **Has permissions on:** Authorized projects
-- **License restrictions:** Intended for use with Read Only licenses, but may be used with Developer licenses.
+- **License restrictions:** Intended for use with Read-Only licenses, but may be used with Developer licenses.
Stakeholders can perform the following actions in projects they are assigned to:
- View generated documentation
- View generated source freshness reports
-- View the Read Only dashboard
+- View the Read-Only dashboard
## Diagram of the Permission Sets
diff --git a/website/docs/docs/cloud/manage-access/licenses-and-groups.md b/website/docs/docs/cloud/manage-access/licenses-and-groups.md
index 51a0649b896..88d64f2d9a3 100644
--- a/website/docs/docs/cloud/manage-access/licenses-and-groups.md
+++ b/website/docs/docs/cloud/manage-access/licenses-and-groups.md
@@ -25,12 +25,12 @@ user can only have one type of license at any given time.
A user's license type controls the features in dbt Cloud that the user is able
to access. dbt Cloud's three license types are:
- - **Read Only**
+ - **Read-Only**
- **Developer**
- **IT**
For more information on these license types, see [Seats & Users](/docs/cloud/manage-access/seats-and-users).
-At a high-level, Developers may be granted _any_ permissions, whereas Read Only
+At a high-level, Developers may be granted _any_ permissions, whereas Read-Only
users will have read-only permissions applied to all dbt Cloud resources
regardless of the role-based permissions that the user is assigned. IT users will have Security Admin and Billing Admin permissions applied regardless of the role-based permissions that the user is assigned.
diff --git a/website/docs/docs/cloud/manage-access/self-service-permissions.md b/website/docs/docs/cloud/manage-access/self-service-permissions.md
index 7a086dd1eec..21cc765b76d 100644
--- a/website/docs/docs/cloud/manage-access/self-service-permissions.md
+++ b/website/docs/docs/cloud/manage-access/self-service-permissions.md
@@ -18,9 +18,9 @@ The permissions afforded to each role are described below:
| Manage team permissions | ❌ | ✅ |
| Invite Owners to the account | ❌ | ✅ |
-## Read Only vs. Developer License Types
+## Read-Only vs. Developer License Types
-Users configured with Read Only license types will experience a restricted set of permissions in dbt Cloud. If a user is associated with a _Member_ permission set and a Read Only seat license, then they will only have access to what a Read-Only seat allows. See [Seats and Users](/docs/cloud/manage-access/seats-and-users) for more information on the impact of licenses on these permissions.
+Users configured with Read-Only license types will experience a restricted set of permissions in dbt Cloud. If a user is associated with a _Member_ permission set and a Read-Only seat license, then they will only have access to what a Read-Only seat allows. See [Seats and Users](/docs/cloud/manage-access/seats-and-users) for more information on the impact of licenses on these permissions.
## Owner and Member Groups in dbt Cloud Enterprise
diff --git a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md
index 709cbced7fb..516a340c951 100644
--- a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md
+++ b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md
@@ -9,7 +9,6 @@ id: "set-up-bigquery-oauth"
This guide describes a feature of the dbt Cloud Enterprise plan. If you’re interested in learning more about an Enterprise plan, contact us at sales@getdbt.com.
:::
-### Overview
dbt Cloud supports developer [OAuth](https://cloud.google.com/bigquery/docs/authentication) with BigQuery, providing an additional layer of security for dbt enterprise users. When BigQuery OAuth is enabled for a dbt Cloud project, all dbt Cloud developers must authenticate with BigQuery in order to use the dbt Cloud IDE. The project's deployment environments will still leverage the BigQuery service account key set in the project credentials.
diff --git a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
new file mode 100644
index 00000000000..9217736a2d8
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
@@ -0,0 +1,35 @@
+---
+title: "June 2023 product docs updates"
+description: "June 2023: The Product docs team merged 132 PRs, made various updates to dbt Cloud and Core, such as the Deploy sidebar, Supported platforms page, added a landing page on the References section, added an ADO example to the CI/CD guide, and more"
+sidebar_label: "Update: Product docs changes"
+tags: [June-2023, product-docs]
+date: 2023-07-04
+sidebar_position: 10
+---
+
+Hello from the dbt Docs team: @mirnawong1, @matthewshaver, @nghi-ly, and @runleonarun! First, we’d like to thank the 17 new community contributors to docs.getdbt.com — ✨ @aaronbini, @sjaureguimodo, @aranke, @eiof, @tlochner95, @mani-dbt, @iamtodor, @monilondo, @vrfn, @raginjason, @AndrewRTsao, @MitchellBarker, @ajaythomas, @smitsrr, @leoguyaux, @GideonShils, @michaelmherrera!
+
+Here's what's new to [docs.getdbt.com](http://docs.getdbt.com/) in June:
+
+## ☁ Cloud projects
+
+- We clarified the nuances of [CI and Slim CI jobs](/docs/deploy/continuous-integration), updated the [Scheduler content](/docs/deploy/job-scheduler), added two new pages for the job settings and run visibility, moved the project state page to the [Syntax page](/reference/node-selection/syntax), and provided a landing page for [Deploying with Cloud](/docs/deploy/dbt-cloud-job) to help readers navigate the content better.
+- We reformatted the [Supported data platforms page](/docs/supported-data-platforms) by adding dbt Cloud to the page, splitting it into multiple pages, using cards to display verified adapters, and moving the [Warehouse setup pages](/docs/core/connect-data-platform/about-core-connections) to the Docs section.
+- We launched a new [Lint and format page](/docs/cloud/dbt-cloud-ide/lint-format), which highlights the awesome new dbt Cloud IDE linting/formatting function.
+- We enabled a connection between [dbt Cloud release notes](/docs/dbt-versions/dbt-cloud-release-notes) and the dbt Slack community. This means new dbt Cloud release notes are automatically sent to the Slack community [#dbt-cloud channel](https://getdbt.slack.com/archives/CMZ2V0X8V) via RSS feed, keeping users up to date with changes that may affect them.
+- We’ve added two new docs links in the dbt Cloud Job settings user interface (UI). This will provide additional guidance and help users succeed when setting up a dbt Cloud job: [job commands](/docs/deploy/job-commands) and [job triggers](/docs/deploy/job-triggers).
+- We added information related to the newly created [IT license](/docs/cloud/manage-access/about-user-access#license-based-access-control), available for Team and Enterprise plans.
+- We added a new [Supported browser page](/docs/cloud/about-cloud/browsers), which lists the recommended browsers for dbt Cloud.
+- We launched a new page informing users of [new Experimental features option](/docs/dbt-versions/experimental-features) in dbt Cloud.
+- We worked with dbt Engineering to help publish new beta versions of the [dbt Cloud Administrative API docs](/docs/dbt-cloud-apis/admin-cloud-api).
+
+
+## 🎯 Core projects
+
+- We launched the new [MetricFlow docs](/docs/build/build-metrics-intro) on dbt Core v1.6 beta.
+- We split [Global configs](/reference/global-configs/about-global-configs) into individual pages, making them easier to find, especially when using search.
+
+
+## New 📚 Guides, ✏️ blog posts, and FAQs
+
+- We added an Azure DevOps example to the [Customizing CI/CD guide](/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge).
diff --git a/website/docs/docs/deploy/dashboard-status-tiles.md b/website/docs/docs/deploy/dashboard-status-tiles.md
index a64838f37a2..361813c526c 100644
--- a/website/docs/docs/deploy/dashboard-status-tiles.md
+++ b/website/docs/docs/deploy/dashboard-status-tiles.md
@@ -31,42 +31,72 @@ In order to set up your dashboard status tile, here is what you need:
You can insert these three fields into the following iFrame, and then embed it **anywhere that you can embed an iFrame**:
```
-<iframe src='https://metadata.cloud.getdbt.com/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>' title='Exposure Status Tile'></iframe>
+<iframe src='https://metadata.YOUR_ACCESS_URL/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>' title='Exposure Status Tile'></iframe>
```
+:::tip Replace `YOUR_ACCESS_URL` with your region and plan's Access URL
+
+dbt Cloud is hosted in multiple regions around the world, and each region has a different access URL. Replace `YOUR_ACCESS_URL` with the appropriate [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. For example, if your account is hosted in the EMEA region, you would use the following iFrame code:
+
+```
+<iframe src='https://metadata.emea.dbt.com/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>' title='Exposure Status Tile'></iframe>
+```
+
+:::
+
## Embedding with BI tools
The dashboard status tile should work anywhere you can embed an iFrame. But below are some tactical tips on how to integrate with common BI tools.
### Mode
Mode allows you to directly [edit the HTML](https://mode.com/help/articles/report-layout-and-presentation/#html-editor) of any given report, where you can embed the iFrame.
-Note that Mode has also built their own [integration](https://mode.com/get-dbt/) with the dbt Cloud Discovery API!
+Note that Mode has also built its own [integration](https://mode.com/get-dbt/) with the dbt Cloud Discovery API!
### Looker
-Looker does not allow you to directly embed HTML, and instead requires creating a [custom visualization](https://docs.looker.com/admin-options/platform/visualizations). One way to do this for admins is to:
+Looker does not allow you to directly embed HTML and instead requires creating a [custom visualization](https://docs.looker.com/admin-options/platform/visualizations). One way to do this for admins is to:
- Add a [new visualization](https://fishtown.looker.com/admin/visualizations) on the visualization page for Looker admins. You can use [this URL](https://metadata.cloud.getdbt.com/static/looker-viz.js) to configure a Looker visualization powered by the iFrame. It will look like this:
-
+
- Once you have set up your custom visualization, you can use it on any dashboard! You can configure it with the exposure name, jobID, and token relevant to that dashboard.
-
+
### Tableau
Tableau does not require you to embed an iFrame. You only need to use a Web Page object on your Tableau Dashboard and a URL in the following format:
+```
+https://metadata.YOUR_ACCESS_URL/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>
+```
+
+:::tip Replace `YOUR_ACCESS_URL` with your region and plan's Access URL
+
+dbt Cloud is hosted in multiple regions around the world, and each region has a different access URL. Replace `YOUR_ACCESS_URL` with the appropriate [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. For example, if your account is hosted in the North American region, you would use the following code:
+
```
https://metadata.cloud.getdbt.com/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>
```
+:::
-
+
### Sigma
Sigma does not require you to embed an iFrame. Add a new embedded UI element in your Sigma Workbook in the following format:
```
-https://metadata.cloud.getdbt.com/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>
+https://metadata.YOUR_ACCESS_URL/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>
+```
+
+:::tip Replace `YOUR_ACCESS_URL` with your region and plan's Access URL
+
+dbt Cloud is hosted in multiple regions around the world, and each region has a different access URL. Replace `YOUR_ACCESS_URL` with the appropriate [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. For example, if your account is hosted in the APAC region, you would use the following code:
+
+```
+https://metadata.au.dbt.com/exposure-tile?name=<exposure_name>&jobId=<job_id>&token=<token>
```
+:::
-
\ No newline at end of file
+
diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md
index 031d4aeb6fe..a8ae33a7e0c 100644
--- a/website/docs/docs/supported-data-platforms.md
+++ b/website/docs/docs/supported-data-platforms.md
@@ -33,6 +33,11 @@ The following are **Verified adapters** ✓ you can connect to either in dbt Clo
body="Set up in dbt Cloud Install using the CLI
"
icon="databricks"/>
+
+
-
-
diff --git a/website/docs/guides/migration/versions/02-upgrading-to-v1.5.md b/website/docs/guides/migration/versions/02-upgrading-to-v1.5.md
index 0e2f2507845..811b57e6a33 100644
--- a/website/docs/guides/migration/versions/02-upgrading-to-v1.5.md
+++ b/website/docs/guides/migration/versions/02-upgrading-to-v1.5.md
@@ -58,7 +58,7 @@ models:
```
Some options that could previously be specified before a sub-command can now only be specified afterward. For example, `dbt --profiles-dir . run` isn't valid anymore, and instead, you need to use `dbt run --profiles-dir .`
-Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **note** a prerequisite for upgrading to v1.5—only for using the new feature—we recommmend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
+Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
### For consumers of dbt artifacts (metadata)
diff --git a/website/docs/quickstarts/snowflake-qs.md b/website/docs/quickstarts/snowflake-qs.md
index 6ae3b66097e..a51a206fe07 100644
--- a/website/docs/quickstarts/snowflake-qs.md
+++ b/website/docs/quickstarts/snowflake-qs.md
@@ -63,7 +63,7 @@ The data used here is stored as CSV files in a public S3 bucket and the followin
- First, delete all contents (empty) in the Editor of the Snowflake worksheet. Then, run this SQL command to create the `customer` table:
```sql
- create table raw.jaffle_shop.customers
+ create table raw.jaffle_shop.customers
( id integer,
first_name varchar,
last_name varchar
@@ -140,7 +140,7 @@ There are two ways to connect dbt Cloud to Snowflake. The first option is Partne
Using Partner Connect allows you to create a complete dbt account with your [Snowflake connection](/docs/cloud/connect-data-platform/connect-snowflake), [a managed repository](/docs/collaborate/git/managed-repository), [environments](/docs/build/custom-schemas#managing-environments), and credentials.
-1. In the Snowflake UI, click on the home icon in the upper left corner. Click on your user, and then select **Partner Connect**. Find the dbt tile by scrolling or by searching for dbt in the search bar. Click the tile to connect to dbt.
+1. In the Snowflake UI, click on the home icon in the upper left corner. In the left sidebar, select **Admin**. Then, select **Partner Connect**. Find the dbt tile by scrolling or by searching for dbt in the search bar. Click the tile to connect to dbt.
diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index 19a4f3fe47a..468bee5ff60 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -4,10 +4,6 @@ sidebar_label: "init"
id: "init"
---
-:::info Improved in v1.0!
-The `init` command is interactive and responsive like never before.
-:::
-
`dbt init` helps get you started using dbt Core!
## New project
@@ -33,6 +29,8 @@ If you've just cloned or downloaded an existing dbt project, `dbt init` can stil
- **Existing project:** If you're the maintainer of an existing project, and you want to help new users get connected to your database quickly and easily, you can include your own custom `profile_template.yml` in the root of your project, alongside `dbt_project.yml`. For common connection attributes, set the values in `fixed`; leave user-specific attributes in `prompts`, but with custom hints and defaults as you'd like.
+
+
```yml
@@ -58,9 +56,43 @@ prompts:
+
+
+
+
+
+
+```yml
+fixed:
+ account: abc123
+ authenticator: externalbrowser
+ database: analytics
+ role: transformer
+ type: snowflake
+ warehouse: transforming
+prompts:
+ target:
+ type: string
+ hint: your desired target name
+ user:
+ type: string
+ hint: yourname@jaffleshop.com
+ schema:
+ type: string
+ hint: usually dbt_
+ threads:
+ hint: "your favorite number, 1-10"
+ type: int
+ default: 8
+```
+
+
+
+
+
```
$ dbt init
-Running with dbt=1.0.0-b2
+Running with dbt=1.0.0
Setting up your profile.
user (yourname@jaffleshop.com): summerintern@jaffleshop.com
schema (usually dbt_): dbt_summerintern
diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md
index 0453ae17bf6..b4759d8b6f3 100644
--- a/website/docs/reference/resource-configs/database.md
+++ b/website/docs/reference/resource-configs/database.md
@@ -1,5 +1,4 @@
---
-title: "About database configuration"
sidebar_label: "database"
resource_types: [models, seeds, tests]
datatype: string
diff --git a/website/docs/reference/resource-configs/docs.md b/website/docs/reference/resource-configs/docs.md
index a986ab4975c..d94b975683d 100644
--- a/website/docs/reference/resource-configs/docs.md
+++ b/website/docs/reference/resource-configs/docs.md
@@ -1,5 +1,4 @@
---
-title: "About docs configuration"
sidebar_label: "docs"
resource_types: models
description: "Docs - Read this in-depth guide to learn about configurations in dbt."
diff --git a/website/docs/reference/resource-configs/schema.md b/website/docs/reference/resource-configs/schema.md
index 255a451ea16..c976bf6502a 100644
--- a/website/docs/reference/resource-configs/schema.md
+++ b/website/docs/reference/resource-configs/schema.md
@@ -1,5 +1,4 @@
---
-title: "About schema configuration"
sidebar_label: "schema"
resource_types: [models, seeds, tests]
description: "Schema - Read this in-depth guide to learn about configurations in dbt."
diff --git a/website/docs/reference/resource-configs/tags.md b/website/docs/reference/resource-configs/tags.md
index 2aaecc3c50e..f6c46f8a088 100644
--- a/website/docs/reference/resource-configs/tags.md
+++ b/website/docs/reference/resource-configs/tags.md
@@ -1,5 +1,4 @@
---
-title: "About tags configuration"
sidebar_label: "tags"
resource_types: all
datatype: string | [string]
diff --git a/website/docs/reference/resource-configs/where.md b/website/docs/reference/resource-configs/where.md
index 3ccd96f2f35..b0953e6f3d4 100644
--- a/website/docs/reference/resource-configs/where.md
+++ b/website/docs/reference/resource-configs/where.md
@@ -163,13 +163,13 @@ models:
```sql
{% macro get_where_subquery(relation) -%}
- {% set where = config.get('where', '') %}
- {% if where and "__three_days_ago__" in where %}
- {# replace placeholder string with result of custom macro #}
- {% set three_days_ago = dbt.dateadd('day', -3, current_timestamp()) %}
- {% set where = where | replace("__three_days_ago__", three_days_ago) %}
- {% endif %}
+ {% set where = config.get('where') %}
{% if where %}
+ {% if "__three_days_ago__" in where %}
+ {# replace placeholder string with result of custom macro #}
+ {% set three_days_ago = dbt.dateadd('day', -3, current_timestamp()) %}
+ {% set where = where | replace("__three_days_ago__", three_days_ago) %}
+ {% endif %}
{%- set filtered -%}
(select * from {{ relation }} where {{ where }}) dbt_subquery
{%- endset -%}
diff --git a/website/static/_redirects b/website/static/_redirects
index 3f964f66d24..27e30e83c9e 100644
--- a/website/static/_redirects
+++ b/website/static/_redirects
@@ -74,7 +74,7 @@
/reference/warehouse-setups/infer-setup /docs/core/connect-data-platform/infer-setup 301
/reference/warehouse-setups/databend-setup /docs/core/connect-data-platform/databend-setup 301
/reference/warehouse-setups/fal-setup /docs/core/connect-data-platform/fal-setup 301
-/reference/warehouse-setups/decodable-setup /docs/core/connect-data-platform/decodable-setup
+/reference/warehouse-setups/decodable-setup /docs/core/connect-data-platform/decodable-setup 301
# Discovery redirect
/docs/dbt-cloud-apis/metadata-schema-source /docs/dbt-cloud-apis/discovery-schema-source 301
@@ -712,6 +712,7 @@ https://tutorial.getdbt.com/* https://docs.getdbt.com/:splat 301!
/tutorial/building-your-first-project/\* /guides/getting-started/building-your-first-project/:splat 301
/tutorial/refactoring-legacy-sql /guides/migration/tools/refactoring-legacy-sql 301
/blog/change-data-capture-metrics /blog/change-data-capture 301
+/blog/intelligent-slim-ci /docs/deploy/continuous-integration 301
/blog/model-timing-tab /blog/how-we-shaved-90-minutes-off-model 301
/reference/warehouse-setups/resource-configs/materialize-configs/indexes /reference/resource-configs/materialize-configs#indexes 301
/docs/build/building-models /docs/build/models 301
diff --git a/website/static/img/icons/dremio.svg b/website/static/img/icons/dremio.svg
new file mode 100644
index 00000000000..9d6ad9eac25
--- /dev/null
+++ b/website/static/img/icons/dremio.svg
@@ -0,0 +1,54 @@
+
+
+
diff --git a/website/static/img/icons/white/dremio.svg b/website/static/img/icons/white/dremio.svg
new file mode 100644
index 00000000000..9d6ad9eac25
--- /dev/null
+++ b/website/static/img/icons/white/dremio.svg
@@ -0,0 +1,54 @@
+
+
+