From f823b065ebbc4ee635a5c4c3e103d76b12b2a466 Mon Sep 17 00:00:00 2001
From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
Date: Thu, 8 Jun 2023 15:20:47 -0400
Subject: [PATCH 001/703] Adding permissions table
---
.../manage-access/enterprise-permissions.md | 8 ++++++
.../snippets/enterprise-permissions-table.md | 26 +++++++++++++++++++
2 files changed, 34 insertions(+)
create mode 100644 website/snippets/enterprise-permissions-table.md
diff --git a/website/docs/docs/cloud/manage-access/enterprise-permissions.md b/website/docs/docs/cloud/manage-access/enterprise-permissions.md
index 3e12bed9939..8f0c8c3b2aa 100644
--- a/website/docs/docs/cloud/manage-access/enterprise-permissions.md
+++ b/website/docs/docs/cloud/manage-access/enterprise-permissions.md
@@ -24,6 +24,14 @@ The following permission sets are available for assignment in dbt Cloud Enterpri
can be granted to dbt Cloud groups, which are in turn granted to users. A dbt Cloud group
can be associated with more than one permission set.
+:::info Role permissions
+
+The following table lists the standard roles and the permissions associated with them in dbt Cloud. Some roles, such as _Stakeholder_, have more nuanced access to features. View the individual role descriptions for more information beyond the table definitions.
+
+:::
+
+
For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}.
+--------remove this bit yo
+## Authentication Methods
+
+The authentication methods that dbt Core supports are:
+
+- `database` — Password-based authentication (default, will be used if `method` is not provided)
+- `IAM` — Kerberos
+
+Click on one of these authentication methods for further details on how to configure your connection profile. Each tab also includes an example `profiles.yml` configuration file for you to review.
+
+For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}.
---------remove this bit yo
 ## Authentication Methods
 
 The authentication methods that dbt Core supports are:
@@ -73,19 +72,19 @@ The following table contains the parameters for database (password-based) connec
 | Profile field | Example | Description |
 | ------------- | ------- | ------------ |
-| `method` | `database`| Leave this parameter unconfigured, or set this to database. |
-| `host` | `hostname.region.redshift.amazonaws.com`| Host of cluster. |
-| `user` | `username` | Account username to log into your cluster. |
-| `password` | `password1` | Password for authentication. |
+| `type` | redshift | The type of data warehouse you are connecting to|
+| `method` | `database`| Leave this parameter unconfigured, or set this to database |
+| `host` | `hostname.region.redshift.amazonaws.com`| Host of cluster |
+| `user` | `username` | Account username to log into your cluster |
+| `password` | `password1` | Password for authentication |
 | `port` | `5439` | |
-| `dbname` | `my_db` | Database name.|
-| `schema` | `my_schema` | Schema name.|
-| `connect_timeout` | `None` or 30 | Number of seconds before connection times out.|
+| `dbname` | `my_db` | Database name|
+| `schema` | `my_schema` | Schema name|
+| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
 | `sslmode` | prefer | optional, set the sslmode to connect to the database. Default prefer, which will use 'verify-ca' to connect. For more information on `sslmode`, see Redshift note below TODO|
 | `role` | TODO | optional|
 | `ra3_node` | true | Optional, default False. Enables cross-database sources|
 | `autocommit` | true | Optional, default True. Enables autocommit after each statement|
-| `region` | us-east-1 | region to connect to your cluster with |
 For further info, refer to the GitHub repository: {frontMatter.meta.github_repo}.
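Taken together, the database (password-based) parameters above map onto a `profiles.yml` target like this minimal sketch; the profile name and credential values are placeholders:

```yaml
my_redshift_profile:
  target: dev
  outputs:
    dev:
      type: redshift
      method: database   # the default; may be left unconfigured
      host: hostname.region.redshift.amazonaws.com
      user: username
      password: password1
      port: 5439
      dbname: my_db
      schema: my_schema
      connect_timeout: 30
      sslmode: prefer
      ra3_node: true     # enables cross-database sources
      region: us-east-1
```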
+## Configurations
+
+| Profile field | Example | Description |
+| ------------- | ------- | ------------ |
+| `type` | redshift | The type of data warehouse you are connecting to|
+| `host` | `hostname.region.redshift.amazonaws.com`| Host of cluster |
+| `port` | `5439` | |
+| `dbname` | `my_db` | Database name|
+| `schema` | `my_schema` | Schema name|
+| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
+| `sslmode` | prefer | optional, set the sslmode to connect to the database. Default prefer, which will use 'verify-ca' to connect. For more information on `sslmode`, see Redshift note below|
+| `role` | None | Optional|
+| `autocreate` | false | Optional, default false. Creates user if they do not exist |
+| `db_groups` | ['ANALYSTS'] | Optional. A list of existing database group names that the DbUser joins for the current session |
+| `ra3_node` | true | Optional, default False. Enables cross-database sources|
+| `autocommit` | true | Optional, default True. Enables autocommit after each statement|
+| `retries` | 1 | Number of retries |
+| `region` | us-east-1 | Optional |
-## Authentication Methods
+
+## Authentication Parameters
 
 The authentication methods that dbt Core supports are:
 
@@ -72,19 +91,10 @@ The following table contains the parameters for database (password-based) connec
 | Profile field | Example | Description |
 | ------------- | ------- | ------------ |
-| `type` | redshift | The type of data warehouse you are connecting to|
 | `method` | `database`| Leave this parameter unconfigured, or set this to database |
 | `host` | `hostname.region.redshift.amazonaws.com`| Host of cluster |
 | `user` | `username` | Account username to log into your cluster |
 | `password` | `password1` | Password for authentication |
-| `port` | `5439` | |
-| `dbname` | `my_db` | Database name|
-| `schema` | `my_schema` | Schema name|
-| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
-| `sslmode` | prefer | optional, set the sslmode to connect to the database. Default prefer, which will use 'verify-ca' to connect. For more information on `sslmode`, see Redshift note below |
-| `role` | None | optional|
-| `ra3_node` | true | Optional, default False. Enables cross-database sources|
-| `autocommit` | true | Optional, default True. Enables autocommit after each statement|
+# Check links inside `<code>` and `<pre>` blocks as well as Markdown code
+# blocks.
+include_verbatim = false
diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml
new file mode 100644
index 00000000000..48409555d70
--- /dev/null
+++ b/.github/workflows/links.yml
@@ -0,0 +1,17 @@
+name: Check Docs Links
+
+on: [push]
+ # schedule:
+ # # Run every Monday morning at 6:00AM UTC
+ # - cron: '0 6 * * 1'
+
+jobs:
+ markdown-link-check:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v3
+ - name: Link Checker
+ uses: lycheeverse/lychee-action@master
+ with:
+ args: --verbose --config .github/lychee.toml './website/**/*.md'
From d18ab6edc8854c98f25b4f6d7f04a2effdd10457 Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 5 Jul 2023 13:31:09 -0400
Subject: [PATCH 011/703] add excluded urls
---
.github/lychee.toml | 2 ++
1 file changed, 2 insertions(+)
diff --git a/.github/lychee.toml b/.github/lychee.toml
index 8f8c52088f5..48aa972bf37 100644
--- a/.github/lychee.toml
+++ b/.github/lychee.toml
@@ -41,6 +41,8 @@ exclude = [
'https://img.shields.io',
'https://gitlab.com',
'https://dbtlabs.postman.co',
+ 'https://mobile.twitter.com',
+ 'https://twitter.com',
]
# Exclude all mail addresses from checking
From 3c3ac515b383d48eeafb32684a4309901de78e4d Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 5 Jul 2023 13:34:23 -0400
Subject: [PATCH 012/703] add 2nd twitter exclude
---
.github/lychee.toml | 1 +
1 file changed, 1 insertion(+)
diff --git a/.github/lychee.toml b/.github/lychee.toml
index 48aa972bf37..2d9f0185496 100644
--- a/.github/lychee.toml
+++ b/.github/lychee.toml
@@ -43,6 +43,7 @@ exclude = [
'https://dbtlabs.postman.co',
'https://mobile.twitter.com',
'https://twitter.com',
+ 'https://www.twitter.com',
]
# Exclude all mail addresses from checking
From dd52e92496f819376f892353b5fa12e88c61cc63 Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 5 Jul 2023 14:25:22 -0400
Subject: [PATCH 013/703] update faq prop name
---
website/docs/docs/build/jinja-macros.md | 14 ++++----
website/docs/docs/build/seeds.md | 18 +++++------
website/docs/docs/build/snapshots.md | 14 ++++----
website/docs/docs/build/sources.md | 16 +++++-----
website/docs/docs/build/sql-models.md | 32 +++++++++----------
website/docs/docs/build/tests.md | 16 +++++-----
.../dbt-cloud-ide/develop-in-the-cloud.md | 2 +-
.../docs/docs/cloud/git/authenticate-azure.md | 2 +-
website/docs/docs/cloud/git/connect-github.md | 2 +-
website/docs/docs/cloud/git/connect-gitlab.md | 6 ++--
.../docs/docs/collaborate/documentation.md | 12 +++----
.../connect-data-platform/bigquery-setup.md | 4 +--
website/docs/docs/core/pip-install.md | 6 ++--
website/docs/docs/core/source-install.md | 6 ++--
.../docs/docs/dbt-cloud-apis/user-tokens.md | 4 +--
website/docs/quickstarts/bigquery-qs.md | 16 +++++-----
website/docs/quickstarts/databricks-qs.md | 16 +++++-----
website/docs/quickstarts/manual-install-qs.md | 30 ++++++++---------
website/docs/quickstarts/redshift-qs.md | 16 +++++-----
website/docs/quickstarts/snowflake-qs.md | 6 ++--
.../docs/quickstarts/starburst-galaxy-qs.md | 16 +++++-----
website/docs/reference/commands/compile.md | 2 +-
.../docs/reference/configs-and-properties.md | 12 +++----
.../reference/dbt-jinja-functions/dispatch.md | 2 +-
.../resource-configs/full_refresh.md | 2 +-
.../resource-configs/target_schema.md | 2 +-
.../change-way-model-materialized.md | 6 ++--
.../quickstarts/delete-example-models.md | 4 +--
.../snippets/quickstarts/schedule-a-job.md | 2 +-
.../test-and-document-your-project.md | 4 +--
.../snippets/tutorial-add-tests-to-models.md | 14 ++++----
website/src/components/faqs/index.js | 8 ++---
website/src/pages/styles.js | 6 ++--
33 files changed, 159 insertions(+), 159 deletions(-)
diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md
index 5b0df69e898..8375eed594f 100644
--- a/website/docs/docs/build/jinja-macros.md
+++ b/website/docs/docs/build/jinja-macros.md
@@ -148,13 +148,13 @@ You can also qualify a macro in your own project by prefixing it with your [pack
## FAQs
-
-
-
-
-
-
-
+
+
+
+
+
+
+
## dbtonic Jinja
diff --git a/website/docs/docs/build/seeds.md b/website/docs/docs/build/seeds.md
index 5ea92a78d4a..6b1abf8f1c3 100644
--- a/website/docs/docs/build/seeds.md
+++ b/website/docs/docs/build/seeds.md
@@ -80,12 +80,12 @@ Seeds are configured in your `dbt_project.yml`, check out the [seed configuratio
You can document and test seeds in YAML by declaring properties — check out the docs on [seed properties](/reference/seed-properties) for more information.
## FAQs
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
diff --git a/website/docs/docs/build/snapshots.md b/website/docs/docs/build/snapshots.md
index 01330c5f0aa..947d4ebef38 100644
--- a/website/docs/docs/build/snapshots.md
+++ b/website/docs/docs/build/snapshots.md
@@ -408,10 +408,10 @@ Snapshot results:
## FAQs
-
-
-
-
-
-
-
+
+
+
+
+
+
+
diff --git a/website/docs/docs/build/sources.md b/website/docs/docs/build/sources.md
index 8c7cf934464..d1b269cdd13 100644
--- a/website/docs/docs/build/sources.md
+++ b/website/docs/docs/build/sources.md
@@ -124,11 +124,11 @@ sources:
You can find more details on the available properties for sources in the [reference section](/reference/source-properties).
### FAQs
-
-
-
-
-
+
+
+
+
+
## Snapshotting source data freshness
With a couple of extra configs, dbt can optionally snapshot the "freshness" of the data in your source tables. This is useful for understanding if your data pipelines are in a healthy state, and is a critical component of defining SLAs for your warehouse.
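A minimal sketch of that freshness config, assuming an illustrative `jaffle_shop` source whose tables expose the `_etl_loaded_at` column used in the query below:

```yaml
sources:
  - name: jaffle_shop                 # illustrative source name
    loaded_at_field: _etl_loaded_at   # column dbt compares against the current timestamp
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: orders
```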
@@ -204,6 +204,6 @@ where _etl_loaded_at >= date_sub(current_date(), interval 1 day)
```
### FAQs
-
-
-
+
+
+
diff --git a/website/docs/docs/build/sql-models.md b/website/docs/docs/build/sql-models.md
index bfe844e394e..65fdd58adf0 100644
--- a/website/docs/docs/build/sql-models.md
+++ b/website/docs/docs/build/sql-models.md
@@ -103,11 +103,11 @@ Why a _view_ named `dbt_alice.customers`? By default dbt will:
You can use _configurations_ to change any of these behaviors — more on that later.
### FAQs
-
-
-
-
-
+
+
+
+
+
## Configuring models
Configurations are "model settings" that can be set in your `dbt_project.yml` file, _and_ in your model file using a `config` block. Some example configurations include:
@@ -158,8 +158,8 @@ It is important to note that configurations are applied hierarchically — a con
You can learn more about configurations in the [reference docs](/reference/model-configs).
### FAQs
-
-
+
+
## Building dependencies between models
@@ -265,12 +265,12 @@ Additionally, the `ref` function encourages you to write modular transformations
You can also document and test models — skip ahead to the section on [testing](/docs/build/tests) and [documentation](/docs/collaborate/documentation) for more information.
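As a small illustration of that modularity, a downstream model selects from an upstream one through `ref` rather than a hard-coded table name (the model names are borrowed from the quickstarts):

```sql
-- models/customers.sql: dbt resolves ref() to the correct schema for each environment
select * from {{ ref('stg_customers') }}
```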
## Additional FAQs
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md
index 51a0346443f..1a40dd42b53 100644
--- a/website/docs/docs/build/tests.md
+++ b/website/docs/docs/build/tests.md
@@ -261,11 +261,11 @@ Note that, if you elect to store test failures:
## FAQs
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
diff --git a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md
index c5bb2dc8b9e..a128cbb9bd1 100644
--- a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md
+++ b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md
@@ -141,7 +141,7 @@ The dbt Cloud IDE makes it possible to [build and view](/docs/collaborate/build-
## Related questions
-
+
Is there a cost to using the Cloud IDE?
diff --git a/website/docs/docs/cloud/git/authenticate-azure.md b/website/docs/docs/cloud/git/authenticate-azure.md
index f3a534ac923..9e755519e67 100644
--- a/website/docs/docs/cloud/git/authenticate-azure.md
+++ b/website/docs/docs/cloud/git/authenticate-azure.md
@@ -25,4 +25,4 @@ You will be directed back to dbt Cloud, and your profile should be linked. You a
## FAQs
-
+
diff --git a/website/docs/docs/cloud/git/connect-github.md b/website/docs/docs/cloud/git/connect-github.md
index 6113e3ccb42..410b28c9767 100644
--- a/website/docs/docs/cloud/git/connect-github.md
+++ b/website/docs/docs/cloud/git/connect-github.md
@@ -74,4 +74,4 @@ The next time you log into dbt Cloud, you will be able to do so via OAuth throug
## FAQs
-
+
diff --git a/website/docs/docs/cloud/git/connect-gitlab.md b/website/docs/docs/cloud/git/connect-gitlab.md
index e66fa577e5b..1ec8fb08817 100644
--- a/website/docs/docs/cloud/git/connect-gitlab.md
+++ b/website/docs/docs/cloud/git/connect-gitlab.md
@@ -119,6 +119,6 @@ If you imported a repository using the dbt Cloud native integration with GitLab,
## FAQs
-
-
-
+
+
+
diff --git a/website/docs/docs/collaborate/documentation.md b/website/docs/docs/collaborate/documentation.md
index a28ea8ff55f..b613fd7a5ef 100644
--- a/website/docs/docs/collaborate/documentation.md
+++ b/website/docs/docs/collaborate/documentation.md
@@ -67,12 +67,12 @@ First, run `dbt docs generate` — this command tells dbt to compile relevant in
Then, run `dbt docs serve` to use these `.json` files to populate a local website.
## FAQs
-
-
-
-
-
-
+
+
+
+
+
+
## Using Docs Blocks
diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
index bfb97237b29..99ae473b0e2 100644
--- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md
+++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
@@ -457,8 +457,8 @@ my-profile:
For a general overview of this process, see the official docs for [Creating Short-lived Service Account Credentials](https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials).
-
-
+
+
### Execution project
New in v0.21.0
diff --git a/website/docs/docs/core/pip-install.md b/website/docs/docs/core/pip-install.md
index 912af40b9b7..26a15d8ad37 100644
--- a/website/docs/docs/core/pip-install.md
+++ b/website/docs/docs/core/pip-install.md
@@ -8,9 +8,9 @@ You need to use `pip` to install dbt Core on Windows or Linux operating systems.
You can install dbt Core and plugins using `pip` because they are Python modules distributed on [PyPI](https://pypi.org/project/dbt/). We recommend using virtual environments when installing with `pip`.
-
-
-
+
+
+
Once you know [which adapter](/docs/supported-data-platforms) you're using, you can install it as `dbt-<adapter>`. For example, if using Postgres:
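For example, installing the Postgres adapter:

```shell
pip install dbt-postgres
```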
diff --git a/website/docs/docs/core/source-install.md b/website/docs/docs/core/source-install.md
index 6714e88cd10..be9918223fe 100644
--- a/website/docs/docs/core/source-install.md
+++ b/website/docs/docs/core/source-install.md
@@ -35,6 +35,6 @@ You do _not_ need to install `dbt-core` before installing an adapter plugin -- t
To install in editable mode, such as while contributing, use `pip install -e .` instead.
-
-
-
+
+
+
diff --git a/website/docs/docs/dbt-cloud-apis/user-tokens.md b/website/docs/docs/dbt-cloud-apis/user-tokens.md
index 70f3f8f615f..e56d8b2f974 100644
--- a/website/docs/docs/dbt-cloud-apis/user-tokens.md
+++ b/website/docs/docs/dbt-cloud-apis/user-tokens.md
@@ -17,5 +17,5 @@ label.
## FAQs
-
-
\ No newline at end of file
+
+
diff --git a/website/docs/quickstarts/bigquery-qs.md b/website/docs/quickstarts/bigquery-qs.md
index ed913cda647..2ff0ad92606 100644
--- a/website/docs/quickstarts/bigquery-qs.md
+++ b/website/docs/quickstarts/bigquery-qs.md
@@ -179,11 +179,11 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs
-
-
-
-
-
+
+
+
+
+
## Change the way your model is materialized
@@ -287,9 +287,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs {#faq-2}
-
-
-
+
+
+
diff --git a/website/docs/quickstarts/databricks-qs.md b/website/docs/quickstarts/databricks-qs.md
index 096f6d2b04a..706e87352ee 100644
--- a/website/docs/quickstarts/databricks-qs.md
+++ b/website/docs/quickstarts/databricks-qs.md
@@ -257,11 +257,11 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs
-
-
-
-
-
+
+
+
+
+
## Change the way your model is materialized
@@ -365,9 +365,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs {#faq-2}
-
-
-
+
+
+
diff --git a/website/docs/quickstarts/manual-install-qs.md b/website/docs/quickstarts/manual-install-qs.md
index f7d15b897f7..97eb8f8e8ea 100644
--- a/website/docs/quickstarts/manual-install-qs.md
+++ b/website/docs/quickstarts/manual-install-qs.md
@@ -130,11 +130,11 @@ When developing locally, dbt connects to your data warehouse using
#### FAQs
-
-
-
-
-
+
+
+
+
+
### Perform your first dbt run
@@ -200,11 +200,11 @@ When you return to the BigQuery console, you can `select` from this model.
#### FAQs
-
-
-
-
-
+
+
+
+
+
### Change the way your model is materialized
@@ -411,9 +411,9 @@ When you return to the BigQuery console, you can `select` from this model.
#### FAQs {#faq-2}
-
-
-
+
+
+
### Next steps
@@ -438,8 +438,8 @@ You can also explore:
#### FAQs
-
-
+
+
#### Next steps
diff --git a/website/docs/quickstarts/redshift-qs.md b/website/docs/quickstarts/redshift-qs.md
index b5277a47559..f672f16c4ed 100644
--- a/website/docs/quickstarts/redshift-qs.md
+++ b/website/docs/quickstarts/redshift-qs.md
@@ -273,11 +273,11 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs
-
-
-
-
-
+
+
+
+
+
## Change the way your model is materialized
@@ -381,9 +381,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs {#faq-2}
-
-
-
+
+
+
diff --git a/website/docs/quickstarts/snowflake-qs.md b/website/docs/quickstarts/snowflake-qs.md
index 6ae3b66097e..9fa30439563 100644
--- a/website/docs/quickstarts/snowflake-qs.md
+++ b/website/docs/quickstarts/snowflake-qs.md
@@ -395,9 +395,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs {#faq-2}
-
-
-
+
+
+
## Build models on top of sources
diff --git a/website/docs/quickstarts/starburst-galaxy-qs.md b/website/docs/quickstarts/starburst-galaxy-qs.md
index eb3691d4e04..783a91c47da 100644
--- a/website/docs/quickstarts/starburst-galaxy-qs.md
+++ b/website/docs/quickstarts/starburst-galaxy-qs.md
@@ -293,11 +293,11 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs
-
-
-
-
-
+
+
+
+
+
## Change the way your model is materialized
@@ -401,9 +401,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
#### FAQs {#faq-2}
-
-
-
+
+
+
diff --git a/website/docs/reference/commands/compile.md b/website/docs/reference/commands/compile.md
index 7b3f18a9c15..5cb4c4ec9ab 100644
--- a/website/docs/reference/commands/compile.md
+++ b/website/docs/reference/commands/compile.md
@@ -43,4 +43,4 @@ The command accesses the data platform to cache related metadata, and to run int
### FAQs
-
+
diff --git a/website/docs/reference/configs-and-properties.md b/website/docs/reference/configs-and-properties.md
index 1001d7b177e..c2ad5b77629 100644
--- a/website/docs/reference/configs-and-properties.md
+++ b/website/docs/reference/configs-and-properties.md
@@ -161,12 +161,12 @@ You can find an exhaustive list of each supported property and config, broken do
* Exposure [properties](/reference/exposure-properties)
## FAQs
-
-
-
-
-
-
+
+
+
+
+
+
## Troubleshooting common errors
diff --git a/website/docs/reference/dbt-jinja-functions/dispatch.md b/website/docs/reference/dbt-jinja-functions/dispatch.md
index d615bbdb430..a165ae59eb0 100644
--- a/website/docs/reference/dbt-jinja-functions/dispatch.md
+++ b/website/docs/reference/dbt-jinja-functions/dispatch.md
@@ -266,4 +266,4 @@ In rare cases, the child adapter may prefer the default implementation to its pa
## FAQs
-
+
diff --git a/website/docs/reference/resource-configs/full_refresh.md b/website/docs/reference/resource-configs/full_refresh.md
index bc875c2ad2b..f75fe3a583b 100644
--- a/website/docs/reference/resource-configs/full_refresh.md
+++ b/website/docs/reference/resource-configs/full_refresh.md
@@ -85,7 +85,7 @@ This logic is encoded in the [`should_full_refresh()`](https://github.com/dbt-la
### Seeds
-
+
## Recommendation
Set `full_refresh: false` for models of especially large datasets, which you would _never_ want dbt to fully drop and recreate.
diff --git a/website/docs/reference/resource-configs/target_schema.md b/website/docs/reference/resource-configs/target_schema.md
index c612a99604d..041f004e20c 100644
--- a/website/docs/reference/resource-configs/target_schema.md
+++ b/website/docs/reference/resource-configs/target_schema.md
@@ -35,7 +35,7 @@ On **BigQuery**, this is analogous to a `dataset`.
This is a **required** parameter; no default is provided.
## FAQs
-
+
## Examples
### Build all snapshots in a schema named `snapshots`
diff --git a/website/snippets/quickstarts/change-way-model-materialized.md b/website/snippets/quickstarts/change-way-model-materialized.md
index 04f92aac9a7..08d1b5ca0d8 100644
--- a/website/snippets/quickstarts/change-way-model-materialized.md
+++ b/website/snippets/quickstarts/change-way-model-materialized.md
@@ -59,6 +59,6 @@ By default, everything gets created as a view. You can override that by material
#### FAQs
-
-
-
+
+
+
diff --git a/website/snippets/quickstarts/delete-example-models.md b/website/snippets/quickstarts/delete-example-models.md
index e290877437c..b2ce36a40f2 100644
--- a/website/snippets/quickstarts/delete-example-models.md
+++ b/website/snippets/quickstarts/delete-example-models.md
@@ -31,5 +31,5 @@ You can now delete the files that dbt created when you initialized the project:
#### FAQs
-
-
+
+
diff --git a/website/snippets/quickstarts/schedule-a-job.md b/website/snippets/quickstarts/schedule-a-job.md
index 55504636192..59d428bdfaa 100644
--- a/website/snippets/quickstarts/schedule-a-job.md
+++ b/website/snippets/quickstarts/schedule-a-job.md
@@ -40,6 +40,6 @@ Congratulations 🎉! You've just deployed your first dbt project!
#### FAQs
-
+
diff --git a/website/snippets/quickstarts/test-and-document-your-project.md b/website/snippets/quickstarts/test-and-document-your-project.md
index e5315b13a09..932f58712af 100644
--- a/website/snippets/quickstarts/test-and-document-your-project.md
+++ b/website/snippets/quickstarts/test-and-document-your-project.md
@@ -10,6 +10,6 @@
#### FAQs
-
-
+
+
diff --git a/website/snippets/tutorial-add-tests-to-models.md b/website/snippets/tutorial-add-tests-to-models.md
index 937a444a1f0..491fc72ba85 100644
--- a/website/snippets/tutorial-add-tests-to-models.md
+++ b/website/snippets/tutorial-add-tests-to-models.md
@@ -52,10 +52,10 @@ When you run `dbt test`, dbt iterates through your YAML files, and constructs a
#### FAQs
-
-
-
-
-
-
-
+
+
+
+
+
+
+
diff --git a/website/src/components/faqs/index.js b/website/src/components/faqs/index.js
index e1f1e5b2cf4..52c4573d883 100644
--- a/website/src/components/faqs/index.js
+++ b/website/src/components/faqs/index.js
@@ -2,18 +2,18 @@ import React, { useState, useEffect } from 'react';
import styles from './styles.module.css';
import { usePluginData } from '@docusaurus/useGlobalData';
-function FAQ({ src, alt_header = null }) {
+function FAQ({ path, alt_header = null }) {
const [isOn, setOn] = useState(false);
- const [filePath, setFilePath] = useState(src)
+ const [filePath, setFilePath] = useState(path)
const [fileContent, setFileContent] = useState({})
// Get all faq file paths from plugin
const { faqFiles } = usePluginData('docusaurus-build-global-data-plugin');
useEffect(() => {
- // Search for faq where frontmatter ID matches src prop
- const faqFile = faqFiles.find(file => file.id === src)
+ // Search for faq where frontmatter ID matches path prop
+ const faqFile = faqFiles.find(file => file.id === path)
// If faqFile found with ID, set filePath for this file
if (faqFile?.id) {
diff --git a/website/src/pages/styles.js b/website/src/pages/styles.js
index 6e65d9775c3..d0bcf2f227c 100644
--- a/website/src/pages/styles.js
+++ b/website/src/pages/styles.js
@@ -95,9 +95,9 @@ description: "this is \[an escaped link](docs.getdbt.com)"
FAQ
-{` `}
-
-
+{` `}
+
+
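After this rename, a docs page embeds an FAQ through the `path` prop, matched against the FAQ file's frontmatter `id` as shown in the component code above. A hedged sketch, borrowing the `id: slack` frontmatter visible later in this series:

```markdown
## FAQs

<FAQ path='slack' alt_header='How to set up Slack' />
```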
From a05e0d726a216d199e6915432542277d0d43f910 Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 5 Jul 2023 14:25:45 -0400
Subject: [PATCH 014/703] update snippet prop name
---
contributing/single-sourcing-content.md | 4 ++--
website/docs/docs/build/hooks-operations.md | 2 +-
website/docs/docs/build/incremental-models.md | 2 +-
website/docs/docs/build/jinja-macros.md | 2 +-
website/docs/docs/build/materializations.md | 2 +-
website/docs/docs/build/metrics.md | 2 +-
website/docs/docs/build/project-variables.md | 2 +-
website/docs/docs/cloud/about-cloud/tenancy.md | 2 +-
.../connect-apache-spark.md | 2 +-
.../connect-data-platform/connect-snowflake.md | 2 +-
.../connect-starburst-trino.md | 4 ++--
.../docs/docs/cloud/git/connect-azure-devops.md | 2 +-
website/docs/docs/cloud/git/setup-azure.md | 2 +-
.../docs/cloud/manage-access/auth0-migration.md | 2 +-
.../set-up-sso-azure-active-directory.md | 2 +-
.../manage-access/set-up-sso-google-workspace.md | 2 +-
.../docs/cloud/manage-access/set-up-sso-okta.md | 4 ++--
.../cloud/manage-access/set-up-sso-saml-2.0.md | 14 +++++++-------
.../docs/cloud/manage-access/sso-overview.md | 2 +-
.../core/connect-data-platform/bigquery-setup.md | 2 +-
.../connect-data-platform/databricks-setup.md | 2 +-
.../core/connect-data-platform/postgres-setup.md | 2 +-
.../core/connect-data-platform/redshift-setup.md | 2 +-
.../connect-data-platform/snowflake-setup.md | 2 +-
.../core/connect-data-platform/spark-setup.md | 4 ++--
.../core/connect-data-platform/trino-setup.md | 6 +++---
.../docs/dbt-cloud-apis/discovery-querying.md | 2 +-
.../schema-discovery-environment.mdx | 2 +-
.../schema-discovery-modelByEnv.mdx | 2 +-
website/docs/docs/dbt-versions/core-versions.md | 4 ++--
.../docs/dbt-versions/upgrade-core-in-cloud.md | 4 ++--
website/docs/docs/deploy/job-notifications.md | 2 +-
website/docs/docs/introduction.md | 2 +-
.../use-dbt-semantic-layer/dbt-semantic-layer.md | 4 ++--
.../quickstart-semantic-layer.md | 6 +++---
.../set-dbt-semantic-layer.md | 6 +++---
website/docs/faqs/Accounts/slack.md | 2 +-
.../orchestration/webhooks/zapier-ms-teams.md | 4 ++--
.../webhooks/zapier-new-cloud-job.md | 4 ++--
.../orchestration/webhooks/zapier-slack.md | 4 ++--
website/docs/quickstarts/bigquery-qs.md | 12 ++++++------
website/docs/quickstarts/databricks-qs.md | 12 ++++++------
website/docs/quickstarts/manual-install-qs.md | 16 ++++++++--------
website/docs/quickstarts/redshift-qs.md | 12 ++++++------
website/docs/quickstarts/snowflake-qs.md | 14 +++++++-------
website/docs/quickstarts/starburst-galaxy-qs.md | 12 ++++++------
.../docs/reference/dbt-jinja-functions/this.md | 2 +-
website/docs/reference/node-selection/syntax.md | 2 +-
.../project-configs/on-run-start-on-run-end.md | 2 +-
.../resource-configs/bigquery-configs.md | 4 ++--
.../docs/reference/resource-configs/grants.md | 2 +-
.../resource-configs/pre-hook-post-hook.md | 8 ++++----
.../reference/resource-configs/spark-configs.md | 2 +-
.../test-and-document-your-project.md | 4 ++--
website/snippets/tutorial-managed-repo.md | 4 ++--
website/src/components/snippet/index.js | 4 ++--
56 files changed, 118 insertions(+), 118 deletions(-)
diff --git a/contributing/single-sourcing-content.md b/contributing/single-sourcing-content.md
index 7a80e71728c..2b854dbff13 100644
--- a/contributing/single-sourcing-content.md
+++ b/contributing/single-sourcing-content.md
@@ -226,7 +226,7 @@ The Snippet component allows for content to be reusable throughout the Docs. Thi
Creating and using a snippet requires two steps:
1. Create a new markdown snippet file in the `website/snippets` directory.
-2. Use the ` ` component within a Docs file.
+2. Use the ` ` component within a Docs file.
### Snippet properties
@@ -247,7 +247,7 @@ Now, we can add this snippet to a Docs file with the Snippet component:
```markdown
Docs content here.
-
+
Docs content here.
```
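With the renamed prop, the usage block above would read something like this sketch (assuming a snippet file `website/snippets/tutorial-managed-repo.md`, referenced without its extension):

```markdown
Docs content here.

<Snippet path="tutorial-managed-repo" />

Docs content here.
```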
diff --git a/website/docs/docs/build/hooks-operations.md b/website/docs/docs/build/hooks-operations.md
index 660a8d379c5..1abc5657bad 100644
--- a/website/docs/docs/build/hooks-operations.md
+++ b/website/docs/docs/build/hooks-operations.md
@@ -38,7 +38,7 @@ Hooks are snippets of SQL that are executed at different times:
Hooks are a more advanced capability that enables you to run custom SQL and leverage database-specific actions, beyond what dbt makes available out of the box with standard materializations and configurations.
-
+
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 15b24520711..28345ba1873 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -456,5 +456,5 @@ The syntax depends on how you configure your `incremental_strategy`:
-
+
diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md
index 8375eed594f..a4122dfe09b 100644
--- a/website/docs/docs/build/jinja-macros.md
+++ b/website/docs/docs/build/jinja-macros.md
@@ -185,5 +185,5 @@ Writing a macro for the first time? Check whether we've open sourced one in [dbt
{% endfor %}
```
-
+
diff --git a/website/docs/docs/build/materializations.md b/website/docs/docs/build/materializations.md
index d639e9f3ebf..70c7878bd69 100644
--- a/website/docs/docs/build/materializations.md
+++ b/website/docs/docs/build/materializations.md
@@ -179,6 +179,6 @@ def model(dbt, session):
**Note:** Incremental models are supported on BigQuery/Dataproc for the `merge` incremental strategy. The `insert_overwrite` strategy is not yet supported.
-
+
diff --git a/website/docs/docs/build/metrics.md b/website/docs/docs/build/metrics.md
index 4f24dface7e..8475cfe8684 100644
--- a/website/docs/docs/build/metrics.md
+++ b/website/docs/docs/build/metrics.md
@@ -714,5 +714,5 @@ The above example will return a dataset that contains the metric provided in the
-
+
diff --git a/website/docs/docs/build/project-variables.md b/website/docs/docs/build/project-variables.md
index b6e1b564ec8..a69132d6a3b 100644
--- a/website/docs/docs/build/project-variables.md
+++ b/website/docs/docs/build/project-variables.md
@@ -120,5 +120,5 @@ If dbt is unable to find a definition for a variable after checking these four p
**Note:** Variable scope is based on the node ultimately using that variable. Imagine the case where a model defined in the root project is calling a macro defined in an installed package. That macro, in turn, uses the value of a variable. The variable will be resolved based on the _root project's_ scope, rather than the package's scope.
-
+
diff --git a/website/docs/docs/cloud/about-cloud/tenancy.md b/website/docs/docs/cloud/about-cloud/tenancy.md
index 0d312767b82..5785533264e 100644
--- a/website/docs/docs/cloud/about-cloud/tenancy.md
+++ b/website/docs/docs/cloud/about-cloud/tenancy.md
@@ -24,4 +24,4 @@ _To learn more about setting up a dbt Cloud single tenant deployment, [please co
### Available features
-
+
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-apache-spark.md b/website/docs/docs/cloud/connect-data-platform/connect-apache-spark.md
index f9a48233977..670b628547b 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-apache-spark.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-apache-spark.md
@@ -5,7 +5,7 @@ description: "Setup instructions for connecting Apache Spark to dbt Cloud"
sidebar_label: "Connect Apache Spark"
---
-
+
:::note
See [Connect Databricks](#connect-databricks) for the Databricks version of this page.
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
index 600e1dddaa3..4f31c56e8aa 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
@@ -9,7 +9,7 @@ The following fields are required when creating a Snowflake connection
| Field | Description | Examples |
| ----- | ----------- | -------- |
-| Account | The Snowflake account to connect to. Take a look [here](/docs/core/connect-data-platform/snowflake-setup#account) to determine what the account field should look like based on your region.| |
+| Account | The Snowflake account to connect to. Take a look [here](/docs/core/connect-data-platform/snowflake-setup#account) to determine what the account field should look like based on your region.| |
| Role | A mandatory field indicating what role should be assumed after connecting to Snowflake | `transformer` |
| Database | The logical database to connect to and run queries against. | `analytics` |
| Warehouse | The virtual warehouse to use for running queries. | `transforming` |
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md b/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
index 224cf4d9691..db0d3f61728 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
@@ -17,11 +17,11 @@ The following are the required fields for setting up a connection with a [Starbu
## Roles in Starburst Enterprise
-
+
## Catalogs and schemas
-
+
## Configuration
diff --git a/website/docs/docs/cloud/git/connect-azure-devops.md b/website/docs/docs/cloud/git/connect-azure-devops.md
index 7e656a11f7e..a84e593a1e2 100644
--- a/website/docs/docs/cloud/git/connect-azure-devops.md
+++ b/website/docs/docs/cloud/git/connect-azure-devops.md
@@ -3,7 +3,7 @@ title: "Connect to Azure DevOps"
id: "connect-azure-devops"
---
-
+
## About Azure DevOps and dbt Cloud
diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md
index 363e63eb088..9eca77d7014 100644
--- a/website/docs/docs/cloud/git/setup-azure.md
+++ b/website/docs/docs/cloud/git/setup-azure.md
@@ -5,7 +5,7 @@ description: "You can set up your Azure DevOps by creating an Azure AD app and a
sidebar_label: "Set up Azure DevOps"
---
-
+
## Overview
diff --git a/website/docs/docs/cloud/manage-access/auth0-migration.md b/website/docs/docs/cloud/manage-access/auth0-migration.md
index af430772ca4..68598d92f91 100644
--- a/website/docs/docs/cloud/manage-access/auth0-migration.md
+++ b/website/docs/docs/cloud/manage-access/auth0-migration.md
@@ -16,7 +16,7 @@ If you have not yet configured SSO in dbt Cloud, refer instead to our setup guid
## Auth0 Multi-tenant URIs
-
+
## Start the migration
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-azure-active-directory.md b/website/docs/docs/cloud/manage-access/set-up-sso-azure-active-directory.md
index 13f49422832..fcc9a79e860 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-azure-active-directory.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-azure-active-directory.md
@@ -156,7 +156,7 @@ To complete setup, follow the steps below in the dbt Cloud application.
here, you can navigate to the login URL generated for your account's _slug_ to
test logging in with Azure AD.
-
+
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md
index 314d1128cb0..1cfa9a49286 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md
@@ -113,7 +113,7 @@ Settings.
If the verification information looks appropriate, then you have completed the configuration of GSuite SSO.
-
+
## Setting up RBAC
Now that you have completed setting up SSO with GSuite, the next steps will be to set up
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
index 70de8285450..0d493bcf29f 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
dashes. It should contain only letters, numbers, and dashes. For example, the _login slug_ for dbt Labs would be
`dbt-labs`. Login slugs must be unique across all dbt Cloud accounts,
so pick a slug that uniquely identifies your company.
-
+
* **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=`
* **Audience URI (SP Entity ID)**: `urn:auth0::{login slug}`
@@ -187,7 +187,7 @@ configured in the steps above.
test logging in with Okta. Additionally, users added to the Okta app
will be able to log in to dbt Cloud from Okta directly.
-
+
## Setting up RBAC
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
index 297e92600f7..d5a16e91792 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
@@ -18,7 +18,7 @@ provider in order to configure Single Sign On and [role-based access control](/d
## Auth0 Multi-tenant URIs
-
+
## Generic SAML 2.0 integrations
@@ -39,7 +39,7 @@ You'll need administrator access to your SAML 2.0 compliant identity provider to
#### Configuring the application
-
+
To complete this section, you will need to create a login slug. This slug controls the URL where users on your account can log into your application. Login slugs are typically the lowercased name of your organization separated with dashes. It should contain only letters, numbers, and dashes.
For example, the login slug for dbt Labs would be `dbt-labs`.
@@ -117,7 +117,7 @@ You can use the instructions in this section to configure Okta as your identity
### Configure the Okta application
-
+
To complete this section, you will need to create a login slug. This slug controls the URL where users on your account can log into your application. Login slugs are typically the lowercased name of your organization separated with dashes. It should contain only letters, numbers, and dashes.
For example, the login slug for dbt Labs would be `dbt-labs`.
@@ -220,7 +220,7 @@ Use this section if you are configuring Google as your identity provider.
### Configure the Google application
-
+
To complete this section, you will need to create a login slug. This slug controls the URL where users on your account
can log into your application. Login slugs are typically the lowercased name of your organization
@@ -288,7 +288,7 @@ If you're using Azure Active Directory (Azure AD), the instructions below will h
### Create Azure AD Enterprise application
-
+
To complete this section, you will need to create a login slug. This slug controls the URL where users on your account can log into your application. Login slugs are typically the lowercased name of your organization
separated with dashes. It should contain only letters, numbers, and dashes. For example, the login slug for dbt Labs would be `dbt-labs`.
@@ -352,7 +352,7 @@ To configure OneLogin, you will need **Administrator** access.
### Configure the OneLogin application
-
+
To complete this section, you will need to create a login slug. This slug controls the URL where users on your account can log into your application. Login slugs are typically the lowercased name of your organization
separated with dashes. It should contain only letters, numbers, and dashes. For example, the login slug for dbt Labs would be `dbt-labs`.
@@ -432,7 +432,7 @@ To complete setup, follow the steps below in dbt Cloud:
5. After completing the setup, you can navigate to the URL generated for your account's _slug_ to test logging in with your identity provider. Additionally, users added to the SAML 2.0 app will be able to log in to dbt Cloud from the IdP directly.
-
+
### Setting up RBAC
diff --git a/website/docs/docs/cloud/manage-access/sso-overview.md b/website/docs/docs/cloud/manage-access/sso-overview.md
index b129b40c029..7e44859c73a 100644
--- a/website/docs/docs/cloud/manage-access/sso-overview.md
+++ b/website/docs/docs/cloud/manage-access/sso-overview.md
@@ -17,7 +17,7 @@ Once you configure SSO, even partially, you cannot disable or revert it. When yo
## Auth0 Multi-tenant URIs
-
+
## SSO process
diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
index 99ae473b0e2..b0fc9fa7cf0 100644
--- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md
+++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
@@ -16,7 +16,7 @@ meta:
---
-
+
Overview of {frontMatter.meta.pypi_package}
diff --git a/website/docs/docs/core/connect-data-platform/databricks-setup.md b/website/docs/docs/core/connect-data-platform/databricks-setup.md
index eef6522a8f5..0d24a3b04aa 100644
--- a/website/docs/docs/core/connect-data-platform/databricks-setup.md
+++ b/website/docs/docs/core/connect-data-platform/databricks-setup.md
@@ -16,7 +16,7 @@ meta:
config_page: '/reference/resource-configs/databricks-configs'
---
-
+
Overview of {frontMatter.meta.pypi_package}
diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md
index a6948e6f1ad..5d7467c786d 100644
--- a/website/docs/docs/core/connect-data-platform/postgres-setup.md
+++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md
@@ -16,7 +16,7 @@ meta:
config_page: '/reference/resource-configs/postgres-configs'
---
-
+
Overview of {frontMatter.meta.pypi_package}
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index 7d5fdbf7a97..a86bc7df849 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -16,7 +16,7 @@ meta:
config_page: '/reference/resource-configs/redshift-configs'
---
-
+
Overview of {frontMatter.meta.pypi_package}
diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
index 147cfb87867..6bc9c980922 100644
--- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md
+++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
@@ -16,7 +16,7 @@ meta:
config_page: '/reference/resource-configs/snowflake-configs'
---
-
+
Overview of {frontMatter.meta.pypi_package}
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 00de640ee05..2e3b5a66de8 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -17,8 +17,8 @@ meta:
---
-
-
+
+
:::note
See [Databricks setup](#databricks-setup) for the Databricks version of this page.
diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md
index 711e735ab6d..396634dc6e6 100644
--- a/website/docs/docs/core/connect-data-platform/trino-setup.md
+++ b/website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -16,7 +16,7 @@ meta:
config_page: '/reference/resource-configs/trino-configs'
---
-
+
Overview of {frontMatter.meta.pypi_package}
@@ -70,10 +70,10 @@ The following profile fields are always required except for `user`, which is als
| `user` | Format for Starburst Enterprise or Trino: `user.name` or `user.name@mydomain.com`. Format for Starburst Galaxy: `user.name@mydomain.com/role` | The username (of the account) to log in to your cluster. When connecting to Starburst Galaxy clusters, you must include the role of the user as a suffix to the username. |
### Roles in Starburst Enterprise
-
+
### Schemas and databases
-
+
## Additional parameters
diff --git a/website/docs/docs/dbt-cloud-apis/discovery-querying.md b/website/docs/docs/dbt-cloud-apis/discovery-querying.md
index 8d602e73e5f..02625eb00a3 100644
--- a/website/docs/docs/dbt-cloud-apis/discovery-querying.md
+++ b/website/docs/docs/dbt-cloud-apis/discovery-querying.md
@@ -10,7 +10,7 @@ Use the Discovery API to evaluate data pipeline health and project state across
Since GraphQL provides a description of the data in the API, the schema displayed in the GraphQL explorer accurately represents the graph and fields available to query.
-
+
## Authorization
diff --git a/website/docs/docs/dbt-cloud-apis/schema-discovery-environment.mdx b/website/docs/docs/dbt-cloud-apis/schema-discovery-environment.mdx
index fdf6aab729d..41fd5555c3f 100644
--- a/website/docs/docs/dbt-cloud-apis/schema-discovery-environment.mdx
+++ b/website/docs/docs/dbt-cloud-apis/schema-discovery-environment.mdx
@@ -6,7 +6,7 @@ id: "discovery-schema-environment"
import { ArgsTable, SchemaTable } from "./schema";
-
+
This environment object allows you to query information about a particular model based on `environmentId`.
diff --git a/website/docs/docs/dbt-cloud-apis/schema-discovery-modelByEnv.mdx b/website/docs/docs/dbt-cloud-apis/schema-discovery-modelByEnv.mdx
index 400735bdce4..078d2512256 100644
--- a/website/docs/docs/dbt-cloud-apis/schema-discovery-modelByEnv.mdx
+++ b/website/docs/docs/dbt-cloud-apis/schema-discovery-modelByEnv.mdx
@@ -6,7 +6,7 @@ id: "discovery-schema-modelByEnv"
import { ArgsTable, SchemaTable } from "./schema";
-
+
This model by environment object allows you to query information about a particular model based on `environmentId`.
diff --git a/website/docs/docs/dbt-versions/core-versions.md b/website/docs/docs/dbt-versions/core-versions.md
index 328b6cf4166..1e3ed62a4ec 100644
--- a/website/docs/docs/dbt-versions/core-versions.md
+++ b/website/docs/docs/dbt-versions/core-versions.md
@@ -8,10 +8,10 @@ dbt Core releases follow [semantic versioning](https://semver.org/) guidelines.
dbt Labs provides different support levels for different versions, which may include new features, bug fixes, or security patches:
-
+
-
+
### Further reading
diff --git a/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md b/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md
index 6c9ffe5d60e..d143aab5ef1 100644
--- a/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md
+++ b/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md
@@ -29,11 +29,11 @@ dbt Labs has always encouraged users to upgrade dbt Core versions whenever a new
We provide different support levels for different versions, which may include new features, bug fixes, or security patches:
-
+
We'll continue to update the following release table so that users know when we plan to stop supporting different versions of Core in dbt Cloud.
-
+
Starting with v1.0, dbt Cloud will ensure that you're always using the latest compatible patch release of `dbt-core` and plugins, including all the latest fixes. You may also choose to try prereleases of those patch releases before they are generally available.
diff --git a/website/docs/docs/deploy/job-notifications.md b/website/docs/docs/deploy/job-notifications.md
index 62a5b822069..c240ca12183 100644
--- a/website/docs/docs/deploy/job-notifications.md
+++ b/website/docs/docs/deploy/job-notifications.md
@@ -22,4 +22,4 @@ There are two options for setting up email notifications. As a **user**, you can
### Slack
-
+
diff --git a/website/docs/docs/introduction.md b/website/docs/docs/introduction.md
index a28e401feb4..c4cfd6e45ac 100644
--- a/website/docs/docs/introduction.md
+++ b/website/docs/docs/introduction.md
@@ -3,7 +3,7 @@ title: "What is dbt?"
id: "introduction"
---
-
+
dbt compiles and runs your analytics code against your data platform, enabling you and your team to collaborate on a single source of truth for metrics, insights, and business definitions. This single source of truth, combined with the ability to define tests for your data, reduces errors when logic changes, and alerts you when issues arise.
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
index 5fe781ffeb6..95962610f8d 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
@@ -35,9 +35,9 @@ The dbt Semantic Layer reduces code duplication and inconsistency regarding your
## Prerequisites
To use the dbt Semantic Layer, you’ll need to meet the following:
-
+
-
+
## Public Preview
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md
index 19a5fb15057..af8de189fa9 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md
@@ -42,9 +42,9 @@ To try out the features of the dbt Semantic Layer, you first need to have a dbt
## Prerequisites
To use the dbt Semantic Layer, you’ll need to meet the following:
-
+
-
+
:::info 📌
@@ -210,7 +210,7 @@ Once you’ve defined metrics in your dbt project, you can perform a job run in
## Set up dbt Semantic Layer
-
+
## Troubleshooting
diff --git a/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
index b045725ca62..9d0c1eee752 100644
--- a/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
+++ b/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
@@ -19,15 +19,15 @@ With the dbt Semantic Layer, you'll be able to centrally define business metrics
Before you set up the dbt Semantic Layer, make sure you meet the following:
-
+
-
+
## Set up dbt Semantic Layer
-
+
diff --git a/website/docs/faqs/Accounts/slack.md b/website/docs/faqs/Accounts/slack.md
index 01001141e2e..4faa60fb09a 100644
--- a/website/docs/faqs/Accounts/slack.md
+++ b/website/docs/faqs/Accounts/slack.md
@@ -5,4 +5,4 @@ sidebar_label: 'How to set up Slack'
id: slack
---
-
+
diff --git a/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md b/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
index 5c622d69f21..aa95b999d4c 100644
--- a/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
+++ b/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
@@ -47,7 +47,7 @@ In the next step, you will need the Webhook Secret Key from the prior step, and
Zapier allows you to [store secrets](https://help.zapier.com/hc/en-us/articles/8496293271053-Save-and-retrieve-data-from-Zaps), which prevents your keys from being displayed in plaintext in the Zap code. You will be able to access them via the [StoreClient utility](https://help.zapier.com/hc/en-us/articles/8496293969549-Store-data-from-code-steps-with-StoreClient).
-
+
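A minimal sketch of retrieving those secrets inside the later "Run Python" code action; the store UID and key names are placeholders, and `StoreClient` is available in Zapier code steps without an import:

```python
# Pull secrets from Zapier storage instead of hardcoding them in the Zap.
store = StoreClient('YOUR_STORE_UID')            # placeholder secret UID
api_token = store.get('dbt_cloud_api_token')     # placeholder key for a dbt Cloud service token
webhook_secret = store.get('dbt_webhook_key')    # placeholder key for the Webhook Secret Key
```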
### 5. Add a code action
Select **Code by Zapier** as the App, and **Run Python** as the Event.
@@ -156,4 +156,4 @@ When you're happy with it, remember to ensure that your `run_id` and `account_id
## Other notes
- If you post to a chat instead of a team channel, you don't need to add the Zapier app to Microsoft Teams.
- If you post to a chat instead of a team channel, note that markdown is not supported and you will need to remove the markdown formatting.
-- If you chose the **Catch Hook** trigger instead of **Catch Raw Hook**, you will need to pass each required property from the webhook as an input instead of running `json.loads()` against the raw body. You will also need to remove the validation code.
\ No newline at end of file
+- If you chose the **Catch Hook** trigger instead of **Catch Raw Hook**, you will need to pass each required property from the webhook as an input instead of running `json.loads()` against the raw body. You will also need to remove the validation code.
diff --git a/website/docs/guides/orchestration/webhooks/zapier-new-cloud-job.md b/website/docs/guides/orchestration/webhooks/zapier-new-cloud-job.md
index 75897c30150..49b01d0db7e 100644
--- a/website/docs/guides/orchestration/webhooks/zapier-new-cloud-job.md
+++ b/website/docs/guides/orchestration/webhooks/zapier-new-cloud-job.md
@@ -36,7 +36,7 @@ In the next step, you will need the Webhook Secret Key from the prior step, and
Zapier allows you to [store secrets](https://help.zapier.com/hc/en-us/articles/8496293271053-Save-and-retrieve-data-from-Zaps), which prevents your keys from being displayed in plaintext in the Zap code. You will be able to access them via the [StoreClient utility](https://help.zapier.com/hc/en-us/articles/8496293969549-Store-data-from-code-steps-with-StoreClient).
-
+
### 4. Add a code action
Select **Code by Zapier** as the App, and **Run Python** as the Event.
@@ -88,4 +88,4 @@ return
```
### 5. Test and deploy
-When you're happy with it, remember to ensure that your `account_id` is no longer hardcoded, then publish your Zap.
\ No newline at end of file
+When you're happy with it, remember to ensure that your `account_id` is no longer hardcoded, then publish your Zap.
diff --git a/website/docs/guides/orchestration/webhooks/zapier-slack.md b/website/docs/guides/orchestration/webhooks/zapier-slack.md
index 03c3402ffe8..d3b0473502b 100644
--- a/website/docs/guides/orchestration/webhooks/zapier-slack.md
+++ b/website/docs/guides/orchestration/webhooks/zapier-slack.md
@@ -46,7 +46,7 @@ In the next step, you will need the Webhook Secret Key from the prior step, and
Zapier allows you to [store secrets](https://help.zapier.com/hc/en-us/articles/8496293271053-Save-and-retrieve-data-from-Zaps). This prevents your keys from being displayed as plaintext in the Zap code. You can access them with the [StoreClient utility](https://help.zapier.com/hc/en-us/articles/8496293969549-Store-data-from-code-steps-with-StoreClient).
-
+
### 4. Add a code action
Select **Code by Zapier** as the App, and **Run Python** as the Event.
@@ -301,4 +301,4 @@ Set the **Message Text** to **5. Threaded Errors Post** from the Run Python step
### 8. Test and deploy
-When you're done testing your Zap, publish it.
\ No newline at end of file
+When you're done testing your Zap, publish it.
diff --git a/website/docs/quickstarts/bigquery-qs.md b/website/docs/quickstarts/bigquery-qs.md
index 2ff0ad92606..84e3b3ae545 100644
--- a/website/docs/quickstarts/bigquery-qs.md
+++ b/website/docs/quickstarts/bigquery-qs.md
@@ -93,7 +93,7 @@ In order to let dbt connect to your warehouse, you'll need to generate a keyfile
## Set up a dbt Cloud managed repository
-
+
## Initialize your dbt project and start developing
@@ -187,15 +187,15 @@ Later, you can connect your business intelligence (BI) tools to these views and
## Change the way your model is materialized
-
+
## Delete the example models
-
+
## Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
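For reference, a hedged sketch of what `models/stg_customers.sql` might contain in this quickstart, assuming the public `dbt-tutorial` BigQuery project used earlier in the guide (the parallel quickstarts point at their own warehouse's source tables):

```sql
-- models/stg_customers.sql (sketch): just the customers CTE, with renamed columns.
-- The source location assumes the public dbt-tutorial BigQuery project.
select
    id as customer_id,
    first_name,
    last_name

from `dbt-tutorial`.jaffle_shop.customers
```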
@@ -292,8 +292,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
+
-
+
diff --git a/website/docs/quickstarts/databricks-qs.md b/website/docs/quickstarts/databricks-qs.md
index 706e87352ee..1222ef2a7d5 100644
--- a/website/docs/quickstarts/databricks-qs.md
+++ b/website/docs/quickstarts/databricks-qs.md
@@ -172,7 +172,7 @@ If you want to connect manually, refer to [Connect to dbt Cloud manually](https:
## Set up a dbt Cloud managed repository
If you used Partner Connect, you can skip to [initializing your dbt project](#initialize-your-dbt-project-and-start-developing) as Partner Connect provides you with a managed repository. Otherwise, you will need to create your repository connection.
-
+
## Initialize your dbt project and start developing
Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:
@@ -265,15 +265,15 @@ Later, you can connect your business intelligence (BI) tools to these views and
## Change the way your model is materialized
-
+
## Delete the example models
-
+
## Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
@@ -370,6 +370,6 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
+
-
+
diff --git a/website/docs/quickstarts/manual-install-qs.md b/website/docs/quickstarts/manual-install-qs.md
index 97eb8f8e8ea..ea3c6c7ec84 100644
--- a/website/docs/quickstarts/manual-install-qs.md
+++ b/website/docs/quickstarts/manual-install-qs.md
@@ -189,7 +189,7 @@ Check out a new git branch to work on new code:
2. Create a new SQL file in the `models` directory, named `models/customers.sql`.
3. Paste the following query into the `models/customers.sql` file.
-
+
4. From the command line, enter `dbt run`.
@@ -210,15 +210,15 @@ When you return to the BigQuery console, you can `select` from this model.
-
+
### Delete the example models
-
+
### Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
@@ -417,7 +417,7 @@ When you return to the BigQuery console, you can `select` from this model.
### Next steps
-
+
You can also explore:
@@ -428,11 +428,11 @@ You can also explore:
### Add tests to your models
-
+
### Document your models
-
+
3. Run the `dbt docs serve` command to launch the documentation in a local website.
@@ -444,7 +444,7 @@ You can also explore:
#### Next steps
-
+
### Commit updated changes
diff --git a/website/docs/quickstarts/redshift-qs.md b/website/docs/quickstarts/redshift-qs.md
index f672f16c4ed..fc7e178f163 100644
--- a/website/docs/quickstarts/redshift-qs.md
+++ b/website/docs/quickstarts/redshift-qs.md
@@ -188,7 +188,7 @@ Now we are going to load our sample data into the S3 bucket that our Cloudformat
7. Click **Next** if the test succeeded. If it failed, you might need to check your Redshift settings and credentials.
## Set up a dbt Cloud managed repository
-
+
## Initialize your dbt project and start developing
Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:
@@ -281,15 +281,15 @@ Later, you can connect your business intelligence (BI) tools to these views and
## Change the way your model is materialized
-
+
## Delete the example models
-
+
## Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
@@ -385,7 +385,7 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
+
-
+
diff --git a/website/docs/quickstarts/snowflake-qs.md b/website/docs/quickstarts/snowflake-qs.md
index 9fa30439563..bf318a597a4 100644
--- a/website/docs/quickstarts/snowflake-qs.md
+++ b/website/docs/quickstarts/snowflake-qs.md
@@ -185,7 +185,7 @@ Using Partner Connect allows you to create a complete dbt account with your [Sno
4. Enter your **Settings** for Snowflake with:
* **Account** — Find your account by using the Snowflake trial account URL and removing `snowflakecomputing.com`. The order of your account information will vary by Snowflake version. For example, Snowflake's Classic console URL might look like: `oq65696.west-us-2.azure.snowflakecomputing.com`. The AppUI or Snowsight URL might look more like: `snowflakecomputing.com/west-us-2.azure/oq65696`. In both examples, your account will be: `oq65696.west-us-2.azure`. For more information, see [Account Identifiers](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html) in the Snowflake docs.
-
+
* **Role** — Leave blank for now. You can update this to a default Snowflake role later.
* **Database** — `analytics`. This tells dbt to create new models in the analytics database.
@@ -211,7 +211,7 @@ Using Partner Connect allows you to create a complete dbt account with your [Sno
## Set up a dbt Cloud managed repository
If you used Partner Connect, you can skip to [initializing your dbt project](#initialize-your-dbt-project-and-start-developing) as Partner Connect provides you with a managed repository. Otherwise, you will need to create your repository connection.
-
+
## Initialize your dbt project and start developing
Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:
@@ -295,15 +295,15 @@ Later, you can connect your business intelligence (BI) tools to these views and
## Change the way your model is materialized
-
+
## Delete the example models
-
+
## Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
@@ -466,6 +466,6 @@ Sources make it possible to name and describe the data loaded into your warehous
test and document your raw data and also understand the lineage of your sources.
-
+
-
+
diff --git a/website/docs/quickstarts/starburst-galaxy-qs.md b/website/docs/quickstarts/starburst-galaxy-qs.md
index 783a91c47da..d9bd3b98a43 100644
--- a/website/docs/quickstarts/starburst-galaxy-qs.md
+++ b/website/docs/quickstarts/starburst-galaxy-qs.md
@@ -214,7 +214,7 @@ To query the Jaffle Shop data with Starburst Galaxy, you need to create tables u
12. Click **Next** if the test succeeded. If it failed, you might need to check your Starburst Galaxy settings and credentials.
## Set up a dbt Cloud managed repository
-
+
## Initialize your dbt project and start developing
Now that you have a repository configured, you can initialize your project and start development in dbt Cloud:
@@ -301,15 +301,15 @@ Later, you can connect your business intelligence (BI) tools to these views and
## Change the way your model is materialized
-
+
## Delete the example models
-
+
## Build models on top of other models
-
+
1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query.
2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query.
@@ -405,9 +405,9 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
+
-
+
## Connect to multiple data sources
diff --git a/website/docs/reference/dbt-jinja-functions/this.md b/website/docs/reference/dbt-jinja-functions/this.md
index 0e6208efb3c..9065c660cb0 100644
--- a/website/docs/reference/dbt-jinja-functions/this.md
+++ b/website/docs/reference/dbt-jinja-functions/this.md
@@ -15,7 +15,7 @@ description: "Represents the current model in the database."
## Examples
-
+
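The example content was elided from this diff; as a minimal sketch, the canonical use of `{{ this }}` is an incremental model that filters to rows newer than those already in the target table (the source and column names are illustrative):

```sql
-- Sketch of an incremental model using {{ this }} to reference its own table.
{{ config(materialized='incremental') }}

select * from {{ ref('app_data_events') }}

{% if is_incremental() %}
-- Only process rows newer than the current max in this model's table
where event_time > (select max(event_time) from {{ this }})
{% endif %}
```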
diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md
index 2e983f9d08f..87772262514 100644
--- a/website/docs/reference/node-selection/syntax.md
+++ b/website/docs/reference/node-selection/syntax.md
@@ -78,7 +78,7 @@ As your selection logic gets more complex and becomes unwieldy to type out as
consider using a [yaml selector](/reference/node-selection/yaml-selectors). You can use a predefined definition with the `--selector` flag.
Note that when you're using `--selector`, most other flags (namely `--select` and `--exclude`) will be ignored.
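As a minimal sketch of what such a definition looks like (the selector name and criteria are illustrative), a `selectors.yml` might contain:

```yaml
# selectors.yml (sketch) -- invoke with: dbt run --selector nightly_models
selectors:
  - name: nightly_models
    description: "Models included in the nightly run"
    definition:
      union:
        - method: tag
          value: nightly
        - method: fqn
          value: staging.stg_orders
```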
-
+
## About node selection
diff --git a/website/docs/reference/project-configs/on-run-start-on-run-end.md b/website/docs/reference/project-configs/on-run-start-on-run-end.md
index 43004795be9..2c5cde4c0c2 100644
--- a/website/docs/reference/project-configs/on-run-start-on-run-end.md
+++ b/website/docs/reference/project-configs/on-run-start-on-run-end.md
@@ -31,7 +31,7 @@ A SQL statement (or list of SQL statements) to be run at the start, or end, of t
## Examples
-
+
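The worked examples were elided from this diff; as a hedged sketch, a project-level configuration might look like this (the statements and role name are illustrative):

```yaml
# dbt_project.yml (sketch)
on-run-start:
  - "create schema if not exists {{ target.schema }}"

on-run-end:
  - "grant usage on schema {{ target.schema }} to role reporter"
```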
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index fcebb0befdd..b3c05c1d827 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -728,7 +728,7 @@ view, dbt will grant the view model access to select from the list of datasets
provided. See [BQ docs on authorized views](https://cloud.google.com/bigquery/docs/share-access-views)
for more details.
-
+
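As a hedged sketch of the config described above (the project and dataset names are placeholders), a model can opt in like this:

```sql
-- models/my_authorized_view.sql (sketch)
{{ config(
    materialized = 'view',
    grant_access_to = [
        {'project': 'project_1', 'dataset': 'dataset_1'}
    ]
) }}

select * from {{ ref('upstream_model') }}
```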
@@ -803,4 +803,4 @@ models:
path:
materialized: materialized_view
```
-
\ No newline at end of file
+
diff --git a/website/docs/reference/resource-configs/grants.md b/website/docs/reference/resource-configs/grants.md
index f8a728abdc8..8ef726788dc 100644
--- a/website/docs/reference/resource-configs/grants.md
+++ b/website/docs/reference/resource-configs/grants.md
@@ -211,7 +211,7 @@ We encourage you to read Google's documentation for more context:
- [Understanding GCP roles](https://cloud.google.com/iam/docs/understanding-roles)
- [How to format grantees](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-control-language#user_list)
-
+
### BigQuery examples
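The examples themselves were elided from this diff; as a hedged sketch, BigQuery expresses privileges as IAM roles and grantees with a principal prefix (the model name and email are placeholders):

```yaml
models:
  - name: specific_model
    config:
      grants:
        roles/bigquery.dataViewer: ['user:someone@example.com']
```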
diff --git a/website/docs/reference/resource-configs/pre-hook-post-hook.md b/website/docs/reference/resource-configs/pre-hook-post-hook.md
index 4f5071e7aef..1660c50049b 100644
--- a/website/docs/reference/resource-configs/pre-hook-post-hook.md
+++ b/website/docs/reference/resource-configs/pre-hook-post-hook.md
@@ -16,7 +16,7 @@ datatype: sql-statement | [sql-statement]
-
+
@@ -51,7 +51,7 @@ select ...
-
+
@@ -70,7 +70,7 @@ seeds:
-
+
@@ -124,7 +124,7 @@ dbt aims to provide all the boilerplate SQL you need (DDL, DML, and DCL) via out
## Examples
-
+
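The examples were elided from this diff; as a minimal sketch, a project-level hook configuration might look like this (the schema and role names are illustrative):

```yaml
# dbt_project.yml (sketch) -- run a grant after each model in the project builds
models:
  jaffle_shop:
    +post-hook:
      - "grant select on {{ this }} to role reporter"
```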
diff --git a/website/docs/reference/resource-configs/spark-configs.md b/website/docs/reference/resource-configs/spark-configs.md
index 7ef25f70a3f..95a853107f6 100644
--- a/website/docs/reference/resource-configs/spark-configs.md
+++ b/website/docs/reference/resource-configs/spark-configs.md
@@ -9,7 +9,7 @@ To-do:
- use the reference doc structure for this article/split into separate articles
--->
-
+
:::note
See [Databricks configuration](#databricks-configs) for the Databricks version of this page.
diff --git a/website/snippets/quickstarts/test-and-document-your-project.md b/website/snippets/quickstarts/test-and-document-your-project.md
index 932f58712af..e8df2d6a1de 100644
--- a/website/snippets/quickstarts/test-and-document-your-project.md
+++ b/website/snippets/quickstarts/test-and-document-your-project.md
@@ -1,10 +1,10 @@
## Add tests to your models
-
+
## Document your models
-
+
3. Click the book icon in the Develop interface to launch documentation in a new tab.
diff --git a/website/snippets/tutorial-managed-repo.md b/website/snippets/tutorial-managed-repo.md
index eb78f835fc0..78221bbb957 100644
--- a/website/snippets/tutorial-managed-repo.md
+++ b/website/snippets/tutorial-managed-repo.md
@@ -1,8 +1,8 @@
-
+
To set up a managed repository:
1. Under "Setup a repository", select **Managed**.
2. Type a name for your repo such as `bbaggins-dbt-quickstart`
3. Click **Create**. It will take a few seconds for your repository to be created and imported.
-4. Once you see the "Successfully imported repository," click **Continue**.
\ No newline at end of file
+4. Once you see the "Successfully imported repository," click **Continue**.
diff --git a/website/src/components/snippet/index.js b/website/src/components/snippet/index.js
index a4011385458..daf674c3d14 100644
--- a/website/src/components/snippet/index.js
+++ b/website/src/components/snippet/index.js
@@ -7,8 +7,8 @@ import clsx from 'clsx';
Pass the filename of a snippet within the snippets directory
as a prop to use throughout the docs.
*/}
-export default function Snippet({ src }) {
- const file = require('../../../snippets/' + src + '.md')
+export default function Snippet({ path }) {
+ const file = require('../../../snippets/' + path + '.md')
const contents = file.default({});
return (
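For reference, a sketch of how usage in an MDX page changes with this prop rename (the snippet name shown is one that exists in this patch series):

```jsx
{/* Before this change: <Snippet src="tutorial-managed-repo" /> */}
<Snippet path="tutorial-managed-repo" />
```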
From 738ff7a6b1a51fc1dbec92b83a67bf219334406f Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 5 Jul 2023 14:35:00 -0400
Subject: [PATCH 015/703] trigger build
From 48d5744f297747beb6d1cdb02440335cc6503b00 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 10 Jul 2023 21:17:13 +0100
Subject: [PATCH 016/703] Update sidebars.js
new sidebar
---
website/sidebars.js | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/website/sidebars.js b/website/sidebars.js
index 3198d95e0f3..46db14dd6d4 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -404,13 +404,21 @@ const sidebarSettings = {
{
type: "category",
label: "Use the dbt Semantic Layer",
- collapsed: true,
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/dbt-sl" },
items: [
- "docs/use-dbt-semantic-layer/quickstart-semantic-layer",
- "docs/use-dbt-semantic-layer/dbt-semantic-layer",
- "docs/use-dbt-semantic-layer/setup-dbt-semantic-layer",
+ "docs/use-dbt-semantic-layer/quickstart-sl",
+ "docs/use-dbt-semantic-layer/setup-dbt-sl",
"docs/use-dbt-semantic-layer/avail-sl-integrations",
- ],
+ {
+ type: "category",
+ label: "Semantic Layer APIs",
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/sl-api-overview" },
+ items: [
+ "docs/use-dbt-semantic-layer/jdbc",
+ "docs/use-dbt-semantic-layer/graphql",
+ ],
+ },
+ ],
},
{
type: "category",
From 33e15d87ca11c8536ef6b503e8ceab032285d8a6 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 10 Jul 2023 21:20:11 +0100
Subject: [PATCH 017/703] Update metrics.md
---
website/docs/docs/build/metrics.md | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/website/docs/docs/build/metrics.md b/website/docs/docs/build/metrics.md
index c43e2a86915..4e870dc49a9 100644
--- a/website/docs/docs/build/metrics.md
+++ b/website/docs/docs/build/metrics.md
@@ -8,21 +8,17 @@ keywords:
-:::info dbt Metrics isn't supported
+:::info dbt metrics no longer supported
-dbt Metrics is no longer supported in v1.6 and higher. To build your semantic layer, define and query metrics, and provide data governance - refer to [Build your Semantic Layer](/docs/build/build-metrics-intro) for updated guidance.
+dbt_metrics is no longer supported in v1.6 and higher, and has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics). [MetricFlow](/docs/build/about-metricflow?version=1.6), a key component of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), is a new framework for defining metrics in dbt and has replaced dbt_metrics.
-:::
-
-
-
+You can still use the dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older; however, these versions are no longer supported and don't receive any code fixes.
-:::info dbt Metrics not recommended
-
-dbt Metrics won't be supported in v1.6 and higher, and is being replaced with MetricFlow. [Defining metrics](/docs/build/build-semantic-layer-intro) with MetricFlow will help shape the future of the dbt Semantic Layer — let us know [your thoughts and join the convo](https://github.com/dbt-labs/dbt-core/discussions/7456) to help build it!
+To build your metrics and semantic layer, we **highly** recommend you [upgrade to dbt v1.6 and higher](link to migration guide) to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
:::
+
From aaca11edb6b75bf4d2127cdf89b8ae89c73bc32f Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 11 Jul 2023 11:55:01 +0100
Subject: [PATCH 018/703] mirnas changes
---
website/dbt-versions.js | 12 +
website/docs/docs/build/metrics.md | 16 +-
.../docs/docs/dbt-cloud-apis/apis-overview.md | 3 +-
.../avail-sl-integrations.md | 46 +-
.../dbt-semantic-layer.md | 146 ---
.../docs/use-dbt-semantic-layer/dbt-sl.md | 295 ++++++
.../docs/use-dbt-semantic-layer/graphql.md | 24 +
.../docs/docs/use-dbt-semantic-layer/jdbc.md | 22 +
...art-semantic-layer.md => quickstart-sl.md} | 199 +++-
.../set-dbt-semantic-layer.md | 40 -
.../docs/use-dbt-semantic-layer/setup-sl.md | 107 +++
.../use-dbt-semantic-layer/sl-api-overview.md | 101 ++
.../use-dbt-semantic-layer/sl-manifest.md | 878 ++++++++++++++++++
.../docs/reference/artifacts/dbt-artifacts.md | 6 +-
website/package-lock.json | 448 ++++++++-
website/package.json | 1 +
website/sidebars.js | 11 +-
website/snippets/_legacy-sl-callout.md | 13 +
website/snippets/_v2-sl-prerequisites.md | 52 ++
website/static/_redirects | 4 +
20 files changed, 2178 insertions(+), 246 deletions(-)
delete mode 100644 website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/graphql.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/jdbc.md
rename website/docs/docs/use-dbt-semantic-layer/{quickstart-semantic-layer.md => quickstart-sl.md} (57%)
delete mode 100644 website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/setup-sl.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
create mode 100644 website/docs/docs/use-dbt-semantic-layer/sl-manifest.md
create mode 100644 website/snippets/_legacy-sl-callout.md
create mode 100644 website/snippets/_v2-sl-prerequisites.md
diff --git a/website/dbt-versions.js b/website/dbt-versions.js
index 52b68d6c084..f2834c921b6 100644
--- a/website/dbt-versions.js
+++ b/website/dbt-versions.js
@@ -175,6 +175,18 @@ exports.versionedPages = [
"page": "docs/build/ratio",
"firstVersion": "1.6",
},
+ {
+ "category": "docs/use-dbt-semantic-layer/sl-api-overview",
+ "firstVersion": "1.6",
+ },
+ {
+ "category": "docs/use-dbt-semantic-layer/graphql",
+ "firstVersion": "1.6",
+ },
+ {
+ "category": "docs/use-dbt-semantic-layer/jdbc",
+ "firstVersion": "1.6",
+ },
]
exports.versionedCategories = [
diff --git a/website/docs/docs/build/metrics.md b/website/docs/docs/build/metrics.md
index 4e870dc49a9..4492d2f5770 100644
--- a/website/docs/docs/build/metrics.md
+++ b/website/docs/docs/build/metrics.md
@@ -6,20 +6,16 @@ keywords:
- dbt metrics layer
---
-
-
:::info dbt metrics no longer supported
dbt_metrics is no longer supported in v1.6 and higher, and has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics). [MetricFlow](/docs/build/about-metricflow?version=1.6), a key component of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), is a new framework for defining metrics in dbt and has replaced dbt_metrics.
You can still use the dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older; however, these versions are no longer supported and don't receive any code fixes.
-To build your metrics and semantic layer, we **highly** recommend you [upgrade to dbt v1.6 and higher](link to migration guide) to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
+To build your metrics and semantic layer, we **highly** recommend you [upgrade to dbt v1.6 and higher](/guides/migration/versions/upgrading-to-v1.6) to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
:::
-
-
* **v1.3.0**: Metrics have been moved out of the experimental phase
@@ -27,6 +23,7 @@ To build your metrics and semantic layer, we **highly** recommend you [upgrade t
+
A metric is an aggregation over a that supports zero or more dimensions. Some examples of metrics include:
- active users
@@ -214,9 +211,12 @@ Metrics can have many declared **properties**, which define aspects of your metr
### Available calculation methods
+
The method of calculation (aggregation or derived) that is applied to the expression.
+
+
The type of calculation (aggregation or expression) that is applied to the sql property.
@@ -537,6 +537,7 @@ The period to date secondary calculation performs an aggregation on a defined pe
#### Rolling:
+
The rolling secondary calculation performs an aggregation on a number of rows in the metric dataset. For example, if the user selects the `week` grain and sets a rolling secondary calculation to `4`, then the value returned will be a rolling 4-week calculation of whatever aggregation type was selected. If the `interval` input is not provided, then the rolling calculation will be unbounded on all preceding rows.
| Input | Example | Description | Required |
@@ -548,6 +549,7 @@ The rolling secondary calculation performs an aggregation on a number of rows in
+
The rolling secondary calculation performs an aggregation on a number of rows in the metric dataset. For example, if the user selects the `week` grain and sets a rolling secondary calculation to `4`, then the value returned will be a rolling 4-week calculation of whatever aggregation type was selected.
| Input | Example | Description | Required |
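The rows of this table were elided from the diff; for context, here is a hedged sketch of a rolling secondary calculation using the (now deprecated) dbt_metrics macros, with an illustrative metric name and aggregate:

```sql
-- Sketch: a rolling 4-week sum computed at query time via dbt_metrics.
select *
from {{ metrics.calculate(
    metric('weekly_active_users'),
    grain = 'week',
    secondary_calculations = [
        metrics.rolling(aggregate = "sum", interval = 4)
    ]
) }}
```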
@@ -708,7 +710,7 @@ The above example will return a dataset that contains the metric provided in the
**Important caveat** - You _must_ wrap the `expression` property for `derived` metrics in double quotes to render it. For example, `expression: "{{ metric('develop_metric') }} - 1 "`.
-
-
+
+
diff --git a/website/docs/docs/dbt-cloud-apis/apis-overview.md b/website/docs/docs/dbt-cloud-apis/apis-overview.md
index 9f7c22a7580..8323e3423da 100644
--- a/website/docs/docs/dbt-cloud-apis/apis-overview.md
+++ b/website/docs/docs/dbt-cloud-apis/apis-overview.md
@@ -8,10 +8,11 @@ id: "overview"
Accounts on the _Team_ and _Enterprise_ plans can query the dbt Cloud APIs.
-dbt Cloud provides two APIs:
+dbt Cloud provides the following APIs:
- The [dbt Cloud Administrative API](/docs/dbt-cloud-apis/admin-cloud-api) can be used to administrate a dbt Cloud account.
- The [dbt Cloud Discovery API](/docs/dbt-cloud-apis/discovery-api) can be used to fetch metadata related to the state and health of your dbt project.
+- The [dbt Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) provides multiple API options which allow you to query your metrics defined in the dbt Semantic Layer.
If you want to learn more about webhooks, refer to [Webhooks for your jobs](/docs/deploy/webhooks).
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
index 8c004d865bb..b3f10309999 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
@@ -5,14 +5,44 @@ description: "Review a wide range of partners you can integrate and query with t
sidebar_label: "Available integrations"
---
-:::info Coming soon
-The dbt Semantic Layer is undergoing a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
-**What’s changing?** The dbt_metrics package will be [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new way framework for defining metrics in dbt.
+:::info Revamped dbt Semantic Layer
+
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
-**What's new?** Learn how to [Build your metrics](/docs/build/build-metrics-intro?version=1.6) using MetricFlow, one of the key components that makes up the revamped dbt Semantic Layer. It handles SQL query construction and defines the specification for dbt semantic models and metrics.
:::
+Query dbt metrics via external integrations using the [dbt Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview) (Discovery, JDBC, and GraphQL APIs). With these APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+
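As a hedged sketch of what querying metric definitions programmatically can look like, the following calls the Discovery API's GraphQL endpoint with a service token. The job ID, field names, and query shape are illustrative assumptions; check the Discovery API docs for the current schema:

```python
# Sketch only: fetch metric definitions from the dbt Cloud Discovery API.
import requests

DISCOVERY_URL = "https://metadata.cloud.getdbt.com/graphql"
QUERY = "{ metrics(jobId: 12345) { name description } }"  # illustrative

response = requests.post(
    DISCOVERY_URL,
    json={"query": QUERY},
    headers={"Authorization": "Bearer <YOUR_SERVICE_TOKEN>"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```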
+A wide variety of data applications across the modern data stack natively integrate with the dbt Semantic Layer, powered by MetricFlow — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
+
+The dbt Semantic Layer integrations are capable of querying dbt metrics, importing definitions, surfacing the underlying data in partner tools, and more. Here are some of the integrations you can query from:
+
+- Tableau —
+- Google Sheets —
+- Hex —
+- Mode —
+
+For information on the partner integrations, their documentation, and more — refer to the [dbt Semantic Layer integrations](https://www.getdbt.com/product/semantic-layer-integrations) page.
+
+
+
+
+## Related docs
+
+- [dbt Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) to learn how to integrate with the Discovery, JDBC, and GraphQL APIs to query your metrics in downstream tools.
+
+
+
+
+
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
+
+
+
A wide variety of data applications across the modern data stack natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, data catalogs, and more.
The dbt Semantic Layer integrations are capable of querying dbt metrics, importing definitions, surfacing the underlying data in partner tools, and leveraging the dbt Server.
@@ -24,8 +54,6 @@ For information on the partner integrations, their documentation, and more &mdas
## Related docs
-- [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-semantic-layer) to learn more about the dbt Semantic Layer.
-- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) for more information on plan availability.
-- [Public Preview information](/docs/use-dbt-semantic-layer/quickstart-semantic-layer#public-preview) to understand what Public Preview for the dbt Semantic Layer means.
-- [dbt Semantic Layer partner integration guide](/guides/dbt-ecosystem/sl-partner-integration-guide) for information about partner integration guidelines, product roadmap, and API connectivity.
-- [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) to understand best practices for designing and structuring metrics in your dbt project.
+- [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) to learn more about the dbt Semantic Layer.
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
deleted file mode 100644
index 5fe781ffeb6..00000000000
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-semantic-layer.md
+++ /dev/null
@@ -1,146 +0,0 @@
----
-title: "dbt Semantic Layer"
-id: dbt-semantic-layer
-description: "Introducing the dbt Semantic Layer"
-sidebar_label: "dbt Semantic Layer"
----
-
-:::info Coming soon
-The dbt Semantic Layer is undergoing a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-
-**What’s changing?** The dbt_metrics package will be [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new way framework for defining metrics in dbt.
-
-**What's new?** Learn how to [Build your metrics](/docs/build/build-metrics-intro?version=1.6) using MetricFlow, one of the key components that makes up the revamped dbt Semantic Layer. It handles SQL query construction and defines the specification for dbt semantic models and metrics.
-:::
-
-The dbt Semantic Layer allows data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
-
-The result? You have less duplicate coding for data teams and more consistency for data consumers.
-
-The dbt Semantic Layer has four main parts:
-
-- Define your metrics in version-controlled dbt project code using MetricFlow
-- Import your metric definitions via the [Discovery API](/docs/dbt-cloud-apis/discovery-api)
-- Query your metric data via the dbt Proxy Server
-- Explore and analyze dbt metrics in downstream tools
-
-
-
-
-### What makes the dbt Semantic Layer different?
-
-The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications. You can also use the dbt Semantic Layer to query models and use macros.
-
-
-## Prerequisites
-To use the dbt Semantic Layer, you’ll need to meet the following:
-
-
-
-
-
-## Public Preview
-
-The dbt Semantic Layer is currently available for Public Preview, which means:
-
-— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during Public Preview. Review [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) for more info on plan availability.
-
-- Team and Enterprise accounts will be able to set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated
-partner tool to import metric definition.
-- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse dbt metrics in external tools, which requires access to the Discovery API.
-
-— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backwards compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the General Available (GA) release (future GA date to be announced).
-
-— **When?** Public Preview will end once the dbt Semantic Layer is available for GA. After GA, the dbt Semantic Layer will only be available to dbt Cloud **Team** and **Enterprise** plans.
-
-— **Where?** Public Preview is enabled at the account level so you don’t need to worry about enabling it per user.
-
-## Product architecture
-
-The dbt Semantic Layer product architecture includes four primary components:
-
-| Components | Information | Developer plans | Team plans | Enterprise plans | License |
-| --- | --- | :---: | :---: | :---: | --- |
-| **[dbt project](/docs/build/metrics)** | Define models and metrics in dbt Core. | ✅ | ✅ | ✅ | Open source, Core |
-| **[dbt Server](https://github.com/dbt-labs/dbt-server)**| A persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations. | ✅ | ✅ | ✅ | BSL |
-| **SQL Proxy** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja like query models and metrics, use macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅
_* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-| **[Discovery API](/docs/dbt-cloud-apis/discovery-api)** | Accesses metric definitions primarily via integrations and is the source of truth for objects defined in dbt projects (like models, macros, sources, metrics). The Discovery API is updated at the end of every dbt Cloud run. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise |
-
-
-
-dbt Semantic Layer integrations will:
-
-- Leverage the Discovery API to fetch a list of objects and their attributes, like metrics
-- Generate a dbt-SQL statement
-- Then query the SQL proxy to evaluate the results of this statement
-
-
-## Manage metrics
-
-:::info 📌
-
-New to dbt or metrics? Check out our [quickstart guide](/quickstarts) to build your first dbt project! If you'd like to define your first metrics, try our [Jaffle Shop](https://github.com/dbt-labs/jaffle_shop_metrics) example project.
-
-:::
-
-If you're not sure whether to define a metric in dbt or not, ask yourself the following:
-
-> *Is this something our teams consistently need to report on?*
-
-An important business metric should be:
-
-- Well-defined (the definition is agreed upon throughout the entire organization)
-- Time-bound (able to be compared across time)
-
-A great example of this is **revenue** — it can be aggregated on multiple levels (weekly, monthly, etc) and is key for the broader business to understand.
-
-- ✅ `Monthly recurring revenue` or `Weekly active users` or `Average order value`
-- ❌ `1-off experimental metric`
-
-
-### Design and define metrics
-
-**Design metrics**
-To read about best practices on structuring and organizing your metrics, review our [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog post first.
-
-**Define metrics**
-You can define your metrics in `.yml` files nested under a metrics key and to design or define your own metrics in your dbt project, review the following documents:
-
-- [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog to understand best practices for designing and structuring metrics in your dbt project
-- [dbt metrics](docs/build/metrics) for in-depth detail on attributes, filters, how to define and query your metrics and [dbt-metrics package](https://github.com/dbt-labs/dbt_metrics)
-- [dbt Semantic Layer quickstart](/docs/use-dbt-semantic-layer/quickstart-semantic-layer) to get started
-- [Understanding the components of the dbt Semantic Layer](https://docs.getdbt.com/blog/understanding-the-components-of-the-dbt-semantic-layer) blog post to see further examples
-
-Review our helpful metrics video below, which explains what metrics are, why they're important and how you can get started:
-
-
-
-## Related questions
-
-
- How are you storing my data?
-
- The dbt Semantic Layer does not store, or cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it is never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers, to your connecting data tool.
-
-
-
- Is the dbt Semantic Layer open source?
-
- Some components of the dbt Semantic Layer are open source like dbt-core, the dbt_metrics package, and the BSL licensed dbt-server. The dbt Proxy Server (what is actually compiling the dbt code) and the Discovery API are not open source.
-
-During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
-
-
-- dbt Core users can define metrics in their dbt Core projects and calculate them using macros from the metrics package. To use the dbt Semantic Layer integrations, users will need to have a dbt Cloud account.
-- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API.
-- Team and Enterprise accounts will be able to set up the Semantic Layer and Discovery API in the integrated partner tool to import metric definition.
-
-
-
- Is there a dbt Semantic Layer discussion hub?
-
- Yes absolutely! Join the dbt Slack community and #dbt-cloud-semantic-layer slack channel for all things related to the dbt Semantic Layer.
-
-
-
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
new file mode 100644
index 00000000000..6901c5b1726
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -0,0 +1,295 @@
+---
+title: "dbt Semantic Layer"
+id: dbt-sl
+description: "Introducing the improved dbt Semantic Layer, which allows data teams to centrally define and query metrics."
+sidebar_label: "About the dbt Semantic Layer"
+---
+
+
+
+:::info Revamped dbt Semantic Layer
+
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
+
+:::
+
+The dbt Semantic Layer, powered by MetricFlow, allows data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
+
+The result? You have less duplicate coding for data teams and more consistency for data consumers.
+
+The dbt Semantic Layer has four main parts:
+
+- Define your metrics in version-controlled dbt project code using MetricFlow
+- Import your metric definitions via the [Discovery API](/docs/dbt-cloud-apis/discovery-api)
+- Query your metric data using the [Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) (JDBC driver, GraphQL API, and the Discovery API)
+- Explore and analyze your metrics in downstream tools
+
+ADD ARCHITECTURE IMAGE
+
+### What makes the dbt Semantic Layer different?
+
+The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications. You can also use the dbt Semantic Layer to query models and use macros.
+
+
+## Prerequisites
+
+To use the dbt Semantic Layer, you’ll need to meet the following:
+
+import SetUp from '/snippets/_v2-sl-prerequisites.md';
+
+
+
+
+
+## Public Preview
+
+The dbt Semantic Layer is currently available for Public Preview, which means:
+
+— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during Public Preview. Review [Product architecture](#product-architecture) for more info on plan availability.
+
+- Team and Enterprise accounts will be able to set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated
+partner tool to import metric definitions.
+- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse dbt metrics in external tools, which requires access to the Discovery API.
+
+— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backward compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the Generally Available (GA) release (future GA date to be announced).
+
+— **When?** Public Preview will end once the dbt Semantic Layer is available for GA. After GA, the dbt Semantic Layer will only be available to dbt Cloud **Team** and **Enterprise** plans.
+
+— **Where?** Public Preview is enabled at the account level so you don’t need to worry about enabling it per user.
+
+## Product architecture
+
+The dbt Semantic Layer product architecture includes four primary components:
+
+| Components | Information | Developer plans | Team plans | Enterprise plans | License |
+| --- | --- | :---: | :---: | :---: | --- |
+| **[MetricFlow](/docs/build/about-metricflow)** | Use MetricFlow in dbt to centrally define your metrics. MetricFlow is a key component of the dbt Semantic Layer and is responsible for SQL query construction and defining specifications for dbt semantic models and metrics. | ✅ | ✅ | ✅ | BSL package (code is source available) |
+| **[dbt Server](https://github.com/dbt-labs/dbt-server)**| A persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations. | ✅ | ✅ | ✅ | BSL |
+| **SQL Proxy** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja, such as querying models and metrics or using macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅ _* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+| **[Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)** | Use the Semantic Layer APIs, which include the Discovery, JDBC, and GraphQL APIs, to access metric definitions using integrations. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+
+ADD COMPONENT IMAGE
+
+dbt Semantic Layer integrations will:
+
+- Leverage the Semantic Layer APIs to fetch a list of objects and their attributes, like metrics
+- Generate a dbt-SQL statement
+- Then query the SQL proxy to evaluate the results of this statement
+
+
+## Manage metrics
+
+:::tip
+New to dbt or metrics? Try our [Jaffle shop example project](https://github.com/dbt-labs/jaffle-sl-template) to help you get started!
+:::
+
+If you're not sure whether to define a metric in dbt or not, ask yourself the following:
+
+> *Is this something our teams consistently need to report on?*
+
+An important business metric should be:
+
+- Well-defined (the definition is agreed upon throughout the entire organization)
+- Time-bound (able to be compared across time)
+
+A great example of this is **revenue** — it can be aggregated on multiple levels (weekly, monthly, etc.) and is key for the broader business to understand.
+
+- ✅ `Monthly recurring revenue` or `Weekly active users` or `Average order value`
+- ❌ `1-off experimental metric`
+
+
+### Build metrics
+
+Use [MetricFlow](/docs/build/build-metrics-intro) in dbt to centrally build and define your metrics. MetricFlow, a key component of the dbt Semantic Layer, handles SQL query construction and defines the specification for dbt semantic models and metrics.
+
+With MetricFlow, you can use familiar constructs like semantic models and metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+
+You can define your metrics in `.yml` files nested under a semantic model key to build your own metrics in your dbt project.
+
+To fully experience the dbt Semantic Layer, including the ability to query dbt metrics via external integrations, you'll need a dbt Cloud Team or Enterprise account.
+
+To begin building and defining metrics, refer to the detailed [MetricFlow](/docs/build/build-metrics-intro) docs for more info.
+
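As a minimal sketch of what such a definition looks like (the names and the underlying measure are illustrative), a simple MetricFlow metric might be:

```yaml
# Sketch: a simple metric built on a measure defined in a semantic model.
metrics:
  - name: order_total
    label: Order Total
    description: Sum of all order values
    type: simple
    type_params:
      measure: order_total
```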
+**Design metrics**
+To read about best practices on structuring and organizing your metrics, review our [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog post first.
+
+## Related questions
+
+
+ How do I migrate from the old Semantic Layer to the new one?
+
+ ADD INFO HERE AND MIGRATION SCRIPT.
+
+
+
+
+ How are you storing my data?
+
+ The dbt Semantic Layer does not store, cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it is never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers to your connecting data tool.
+
+
+
+ Is the dbt Semantic Layer open source?
+
+ Some components of the dbt Semantic Layer are open source like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (what is actually compiling the dbt code) and the Discovery API are not open source.
+
+During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
+
+
+- dbt Core users can define metrics in their dbt Core projects and calculate them using macros from the metrics package. To use the dbt Semantic Layer integrations, users will need to have a dbt Cloud account.
+- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API.
+- Team and Enterprise accounts will be able to set up the Semantic Layer and Discovery API in the integrated partner tool to import metric definitions.
+
+
+
+ Is there a dbt Semantic Layer discussion hub?
+
+ Yes, absolutely! Join the dbt Slack community and the #dbt-cloud-semantic-layer Slack channel for all things related to the dbt Semantic Layer.
+
+
+
+
+
+
+
+
+
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
+
+
+
+The dbt Semantic Layer allows data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
+
+The result? You have less duplicate coding for data teams and more consistency for data consumers.
+
+The dbt Semantic Layer has four main parts:
+
+- Define your metrics in version-controlled dbt project code using MetricFlow
+ * dbt_metrics is now deprecated
+- Import your metric definitions via the [Discovery API](/docs/dbt-cloud-apis/discovery-api)
+- Query your metric data via the dbt Proxy Server
+- Explore and analyze dbt metrics in downstream tools
+
+
+
+
+### What makes the dbt Semantic Layer different?
+
+The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications. You can also use the dbt Semantic Layer to query models and use macros.
+
+
+## Prerequisites
+To use the dbt Semantic Layer, you’ll need to meet the following:
+
+
+
+
+
+## Public Preview
+
+The dbt Semantic Layer is currently available for Public Preview, which means:
+
+— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during Public Preview. Review [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) for more info on plan availability.
+
+- Team and Enterprise accounts will be able to set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated
+partner tool to import metric definitions.
+- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse dbt metrics in external tools, which requires access to the Discovery API.
+
+— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backwards compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the Generally Available (GA) release (future GA date to be announced).
+
+— **When?** Public Preview will end once the dbt Semantic Layer is available for GA. After GA, the dbt Semantic Layer will only be available to dbt Cloud **Team** and **Enterprise** plans.
+
+— **Where?** Public Preview is enabled at the account level so you don’t need to worry about enabling it per user.
+
+## Product architecture
+
+The dbt Semantic Layer product architecture includes four primary components:
+
+| Components | Information | Developer plans | Team plans | Enterprise plans | License |
+| --- | --- | :---: | :---: | :---: | --- |
+| **[dbt project](/docs/build/metrics)** | Define models and metrics in dbt Core. | ✅ | ✅ | ✅ | Open source, Core |
+| **[dbt Server](https://github.com/dbt-labs/dbt-server)**| A persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations. | ✅ | ✅ | ✅ | BSL |
+| **SQL Proxy** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja, such as querying models and metrics or using macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅ _* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+| **[Discovery API](/docs/dbt-cloud-apis/discovery-api)** | Accesses metric definitions primarily via integrations and is the source of truth for objects defined in dbt projects (like models, macros, sources, metrics). The Discovery API is updated at the end of every dbt Cloud run. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+
+
+
+dbt Semantic Layer integrations will:
+
+- Leverage the Discovery API to fetch a list of objects and their attributes, like metrics
+- Generate a dbt-SQL statement
+- Then query the SQL proxy to evaluate the results of this statement
+
+
+## Manage metrics
+
+:::info 📌
+
+New to dbt or metrics? Check out our [quickstart guide](/quickstarts) to build your first dbt project! If you'd like to define your first metrics, try our [Jaffle Shop](https://github.com/dbt-labs/jaffle_shop_metrics) example project.
+
+:::
+
+If you're not sure whether to define a metric in dbt or not, ask yourself the following:
+
+> *Is this something our teams consistently need to report on?*
+
+An important business metric should be:
+
+- Well-defined (the definition is agreed upon throughout the entire organization)
+- Time-bound (able to be compared across time)
+
+A great example of this is **revenue** — it can be aggregated on multiple levels (weekly, monthly, etc.) and is key for the broader business to understand.
+
+- ✅ `Monthly recurring revenue` or `Weekly active users` or `Average order value`
+- ❌ `1-off experimental metric`
+
+
+### Design and define metrics
+
+**Design metrics**
+To read about best practices on structuring and organizing your metrics, review our [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog post first.
+
+**Define metrics**
+You can define your metrics in `.yml` files nested under a metrics key and to design or define your own metrics in your dbt project, review the following documents:
+
+- [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog to understand best practices for designing and structuring metrics in your dbt project
+- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, filters, how to define and query your metrics, and the [dbt-metrics package](https://github.com/dbt-labs/dbt_metrics)
+- [dbt Semantic Layer quickstart](/docs/use-dbt-semantic-layer/quickstart-semantic-layer) to get started
+- [Understanding the components of the dbt Semantic Layer](https://docs.getdbt.com/blog/understanding-the-components-of-the-dbt-semantic-layer) blog post to see further examples
+
+Review our helpful metrics video below, which explains what metrics are, why they're important, and how you can get started:
+
+
+
+## Related questions
+
+
+ How are you storing my data?
+
+ The dbt Semantic Layer does not store, cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it is never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers to your connecting data tool.
+
+
+
+ Is the dbt Semantic Layer open source?
+
+ Some components of the dbt Semantic Layer are open source like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (what is actually compiling the dbt code) and the Discovery API are not open source.
+
+During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
+
+
+- dbt Core users can define metrics in their dbt Core projects and calculate them using macros from the metrics package. To use the dbt Semantic Layer integrations, users will need to have a dbt Cloud account.
+- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API.
+- Team and Enterprise accounts will be able to set up the Semantic Layer and Discovery API in the integrated partner tool to import metric definitions.
+
+
+
+ Is there a dbt Semantic Layer discussion hub?
+
+ Yes, absolutely! Join the dbt Slack community and the #dbt-cloud-semantic-layer Slack channel for all things related to the dbt Semantic Layer.
+
+
+
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/graphql.md b/website/docs/docs/use-dbt-semantic-layer/graphql.md
new file mode 100644
index 00000000000..726039dcf68
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/graphql.md
@@ -0,0 +1,24 @@
+---
+title: "GraphQL"
+id: graphql
+description: "Integrate and use the GraphQL API to query your metrics."
+tags: ["semantic-layer, apis"]
+---
+
+
+
+:::info
+
+The dbt Semantic Layer APIs are available to users on Team or Enterprise plans on dbt v1.6 or higher. To learn more about the Semantic Layer APIs, make sure you select v1.6 or higher in the docs navigation header and upgrade your dbt version.
+
+:::
+
+ADD GRAPHQL CONTENT HERE
+
+
+
+
+
+ADD GRAPHQL CONTENT HERE
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/jdbc.md b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
new file mode 100644
index 00000000000..d22f500765d
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
@@ -0,0 +1,22 @@
+---
+title: "JDBC"
+id: jdbc
+description: "Integrate and use the JDBC API to query your metrics."
+tags: ["semantic-layer, apis"]
+---
+
+
+
+:::info
+The dbt Semantic Layer APIs are available to users on Team or Enterprise plans on dbt v1.6 or higher. To learn more about the Semantic Layer APIs, make sure you select v1.6 or higher in the docs navigation header and upgrade your dbt version.
+:::
+
+ADD JDBC CONTENT HERE
+
+
+
+
+
+ADD JDBC CONTENT HERE
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
similarity index 57%
rename from website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md
rename to website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index 19a5fb15057..e844132c587 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-semantic-layer.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -1,36 +1,190 @@
---
-title: "Quickstart"
-id: quickstart-semantic-layer
-description: "Define metrics and set up the dbt Semantic Layer"
-sidebar_label: "Quickstart"
+title: "Get started with the dbt Semantic Layer"
+id: quickstart-sl
+description: "Go through this get started guide to build and define metrics, set up the dbt Semantic Layer, and query them using the Semantic Layer APIs."
+sidebar_label: "Getting started"
---
-:::info Coming soon
-The dbt Semantic Layer is undergoing a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
-**What’s changing?** The dbt_metrics package will be [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new way framework for defining metrics in dbt.
+This getting started page recommends a workflow to help you create your first metrics. Here are the steps you'll take:
-**What's new?** Learn how to [Build your metrics](/docs/build/build-metrics-intro?version=1.6) using MetricFlow, one of the key components that makes up the revamped dbt Semantic Layer. It handles SQL query construction and defines the specification for dbt semantic models and metrics.
+- [Create a semantic model](#create-a-semantic-model)
+- [Create your metrics](#create-your-metrics)
+- [Run your production job](#run-your-production-job)
+- [Set up dbt Semantic Layer](#setup)
+- [Connect to the Semantic Layer APIs](#connect-to-apis)
+- [Query your metrics using partner integrations](#query-metrics)
+
+## Prerequisites
+
+import SetUp from '/snippets/_v2-sl-prerequisites.md';
+
+
+
+
+
+:::tip
+New to dbt or metrics? Try our [Jaffle shop example project](https://github.com/dbt-labs/jaffle-sl-template) to help you get started!
+:::
+
+## Create a semantic model
+
+In MetricFlow, which powers the dbt Semantic Layer, there are two main objects:
+
+- [Semantic models](/docs/build/semantic-models) — Nodes in your semantic graph, connected via entities as edges. MetricFlow takes semantic models defined in YAML configuration files as inputs and creates a semantic graph that you can use to query metrics.
+- [Metrics](/docs/build/metrics-overview) — Can be defined in the same YAML files as your semantic models, or split into separate YAML files in any other subdirectories (provided those subdirectories are within the same dbt project repo).
+
+This step will guide you through setting up your semantic models, which consist of [entities](/docs/build/entities), [dimensions](/docs/build/dimensions), and [measures](/docs/build/measures).
+
+1. Name your semantic model, fill in appropriate metadata, and map it to a model in your dbt project.
+
+```yaml
+semantic_models:
+ - name: transactions
+ description: |
+ This table captures every transaction starting July 02, 2014. Each row represents one transaction
+ model: ref('fact_transactions')
+ ```
+
+2. Define your entities. These are the keys in your table that MetricFlow will use to join other semantic models. These are usually columns like `customer_id`, `transaction_id`, and so on.
+
+```yaml
+ entities:
+ - name: transaction
+ type: primary
+ expr: id_transaction
+ - name: customer
+ type: foreign
+ expr: id_customer
+ ```
+
+3. Define your dimensions and measures. Dimensions are non-aggregatable properties of the records in your table; they provide categorical or time-based context to enrich metrics. Measures are the building blocks for creating metrics: numerical columns that MetricFlow aggregates.
+
+```yaml
+measures:
+  - name: transaction_amount_usd
+    description: The total USD value of the transaction.
+    agg: sum
+dimensions:
+  - name: is_large
+    type: categorical
+    expr: case when transaction_amount_usd >= 30 then true else false end
+```
+
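+To aggregate measures over time, semantic models also typically declare a time dimension. Here's a minimal sketch, assuming the underlying `fact_transactions` model has a timestamp column (the `transaction_date` column name is hypothetical):
+
+```yaml
+dimensions:
+  - name: transaction_date
+    type: time
+    type_params:
+      time_granularity: day # the finest grain you want to query at
+```
+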
+:::tip
+
+If you're familiar with writing SQL, you can think of dimensions as the columns you would group by and measures as the columns you would aggregate.
+```sql
+select
+ metric_time_day, -- time
+ country, -- categorical dimension
+ sum(revenue_usd) -- measure
+from
+ snowflake.fact_transactions -- sql table
+group by metric_time_day, country -- dimensions
+ ```
:::
-## Public Preview
+## Create your metrics
+
+Now that you've created your first semantic model, it's time to define your first metric. MetricFlow supports different metric types like [simple](/docs/build/simple), [ratio](/docs/build/ratio), [cumulative](/docs/build/cumulative), and [derived](/docs/build/derived). You can define metrics in the same YAML files as your semantic models, or create a new file.
+
+The example we'll create is a simple metric that refers directly to the `transaction_amount_usd` measure, which will be implemented as a `sum()` function in SQL.
+
+```yaml
+---
+metrics:
+ - name: transaction_amount_usd
+ type: simple
+ type_params:
+ measure: transaction_amount_usd
+```
+
+1. Click **Save** and then **Preview** the code in the dbt Cloud IDE.
+* Run `mf query --metrics <metric_name> --group-by <dimension_name>` to manually query the metrics and dimensions in the IDE.
+2. Run `mf validate-configs` to validate the changes before committing them.
+3. Commit and merge the code changes that contain the metric definitions.
+
+Review the [MetricFlow](/docs/build/build-metrics-intro) documentation to further build your own metrics.
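+
+For instance, a ratio metric divides one measure by another. Here's a minimal sketch, assuming a hypothetical `transaction_count` measure (for example, `agg: count` over transactions) defined on the same semantic model:
+
+```yaml
+metrics:
+  - name: average_transaction_amount_usd
+    description: Average USD value per transaction.
+    type: ratio
+    type_params:
+      numerator: transaction_amount_usd
+      denominator: transaction_count # hypothetical count measure
+```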
+
+## Run your production job
+
+Once you’ve defined metrics in your dbt project, you can perform a job run in your deployment environment to materialize your metrics. Only the deployment environment is supported for the dbt Semantic Layer at this moment.
+
+1. Go to **Deploy** in the navigation header.
+2. Select **Jobs** to re-run the job with the most recent code in the deployment environment.
+3. Your metric should appear as a red node in the dbt Cloud IDE and dbt directed acyclic graphs (DAG).
+
+
+
+
+**What’s happening internally?**
+
+- Merging the code into your main branch allows dbt Cloud to pull those changes and build the definition in the manifest produced by the run.
+- Re-running the job in the deployment environment helps materialize the models, which the metrics depend on, in the data platform. It also makes sure that the manifest is up to date.
+- The dbt Discovery API pulls in the most recent manifest and allows your integrations to extract metadata from it.
+
+## Set up dbt Semantic Layer
-We're excited to announce the dbt Semantic Layer is currently available for Public Preview, which means:
+
+
+## Connect to the Semantic Layer APIs
+
+add content here
+
+## Query your metrics using partner integrations
+
+add content
+
+:::info 📌
+
+**Note:** You will need access to dbt Cloud and the dbt Semantic Layer from your integrated partner tool of choice.
-— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during Public Preview. Review [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) for more info on plan availability.
+:::
-- Team and Enterprise accounts will be able to set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated
-partner tool to import metric definition.
-- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse dbt metrics in external tools, which requires access to the Discovery API.
+## FAQs
-— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backwards compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the General Available (GA) release (future GA date to be announced).
+If you're encountering issues when defining your metrics or setting up the dbt Semantic Layer, review the answers to common questions and problems below.
-— **When?** Public Preview will end once the dbt Semantic Layer is available for GA. After GA, the dbt Semantic Layer will only be available to dbt Cloud **Team** and **Enterprise** plans.
+
+ How are you storing my data?
+
+  The dbt Semantic Layer does not store, cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it is never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers to your connected data tool.
+
+
+
+ Is the dbt Semantic Layer open source?
+
+  Some components of the dbt Semantic Layer are open source, like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (which actually compiles the dbt code) and the Discovery API are not open source.
-— **Where?** Public Preview is enabled at the account level so you don’t need to worry about enabling it per user.
+During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
+
+- dbt Core users can define metrics in their dbt Core projects and calculate them using macros from the metrics package. To use the dbt Semantic Layer integrations, you will need to have a dbt Cloud account.
+- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API.
+- Team and Enterprise accounts will be able to set up the Semantic Layer and Discovery API in the integrated partner tool to import metric definitions.
+
+
+
+
+
+## Next steps
-## Introduction
+Are you ready to define your own metrics and bring consistency to data consumers? Review the following documents to understand how to structure, define, and query metrics, and set up the dbt Semantic Layer:
+
+- [Set up dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl)
+- [About MetricFlow](/docs/build/about-metricflow)
+- [Semantic models](/docs/build/semantic-models)
+- [Metrics](/docs/build/metrics-overview)
+
+
+
+
+
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
+
+
To try out the features of the dbt Semantic Layer, you first need to have a dbt project set up. This quickstart guide will lay out the following steps, and recommends a workflow that demonstrates some of its essential features:
@@ -56,7 +210,6 @@ New to dbt or metrics? Check out our [quickstart guide](/quickstarts) to build
## Installing dbt metrics package
The dbt Semantic Layer supports the calculation of metrics by using the [dbt metrics package](https://hub.getdbt.com/dbt-labs/metrics/latest/). You can install the dbt metrics package in your dbt project by copying the below code blocks.
-
```yml
@@ -176,7 +329,7 @@ metrics:
2. Commit and merge the code changes that contain the metric definitions.
3. If you'd like to further design and define your own metrics, review the following documentation:
- - [dbt metrics](/docs/build/metrics) will povide you in-depth detail on attributes, properties, filters, and how to define and query metrics.
+   - [dbt metrics](/docs/build/metrics) will provide in-depth detail on attributes, properties, filters, and how to define and query metrics.
- Review [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) blog to understand best practices for designing and structuring metrics in your dbt project.
@@ -226,7 +379,7 @@ If you're encountering some issues when defining your metrics or setting up the
Is the dbt Semantic Layer open source?
- Some components of the dbt Semantic Layer are open source like dbt-core, the dbt_metrics package, and the BSL licensed dbt-server. The dbt Proxy Server (what is actually compiling the dbt code) and the Discovery API are not open source.
+  Some components of the dbt Semantic Layer are open source, like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (which actually compiles the dbt code) and the Discovery API are not open source.
During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
@@ -298,4 +451,6 @@ Are you ready to define your own metrics and bring consistency to data consumers
- [How to design and structure dbt metrics: Recommendations for getting started](https://docs.getdbt.com/blog/how-to-design-and-structure-metrics) to understand best practices for designing and structuring metrics in your dbt project
- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, properties, filters, and how to define and query metrics
- [Understanding the components of the dbt Semantic Layer](https://docs.getdbt.com/blog/understanding-the-components-of-the-dbt-semantic-layer) blog post to see further examples
-- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations.
+- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations.
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md b/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
deleted file mode 100644
index b045725ca62..00000000000
--- a/website/docs/docs/use-dbt-semantic-layer/set-dbt-semantic-layer.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Set up the dbt Semantic Layer"
-id: setup-dbt-semantic-layer
-description: "You can set up the dbt Semantic Layer in dbt Cloud."
-sidebar_label: "Set up the dbt Semantic Layer"
----
-
-:::info Coming soon
-The dbt Semantic Layer is undergoing a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-
-**What’s changing?** The dbt_metrics package will be [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new way framework for defining metrics in dbt.
-
-**What's new?** Learn how to [Build your metrics](/docs/build/build-metrics-intro?version=1.6) using MetricFlow, one of the key components that makes up the revamped dbt Semantic Layer. It handles SQL query construction and defines the specification for dbt semantic models and metrics.
-:::
-
-With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
-
-## Prerequisites
-
-Before you set up the dbt Semantic Layer, make sure you meet the following:
-
-
-
-
-
-
-
-## Set up dbt Semantic Layer
-
-
-
-
-
-
-## Related docs
-
-- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
-- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) page for more information on plan availability
-- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, properties, filters, and how to define and query metrics
-- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
new file mode 100644
index 00000000000..40acade5138
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -0,0 +1,107 @@
+---
+title: "Set up the dbt Semantic Layer"
+id: setup-sl
+description: "You can set up the dbt Semantic Layer in dbt Cloud."
+sidebar_label: "Set up the dbt Semantic Layer"
+---
+
+
+
+:::info Upgrade your dbt version to access the dbt Semantic Layer
+
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
+
+:::
+
+With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
+
+## Prerequisites
+
+Before you set up the dbt Semantic Layer, make sure you meet the following:
+
+import SetUp from '/snippets/_v2-sl-prerequisites.md';
+
+
+
+
+
+
+
+## Set up dbt Semantic Layer
+
+Before you continue with the following steps, you **must** have a multi-tenant dbt Cloud account hosted in North America.
+ * Team and Enterprise accounts can set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated partner tool to import metric definitions.
+ * Developer accounts can query the Proxy Server using SQL but won't be able to browse dbt metrics in external tools, which requires access to the Discovery API.
+
+You can set up the dbt Semantic Layer in dbt Cloud at the environment level by following these steps:
+
+1. Log in to your dbt Cloud account.
+2. Go to **Account Settings**, and then **Service Tokens** to create a new [service account API token](/docs/dbt-cloud-apis/service-tokens). Save your token somewhere safe.
+3. Assign permissions to service account tokens depending on the integration tool you choose. You can review the [integration partner documentation](https://www.getdbt.com/product/semantic-layer-integrations) to determine the permission sets you need to assign.
+4. Go to **Deploy** and then **Environment**, and select your **Deployment** environment.
+5. Click on **Settings** on the top right side of the page.
+6. Click **Edit** on the top right side of the page.
+7. Select dbt version 1.2 or higher.
+8. Toggle the Semantic Layer **On**.
+9. Copy the full proxy server URL (like `https://eagle-hqya7.proxy.cloud.getdbt.com`) to connect to your [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations).
+10. Use the URL in the data source configuration of the integrated partner tool.
+11. Use the data platform login credentials that make sense for how the data is consumed.
+
+:::info 📌
+
+Note: We don't recommend using your dbt Cloud credentials due to their elevated permissions. Instead, use credentials with permissions specific to your integration tool.
+
+:::
+
+12. Set up the [Discovery API](/docs/dbt-cloud-apis/discovery-api) (Team and Enterprise accounts only) in the integrated partner tool to import the metric definitions. The [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations) will treat the dbt Server as another data source (like a data platform). This requires:
+
+- The account ID, environment ID, and job ID (visible in the job URL)
+- An [API service token](/docs/dbt-cloud-apis/service-tokens) with job admin and metadata permissions
+
+Add the items above to the relevant fields in your integration tool.
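+
+For illustration, a generic configuration might collect these values as follows. The field names here are hypothetical and vary by partner tool, so check your partner's documentation for the exact fields:
+
+```yaml
+# Hypothetical integration-tool configuration; exact field names vary by partner.
+semantic_layer:
+  proxy_url: https://eagle-hqya7.proxy.cloud.getdbt.com # proxy server URL from step 9
+discovery_api:
+  account_id: "12345" # visible in the job URL
+  environment_id: "67890" # visible in the job URL
+  job_id: "24680" # visible in the job URL
+  service_token: dbtc_xxxxxxxxxx # needs job admin and metadata permissions
+```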
+
+
+
+
+## Related docs
+
+- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
+- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) page for more information on plan availability
+- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
+
+
+
+
+
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
+
+
+
+With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
+
+## Prerequisites
+
+Before you set up the dbt Semantic Layer, make sure you meet the following:
+
+
+
+
+
+
+
+## Set up dbt Semantic Layer
+
+
+
+
+
+
+## Related docs
+
+- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
+- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) page for more information on plan availability
+- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, properties, filters, and how to define and query metrics
+- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
new file mode 100644
index 00000000000..39c9e1c571a
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
@@ -0,0 +1,101 @@
+---
+title: "Semantic Layer APIs"
+id: sl-api-overview
+description: "Integrate and query using the Semantic Layer APIs."
+tags: ["semantic-layer", "apis"]
+hide_table_of_contents: true
+---
+
+
+
+:::info
+
+The revamped dbt Semantic Layer is available to users on [Team or Enterprise plans](https://www.getdbt.com/pricing/) on dbt v1.6 or higher.
+
+To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+
+:::
+
+Query dbt metrics via external integrations using the dbt Semantic Layer APIs. With the Semantic Layer APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+
+By leveraging metrics in dbt Cloud, you can create systems for data consumption and automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
+
+The dbt Semantic Layer generates a [`semantic_manifest.json`](/docs/use-dbt-semantic-layer/sl-manifest) artifact file. This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
+
+The dbt Semantic Layer provides the following APIs:
+
+
+
+
+
+
+
+
+
+
+
+## Authentication
+
+Requests to the dbt Semantic Layer APIs can be authorized through two types of API tokens:
+
+- [User tokens](/docs/dbt-cloud-apis/user-tokens)
+- [Service account tokens](/docs/dbt-cloud-apis/service-tokens)
+
+
+
+
+
+
+
+Query dbt metrics via external integrations using the dbt Semantic Layer APIs. With the Semantic Layer APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+
+By leveraging metrics in dbt Cloud, you can create systems for data consumption and automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
+
+The dbt Semantic Layer generates a [`semantic_manifest.json`](/docs/use-dbt-semantic-layer/sl-manifest) artifact file. This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
+
+To query metrics using the dbt Semantic Layer APIs, accounts must be on a [Team or Enterprise plan](https://www.getdbt.com/pricing/).
+
+The dbt Semantic Layer provides the following APIs:
+
+
+
+
+
+
+
+
+
+
+
+## Authentication
+
+Requests to the dbt Semantic Layer APIs can be authorized through two types of API tokens:
+
+- [User tokens](/docs/dbt-cloud-apis/user-tokens)
+- [Service account tokens](/docs/dbt-cloud-apis/service-tokens)
+
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-manifest.md b/website/docs/docs/use-dbt-semantic-layer/sl-manifest.md
new file mode 100644
index 00000000000..241a137ab42
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-manifest.md
@@ -0,0 +1,878 @@
+---
+title: "Semantic manifest"
+id: sl-manifest
+description: "Learn about the semantic manifest.json file and how you can use artifacts to gain insights about your Semantic Layer."
+sidebar_label: "Semantic manifest"
+---
+
+The dbt Semantic Layer creates an [artifact](/reference/artifacts/dbt-artifacts) file called the "Semantic Manifest" (`semantic_manifest.json`). This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
+
+Similar to the `manifest.json` file, the Semantic Manifest also lives in the `/target` directory of your dbt project. This is where dbt stores various artifacts (such as compiled models and tests) generated during the execution of your project.
+
+The Semantic Manifest allows you to gain insights into the entities and relationships defined in your Semantic Layer, making it easier to understand your data models and their dependencies.
+
+dbt generates the semantic manifest whenever it parses your project (for example, during `dbt parse`, `dbt build`, or `dbt run`). Its top-level keys are `semantic_models` (the entities, measures, and dimensions you defined), `metrics` (your metric definitions), and `interfaces_version` (the version of the semantic interfaces specification).
+
+Here's an example `target/semantic_manifest.json` file:
+
+```json
+{
+ "semantic_models": [
+ {
+ "name": "orders",
+ "defaults": {
+ "agg_time_dimension": "ds"
+ },
+ "description": "Model containing order data. The grain of the table is the order id.\n",
+ "node_relation": {
+ "alias": "orders",
+ "schema_name": "dbt_sl_test",
+ "database": "ANALYTICS",
+ "relation_name": "ANALYTICS.dbt_sl_test.orders"
+ },
+ "entities": [
+ {
+ "name": "order",
+ "description": null,
+ "type": "primary",
+ "role": null,
+ "expr": "order_id",
+ "metadata": null
+ },
+ {
+ "name": "location",
+ "description": null,
+ "type": "foreign",
+ "role": null,
+ "expr": "location_id",
+ "metadata": null
+ },
+ {
+ "name": "customer",
+ "description": null,
+ "type": "foreign",
+ "role": null,
+ "expr": "customer_id",
+ "metadata": null
+ }
+ ],
+ "measures": [
+ {
+ "name": "order_total",
+ "agg": "sum",
+ "description": null,
+ "create_metric": true,
+ "expr": null,
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "food_orders",
+ "agg": "sum",
+ "description": null,
+ "create_metric": false,
+ "expr": "case when is_food_order then order_total else 0 end",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "orders",
+ "agg": "sum",
+ "description": null,
+ "create_metric": false,
+ "expr": "1",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "max_order_value",
+ "agg": "max",
+ "description": null,
+ "create_metric": false,
+ "expr": "order_total",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "min_order_value",
+ "agg": "min",
+ "description": null,
+ "create_metric": false,
+ "expr": "order_total",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "customers",
+ "agg": "count_distinct",
+ "description": null,
+ "create_metric": false,
+ "expr": "customer_id",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "average_order_value",
+ "agg": "average",
+ "description": null,
+ "create_metric": false,
+ "expr": "order_total",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "order_cost",
+ "agg": "sum",
+ "description": null,
+ "create_metric": false,
+ "expr": null,
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "locations_with_orders",
+ "agg": "count_distinct",
+ "description": null,
+ "create_metric": true,
+ "expr": "location_id",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "median_order_value",
+ "agg": "median",
+ "description": null,
+ "create_metric": true,
+ "expr": "order_total",
+ "agg_params": null,
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "order_value_p99",
+ "agg": "percentile",
+ "description": null,
+ "create_metric": true,
+ "expr": "order_total",
+ "agg_params": {
+ "percentile": 0.99,
+ "use_discrete_percentile": true,
+ "use_approximate_percentile": false
+ },
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ },
+ {
+ "name": "discrete_order_value_p99",
+ "agg": "percentile",
+ "description": null,
+ "create_metric": true,
+ "expr": "order_total",
+ "agg_params": {
+ "percentile": 0.99,
+ "use_discrete_percentile": true,
+ "use_approximate_percentile": false
+ },
+ "metadata": null,
+ "non_additive_dimension": null,
+ "agg_time_dimension": "ds"
+ }
+ ],
+ "dimensions": [
+ {
+ "name": "location_name",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "ds",
+ "description": null,
+ "type": "time",
+ "is_partition": false,
+ "type_params": {
+ "time_granularity": "day",
+ "validity_params": null
+ },
+ "expr": "ordered_at",
+ "metadata": null
+ },
+ {
+ "name": "is_food_order",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "is_drink_order",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "order_total_dim",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": "order_total",
+ "metadata": null
+ }
+ ],
+ "metadata": null
+ },
+ {
+ "name": "customers",
+ "defaults": {
+ "agg_time_dimension": "first_ordered_at"
+ },
+ "description": "Customer dimension table. The grain of the table is one row per customer.\n",
+ "node_relation": {
+ "alias": "customers",
+ "schema_name": "dbt_sl_test",
+ "database": "ANALYTICS",
+ "relation_name": "ANALYTICS.dbt_sl_test.customers"
+ },
+ "entities": [
+ {
+ "name": "customer",
+ "description": null,
+ "type": "primary",
+ "role": null,
+ "expr": "customer_id",
+ "metadata": null
+ }
+ ],
+ "measures": [],
+ "dimensions": [
+ {
+ "name": "customer_name",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "count_lifetime_orders",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "lifetime_spend_pre_tax",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "lifetime_spend",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "customer_type",
+ "description": null,
+ "type": "categorical",
+ "is_partition": false,
+ "type_params": null,
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "first_ordered_at",
+ "description": null,
+ "type": "time",
+ "is_partition": false,
+ "type_params": {
+ "time_granularity": "day",
+ "validity_params": null
+ },
+ "expr": null,
+ "metadata": null
+ },
+ {
+ "name": "last_ordered_at",
+ "description": null,
+ "type": "time",
+ "is_partition": false,
+ "type_params": {
+ "time_granularity": "day",
+ "validity_params": null
+ },
+ "expr": null,
+ "metadata": null
+ }
+ ],
+ "metadata": null
+ }
+ ],
+ "metrics": [
+ {
+ "name": "east_coast_order_amount",
+ "description": "Sum of orders from the east coast.",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "orders",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": {
+        "where_sql_template": "{{ dimension('location_name') }} in ('Philadelphia')"
+ },
+ "metadata": null
+ },
+ {
+ "name": "large_order",
+ "description": "Sum of orders from the east coast.",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "orders",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": {
+        "where_sql_template": "{{ dimension('order_total_dim') }} >= 20"
+ },
+ "metadata": null
+ },
+ {
+ "name": "food_order_amount",
+ "description": "Sum order amount for food orders.",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "food_orders",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "food_order_pct",
+ "description": "The food cost as the % of the total order",
+ "type": "ratio",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": {
+ "name": "food_orders",
+ "filter": null,
+ "alias": null
+ },
+ "denominator": {
+ "name": "order_total",
+ "filter": null,
+ "alias": null
+ },
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "new_customer_order_pct",
+ "description": "The food cost as the % of the total order",
+ "type": "ratio",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": {
+ "name": "customers",
+ "filter": {
+            "where_sql_template": "{{ dimension('customer_type', entity_path=['customer']) }} = 'new'"
+ },
+ "alias": "customer_numerator"
+ },
+ "denominator": {
+ "name": "customers",
+ "filter": null,
+ "alias": null
+ },
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "order_gross_profit",
+ "description": "Gross profit from each order.",
+ "type": "derived",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": null,
+ "denominator": null,
+ "expr": "revenue - cost",
+ "window": null,
+ "grain_to_date": null,
+ "metrics": [
+ {
+ "name": "food_order_amount",
+ "filter": null,
+ "alias": "revenue",
+ "offset_window": null,
+ "offset_to_grain": null
+ },
+ {
+ "name": "order_amount",
+ "filter": null,
+ "alias": "cost",
+ "offset_window": null,
+ "offset_to_grain": null
+ }
+ ]
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "food_order_pct_cumulative",
+ "description": "Example to show a ratio referenced in a derived ",
+ "type": "derived",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": null,
+ "denominator": null,
+ "expr": "food_order_pct * 100",
+ "window": null,
+ "grain_to_date": null,
+ "metrics": [
+ {
+ "name": "food_order_pct",
+ "filter": null,
+ "alias": null,
+ "offset_window": null,
+ "offset_to_grain": null
+ }
+ ]
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "cumulative_growth_mom",
+ "description": "Example to show a ratio referenced in a derived ",
+ "type": "derived",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": null,
+ "denominator": null,
+ "expr": "cumulative_order_amount/cumulative_order_amount_lm",
+ "window": null,
+ "grain_to_date": null,
+ "metrics": [
+ {
+ "name": "cumulative_order_amount",
+ "filter": null,
+ "alias": null,
+ "offset_window": null,
+ "offset_to_grain": null
+ },
+ {
+ "name": "cumulative_order_amount",
+ "filter": null,
+ "alias": "cumulative_order_amount_lm",
+ "offset_window": {
+ "count": 1,
+ "granularity": "month"
+ },
+ "offset_to_grain": null
+ }
+ ]
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "food_order_gross_profit",
+ "description": "The gross profit for each order",
+ "type": "derived",
+ "type_params": {
+ "measure": null,
+ "measures": [],
+ "numerator": null,
+ "denominator": null,
+ "expr": "revenue - cost",
+ "window": null,
+ "grain_to_date": null,
+ "metrics": [
+ {
+ "name": "order_total",
+ "filter": {
+              "where_sql_template": "{{ dimension('is_food_order') }} = True"
+ },
+ "alias": "revenue",
+ "offset_window": null,
+ "offset_to_grain": null
+ },
+ {
+ "name": "order_amount",
+ "filter": {
+              "where_sql_template": "{{ dimension('is_food_order') }} = True"
+ },
+ "alias": "cost",
+ "offset_window": null,
+ "offset_to_grain": null
+ }
+ ]
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "cumulative_order_amount",
+ "description": "The cumulative value of all orders",
+ "type": "cumulative",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "order_total",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "cumulative_order_amount_l1m",
+ "description": "The cumulative value of all orders",
+ "type": "cumulative",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "order_total",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": {
+ "count": 1,
+ "granularity": "month"
+ },
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "cumulative_order_amount_mtd",
+ "description": "The cumulative value of all orders",
+ "type": "cumulative",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "order_total",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": "month",
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "order_count",
+ "description": "The number of orders placed",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "orders",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "max_order_amount",
+ "description": "The highest order value for a given period",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "max_order_value",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "min_order_amount",
+ "description": "The lowest order value for a given period",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "min_order_value",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "customers_with_orders",
+ "description": "Unique count of customers placing orders",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "customers",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "returning_customers_with_orders",
+ "description": "Unique count of customers placing orders",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "customers",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": {
+        "where_sql_template": "{{ dimension('customer_type', entity_path=['customer']) }} = 'returning'"
+ },
+ "metadata": null
+ },
+ {
+ "name": "new_customer_with_orders",
+ "description": "Unique count of customers placing orders",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "customers",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": {
+        "where_sql_template": "{{ dimension('customer_type', entity_path=['customer']) }} = 'new'"
+ },
+ "metadata": null
+ },
+ {
+ "name": "average_order_amount",
+ "description": "The average order value",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "average_order_value",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ },
+ {
+ "name": "order_amount",
+ "description": "The cost of fulfilling each order",
+ "type": "simple",
+ "type_params": {
+ "measure": null,
+ "measures": [
+ {
+ "name": "order_cost",
+ "filter": null,
+ "alias": null
+ }
+ ],
+ "numerator": null,
+ "denominator": null,
+ "expr": null,
+ "window": null,
+ "grain_to_date": null,
+ "metrics": []
+ },
+ "filter": null,
+ "metadata": null
+ }
+ ],
+ "interfaces_version": "0.1.0.dev3"
+}
+```
+
+
+## Related docs
+
+- [dbt Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)
+- [About dbt artifacts](/reference/artifacts/dbt-artifacts)
+
diff --git a/website/docs/reference/artifacts/dbt-artifacts.md b/website/docs/reference/artifacts/dbt-artifacts.md
index 2fbcc329484..84a7a40c986 100644
--- a/website/docs/reference/artifacts/dbt-artifacts.md
+++ b/website/docs/reference/artifacts/dbt-artifacts.md
@@ -3,12 +3,15 @@ title: "About dbt artifacts"
sidebar_label: "About dbt artifacts"
---
-With every invocation, dbt generates and saves one or more *artifacts*. Several of these are files (`manifest.json`, `catalog.json`, `run_results.json`, and `sources.json`) that are used to power:
+With every invocation, dbt generates and saves one or more *artifacts*. Several of these are files (`semantic_manifest.json`, `manifest.json`, `catalog.json`, `run_results.json`, and `sources.json`) that are used to power:
+
- [documentation](/docs/collaborate/documentation)
- [state](/reference/node-selection/syntax#about-node-selection)
- [visualizing source freshness](/docs/build/sources#snapshotting-source-data-freshness)
They could also be used to:
+
+- gain insights into your [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl)
- calculate project-level test coverage
- perform longitudinal analysis of run timing
- identify historical changes in structure
@@ -19,6 +22,7 @@ dbt has produced artifacts since the release of dbt-docs in v0.11.0. Starting in
## When are artifacts produced?
Most dbt commands (and corresponding RPC methods) produce artifacts:
+- [semantic manifest](/docs/use-dbt-semantic-layer/sl-manifest): produced by commands that parse your project; lives in the `/target` directory and contains comprehensive information about your Semantic Layer, such as your semantic models and metrics.
- [manifest](/reference/artifacts/manifest-json): produced by commands that read and understand your project
- [run results](/reference/artifacts/run-results-json): produced by commands that run, compile, or catalog nodes in your DAG
- [catalog](catalog-json): produced by `docs generate`
diff --git a/website/package-lock.json b/website/package-lock.json
index b15a903e97f..65c5f6ba759 100644
--- a/website/package-lock.json
+++ b/website/package-lock.json
@@ -16,6 +16,7 @@
"@stoplight/elements": "^7.7.17",
"@svgr/webpack": "^6.0.0",
"axios": "^0.27.2",
+ "build": "^0.1.4",
"classnames": "^2.3.1",
"clsx": "^1.1.1",
"color": "^3.1.2",
@@ -2058,7 +2059,6 @@
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz",
"integrity": "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==",
- "optional": true,
"engines": {
"node": ">=0.1.90"
}
@@ -2159,6 +2159,16 @@
"ms": "^2.1.1"
}
},
+ "node_modules/@dabh/diagnostics": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@dabh/diagnostics/-/diagnostics-2.0.3.tgz",
+ "integrity": "sha512-hrlQOIi7hAfzsMqlGSFyVucrx38O+j6wiGOf//H2ecvIEqYN4ADBSS2iLMh5UFyDunCNniUIPk/q3riFv45xRA==",
+ "dependencies": {
+ "colorspace": "1.1.x",
+ "enabled": "2.0.x",
+ "kuler": "^2.0.0"
+ }
+ },
"node_modules/@dbt-labs/react-dbt-dag": {
"version": "0.8.7",
"resolved": "https://npm.pkg.github.com/download/@dbt-labs/react-dbt-dag/0.8.7/927fdaa04ced1eca6c6a8505fa4c429df102ca29",
@@ -6919,6 +6929,11 @@
"@types/jest": "*"
}
},
+ "node_modules/@types/triple-beam": {
+ "version": "1.3.2",
+ "resolved": "https://registry.npmjs.org/@types/triple-beam/-/triple-beam-1.3.2.tgz",
+ "integrity": "sha512-txGIh+0eDFzKGC25zORnswy+br1Ha7hj5cMVwKIU7+s0U2AxxJru/jZSMU6OC9MJWP6+pc/hc6ZjyZShpsyY2g=="
+ },
"node_modules/@types/type-is": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/@types/type-is/-/type-is-1.6.3.tgz",
@@ -7781,8 +7796,7 @@
"node_modules/async": {
"version": "3.2.4",
"resolved": "https://registry.npmjs.org/async/-/async-3.2.4.tgz",
- "integrity": "sha512-iAB+JbDEGXhyIUavoDl9WP/Jj106Kz9DEn1DPgYw5ruDn0e3Wgi3sKFm55sASdGBNOQB8F59d9qQ7deqrHA8wQ==",
- "dev": true
+ "integrity": "sha512-iAB+JbDEGXhyIUavoDl9WP/Jj106Kz9DEn1DPgYw5ruDn0e3Wgi3sKFm55sASdGBNOQB8F59d9qQ7deqrHA8wQ=="
},
"node_modules/asynckit": {
"version": "0.4.0",
@@ -8472,6 +8486,26 @@
"resolved": "https://registry.npmjs.org/buffer-xor/-/buffer-xor-1.0.3.tgz",
"integrity": "sha512-571s0T7nZWK6vB67HI5dyUF7wXiNcfaPPPTl6zYCNApANjIvYJTg7hlud/+cJpdAhS7dVzqMLmfhfHR3rAcOjQ=="
},
+ "node_modules/build": {
+ "version": "0.1.4",
+ "resolved": "https://registry.npmjs.org/build/-/build-0.1.4.tgz",
+ "integrity": "sha512-KwbDJ/zrsU8KZRRMfoURG14cKIAStUlS8D5jBDvtrZbwO5FEkYqc3oB8HIhRiyD64A48w1lc+sOmQ+mmBw5U/Q==",
+ "dependencies": {
+ "cssmin": "0.3.x",
+ "jsmin": "1.x",
+ "jxLoader": "*",
+ "moo-server": "*",
+ "promised-io": "*",
+ "timespan": "2.x",
+ "uglify-js": "1.x",
+ "walker": "1.x",
+ "winston": "*",
+ "wrench": "1.3.x"
+ },
+ "engines": {
+ "node": ">v0.4.12"
+ }
+ },
"node_modules/builtin-status-codes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/builtin-status-codes/-/builtin-status-codes-3.0.0.tgz",
@@ -9103,6 +9137,15 @@
"resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz",
"integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ=="
},
+ "node_modules/colorspace": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/colorspace/-/colorspace-1.1.4.tgz",
+ "integrity": "sha512-BgvKJiuVu1igBUF2kEjRCZXol6wiiGbY5ipL/oVPwm0BL9sIpMIzM8IK7vwuxIIzOXMV3Ey5w+vxhm0rR/TN8w==",
+ "dependencies": {
+ "color": "^3.1.3",
+ "text-hex": "1.0.x"
+ }
+ },
"node_modules/combine-promises": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/combine-promises/-/combine-promises-1.1.0.tgz",
@@ -9862,6 +9905,14 @@
"node": ">=4"
}
},
+ "node_modules/cssmin": {
+ "version": "0.3.2",
+ "resolved": "https://registry.npmjs.org/cssmin/-/cssmin-0.3.2.tgz",
+ "integrity": "sha512-bynxGIAJ8ybrnFobjsQotIjA8HFDDgPwbeUWNXXXfR+B4f9kkxdcUyagJoQCSUOfMV+ZZ6bMn8bvbozlCzUGwQ==",
+ "bin": {
+ "cssmin": "bin/cssmin"
+ }
+ },
"node_modules/cssnano": {
"version": "5.1.15",
"resolved": "https://registry.npmjs.org/cssnano/-/cssnano-5.1.15.tgz",
@@ -10841,6 +10892,11 @@
"url": "https://github.com/sponsors/wooorm"
}
},
+ "node_modules/enabled": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/enabled/-/enabled-2.0.0.tgz",
+ "integrity": "sha512-AKrN98kuwOzMIdAizXGI86UFBoo26CL21UM763y1h/GMSJ4/OHU9k2YlsmBpyScFo/wbLzWQJBMCW4+IO3/+OQ=="
+ },
"node_modules/encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
@@ -11942,6 +11998,11 @@
"pend": "~1.2.0"
}
},
+ "node_modules/fecha": {
+ "version": "4.2.3",
+ "resolved": "https://registry.npmjs.org/fecha/-/fecha-4.2.3.tgz",
+ "integrity": "sha512-OP2IUU6HeYKJi3i0z4A19kHMQoLVs4Hc+DPqqxI2h/DPZHTm/vjsfC6P0b4jCMy14XizLBqvndQ+UilD7707Jw=="
+ },
"node_modules/feed": {
"version": "4.2.2",
"resolved": "https://registry.npmjs.org/feed/-/feed-4.2.2.tgz",
@@ -12158,6 +12219,11 @@
"react": "^15.0.2 || ^16.0.0 || ^17.0.0"
}
},
+ "node_modules/fn.name": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/fn.name/-/fn.name-1.1.0.tgz",
+ "integrity": "sha512-GRnmB5gPyJpAhTQdSZTSp9uaPSvl09KoYcMQtsB9rQoOmzs9dH6ffeccH+Z+cv6P68Hu5bC6JjRh4Ah/mHSNRw=="
+ },
"node_modules/fnv-plus": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/fnv-plus/-/fnv-plus-1.3.1.tgz",
@@ -16264,6 +16330,17 @@
"node": ">=4"
}
},
+ "node_modules/jsmin": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/jsmin/-/jsmin-1.0.1.tgz",
+ "integrity": "sha512-OPuL5X/bFKgVdMvEIX3hnpx3jbVpFCrEM8pKPXjFkZUqg521r41ijdyTz7vACOhW6o1neVlcLyd+wkbK5fNHRg==",
+ "bin": {
+ "jsmin": "bin/jsmin"
+ },
+ "engines": {
+ "node": ">=0.1.93"
+ }
+ },
"node_modules/json-buffer": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.0.tgz",
@@ -16363,6 +16440,28 @@
"verror": "1.10.0"
}
},
+ "node_modules/jxLoader": {
+ "version": "0.1.1",
+ "resolved": "https://registry.npmjs.org/jxLoader/-/jxLoader-0.1.1.tgz",
+ "integrity": "sha512-ClEvAj3K68y8uKhub3RgTmcRPo5DfIWvtxqrKQdDPyZ1UVHIIKvVvjrAsJFSVL5wjv0rt5iH9SMCZ0XRKNzeUA==",
+ "dependencies": {
+ "js-yaml": "0.3.x",
+ "moo-server": "1.3.x",
+ "promised-io": "*",
+ "walker": "1.x"
+ },
+ "engines": {
+ "node": ">v0.4.10"
+ }
+ },
+ "node_modules/jxLoader/node_modules/js-yaml": {
+ "version": "0.3.7",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-0.3.7.tgz",
+ "integrity": "sha512-/7PsVDNP2tVe2Z1cF9kTEkjamIwz4aooDpRKmN1+g/9eePCgcxsv4QDvEbxO0EH+gdDD7MLyDoR6BASo3hH51g==",
+ "engines": {
+ "node": "> 0.4.11"
+ }
+ },
"node_modules/katex": {
"version": "0.13.24",
"resolved": "https://registry.npmjs.org/katex/-/katex-0.13.24.tgz",
@@ -16418,6 +16517,11 @@
"node": ">= 8"
}
},
+ "node_modules/kuler": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/kuler/-/kuler-2.0.0.tgz",
+ "integrity": "sha512-Xq9nH7KlWZmXAtodXDDRE7vs6DU1gTU8zYDHDiWLSip45Egwq3plLHzPn27NgvzL2r1LMPC1vdqh98sQxtqj4A=="
+ },
"node_modules/latest-version": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/latest-version/-/latest-version-5.1.0.tgz",
@@ -17096,6 +17200,27 @@
"node": ">=8"
}
},
+ "node_modules/logform": {
+ "version": "2.5.1",
+ "resolved": "https://registry.npmjs.org/logform/-/logform-2.5.1.tgz",
+ "integrity": "sha512-9FyqAm9o9NKKfiAKfZoYo9bGXXuwMkxQiQttkT4YjjVtQVIQtK6LmVtlxmCaFswo6N4AfEkHqZTV0taDtPotNg==",
+ "dependencies": {
+ "@colors/colors": "1.5.0",
+ "@types/triple-beam": "^1.3.2",
+ "fecha": "^4.2.0",
+ "ms": "^2.1.1",
+ "safe-stable-stringify": "^2.3.1",
+ "triple-beam": "^1.3.0"
+ }
+ },
+ "node_modules/logform/node_modules/safe-stable-stringify": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/safe-stable-stringify/-/safe-stable-stringify-2.4.3.tgz",
+ "integrity": "sha512-e2bDA2WJT0wxseVd4lsDP4+3ONX6HpMXQa1ZhFQ7SU+GjvORCmShbCMltrtIDfkYhVHrOcPtj+KhmDBdPdZD1g==",
+ "engines": {
+ "node": ">=10"
+ }
+ },
"node_modules/longest-streak": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/longest-streak/-/longest-streak-2.0.4.tgz",
@@ -17188,7 +17313,6 @@
"version": "1.0.12",
"resolved": "https://registry.npmjs.org/makeerror/-/makeerror-1.0.12.tgz",
"integrity": "sha512-JmqCvUhmt43madlpFzG4BQzG2Z3m6tvQDNKdClZnO3VbIudJYmxsT0FNJMeiB2+JTSlTQTSbU8QdesVmwJcmLg==",
- "dev": true,
"dependencies": {
"tmpl": "1.0.5"
}
@@ -17859,6 +17983,14 @@
"integrity": "sha512-/CaclMHKQ3A6rnzBzOADfwdSJ25BFoFT0Emxsc4zYVyav5SkK9iA6lEtIeuN/oRYbwPgviJT+t3l+sjFa28jYg==",
"peer": true
},
+ "node_modules/moo-server": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/moo-server/-/moo-server-1.3.0.tgz",
+ "integrity": "sha512-9A8/eor2DXwpv1+a4pZAAydqLFVrWoKoO1fzdzqLUhYVXAO1Kgd1FR2gFZi7YdHzF0s4W8cDNwCfKJQrvLqxDw==",
+ "engines": {
+ "node": ">v0.4.10"
+ }
+ },
"node_modules/mrmime": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/mrmime/-/mrmime-1.0.1.tgz",
@@ -18438,6 +18570,14 @@
"wrappy": "1"
}
},
+ "node_modules/one-time": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/one-time/-/one-time-1.0.0.tgz",
+ "integrity": "sha512-5DXOiRKwuSEcQ/l0kGCF6Q3jcADFv5tSmRaJck/OqkVFcOzutB134KRSfF0xDrL39MNnqxbHBbUUcjZIhTgb2g==",
+ "dependencies": {
+ "fn.name": "1.x.x"
+ }
+ },
"node_modules/onetime": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz",
@@ -19901,6 +20041,11 @@
"asap": "~2.0.3"
}
},
+ "node_modules/promised-io": {
+ "version": "0.3.6",
+ "resolved": "https://registry.npmjs.org/promised-io/-/promised-io-0.3.6.tgz",
+ "integrity": "sha512-bNwZusuNIW4m0SPR8jooSyndD35ggirHlxVl/UhIaZD/F0OBv9ebfc6tNmbpZts3QXHggkjIBH8lvtnzhtcz0A=="
+ },
"node_modules/prompts": {
"version": "2.4.2",
"resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz",
@@ -22486,6 +22631,14 @@
"stackframe": "^1.3.4"
}
},
+ "node_modules/stack-trace": {
+ "version": "0.0.10",
+ "resolved": "https://registry.npmjs.org/stack-trace/-/stack-trace-0.0.10.tgz",
+ "integrity": "sha512-KGzahc7puUKkzyMt+IqAep+TVNbKP+k2Lmwhub39m1AsTSkaDutx56aDCo+HLDzf/D26BIHTJWNiTG1KAJiQCg==",
+ "engines": {
+ "node": "*"
+ }
+ },
"node_modules/stack-utils": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-2.0.6.tgz",
@@ -23328,6 +23481,11 @@
"node": ">=8"
}
},
+ "node_modules/text-hex": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/text-hex/-/text-hex-1.0.0.tgz",
+ "integrity": "sha512-uuVGNWzgJ4yhRaNSiubPY7OjISw4sw4E5Uv0wbjp+OzcbmVU/rsT8ujgcXJhn9ypzsgr5vlzpPqP+MBBKcGvbg=="
+ },
"node_modules/text-table": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/text-table/-/text-table-0.2.0.tgz",
@@ -23375,6 +23533,14 @@
"node": ">=0.6.0"
}
},
+ "node_modules/timespan": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/timespan/-/timespan-2.3.0.tgz",
+ "integrity": "sha512-0Jq9+58T2wbOyLth0EU+AUb6JMGCLaTWIykJFa7hyAybjVH9gpVMTfUAwo5fWAvtFt2Tjh/Elg8JtgNpnMnM8g==",
+ "engines": {
+ "node": ">= 0.2.0"
+ }
+ },
"node_modules/tiny-invariant": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/tiny-invariant/-/tiny-invariant-1.3.1.tgz",
@@ -23400,8 +23566,7 @@
"node_modules/tmpl": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/tmpl/-/tmpl-1.0.5.tgz",
- "integrity": "sha512-3f0uOEAQwIqGuWW2MVzYg8fV/QNnc/IpuJNG837rLuczAaLVHslWHZQj4IGiEl5Hs3kkbhwL9Ab7Hrsmuj+Smw==",
- "dev": true
+ "integrity": "sha512-3f0uOEAQwIqGuWW2MVzYg8fV/QNnc/IpuJNG837rLuczAaLVHslWHZQj4IGiEl5Hs3kkbhwL9Ab7Hrsmuj+Smw=="
},
"node_modules/to-fast-properties": {
"version": "2.0.0",
@@ -23491,6 +23656,14 @@
"url": "https://github.com/sponsors/wooorm"
}
},
+ "node_modules/triple-beam": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/triple-beam/-/triple-beam-1.4.1.tgz",
+ "integrity": "sha512-aZbgViZrg1QNcG+LULa7nhZpJTZSLm/mXnHXnbAbjmN5aSa0y7V+wvv6+4WaBtpISJzThKy+PIPxc1Nq1EJ9mg==",
+ "engines": {
+ "node": ">= 14.0.0"
+ }
+ },
"node_modules/trough": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz",
@@ -23655,6 +23828,14 @@
"node": "*"
}
},
+ "node_modules/uglify-js": {
+ "version": "1.3.5",
+ "resolved": "https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.5.tgz",
+ "integrity": "sha512-YPX1DjKtom8l9XslmPFQnqWzTBkvI4N0pbkzLuPZZ4QTyig0uQqvZz9NgUdfEV+qccJzi7fVcGWdESvRIjWptQ==",
+ "bin": {
+ "uglifyjs": "bin/uglifyjs"
+ }
+ },
"node_modules/unbox-primitive": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/unbox-primitive/-/unbox-primitive-1.0.2.tgz",
@@ -24445,7 +24626,6 @@
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz",
"integrity": "sha512-ts/8E8l5b7kY0vlWLewOkDXMmPdLcVV4GmOQLyxuSswIJsweeFZtAsMF7k1Nszz+TYBQrlYRmzOnr398y1JemQ==",
- "dev": true,
"dependencies": {
"makeerror": "1.0.12"
}
@@ -24975,6 +25155,48 @@
"resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz",
"integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw=="
},
+ "node_modules/winston": {
+ "version": "3.10.0",
+ "resolved": "https://registry.npmjs.org/winston/-/winston-3.10.0.tgz",
+ "integrity": "sha512-nT6SIDaE9B7ZRO0u3UvdrimG0HkB7dSTAgInQnNR2SOPJ4bvq5q79+pXLftKmP52lJGW15+H5MCK0nM9D3KB/g==",
+ "dependencies": {
+ "@colors/colors": "1.5.0",
+ "@dabh/diagnostics": "^2.0.2",
+ "async": "^3.2.3",
+ "is-stream": "^2.0.0",
+ "logform": "^2.4.0",
+ "one-time": "^1.0.0",
+ "readable-stream": "^3.4.0",
+ "safe-stable-stringify": "^2.3.1",
+ "stack-trace": "0.0.x",
+ "triple-beam": "^1.3.0",
+ "winston-transport": "^4.5.0"
+ },
+ "engines": {
+ "node": ">= 12.0.0"
+ }
+ },
+ "node_modules/winston-transport": {
+ "version": "4.5.0",
+ "resolved": "https://registry.npmjs.org/winston-transport/-/winston-transport-4.5.0.tgz",
+ "integrity": "sha512-YpZzcUzBedhlTAfJg6vJDlyEai/IFMIVcaEZZyl3UXIl4gmqRpU7AE89AHLkbzLUsv0NVmw7ts+iztqKxxPW1Q==",
+ "dependencies": {
+ "logform": "^2.3.2",
+ "readable-stream": "^3.6.0",
+ "triple-beam": "^1.3.0"
+ },
+ "engines": {
+ "node": ">= 6.4.0"
+ }
+ },
+ "node_modules/winston/node_modules/safe-stable-stringify": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/safe-stable-stringify/-/safe-stable-stringify-2.4.3.tgz",
+ "integrity": "sha512-e2bDA2WJT0wxseVd4lsDP4+3ONX6HpMXQa1ZhFQ7SU+GjvORCmShbCMltrtIDfkYhVHrOcPtj+KhmDBdPdZD1g==",
+ "engines": {
+ "node": ">=10"
+ }
+ },
"node_modules/wolfy87-eventemitter": {
"version": "5.2.9",
"resolved": "https://registry.npmjs.org/wolfy87-eventemitter/-/wolfy87-eventemitter-5.2.9.tgz",
@@ -25046,6 +25268,15 @@
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="
},
+ "node_modules/wrench": {
+ "version": "1.3.9",
+ "resolved": "https://registry.npmjs.org/wrench/-/wrench-1.3.9.tgz",
+ "integrity": "sha512-srTJQmLTP5YtW+F5zDuqjMEZqLLr/eJOZfDI5ibfPfRMeDh3oBUefAscuH0q5wBKE339ptH/S/0D18ZkfOfmKQ==",
+ "deprecated": "wrench.js is deprecated! You should check out fs-extra (https://github.com/jprichardson/node-fs-extra) for any operations you were using wrench for. Thanks for all the usage over the years.",
+ "engines": {
+ "node": ">=0.1.97"
+ }
+ },
"node_modules/write-file-atomic": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-3.0.3.tgz",
@@ -26648,8 +26879,7 @@
"@colors/colors": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz",
- "integrity": "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==",
- "optional": true
+ "integrity": "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ=="
},
"@cypress/react": {
"version": "7.0.2",
@@ -26735,6 +26965,16 @@
}
}
},
+ "@dabh/diagnostics": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@dabh/diagnostics/-/diagnostics-2.0.3.tgz",
+ "integrity": "sha512-hrlQOIi7hAfzsMqlGSFyVucrx38O+j6wiGOf//H2ecvIEqYN4ADBSS2iLMh5UFyDunCNniUIPk/q3riFv45xRA==",
+ "requires": {
+ "colorspace": "1.1.x",
+ "enabled": "2.0.x",
+ "kuler": "^2.0.0"
+ }
+ },
"@dbt-labs/react-dbt-dag": {
"version": "0.8.7",
"resolved": "https://npm.pkg.github.com/download/@dbt-labs/react-dbt-dag/0.8.7/927fdaa04ced1eca6c6a8505fa4c429df102ca29",
@@ -30416,6 +30656,11 @@
"@types/jest": "*"
}
},
+ "@types/triple-beam": {
+ "version": "1.3.2",
+ "resolved": "https://registry.npmjs.org/@types/triple-beam/-/triple-beam-1.3.2.tgz",
+ "integrity": "sha512-txGIh+0eDFzKGC25zORnswy+br1Ha7hj5cMVwKIU7+s0U2AxxJru/jZSMU6OC9MJWP6+pc/hc6ZjyZShpsyY2g=="
+ },
"@types/type-is": {
"version": "1.6.3",
"resolved": "https://registry.npmjs.org/@types/type-is/-/type-is-1.6.3.tgz",
@@ -31081,8 +31326,7 @@
"async": {
"version": "3.2.4",
"resolved": "https://registry.npmjs.org/async/-/async-3.2.4.tgz",
- "integrity": "sha512-iAB+JbDEGXhyIUavoDl9WP/Jj106Kz9DEn1DPgYw5ruDn0e3Wgi3sKFm55sASdGBNOQB8F59d9qQ7deqrHA8wQ==",
- "dev": true
+ "integrity": "sha512-iAB+JbDEGXhyIUavoDl9WP/Jj106Kz9DEn1DPgYw5ruDn0e3Wgi3sKFm55sASdGBNOQB8F59d9qQ7deqrHA8wQ=="
},
"asynckit": {
"version": "0.4.0",
@@ -31613,6 +31857,23 @@
"resolved": "https://registry.npmjs.org/buffer-xor/-/buffer-xor-1.0.3.tgz",
"integrity": "sha512-571s0T7nZWK6vB67HI5dyUF7wXiNcfaPPPTl6zYCNApANjIvYJTg7hlud/+cJpdAhS7dVzqMLmfhfHR3rAcOjQ=="
},
+ "build": {
+ "version": "0.1.4",
+ "resolved": "https://registry.npmjs.org/build/-/build-0.1.4.tgz",
+ "integrity": "sha512-KwbDJ/zrsU8KZRRMfoURG14cKIAStUlS8D5jBDvtrZbwO5FEkYqc3oB8HIhRiyD64A48w1lc+sOmQ+mmBw5U/Q==",
+ "requires": {
+ "cssmin": "0.3.x",
+ "jsmin": "1.x",
+ "jxLoader": "*",
+ "moo-server": "*",
+ "promised-io": "*",
+ "timespan": "2.x",
+ "uglify-js": "1.x",
+ "walker": "1.x",
+ "winston": "*",
+ "wrench": "1.3.x"
+ }
+ },
"builtin-status-codes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/builtin-status-codes/-/builtin-status-codes-3.0.0.tgz",
@@ -32085,6 +32346,15 @@
"resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz",
"integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ=="
},
+ "colorspace": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/colorspace/-/colorspace-1.1.4.tgz",
+ "integrity": "sha512-BgvKJiuVu1igBUF2kEjRCZXol6wiiGbY5ipL/oVPwm0BL9sIpMIzM8IK7vwuxIIzOXMV3Ey5w+vxhm0rR/TN8w==",
+ "requires": {
+ "color": "^3.1.3",
+ "text-hex": "1.0.x"
+ }
+ },
"combine-promises": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/combine-promises/-/combine-promises-1.1.0.tgz",
@@ -32647,6 +32917,11 @@
"resolved": "https://registry.npmjs.org/cssesc/-/cssesc-3.0.0.tgz",
"integrity": "sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg=="
},
+ "cssmin": {
+ "version": "0.3.2",
+ "resolved": "https://registry.npmjs.org/cssmin/-/cssmin-0.3.2.tgz",
+ "integrity": "sha512-bynxGIAJ8ybrnFobjsQotIjA8HFDDgPwbeUWNXXXfR+B4f9kkxdcUyagJoQCSUOfMV+ZZ6bMn8bvbozlCzUGwQ=="
+ },
"cssnano": {
"version": "5.1.15",
"resolved": "https://registry.npmjs.org/cssnano/-/cssnano-5.1.15.tgz",
@@ -33406,6 +33681,11 @@
"resolved": "https://registry.npmjs.org/emoticon/-/emoticon-3.2.0.tgz",
"integrity": "sha512-SNujglcLTTg+lDAcApPNgEdudaqQFiAbJCqzjNxJkvN9vAwCGi0uu8IUVvx+f16h+V44KCY6Y2yboroc9pilHg=="
},
+ "enabled": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/enabled/-/enabled-2.0.0.tgz",
+ "integrity": "sha512-AKrN98kuwOzMIdAizXGI86UFBoo26CL21UM763y1h/GMSJ4/OHU9k2YlsmBpyScFo/wbLzWQJBMCW4+IO3/+OQ=="
+ },
"encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
@@ -34245,6 +34525,11 @@
"pend": "~1.2.0"
}
},
+ "fecha": {
+ "version": "4.2.3",
+ "resolved": "https://registry.npmjs.org/fecha/-/fecha-4.2.3.tgz",
+ "integrity": "sha512-OP2IUU6HeYKJi3i0z4A19kHMQoLVs4Hc+DPqqxI2h/DPZHTm/vjsfC6P0b4jCMy14XizLBqvndQ+UilD7707Jw=="
+ },
"feed": {
"version": "4.2.2",
"resolved": "https://registry.npmjs.org/feed/-/feed-4.2.2.tgz",
@@ -34402,6 +34687,11 @@
"fbjs": "^3.0.1"
}
},
+ "fn.name": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/fn.name/-/fn.name-1.1.0.tgz",
+ "integrity": "sha512-GRnmB5gPyJpAhTQdSZTSp9uaPSvl09KoYcMQtsB9rQoOmzs9dH6ffeccH+Z+cv6P68Hu5bC6JjRh4Ah/mHSNRw=="
+ },
"fnv-plus": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/fnv-plus/-/fnv-plus-1.3.1.tgz",
@@ -37434,6 +37724,11 @@
"resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz",
"integrity": "sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA=="
},
+ "jsmin": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/jsmin/-/jsmin-1.0.1.tgz",
+ "integrity": "sha512-OPuL5X/bFKgVdMvEIX3hnpx3jbVpFCrEM8pKPXjFkZUqg521r41ijdyTz7vACOhW6o1neVlcLyd+wkbK5fNHRg=="
+ },
"json-buffer": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.0.tgz",
@@ -37522,6 +37817,24 @@
"verror": "1.10.0"
}
},
+ "jxLoader": {
+ "version": "0.1.1",
+ "resolved": "https://registry.npmjs.org/jxLoader/-/jxLoader-0.1.1.tgz",
+ "integrity": "sha512-ClEvAj3K68y8uKhub3RgTmcRPo5DfIWvtxqrKQdDPyZ1UVHIIKvVvjrAsJFSVL5wjv0rt5iH9SMCZ0XRKNzeUA==",
+ "requires": {
+ "js-yaml": "0.3.x",
+ "moo-server": "1.3.x",
+ "promised-io": "*",
+ "walker": "1.x"
+ },
+ "dependencies": {
+ "js-yaml": {
+ "version": "0.3.7",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-0.3.7.tgz",
+ "integrity": "sha512-/7PsVDNP2tVe2Z1cF9kTEkjamIwz4aooDpRKmN1+g/9eePCgcxsv4QDvEbxO0EH+gdDD7MLyDoR6BASo3hH51g=="
+ }
+ }
+ },
"katex": {
"version": "0.13.24",
"resolved": "https://registry.npmjs.org/katex/-/katex-0.13.24.tgz",
@@ -37560,6 +37873,11 @@
"resolved": "https://registry.npmjs.org/klona/-/klona-2.0.6.tgz",
"integrity": "sha512-dhG34DXATL5hSxJbIexCft8FChFXtmskoZYnoPWjXQuebWYCNkVeV3KkGegCK9CP1oswI/vQibS2GY7Em/sJJA=="
},
+ "kuler": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/kuler/-/kuler-2.0.0.tgz",
+ "integrity": "sha512-Xq9nH7KlWZmXAtodXDDRE7vs6DU1gTU8zYDHDiWLSip45Egwq3plLHzPn27NgvzL2r1LMPC1vdqh98sQxtqj4A=="
+ },
"latest-version": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/latest-version/-/latest-version-5.1.0.tgz",
@@ -38042,6 +38360,26 @@
}
}
},
+ "logform": {
+ "version": "2.5.1",
+ "resolved": "https://registry.npmjs.org/logform/-/logform-2.5.1.tgz",
+ "integrity": "sha512-9FyqAm9o9NKKfiAKfZoYo9bGXXuwMkxQiQttkT4YjjVtQVIQtK6LmVtlxmCaFswo6N4AfEkHqZTV0taDtPotNg==",
+ "requires": {
+ "@colors/colors": "1.5.0",
+ "@types/triple-beam": "^1.3.2",
+ "fecha": "^4.2.0",
+ "ms": "^2.1.1",
+ "safe-stable-stringify": "^2.3.1",
+ "triple-beam": "^1.3.0"
+ },
+ "dependencies": {
+ "safe-stable-stringify": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/safe-stable-stringify/-/safe-stable-stringify-2.4.3.tgz",
+ "integrity": "sha512-e2bDA2WJT0wxseVd4lsDP4+3ONX6HpMXQa1ZhFQ7SU+GjvORCmShbCMltrtIDfkYhVHrOcPtj+KhmDBdPdZD1g=="
+ }
+ }
+ },
"longest-streak": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/longest-streak/-/longest-streak-2.0.4.tgz",
@@ -38111,7 +38449,6 @@
"version": "1.0.12",
"resolved": "https://registry.npmjs.org/makeerror/-/makeerror-1.0.12.tgz",
"integrity": "sha512-JmqCvUhmt43madlpFzG4BQzG2Z3m6tvQDNKdClZnO3VbIudJYmxsT0FNJMeiB2+JTSlTQTSbU8QdesVmwJcmLg==",
- "dev": true,
"requires": {
"tmpl": "1.0.5"
}
@@ -38572,6 +38909,11 @@
"integrity": "sha512-/CaclMHKQ3A6rnzBzOADfwdSJ25BFoFT0Emxsc4zYVyav5SkK9iA6lEtIeuN/oRYbwPgviJT+t3l+sjFa28jYg==",
"peer": true
},
+ "moo-server": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/moo-server/-/moo-server-1.3.0.tgz",
+ "integrity": "sha512-9A8/eor2DXwpv1+a4pZAAydqLFVrWoKoO1fzdzqLUhYVXAO1Kgd1FR2gFZi7YdHzF0s4W8cDNwCfKJQrvLqxDw=="
+ },
"mrmime": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/mrmime/-/mrmime-1.0.1.tgz",
@@ -39005,6 +39347,14 @@
"wrappy": "1"
}
},
+ "one-time": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/one-time/-/one-time-1.0.0.tgz",
+ "integrity": "sha512-5DXOiRKwuSEcQ/l0kGCF6Q3jcADFv5tSmRaJck/OqkVFcOzutB134KRSfF0xDrL39MNnqxbHBbUUcjZIhTgb2g==",
+ "requires": {
+ "fn.name": "1.x.x"
+ }
+ },
"onetime": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz",
@@ -40025,6 +40375,11 @@
"asap": "~2.0.3"
}
},
+ "promised-io": {
+ "version": "0.3.6",
+ "resolved": "https://registry.npmjs.org/promised-io/-/promised-io-0.3.6.tgz",
+ "integrity": "sha512-bNwZusuNIW4m0SPR8jooSyndD35ggirHlxVl/UhIaZD/F0OBv9ebfc6tNmbpZts3QXHggkjIBH8lvtnzhtcz0A=="
+ },
"prompts": {
"version": "2.4.2",
"resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz",
@@ -41969,6 +42324,11 @@
"stackframe": "^1.3.4"
}
},
+ "stack-trace": {
+ "version": "0.0.10",
+ "resolved": "https://registry.npmjs.org/stack-trace/-/stack-trace-0.0.10.tgz",
+ "integrity": "sha512-KGzahc7puUKkzyMt+IqAep+TVNbKP+k2Lmwhub39m1AsTSkaDutx56aDCo+HLDzf/D26BIHTJWNiTG1KAJiQCg=="
+ },
"stack-utils": {
"version": "2.0.6",
"resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-2.0.6.tgz",
@@ -42579,6 +42939,11 @@
"minimatch": "^3.0.4"
}
},
+ "text-hex": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/text-hex/-/text-hex-1.0.0.tgz",
+ "integrity": "sha512-uuVGNWzgJ4yhRaNSiubPY7OjISw4sw4E5Uv0wbjp+OzcbmVU/rsT8ujgcXJhn9ypzsgr5vlzpPqP+MBBKcGvbg=="
+ },
"text-table": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/text-table/-/text-table-0.2.0.tgz",
@@ -42620,6 +42985,11 @@
"setimmediate": "^1.0.4"
}
},
+ "timespan": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/timespan/-/timespan-2.3.0.tgz",
+ "integrity": "sha512-0Jq9+58T2wbOyLth0EU+AUb6JMGCLaTWIykJFa7hyAybjVH9gpVMTfUAwo5fWAvtFt2Tjh/Elg8JtgNpnMnM8g=="
+ },
"tiny-invariant": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/tiny-invariant/-/tiny-invariant-1.3.1.tgz",
@@ -42642,8 +43012,7 @@
"tmpl": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/tmpl/-/tmpl-1.0.5.tgz",
- "integrity": "sha512-3f0uOEAQwIqGuWW2MVzYg8fV/QNnc/IpuJNG837rLuczAaLVHslWHZQj4IGiEl5Hs3kkbhwL9Ab7Hrsmuj+Smw==",
- "dev": true
+ "integrity": "sha512-3f0uOEAQwIqGuWW2MVzYg8fV/QNnc/IpuJNG837rLuczAaLVHslWHZQj4IGiEl5Hs3kkbhwL9Ab7Hrsmuj+Smw=="
},
"to-fast-properties": {
"version": "2.0.0",
@@ -42707,6 +43076,11 @@
"resolved": "https://registry.npmjs.org/trim-trailing-lines/-/trim-trailing-lines-1.1.4.tgz",
"integrity": "sha512-rjUWSqnfTNrjbB9NQWfPMH/xRK1deHeGsHoVfpxJ++XeYXE0d6B1En37AHfw3jtfTU7dzMzZL2jjpe8Qb5gLIQ=="
},
+ "triple-beam": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/triple-beam/-/triple-beam-1.4.1.tgz",
+ "integrity": "sha512-aZbgViZrg1QNcG+LULa7nhZpJTZSLm/mXnHXnbAbjmN5aSa0y7V+wvv6+4WaBtpISJzThKy+PIPxc1Nq1EJ9mg=="
+ },
"trough": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz",
@@ -42822,6 +43196,11 @@
"resolved": "https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.34.tgz",
"integrity": "sha512-cJMeh/eOILyGu0ejgTKB95yKT3zOenSe9UGE3vj6WfiOwgGYnmATUsnDixMFvdU+rNMvWih83hrUP8VwhF9yXQ=="
},
+ "uglify-js": {
+ "version": "1.3.5",
+ "resolved": "https://registry.npmjs.org/uglify-js/-/uglify-js-1.3.5.tgz",
+ "integrity": "sha512-YPX1DjKtom8l9XslmPFQnqWzTBkvI4N0pbkzLuPZZ4QTyig0uQqvZz9NgUdfEV+qccJzi7fVcGWdESvRIjWptQ=="
+ },
"unbox-primitive": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/unbox-primitive/-/unbox-primitive-1.0.2.tgz",
@@ -43398,7 +43777,6 @@
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/walker/-/walker-1.0.8.tgz",
"integrity": "sha512-ts/8E8l5b7kY0vlWLewOkDXMmPdLcVV4GmOQLyxuSswIJsweeFZtAsMF7k1Nszz+TYBQrlYRmzOnr398y1JemQ==",
- "dev": true,
"requires": {
"makeerror": "1.0.12"
}
@@ -43771,6 +44149,41 @@
"resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz",
"integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw=="
},
+ "winston": {
+ "version": "3.10.0",
+ "resolved": "https://registry.npmjs.org/winston/-/winston-3.10.0.tgz",
+ "integrity": "sha512-nT6SIDaE9B7ZRO0u3UvdrimG0HkB7dSTAgInQnNR2SOPJ4bvq5q79+pXLftKmP52lJGW15+H5MCK0nM9D3KB/g==",
+ "requires": {
+ "@colors/colors": "1.5.0",
+ "@dabh/diagnostics": "^2.0.2",
+ "async": "^3.2.3",
+ "is-stream": "^2.0.0",
+ "logform": "^2.4.0",
+ "one-time": "^1.0.0",
+ "readable-stream": "^3.4.0",
+ "safe-stable-stringify": "^2.3.1",
+ "stack-trace": "0.0.x",
+ "triple-beam": "^1.3.0",
+ "winston-transport": "^4.5.0"
+ },
+ "dependencies": {
+ "safe-stable-stringify": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/safe-stable-stringify/-/safe-stable-stringify-2.4.3.tgz",
+ "integrity": "sha512-e2bDA2WJT0wxseVd4lsDP4+3ONX6HpMXQa1ZhFQ7SU+GjvORCmShbCMltrtIDfkYhVHrOcPtj+KhmDBdPdZD1g=="
+ }
+ }
+ },
+ "winston-transport": {
+ "version": "4.5.0",
+ "resolved": "https://registry.npmjs.org/winston-transport/-/winston-transport-4.5.0.tgz",
+ "integrity": "sha512-YpZzcUzBedhlTAfJg6vJDlyEai/IFMIVcaEZZyl3UXIl4gmqRpU7AE89AHLkbzLUsv0NVmw7ts+iztqKxxPW1Q==",
+ "requires": {
+ "logform": "^2.3.2",
+ "readable-stream": "^3.6.0",
+ "triple-beam": "^1.3.0"
+ }
+ },
"wolfy87-eventemitter": {
"version": "5.2.9",
"resolved": "https://registry.npmjs.org/wolfy87-eventemitter/-/wolfy87-eventemitter-5.2.9.tgz",
@@ -43817,6 +44230,11 @@
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="
},
+ "wrench": {
+ "version": "1.3.9",
+ "resolved": "https://registry.npmjs.org/wrench/-/wrench-1.3.9.tgz",
+ "integrity": "sha512-srTJQmLTP5YtW+F5zDuqjMEZqLLr/eJOZfDI5ibfPfRMeDh3oBUefAscuH0q5wBKE339ptH/S/0D18ZkfOfmKQ=="
+ },
"write-file-atomic": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-3.0.3.tgz",
diff --git a/website/package.json b/website/package.json
index afb7a9b1cd4..080ede9e3e6 100644
--- a/website/package.json
+++ b/website/package.json
@@ -19,6 +19,7 @@
"@stoplight/elements": "^7.7.17",
"@svgr/webpack": "^6.0.0",
"axios": "^0.27.2",
+ "build": "^0.1.4",
"classnames": "^2.3.1",
"clsx": "^1.1.1",
"color": "^3.1.2",
diff --git a/website/sidebars.js b/website/sidebars.js
index 46db14dd6d4..2c68ecf443a 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -404,21 +404,22 @@ const sidebarSettings = {
{
type: "category",
label: "Use the dbt Semantic Layer",
- link: { type: "doc", id: "docs/use-dbt-semantic-layer/dbt-sl" },
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/dbt-sl" },
items: [
- "docs/use-dbt-semantic-layer/quickstart-sl",
- "docs/use-dbt-semantic-layer/setup-dbt-sl",
+ "docs/use-dbt-semantic-layer/quickstart-sl",
+ "docs/use-dbt-semantic-layer/setup-sl",
"docs/use-dbt-semantic-layer/avail-sl-integrations",
{
type: "category",
label: "Semantic Layer APIs",
- link: { type: "doc", id: "docs/use-dbt-semantic-layer/sl-api-overview" },
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/sl-api-overview" },
items: [
"docs/use-dbt-semantic-layer/jdbc",
"docs/use-dbt-semantic-layer/graphql",
+ "docs/use-dbt-semantic-layer/sl-manifest",
],
},
- ],
+ ],
},
{
type: "category",
diff --git a/website/snippets/_legacy-sl-callout.md b/website/snippets/_legacy-sl-callout.md
new file mode 100644
index 00000000000..8ed4e162f75
--- /dev/null
+++ b/website/snippets/_legacy-sl-callout.md
@@ -0,0 +1,13 @@
+:::info Upgrade your dbt version to access the dbt Semantic Layer
+
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
+
+**Who does this affect?** The revamped dbt Semantic Layer is available for users on Team or Enterprise plans on dbt v1.6 and higher. To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+
+**What should you do?** We **highly** recommend you upgrade to dbt v1.6 and higher to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
+
+You can still use and configure the legacy dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older; however, these versions are no longer supported and won't receive any code fixes.
+
+:::
diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md
new file mode 100644
index 00000000000..bcd5fd02452
--- /dev/null
+++ b/website/snippets/_v2-sl-prerequisites.md
@@ -0,0 +1,52 @@
+
+
+
+- Have a [multi-tenant dbt Cloud](/docs/deploy/regions) instance, hosted in North America
+- Have both your production and development environments running dbt version 1.6 or higher
+- Use the Snowflake, Postgres, BigQuery, Databricks, or Redshift data platform
+- Install the [MetricFlow CLI](https://github.com/dbt-labs/metricflow) package
+ * Note — After installing the package, make sure you run at least one model.
+- Set up the [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated tool to import metric definitions
+ * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
+
+
+
+
+
+
+- Have a multi-tenant dbt Cloud instance, hosted in North America
+- Have both your production and development environments running dbt version 1.3 or higher
+- Use Snowflake data platform
+- Install the dbt metrics package version >=1.3.0, <1.4.0 in your dbt project (see the `packages.yml` sketch after this list)
+ * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
+- Set up the Discovery API in the integrated tool to import metric definitions
+ * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
+- Recommended - Review the dbt metrics page and Understanding the components of the dbt Semantic Layer blog
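
For reference, and assuming the legacy package is the one published on the dbt Hub as `dbt-labs/metrics`, the version pin from the install step above might look like this in `packages.yml`:

```yaml
# packages.yml (assumed package name): pin the legacy dbt metrics
# package to a release compatible with dbt v1.3
packages:
  - package: dbt-labs/metrics
    version: [">=1.3.0", "<1.4.0"]
```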
+
+
+
+
+
+- Have a multi-tenant dbt Cloud instance, hosted in North America
+- Have both your production and development environments running dbt version 1.3 or higher
+- Use Snowflake data platform
+- Install the dbt metrics package version >=1.3.0, <1.4.0 in your dbt project
+ * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
+- Set up the Discovery API in the integrated tool to import metric definitions
+ * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
+- Recommended - Review the dbt metrics page and Understanding the components of the dbt Semantic Layer blog
+
+
+
+
+
+- Have a multi-tenant dbt Cloud instance, hosted in North America
+- Have both your production and development environments running dbt version 1.2
+- Use Snowflake data platform
+- Install the dbt metrics package version >=0.3.0, <0.4.0 in your dbt project
+ * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
+- Set up the Discovery API in the integrated tool to import metric definitions
+ * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
+- Recommended - Review the dbt metrics page and Understanding the components of the dbt Semantic Layer blog
+
+
diff --git a/website/static/_redirects b/website/static/_redirects
index 27e30e83c9e..8f5d274d3f3 100644
--- a/website/static/_redirects
+++ b/website/static/_redirects
@@ -1,4 +1,8 @@
+## semantic layer
+/docs/use-dbt-semantic-layer/dbt-semantic-layer /docs/use-dbt-semantic-layer/dbt-sl 301
+/docs/use-dbt-semantic-layer/set-up-semantic-layer /docs/use-dbt-semantic-layer/setup-sl 301
+/docs/use-dbt-semantic-layer/quickstart-semantic-layer /docs/use-dbt-semantic-layer/quickstart-sl 301
## refocus deploy page
/docs/collaborate/environments/environments-in-dbt /docs/environments-in-dbt 301
/docs/collaborate/environments/dbt-cloud-environments /docs/deploy/dbt-cloud-environments 301
From cfc986a09cc2bebde2ed5887ac3d95f23c08f60a Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 11 Jul 2023 12:22:10 +0100
Subject: [PATCH 019/703] fix links
---
website/docs/docs/build/about-metricflow.md | 2 +-
website/docs/docs/build/build-metrics-intro.md | 2 +-
website/docs/docs/cloud/about-cloud/dbt-cloud-features.md | 2 +-
website/docs/docs/deploy/deploy-environments.md | 2 +-
.../docs/guides/dbt-ecosystem/sl-partner-integration-guide.md | 4 ++--
5 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/website/docs/docs/build/about-metricflow.md b/website/docs/docs/build/about-metricflow.md
index f35bed24044..e02fa778b05 100644
--- a/website/docs/docs/build/about-metricflow.md
+++ b/website/docs/docs/build/about-metricflow.md
@@ -10,7 +10,7 @@ This guide introduces MetricFlow's fundamental ideas for new users. MetricFlow,
:::info
-MetricFlow is a new way to define metrics in dbt and one of the key components of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-semantic-layer). It handles SQL query construction and defines the specification for dbt semantic models and metrics.
+MetricFlow is a new way to define metrics in dbt and one of the key components of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl). It handles SQL query construction and defines the specification for dbt semantic models and metrics.
To fully experience the dbt Semantic Layer, including the ability to query dbt metrics via external integrations, you'll need a [dbt Cloud Team or Enterprise account](https://www.getdbt.com/pricing/).
diff --git a/website/docs/docs/build/build-metrics-intro.md b/website/docs/docs/build/build-metrics-intro.md
index e98ee013d0b..617b3d1bc4a 100644
--- a/website/docs/docs/build/build-metrics-intro.md
+++ b/website/docs/docs/build/build-metrics-intro.md
@@ -7,7 +7,7 @@ tags: [Metrics, Semantic Layer, Governance]
hide_table_of_contents: true
---
-Use MetricFlow in dbt to centrally define your metrics. MetricFlow is a key component of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-semantic-layer) and is responsible for SQL query construction and defining specifications for dbt semantic models and metrics.
+Use MetricFlow in dbt to centrally define your metrics. MetricFlow is a key component of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and is responsible for SQL query construction and defining specifications for dbt semantic models and metrics.
Use familiar constructs like semantic models and metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
diff --git a/website/docs/docs/cloud/about-cloud/dbt-cloud-features.md b/website/docs/docs/cloud/about-cloud/dbt-cloud-features.md
index f301dfce34b..a7555adb6a8 100644
--- a/website/docs/docs/cloud/about-cloud/dbt-cloud-features.md
+++ b/website/docs/docs/cloud/about-cloud/dbt-cloud-features.md
@@ -63,7 +63,7 @@ link="/docs/cloud/dbt-cloud-ide/develop-in-the-cloud"
**E
### Semantic Layer
-For Semantic Layer-eligible customers, the next section of environment settings is the Semantic Layer configurations. [The Semantic Layer setup guide](/docs/use-dbt-semantic-layer/setup-dbt-semantic-layer) has the most up-to-date setup instructions!
+For Semantic Layer-eligible customers, the next section of environment settings is the Semantic Layer configurations. [The Semantic Layer setup guide](/docs/use-dbt-semantic-layer/setup-sl) has the most up-to-date setup instructions!
### Deployment connection
diff --git a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
index f2fffd43994..6f7ae1df2e7 100644
--- a/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
+++ b/website/docs/guides/dbt-ecosystem/sl-partner-integration-guide.md
@@ -86,7 +86,7 @@ To build a successful and seamless dbt Semantic Layer integration, it should exp
This section will explain how to connect to and query the [Discovery API](/docs/dbt-cloud-apis/discovery-api) for model and metric definitions.
-To use the dbt Semantic Layer, you must meet the [prerequisites](/docs/use-dbt-semantic-layer/dbt-semantic-layer#prerequisites).
+To use the dbt Semantic Layer, you must meet the [prerequisites](/docs/use-dbt-semantic-layer/dbt-sl#prerequisites).
Discovery API authorization
@@ -652,7 +652,7 @@ If you'd like to become a formal partner, have product feedback/questions, or ar
## Related docs
-- [dbt Semantic Layer docs](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-semantic-layer) to learn about the product.
+- [dbt Semantic Layer docs](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) to learn about the product.
- [dbt Metrics docs](https://docs.getdbt.com/docs/building-a-dbt-project/metrics) for more information about its components.
- [dbt Semantic Layer intro blog](https://www.getdbt.com/blog/dbt-semantic-layer/) and [launch blog](https://www.getdbt.com/blog/frontiers-of-the-dbt-semantic-layer/) to learn more about the product vision and purpose.
- [dbt Semantic Layer integrations page](https://www.getdbt.com/product/semantic-layer-integrations) for information about the available partner integrations.
From 81d018c3f5524f76dd2ea2794049517c43e13981 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Tue, 11 Jul 2023 12:30:02 +0100
Subject: [PATCH 020/703] Update
website/docs/docs/use-dbt-semantic-layer/setup-sl.md
---
website/docs/docs/use-dbt-semantic-layer/setup-sl.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index 40acade5138..b16a815198c 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
Note - It is _not_ recommended that you use your dbt Cloud credentials due to elevated permissions.
## Related docs
- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
-- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) page for more information on plan availability
- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
From 15e48c822d8dabce00fd7af2391b18f1e3c16da6 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 11 Jul 2023 13:12:16 +0100
Subject: [PATCH 021/703] adding partials
---
website/dbt-versions.js | 4 +
website/docs/docs/build/metrics.md | 10 +--
.../avail-sl-integrations.md | 8 +-
.../docs/use-dbt-semantic-layer/dbt-sl.md | 8 +-
.../docs/use-dbt-semantic-layer/graphql.md | 6 +-
.../docs/docs/use-dbt-semantic-layer/jdbc.md | 6 +-
.../use-dbt-semantic-layer/quickstart-sl.md | 4 +
.../docs/use-dbt-semantic-layer/setup-sl.md | 82 +++++++++----------
.../use-dbt-semantic-layer/sl-api-overview.md | 23 +++---
website/snippets/_legacy-sl-callout.md | 4 +-
website/snippets/_new-sl-changes.md | 8 ++
website/snippets/_upgrade-new-sl.md | 7 ++
website/snippets/_v2-sl-prerequisites.md | 4 +-
13 files changed, 89 insertions(+), 85 deletions(-)
create mode 100644 website/snippets/_new-sl-changes.md
create mode 100644 website/snippets/_upgrade-new-sl.md
diff --git a/website/dbt-versions.js b/website/dbt-versions.js
index f2834c921b6..8802538a8e6 100644
--- a/website/dbt-versions.js
+++ b/website/dbt-versions.js
@@ -187,6 +187,10 @@ exports.versionedPages = [
"category": "docs/use-dbt-semantic-layer/jdbc",
"firstVersion": "1.6",
},
+ {
+ "category": "docs/use-dbt-semantic-layer/sl-manifest",
+ "firstVersion": "1.6",
+ },
]
exports.versionedCategories = [
diff --git a/website/docs/docs/build/metrics.md b/website/docs/docs/build/metrics.md
index 4492d2f5770..4e16752e3ae 100644
--- a/website/docs/docs/build/metrics.md
+++ b/website/docs/docs/build/metrics.md
@@ -6,15 +6,9 @@ keywords:
- dbt metrics layer
---
-:::info dbt metrics no longer supported
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-dbt_metrics is no longer supported in v1.6 and higher, and has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics). [MetricFlow](/docs/build/about-metricflow?version=1.6), a key component of the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), is a new framework for defining metrics in dbt and has replaced dbt_metrics.
-
-You can still use the dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older, however, these versions are no longer supported and don't receive any code fixes.
-
-To build your metrics and semantic layer, we **highly** recommend you [upgrade to dbt v1.6 and higher](/guides/migration/versions/upgrading-to-v1.6) to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
-
-:::
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
index b3f10309999..977f05bac2d 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
@@ -7,13 +7,9 @@ sidebar_label: "Available integrations"
-:::info Revamped dbt Semantic Layer
+import NewSLChanges from '/snippets/_new-sl-changes.md';
-The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-
-**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
-
-:::
+<NewSLChanges />
Query dbt metrics via external integrations using the sophisticated [dbt Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview) (Discovery, JDBC, and GraphQL API). With these APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 6901c5b1726..f252c2d55de 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -7,13 +7,9 @@ sidebar_label: "About the dbt Semantic Layer"
-:::info Revamped dbt Semantic Layer
+import NewSLChanges from '/snippets/_new-sl-changes.md';
-The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-
-**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
-
-:::
+
The dbt Semantic Layer, powered by MetricFlow, allows data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
diff --git a/website/docs/docs/use-dbt-semantic-layer/graphql.md b/website/docs/docs/use-dbt-semantic-layer/graphql.md
index 726039dcf68..fd2f60347b0 100644
--- a/website/docs/docs/use-dbt-semantic-layer/graphql.md
+++ b/website/docs/docs/use-dbt-semantic-layer/graphql.md
@@ -7,11 +7,9 @@ tags: ["semantic-layer, apis"]
-:::info
+import UpgradeSL from '/snippets/_upgrade-new-sl.md';
-The dbt Semantic Layer APIs are available for users on Team or Enterprise plans on dbt v1.6 and higher. To learn more about the Semantic Layer APIs, make sure you select v1.6 or higher in the docs navigation header and upgrade your dbt version.
-
-:::
+
ADD GRAPHQL CONTENT HERE
diff --git a/website/docs/docs/use-dbt-semantic-layer/jdbc.md b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
index d22f500765d..c7189794aa3 100644
--- a/website/docs/docs/use-dbt-semantic-layer/jdbc.md
+++ b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
@@ -7,9 +7,9 @@ tags: ["semantic-layer, apis"]
-:::info
-The dbt Semantic Layer APIs are available for users on Team or Enterprise plans on dbt v1.6 and higher. To learn more about the Semantic Layer APIs, make sure you select v1.6 or higher in the docs navigation header and upgrade your dbt version.
-:::
+import UpgradeSL from '/snippets/_upgrade-new-sl.md';
+
+
ADD JDBC CONTENT HERE
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index e844132c587..9c5411ad3c3 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -7,6 +7,10 @@ sidebar_label: "Getting started"
+import NewSLChanges from '/snippets/_new-sl-changes.md';
+
+
+
This getting started page recommends a workflow to help you get started creating your first metrics. Here are the following steps you'll take:
- [Create a semantic model](#create-a-semantic-model)
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index b16a815198c..8e1f04db105 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -7,13 +7,9 @@ sidebar_label: "Set up the dbt Semantic Layer"
-:::info Upgrade your dbt version to access the dbt Semantic Layer
+import NewSLChanges from '/snippets/_new-sl-changes.md';
-The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
-
-**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
-
-:::
+
With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
@@ -29,6 +25,44 @@ import SetUp from '/snippets/_v2-sl-prerequisites.md';
+## Set up dbt Semantic Layer
+
+You can set up the dbt Semantic Layer in dbt Cloud at the environment and project level by following these steps:
+
+
+1. Create a new environment in dbt Cloud by selecting **Deploy** and then **Environments**.
+2. Select **dbt Version 1.6** (or the latest) and enter your deployment credentials.
+3. To configure the new Semantic Layer, you must have a successful run in your new environment. We recommend running `dbt ls` since `dbt build` won’t succeed until you’ve created and defined semantic models and metrics.
+4. To enable the dbt Semantic Layer, go to the **Account Settings** page and then select the specific project you want to enable the Semantic Layer for.
+5. On the **Project Details** page, select **Configure Semantic Layer**. This will prompt you to enter data platform connection credentials for the Semantic Layer and select the environment where you want to enable it. We recommend using a less-privileged set of credentials when setting up your connection; the Semantic Layer requires SELECT and CREATE TABLE permissions.
+6. After you’ve entered your credentials, you should see a **JDBC URL** connection string. Copy this string and save it in your password manager; you can use it to connect to the Semantic Layer JDBC interface.
+7. Next, go back to the **Project Details** page and select **Generate Service Token** to create a Semantic Layer service token. Save this token for later.
+8. You’re done 🎉! The Semantic Layer is now enabled for your project.
+
+
+
+
+
+
+
+
+
+import LegacyInfo from '/snippets/_legacy-sl-callout.md';
+
+
+
+With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
+
+## Prerequisites
+
+Before you set up the dbt Semantic Layer, make sure you meet the following:
+
+
+
+
+
+
+
## Set up dbt Semantic Layer
Before you continue with the following steps, you **must** have a multi-tenant dbt Cloud account hosted in North America.
@@ -61,46 +95,10 @@ Note - It is _not_ recommended that you use your dbt Cloud credentials due to e
- An [API service token](/docs/dbt-cloud-apis/service-tokens) with job admin and metadata permissions
- Add the items above to the relevant fields in your integration tool
-
-
-
-## Related docs
-
-- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
-- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
-
-
-
-
-
-import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-
-
-
-With the dbt Semantic Layer, you'll be able to centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
-
-## Prerequisites
-
-Before you set up the dbt Semantic Layer, make sure you meet the following:
-
-
-
-
-
-
-
-## Set up dbt Semantic Layer
-
-
-## Related docs
-- [Integrated partner tools](https://www.getdbt.com/product/semantic-layer-integrations) for info on the different integration partners and their documentation
-- [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) page for more information on plan availability
-- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, properties, filters, and how to define and query metrics
-- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
index 39c9e1c571a..8e83f9dd25c 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
@@ -8,21 +8,20 @@ hide_table_of_contents: true
-:::info
+import UpgradeSL from '/snippets/_upgrade-new-sl.md';
-The revamped dbt Semantic Layer is available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/), and you must be on dbt v1.6 or higher.
+
-To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
-:::
+With the Semantic Layer APIs, you can seamlessly query the metrics you built with [MetricFlow](/docs/build/about-metricflow) to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
-Query dbt metrics via external integrations using the dbt Semantic Layer APIs. With the Semantic Layer APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+The full dbt Semantic Layer lets you create systems for data consumption and automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and MetricFlow — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
-By leveraging metrics in dbt Cloud, you can create systems for data consumption, automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
+The dbt Semantic Layer generates a [`semantic_manifest.json` artifact file](/docs/use-dbt-semantic-layer/sl-manifest). This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
-The dbt Semantic Layer generates a [`semantic_manifest.json`](/docs/use-dbt-semantic-layer/sl-manifest) artifact file. This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
+To query metrics using the dbt Semantic Layer APIs, your account must be on a [Team or Enterprise plan](https://www.getdbt.com/pricing/).
-The dbt Semantic Layer provides the following APIs:
+The dbt Semantic Layer provides the following APIs, which you can refer to for more info:
@@ -59,15 +58,15 @@ Requests to the dbt Semantic Layer APIs can be authorized through two types of A
-Query dbt metrics via external integrations using the dbt Semantic Layer APIs. With the Semantic Layer APIs, you can query metrics to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+With the Semantic Layer APIs, you can seamlessly query the metrics you built with [MetricFlow](/docs/build/about-metricflow) to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
-By leveraging metrics in dbt Cloud, you can create systems for data consumption, automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
+The full dbt Semantic Layer lets you create systems for data consumption and automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and MetricFlow — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
-The dbt Semantic Layer generates a [`semantic_manifest.json`](/docs/use-dbt-semantic-layer/sl-manifest) artifact file. This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
+The dbt Semantic Layer generates a [`semantic_manifest.json` artifact file](/docs/use-dbt-semantic-layer/sl-manifest). This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
To query metrics using the dbt Semantic Layer APIs, your account must be on a [Team or Enterprise plan](https://www.getdbt.com/pricing/).
-The dbt Semantic Layer provides the following APIs:
+The dbt Semantic Layer provides the following APIs, which you can refer to for more info:
diff --git a/website/snippets/_legacy-sl-callout.md b/website/snippets/_legacy-sl-callout.md
index 8ed4e162f75..4912b770c74 100644
--- a/website/snippets/_legacy-sl-callout.md
+++ b/website/snippets/_legacy-sl-callout.md
@@ -1,10 +1,10 @@
-:::info Upgrade your dbt version to access the dbt Semantic Layer
+:::info Upgrade your dbt version to access the new dbt Semantic Layer
The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
-**Who does this affect?** The revamped dbt Semantic Layer is available for users on Team or Enterprise plans on dbt v1.6 and higher. To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+**Who does this affect?** The revamped dbt Semantic Layer is available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/) on dbt v1.6 and higher. To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
**What should you do?** We **highly** recommend you upgrade to dbt v1.6 and higher to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
diff --git a/website/snippets/_new-sl-changes.md b/website/snippets/_new-sl-changes.md
new file mode 100644
index 00000000000..1595078cd8a
--- /dev/null
+++ b/website/snippets/_new-sl-changes.md
@@ -0,0 +1,8 @@
+
+:::tip Introducing the new dbt Semantic Layer 🎉
+
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics. The new dbt Semantic Layer introduces new semantic components, improved governance, enhanced efficiency, better data accessibility, and new Semantic Layer APIs, laying the foundation for organizing and interacting with your data effectively.
+
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt.
+
+:::
diff --git a/website/snippets/_upgrade-new-sl.md b/website/snippets/_upgrade-new-sl.md
new file mode 100644
index 00000000000..fb9c106e3df
--- /dev/null
+++ b/website/snippets/_upgrade-new-sl.md
@@ -0,0 +1,7 @@
+:::info Upgrade to access the new dbt Semantic Layer
+
+The new dbt Semantic Layer has been re-released and is now available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/); you must be on dbt v1.6 or higher.
+
+To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+
+:::
diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md
index bcd5fd02452..a7b7648fb50 100644
--- a/website/snippets/_v2-sl-prerequisites.md
+++ b/website/snippets/_v2-sl-prerequisites.md
@@ -6,8 +6,8 @@
- Use the Snowflake, Postgres, BigQuery, Databricks, or Redshift data platform
- Install the [MetricFlow CLI](https://github.com/dbt-labs/metricflow) package
* Note — After installing the package, make sure you run at least one model.
-- Set up the [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
+- Set up the [Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) in the integrated tool to import metric definitions
+ * Developer accounts will be able to query manually via the CLI using SQL, but will not be able to dynamically query or browse metrics in external tools, which requires access to the Semantic Layer APIs.
From 236893b2174831db21c0efc5dc6c387178aaeba0 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 11 Jul 2023 13:14:04 +0100
Subject: [PATCH 022/703] nit
---
website/snippets/_legacy-sl-callout.md | 2 +-
website/snippets/_upgrade-new-sl.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/snippets/_legacy-sl-callout.md b/website/snippets/_legacy-sl-callout.md
index 4912b770c74..ab079bcbf77 100644
--- a/website/snippets/_legacy-sl-callout.md
+++ b/website/snippets/_legacy-sl-callout.md
@@ -6,7 +6,7 @@ The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.c
**Who does this affect?** The revamped dbt Semantic Layer is available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/) on dbt v1.6 and higher. To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
-**What should you do?** We **highly** recommend you upgrade to dbt v1.6 and higher to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow.
+**What should you do?** We **highly** recommend you upgrade to dbt v1.6 and higher to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow. ALSO ADD LINK TO MIGRATION GUIDE
You can still use and configure the legacy dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older, however, these versions are no longer supported and won't receive any code fixes.
diff --git a/website/snippets/_upgrade-new-sl.md b/website/snippets/_upgrade-new-sl.md
index fb9c106e3df..fe3e6e532c6 100644
--- a/website/snippets/_upgrade-new-sl.md
+++ b/website/snippets/_upgrade-new-sl.md
@@ -2,6 +2,6 @@
The new dbt Semantic Layer has been re-released and is now available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/); you must be on dbt v1.6 or higher.
-To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud). ALSO ADD LINK TO MIGRATION GUIDE
:::
From 52b063ded6797b9ac9b3acc2ab74e43cf73c010f Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 11 Jul 2023 13:47:16 +0100
Subject: [PATCH 023/703] add migration guide
---
website/docs/guides/migration/sl-migration.md | 139 ++++++++++++++++++
website/sidebars.js | 1 +
website/snippets/_legacy-sl-callout.md | 11 +-
website/snippets/_upgrade-new-sl.md | 2 +-
4 files changed, 147 insertions(+), 6 deletions(-)
create mode 100644 website/docs/guides/migration/sl-migration.md
diff --git a/website/docs/guides/migration/sl-migration.md b/website/docs/guides/migration/sl-migration.md
new file mode 100644
index 00000000000..7e9083ffe74
--- /dev/null
+++ b/website/docs/guides/migration/sl-migration.md
@@ -0,0 +1,139 @@
+---
+title: "dbt Semantic Layer migration guide"
+id: sl-migration
+sidebar_label: "dbt Semantic Layer migration"
+description: "This migration guide will help you migrate from the legacy dbt Semantic Layer to the new one."
+tags: ["semantic-layer"]
+---
+
+WILL CLEAN THIS UP
+
+add who the audience is, what they need to do, and what they need to know
+
+**Technical resources:**
+
+- [Quickstart guide](/docs/build/sl-getting-started)
+- [Example dbt project](https://github.com/dbt-labs/jaffle-sl-template)
+- Configuring SL docs
+- SL API docs
+- [dbt metrics converter](https://github.com/dbt-labs/dbt-converter)
+- Developer workflow demo
+
+## Intro
+
+The legacy Semantic Layer is supported in dbt v1.5 or lower and will be deprecated. The new Semantic Layer is in Beta and supported in dbt v1.6 or higher. If you’re a user of the legacy Semantic Layer and want to use the new one, follow this guide. NOTE: **This requires downtime of your Semantic Layer.**
+
+There are three migration components:
+
+- Create semantic models and migrate metric definitions
+
+- Setup new Semantic Layer in dbt Cloud
+
+- Update applicable downstream integrations
+
+This guide assumes you have an environment with the legacy Semantic Layer enabled. There are two migration paths available. **Note: Both migration paths involve some downtime:**
+
+1. Migrating an existing environment to the new Semantic Layer.
+2. Creating a net new environment running the Semantic Layer.
+
+
+
+## Migrate an existing environment
+
+This approach will be a hard cut to the new Semantic Layer. Once you update your metrics to the new spec in 1.6, the legacy Semantic Layer will no longer work. You will need to set up the semantic layer in your deployment environment after upgrading to the new metrics spec to avoid unnecessary downtime.
+
+### **Step 1: Migrate metric configs to the new spec.**
+
+The metrics specification in dbt Core has changed in v1.6 in order to support the integration of MetricFlow. We’ve introduced two new objects that MetricFlow expects in order to build a semantic graph: [Semantic Models](/docs/build/semantic-models) and [Metrics](/docs/build/metrics-overview). You will need to migrate your Metrics configs to the new spec. There are two steps to this process:
+
+- Creating net new Semantic Model configs. These will be YAML files in your dbt project.
+- Upgrading the Metrics configs in your project to the new spec.
+
+We’ve built a migration tool that automates this process for you. Check out the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md) for installation instructions. Derived metrics are not supported in the migration tool, and will have to be migrated manually.
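+
+For illustration, a migrated config under the new spec might look like the following sketch, assuming a simple `order_total` sum metric on an `orders` model (all names are illustrative):
+
+```yaml
+semantic_models:
+  - name: orders
+    model: ref('orders')
+    defaults:
+      agg_time_dimension: ordered_at
+    entities:
+      - name: order_id
+        type: primary
+    dimensions:
+      - name: ordered_at
+        type: time
+        type_params:
+          time_granularity: day
+    measures:
+      - name: order_total
+        agg: sum
+
+metrics:
+  - name: order_total
+    label: Order total
+    type: simple
+    type_params:
+      measure: order_total
+```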
+
+To learn how to configure Semantic Models and Metrics, check out our [getting started guide](/docs/build/sl-getting-started). **Note: Developing Metrics and Semantic Models in the IDE is not supported in the Beta release.**
+
+Once you’ve created your Semantic Models and created or updated your Metrics, you can test your updated configs using the MetricFlow CLI tools:
+
+1. Install the CLI via `pip install "dbt-metricflow[your_adapter_name]"`
+2. Run `dbt parse`. This will parse your project and create a `semantic_manifest.json` file in your target directory.
+3. Run `mf list metrics` to view the metrics in your project.
+4. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
+5. Test querying a metric by running `mf query --metrics <metric_name> --group-by <dimension_name>`
+
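+For example, the end-to-end check might look like the following sketch, assuming a Snowflake adapter and an illustrative `order_total` metric with a `metric_time` dimension:
+
+```bash
+# Install the MetricFlow CLI for your adapter
+pip install "dbt-metricflow[snowflake]"
+
+# Parse the project and write target/semantic_manifest.json
+dbt parse
+
+# List metrics, then run semantic and warehouse validations
+mf list metrics
+mf validate-configs
+
+# Query the illustrative metric grouped by time
+mf query --metrics order_total --group-by metric_time
+```
+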
+When you have tested your metrics and are satisfied that they are correct, commit your changes and push them to your dbt repository. **When you merge the new configurations, the legacy semantic layer will no longer work.** It is recommended that you complete step 2 as soon as you merge the changes to minimize downtime.
+
+### **Step 2: Switch to the new Semantic Layer in your existing Cloud environment**
+
+The suggested workflow for upgrading your Cloud environment to use the new semantic layer is as follows:
+
+- Upgrade the existing deployment environment for your project to v1.6 or higher. Ensure that all jobs are running correctly with the new dbt version. Make any necessary updates to jobs that are impacted by the version upgrade.
+- To set up the Semantic Layer, you need to run a dbt job in your deployment environment. You can use any dbt command, but it must be successful.
+- To activate the new Semantic Layer, go to the **Account Settings** page and then **Project Details.** Select the specific project you want to enable the Semantic Layer for.
+- In the **Project Details** page, select **Configure Semantic Layer.** This will prompt you to enter data platform connection credentials for the Semantic Layer and select the environment where you want to enable the Semantic Layer. We recommend using a less privileged set of credentials when setting up your connection. The semantic layer requires SELECT and CREATE TABLE permissions.
+- After you’ve entered your credentials, you should see a **JDBC URL** connection string. Copy this string and save it in your password manager. You can use this to connect to the Semantic Layer JDBC interface, which you will use in the next step. This interface allows you to connect and query your metrics in downstream tools.
+- Next, head back to the **Project Details** page and select **Generate Service Token** to create a semantic layer service token. Save this token for later.
+- You’re done! The Semantic Layer should now be enabled for your project! Time to start querying metrics.
+
+### Step 3: Update connection in downstream integrations
+
+Now that your Semantic Layer is set up, you will need to update any downstream integrations that relied on the legacy Semantic Layer. To set up a new connection or update an existing connection, you should do so in the downstream tool and not in dbt Cloud.
+
+**Migration guide for Hex**
+
+…
+
+**Migration guide for Mode**
+
+…
+
+## Creating a new environment running the new Semantic Layer
+
+This approach will let you run the legacy Semantic Layer in your existing environment while you stage the metrics spec changes to your dbt project. You’ll set up a brand new deployment environment on dbt v1.6 or higher, which is essentially a clone of your current Semantic Layer environment. When you’re ready to merge your changes, the environment is already set up, and you can safely deprecate your old environment. This approach still involves downtime.
+
+### **Step 1: Set up a new environment running 1.6 and enable the SL**
+
+You will need to create a new environment for the Semantic Layer and migrate your existing jobs to this environment. The workflow is as follows:
+
+- In dbt Cloud, select **Deploy** and then **Environments** to create a new environment.
+- Select **dbt Version 1.6** and enter your deployment credentials.
+- There needs to be a successful run in your new environment to configure the Semantic Layer. We recommend running `dbt ls` as a `dbt build` won’t succeed until you’ve updated your metrics configs.
+- Migrate your jobs from your old deployment environment.
+- To activate the new Semantic Layer, go to the **Account Settings** page and then **Project Details.** Select the specific project you want to enable the Semantic Layer for.
+- In the **Project Details** page, select **Configure Semantic Layer.** This will prompt you to enter data platform connection credentials for the Semantic Layer and select the environment where you want to enable the Semantic Layer. We recommend using a less privileged set of credentials when setting up your connection. The semantic layer requires SELECT and CREATE TABLE permissions.
+- After you’ve entered your credentials, you should see a **JDBC URL** connection string. Copy this string and save it in your password manager. This can be used to connect to the Semantic Layer JDBC interface.
+- Next, head back to the **Project Details** page and select **Generate Service Token** to create a Semantic Layer service token. Save this token for later.
+- You’re done! The semantic layer should now be enabled for your project! The legacy Semantic Layer will still be running in your old environment.
+
+### **Step 2: Migrate metric configs to the new spec.**
+
+The metrics specification in dbt Core has changed in v1.6 in order to support the integration of MetricFlow. We’ve introduced two new objects that MetricFlow expects in order to build a semantic graph: [Semantic Models](/docs/build/semantic-models) and [Metrics](/docs/build/metrics-overview). You will need to migrate your Metrics configs to the new spec. The process broadly looks like this:
+
+- Creating net new Semantic Model configs. These will be YAML files in your dbt project.
+- Upgrading the Metrics configs in your project to the new spec.
+
+We’ve built a migration tool that automates this process for you. Check out the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md) for installation instructions. Derived metrics are not supported in the migration tool, and will have to be migrated manually.
+
+To learn how to configure Semantic Models and Metrics, check out our [getting started guide](/docs/build/sl-getting-started). **Note: Developing Metrics and Semantic Models in the IDE is not supported in the Beta release.**
+
+Once you’ve created your Semantic Models and created or updated your Metrics, you can test your updated configs using the MetricFlow CLI tools:
+
+1. Install the CLI via `pip install "dbt-metricflow[your_adapter_name]"`
+2. Run `dbt parse`. This will parse your project and create a `semantic_manifest.json` file in your target directory.
+3. Run `mf list metrics` to view the metrics in your project.
+4. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
+5. Test querying a metric by running `mf query --metrics <metric_name> --group-by <dimension_name>`
+
+When you have tested your metrics and are satisfied that they are correct, commit your changes and push them to your dbt repository. **When you merge the new configurations, the legacy Semantic Layer will no longer work.**
+
+### Step 3: Update connection in downstream integrations
+
+Now that your semantic layer is set up, you will need to update any downstream integrations that relied on the legacy Semantic Layer. To set up a new connection or update an existing connection, you should do so in the downstream tool and not in dbt Cloud.
+
+**Migration guide for Hex**
+
+…
+
+**Migration guide for Mode**
+
+…
diff --git a/website/sidebars.js b/website/sidebars.js
index 2c68ecf443a..c372fd9f2a9 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -968,6 +968,7 @@ const sidebarSettings = {
type: "category",
label: "Migration",
items: [
+ "guides/migration/sl-migration",
{
type: "category",
label: "Versions",
diff --git a/website/snippets/_legacy-sl-callout.md b/website/snippets/_legacy-sl-callout.md
index ab079bcbf77..eb62b6e3e5b 100644
--- a/website/snippets/_legacy-sl-callout.md
+++ b/website/snippets/_legacy-sl-callout.md
@@ -1,13 +1,14 @@
:::info Upgrade your dbt version to access the new dbt Semantic Layer
-The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics.
+The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), making it more efficient to define and query metrics. This means the legacy Semantic Layer (dbt v1.5 or lower) is no longer supported and won't receive any code fixes.
-**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key component and a new way framework for defining metrics in dbt.
+**What’s changed?** The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), one of the Semantic Layer's key components and a new framework for defining metrics in dbt. It also introduces improved governance, enhanced efficiency, data accessibility, and new Semantic Layer APIs.
-**Who does this affect?** The revamped dbt Semantic Layer is available for users on a [Team or Enterprise plans](https://www.getdbt.com/pricing/) on dbt v1.6 and higher. To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud).
+**Who does this affect?** The new and revamped dbt Semantic Layer is available for users on [Team or Enterprise plans](https://www.getdbt.com/pricing/) using dbt v1.6 or higher.
-**What should you do?** We **highly** recommend you upgrade to dbt v1.6 and higher to use the new and most recent version of the dbt Semantic Layer, powered by MetricFlow. ALSO ADD LINK TO MIGRATION GUIDE
+**What should you do?** If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
-You can still use and configure the legacy dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older, however, these versions are no longer supported and won't receive any code fixes.
+-is this correct? no right?
+Although you can still use the legacy dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older, these versions are no longer supported and won't receive any code fixes.
:::
diff --git a/website/snippets/_upgrade-new-sl.md b/website/snippets/_upgrade-new-sl.md
index fe3e6e532c6..b30f8318068 100644
--- a/website/snippets/_upgrade-new-sl.md
+++ b/website/snippets/_upgrade-new-sl.md
@@ -2,6 +2,6 @@
The new dbt Semantic Layer has been re-released and is now available for users on a [Team or Enterprise plans](https://www.getdbt.com/pricing/) and you must be on dbt v1.6 and higher.
-To learn more about it, make sure you select v1.6 or higher in the docs navigation header and [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud). ALSO ADD LINK TO MIGRATION GUIDE
+If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
:::
From 21ee65420ea461d886ffe538c3f385b7f5af41f0 Mon Sep 17 00:00:00 2001
From: Ly Nguyen
Date: Tue, 11 Jul 2023 11:54:11 -0700
Subject: [PATCH 024/703] Info about admin api v2 vs v3
---
website/docs/docs/dbt-cloud-apis/admin-cloud-api.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-cloud-apis/admin-cloud-api.md b/website/docs/docs/dbt-cloud-apis/admin-cloud-api.md
index 62b13f7aeb5..06d44fcbbc9 100644
--- a/website/docs/docs/dbt-cloud-apis/admin-cloud-api.md
+++ b/website/docs/docs/dbt-cloud-apis/admin-cloud-api.md
@@ -10,7 +10,7 @@ The dbt Cloud Administrative API is enabled by default for [Team and Enterprise
- Manage your dbt Cloud account
- and more
-Check out our dbt Cloud Admin API docs to help you access the API:
+dbt Cloud currently supports two versions of the Administrative API: v2 and v3. In general, v3 is the recommended version to use, but not all v2 routes have been upgraded to it yet; we're currently working on this. If you can't find what you're looking for in the v3 docs, check the shorter list of v2 endpoints and see if that helps.
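+
+As a quick sketch, a v2 call looks like the following (the account ID and token environment variables are assumptions you'd replace with your own values):
+
+```bash
+# List the jobs in an account using the v2 Administrative API
+curl --request GET \
+  --url "https://cloud.getdbt.com/api/v2/accounts/${DBT_CLOUD_ACCOUNT_ID}/jobs/" \
+  --header "Authorization: Token ${DBT_CLOUD_API_TOKEN}"
+```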
From 1e694f60664fa60efa61cffbf42cd894a8596f99 Mon Sep 17 00:00:00 2001
From: Abhishek Singh
Date: Tue, 11 Jul 2023 18:10:49 -0700
Subject: [PATCH 025/703] Added documentation for Oracle Python models
---
.../warehouse-setups/oracle-setup.md | 156 +++++++++++++++++-
1 file changed, 154 insertions(+), 2 deletions(-)
diff --git a/website/docs/reference/warehouse-setups/oracle-setup.md b/website/docs/reference/warehouse-setups/oracle-setup.md
index 80ec7a39254..fbca33a64d9 100644
--- a/website/docs/reference/warehouse-setups/oracle-setup.md
+++ b/website/docs/reference/warehouse-setups/oracle-setup.md
@@ -215,9 +215,9 @@ Note that Oracle Client versions 21c and 19c are not supported on Windows 7.
-## Configure wallet for Oracle Autonomous Database in Cloud
+## Configure wallet for Oracle Autonomous Database (ADB-S) in Cloud
-dbt can connect to Oracle Autonomous Database (ADB) in Oracle Cloud using either TLS (Transport Layer Security) or mutual TLS (mTLS). TLS and mTLS provide enhanced security for authentication and encryption.
+dbt can connect to Oracle Autonomous Database (ADB-S) in Oracle Cloud using either TLS (Transport Layer Security) or mutual TLS (mTLS). TLS and mTLS provide enhanced security for authentication and encryption.
A database username and password is still required for dbt connections which can be configured as explained in the next section [Connecting to Oracle Database](#connecting-to-oracle-database).
+
+
+## Python Models using Oracle Autonomous Database (ADB-S)
+
+Oracle's Autonomous Database Serverless (ADB-S) users can run dbt-py models using Oracle Machine Learning (OML4Py), which is available without any extra setup.
+
+### Features
+- User-defined Python functions run in an ADB-S-spawned Python 3.10 runtime
+- Import [3rd party Python packages](https://docs.oracle.com/en/database/oracle/machine-learning/oml-notebooks/omlug/oml4py-notebook.html#GUID-78225241-CD6B-4588-AD4B-799079FA1784) installed in the default Python runtime
+- Access to the database session in the Python function
+- DataFrame read API to read `TABLES`, `VIEWS` and ad-hoc `SELECT` queries as DataFrames
+- DataFrame write API to write DataFrames as `TABLES`
+- Supports both table and incremental materialization
+- Integration with conda (Coming Soon)
+
+### Setup
+
+#### Required roles
+
+- User must be non-ADMIN to execute the Python function
+- User must be granted the `OML_DEVELOPER` role
+
+#### OML Cloud Service URL
+
+OML Cloud Service URL is of the following format:
+```text
+https://tenant1-dbt.adb.us-sanjose-1.oraclecloudapps.com
+```
+In this example,
+ - `tenant1` is the tenancy ID
+ - `dbt` is the database name
+ - `us-sanjose-1` is the datacenter region
+ - `oraclecloudapps.com` is the root domain
+
+Add `oml_cloud_service_url` to your existing `~/.dbt/profiles.yml`
+
+
+
+```yaml
+dbt_test:
+ target: dev
+ outputs:
+ dev:
+ type: oracle
+ user: "{{ env_var('DBT_ORACLE_USER') }}"
+ pass: "{{ env_var('DBT_ORACLE_PASSWORD') }}"
+ tns_name: "{{ env_var('DBT_ORACLE_TNS_NAME') }}"
+ schema: "{{ env_var('DBT_ORACLE_SCHEMA') }}"
+ oml_cloud_service_url: "https://tenant1-dbt.adb.us-sanjose-1.oraclecloudapps.com"
+```
+
+
+### Python model configurations
+
+| Configuration | Datatype | Examples |
+|--|--------|-----------------------------------------------------------------------------------------------|
+| Materialization | String | `dbt.config(materialized="incremental")` or `dbt.config(materialized="table")` |
+| Service | String | `dbt.config(service="HIGH")` or `dbt.config(service="MEDIUM")` or `dbt.config(service="LOW")` |
+| Async Mode | Boolean | `dbt.config(async_flag=True)` |
+| Timeout in seconds only to be used with **_async_** mode (`min: 1800` and `max: 43200`) | Integer | `dbt.config(timeout=1800)` |
+
+In async mode, dbt-oracle will schedule a Python job, poll the job's status and wait for it to complete.
+Without async mode, dbt-oracle will immediately invoke the Python job in a blocking manner. Use async mode for long-running Python jobs.
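+
+As a minimal sketch, a model combining these configurations for a long-running job might look like this (the model name, service level, and timeout are illustrative):
+
+```python
+def model(dbt, session):
+    # Materialize as a table on the HIGH service level
+    dbt.config(materialized="table")
+    dbt.config(service="HIGH")
+    # Schedule as an async job and poll it for up to an hour
+    dbt.config(async_flag=True)
+    dbt.config(timeout=3600)
+
+    # Return an oml.core.DataFrame referring to an upstream model
+    return dbt.ref("sales_cost")
+```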
+
+### Python model examples
+
+#### Refer to another model
+
+Use `dbt.ref(model_name)` to reference either a SQL or Python model.
+
+```python
+def model(dbt, session):
+ # Must be either table or incremental (view is not currently supported)
+ dbt.config(materialized="table")
+    # returns an oml.core.DataFrame referring to a dbt model
+ s_df = dbt.ref("sales_cost")
+ return s_df
+```
+
+#### Refer to a source
+
+Use `dbt.source(source_schema, table_name)` to reference a source table.
+
+```python
+def model(dbt, session):
+ # Must be either table or incremental (view is not currently supported)
+ dbt.config(materialized="table")
+ # oml.core.DataFrame representing a datasource
+ s_df = dbt.source("sh_database", "channels")
+ return s_df
+
+```
+
+#### Incremental materialization
+
+```python
+def model(dbt, session):
+ # Must be either table or incremental
+ dbt.config(materialized="incremental")
+    # oml.core.DataFrame representing a datasource
+ sales_cost_df = dbt.ref("sales_cost")
+
+ if dbt.is_incremental:
+ cr = session.cursor()
+ result = cr.execute(f"select max(cost_timestamp) from {dbt.this.identifier}")
+ max_timestamp = result.fetchone()[0]
+ # filter new rows
+ sales_cost_df = sales_cost_df[sales_cost_df["COST_TIMESTAMP"] > max_timestamp]
+
+ return sales_cost_df
+```
+
+#### Concatenate a new column in a DataFrame
+
+```python
+
+def model(dbt, session):
+ dbt.config(materialized="table")
+ dbt.config(async_flag=True)
+ dbt.config(timeout=1800)
+
+ sql = f"""SELECT customer.cust_first_name,
+ customer.cust_last_name,
+ customer.cust_gender,
+ customer.cust_marital_status,
+ customer.cust_street_address,
+ customer.cust_email,
+ customer.cust_credit_limit,
+ customer.cust_income_level
+ FROM sh.customers customer, sh.countries country
+              WHERE country.country_iso_code = 'US'
+ AND customer.country_id = country.country_id"""
+
+    # session.sync(query) will run the SQL query and return an oml.core.DataFrame
+ us_potential_customers = session.sync(query=sql)
+
+ # Compute an ad-hoc anomaly score on the credit limit
+ median_credit_limit = us_potential_customers["CUST_CREDIT_LIMIT"].median()
+ mean_credit_limit = us_potential_customers["CUST_CREDIT_LIMIT"].mean()
+ anomaly_score = (us_potential_customers["CUST_CREDIT_LIMIT"] - median_credit_limit)/(median_credit_limit - mean_credit_limit)
+
+ # Add a new column "CUST_CREDIT_ANOMALY_SCORE"
+ us_potential_customers = us_potential_customers.concat({"CUST_CREDIT_ANOMALY_SCORE": anomaly_score.round(3)})
+
+    # Return potential customers dataset as an oml.core.DataFrame
+ return us_potential_customers
+
+```
+
+
+
## Supported Features
@@ -495,6 +646,7 @@ dbt_test:
- Exposures
- Document generation
- Serve project documentation as a website
+- Python Models (from dbt-oracle version 1.5.1)
- All dbt commands are supported
## Not Supported features
From e223a0c6690da14cfd3ebe307dfbd168adde38e3 Mon Sep 17 00:00:00 2001
From: schottj
Date: Tue, 11 Jul 2023 18:49:38 -0700
Subject: [PATCH 026/703] Update job-scheduler.md
recent change to run slots
---
website/docs/docs/deploy/job-scheduler.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index 479dfc7bab2..2c06980d429 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job
| Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. |
| Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. |
| Run | A single, unique execution of a dbt job. |
-| Run slot | Run slots control the number of jobs that can run concurrently. Each account has a fixed number of run slots, depending on the plan tier, that are shared across projects in the account. Each running job occupies a run slot for the duration of the run, so purchasing more run slots enables more jobs to execute in parallel. |
+| Run slot | Run slots control the number of jobs that can run concurrently. Developer and team plan accounts have a fixed number of run slots; whereas, enterprise customers have unlimited run slots. Each running job occupies a run slot for the duration of the run. |
| Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. |
| Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. |
From c3323c3d4a13a31ccb5c6824dbdddb718feb0875 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Wed, 12 Jul 2023 10:30:17 +0100
Subject: [PATCH 027/703] Update website/docs/docs/deploy/job-scheduler.md
---
website/docs/docs/deploy/job-scheduler.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index 2c06980d429..eae556309ba 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job
| Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. |
| Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. |
| Run | A single, unique execution of a dbt job. |
-| Run slot | Run slots control the number of jobs that can run concurrently. Developer and team plan accounts have a fixed number of run slots; whereas, enterprise customers have unlimited run slots. Each running job occupies a run slot for the duration of the run. |
+| Run slot | Run slots control the number of jobs that can run concurrently. Developer and Team plan accounts have a fixed number of run slots, and Enterprise users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. |
| Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. |
| Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. |
From e320f08d49bed910841ae4097c35015aa65306db Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 12 Jul 2023 11:54:19 +0100
Subject: [PATCH 028/703] adding roxi's updates
---
.../docs/use-dbt-semantic-layer/dbt-sl.md | 28 +++++++++----------
.../docs/use-dbt-semantic-layer/setup-sl.md | 28 ++++++++++++++++---
website/snippets/_legacy-sl-callout.md | 2 +-
website/snippets/_v2-sl-prerequisites.md | 3 +-
4 files changed, 40 insertions(+), 21 deletions(-)
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index f252c2d55de..0329bad15fb 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -11,22 +11,20 @@ import NewSLChanges from '/snippets/_new-sl-changes.md';
-The dbt Semantic Layer, powered by MetricFlow, allows data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
+The dbt Semantic Layer, powered by MetricFlow, allows data teams to centrally define critical business metrics like `revenue` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and other data applications. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice. It automatically handles joins across your data.
The result? You have less duplicate coding for data teams and more consistency for data consumers.
-The dbt Semantic Layer has four main parts:
+The dbt Semantic Layer has the following major components:
-- Define your metrics in version-controlled dbt project code using MetricFlow
-- Import your metric definitions via the [Discovery API](/docs/dbt-cloud-apis/discovery-api)
-- Query your metric data using the [Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) (JDBC driver, GraphQL API, and the Discovery API)
-- Explore and analyze your metrics in downstream tools
-
-ADD ARCHITECTURE IMAGE
+- Use [MetricFlow](/docs/build/build-metrics-intro) to define and test your semantic models and metrics in version-controlled dbt project code
+- Use [dbt Semantic Layer interfaces] to explore and analyze these metrics in downstream tools
### What makes the dbt Semantic Layer different?
-The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications. You can also use the dbt Semantic Layer to query models and use macros.
+The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications.
+
+To read more about why you need a universal Semantic Layer, see this [blog post](https://www.getdbt.com/blog/universal-semantic-layer/).
## Prerequisites
@@ -62,11 +60,11 @@ The dbt Semantic Layer product architecture includes four primary components:
| Components | Information | Developer plans | Team plans | Enterprise plans | License |
| --- | --- | :---: | :---: | :---: | --- |
| **[MetricFlow](/docs/build/about-metricflow)** | Use MetricFlow in dbt to centrally define your metrics. MetricFlow is a key component of the dbt Semantic Layer and is responsible for SQL query construction and defining specifications for dbt semantic models and metrics. | ✅ | ✅ | ✅ | BSL package (code is source available) |
-| **[dbt Server](https://github.com/dbt-labs/dbt-server)**| A persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations. | ✅ | ✅ | ✅ | BSL |
-| **SQL Proxy** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja like query models and metrics, use macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅
_* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-| **[Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)** | Use the Semantic Layer APIS, which includes the Discovery, JDBC, and GraphQL API, to access metric definitions using integrations. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise |
+| **MetricFlow Server** | A proprietary server that takes metric requests and turns them into optimized SQL for your data platform. | ✅ | ✅ | ✅ | BSL |
+| **Semantic Layer Gateway**| Reverse-proxy that accepts dbt-SQL (SQL + Jinja like query models and metrics, use macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅
_* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+| **[Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)** | Use the Semantic Layer APIs, which include the Discovery, JDBC, and GraphQL APIs, to access metric definitions using integrations. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-ADD COMPONENT IMAGE
+ADD ARCHITECTURE IMAGE
dbt Semantic Layer integrations will:
@@ -114,9 +112,9 @@ To read about best practices on structuring and organizing your metrics, review
## Related questions
- How do I migrate from the old Semantic Layer to the new one?
+ How do I migrate from the legacy Semantic Layer to the new one?
- ADD INFO HERE AND MIGRATION SCRIPT.
+ If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index 8e1f04db105..60b592074e0 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -29,20 +29,34 @@ import SetUp from '/snippets/_v2-sl-prerequisites.md';
You can set up the dbt Semantic Layer in dbt Cloud at the environment and project level by following these steps:
+1. Create a new [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) or use an existing environment on dbt 1.6 or higher.
+2. Navigate to **Account Settings** and select the specific project you want to enable the Semantic Layer for.
+3. In the **Project Details** page, navigate to **Configure Semantic Layer**.
+4. Enter the credentials you want the Semantic Layer to use specific to your data platform.
+ * Note: We recommend using a less privileged set of credentials because your Semantic Layer users will be querying it in downstream applications. At a minimum, the Semantic Layer needs read access to the schema(s) that contain the dbt models you used to build your semantic models (see the example grants after this list).
+5. Select the deployment environment you want for the Semantic Layer.
+6. Next, go back to the **Project Details** page and select **Generate Service Token** to create a Semantic Layer service token.
+7. Save and copy your environment ID, service token, and host to input into a downstream tool.
+8. You’ve configured the semantic layer 🎉! You can now define your [semantic models and metrics](/docs/build/build-metrics-intro) in your project.
+
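+As an illustration of the less-privileged-credentials recommendation, grants for a dedicated Semantic Layer user might look like the following sketch (Snowflake syntax; the database, schema, and role names are assumptions):
+
+```sql
+-- Read-only access to the schema containing the dbt models behind your semantic models
+grant usage on database analytics to role semantic_layer_role;
+grant usage on schema analytics.marts to role semantic_layer_role;
+grant select on all tables in schema analytics.marts to role semantic_layer_role;
+grant select on future tables in schema analytics.marts to role semantic_layer_role;
+```
+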
+:::tip
+If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
+:::
+
-
@@ -65,6 +79,8 @@ Before you set up the dbt Semantic Layer, make sure you meet the following:
## Set up dbt Semantic Layer
+If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
+
Before you continue with the following steps, you **must** have a multi-tenant dbt Cloud account hosted in North America.
* Team and Enterprise accounts can set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated partner tool to import metric definitions.
* Developer accounts can query the Proxy Server using SQL but won't be able to browse dbt metrics in external tools, which requires access to the Discovery API.
@@ -99,6 +115,10 @@ Note - It is _not_ recommended that you use your dbt Cloud credentials due to e
-
-
+
+- [Build your metrics](/docs/build/build-metrics-intro)
+- [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations)
+- [Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)
+- [Migrate your legacy Semantic Layer](/guides/migration/sl-migration)
+- [Get started with the dbt Semantic Layer](/docs/use-dbt-semantic-layer/quickstart-sl)
diff --git a/website/snippets/_legacy-sl-callout.md b/website/snippets/_legacy-sl-callout.md
index eb62b6e3e5b..bd0c374dff5 100644
--- a/website/snippets/_legacy-sl-callout.md
+++ b/website/snippets/_legacy-sl-callout.md
@@ -8,7 +8,7 @@ The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.c
**What should you do?** If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
--is this correct? no right?
+<!-- is this correct? no right? -->
Although you can still use the legacy dbt Semantic Layer using the deprecated dbt_metrics package on v1.5 or older, these versions are no longer supported and won't receive any code fixes.
:::
diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md
index a7b7648fb50..e8027f19c1b 100644
--- a/website/snippets/_v2-sl-prerequisites.md
+++ b/website/snippets/_v2-sl-prerequisites.md
@@ -3,7 +3,8 @@
- Have a [multi-tenant dbt Cloud](/docs/deploy/regions) instance, hosted in North America
- Have both your production and development environments running dbt version 1.6 or higher
-- Use Snowflake and Postgres, BigQuery, Databricks, and Redshift data platform
+- Use a Snowflake, BigQuery, Databricks, or Redshift data platform
+- A successful run in the environment where your Semantic Layer is configured
- Install the [MetricFlow CLI](https://github.com/dbt-labs/metricflow) package
* Note — After installing the package, make sure you run at least one model.
- Set up the [Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview) in the integrated tool to import metric definitions
From 280df6032f0f4a4808ab010a07adc648dcb0a5b6 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 12 Jul 2023 11:54:35 +0100
Subject: [PATCH 029/703] remove space
---
website/docs/docs/use-dbt-semantic-layer/setup-sl.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index 60b592074e0..eaf7a611304 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -112,7 +112,6 @@ Note - It is _not_ recommended that you use your dbt Cloud credentials due to e
- Add the items above to the relevant fields in your integration tool
-
From 176b75730c5e27954da79956ad1c94b6b04eed91 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Wed, 12 Jul 2023 13:46:02 +0100
Subject: [PATCH 030/703] Update
website/docs/docs/core/connect-data-platform/oracle-setup.md
---
website/docs/docs/core/connect-data-platform/oracle-setup.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/core/connect-data-platform/oracle-setup.md b/website/docs/docs/core/connect-data-platform/oracle-setup.md
index b29658e6808..f601709654b 100644
--- a/website/docs/docs/core/connect-data-platform/oracle-setup.md
+++ b/website/docs/docs/core/connect-data-platform/oracle-setup.md
@@ -543,7 +543,7 @@ dbt_test:
| Timeout in seconds only to be used with **_async_** mode (`min: 1800` and `max: 43200`) | Integer | `dbt.config(timeout=1800)` |
In async mode, dbt-oracle will schedule a Python job, poll the job's status and wait for it to complete.
-Without async mode, dbt-oracle will immediately invoke the Python job in a blocking manner. Use async mode for long running Python job
+Without async mode, dbt-oracle will immediately invoke the Python job in a blocking manner. Use async mode for long-running Python jobs.
### Python model examples
From 704403017207a9ae4d5c61b989fde5c6baa712f2 Mon Sep 17 00:00:00 2001
From: Owen <74567580+owenprough-sift@users.noreply.github.com>
Date: Wed, 12 Jul 2023 09:43:06 -0400
Subject: [PATCH 031/703] Update incremental-models.md
Address 2606
---
website/docs/docs/build/incremental-models.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 15b24520711..7af29e9d903 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -57,6 +57,7 @@ from raw_app_data.events
{% if is_incremental() %}
-- this filter will only be applied on an incremental run
+ -- (uses > to include records whose timestamp occurred since the last run of this model)
where event_time > (select max(event_time) from {{ this }})
{% endif %}
@@ -137,6 +138,7 @@ from raw_app_data.events
{% if is_incremental() %}
-- this filter will only be applied on an incremental run
+ -- (uses >= to include records arriving later on the same day as the last run of this model)
where date_day >= (select max(date_day) from {{ this }})
{% endif %}
From 6a675e2e70bff9547e6201802e6208d964bad67b Mon Sep 17 00:00:00 2001
From: Jeremy Cohen
Date: Wed, 12 Jul 2023 15:58:00 +0200
Subject: [PATCH 032/703] Semantic Model attribute is `defaults` not `default`
Called defaults not default
---
website/docs/docs/build/semantic-models.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md
index 043973ac154..c4b14d703fb 100644
--- a/website/docs/docs/build/semantic-models.md
+++ b/website/docs/docs/build/semantic-models.md
@@ -31,7 +31,7 @@ semantic_models:
- name: transaction # A semantic model with the name Transactions
model: ref('fact_transactions') # References the dbt model named `fact_transactions`
description: "Transaction fact table at the transaction level. This table contains one row per transaction and includes the transaction timestamp."
- default:
+ defaults:
agg_time_dimension: transaction_date
entities: # Entities included in the table are defined here. MetricFlow will use these columns as join keys.
From 7ed0db1d7cd39713bfcf12733386351b532c9aad Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 12 Jul 2023 10:27:58 -0400
Subject: [PATCH 033/703] update workflow name
---
.github/workflows/links.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml
index 48409555d70..19b0449669d 100644
--- a/.github/workflows/links.yml
+++ b/.github/workflows/links.yml
@@ -1,4 +1,4 @@
-name: Check Docs Links
+name: Docs Link Checker
on: [push]
# schedule:
From 04f0265f04da2be8ec5256b8d499513cd105d7f0 Mon Sep 17 00:00:00 2001
From: Jason Karlavige
Date: Wed, 12 Jul 2023 10:48:42 -0400
Subject: [PATCH 034/703] update to run at 12:00 UTC every day
---
.github/workflows/links.yml | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml
index 19b0449669d..7e0dd52b60c 100644
--- a/.github/workflows/links.yml
+++ b/.github/workflows/links.yml
@@ -1,9 +1,9 @@
name: Docs Link Checker
-on: [push]
- # schedule:
- # # Run every Monday morning at 6:00AM UTC
- # - cron: '0 6 * * 1'
+on:
+ schedule:
+ # Run every day at 12:00 UTC
+ - cron: '0 12 * * *'
jobs:
markdown-link-check:
From 68cd06bfb319d4930535dd8bcfd838b1b51105e8 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Wed, 12 Jul 2023 15:57:30 +0100
Subject: [PATCH 035/703] update to metadata only token
updating from discovery token to metadata only token per [slack thread](https://dbt-labs.slack.com/archives/C017GDLAF7D/p1689149029221619)
---
website/docs/docs/deploy/dashboard-status-tiles.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/deploy/dashboard-status-tiles.md b/website/docs/docs/deploy/dashboard-status-tiles.md
index 361813c526c..67aa1a93c33 100644
--- a/website/docs/docs/deploy/dashboard-status-tiles.md
+++ b/website/docs/docs/deploy/dashboard-status-tiles.md
@@ -22,7 +22,7 @@ First, be sure to enable [source freshness](/docs/deploy/source-freshness) in th
In order to set up your dashboard status tile, here is what you need:
-1. **Discovery Token.** You can learn how to set up a Discovery only token [here](/docs/dbt-cloud-apis/service-tokens).
+1. **Metadata Only token.** You can learn how to set up a Metadata-Only token [here](/docs/dbt-cloud-apis/service-tokens).
2. **Exposure name.** You can learn more about how to set up exposures [here](/docs/build/exposures).
From 6619cf5864e3df1635abeeaf2699a5fcfaebc29a Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 12 Jul 2023 17:55:47 +0100
Subject: [PATCH 036/703] change to defaults
---
website/docs/docs/build/about-metricflow.md | 2 +-
website/docs/docs/build/dimensions.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/docs/docs/build/about-metricflow.md b/website/docs/docs/build/about-metricflow.md
index f35bed24044..6ec7ecfe4b5 100644
--- a/website/docs/docs/build/about-metricflow.md
+++ b/website/docs/docs/build/about-metricflow.md
@@ -129,7 +129,7 @@ semantic_models:
description: "A record for every transaction that takes place. Carts are considered multiple transactions for each SKU."
owners: support@getdbt.com
model: (ref('transactions'))
- default:
+ defaults:
agg_time_dimension: metric_time
# --- entities ---
diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md
index ec92f7595b2..ce63a0f0ed9 100644
--- a/website/docs/docs/build/dimensions.md
+++ b/website/docs/docs/build/dimensions.md
@@ -254,7 +254,7 @@ semantic_models:
- name: sales_person_tiers
description: SCD Type II table of tiers for sales people
model: {{ref(sales_person_tiers)}}
- default:
+ defaults:
agg_time_dimension: tier_start
dimensions:
From 0e86164e4f52bdf5db63efb5895088646979767e Mon Sep 17 00:00:00 2001
From: Jessie Chen <121250701+jiezhen-chen@users.noreply.github.com>
Date: Wed, 12 Jul 2023 12:05:33 -0700
Subject: [PATCH 037/703] Update redshift-setup.md
---
.../docs/core/connect-data-platform/redshift-setup.md | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index ed01b28501d..1f7f130b8e7 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -52,7 +52,7 @@ pip is the easiest way to install the adapter:
| ------------- | ------- | ------------ |
| `type` | redshift | The type of data warehouse you are connecting to|
| `host` | hostname.region.redshift.amazonaws.com| Host of cluster |
-| `port` | 5439 | |
+| `port` | 5439 | Optional. Defaults to 5439 |
| `dbname` | my_db | Database name|
| `schema` | my_schema | Schema name|
| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
@@ -63,7 +63,6 @@ pip is the easiest way to install the adapter:
| `ra3_node` | true | Optional, default False. Enables cross-database sources|
| `autocommit` | true | Optional, default True. Enables autocommit after each statement|
| `retries` | 1 | Number of retries |
-| `region` | us-east-1 | Optional |
## Authentication Parameters
@@ -111,10 +110,12 @@ company-name:
host: hostname.region.redshift.amazonaws.com
user: username
password: password1
- port: 5439
dbname: analytics
schema: analytics
connect_timeout: None
+
+ # Optional Redshift configs:
+ port: 5439
sslmode: prefer
role: None
ra3_node: true
@@ -148,6 +149,8 @@ please refer to the official AWS documentation on [Configuration and credential
| `iam_profile` | analyst | dbt will use the specified profile from your ~/.aws/config file |
| `cluster_id` | CLUSTER_ID| Required for IAM |
| `user` | username | Account user to log into your cluster |
+| `region` | us-east-1 | Required for IAM authentication |
+
@@ -170,7 +173,7 @@ please refer to the official AWS documentation on [Configuration and credential
autocreate: true
db_groups: ['ANALYSTS']
- # Other Redshift configs:
+ # Optional Redshift configs:
port: 5439
dbname: analytics
schema: analytics
From 7647e6cb526d9313d478226e6be976f51ebc686c Mon Sep 17 00:00:00 2001
From: Jessie Chen <121250701+jiezhen-chen@users.noreply.github.com>
Date: Wed, 12 Jul 2023 13:32:03 -0700
Subject: [PATCH 038/703] Update redshift-setup.md
---
.../core/connect-data-platform/redshift-setup.md | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index 1f7f130b8e7..3e358a3c352 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -57,7 +57,7 @@ pip is the easiest way to install the adapter:
| `schema` | my_schema | Schema name|
| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
| `sslmode` | prefer | optional, set the sslmode to connect to the database. Default prefer, which will use 'verify-ca' to connect. For more information on `sslmode`, see Redshift note below|
-| `role` | None | Optional|
+| `role` | None | Optional, user identifier of the current session|
| `autocreate` | false | Optional, default false. Creates user if they do not exist |
| `db_groups` | ['ANALYSTS'] | Optional. A list of existing database group names that the DbUser joins for the current session |
| `ra3_node` | true | Optional, default False. Enables cross-database sources|
@@ -112,7 +112,6 @@ company-name:
password: password1
dbname: analytics
schema: analytics
- connect_timeout: None
# Optional Redshift configs:
port: 5439
@@ -121,6 +120,8 @@ company-name:
ra3_node: true
autocommit: true
threads: 4
+ connect_timeout: None
+
```
@@ -170,13 +171,11 @@ please refer to the official AWS documentation on [Configuration and credential
host: hostname.region.redshift.amazonaws.com
user: alice
iam_profile: analyst
- autocreate: true
- db_groups: ['ANALYSTS']
+ dbname: analytics
+ schema: analytics
# Optional Redshift configs:
port: 5439
- dbname: analytics
- schema: analytics
threads: 4
connect_timeout: None
[retries](#retries): 1
@@ -185,6 +184,8 @@ please refer to the official AWS documentation on [Configuration and credential
ra3_node: true
autocommit: true
region: us-east-1
+ autocreate: true
+ db_groups: ['ANALYSTS']
```
From 21d60d75b488e095af8c65b82f9b18d9e78661c9 Mon Sep 17 00:00:00 2001
From: Jessie Chen <121250701+jiezhen-chen@users.noreply.github.com>
Date: Wed, 12 Jul 2023 15:05:31 -0700
Subject: [PATCH 039/703] Update redshift-setup.md
---
website/docs/docs/core/connect-data-platform/redshift-setup.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index 3e358a3c352..c95af0a778e 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -52,7 +52,7 @@ pip is the easiest way to install the adapter:
| ------------- | ------- | ------------ |
| `type` | redshift | The type of data warehouse you are connecting to|
| `host` | hostname.region.redshift.amazonaws.com| Host of cluster |
-| `port` | 5439 | Optional. Defaults to 5439 |
+| `port` | 5439 | |
| `dbname` | my_db | Database name|
| `schema` | my_schema | Schema name|
| `connect_timeout` | `None` or 30 | Number of seconds before connection times out|
From 1b92f8c8b1b95320992c5196f6f64594655bf1b0 Mon Sep 17 00:00:00 2001
From: Jessie Chen <121250701+jiezhen-chen@users.noreply.github.com>
Date: Wed, 12 Jul 2023 15:50:36 -0700
Subject: [PATCH 040/703] Update redshift-setup.md
---
.../docs/docs/core/connect-data-platform/redshift-setup.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/docs/docs/core/connect-data-platform/redshift-setup.md b/website/docs/docs/core/connect-data-platform/redshift-setup.md
index c95af0a778e..e4588bb670d 100644
--- a/website/docs/docs/core/connect-data-platform/redshift-setup.md
+++ b/website/docs/docs/core/connect-data-platform/redshift-setup.md
@@ -112,9 +112,9 @@ company-name:
password: password1
dbname: analytics
schema: analytics
+ port: 5439
# Optional Redshift configs:
- port: 5439
sslmode: prefer
role: None
ra3_node: true
@@ -173,9 +173,9 @@ please refer to the official AWS documentation on [Configuration and credential
iam_profile: analyst
dbname: analytics
schema: analytics
+ port: 5439
# Optional Redshift configs:
- port: 5439
threads: 4
connect_timeout: None
[retries](#retries): 1
From 8fc38a4ef202c6602a59cc72ec8bc22731151f47 Mon Sep 17 00:00:00 2001
From: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Date: Wed, 12 Jul 2023 17:35:19 -0600
Subject: [PATCH 041/703] `merge` is not (yet) supported in dbt-redshift for
v1.6
---
website/docs/docs/build/incremental-models.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 15b24520711..39b2188ff24 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -253,7 +253,7 @@ to build incremental models.
Click the name of the adapter in the below table for more information about supported incremental strategies.
-The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in dbt v1.6.
+The `merge` strategy is available in dbt-postgres beginning in dbt v1.6.
@@ -276,7 +276,7 @@ The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in
| data platform adapter | default strategy | additional supported strategies |
| :----------------- | :----------------| : ---------------------------------- |
| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | `append` | `merge` , `delete+insert` |
-| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `merge`, `delete+insert` |
+| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `delete+insert` |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | `merge` | `insert_overwrite` |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | `append` | `merge` (Delta only) `insert_overwrite` |
| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | `append` | `merge` (Delta only) `insert_overwrite` |
From 0f9645bc50b9ef15d3a39a5c5189da70f7569f0c Mon Sep 17 00:00:00 2001
From: Joel Labes
Date: Thu, 13 Jul 2023 16:24:44 +1200
Subject: [PATCH 042/703] Fix incorrect markdown link syntax
---
website/docs/reference/commands/list.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/reference/commands/list.md b/website/docs/reference/commands/list.md
index a3bb3804c6f..6084b3dec70 100644
--- a/website/docs/reference/commands/list.md
+++ b/website/docs/reference/commands/list.md
@@ -5,7 +5,7 @@ description: "Read this guide on how dbt's ls (list) command can be used to list
id: "list"
---
-The `dbt ls` command lists resources in your dbt project. It accepts selector arguments that are similar to those provided in [dbt run](/reference/commands/run). `dbt list` is an alias for `dbt ls`. While `dbt ls` will read your [connection profile]/docs/core/connect-data-platform/connection-profiles to resolve [`target`](/reference/dbt-jinja-functions/target)-specific logic, this command will not connect to your database or run any queries.
+The `dbt ls` command lists resources in your dbt project. It accepts selector arguments that are similar to those provided in [dbt run](/reference/commands/run). `dbt list` is an alias for `dbt ls`. While `dbt ls` will read your [connection profile](/docs/core/connect-data-platform/connection-profiles) to resolve [`target`](/reference/dbt-jinja-functions/target)-specific logic, this command will not connect to your database or run any queries.
### Usage
```
From e4466e12a55c3c4e4b638b51b5a211335b3804fb Mon Sep 17 00:00:00 2001
From: Joel Labes
Date: Thu, 13 Jul 2023 16:26:23 +1200
Subject: [PATCH 043/703] Add missing line to resolve markdown rendering issue
---
website/docs/reference/artifacts/manifest-json.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/website/docs/reference/artifacts/manifest-json.md b/website/docs/reference/artifacts/manifest-json.md
index 0a151b42aa4..c71c073d842 100644
--- a/website/docs/reference/artifacts/manifest-json.md
+++ b/website/docs/reference/artifacts/manifest-json.md
@@ -49,6 +49,7 @@ sidebar_label: "Manifest"
**Produced by:** [`build`](commands/build) [`compile`](commands/compile) [`docs generate`](commands/cmd-docs) [`list`](commands/list) [`parse`](commands/parse) [`run`](commands/run) [`run-operation`](commands/run-operation) [`seed`](commands/seed) [`show`](commands/show) [`snapshot`](commands/snapshot) [`source freshness`](commands/source) [`test`](commands/test)
+
This single file contains a full representation of your dbt project's resources (models, tests, macros, etc), including all node configurations and resource properties. Even if you're only running some models or tests, all resources will appear in the manifest (unless they are disabled) with most of their properties. (A few node properties, such as `compiled_sql`, only appear for executed nodes.)
Today, dbt uses this file to populate the [docs site](/docs/collaborate/documentation), and to perform [state comparison](/reference/node-selection/syntax#about-node-selection). Members of the community have used this file to run checks on how many models have descriptions and tests.
From 3cc3bfc999c6ff44e04241099888f0e34bd14fec Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Thu, 13 Jul 2023 14:41:59 +0100
Subject: [PATCH 044/703] nits
---
.../docs/use-dbt-semantic-layer/dbt-sl.md | 18 +-
.../docs/docs/use-dbt-semantic-layer/jdbc.md | 170 +++++++++++++++++-
.../use-dbt-semantic-layer/sl-api-overview.md | 70 ++------
website/static/img/sl-architecture.jpg | Bin 0 -> 76397 bytes
4 files changed, 188 insertions(+), 70 deletions(-)
create mode 100644 website/static/img/sl-architecture.jpg
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 0329bad15fb..674656a39c1 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -18,7 +18,7 @@ The result? You have less duplicate coding for data teams and more consistency f
The dbt Semantic Layer has the following major components:
- Use [MetricFlow](/docs/build/build-metrics-intro) to define and test your semantic models and metrics in version-controlled dbt project code
-- Use [dbt Semantic Layer interfaces] to explore and analyze these metrics in downstream tools
+- Use [dbt Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview) to explore and analyze these metrics in downstream tools
### What makes the dbt Semantic Layer different?
@@ -39,15 +39,15 @@ import SetUp from '/snippets/_v2-sl-prerequisites.md';
## Public Preview
-The dbt Semantic Layer is currently available for Public Preview, which means:
-— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during Public Preview. Review [Product architecture](#product-architecture) for more info on plan availability.
+The dbt Semantic Layer is currently available for Public Preview, which means:
+
+— **Who?** The dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise) during [Public Preview](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture).
-- Team and Enterprise accounts will be able to set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated
-partner tool to import metric definition.
- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse dbt metrics in external tools, which requires access to the Discovery API.
-— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backward compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the General Available (GA) release (future GA date to be announced).
+— **What?** Public Previews provide early access to new features. The Semantic Layer is stable and you can use it for production deployments, but there may still be some planned additions and modifications to product behaviors before moving to General Availability. We may also introduce new functionality that is not backwards compatible. dbt Labs provides support, and relevant service level objectives (SLOs) apply. We will introduce pricing for the dbt Semantic Layer alongside the Generally Available (GA) release (future GA date to be announced).
— **When?** Public Preview will end once the dbt Semantic Layer is available for GA. After GA, the dbt Semantic Layer will only be available to dbt Cloud **Team** and **Enterprise** plans.
@@ -60,11 +60,11 @@ The dbt Semantic Layer product architecture includes four primary components:
| Components | Information | Developer plans | Team plans | Enterprise plans | License |
| --- | --- | :---: | :---: | :---: | --- |
| **[MetricFlow](/docs/build/about-metricflow)** | Use MetricFlow in dbt to centrally define your metrics. MetricFlow is a key component of the dbt Semantic Layer and is responsible for SQL query construction and defining specifications for dbt semantic models and metrics. | ✅ | ✅ | ✅ | BSL package (code is source available) |
-| **MetricFlow Server Server** | A proprietary server that takes metric requests and turns.... | ✅ | ✅ | ✅ | BSL |
+| **MetricFlow Server** | A proprietary server that takes metric requests and translates them into optimized SQL for the specific data platform. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
| **Semantic Layer Gateway** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja to query models and metrics, and use macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅ _(Available during Public Preview only)_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-| **[Semantic Layer APIs](/docs/use-dbt-semantic-layer/sl-api-overview)** | Use the Semantic Layer APIS, which includes the Discovery, JDBC, and GraphQL API, to access metric definitions using integrations. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+| **[Semantic Layer API](/docs/use-dbt-semantic-layer/sl-api-overview)** | A service that passes queries to the MetricFlow server and executes the SQL generated by MetricFlow against the data platform. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-ADD ARCHITECTURE IMAGE
+
dbt Semantic Layer integrations will:
diff --git a/website/docs/docs/use-dbt-semantic-layer/jdbc.md b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
index c7189794aa3..d64b43bc977 100644
--- a/website/docs/docs/use-dbt-semantic-layer/jdbc.md
+++ b/website/docs/docs/use-dbt-semantic-layer/jdbc.md
@@ -7,16 +7,174 @@ tags: ["semantic-layer, apis"]
-import UpgradeSL from '/snippets/_upgrade-new-sl.md';
+:::info Upgrade to access the new dbt Semantic Layer
-
+The new dbt Semantic Layer has been re-released and is now available to users on a [Team or Enterprise plan](https://www.getdbt.com/pricing/). You must be on dbt v1.6 or higher.
-ADD JDBC CONTENT HERE
+If you're using the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/migration/sl-migration) for more info.
+
+:::
-
+The dbt Semantic Layer JDBC driver allows users to query their metrics and dimensions using the JDBC protocol. It also exposes standard metadata, like any JDBC driver.
+* The Semantic Layer JDBC API uses the Arrow Flight SQL protocol, and we utilize the open-source JDBC driver.
+* The driver can be downloaded from [Maven](https://search.maven.org/remotecontent?filepath=org/apache/arrow/flight-sql-jdbc-driver/12.0.0/flight-sql-jdbc-driver-12.0.0.jar).
+* The dbt Semantic Layer supports Arrow Flight SQL driver version 12.0.0 and higher.
+* You can embed this driver into your application stack however you wish. For reference, see the [example semantic layer clients project](https://github.com/dbt-labs/example-semantic-layer-clients).
+
+## Installation
+
+- Install an AWS root CA into the Java Trust Store. Refer to the [Amazon Trust Services repository](https://www.amazontrust.com/repository/) for the certificates and documentation.
+
+## Connection parameters
+The JDBC connection requires a few different connection parameters. We provide the full JDBC string that you can connect with, as well as the individual components required.
+
+This is an example of a URL connection string:
+
+
+```
+jdbc:arrow-flight-sql://semantic-layer.cloud.getdbt.com:443?&environmentId=202339&token=SERVICE_TOKEN
+```
+
+The components of the JDBC string are as follows:
+
+- `jdbc:arrow-flight-sql://` — The protocol for the JDBC driver.
+- `dbt Cloud access URL` — The [access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your account's dbt Cloud region. You must always add the `semantic-layer` prefix before the access URL. For example, `semantic-layer.cloud.getdbt.com`
+- `environmentId` — The unique identifier for the dbt environment. You can retrieve this from the dbt Cloud URL when you navigate to your environment under Deployments.
+- `SERVICE_TOKEN` — A dbt Cloud service token with at least "Semantic Layer Only" permissions. You can get one by navigating to your account settings and creating a new service token. The value must be encoded prior to being put into the string.
+
+
+## Querying the API
+The Semantic Layer JDBC API has built-in metadata calls that provide information about your metrics and dimensions.
+
+**Metadata commands and examples**
+
+
+Fetching all defined metrics
+
+```
+select * from {{
+ semantic_layer.metrics()
+}}
+```
+
+
+Fetching all dimensions for a metric
+
+```
+select * from {{
+ semantic_layer.dimensions(metrics=['food_order_amount'])
+}}
+```
+Required arguments: a `metrics` list with one or more metrics.
+
+
+Fetching dimension values for one or multiple metrics and a single dimension
+
+```
+select * from {{
+semantic_layer.dimension_values(metrics=["food_order_amount"], group_by="customer__customer_name")
+}}
+```
+Required arguments: a `metrics` list with one or more metrics, and a single dimension.
+
+
+## Query parameters
+
+| Parameter | Description | Example | Required or Optional |
+|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|----------------------|
+| metrics | The metric name as defined in your dbt metric configuration | `metrics=[revenue]` | Required |
+| group_by | Dimension names or entities to group by. We require a reference to the entity of the dimension (other than for the primary time dimension), prepended to the dimension name with a double underscore. | `group_by=[user__country, metric_time]` | Optional |
+| grain | A parameter specific to time dimensions that changes the grain of the data from the metric's default. | `group_by=[Dimension('metric_time').grain('week\|day\|month\|quarter\|year')]` | Optional |
+| where | A where clause that allows you to filter on dimensions | `where="metric_time >= '2022-03-08'"` | Optional |
+| limit | Limit the data returned | `limit=10` | Optional |
+| order_by | Order the data returned | `order_by=['-order_gross_profit']` (remove `-` for ascending order) | Optional |
+| explain | If true, returns generated SQL for the data platform but does not execute it. | `explain=True` | Optional |
+
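+To see how these parameters compose, here is a sketch of a single query that combines several of them, reusing the example metrics and dimensions from this page:
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+    group_by=[Dimension('metric_time').grain('month')],
+    where="metric_time__month >= '2022-03-08'",
+    limit=10,
+    order_by=['-order_gross_profit'])
+}}
+```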
+
+
+## Examples
+
+**Fetching metadata for metrics.** You can filter or add any SQL outside of the templating.
+
+```
+select name, dimensions from {{
+ semantic_layer.metrics()
+ }}
+WHERE name='food_order_amount'
+```
+
+
+**Selecting common dimensions for multiple metrics**
+
+```
+select * from {{
+semantic_layer.dimensions(metrics=['food_order_amount', 'order_gross_profit'])
+}}
+```
+
+
+**Querying metrics**
+
+Food order amount and order gross profit grouped by time
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount','order_gross_profit'], group_by=['metric_time'])
+ }}
+```
+
+**Multiple metrics and adding granularity**
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+ group_by=[Dimension('metric_time').grain('month')])
+ }}
+```
+
+**Grouping by a categorical dimension**
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+group_by=[Dimension('metric_time').grain('month'), 'customer__customer_type'])
+ }}
+```
+
+
+**Where filter example**
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'],
+where="metric_time__month >= '2017-03-09' AND customer__customer_type in ('new')")}}
+```
+
+**Limit and order**
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+ group_by=[Dimension('metric_time')],
+ limit=10,
+ order_by=['order_gross_profit'])}}
+```
+
+**Explain a query**
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
+ group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'],
+ where="metric_time__month >= '2017-03-09' AND customer__customer_type in ('new')",
+ explain=True)
+}}
+```
+
+## Remarks
+
+* You may notice that sometimes dimensions are used on their own, like `metric_time`, and sometimes with the object syntax, like `Dimension('metric_time')`. When selecting the item alone (for example, `metric_time`), the `Dimension` syntax isn't required, but when operating on the object (for example, adding granularity), it is.
+* You may see some dimensions include a double underscore (`__`) in their name. This denotes a mapping from an entity to a dimension and signifies where the dimension in question lives. For example, `user__country` indicates the `country` dimension from the `user` entity.
+* The default output when you add granularity is `{time_dimension_name}__{granularity_level}`. If you are using yearly granularity and the time dimension name is `ds`, you can expect `ds__year` (see the sketch below).
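+
+For instance, here is a quick sketch of that naming convention using the example metric from this page: with yearly granularity on `metric_time`, the result set should include a `metric_time__year` column.
+
+```
+select * from {{
+semantic_layer.query(metrics=['food_order_amount'],
+    group_by=[Dimension('metric_time').grain('year')])
+}}
+```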
-ADD JDBC CONTENT HERE
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
index 8e83f9dd25c..85b99bfafd7 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-api-overview.md
@@ -1,7 +1,7 @@
---
-title: "Semantic Layer APIs"
+title: "Semantic Layer API"
id: sl-api-overview
-description: "Integrate and query using the Semantic Layer APIs."
+description: "Integrate and query using the Semantic Layer API."
tags: ["semantic-layer, apis"]
hide_table_of_contents: true
---
@@ -12,61 +12,26 @@ import UpgradeSL from '/snippets/_upgrade-new-sl.md';
-
-With the Semantic Layer APIs, you can seamlessly query the metrics you built with [MetricFlow](/docs/build/about-metricflow) to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
-
-The full dbt Semantic Layer lets you create systems for data consumption, automated reporting, easily feed metrics to downstream tools, and more. This can help you connect to a wide variety of data applications across the modern data stack and natively integrate with the dbt Semantic Layer and MetricFlow — from Business Intelligence tools to notebooks, spreadsheets, data catalogs, and more.
-
-The dbt Semantic Layer generates a [`semantic_manifest.json` artifact file](/docs/use-dbt-semantic-layer/sl-manifest). This artifact contains comprehensive information about your Semantic Layer. You can use it as a valuable reference to understand the structure and details of your data models.
-
-To query metrics using the dbt Semantic Lay API, accounts must be on a [Team or Enterprise plans](https://www.getdbt.com/pricing/).
-
-The dbt Semantic Layer provides the following APIs, which you can refer to for more info:
-
-
-
-
-
-
-
-
-
-
-
-## Authentication
-
-Requests to the dbt Semantic Layer APIs can be authorized through two types of API tokens:
-
-- [User tokens](/docs/dbt-cloud-apis/user-tokens)
-- [Service account tokens](/docs/dbt-cloud-apis/service-tokens)
-
-
+The rapid growth of different tools in the modern data stack has helped data professionals address the diverse needs of different teams. The downside of this growth is the fragmentation of business logic across teams, tools, and workloads.
+The dbt Semantic Layer allows users to define metrics in code and dynamically generate and query datasets in downstream tools based on their dbt-governed assets, such as metrics and models. Integrating with the dbt Semantic Layer will help the organizations that use your product make more efficient and trustworthy decisions with their data.
+The dbt Semantic Layer can be used with a variety of tools and data applications. Here are some common use cases:
+* Business intelligence (BI), reporting, and analytics
+* Data quality and monitoring
+* Governance and privacy
+* Data discovery and cataloging
+* Machine learning and data science
+You can seamlessly query the metrics you built with [MetricFlow](/docs/build/about-metricflow) to avoid duplicative coding, optimize your development workflow, ensure data governance for company metrics, and guarantee consistency for data consumers.
+To query metrics using the dbt Semantic Layer API, accounts must be on a [Team or Enterprise plan](https://www.getdbt.com/pricing/).
+
+The dbt Semantic Layer provides the following APIs, which you can use to query your metrics and build an integration:
@@ -92,9 +57,4 @@ The dbt Semantic Layer provides the following APIs, which you can refer to for m
## Authentication
-Requests to the dbt Semantic Layer APIs can be authorized through two types of API tokens:
-
-- [User tokens](/docs/dbt-cloud-apis/user-tokens)
-- [Service account tokens](/docs/dbt-cloud-apis/service-tokens)
-
-
+dbt Cloud authorizes requests to the dbt Semantic Layer APIs. You need to provide an environment ID, host, and [service account token](/docs/dbt-cloud-apis/service-tokens).
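+
+As a sketch, those three values slot into the JDBC connection string described on the JDBC page (the uppercase placeholders are yours to fill in):
+
+```
+jdbc:arrow-flight-sql://semantic-layer.YOUR_ACCESS_URL:443?&environmentId=ENVIRONMENT_ID&token=SERVICE_TOKEN
+```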
diff --git a/website/static/img/sl-architecture.jpg b/website/static/img/sl-architecture.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b6801e88bcea02e972a3f27f7dbe1d1968699188
GIT binary patch
literal 76397